WorldWideScience

Sample records for server virtual machine

  1. Server virtualization solutions

    OpenAIRE

    Jonasts, Gusts

    2012-01-01

    In the part of the information technology sector responsible for server infrastructure, there is currently major development in the field of server virtualization on the x86 computer architecture. Prerequisites for this development are the growth in server performance and the underutilization of the available computing power. Several companies on the market offer products built on two virtualization architectures – hypervisor and hosted. In this paper several of the virtualization products that use host...

  2. Microsoft Virtualization Master Microsoft Server, Desktop, Application, and Presentation Virtualization

    CERN Document Server

    Olzak, Thomas; Boomer, Jason; Keefer, Robert M

    2010-01-01

    Microsoft Virtualization helps you understand and implement the latest virtualization strategies available with Microsoft products. This book focuses on: Server Virtualization, Desktop Virtualization, Application Virtualization, and Presentation Virtualization. Whether you are managing Hyper-V, implementing desktop virtualization, or even migrating virtual machines, this book is packed with coverage on all aspects of these processes. Written by a talented team of Microsoft MVPs, Microsoft Virtualization is the leading resource for a full installation, migration, or integration of virtual systems.

  3. A Capacity Supply Model for Virtualized Servers

    Directory of Open Access Journals (Sweden)

    Alexander PINNOW

    2009-01-01

    This paper deals with determining the capacity supply for virtualized servers. First, a server is modeled as a queue based on a Markov chain. Then, the effect of server virtualization on the capacity supply will be analyzed with the distribution function of the server load.

  4. VIRTUAL IP SERVER APPLICATION FOR MICROCONTROLLERS

    OpenAIRE

    Ashari, Ahmad

    2008-01-01

    So far, a microcontroller connected to a computer could only be accessed through a single IP address, even though most current operating systems can provide more than one IP address per computer in the form of virtual IPs. This research examines the use of virtual IPs derived from IP aliasing on the Linux operating system as a Virtual IP Server for microcontrollers. The basic principle of the Virtual IP Server is the creation of a Virtual Host on each IP to process data packets and translate...
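
    A minimal sketch of this virtual-IP idea, assuming the aliased addresses already exist on the host (on Linux they can be added with the "ip addr add" command); the addresses and port are hypothetical, and this is an illustration in Python, not the paper's code:

      import socket
      import selectors

      VIRTUAL_IPS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]  # hypothetical aliases
      PORT = 5000                                                     # hypothetical port

      selector = selectors.DefaultSelector()
      for ip in VIRTUAL_IPS:
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind((ip, PORT))            # one "virtual host" per aliased IP
          sock.setblocking(False)
          selector.register(sock, selectors.EVENT_READ, data=ip)

      while True:
          for key, _ in selector.select():
              packet, addr = key.fileobj.recvfrom(1024)
              # Each bound IP identifies one microcontroller channel; a real server
              # would translate the payload into a microcontroller command here.
              print(f"virtual host {key.data} received {packet!r} from {addr}")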

  5. Comparative Analysis of Load Balancing for a Single Web Server Versus a Web Server Cluster Using Linux Virtual Server

    OpenAIRE

    Lukitasari, Desy; Oklilas, Ahmad Fali

    2010-01-01

    A virtual server is a server with high scalability and high availability built on top of a cluster of several real servers. The real servers and the load balancer are interconnected either over a high-speed local network or over geographically separated links. The load balancer can dispatch requests to the different servers and make the parallel services of the cluster appear as a single virtual service on a single IP address, and request dispatching can use IP load...

  6. Saving Money and Time with Virtual Server

    CERN Document Server

    Sanders, Chris

    2006-01-01

    Microsoft Virtual Server 2005 consistently proves to be worth its weight in gold, with new implementations thought up every day. With this product now a free download from Microsoft, scores of new users are able to experience what the power of virtualization can do for their networks. This guide is aimed at network administrators who are interested in ways that Virtual Server 2005 can be implemented in their organizations in order to save money and increase network productivity. It contains information on setting up a virtual network, virtual consolidation, virtual security, virtual honeypots...

  7. Server virtualization management of corporate network with hyper-v

    OpenAIRE

    Kovalenko, Taras

    2012-01-01

    This paper considers the main tasks and problems of server virtualization. It also considers the practical value of virtualization in a corporate network and the advantages and disadvantages of applying server virtualization.

  8. Empirical Analysis of Server Consolidation and Desktop Virtualization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2013-01-01

    Transitioning physical servers to a virtual server infrastructure (VSI) and desktop devices to a virtual desktop infrastructure (VDI) raises the crucial problems of server consolidation, virtualization performance, virtual machine density, total cost of ownership (TCO), and return on investment (ROI). Besides, how to appropriately choose a hypervisor for the desired server/desktop virtualization is really challenging, because a trade-off between virtualization performance and cost is a hard decision to make in the cloud. This paper introduces five hypervisors to establish the virtual environment and then gives a careful assessment based on a C/P ratio that is derived from a composite index, consolidation ratio, virtual machine density, TCO, and ROI. As a result, even though ESX Server obtains the highest ROI and lowest TCO in server virtualization and Hyper-V R2 gains the best performance of virtual machine management, both of them cost too much. Instead, the best choice is Proxmox Virtual Environment (Proxmox VE), because it not only greatly reduces the initial investment needed to own a virtual server/desktop infrastructure, but also obtains the lowest C/P ratio.

  9. Instant Hyper-v Server Virtualization starter

    CERN Document Server

    Eguibar, Vicente Rodriguez

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. The approach is tutorial-style, guiding users in an orderly manner toward virtualization. This book is conceived for system administrators and advanced PC enthusiasts who want to venture into the virtualization world. Although this book starts from scratch, knowledge of server operating systems, LANs and networking has to be in place. Having a good background in server administration is desirable, including networking services...

  10. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

    Virtual machine, as an engineering tool, has recently been introduced into automation projects at Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machines for automation projects. This paper designs different project scenarios using virtual machines. It analyzes the installability, performance and stability of virtual machines from the test results. Technical solutions concerning virtual machines are discussed, such as the conversion with physical...

  11. Quantum Virtual Machine (QVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.

  12. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
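
    As a rough illustration of how two of the listed metrics, local memory and disk bandwidth, can be probed identically on bare metal and inside a guest, consider the following sketch (ours, not the authors' harness; the buffer size and file name are arbitrary):

      import os
      import time

      def mem_bandwidth(n_bytes=100 * 2**20):
          buf = bytearray(n_bytes)
          t0 = time.perf_counter()
          _ = bytes(buf)                    # forces a full copy through memory
          return n_bytes / (time.perf_counter() - t0) / 2**20   # MiB/s

      def disk_bandwidth(path="bench.tmp", n_bytes=100 * 2**20):
          data = os.urandom(n_bytes)
          t0 = time.perf_counter()
          with open(path, "wb") as f:
              f.write(data)
              f.flush()
              os.fsync(f.fileno())          # count the time to reach the disk
          elapsed = time.perf_counter() - t0
          os.remove(path)
          return n_bytes / elapsed / 2**20  # MiB/s

      # Run the same two probes on the physical host and in the VM and compare.
      print(f"memory {mem_bandwidth():10.1f} MiB/s")
      print(f"disk   {disk_bandwidth():10.1f} MiB/s")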

  13. Formal modeling of virtual machines

    Science.gov (United States)

    Cremers, A. B.; Hibbard, T. N.

    1978-01-01

    Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.

  14. Advanced server virtualization VMware and Microsoft platforms in the virtual data center

    CERN Document Server

    Marshall, David; McCrory, Dave

    2006-01-01

    Executives of IT organizations are compelled to quickly implement server virtualization solutions because of significant cost savings. However, most IT professionals tasked with deploying virtualization solutions have little or no experience with the technology. This creates a high demand for information on virtualization and how to properly implement it in a datacenter. Advanced Server Virtualization: VMware® and Microsoft® Platforms in the Virtual Data Center focuses on the core knowledge needed to evaluate, implement, and maintain an environment that is using server virtualization. This book...

  15. Virtual Machine Language

    Science.gov (United States)

    Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John

    2005-01-01

    Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times of the order of seconds.

  16. PERFORMANCE MEASUREMENT OF THE ROUND-ROBIN SCHEDULER FOR LINUX VIRTUAL SERVER IN A WEB SERVER SCENARIO

    Directory of Open Access Journals (Sweden)

    Royyana Muslim Ijtihadie

    2005-07-01

    With the growing number of Internet users and the adoption of the Internet in everyday life, data traffic on the Internet has increased significantly. Along with this, the workload of the servers providing services on the Internet has also risen considerably, which can cause a server to become overloaded at some point. To overcome this, a server cluster configuration scheme applying the load balancing concept is used. A load balancing server applies an algorithm to divide the work. The round-robin algorithm has been used in the Linux Virtual Server. This research measures the performance of a Linux Virtual Server that uses the round-robin algorithm to schedule the distribution of load across the servers. The measurements are taken from the side of clients trying to access the web server: the number of requests completed per second (requests per second), the time to complete a single request, and the resulting throughput. The experiments show that using LVS can improve performance, namely increasing the number of requests per second...
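
    The round-robin policy being measured can be sketched in a few lines of Python (a generic illustration with hypothetical server addresses, not the LVS kernel code):

      from itertools import cycle

      real_servers = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])   # hypothetical pool

      def schedule(requests):
          """Assign each incoming request to the next real server in circular order."""
          return [(req, next(real_servers)) for req in requests]

      print(schedule(["GET /a", "GET /b", "GET /c", "GET /d"]))
      # the fourth request wraps around to 10.0.0.1 again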

  17. Perspectives of IT Professionals on Employing Server Virtualization Technologies

    Science.gov (United States)

    Sligh, Darla

    2010-01-01

    Server virtualization enables a physical computer to support multiple applications logically by decoupling the application from the hardware layer, thereby reducing operational costs and keeping IT organizations competitive in delivering IT services to their enterprise organizations. IT organizations continually examine the efficiency of their internal IT systems and…

  18. Machine learning in virtual screening.

    Science.gov (United States)

    Melville, James L; Burke, Edmund K; Hirst, Jonathan D

    2009-05-01

    In this review, we highlight recent applications of machine learning to virtual screening, focusing on the use of supervised techniques to train statistical learning algorithms to prioritize databases of molecules as active against a particular protein target. Both ligand-based similarity searching and structure-based docking have benefited from machine learning algorithms, including naïve Bayesian classifiers, support vector machines, neural networks, and decision trees, as well as more traditional regression techniques. Effective application of these methodologies requires an appreciation of data preparation, validation, optimization, and search methodologies, and we also survey developments in these areas.

  19. Virtual Machine Logbook - Enabling virtualization for ATLAS

    International Nuclear Information System (INIS)

    Yao Yushu; Calafiura, Paolo; Leggett, Charles; Poffet, Julien; Cavalli, Andrea; Frederic, Bapst

    2010-01-01

    ATLAS software has been developed mostly on the CERN Linux cluster lxplus or on similar facilities at the experiment Tier 1 centers. The fast rise of virtualization technology has the potential to change this model, turning every laptop or desktop into an ATLAS analysis platform. In the context of the CernVM project we are developing a suite of tools and CernVM plug-in extensions to promote the use of virtualization for ATLAS analysis and software development. The Virtual Machine Logbook (VML), in particular, is an application to organize the work of physicists on multiple projects, logging their progress and speeding up 'context switches' from one project to another. An important feature of VML is the ability to share the status of a given project with other colleagues with a single 'click'. VML builds upon the save and restore capabilities of mainstream virtualization software like VMware, and provides a technology-independent client interface to them. A lot of emphasis in the design and implementation has gone into optimizing the save and restore process, to make it practical to store many VML entries on a typical laptop disk and to share a VML entry over the network. At the same time, taking advantage of CernVM's plugin capabilities, we are extending the CernVM platform to help increase the usability of ATLAS software. For example, we added the ability to start the ATLAS event display on any computer running CernVM simply by clicking a button in a web browser. We want to integrate VML seamlessly with CernVM's unique file system design to distribute ATLAS software efficiently to every physicist's computer. The CernVM File System (CVMFS) downloads files on demand via HTTP and caches them locally for future use. This reduces the download sizes by one order of magnitude, making it practical for a developer to work with multiple software releases on a virtual machine.

  20. A proposal for improving data center management through strategic implementation of Server virtualization technology to support Malaysian Nuclear Agency's activities

    International Nuclear Information System (INIS)

    Mohamad Safuan Sulaiman; Abdul Muin Abdul Rahman; Raja Murzaferi Raja Moktar; Saaidi Ismail

    2010-01-01

    Management of servers in Nuclear Malaysia's data center poses a big challenge to the IT Center as well as to the general management. Traditional server management techniques have been used to provide reliable and continuous support for the ever increasing services and applications demanded by researchers and the other staff of Nuclear Malaysia. Data centers are cost centers which need logistical support such as electricity, air conditioning, room space, manpower and other resources. To save cost and comply with Green Technology while maintaining or improving the level of services, a new concept called server virtualization is proposed, and a feasibility study of this technology has been initiated to explore its potential to accommodate the IT Center's ever growing service demands while reducing the need for such logistical support, hence adhering to the Green IT concept. Server virtualization is a new technology whereby a single high-performance physical server can host multiple high-processing services and different types of operating systems with different hardware and software requirements, which are traditionally handled by multiple server machines. This paper briefly explains server virtualization concepts, tools and techniques and proposes an implementation strategy of the technology for Nuclear Malaysia's data center. (author)

  1. Utilization of Virtual Server Technology in Mission Operations

    Science.gov (United States)

    Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.

  2. vSphere virtual machine management

    CERN Document Server

    Fitzhugh, Rebecca

    2014-01-01

    This book follows a step-by-step tutorial approach with some real-world scenarios that vSphere businesses will be required to overcome every day. This book also discusses creating and configuring virtual machines and also covers monitoring virtual machine performance and resource allocation options. This book is for VMware administrators who want to build their knowledge of virtual machine administration and configuration. It's assumed that you have some experience with virtualization administration and vSphere.

  3. Virtual Vector Machine for Bayesian Online Classification

    OpenAIRE

    Minka, Thomas P.; Xiang, Rongjing; Qi, Yuan

    2012-01-01

    In a typical online learning scenario, a learner is required to process a large data stream using a small memory buffer. Such a requirement is usually in conflict with a learner's primary pursuit of prediction accuracy. To address this dilemma, we introduce a novel Bayesian online classification algorithm, called the Virtual Vector Machine. The virtual vector machine allows you to smoothly trade off prediction accuracy with memory size. The virtual vector machine summarizes the information con...

  4. Managing virtual machines with Vac and Vcycle

    Science.gov (United States)

    McNab, A.; Love, P.; MacMahon, E.

    2015-12-01

    We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines to achieve the desired target shares. Both systems allow unused shares for one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.

  5. (m, M) Machining system with two unreliable servers, mixed spares and common-cause failure

    OpenAIRE

    Jain, Madhu; Mittal, Ragini; Kumari, Rekha

    2015-01-01

    This paper deals with a multi-component machine repair model having provision of warm standby units and a repair facility consisting of two heterogeneous servers (primary and secondary) to provide repair to the failed units. The failure of operating and standby units may occur individually or due to some common cause. The primary server may fail partially before failing fully, whereas the secondary server faces complete failure only. The life times of the servers and the operating/standby units and their re...

  6. Virtual Class Support at the Virtual Machine Level

    DEFF Research Database (Denmark)

    Nielsen, Anders Bach; Ernst, Erik

    2009-01-01

    This paper describes how virtual classes can be supported in a virtual machine. Main-stream virtual machines such as the Java Virtual Machine and the .NET platform dominate the world today, and many languages are being executed on these virtual machines even though their embodied design choices conflict with the design choices of the virtual machine. For instance, there is a non-trivial mismatch between the main-stream virtual machines mentioned above and dynamically typed languages. One language concept that creates an even greater mismatch is virtual classes, in particular because fully general support for virtual classes requires generation of new classes at run-time by mixin composition. Languages like CaesarJ and ObjectTeams can express virtual classes restricted to the subset that does not require run-time generation of classes, because of the restrictions imposed by the Java...

  7. Performance and portability of the SciBy virtual machine

    DEFF Research Database (Denmark)

    Andersen, Rasmus; Vinter, Brian

    2010-01-01

    The Scientific Bytecode Virtual Machine is a virtual machine designed specifically for performance, security, and portability of scientific applications deployed in a Grid environment. The performance overhead normally incurred by virtual machines is mitigated using native optimized scientific li...

  8. The architecture of a virtual grid GIS server

    Science.gov (United States)

    Wu, Pengfei; Fang, Yu; Chen, Bin; Wu, Xi; Tian, Xiaoting

    2008-10-01

    Grid computing technology provides a service-oriented architecture for distributed applications. The virtual grid GIS server is a distributed and interoperable enterprise GIS architecture running in the grid environment, which integrates heterogeneous GIS platforms. All sorts of legacy GIS platforms join the grid as members of a GIS virtual organization. Based on a microkernel, we design the ESB and portal GIS service layer, which together compose Microkernel GIS. Through web portals, portal GIS services and the mediation of the service bus, following the principle of separation of concerns (SoC), we separate business logic from implementation logic. Microkernel GIS greatly reduces the degree of coupling between applications and GIS platforms. The enterprise applications are independent of particular GIS platforms, allowing application developers to focus on the business logic. Via configuration and orchestration of a set of fine-grained services, the system creates a GIS Business, which acts as a whole WebGIS request when activated. In this way, the system satisfies a business workflow directly and simply, with little or no new code.

  9. Blade runner. Blade server and virtualization technology can help hospitals save money--but they are far from silver bullets.

    Science.gov (United States)

    Lawrence, Daphne

    2009-03-01

    Blade servers and virtualization can reduce infrastructure, maintenance, heating, electric, cooling and equipment costs. Blade server technology is evolving and some elements may become obsolete. There is very little interoperability between blades. Hospitals can virtualize 40 to 60 percent of their servers, and old servers can be reused for testing. Not all applications lend themselves to virtualization--especially those with high memory requirements. CIOs should engage their vendors in virtualization discussions.

  10. Optimal Placement Algorithms for Virtual Machines

    OpenAIRE

    Bellur, Umesh; Rao, Chetan S; SD, Madhu Kumar

    2010-01-01

    Cloud computing provides a computing platform for the users to meet their demands in an efficient, cost-effective way. Virtualization technologies are used in the clouds to aid the efficient usage of hardware. Virtual machines (VMs) are utilized to satisfy the user needs and are placed on physical machines (PMs) of the cloud for effective usage of hardware resources and electricity in the cloud. Optimizing the number of PMs used helps in cutting down the power consumption by a substantial amo...
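
    One common way to cast this placement task is bin packing; the first-fit-decreasing heuristic below is a generic Python illustration with hypothetical demands (the paper's own algorithms may differ):

      def first_fit_decreasing(vm_demands, pm_capacity=1.0):
          """Place VMs (by normalized CPU demand) onto as few PMs as possible."""
          pms = []                                   # remaining capacity per powered-on PM
          for demand in sorted(vm_demands, reverse=True):
              for i, free in enumerate(pms):
                  if demand <= free:                 # first PM that still fits the VM
                      pms[i] = free - demand
                      break
              else:
                  pms.append(pm_capacity - demand)   # power on a new PM
          return len(pms)

      print(first_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4]))   # 3 PMs suffice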

  11. An Automatic Decision-Making Mechanism for Virtual Machine Live Migration in Private Clouds

    Directory of Open Access Journals (Sweden)

    Ming-Tsung Kao

    2014-01-01

    Due to the increasing number of computer hosts deployed in an enterprise, automatic management of electronic applications is inevitable. To provide diverse services, there will be increases in procurement, maintenance, and electricity costs. Virtualization technology is getting popular in cloud computing environment, which enables the efficient use of computing resources and reduces the operating cost. In this paper, we present an automatic mechanism to consolidate virtual servers and shut down the idle physical machines during the off-peak hours, while activating more machines at peak times. Through the monitoring of system resources, heavy system loads can be evenly distributed over physical machines to achieve load balancing. By integrating the feature of load balancing with virtual machine live migration, we successfully develop an automatic private cloud management system. Experimental results demonstrate that, during the off-peak hours, we can save power consumption of about 69 W by consolidating the idle virtual servers. And the load balancing implementation has shown that two machines with 80% and 40% CPU loads can be uniformly balanced to 60% each. And, through the use of preallocated virtual machine images, the proposed mechanism can be easily applied to a large amount of physical machines.
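
    The two policies described, off-peak consolidation and load balancing, can be sketched as follows (a toy Python illustration with an assumed idle threshold; only the 80%/40% balancing example comes from the paper):

      def consolidation_candidates(hosts, idle_threshold=0.2):
          """Return hosts whose summed VM CPU load is low enough to evacuate and shut down."""
          return [h for h, vm_loads in hosts.items() if sum(vm_loads) < idle_threshold]

      def balance(load):
          """Move load from the hottest host to the coolest until both meet at the mean."""
          hot = max(load, key=load.get)
          cool = min(load, key=load.get)
          shift = (load[hot] - load[cool]) / 2
          load[hot] -= shift
          load[cool] += shift
          return load

      print(balance({"pm1": 0.80, "pm2": 0.40}))   # both end at 0.60, as in the experiment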

  12. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    Science.gov (United States)

    Jain, Madhu; Meena, Rakesh Kumar

    2018-03-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is done on the basis of a bi-level threshold policy for the activation of the servers. A server returns to render repair service when the pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, throughput, etc., are derived to determine the performance of the machining system. To provide computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is considered to develop an adaptive neuro-fuzzy inference system (ANFIS). The validation of the numerical results obtained by the Runge-Kutta approach is also facilitated by computational results generated by ANFIS.
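
    As an illustration of the numerical approach, the sketch below applies the classical fourth-order Runge-Kutta step to the governing equations dp/dt = pQ of a continuous-time Markov chain; the three-state generator matrix is hypothetical, not the paper's model:

      def rk4_step(p, Q, h):
          """One RK4 step for the state probabilities p(t), with dp/dt = p Q."""
          def f(p):
              n = len(p)
              return [sum(p[i] * Q[i][j] for i in range(n)) for j in range(n)]
          k1 = f(p)
          k2 = f([p[i] + h / 2 * k1[i] for i in range(len(p))])
          k3 = f([p[i] + h / 2 * k2[i] for i in range(len(p))])
          k4 = f([p[i] + h * k3[i] for i in range(len(p))])
          return [p[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(len(p))]

      # Hypothetical generator: rows sum to zero (0 = idle, 1 = one busy, 2 = both busy).
      Q = [[-0.5,  0.5,  0.0],
           [ 0.3, -0.8,  0.5],
           [ 0.0,  0.6, -0.6]]
      p = [1.0, 0.0, 0.0]
      for _ in range(1000):
          p = rk4_step(p, Q, 0.01)
      print(p)   # approaches the stationary distribution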

  13. Implementing Citrix XenServer Quickstarter

    CERN Document Server

    Ahmed, Gohar

    2013-01-01

    Implementing Citrix XenServer Quick Starter is a practical, hands-on guide that will help you get started with the Citrix XenServer virtualization technology with easy-to-follow instructions. Implementing Citrix XenServer Quick Starter is for system administrators who have little to no information on virtualization, and specifically Citrix XenServer virtualization. If you're managing a lot of physical servers and are tired of installing, deploying, updating, and managing physical machines on a daily basis over and over again, then you should probably explore the option of XenServer Virtualization.

  14. Untyped Memory in the Java Virtual Machine

    DEFF Research Database (Denmark)

    Gal, Andreas; Probst, Christian; Franz, Michael

    2005-01-01

    We have implemented a virtual execution environment that executes legacy binary code on top of the type-safe Java Virtual Machine by recompiling native code instructions to type-safe bytecode. As it is essentially impossible to infer static typing into untyped machine code, our system emulates untyped memory on top of Java’s type system. While this approach allows us to execute native code on any off-the-shelf JVM, the resulting runtime performance is poor. We propose a set of virtual machine extensions that add type-unsafe memory objects to the JVM. We contend that these JVM extensions do not relax Java’s type system, as the same functionality can be achieved in pure Java, albeit much less efficiently.

  15. Experience with Server Self Service Center (S3C)

    CERN Multimedia

    Sucik, J

    2009-01-01

    CERN has had a successful experience with running the Server Self Service Center (S3C) for virtual server provisioning, which is based on Microsoft® Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor-based virtualization (Hyper-V) there are new possibilities for the expansion of the current service. This paper describes the architecture of the redesigned virtual Server Self Service based on Hyper-V, which provides dynamically scalable virtualized resources on demand as needed, and outlines the possible implications on the future use of virtual machines at CERN.

  16. Experience with Server Self Service Center (S3C)

    International Nuclear Information System (INIS)

    Sucik, Juraj; Bukowiec, Sebastian

    2010-01-01

    CERN has had a successful experience with running the Server Self Service Center (S3C) for virtual server provisioning, which is based on Microsoft® Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor-based virtualization (Hyper-V) there are new possibilities for the expansion of the current service. This paper describes the architecture of the redesigned virtual Server Self Service based on Hyper-V, which provides dynamically scalable virtualized resources on demand as needed, and outlines the possible implications on the future use of virtual machines at CERN.

  17. Tensor Network Quantum Virtual Machine (TNQVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that leverages a distributed network of GPUs to simulate quantum circuits in a manner that leverages recent results from tensor network theory.

  18. Virtual NC machine model with integrated knowledge data

    International Nuclear Information System (INIS)

    Sidorenko, Sofija; Dukovski, Vladimir

    2002-01-01

    The concept of virtual NC machining was established to provide a virtual product that can be compared with the corresponding designed product, in order to evaluate the correctness of NC programs without real experiments. This concept is applied in the intelligent CAD/CAM system named VIRTUAL MANUFACTURE. This paper presents the first intelligent module, which enables creation of virtual models of existing NC machines and virtual creation of new ones by applying modular composition. Creation of a virtual NC machine is carried out via automatic saving of knowledge data (features of the created NC machine). (Author)

  19. Virtual Machine Language Controls Remote Devices

    Science.gov (United States)

    2014-01-01

    Kennedy Space Center worked with Blue Sun Enterprises, based in Boulder, Colorado, to enhance the company's virtual machine language (VML) to control the instruments on the Regolith and Environment Science and Oxygen and Lunar Volatiles Extraction mission. Now the NASA-improved VML is available for crewed and uncrewed spacecraft, and has potential applications on remote systems such as weather balloons, unmanned aerial vehicles, and submarines.

  20. Virtual Machine Lifecycle Management in Grid and Cloud Computing

    OpenAIRE

    Schwarzkopf, Roland

    2015-01-01

    Virtualization is the foundation for two important technologies: Virtualized Grid and Cloud Computing. Virtualized Grid Computing is an extension of the Grid Computing concept introduced to satisfy the security and isolation requirements of commercial Grid users. Applications are confined in virtual machines to isolate them from each other and the data they process from other users. Apart from these important requirements, Virtual...

  1. A Review of Virtual Machine Attack Based on Xen

    Directory of Open Access Journals (Sweden)

    Ren xun-yi

    2016-01-01

    Virtualization technology, as the foundation of cloud computing, is receiving more and more attention as cloud computing becomes widely used. This paper analyzes threats to virtual machine security and summarizes attacks on Xen-based virtual machines in order to anticipate hidden security risks. On this basis, it can provide a reference for further research on the security of virtual machines.

  2. Development of Client-Server Application by Using UDP Socket Programming for Remotely Monitoring CNC Machine Environment in Fixture Process

    Directory of Open Access Journals (Sweden)

    Darmawan Darmawan

    2016-08-01

    The use of computer technology in manufacturing industries can improve manufacturing flexibility significantly, especially in manufacturing processes; many software applications have been utilized to improve machining performance. However, none of them has discussed the ability to perform direct machining. In this paper, an integrated system for remote operation and monitoring of Computer Numerical Control (CNC) machines is put into consideration. The integrated system includes computerization, network technology, and an improved holding mechanism. The work proposed by this research is mainly on the software development for such an integrated system. It uses Java three-dimensional (3D) programming and the Virtual Reality Modeling Language (VRML) on the client side for visualization of the machining environment. This research is aimed at developing a control system to remotely operate and monitor a self-reconfiguring fixture mechanism of a CNC milling machine through an internet connection, integrating a Personal Computer (PC)-based CNC controller, a server side, a client side and the CNC mill. The performance of the developed system was evaluated by testing with one common protocol, the User Datagram Protocol (UDP). Using UDP, the developed system requires 3.9 seconds to complete the close clamping, less than 1 second to release the clamping, and it can deliver 463 kilobytes.
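
    The UDP request-reply pattern described can be sketched as follows (our Python illustration; the paper's system is written in Java, and the address, port, and command bytes here are hypothetical):

      import socket

      SERVER = ("127.0.0.1", 9999)              # hypothetical server address

      def run_server():
          """CNC-side endpoint: receive one clamp command, reply with a status datagram."""
          srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          srv.bind(SERVER)
          command, client = srv.recvfrom(1024)  # e.g. b"CLAMP CLOSE"
          srv.sendto(b"ACK " + command, client) # report the fixture status back

      def send_command(command: bytes) -> bytes:
          """Client side: send a command and wait for the acknowledgement."""
          cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          cli.settimeout(5.0)                   # UDP gives no delivery guarantee
          cli.sendto(command, SERVER)
          reply, _ = cli.recvfrom(1024)
          return reply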

  3. Using a vision cognitive algorithm to schedule virtual machines

    OpenAIRE

    Zhao Jiaqi; Mhedheb Yousri; Tao Jie; Jrad Foued; Liu Qinghuai; Streit Achim

    2014-01-01

    Scheduling virtual machines is a major research topic for cloud computing, because it directly influences the performance, the operation cost and the quality of services. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem with grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the...

  4. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Science.gov (United States)

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced up to some extent by efficient scheduling of Cloud resources. Working temperature on which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers current and maximum threshold temperature of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which does not consider current temperature of nodes before making scheduling decisions. Thus, a reduction in need of cooling systems for a Cloud environment has been obtained and validated. PMID:24737962

  5. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Directory of Open Access Journals (Sweden)

    Supriya Kinger

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced up to some extent by efficient scheduling of Cloud resources. Working temperature on which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers current and maximum threshold temperature of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which does not consider current temperature of nodes before making scheduling decisions. Thus, a reduction in need of cooling systems for a Cloud environment has been obtained and validated.

  6. Prediction based proactive thermal virtual machine scheduling in green clouds.

    Science.gov (United States)

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced up to some extent by efficient scheduling of Cloud resources. Working temperature on which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers current and maximum threshold temperature of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which does not consider current temperature of nodes before making scheduling decisions. Thus, a reduction in need of cooling systems for a Cloud environment has been obtained and validated.
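
    The proactive rule shared by the three records above can be sketched as follows (our Python illustration; the temperatures, thresholds, and the flat 5-degree predictor are hypothetical):

      def eligible_hosts(hosts, predict):
          """hosts: {name: (current_temp, max_threshold)}; predict(name) estimates the
          temperature rise that an additional VM would cause on that host."""
          return [name
                  for name, (temp, limit) in hosts.items()
                  if temp + predict(name) < limit]   # maximum temperature is never reached

      hosts = {"sm1": (55.0, 70.0), "sm2": (67.0, 70.0)}
      print(eligible_hosts(hosts, lambda name: 5.0))   # ['sm1']: sm2 would get too hot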

  7. Virtualization in network and servers infrastructure to support dynamic system reconfiguration in ALMA

    Science.gov (United States)

    Shen, Tzu-Chiang; Ovando, Nicolás; Bartsch, Marcelo; Simmond, Max; Vélez, Gastón; Robles, Manuel; Soto, Rubén; Ibsen, Jorge; Saldias, Christian

    2012-09-01

    ALMA is the first astronomical project being constructed and operated with an industrial approach, due to the huge number of elements involved. In order to achieve maximum throughput during the engineering and scientific commissioning phase, several production lines have been established to work in parallel. This decision required modification of the original system architecture, in which all the elements are controlled and operated within a unique Standard Test Environment (STE). Advances in the network industry, together with the maturity of the virtualization paradigm, allow us to provide a solution which can replicate the STE infrastructure without changing its network address definition. This is only possible with the Virtual Routing and Forwarding (VRF) and Virtual LAN (VLAN) concepts. The solution allows dynamic reconfiguration of antennas and other hardware across the production lines with minimum time and zero human intervention in the cabling. We also push the virtualization even further: classical rack-mount servers are being replaced and consolidated by blade servers, on top of which virtualized servers are centrally administered with VMware ESX. Hardware costs and system administration effort will be reduced considerably. This mechanism has been established and operated successfully during the last two years. This experience gave us the confidence to propose a solution to divide the main operation array into subarrays using the same concept, which will introduce huge flexibility and efficiency for ALMA operations and eventually may simplify the complexity of the ALMA core observing software, since there will be no need to deal with subarray complexity at the software level.

  8. Virtual Machine Images Management in Cloud Environments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Nowadays, the demand for scalability in distributed systems has led to a design philosophy in which virtual resources need to be configured in a flexible way to provide services to a large number of users. The configuration and management of such an architecture is challenging (e.g.: 100,000 compute cores on the private cloud together with thousands of cores on external cloud resources). There is the need to process CPU-intensive work whilst ensuring that the resources are shared fairly between different users of the system, and to guarantee that all nodes are up to date with new images containing the latest software configurations. Different types of automated systems can be used to facilitate the orchestration. CERN’s current system, composed of different technologies such as OpenStack, Packer, Puppet, Rundeck and Docker, will be introduced and explained, together with the process used to create new Virtual Machine images at CERN.

  9. A study of factors affecting the adoption of server virtualization technology

    Science.gov (United States)

    Lu, Hsin-Ke; Lin, Peng-Chun; Chiang, Chang-Heng; Cho, Chien-An

    2018-04-01

    It has become a trend for enterprises and organizations worldwide to apply new technologies to improve their operations; moreover, constructing and managing traditional servers has higher cost and less flexibility, so the current mainstream is to use server virtualization technology. However, organizations will not necessarily get the expected benefits from these new technologies, because each one has its own level of organizational complexity and ability to accept changes. The researcher investigated key factors affecting the adoption of virtualization technology through two phases. In phase I, the researcher reviewed the literature and then applied the dimensions of the Information Systems Success Model (ISSM) to generalize the factors affecting the adoption of virtualization technology into a preliminary theoretical framework and develop a questionnaire; in phase II, a three-round Delphi Method was used to integrate the opinions of experts from related fields, which were gradually converged in order to obtain a stable and objective questionnaire of key factors. These results are expected to provide references for organizations' adoption of server virtualization technology and for future studies.

  10. A modification of Java virtual machine for counting bytecode commands

    OpenAIRE

    Nikolaj, Janko

    2014-01-01

    The objective of the thesis was to implement or modify an existing Java virtual machine (JVM) in a way that allows insight into statistics of the Java instructions executed by a user program. This functionality will allow analysis of algorithms in the Java environment. After studying the theory of Java and the Java virtual machine, we decided to modify an existing Java virtual machine. We chose JamVM, which is a lightweight, open-source Java virtual machine under the GNU license. The i...

  11. MODELS OF LIVE MIGRATION WITH ITERATIVE APPROACH AND MOVE OF VIRTUAL MACHINES

    Directory of Open Access Journals (Sweden)

    S. M. Aleksankov

    2015-11-01

    Subject of Research. The processes of live migration without shared storage with a pre-copy approach, and of move migration, are researched. Migration of virtual machines is an important capability of virtualization technology. It enables applications to move transparently with their runtime environments between physical machines. Live migration has become a notable technology for efficient load balancing and for optimizing the deployment of virtual machines onto physical hosts in data centres. Before the advent of live migration, only network migration (the so-called «Move») was used, which entails stopping the virtual machine execution while copying it to another physical server, and, consequently, unavailability of the service. Method. Algorithms for live migration without shared storage with a pre-copy approach and for move migration of virtual machines are reviewed from the perspective of migration time and unavailability of services while migrating virtual machines. Main Results. Analytical models are proposed that predict the migration time of virtual machines and the unavailability of services when migrating with such technologies as live migration with a pre-copy approach without shared storage, and move migration. The latest works on assessing service unavailability time and migration time using live migration without shared storage describe experimental results that are applicable for drawing general conclusions about changes in service unavailability time and migration time, but not for predicting their values. Practical Significance. The proposed models can be used for predicting the migration time and the time of unavailability of services, for example, when implementing preventive and emergency works on the physical nodes in data centres.
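
    For intuition, the following simplified Python model (our sketch, not the paper's analytical formulas) iterates the pre-copy rounds: each round resends the pages dirtied during the previous round, and the final stop-and-copy round determines the service downtime. All numbers are hypothetical:

      def precopy_times(ram_mbit, bw_mbps, dirty_rate_mbps, rounds=5):
          """Return (total migration time, downtime) in seconds."""
          total, remaining = 0.0, ram_mbit
          for _ in range(rounds):
              t = remaining / bw_mbps           # send the current dirty set
              total += t
              remaining = dirty_rate_mbps * t   # pages dirtied in the meantime
          downtime = remaining / bw_mbps        # VM stopped for the last copy
          return total + downtime, downtime

      # 4 GiB of RAM (in Mbit), a 1000 Mbit/s link, a 200 Mbit/s dirty rate.
      print(precopy_times(4096 * 8, 1000, 200))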

  12. Optimized Virtual Machine Placement with Traffic-Aware Balancing in Data Center Networks

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2016-01-01

    Virtualization has been an efficient method to fully utilize computing resources such as servers. The way of placing virtual machines (VMs) among a large pool of servers greatly affects the performance of data center networks (DCNs). As network resources have become a main bottleneck of the performance of DCNs, we concentrate on VM placement with traffic-aware balancing to evenly utilize the links in DCNs. In this paper, we first proposed the Virtual Machine Placement Problem with Traffic-Aware Balancing (VMPPTB), proved it to be NP-hard, and designed a Longest Processing Time Based Placement algorithm (LPTBP) to solve it. To take advantage of communication locality, we proposed the Locality-Aware Virtual Machine Placement Problem with Traffic-Aware Balancing (LVMPPTB), which is a multiobjective optimization problem of simultaneously minimizing the maximum number of VM partitions of requests and minimizing the maximum bandwidth occupancy on uplinks of Top of Rack (ToR) switches. We also proved it to be NP-hard and designed a heuristic algorithm (Least-Load First Based Placement, LLBP) to solve it. Through extensive simulations, the proposed heuristic algorithm is proven to significantly balance the bandwidth occupancy on uplinks of ToR switches, while keeping the number of VM partitions of each request small enough.
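
    The Longest Processing Time idea underlying the LPTBP algorithm can be sketched generically (hypothetical demands, not the authors' code): sort the bandwidth demands in decreasing order and always give the next one to the least-loaded uplink:

      import heapq

      def lpt_place(demands, n_uplinks):
          """Assign demands to uplinks so that the maximum occupancy stays balanced."""
          heap = [(0.0, i, []) for i in range(n_uplinks)]   # (load, uplink id, items)
          heapq.heapify(heap)
          for d in sorted(demands, reverse=True):
              load, i, items = heapq.heappop(heap)          # least-loaded uplink
              heapq.heappush(heap, (load + d, i, items + [d]))
          return sorted(heap)

      print(lpt_place([8, 7, 6, 5, 4], n_uplinks=2))
      # [(13.0, 1, [7, 6]), (17.0, 0, [8, 5, 4])]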

  13. A portable virtual machine target for proof-carrying code

    DEFF Research Database (Denmark)

    Franz, Michael; Chandra, Deepak; Gal, Andreas

    2005-01-01

    Virtual Machines (VMs) and Proof-Carrying Code (PCC) are two techniques that have been used independently to provide safety for (mobile) code. Existing virtual machines, such as the Java VM, have several drawbacks: First, the effort required for safety verification is considerable. Second and mor...... simultaneously providing efficient justin-time compilation and target-machine independence. In particular, our approach reduces the complexity of the required proofs, resulting in fewer proof obligations that need to be discharged at the target machine....

  14. Network Quality of a Virtual Local Area Network (VLAN) Implementing the Linux Terminal Server Project (LTSP)

    Directory of Open Access Journals (Sweden)

    Lipur Sugiyanta

    2017-12-01

    Virtual Local Area Network (VLAN) is a technique in computer networking for creating several distinct networks that still constitute a single local network, not limited to a physical location the way a LAN is, while the Linux Terminal Server Project (LTSP) is a terminal server technique that can multiply workstations using only a single Linux server. In building a computer network, several things need to be considered, one of which is the quality of the network being built. This research aims to determine the influence of the number of clients on network quality, based on the delay and packet loss parameters, in a VLAN network implementing LTSP. The research therefore uses a qualitative method and observes the relevant standard, namely the International Telecommunication Union – Telecommunication (ITU-T) standard. The implementation uses Ubuntu Desktop 14.04 LTS as the server operating system. Based on the results, it can be concluded that the more clients are served by the server, the lower the network quality in terms of the Quality of Service (QoS) parameters used, namely delay and packet loss.
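
    A delay and packet-loss probe of the kind this study depends on can be sketched in Python (our illustration with a hypothetical echo target; the study itself applies the ITU-T criteria to the measured values):

      import socket
      import time

      def probe(host="192.168.0.10", port=7, count=100, timeout=1.0):
          """Send UDP probes; return (average delay in ms, packet loss in percent)."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.settimeout(timeout)
          delays, lost = [], 0
          for i in range(count):
              t0 = time.perf_counter()
              sock.sendto(str(i).encode(), (host, port))
              try:
                  sock.recvfrom(64)                         # echoed probe
                  delays.append((time.perf_counter() - t0) * 1000)
              except socket.timeout:
                  lost += 1                                 # unanswered probe counts as lost
          avg_delay = sum(delays) / len(delays) if delays else float("nan")
          return avg_delay, 100.0 * lost / count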

  15. Virtual Machine Language 2.1

    Science.gov (United States)

    Riedel, Joseph E.; Grasso, Christopher A.

    2012-01-01

    VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that

  16. A Journey from Interpreters to Compilers and Virtual Machines

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2003-01-01

    We review a simple sequence of steps to stage a programming-language interpreter into a compiler and virtual machine. We illustrate the applicability of this derivation with a number of existing virtual machines, mostly for functional languages. We then outline its relevance for today's language...
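
    The kind of derivation the record refers to can be illustrated on a toy expression language: a direct interpreter is staged into a compiler that emits code for a small stack-based virtual machine. The following Python sketch is a generic textbook illustration of that correspondence, not Danvy's actual derivation:

        # Toy illustration of staging an interpreter into a compiler plus
        # a stack virtual machine. Expressions are ints or ("add", e1, e2).

        def interpret(e):                   # direct interpreter
            return e if isinstance(e, int) else interpret(e[1]) + interpret(e[2])

        def compile_(e, code=None):         # compiler to stack-VM byte code
            code = [] if code is None else code
            if isinstance(e, int):
                code.append(("PUSH", e))
            else:
                compile_(e[1], code)
                compile_(e[2], code)
                code.append(("ADD",))
            return code

        def run(code):                      # the virtual machine
            stack = []
            for op in code:
                if op[0] == "PUSH":
                    stack.append(op[1])
                else:
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
            return stack.pop()

        e = ("add", 1, ("add", 2, 3))
        assert interpret(e) == run(compile_(e)) == 6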

  17. Making extreme computations possible with virtual machines

    International Nuclear Information System (INIS)

    Reuter, J.; Chokoufe Nejad, B.

    2016-02-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
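
    The core idea, replacing large compiled expression code with a compact instruction stream walked by an interpreter, can be sketched in a few lines. This is a heavily simplified, hypothetical analogue of the approach: the real O'Mega/WHIZARD byte code encodes wave functions and vertices rather than plain arithmetic:

        # Hypothetical sketch of a numeric byte-code interpreter: the
        # expression is stored as compact (op, dst, src1, src2) words over
        # a register file of floats instead of as generated source code.

        MUL, ADD = 0, 1

        def execute(program, regs):
            for op, dst, a, b in program:
                if op == MUL:
                    regs[dst] = regs[a] * regs[b]
                elif op == ADD:
                    regs[dst] = regs[a] + regs[b]
            return regs

        # Computes r3 = r0*r1 + r2 from two instruction words.
        program = [(MUL, 3, 0, 1), (ADD, 3, 3, 2)]
        print(execute(program, [2.0, 5.0, 1.0, 0.0])[3])  # 11.0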

  18. On the Impossibility of Detecting Virtual Machine Monitors

    Science.gov (United States)

    Gueron, Shay; Seifert, Jean-Pierre

    Virtualization based upon Virtual Machines is a central building block of Trusted Computing, and it is believed to offer isolation and confinement of privileged instructions among other security benefits. However, it is not necessarily bullet-proof — some recent publications have shown that Virtual Machine technology could potentially allow the installation of undetectable malware root kits. As a result, it was suggested that such virtualization attacks could be mitigated by checking if a threatened system runs in a virtualized or in a native environment. This naturally raises the following problem: Can a program determine whether it is running in a virtualized environment, or in a native machine environment? We prove here that, under a classical VM model, this problem is not decidable. Further, although our result seems to be quite theoretic, we also show that it has practical implications on related virtualization problems.

  19. sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.

    Science.gov (United States)

    Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael

    2017-01-01

    High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has shifted to the analysis of such huge amounts of data. One reason is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians entail additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data is not subject to privacy issues (which is very often an important concern with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome those problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a high number of preinstalled programs like sRNAbench for small RNA analysis without the need to maintain additional servers and/or operating systems.

  20. A Componentizable Server-Side Framework for Building Remote and Virtual Laboratories

    Directory of Open Access Journals (Sweden)

    Jesús Luis Muros-Cobos

    2012-12-01

    Full Text Available Currently, virtual/remote laboratories are often built to improve learning and research capabilities in some areas of knowledge. Generally, these virtual/remote laboratories are built from scratch again and again, instead of reusing software and hardware infrastructures. This paper presents a new framework, RVLab, to help developers build flexible and robust server-side virtual and remote laboratories quickly. RVLab provides support for the basic requirements of these systems, such as user management or resource (instrument and device) reservation. Unlike other lab systems, RVLab can be adapted to the devices and instruments of any real laboratory thanks to a secure and robust mechanism that allows the remote execution of lab programs. Moreover, it improves user interaction with real labs by providing real-time visualization of experiments and lab instruments through control of a video camera placed in the lab, and the transmission of video streaming at different quality levels to users.

  1. A Performance Survey on Stack-based and Register-based Virtual Machines

    OpenAIRE

    Fang, Ruijie; Liu, Siqi

    2016-01-01

    Virtual machines have been widely adopted for high-level programming language implementations and for providing a degree of platform neutrality. As the overall use and adoption of virtual machines grow, the overall performance of virtual machines has become a widely discussed topic. In this paper, we present a survey on the performance differences of the two most widely adopted types of virtual machines - the stack-based virtual machine and the register-based virtual machine - using various...
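
    The contrast between the two designs can be shown on a toy example: the same addition expressed as stack code (more, shorter instructions) and as register code (fewer, wider instructions). A minimal hypothetical sketch:

        # Toy contrast of stack-based vs register-based byte code.

        def run_stack(code, env):
            stack = []
            for op, *args in code:
                if op == "LOAD":
                    stack.append(env[args[0]])
                elif op == "ADD":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
            return stack.pop()

        def run_register(code, regs):
            for op, dst, a, b in code:
                if op == "ADD":
                    regs[dst] = regs[a] + regs[b]
            return regs

        env = {"a": 2, "b": 3}
        print(run_stack([("LOAD", "a"), ("LOAD", "b"), ("ADD",)], env))  # 5
        print(run_register([("ADD", 2, 0, 1)], [2, 3, 0])[2])            # 5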

  2. An object-oriented extension for debugging the virtual machine

    Energy Technology Data Exchange (ETDEWEB)

    Pizzi, Jr, Robert G. [Univ. of California, Davis, CA (United States)

    1994-12-01

    A computer is nothing more than a virtual machine programmed by source code to perform a task. The program's source code expresses abstract constructs which are compiled into some lower level target language. When a virtual machine breaks, it can be very difficult to debug because typical debuggers provide only low-level target implementation information to the software engineer. We believe that the debugging task can be simplified by introducing aspects of the abstract design and data into the source code. We introduce OODIE, an object-oriented extension to programming languages that allows programmers to specify a virtual environment by describing the meaning of the design and data of a virtual machine. This specification is translated into symbolic information such that an augmented debugger can present engineers with a programmable debugging environment specifically tailored for the virtual machine that is to be debugged.

  3. VIRTUAL MACHINES IN EDUCATION – CNC MILLING MACHINE WITH SINUMERIK 840D CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    Ireneusz Zagórski

    2014-11-01

    Full Text Available The machining process nowadays could not be conducted without its inseparable elements: cutting tools and, frequently, numerically controlled milling machines. Milling and lathe machining centres comprise standard equipment in many companies of the machinery industry, e.g. automotive or aircraft. It is for that reason that tertiary education should account for this rising demand. This entails introducing into the curricula forms which enable visualisation of machining, the milling process and virtual production, as well as the simulation of virtual machining centres. Siemens Virtual Machine (Virtual Workshop) sets an example of such software, whose high functionality offers a range of learning experiences, such as: learning the design of machine tools, their configuration, basic operation functions, as well as the basics of CNC.

  4. Virtualization Security Combining Mandatory Access Control and Virtual Machine Introspection

    OpenAIRE

    Win, Thu Yein; Tianfield, Huaglory; Mair, Quentin

    2014-01-01

    Virtualization has become a target for attacks in cloud computing environments. Existing approaches to protecting the virtualization environment against such attacks are limited in protection scope and incur high overheads. This paper proposes a novel virtualization security solution which aims to provide comprehensive protection of the virtualization environment.

  5. A Real-Time Java Virtual Machine for Avionics (Preprint)

    National Research Council Canada - National Science Library

    Armbruster, Austin; Pla, Edward; Baker, Jason; Cunei, Antonio; Flack, Chapman; Pizlo, Filip; Vitek, Jan; Proch zka, Marek; Holmes, David

    2006-01-01

    ...) in the DARPA Program Composition for Embedded System (PCES) program. Within the scope of PCES, Purdue University and the Boeing Company collaborated on the development of Ovm, an open source implementation of the RTSJ virtual machine...

  6. Virtual C Machine and Integrated Development Environment for ATMS Controllers.

    Science.gov (United States)

    2000-04-01

    The overall objective of this project is to develop a prototype virtual machine that fits on current Advanced Traffic Management Systems (ATMS) controllers and provides functionality for complex traffic operations. Prepared in cooperation with Utah S...

  7. Server Operation and Virtualization to Save Energy and Cost in Future Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Jun-Ho Huh

    2018-06-01

    Full Text Available Since the introduction of the LTE (Long Term Evolution) service, we have lived in a time of expanding amounts of data. The amount of data produced has increased every year, driven in particular by the growing distribution of smartphones. Telecommunication service providers have to struggle to secure sufficient network capacity in order to maintain quick access to necessary data by consumers. Nonetheless, maintaining maximum capacity and bandwidth at all times requires considerable cost and excessive equipment. Therefore, to solve this problem, telecommunication service providers need to maintain an appropriate level of network capacity and to provide sustainable service to customers through quick network expansion in case of shortage. So far, telecommunication service providers have bought and used network equipment produced directly by manufacturers such as Ericsson, Nokia, Cisco, and Samsung. Since this equipment is specialized for networking and satisfies consumers with excellent performance, it is very costly, being developed with advanced technologies. Moreover, procurement takes much time, because the telecommunication service provider places an order and the manufacturer then produces and delivers. In addition, the diversity of IoT devices creates cases that require signaling and two-way data traffic as well as capacity. For these purposes, the need for NFV (Network Function Virtualization) arises. Equipment is virtualized so that it runs on x86-based compatible servers instead of on the network equipment manufacturer's dedicated hardware. By operating on compatible servers, NFV reduces hardware wastage and copes with change thanks to quick hardware development. This study proposed an efficient system for reducing network server operation cost using such NFV technology and found that the cost was reduced by 24

  8. LHCb experience with running jobs in virtual machines

    Science.gov (United States)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  9. LHCb experience with running jobs in virtual machines

    CERN Document Server

    McNab, A; Luzzi, C

    2015-01-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites mana...

  10. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    Science.gov (United States)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  11. Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines

    Science.gov (United States)

    Ivanovic, Pavle; Richter, Harald

    2018-01-01

    High-Performance Computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature because it is indispensable for HPC and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI - the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, by our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.

  12. Porting Gravitational Wave Signal Extraction to Parallel Virtual Machine (PVM)

    Science.gov (United States)

    Thirumalainambi, Rajkumar; Thompson, David E.; Redmon, Jeffery

    2009-01-01

    Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA mission to be launched around 2012. Gravitational Wave detection is fundamentally the determination of frequency, source parameters, and waveform amplitude derived in a specific order from the interferometric time-series of the rotating LISA spacecraft. The LISA Science Team has developed a Mock LISA Data Challenge intended to promote the testing of complicated nested search algorithms to detect the 100-1 millihertz frequency signals at amplitudes of 10E-21. However, it has become clear that sequential search of the parameters is very time consuming and ultra-sensitive; hence, a new strategy has been developed. Parallelization of existing sequential search algorithms for Gravitational Wave signal identification consists of decomposing sequential search loops, beginning with the outermost loops and working inward. In this process, the main challenge is to detect interdependencies among loops and to partition the loops so as to preserve concurrency. Existing parallel programs are based upon either shared memory or distributed memory paradigms. In PVM, master and node programs are used to execute parallelization and process spawning. PVM can handle process management and process addressing schemes using a virtual machine configuration. The task scheduling and the messaging and signaling can be implemented efficiently for the LISA Gravitational Wave search process using a master and 6 nodes. This approach is accomplished using a server that is available at NASA Ames Research Center and has been dedicated to the LISA Data Challenge Competition. Historically, gravitational wave and source identification parameters have taken around 7 days to extract on this dedicated single-thread Linux-based server. Using the PVM approach, the parameter extraction problem can be reduced to within a day. The low frequency computation and a proxy signal-to-noise ratio are calculated in separate nodes that are controlled by the master
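
    The master/node decomposition described above generalizes beyond PVM itself. As a rough illustration, with Python's multiprocessing standing in for PVM's master and node programs and a toy objective standing in for the real search, the outer search loop can be partitioned across six workers while the master reduces the partial results:

        # Rough master/worker sketch of partitioning an outer search loop,
        # in the spirit of the PVM master-and-6-nodes decomposition above.
        # multiprocessing stands in for PVM; score() is a toy placeholder.
        from multiprocessing import Pool

        def score(params):
            f, amp = params
            return (f, amp, -(f - 3.0) ** 2 - (amp - 1.0) ** 2)

        def best_in_chunk(chunk):
            return max((score(p) for p in chunk), key=lambda t: t[2])

        if __name__ == "__main__":
            grid = [(f, a) for f in range(6) for a in range(4)]
            chunks = [grid[i::6] for i in range(6)]  # one chunk per "node"
            with Pool(6) as pool:                    # master spawns 6 workers
                partial = pool.map(best_in_chunk, chunks)
            print(max(partial, key=lambda t: t[2]))  # master reduces results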

  13. Analysis towards VMEM File of a Suspended Virtual Machine

    Science.gov (United States)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all pseudo-physical memory in an image. The internal file structure of the .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with both their advantages and limits analyzed. We conclude with an outlook.
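
    A common first step in this kind of memory-image analysis is scanning the raw .vmem bytes for known structure signatures. The sketch below is a minimal, hypothetical illustration; the actual offsets and signatures used in the paper's analysis of Windows XP SP3 heaps are more involved:

        # Minimal sketch: scan a raw memory image for a byte signature,
        # as a stand-in for locating process/heap structures. The example
        # signature is illustrative, not a real XP SP3 structure layout.

        def find_signature(image_path, signature, chunk_size=1 << 20):
            keep = max(len(signature) - 1, 0)  # overlap catches split matches
            hits, offset, tail = [], 0, b""
            with open(image_path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    buf = tail + chunk
                    base = offset - len(tail)  # absolute offset of buf[0]
                    pos = buf.find(signature)
                    while pos != -1:
                        hits.append(base + pos)
                        pos = buf.find(signature, pos + 1)
                    tail = buf[-keep:] if keep else b""
                    offset += len(chunk)
            return hits

        # print(find_signature("suspended.vmem", b"MZ"))  # e.g. PE headers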

  14. Trustable Virtual Machine Scheduling in a Cloud

    OpenAIRE

    Hermenier , Fabien; Henrio , Ludovic

    2017-01-01

    International audience; In an Infrastructure as a Service (IaaS) cloud, the scheduler deploys VMs to servers according to service level objectives (SLOs). Clients and service providers must both trust the infrastructure. In particular, they must be sure that the VM scheduler takes decisions that are consistent with its advertised behaviour. The difficulty of mastering every theoretical and practical aspect of a VM scheduler implementation leads, however, to faulty behaviours that break SLOs and ...

  15. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.

    Science.gov (United States)

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.

  16. Using a vision cognitive algorithm to schedule virtual machines

    Directory of Open Access Journals (Sweden)

    Zhao Jiaqi

    2014-09-01

    Full Text Available Scheduling virtual machines is a major research topic for cloud computing, because it directly influences the performance, the operation cost and the quality of services. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem with grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the cloud. Our primary concern with VM scheduling is energy consumption, because the largest part of a cloud center's operation cost goes to the kilowatts used. We designed a scheduling algorithm that allocates an incoming virtual machine instance to the host machine which results in the lowest energy consumption of the entire system. More specifically, we developed a new algorithm, called vision cognition, to solve the global optimization problem. This algorithm is inspired by the observation of how human eyes directly pick out the smallest/largest item without comparing items pairwise. We theoretically proved that the algorithm works correctly and converges fast. Practically, we validated the novel algorithm, together with the scheduling concept, using a simulation approach. The adopted cloud simulator models different cloud infrastructures with various properties and detailed runtime information that usually cannot be acquired from real clouds. The experimental results demonstrate the benefit of our approach in terms of reducing the cloud center's energy consumption.
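
    At its core, energy-aware placement ranks candidate hosts by the system-wide power their selection would imply. The sketch below is a plain greedy argmin under a hypothetical linear power model, meant only to make the objective concrete; the paper's vision cognition algorithm replaces this selection step with something far more scalable:

        # Minimal greedy sketch: place a VM on the host whose selection
        # yields the lowest increase in total power. The linear power
        # model and host table are hypothetical.

        def power(util, p_idle=100.0, p_max=250.0):
            return p_idle + (p_max - p_idle) * util  # watts at given CPU util

        def place(vm_load, hosts):
            # hosts: {name: (current_util, capacity)}
            best, best_delta = None, float("inf")
            for name, (util, cap) in hosts.items():
                new_util = util + vm_load / cap
                if new_util <= 1.0:                  # feasibility check
                    delta = power(new_util) - power(util)
                    if delta < best_delta:
                        best, best_delta = name, delta
            return best

        hosts = {"h1": (0.50, 16.0), "h2": (0.10, 32.0)}
        print(place(4.0, hosts))  # h2: more headroom, smaller power increase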

  17. Case study of virtual reality in CNC machine tool exhibition

    Directory of Open Access Journals (Sweden)

    Kao Yung-Chou

    2017-01-01

    Full Text Available Exhibition and demonstration are generally used in the promotion and sale-assistance of manufactured products. However, the cost of transporting real goods from the vendor's factory to the exposition venue is generally expensive for huge and heavy commodities. With the advancement of computing, graphics, mobile apps, and mobile hardware, 3D visualization technology is becoming more and more popular in visually assisted communication such as amusement games. Virtual reality (VR) technology has therefore been paid great attention for emulating expensive small and/or huge and heavy equipment. Virtual reality can be characterized as a 3D extension with immersion, interaction and imagination. This paper therefore focuses on the study of virtual reality in the assistance of CNC machine tool demonstration and exhibition. A commercial CNC machine tool was used in this study to illustrate the effectiveness and usability of using virtual reality for an exhibition. The adopted CNC machine tool is a large and heavy mill-turn machine with a width of up to eleven meters, weighing about 35 tons. A head-mounted display (HMD) was attached to the developed VR CNC machine tool for immersive viewing. A user can look around the 3D scene of the large mill-turn machine, and the operation of the virtual CNC machine can be actuated by bare hand. Coolant was added to demonstrate more realistic operation, while a collision detection function was also added to alert the operator. The developed VR demonstration system was presented in the 2017 Taipei International Machine Tool Show (TIMTOS 2017). This case study has shown that young engineers and/or students are very impressed by the VR-based demonstration, while older persons could not adapt themselves easily to the VR-based scene because of eyesight issues. However, virtual reality has successfully been adopted and integrated with the CNC machine tool in an international show. Another machine tool on

  18. VIRTUAL MODELING OF A NUMERICAL CONTROL MACHINE TOOL USED FOR COMPLEX MACHINING OPERATIONS

    Directory of Open Access Journals (Sweden)

    POPESCU Adrian

    2015-11-01

    Full Text Available This paper presents the 3D virtual model of the Modustar 100 numerical control machine, in terms of machine elements. This is a CNC machine of modular construction, all components allowing assembly in various configurations. The paper focuses on the design, by means of CATIA v5, of the subassemblies specific to the numerically controlled axes, which contain the drive kinematic chains of the translation modules ensuring motion along the X, Y and Z axes. Machine tool development for high-speed and highly precise cutting demands the employment of advanced simulation techniques, which is reflected in the total development cost of the machine.

  19. Data preparation for municipal virtual assistant using machine learning

    OpenAIRE

    Jovan, Leon Noe

    2016-01-01

    The main goal of this master’s thesis was to develop a procedure that will automate the construction of the knowledge base for a virtual assistant that answers questions about municipalities in Slovenia. The aim of the procedure is to replace or facilitate manual preparation of the virtual assistant's knowledge base. Theoretical backgrounds of different machine learning fields, such as multilabel classification, text mining and learning from weakly labeled data were examined to gain a better ...

  20. Human Machine Interfaces for Teleoperators and Virtual Environments

    Science.gov (United States)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  1. System Center 2012 R2 Virtual Machine Manager cookbook

    CERN Document Server

    Cardoso, Edvaldo Alessandro

    2014-01-01

    This book is a step-by-step guide packed with recipes that cover architecture design and planning. The book is also full of deployment tips, techniques, and solutions. If you are a solutions architect, technical consultant, administrator, or any other virtualization enthusiast who needs to use Microsoft System Center Virtual Machine Manager in a real-world environment, then this is the book for you. We assume that you have previous experience with Windows 2012 R2 and Hyper-V.

  2. A Flattened Hierarchical Scheduler for Real-Time Virtual Machines

    OpenAIRE

    Drescher, Michael Stuart

    2015-01-01

    The recent trend of migrating legacy computer systems to a virtualized, cloud-based environment has expanded to real-time systems. Unfortunately, modern hypervisors have no mechanism in place to guarantee the real-time performance of applications running on virtual machines. Past solutions to this problem rely on either spatial or temporal resource partitioning, both of which under-utilize the processing capacity of the host system. Paravirtualized solutions in which the guest communicates it...

  3. Virtual machining considering dimensional, geometrical and tool deflection errors in three-axis CNC milling machines

    OpenAIRE

    Soori, Mohsen; Arezoo, Behrooz; Habibi, Mohsen

    2014-01-01

    Virtual manufacturing systems can provide useful means for products to be manufactured without the need of physical testing on the shop floor. As a result, the time and cost of part production can be decreased. There are different error sources in machine tools such as tool deflection, geometrical deviations of moving axis and thermal distortions of machine tool structures. Some of these errors can be decreased by controlling the machining process and environmental parameters. However other e...

  4. Virtual machining considering dimensional, geometrical and tool deflection errors in three-axis CNC milling machines

    OpenAIRE

    Soori, Mohsen; Arezoo, Behrooz; Habibi, Mohsen

    2016-01-01

    Virtual manufacturing systems can provide useful means for products to be manufactured without the need of physical testing on the shop floor. As a result, the time and cost of part production can be decreased. There are different error sources in machine tools such as tool deflection, geometrical deviations of moving axis and thermal distortions of machine tool structures. Some of these errors can be decreased by controlling the machining process and environmental parameters. However other e...

  5. Development of Web-based Virtual Training Environment for Machining

    Science.gov (United States)

    Yang, Zhixin; Wong, S. F.

    2010-05-01

    With the booming of the shoe, garment, toy and other manufacturing sectors in the Pearl River Delta region, training in the usage of various facilities and the design of facility layouts have become crucial for the success of industrial companies. There is evidence that the use of virtual training may provide benefits in improving the effect of learning and reducing risk in the physical work environment. This paper proposes an advanced web-based training environment that can demonstrate the usage of a CNC machine in terms of working conditions and parameter selection. The developed virtual environment provides training at junior and advanced levels. Junior level training explains machining knowledge, including safety factors and machine parameters (e.g. material, speed, feed rate). Advanced level training enables interactive programming of NC code and simulation of its effect. An operation sequence is used to assist the user in choosing appropriate machining conditions. Several case studies were also carried out with animation of milling and turning operations.

  6. Composable processor virtualization for embedded systems

    NARCIS (Netherlands)

    Molnos, A.M.; Milutinovic, A.; She, D.; Goossens, K.G.W.

    2010-01-01

    Processor virtualization divides a physical processor's time among a set of virtual machines, enabling efficient hardware utilization, application security and allowing co-existence of different operating systems on the same processor. Though initially intended for the server domain, virtualization

  7. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    Science.gov (United States)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization; using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional cost and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  8. Virtual Things for Machine Learning Applications

    OpenAIRE

    Bovet , Gérôme; Ridi , Antonio; Hennebert , Jean

    2014-01-01

    International audience; Internet-of-Things (IoT) devices, especially sensors, are producing large quantities of data that can be used for gathering knowledge. In this field, machine learning technologies are increasingly used to build versatile data-driven models. In this paper, we present a novel architecture able to execute machine learning algorithms within the sensor network, presenting advantages in terms of privacy and data transfer efficiency. We first argue that some classes of ...

  9. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm exhibit low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on a common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
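
    The Weighted Euclidean Distance at the heart of the matching step can be written compactly. The sketch below shows a WED and the best-matching-unit search of a SOM; the per-feature weights are generic placeholders, not the specific weighting scheme of IISOM:

        # Minimal sketch of SOM best-matching-unit search with a Weighted
        # Euclidean Distance; weights here are illustrative placeholders.
        import math

        def wed(x, w, weights):
            return math.sqrt(sum(wt * (xi - wi) ** 2
                                 for xi, wi, wt in zip(x, w, weights)))

        def best_matching_unit(x, neurons, weights):
            return min(range(len(neurons)),
                       key=lambda i: wed(x, neurons[i], weights))

        neurons = [[0.1, 0.2], [0.8, 0.7], [0.4, 0.9]]
        weights = [2.0, 1.0]  # emphasize the first (e.g. CPU) feature
        print(best_matching_unit([0.75, 0.65], neurons, weights))  # 1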

  10. An incremental anomaly detection model for virtual machines.

    Directory of Open Access Journals (Sweden)

    Hancui Zhang

    Full Text Available The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm exhibit low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on a common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.

  11. An Embeddable Virtual Machine for State Space Generation

    NARCIS (Netherlands)

    Weber, M.; Bosnacki, D.; Edelkamp, S.

    2007-01-01

    The semantics of modelling languages are not always specified in a precise and formal way, and their rather complex underlying models make it a non-trivial exercise to reuse them in newly developed tools. We report on experiments with a virtual machine-based approach for state space generation. The

  12. Adapting Virtual Machine Techniques for Seamless Aspect Support

    NARCIS (Netherlands)

    Bockisch, Christoph; Arnold, Matthew; Dinkelaker, Tom; Mezini, Mira

    2006-01-01

    Current approaches to compiling aspect-oriented programs are inefficient. This inefficiency has negative effects on the productivity of the development process and is especially prohibitive for dynamic aspect deployment. In this work, we present how well-known virtual machine techniques can be used

  13. New approach for virtual machines consolidation in heterogeneous computing systems

    Czech Academy of Sciences Publication Activity Database

    Fesl, Jan; Cehák, J.; Doležalová, Marie; Janeček, J.

    2016-01-01

    Roč. 9, č. 12 (2016), s. 321-332 ISSN 1738-9968 Institutional support: RVO:60077344 Keywords : consolidation * virtual machine * distributed Subject RIV: JD - Computer Applications, Robotics http://www.sersc.org/journals/IJHIT/vol9_no12_2016/29.pdf

  14. Detecting System of Nested Hardware Virtual Machine Monitor

    Directory of Open Access Journals (Sweden)

    Artem Vladimirovich Iuzbashev

    2015-03-01

    Full Text Available A method for detecting a nested hardware virtual machine monitor is proposed in this work. The method is based on an HVM timing attack: when an HVM is present in the system, the number of distinct execution-time values for certain instruction sequences increases. We use this property as the detection indicator.
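
    The indicator itself, growth in the number of distinct execution-time values for a fixed instruction sequence, can be sketched as a simple measurement loop. The Python below is only a hypothetical illustration of the counting idea; a real detector would time privileged, VM-exiting instructions, which Python cannot express:

        # Hypothetical sketch of the timing indicator: repeatedly time a
        # fixed operation and count distinct (quantized) execution times.
        # Under a nested HVM this count is expected to grow relative to a
        # bare-metal baseline.
        import time

        def distinct_timing_values(samples=1000, quantum_ns=100):
            seen = set()
            for _ in range(samples):
                t0 = time.perf_counter_ns()
                sum(range(1000))            # fixed stand-in workload
                dt = time.perf_counter_ns() - t0
                seen.add(dt // quantum_ns)  # quantize into timing buckets
            return len(seen)

        print("distinct timing buckets:", distinct_timing_values())
        # Detection heuristic: compare against a bare-metal baseline.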

  15. Concept of Operations for a Virtual Machine for C3I Applications

    National Research Council Canada - National Science Library

    Bagrodia, Rajive

    1997-01-01

    .... This 12-month research endeavor, entitled "Concept of Operations for a Virtual Machine for C3I Applications," examined issues in using a concurrent virtual machine for the design of C3I applications...

  16. CFCC: A Covert Flows Confinement Mechanism for Virtual Machine Coalitions

    Science.gov (United States)

    Cheng, Ge; Jin, Hai; Zou, Deqing; Shi, Lei; Ohoussou, Alex K.

    Normally, virtualization technology is adopted to construct the infrastructure of cloud computing environments. Resources are managed and organized dynamically through virtual machine (VM) coalitions in accordance with the requirements of applications. Enforcing mandatory access control (MAC) on VM coalitions will greatly improve the security of VM-based cloud computing. However, existing MAC models lack a mechanism to confine covert flows and can hardly eliminate covert channels. In this paper, we propose a covert flows confinement mechanism for virtual machine coalitions (CFCC), which introduces dynamic conflicts of interest based on the activity history of VMs, each of which is attached with a label. The proposed mechanism can be used to confine covert flows between VMs in different coalitions. We implement a prototype system, evaluate its performance, and show that our mechanism is practical.
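
    The history-based conflict-of-interest idea can be illustrated with a few lines of label bookkeeping. The sketch below is a hypothetical Chinese-wall-style check in the spirit of CFCC, with made-up labels and conflict classes; the actual CFCC mechanism operates inside the MAC layer of the virtualized platform:

        # Illustrative sketch of history-based conflict-of-interest checks:
        # a VM that has interacted with one label in a conflict class may
        # not then exchange data with a competing label. All labels and
        # conflict classes here are hypothetical.

        CONFLICTS = [{"bank-A", "bank-B"}]  # labels that must stay separated

        class VM:
            def __init__(self, label):
                self.label, self.history = label, set()

        def conflicts(vm, other_label):
            labels = vm.history | {vm.label}
            return any(other_label in cls and labels & (cls - {other_label})
                       for cls in CONFLICTS)

        def communicate(a, b):
            if conflicts(a, b.label) or conflicts(b, a.label):
                raise PermissionError(f"covert flow {a.label} <-> {b.label} denied")
            a.history.add(b.label); b.history.add(a.label)

        v1, v2, v3 = VM("bank-A"), VM("neutral"), VM("bank-B")
        communicate(v1, v2)      # allowed; v2 now carries bank-A history
        try:
            communicate(v2, v3)  # denied: would relay bank-A data to bank-B
        except PermissionError as e:
            print(e)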

  17. A Comprehensive Sensitivity Analysis of a Data Center Network with Server Virtualization for Business Continuity

    Directory of Open Access Journals (Sweden)

    Tuan Anh Nguyen

    2015-01-01

    Full Text Available Sensitivity assessment of availability for data center networks (DCNs) is of paramount importance in the design and management of cloud computing based businesses. Previous work has presented a performance modeling and analysis of a fat-tree based DCN using queuing theory. In this paper, we present a comprehensive availability modeling and sensitivity analysis of a DCell-based DCN with server virtualization for business continuity using stochastic reward nets (SRN). We use SRN in modeling to capture complex behaviors and dependencies of the system in detail. The models take into account (i) two DCell configurations, respectively composed of two and three physical hosts in a DCell0 unit, (ii) failure modes and corresponding recovery behaviors of hosts, switches, and VMs, and the VM live migration mechanism within and between DCell0s, and (iii) dependencies between subsystems (e.g., between a host and VMs, and between switches and VMs in the same DCell0). The constructed SRN models are analyzed in detail with regard to various metrics of interest to investigate the system's characteristics. A comprehensive sensitivity analysis of system availability is carried out in consideration of the major impacting parameters in order to observe the system's complicated behaviors and find the bottlenecks of system availability. The analysis results show the availability improvement, capability of fault tolerance, and business continuity of the DCNs complying with the DCell network topology. This study provides a basis for designing and managing DCNs for business continuity.

  18. Abdominal aortic aneurysms: virtual imaging and analysis through a remote web server

    International Nuclear Information System (INIS)

    Neri, Emanuele; Bargellini, Irene; Vignali, Claudio; Bartolozzi, Carlo; Rieger, Michael; Jaschke, Werner; Giachetti, Andrea; Tuveri, Massimiliano

    2005-01-01

    The study describes the application of web-based software in the planning of the endovascular treatment of abdominal aortic aneurysms (AAA). The software has been developed in the framework of a 2-year research project called Aneurysm QUAntification Through an Internet Collaborative System (AQUATICS); it allows remote management of Virtual Reality Modeling Language (VRML) models of the abdominal aorta, derived from multirow computed tomography angiography (CTA) data sets, and the measurement of diameters, angles and centerline lengths. To test the reliability of measurements, two radiologists performed a detailed analysis of multiple 3D models generated from a synthetic phantom mimicking an AAA. The system was tested on 30 patients with AAA; CTA data sets were mailed and the times required for segmentation and measurement were collected for each case. The Bland-Altman plot analysis showed that the mean intra- and inter-observer differences in measurements on phantoms were clinically acceptable. The mean time required for segmentation was 1 h (range 45-120 min). The mean time required for measurements on the web was 7 min (range 4-11 min). The AQUATICS web server may provide a rapid, standardized and accurate tool for the evaluation of AAA prior to endovascular treatment. (orig.)

  19. INFORMATION INFRASTRUCTURE OF THE EDUCATIONAL ENVIRONMENT WITH VIRTUAL MACHINE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Artem D. Beresnev

    2014-09-01

    Full Text Available Subject of research. An information infrastructure for a training environment using virtual machine technology for small pedagogical systems (separate classes, author's courses) is created and investigated. Research technique. A life cycle model of the information infrastructure for small pedagogical systems using virtual machines is constructed in the ARIS methodology. A technique for forming the information infrastructure with virtual machines on the basis of the process approach is proposed. An event chain model combined with an environment chart is used as the basic model. For each function of the event chain, the necessary set of information and software support is defined. Application of the technique is illustrated by the example of designing an information infrastructure for an educational environment, taking into account the specific character of small pedagogical systems. Advantages of the designed information infrastructure are: maximum usage of open or free components; usage of standard protocols (mainly HTTP and HTTPS); maximum portability (application servers can be run on any widespread operating system); a uniform interface for managing various virtualization platforms; the possibility of inventorying the contents of a virtual machine without starting it; and flexible inventory management of the virtual machine by means of adjustable rule chains. Approbation. The obtained results were tested at the training center "Institute of Informatics and Computer Facilities" (Tallinn, Estonia). Application of the technique within the course "Computer and Software Usage" halved the number of failures of information infrastructure components requiring the intervention of a technical specialist, as well as the time for eliminating such malfunctions. Besides, the pupils, who gained broader experience with computers and software, showed better results

  20. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Fauzi Akhmad

    2016-01-01

    Full Text Available Cloud computing is a service model in which basic computing resources are packaged so that they can be accessed through the Internet on demand, hosted in a data center. Data center architecture in cloud computing environments is heterogeneous and distributed, composed of a cluster of network servers with different computing resource capacities on different physical servers. Fluctuating demand for and availability of cloud services in the data center can be handled through abstraction with virtualization technology. A virtual machine (VM) is a representation of available computing resources that can be dynamically allocated and reallocated on demand. This study addresses VM consolidation as an energy conservation measure in private cloud computing systems, targeting the optimization of the VM selection policy and VM migration within the consolidation procedure. In a cloud data center VM environment, hosting a particular type of application service on a VM instance requires a different level of computing resources. Unbalanced use of computing resources by VMs across physical servers can be reduced by using live VM migration to achieve workload balancing. A practical approach is used in developing an OpenStack-based cloud computing environment by integrating the VM selection and VM placement procedures using OpenStack Neat VM consolidation. The CPU time value is used as input to obtain the average CPU utilization in MHz within a specific time period. The average CPU utilization of a VM is obtained from the current CPU time minus the CPU time from the previous data retrieval, multiplied by the maximum frequency of the CPU; the result is then divided by the current sampling time minus the previous sampling time, in milliseconds.
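
    Read this way, the utilization metric is a simple rate formula: the CPU time consumed between two samples, scaled by the CPU's maximum frequency and divided by the elapsed sampling interval. A minimal sketch under that reading of the abstract, with the units taken as assumptions:

        # Minimal sketch of the described metric: average CPU utilization
        # in MHz over a sampling interval, from two (cpu_time, sample_time)
        # readings and the CPU's maximum frequency. Units are assumptions.

        def avg_cpu_mhz(cpu_time_now, cpu_time_prev,
                        t_now_ms, t_prev_ms, max_freq_mhz):
            busy = cpu_time_now - cpu_time_prev  # CPU time consumed (ms)
            elapsed = t_now_ms - t_prev_ms       # wall-clock interval (ms)
            return busy * max_freq_mhz / elapsed

        # A VM consuming 300 ms of CPU over a 1000 ms window on a 2600 MHz core:
        print(avg_cpu_mhz(1300, 1000, 2000, 1000, 2600.0))  # 780.0 MHz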

  1. Automatic Generation of Machine Emulators: Efficient Synthesis of Robust Virtual Machines for Legacy Software Migration

    DEFF Research Database (Denmark)

    Franz, Michael; Gal, Andreas; Probst, Christian

    2006-01-01

    As older mainframe architectures become obsolete, the corresponding legacy software is increasingly executed via platform emulators running on top of more modern commodity hardware. These emulators are virtual machines that often include a combination of interpreters and just-in-time compilers. Implementing interpreters and compilers for each combination of emulated and target platform independently of each other is a redundant and error-prone task. We describe an alternative approach that automatically synthesizes specialized virtual-machine interpreters and just-in-time compilers, which then execute on top of an existing software portability platform such as Java. The result is a considerably reduced implementation effort.

  2. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    Science.gov (United States)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model for virtual network flow identification based on the statistical properties of flows circulating in the network of the data center and characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes to identify virtual network functions. Using the data obtained in our research, we have established an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid method of virtualization using virtual machines and containers, which makes it possible to reduce the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which enables it to scale to any number of network function copies.

  3. A discrete Fourier transform for virtual memory machines

    Science.gov (United States)

    Galant, David C.

    1992-01-01

    An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence of numbers to be transformed is a power of two.
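
    For reference, the standard recursive radix-2 decimation-in-time FFT on which such algorithms build fits in a few lines. The paper's contribution is reorganizing the memory access pattern for virtual-memory machines; this plain sketch ignores locality entirely:

        # Plain recursive radix-2 FFT (input length must be a power of two).
        # A reference version only; it makes no attempt at the virtual-memory
        # locality that the paper's algorithm is designed for.
        import cmath

        def fft(x):
            n = len(x)
            if n == 1:
                return list(x)
            even, odd = fft(x[0::2]), fft(x[1::2])
            tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
            return ([even[k] + tw[k] for k in range(n // 2)] +
                    [even[k] - tw[k] for k in range(n // 2)])

        print([round(abs(v), 6) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])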

  4. Dynamic Placement of Virtual Machines with Both Deterministic and Stochastic Demands for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wenying Yue

    2014-01-01

    Full Text Available Cloud computing has come to be a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emissions. Therefore, green cloud computing solutions are needed not only to achieve high-level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs) with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve energy efficiency, a two-phase optimization strategy has been proposed, in which VMs are deployed at runtime and consolidated into servers periodically. Based on an improved multidimensional space partition model, a modified energy efficient algorithm with balanced resource utilization (MEAGLE) and a live migration algorithm based on the basic set (LMABBS) are, respectively, developed for each phase. Experimental results have shown that under different stochastic demand variations of the VMs, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and the Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.

  5. A Cooperative Approach to Virtual Machine Based Fault Injection

    Energy Technology Data Exchange (ETDEWEB)

    Naughton III, Thomas J [ORNL; Engelmann, Christian [ORNL; Vallee, Geoffroy R [ORNL; Aderholdt, William Ferrol [ORNL; Scott, Stephen L [Tennessee Technological University (TTU)

    2017-01-01

    Resilience investigations often employ fault injection (FI) tools to study the effects of simulated errors on a target system. It is important to keep the target system under test (SUT) isolated from the controlling environment in order to maintain control of the experiment. Virtual machines (VMs) have been used to aid these investigations due to the strong isolation properties of system-level virtualization. A key challenge in fault injection tools is to gain proper insight and context about the SUT. In VM-based FI tools, this challenge of target context is increased due to the separation between host and guest (VM). We discuss an approach to VM-based FI that leverages virtual machine introspection (VMI) methods to gain insight into the target's context running within the VM. The key to this environment is the ability to provide basic information to the FI system that can be used to create a map of the target environment. We describe a proof-of-concept implementation and a demonstration of its use to introduce simulated soft errors into an iterative solver benchmark running in user-space of a guest VM.

  6. An RTT-Aware Virtual Machine Placement Method

    Directory of Open Access Journals (Sweden)

    Li Quan

    2017-12-01

    Full Text Available Virtualization is a key technology for mobile cloud computing (MCC), and the virtual machine (VM) is a core component of virtualization. A VM provides a relatively independent running environment for different applications. The VM placement problem therefore focuses on how to place VMs on optimal physical machines, ensuring efficient use of resources, quality of service, and so on. Most previous work focuses on energy consumption, network traffic between VMs and the like, and rarely considers the delay of end users' requests. In contrast, the latency between requests and VMs is considered in this paper for the scenario of optimal VM placement in MCC. In order to minimize the average RTT for all requests, the round-trip time (RTT) is first used as the metric for the latency of requests. Based on our proposed RTT metric, an RTT-aware VM placement algorithm is then proposed to minimize the average RTT. Furthermore, the case in which one of the core switches does not work is considered. A VM rescheduling algorithm is proposed to keep the average RTT low and reduce its fluctuation. Finally, in the simulation study, our algorithm shows its advantage over existing methods, including random placement, the traffic-aware VM placement algorithm and the remaining utilization-aware algorithm.
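
    The placement objective, minimizing the average RTT between request sources and the VM's host, admits a direct greedy formulation. The sketch below assumes a measured RTT matrix and per-source request volumes (both hypothetical) and covers only the selection step, not the paper's full algorithm or its rescheduling variant:

        # Minimal greedy sketch: choose the physical machine minimizing
        # the request-weighted average RTT, given a measured RTT matrix.

        def place_vm(rtt_ms, request_counts):
            # rtt_ms[pm][src] = measured RTT; request_counts[src] = volume
            total = sum(request_counts.values())
            def avg_rtt(pm):
                return sum(rtt_ms[pm][s] * c
                           for s, c in request_counts.items()) / total
            return min(rtt_ms, key=avg_rtt)

        rtt_ms = {"pm1": {"edge-a": 12.0, "edge-b": 40.0},
                  "pm2": {"edge-a": 25.0, "edge-b": 18.0}}
        print(place_vm(rtt_ms, {"edge-a": 100, "edge-b": 300}))  # pm2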

  7. Preserving access to ALEPH computing environment via virtual machines

    International Nuclear Information System (INIS)

    Coscetti, Simone; Boccali, Tommaso; Arezzini, Silvia; Maggi, Marcello

    2014-01-01

    The ALEPH Collaboration [1] took data at the LEP (CERN) electron-positron collider in the period 1989-2000, producing more than 300 scientific papers. While most of the Collaboration's activities stopped in recent years, the data collected still has physics potential, with new theoretical models emerging which call for checks against data at the Z and WW production energies. An attempt to revive and preserve the ALEPH Computing Environment is presented; the aim is not only the preservation of the data files (usually called bit preservation), but of the full environment a physicist would need to perform brand new analyses. Technically, a Virtual Machine approach has been chosen, using the VirtualBox platform. Concerning simulated events, the full chain from event generators to physics plots is possible, and reprocessing of data events is also functioning. Interactive tools like the DALI event display can be used on both data and simulated events. The Virtual Machine approach is suited both for interactive usage and for massive computing using Cloud-like approaches.

  8. Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility

    Science.gov (United States)

    Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.

    2017-10-01

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontiersquid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEP Cloud Facility.

  9. Virtual Machine Provisioning, Code Management, and Data Movement Design for the Fermilab HEPCloud Facility

    Energy Technology Data Exchange (ETDEWEB)

    Timm, S. [Fermilab; Cooper, G. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Holzman, B. [Fermilab; Kennedy, R. [Fermilab; Grassano, D. [Fermilab; Tiradani, A. [Fermilab; Krishnamurthy, R. [IIT, Chicago; Vinayagam, S. [IIT, Chicago; Raicu, I. [IIT, Chicago; Wu, H. [IIT, Chicago; Ren, S. [IIT, Chicago; Noh, S. Y. [KISTI, Daejeon

    2017-11-22

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontiersquid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEP Cloud Facility.

  10. Integrating Heuristic and Machine-Learning Methods for Efficient Virtual Machine Allocation in Data Centers

    OpenAIRE

    Pahlevan, Ali; Qu, Xiaoyu; Zapater Sancho, Marina; Atienza Alonso, David

    2017-01-01

    Modern cloud data centers (DCs) need to tackle efficiently the increasing demand for computing resources and address the energy efficiency challenge. Therefore, it is essential to develop resource provisioning policies that are aware of virtual machine (VM) characteristics, such as CPU utilization and data communication, and applicable in dynamic scenarios. Traditional approaches fall short in terms of flexibility and applicability for large-scale DC scenarios. In this paper we propose a heur...

  11. Modeling and simulation of five-axis virtual machine based on NX

    Science.gov (United States)

    Li, Xiaoda; Zhan, Xianghui

    2018-04-01

    Virtual technology plays a growing role in the machinery manufacturing industry. In this paper, the Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool, so that virtual simulation can be carried out without loss of simulation accuracy. The use of the CAM module's machine builder to define the machine's kinematic chain and components is described. Simulation with the virtual machine can alert users to tool collisions and overcutting during machining, and can evaluate and forecast the feasibility of the process plan.

  12. Virtual reality solutions for the design of machine tools in practice

    OpenAIRE

    Zickner, H.; Neugebauer, Reimund; Weidlich, D.

    2006-01-01

    At the Virtual Reality Centre Production Engineering (VRCP) the Institute for Machine Tools and Production Processes (IWP) of the Chemnitz University of Technology and the Fraunhofer Institute for Machine Tools and Forming Technology (IWU) have developed several practical Virtual Reality (VR) based solutions for the industry. Some practical examples will show the benefits gained by the application of Virtual Reality techniques in the design process of machine tools and assembly lines.

  13. A Virtual Inertia Control Strategy for DC Microgrids Analogized with Virtual Synchronous Machines

    DEFF Research Database (Denmark)

    Wu, Wenhua; Chen, Yandong; Luo, An

    2017-01-01

    In a DC microgrid (DC-MG), the dc bus voltage is vulnerable to power fluctuation derived from the intermittent distributed energy or local loads variation. In this paper, a virtual inertia control strategy for DC-MG through bidirectional grid-connected converters (BGCs) analogized with virtual...... synchronous machine (VSM) is proposed to enhance the inertia of the DC-MG, and to restrain the dc bus voltage fluctuation. The small-signal model of the BGC system is established, and the small-signal transfer function between the dc bus voltage and the dc output current of the BGC is deduced. The dynamic...... for the BGC is introduced to smooth the dynamic response of the dc bus voltage. By analyzing the control system stability, the appropriate virtual inertia control parameters are selected. Finally, simulations and experiments verified the validity of the proposed control strategy....

  14. Mobile virtual synchronous machine for vehicle-to-grid applications

    Energy Technology Data Exchange (ETDEWEB)

    Pelczar, Christopher

    2012-03-20

    The Mobile Virtual Synchronous Machine (VISMA) is a power electronics device for Vehicle to Grid (V2G) applications which behaves like an electromechanical synchronous machine and offers the same beneficial properties to the power network, increasing the inertia in the system, stabilizing the grid voltage, and providing a short-circuit current in case of grid faults. The VISMA performs a real-time simulation of a synchronous machine and calculates the phase currents that an electromagnetic synchronous machine would produce under the same local grid conditions. An inverter with a current controller feeds the currents calculated by the VISMA into the grid. In this dissertation, the requirements for a machine model suitable for the Mobile VISMA are set, and a mathematical model suitable for use in the VISMA algorithm is found and tested in a custom-designed simulation environment prior to implementation on the Mobile VISMA hardware. A new hardware architecture for the Mobile VISMA based on microcontroller and FPGA technologies is presented, and experimental hardware is designed, implemented, and tested. The new architecture is designed in such a way that allows reducing the size and cost of the VISMA, making it suitable for installation in an electric vehicle. A simulation model of the inverter hardware and hysteresis current controller is created, and the simulations are verified with various experiments. The verified model is then used to design a new type of PWM-based current controller for the Mobile VISMA. The performance of the hysteresis- and PWM-based current controllers is evaluated and compared for different operational modes of the VISMA and configurations of the inverter hardware. Finally, the behavior of the VISMA during power network faults is examined. A desired behavior of the VISMA during network faults is defined, and experiments are performed which verify that the VISMA, inverter hardware, and current controllers are capable of supporting this

  15. Virtual machine migration in an over-committed cloud

    KAUST Repository

    Zhang, Xiangliang

    2012-04-01

    While early emphasis of Infrastructure as a Service (IaaS) clouds was on providing resource elasticity to end users, providers are increasingly interested in over-committing their resources to maximize the utilization and returns of their capital investments. In principle, over-committing resources hedges that users - on average - only need a small portion of their leased resources. When such hedge fails (i.e., resource demand far exceeds available physical capacity), providers must mitigate this provider-induced overload, typically by migrating virtual machines (VMs) to underutilized physical machines. Recent works on VM placement and migration assume the availability of target physical machines [1], [2]. However, in an over-committed cloud data center, this is not the case. VM migration can even trigger cascading overloads if performed haphazardly. In this paper, we design a new VM migration algorithm (called Scattered) that minimizes VM migrations in over-committed data centers. Compared to a traditional implementation, our algorithm can balance host utilization across all time epochs. Using real-world data traces from an enterprise cloud, we show that our migration algorithm reduces the risk of overload, minimizes the number of needed migrations, and has minimal impact on communication cost between VMs. © 2012 IEEE.
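
    Scattered itself is not given as pseudocode in this record, so the following is a hedged sketch of the general mitigation loop it addresses: repeatedly move the smallest VM off the most overloaded host onto the least-utilized host that can absorb it without itself crossing the threshold, so migrations cannot cascade. The data structures are assumptions.

```python
def mitigate_overload(hosts, threshold):
    """hosts: {name: {"cap": capacity, "vms": {vm_name: demand}}}"""
    def util(h):
        return sum(hosts[h]["vms"].values()) / hosts[h]["cap"]

    migrations = []
    while True:
        over = [h for h in hosts if util(h) > threshold]
        if not over:
            return migrations
        src = max(over, key=util)                       # worst offender first
        vm, d = min(hosts[src]["vms"].items(), key=lambda kv: kv[1])
        safe = [h for h in hosts if h != src and
                (sum(hosts[h]["vms"].values()) + d) / hosts[h]["cap"] <= threshold]
        if not safe:
            return migrations                           # stop rather than cascade
        dst = min(safe, key=util)
        hosts[dst]["vms"][vm] = hosts[src]["vms"].pop(vm)
        migrations.append((vm, src, dst))
```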

  16. Virtualization Technologies for the Business

    OpenAIRE

    Sabina POPESCU

    2011-01-01

    There is a new trend of change in today's IT industry: virtualization. In the datacenter, virtualization can occur on several levels, but the type of virtualization that created this trend is operating-system, or server, virtualization. OS virtualization technologies come in two forms. The first is a software component that simulates a physical machine, on which a guest operating system runs as if it had total control of the host equipment. The second is a hypervisor, ...

  17. A Cross-Entropy-Based Admission Control Optimization Approach for Heterogeneous Virtual Machine Placement in Public Clouds

    Directory of Open Access Journals (Sweden)

    Li Pan

    2016-03-01

    Full Text Available Virtualization technologies make it possible for cloud providers to consolidate multiple IaaS provisions into a single server in the form of virtual machines (VMs). Additionally, in order to fulfill the divergent service requirements of multiple users, a cloud provider needs to offer several types of VM instances, which are associated with varying configurations and performance, as well as different prices. In such a heterogeneous virtual machine placement process, one significant problem faced by a cloud provider is how to optimally accept and place multiple VM service requests into its cloud data centers to achieve revenue maximization. To address this issue, we first formulate the revenue maximization problem during VM admission control as a multiple-dimensional knapsack problem, which is known to be NP-hard. We then propose a cross-entropy-based optimization approach that obtains a near-optimal eligible set of waiting VM service requests for the provider to accept into its data centers. Finally, through extensive experiments and measurements in a simulated environment, with VM instance classes derived from real-world cloud systems, we show that our proposed cross-entropy-based admission control optimization algorithm is efficient and effective in maximizing a cloud provider's revenue in a public cloud computing environment.
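
    As a concrete illustration of the cross-entropy (CE) idea applied to this knapsack-style admission problem, the toy sketch below samples accept/reject vectors from Bernoulli distributions, keeps the highest-revenue feasible samples as the elite set, and shifts the sampling probabilities toward them. All numbers and CE parameters are made up.

```python
import random

revenue  = [9, 5, 6, 3]                       # revenue per waiting VM request
demand   = [[4, 2], [2, 2], [3, 1], [1, 1]]   # per-request [cpu, ram] demand
capacity = [6, 4]

def value(x):                                 # -1 marks infeasible subsets
    used = [sum(d[k] * xi for d, xi in zip(demand, x)) for k in range(2)]
    if any(u > c for u, c in zip(used, capacity)):
        return -1
    return sum(r * xi for r, xi in zip(revenue, x))

p = [0.5] * len(revenue)                      # Bernoulli sampling probabilities
best = [0] * len(revenue)
for _ in range(50):
    samples = [[int(random.random() < pi) for pi in p] for _ in range(200)]
    elite = sorted(samples, key=value, reverse=True)[:20]      # top 10%
    p = [0.9 * pi + 0.1 * sum(e[i] for e in elite) / len(elite)
         for i, pi in enumerate(p)]                            # smoothed update
    best = max([best] + samples, key=value)
print(best, value(best))                      # near-optimal accepted subset
```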

  18. VirtualSpace: A vision of a machine-learned virtual space environment

    Science.gov (United States)

    Bortnik, J.; Sarno-Smith, L. K.; Chu, X.; Li, W.; Ma, Q.; Angelopoulos, V.; Thorne, R. M.

    2017-12-01

    Space borne instrumentation tends to come and go. A typical instrument will go through a phase of design and construction, be deployed on a spacecraft for several years while it collects data, and then be decommissioned and fade into obscurity. The data collected from that instrument will typically receive much attention while it is being collected, perhaps in the form of event studies, conjunctions with other instruments, or a few statistical surveys, but once the instrument or spacecraft is decommissioned, the data will be archived and receive progressively less attention with every passing year. This is the fate of all historical data, and will be the fate of data being collected by instruments even at the present time. But what if those instruments could come alive, and all be simultaneously present at any and every point in time and space? Imagine the scientific insights, and societal gains that could be achieved with a grand (virtual) heliophysical observatory that consists of every current and historical mission ever deployed? We propose that this is not just fantasy but is imminently doable with the data currently available, with the present computational resources, and with currently available algorithms. This project revitalizes existing data resources and lays the groundwork for incorporating data from every future mission to expand the scope and refine the resolution of the virtual observatory. We call this project VirtualSpace: a machine-learned virtual space environment.

  19. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    Science.gov (United States)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system, the purpose is to train, inform, alter, or study the human operator to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they had little impact outside aviation presumably because the application was so specialized and so expensive.

  20. RBMK full scope simulator gets virtual refuelling machine

    International Nuclear Information System (INIS)

    Khoudiakov, M.; Slonimsky, V.; Mitrofanov, S.

    2006-01-01

    The paper describes the continuation of efforts by an international Russian-Norwegian joint team to drastically increase operational safety during the refuelling process of an RBMK-type reactor by implementing a training simulator based on an innovative Virtual Reality (VR) approach. During the preceding stage of the project, a display-based simulator was extended with VR models of the real Refuelling Machine (RM) and its environment in order to improve both the learning process and operational effectiveness. The simulator's primary aim is to support the operational activity of RM staff, to play a major part in developing basic knowledge and skills, and to keep skilled staff in close touch with the complex machinery of the Refuelling Machine. In the second stage, the functional scope of the VR simulator was greatly enhanced: first, by connecting it to the RBMK-unit full-scope simulator, and second, by upgrading the training program and the simulator model. (author)

  1. Simulation of machine-maintenance training in virtual environment

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Tezuka, Tetsuo; Kashiwa, Ken-ichiro; Ishii, Hirotake

    1997-01-01

    The periodical inspection of nuclear power plants requires a large workforce with a high degree of technical skill for the maintenance of various sorts of machines. Therefore, a new type of maintenance training system is required, where trainees can train safely, easily, and effectively. In this study we developed a training simulation system for disassembling a check valve in a virtual environment (VE). The features of this system are as follows. Firstly, trainees can execute tasks even in the wrong order and experience the resulting conditions. In order to realize this environment, we developed a new Petri-net model for representing the objects' states in the VE. This Petri-net model has several original characteristics, which make it easier to manage changes in the objects' states. Furthermore, we built a support system for constructing the Petri-net model of machine-disassembly training, because the Petri-net model tends to become large. The effectiveness of this support system is shown through the system development. Secondly, the system can present the appropriate next tasks in the VE whenever the trainee wants, even after some mistakes have been made. The effectiveness of this function has also been confirmed by experiments. (author)
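
    The paper's Petri-net formulation is not reproduced here, but a minimal sketch conveys why a Petri net suits this use: a transition (task) can fire only when its input places hold tokens, so wrong-order attempts leave the state unchanged and the resulting conditions are reproducible. The places and transitions below are invented for a check-valve example.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)       # place -> token count
        self.transitions = {}              # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if any(self.marking.get(p, 0) < 1 for p in inputs):
            return False                   # task not executable in this state
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# The bonnet must come off before the disc can be removed.
net = PetriNet({"valve_assembled": 1})
net.add_transition("remove_bonnet", ["valve_assembled"], ["bonnet_off"])
net.add_transition("remove_disc", ["bonnet_off"], ["disc_out"])
print(net.fire("remove_disc"))    # False: wrong order, state unchanged
print(net.fire("remove_bonnet"))  # True
print(net.fire("remove_disc"))    # True: now allowed
```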

  2. HAVmS: Highly Available Virtual Machine Computer System Fault Tolerant with Automatic Failback and Close to Zero Downtime

    Directory of Open Access Journals (Sweden)

    Memmo Federici

    2014-12-01

    Full Text Available In scientific computing, systems often manage computations that require continuous acquisition of satellite data and the management of large databases, as well as the execution of analysis software and simulation models (e.g. Monte Carlo or molecular dynamics cell simulations) which may require several weeks of continuous running. These systems, consequently, should ensure continuity of operation even in case of serious faults. HAVmS (High Availability Virtual machine System) is a highly available, "fault tolerant" system with zero downtime in case of fault. It is based on the use of Virtual Machines and implemented by two servers with similar characteristics. HAVmS, thanks to the developed software solutions, is unique in its kind since it automatically fails back once faults have been fixed. The system has been designed to be used both with professional and inexpensive hardware and supports the simultaneous execution of multiple services such as web, mail, computing and administrative services, uninterrupted computing, and database management. Finally, the system is cost-effective, adopting exclusively open-source solutions, easily manageable, and suited for general use.
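
    The HAVmS software itself is not shown in this record; the loop below is only a schematic of the behaviour described, heartbeat-driven failover with automatic failback once the primary is healthy again. The callback names are hypothetical.

```python
import time

def monitor(primary_alive, take_over, fail_back, interval=1.0):
    """primary_alive: health probe; take_over/fail_back: VM (re)start hooks."""
    active = "primary"
    while True:
        if active == "primary" and not primary_alive():
            take_over()              # restart the VMs on the backup server
            active = "secondary"
        elif active == "secondary" and primary_alive():
            fail_back()              # fault fixed: move VMs back automatically
            active = "primary"
        time.sleep(interval)
```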

  3. A Reference Model for Virtual Machine Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young

    2016-07-01

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.
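
    The paper's model form is not given in this record; as an illustration of building such a reference model from operational data, the sketch below fits launch overhead against CPU and I/O utilization at launch time with a simple linear least-squares fit. The linear form and the sample numbers are assumptions.

```python
import numpy as np

# columns: [cpu_util, io_util] at launch; target: measured launch overhead (s)
X = np.array([[0.10, 0.05], [0.50, 0.20], [0.80, 0.60], [0.95, 0.90]])
y = np.array([12.0, 25.0, 61.0, 118.0])

A = np.hstack([np.ones((len(X), 1)), X])          # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_overhead(cpu, io):
    """Estimate launch overhead before deciding when/where to launch a VM."""
    return float(coef @ np.array([1.0, cpu, io]))

print(predicted_overhead(0.7, 0.4))
```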

  4. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    Science.gov (United States)

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging, based on cloud computing, to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption can not only contribute to greenhouse gas emissions but also raise cloud users' costs. Therefore, multimedia cloud providers should try to minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there is a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Moreover, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
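
    As an illustrative stand-in for the RUA/PA pair (the paper's exact rules are not in this record), the sketch below consolidates with a best-fit placement that preserves a headroom margin, so hosts are packed but not driven to overload, and then lists completely idle hosts for the power-aware shutdown pass.

```python
def place(vms, free, cap, margin=0.1):
    """Best-fit with headroom: keep at least `margin` of each host free."""
    plan = {}
    for vm, d in sorted(vms.items(), key=lambda kv: -kv[1]):   # big VMs first
        ok = {h: f - d for h, f in free.items() if f - d >= margin * cap[h]}
        host = min(ok, key=ok.get)          # tightest fit that keeps headroom
        plan[vm], free[host] = host, ok[host]
    return plan

def hosts_to_power_off(free, cap):
    return [h for h in free if free[h] == cap[h]]   # completely idle hosts

cap = {"h1": 8, "h2": 8, "h3": 8}
free = dict(cap)
plan = place({"a": 4, "b": 3, "c": 1}, free, cap)
print(plan, hosts_to_power_off(free, cap))          # h3 can be shut down
```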

  5. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and specifically present a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator O'Mega. Furthermore, this approach makes it possible to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks introduced by this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte size, new processes or complex higher-order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
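
    The authors' VM is written in Fortran/C and driven by O'Mega byte code; the following Python toy only illustrates the underlying idea of a stack-based interpreter dispatching on byte code to evaluate an expression, here 2*x + y as a stand-in for one term of a huge amplitude expression.

```python
PUSH_CONST, PUSH_VAR, ADD, MUL = range(4)

def run(code, consts, env):
    """Evaluate byte code on a value stack; the result is left on top."""
    stack = []
    for op, arg in code:
        if op == PUSH_CONST:
            stack.append(consts[arg])
        elif op == PUSH_VAR:
            stack.append(env[arg])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

code = [(PUSH_CONST, 0), (PUSH_VAR, "x"), (MUL, None),
        (PUSH_VAR, "y"), (ADD, None)]                  # byte code for 2*x + y
print(run(code, consts=[2.0], env={"x": 3.0, "y": 1.0}))   # 7.0
```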

  6. Modeling the Virtual Machine Launching Overhead under Fermicloud

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele [Fermilab; Wu, Hao [Fermilab; Ren, Shangping [IIT, Chicago; Timm, Steven [Fermilab; Bernabeu, Gerard [Fermilab; Noh, Seo-Young [KISTI, Daejeon

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module enables FermiCloud, when more computational resources are needed, to automatically launch virtual machines onto available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud’s system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data we have obtained on FermiCloud and uses the reference model to guide the cloud bursting process.

  7. Minimizing Total Busy Time with Application to Energy-efficient Scheduling of Virtual Machines in IaaS clouds

    OpenAIRE

    Quang-Hung, Nguyen; Thoai, Nam

    2016-01-01

    Infrastructure-as-a-Service (IaaS) clouds have become more popular, enabling users to run applications in virtual machines. Energy efficiency for IaaS clouds is still a challenge. This paper investigates the energy-efficient scheduling of virtual machines (VMs) onto physical machines (PMs) in IaaS clouds with the following characteristics: multiple resources, fixed intervals, and non-preemption of virtual machines. The scheduling problems are NP-hard. Most of existing works on VM placement reduce ...

  8. A critical survey of live virtual machine migration techniques

    Directory of Open Access Journals (Sweden)

    Anita Choudhary

    2017-11-01

    Full Text Available Virtualization techniques effectively handle the growing demand for computing, storage, and communication resources in large-scale Cloud Data Centers (CDCs). They help achieve different resource management objectives, such as load balancing, online system maintenance, proactive fault tolerance, power management, and resource sharing, through Virtual Machine (VM) migration. VM migration is a resource-intensive procedure, as VMs continuously demand appropriate CPU cycles, cache memory, memory capacity, and communication bandwidth. The process therefore degrades the performance of running applications and adversely affects the efficiency of data centers, particularly when Service Level Agreements (SLAs) and critical business objectives are to be met. Live VM migration is frequently used because it keeps application services available while migration is performed. In this paper, we make an exhaustive survey of the literature on live VM migration and analyze the various proposed mechanisms. We first classify the types of live VM migration (single, multiple, and hybrid). Next, we categorize VM migration techniques based on duplication mechanisms (replication, de-duplication, redundancy, and compression) and awareness of context (dependency, soft page, dirty page, and page fault), and evaluate the various live VM migration techniques. We discuss performance metrics such as application service downtime, total migration time, and amount of data transferred. CPU, memory, and storage data are transferred during VM migration, and we identify the category of data that needs to be transferred in each case. We present a brief discussion of security threats in live VM migration, categorizing them into three classes (control plane, data plane, and migration module). We also explain the security requirements and existing solutions to mitigate possible attacks. Specific gaps are identified and the research challenges in improving

  9. Two Approaches for the Management of Virtual Machines on Grid Infrastructures

    International Nuclear Information System (INIS)

    Tapiador, D.; Rubio-Montero, A. J.; Juedo, E.; Montero, R. S.; Llorente, I. M.

    2007-01-01

    Virtual machines are a promising technology to overcome some of the problems found in current Grid infrastructures, like heterogeneity, performance partitioning, or application isolation. This work shows a comparison between two strategies to manage virtual machines in Globus Grids. The first alternative is a straightforward deployment that does not require additional middleware to be installed. It is based only on standard Grid services and is not bound to a given virtualization technology. Although this option is fully functional, it is only suitable for single-process batch jobs. The second solution makes use of the Virtual Workspace Service, which allows a remote client to securely negotiate and manage a virtual resource. This approach better exploits the potential benefits offered by virtualization technology and provides a wider application range. (Author)

  10. Elevating Virtual Machine Introspection for Fine-Grained Process Monitoring: Techniques and Applications

    Science.gov (United States)

    Srinivasan, Deepa

    2013-01-01

    Recent rapid malware growth has exposed the limitations of traditional in-host malware-defense systems and motivated the development of secure virtualization-based solutions. By running vulnerable systems as virtual machines (VMs) and moving security software from inside VMs to the outside, the out-of-VM solutions securely isolate the anti-malware…

  11. DESIGN OF A VIRTUAL PRIVATE NETWORK WITH A LINUX SERVER AT PT. DHARMA GUNA SAKTI

    Directory of Open Access Journals (Sweden)

    Siswa Trihadi

    2008-05-01

    Full Text Available The purpose of this research is to analyze and design a network between the head office, the branch office, and the company's mobile users, which can be used to increase the performance and effectiveness of the company in carrying out its business processes. Three main methods were used in this research: library study, analysis, and design. The library study method consisted of searching for theoretical sources, knowledge, and other information in books, library articles, and internet pages. The analysis method consisted of observing the company network and conducting interviews to acquire a description of the current business processes and identify problems that could be solved using network technology. Meanwhile, the design method consisted of drawing a network topology diagram, determining the elements needed to design the VPN technology, suggesting a system configuration, and testing whether the suggested system runs well. The result is that the network between the head office, the branch office, and the mobile users can be connected successfully using VPN technology. In conclusion, the connected network between the head and branch offices enables centralization of the company database, and the suggested VPN network has run well, encapsulating the data packages being sent. Keywords: network, Virtual Private Network (VPN), library study, analysis, design

  12. Complementary Machine Intelligence and Human Intelligence in Virtual Teaching Assistant for Tutoring Program Tracing

    Science.gov (United States)

    Chou, Chih-Yueh; Huang, Bau-Hung; Lin, Chi-Jen

    2011-01-01

    This study proposes a virtual teaching assistant (VTA) to share teacher tutoring tasks in helping students practice program tracing and proposes two mechanisms of complementing machine intelligence and human intelligence to develop the VTA. The first mechanism applies machine intelligence to extend human intelligence (teacher answers) to evaluate…

  13. High availability using virtualization

    International Nuclear Information System (INIS)

    Calzolari, Federico; Arezzini, Silvia; Ciampa, Alberto; Mazzoni, Enrico; Domenici, Andrea; Vaglini, Gigliola

    2010-01-01

    High availability has always been one of the main problems for a data center. Until now, high availability was achieved by host-per-host redundancy, a highly expensive method in terms of hardware and human costs. A new approach to the problem is offered by virtualization. Using virtualization, it is possible to achieve a redundancy system for all the services running in a data center. This new approach to high availability allows the running virtual machines to be distributed over a small number of servers, by exploiting the features of the virtualization layer: starting, stopping and moving virtual machines between physical hosts. The 3RC system is based on a finite state machine, providing the possibility to restart each virtual machine on any physical host, or reinstall it from scratch. A complete infrastructure has been developed to install operating system and middleware in a few minutes. To virtualize the main servers of a data center, a new procedure has been developed to migrate physical hosts to virtual ones. The whole Grid data center SNS-PISA is currently running in a virtual environment under the high availability system.
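
    The 3RC state machine itself is not spelled out in this record; the sketch below only illustrates the escalation logic described, restart a failed virtual machine on another physical host and fall back to reinstalling it from scratch. The states and transitions are assumptions.

```python
RUNNING, RESTARTING, REINSTALLING, FAILED = "running", "restarting", "reinstalling", "failed"

def step(state, alive=True, restart_ok=False, reinstall_ok=False):
    """One transition of a per-VM recovery state machine."""
    if state == RUNNING:
        return RUNNING if alive else RESTARTING
    if state == RESTARTING:
        return RUNNING if restart_ok else REINSTALLING
    if state == REINSTALLING:
        return RUNNING if reinstall_ok else FAILED
    return state

s = step(RUNNING, alive=False)            # heartbeat lost -> restart elsewhere
s = step(s, restart_ok=False)             # restart failed  -> reinstall from scratch
s = step(s, reinstall_ok=True)            # reinstall done  -> running again
print(s)                                  # "running"
```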

  14. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    Science.gov (United States)

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability in predicting compounds of diverse structures and complex structure-activity relationships without requiring the knowledge of target 3D structure. This article reviews current progresses in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility to improve the performance of machine learning methods in screening large libraries is discussed.

  15. Electrical Machines Laminations Magnetic Properties: A Virtual Instrument Laboratory

    OpenAIRE

    Martínez-Román, Javier; Pérez Cruz, Juan; Pineda Sánchez, Manuel; Puche Panadero, Rubén; Roger Folch, José; Riera Guasp, Martín Víctor; Sapena Bañó, Ángel

    2015-01-01

    "© 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.” Upon publication, authors are asked to include either a link to the abstract of the published article in IEEE X...

  16. An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud

    Science.gov (United States)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, and such problems can be solved using heuristic algorithms. In this paper, Ant Colony Optimization (ACO) based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple-cloud-provider environment; the response time of each cloud provider is monitored periodically so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time, and the number of migrations.
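
    The paper's ACO formulation (pheromone rules, cost model) is not reproduced here; the toy below shows the general shape of ACO applied to VM placement: ants pick a host for each VM with probability proportional to pheromone**alpha * heuristic**beta, and the best plan found deposits pheromone. All parameters and the cost function are invented.

```python
import random

def aco_place(vms, hosts, cost, iters=50, ants=10, alpha=1.0, beta=2.0, rho=0.1):
    tau = {(v, h): 1.0 for v in vms for h in hosts}     # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            plan = {}
            for v in vms:
                w = [tau[(v, h)] ** alpha * (1.0 / cost(v, h)) ** beta
                     for h in hosts]
                plan[v] = random.choices(hosts, weights=w)[0]
            c = sum(cost(v, h) for v, h in plan.items())
            if c < best_cost:
                best, best_cost = plan, c
        for key in tau:                                 # evaporation ...
            tau[key] *= 1 - rho
        for v, h in best.items():                       # ... then reinforcement
            tau[(v, h)] += 1.0 / best_cost
    return best, best_cost

price = {"h1": 3.0, "h2": 1.0}                          # assumed per-VM cost
print(aco_place(["vm1", "vm2"], ["h1", "h2"], lambda v, h: price[h]))
```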

  17. A self-calibrating robot based upon a virtual machine model of parallel kinematics

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Eiríksson, Eyþór Rúnar; Hansen, Hans Nørgaard

    2016-01-01

    A delta-type parallel kinematics system for Additive Manufacturing has been created, which through a probing system can recognise its geometrical deviations from nominal and compensate for these in the driving inverse kinematic model of the machine. Novelty is that this model is derived from...... a virtual machine of the kinematics system, built on principles from geometrical metrology. Relevant mathematically non-trivial deviations to the ideal machine are identified and decomposed into elemental deviations. From these deviations, a routine is added to a physical machine tool, which allows...

  18. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan.

    Science.gov (United States)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    Rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing TCM intelligent screening system (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand-binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the 200 TCM compounds with the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of interest to the user. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  20. Providing Virtual Execution Environments: A Twofold Illustration

    CERN Document Server

    Grehant, Xavier

    2008-01-01

    Platform virtualization helps solve major grid computing challenges: sharing resources with flexible, user-controlled and custom execution environments while isolating failures and malicious code. Grid resource management tools will evolve to embrace support for virtual resources. We present two open source projects that transparently supply virtual execution environments. Tycoon has been developed at HP Labs to optimise resource usage by creating an economy where users bid for access to virtual machines and compete for CPU cycles. SmartDomains provides a peer-to-peer layer that automates virtual machine deployment using a description language and deployment engine from HP Labs. These projects demonstrate both client-server and peer-to-peer approaches to virtual resource management. The first case makes extensive use of virtual machine features for dynamic resource allocation. The second translates virtual machine capabilities into a sophisticated language where resource management components can b...

  1. Holistic virtual machine scheduling in cloud datacenters towards minimizing total energy

    OpenAIRE

    Li, Xiang; Garraghan, Peter; Jiang, Xiaohong; Wu, Zhaohui; Xu, Jie

    2018-01-01

    Energy consumed by Cloud datacenters has dramatically increased, driven by rapid uptake of applications and services globally provisioned through virtualization. By applying energy-aware virtual machine scheduling, Cloud providers are able to achieve enhanced energy efficiency and reduced operation cost. Energy consumption of datacenters consists of computing energy and cooling energy. However, due to the complexity of energy and thermal modeling of realistic Cloud datacenter operation, tradi...

  2. Protecting Files Hosted on Virtual Machines With Out-of-Guest Access Control

    Science.gov (United States)

    2017-12-01

    As enforced by our SACL, permission checks are performed for the open() and openat() system calls, and for rename-style calls a match on the new name is additionally checked, so that a user or group may only access a file as per the SACL-enforced policy. When an operating system (OS) runs on a virtual machine (VM), a hypervisor, the software that facilitates virtualization of computer

  3. Motion in Human and Machine: A Virtual Fatigue Approach

    NARCIS (Netherlands)

    Potkonjak, V.; Kostic, D.; Rasic, M.; Djordjevic, G.

    2002-01-01

    Achieving human-like behavior of a robot is a key issue of the paper. Redundancy in the inverse kinematics problem is resolved using a biological analogue. It is shown that by means of "virtual fatigue" functions, it is possible to generate robot movements similar to movements of a human arm subject

  4. Electrical Machines Laminations Magnetic Properties: A Virtual Instrument Laboratory

    Science.gov (United States)

    Martinez-Roman, Javier; Perez-Cruz, Juan; Pineda-Sanchez, Manuel; Puche-Panadero, Ruben; Roger-Folch, Jose; Riera-Guasp, Martin; Sapena-Baño, Angel

    2015-01-01

    Undergraduate courses in electrical machines often include an introduction to their magnetic circuits and to the various magnetic materials used in their construction and their properties. The students must learn to be able to recognize and compare the permeability, saturation, and losses of these magnetic materials, relate each material to its…

  5. Liberating Virtual Machines from Physical Boundaries through Execution Knowledge

    Science.gov (United States)

    2015-12-01

    …trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings… hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decoupling the user's workload from host systems and… experiment set-up: VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network.

  6. Exploiting GPUs in Virtual Machine for BioCloud

    OpenAIRE

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computational performance and utilize the seemingly infinite cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that ena...

  7. CloudGC: Recycling Idle Virtual Machines in the Cloud

    OpenAIRE

    Zhang , Bo; Al-Dhuraibi , Yahya; Rouvoy , Romain; Paraiso , Fawaz; Seinturier , Lionel

    2017-01-01

    Cloud computing conveys the image of a pool of unlimited virtual resources that can be quickly and easily provisioned to accommodate user requirements. However, this flexibility may require adjusting physical resources at the infrastructure level to keep pace with user requests. While elasticity can be considered the de facto solution to this issue, elasticity can still be broken by budget requirements or the physical limitations of a private cloud. I...

  8. Reducing Deadline Miss Rate for Grid Workloads running in Virtual Machines: a deadline-aware and adaptive approach

    CERN Document Server

    Khalid, Omer; Anthony, Richard; Petridis, Miltos

    2011-01-01

    This thesis explores three major areas of research: integration of virtualization into scientific grid infrastructures, evaluation of the virtualization overhead on HPC grid jobs' performance, and optimization of job execution times to increase throughput by reducing the job deadline miss rate. Integrating virtualization into the grid to deploy on-demand virtual machines for jobs, in a way that is transparent to end users and has minimum impact on the existing system, poses a significant challenge. This involves the creation of virtual machines, decompression of the operating system image, adapting the virtual environment to satisfy the software requirements of the job, constant updates of the job state once it is running without modifying the batch system or existing grid middleware, and finally bringing the host machine back to a consistent state. To facilitate this research, an existing and in-production pilot job framework has been modified to deploy virtual machines on demand on the grid using...

  9. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, R.; Verhoeven, S.; Vass, M.; Vriend, G.; Esch, I.J. de; Lusher, S.J.; Leurs, R.; Ridder, L.; Kooistra, A.J.; Ritschel, T.; Graaf, C. de

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  10. 3D-e-Chem-VM : Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; De Esch, Iwan J P; Lusher, Scott J.; Leurs, Rob; Ridder, Lars; Kooistra, Albert J.; Ritschel, Tina; de Graaf, C.

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  11. Some measurements of Java-to-bytecode compiler performance in the Java Virtual Machine

    OpenAIRE

    Daly, Charles; Horgan, Jane; Power, James; Waldron, John

    2001-01-01

    In this paper we present a platform independent analysis of the dynamic profiles of Java programs when executing on the Java Virtual Machine. The Java programs selected are taken from the Java Grande Forum benchmark suite, and five different Java-to-bytecode compilers are analysed. The results presented describe the dynamic instruction usage frequencies.

  12. VMIL 2011 : the 5th Workshop on Virtual Machines and Intermediate Languages

    NARCIS (Netherlands)

    Rajan, Hridesh; Hauptmann, Michael; Bockisch, Christoph; Dyer, Robert

    2011-01-01

    The VMIL workshop is a forum for research in virtual machines and intermediate languages. It is dedicated to identifying programming mechanisms and constructs that are currently realized as code transformations or implemented in libraries but should rather be supported at VM level. Candidates for

  13. 6th Workshop on Virtual Machines and Intermediate Languages (VMIL’12)

    NARCIS (Netherlands)

    Rajan, Hridesh; Hauptmann, Michael; Bockisch, Christoph; Blackburn, Steve

    2012-01-01

    The VMIL workshop is a forum for research in virtual machines and intermediate languages. It is dedicated to identifying programming mechanisms and constructs that are currently realized as code transformations or implemented in libraries but should rather be supported at VM level. Candidates for

  14. Seamless live migration of virtual machines over the MAN/WAN

    NARCIS (Netherlands)

    Travostino, F.; Daspit, P.; Gommans, L.; Jog, C.; de Laat, C.; Mambretti, J.; Monga, I.; van Oudenaarde, B.; Raghunath, S.; Wang, P.Y.

    2006-01-01

    The “VM Turntable” demonstrator at iGRID 2005 pioneered the integration of Virtual Machines (VMs) with deterministic “lightpath” network services across a MAN/WAN. The results provide for a new stage of virtualization—one for which computation is no longer localized within a data center but rather

  15. Slot Machines: Pursuing Responsible Gaming Practices for Virtual Reels and Near Misses

    Science.gov (United States)

    Harrigan, Kevin A.

    2009-01-01

    Since 1983, slot machines in North America have used a computer and virtual reels to determine the odds. Since at least 1988, a technique called clustering has been used to create a high number of near misses, failures that are close to wins. The result is that what the player sees does not represent the underlying probabilities and randomness,…
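
    A minimal sketch (with invented numbers) of the mechanism the abstract describes: a physical reel with few stops is driven by a larger virtual reel, and weighting the virtual stops that map to blanks adjacent to the jackpot manufactures frequent near misses without changing the true win probability.

```python
import random

virtual_map = {            # virtual stops per physical stop (illustrative)
    "JACKPOT": 2,          # true odds: 2/64 per reel
    "blank_above": 20,     # heavily weighted neighbours -> frequent near misses
    "blank_below": 20,
    "cherry": 11,
    "bar": 11,
}
reel = [sym for sym, n in virtual_map.items() for _ in range(n)]  # 64 stops

spins = [random.choice(reel) for _ in range(100_000)]
for sym in virtual_map:
    print(sym, round(spins.count(sym) / len(spins), 3))
```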

  16. Software architecture standard for simulation virtual machine, version 2.0

    Science.gov (United States)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  17. Micro-CernVM: slashing the cost of building and deploying virtual machines

    International Nuclear Information System (INIS)

    Blomer, J; Berzano, D; Buncic, P; Charalampidis, I; Ganis, G; Lestaris, G; Meusel, R; Nicolaou, V

    2014-01-01

    The traditional virtual machine (VM) building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM File System (CernVM-FS) has been established in order to decouple the distribution of the experiment software from the building and distribution of the VM hard disk images. We show how to get rid of such pre-built hard disk images altogether. Due to the high requirements on POSIX compliance imposed by HEP application software, CernVM-FS can also be used to host and boot a Linux operating system. This allows the use of a tiny bootable CD image that comprises only a Linux kernel, while the rest of the operating system is provided on demand by CernVM-FS. This approach speeds up the initial instantiation time and reduces virtual machine image sizes by an order of magnitude. Furthermore, security updates can be distributed instantaneously through CernVM-FS. By leveraging the fact that CernVM-FS is a versioning file system, a historic analysis environment can easily be re-spawned by selecting the corresponding CernVM-FS file system snapshot.

  18. Automated Analysis of ARM Binaries using the Low-Level Virtual Machine Compiler Framework

    Science.gov (United States)

    2011-03-01

    ABACAS offers a level of flexibility in software development that would be very useful later in the software engineering life cycle.

  19. The man, the machine and the sacred: when the virtual reality reenchants the world

    Directory of Open Access Journals (Sweden)

    Olivier NANNIPIERI

    2011-01-01

    Full Text Available The rationality associated with technical progress could lead one to believe that the world has become disenchanted. Yet, far from disenchanting the world, certain technical devices reveal the sacred dimension inherent in any human activity. Indeed, paradoxically, we shall show that the human-machine interaction producing virtual environments is an experience of the sacred.

  20. Managing the Virtual Machine Lifecycle of the CernVM Project

    International Nuclear Information System (INIS)

    Charalampidis, I; Blomer, J; Buncic, P; Harutyunyan, A; Larsen, D

    2012-01-01

    CernVM is a virtual software appliance designed to support the development cycle and provide a runtime environment for LHC applications. It consists of a minimal Linux distribution, a specially tuned file system designed to deliver application software on demand, and contextualization tools. The maintenance of these components involves a variety of different procedures and tools that cannot always connect with each other. Additionally, most of these procedures need to be performed frequently. Currently, in the CernVM project, every time we build a new virtual machine image, we have to perform the whole process manually, because of the heterogeneity of the tools involved. The overall process is error-prone and time-consuming. Therefore, to simplify and aid this continuous maintenance process, we are developing a framework that combines these virtually unrelated tools with a single, coherent interface. To do so, we identified all the involved procedures and their tools, tracked their dependencies and organized them into logical groups (e.g. build, test, instantiate). These groups define the procedures that are performed throughout the lifetime of a virtual machine. In this paper we describe the Virtual Machine Lifecycle and the framework we developed (iAgent) in order to simplify the maintenance process.

  1. Protection of Mission-Critical Applications from Untrusted Execution Environment: Resource Efficient Replication and Migration of Virtual Machines

    Science.gov (United States)

    2015-09-28

    in the same LAN ; this setup resembles the typical setup in a virtualized datacenter where protected and backup hosts are connected by an internal LAN ... Virtual Machines 5a. CONTRACT NUMBER 5b. GRANT NUMBER FA9550-10-1-0393 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) Kang G. Shin 5d. PROJECT...Distribution A - Approved for Public Release 13. SUPPLEMENTARY NOTES None 14. ABSTRACT Continuous replication and live migration of Virtual Machines (VMs

  2. DISTRIBUTED SYSTEM FOR HUMAN MACHINE INTERACTION IN VIRTUAL ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    Abraham Obed Chan-Canche

    2017-07-01

    Full Text Available The communication networks built by multiple devices and sensors are becoming more frequent. These device networks allow human-machine interaction development which aims to improve the human performance generating an adaptive environment in response to the information provided by it. The problem of this work is the quick integration of a device network that allows the development of a flexible immersive environment for different uses.

  3. Efficient Hybrid Genetic Based Multi Dimensional Host Load Aware Algorithm for Scheduling and Optimization of Virtual Machines

    OpenAIRE

    Thiruvenkadam, T; Karthikeyani, V

    2014-01-01

    Mapping virtual machines onto a cluster of physical machines is called VM placement. Placing a VM in the appropriate host is necessary for ensuring effective resource utilization and minimizing data center cost as well as power. Here we present an efficient hybrid genetic-based host-load-aware algorithm for scheduling and optimization of virtual machines in a cluster of physical hosts. We developed the algorithm based on two different methods: first, initial VM packing is done by...

  4. Applying machine learning techniques for forecasting flexibility of virtual power plants

    DEFF Research Database (Denmark)

    MacDougall, Pamela; Kosek, Anna Magdalena; Bindner, Henrik W.

    2016-01-01

    This paper presents an approach to investigating the longevity of the aggregated response of a virtual power plant, using historic bidding and aggregated behaviour together with machine learning techniques. The two supervised machine learning techniques investigated and compared are multivariate linear regression and a single-hidden-layer artificial neural network (ANN). It is found that it is possible to estimate the longevity of flexibility with machine learning. The linear regression algorithm is, on average, able to estimate the longevity with a 15% error. However, there is a significant improvement with the ANN algorithm, which achieves, on average, a 5.3% error; this is lowered to 2.4% when learning for the same virtual power plant. With this information it would be possible to accurately offer residential VPP flexibility for market operations and safely avoid causing further imbalances and financial penalties.
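
    As a rough illustration of this comparison, the sketch below trains a multivariate linear regression and a small neural network on synthetic stand-in data; the feature names and the data-generating process are assumptions for demonstration, not the paper's historic bidding dataset.

```python
# Hedged sketch of the regression-vs-ANN comparison on synthetic data.
# Requires scikit-learn; the paper's 15%/5.3% error figures will not be
# reproduced here since the data is made up.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 4))   # stand-ins: bid size, hour, temperature, baseline load
y = 2.0 + X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + rng.normal(0, 0.05, 500)  # flexibility longevity (h)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)):
    model.fit(X_tr, y_tr)
    err = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(type(model).__name__, f"error: {err:.1%}")  # the nonlinear model should win
```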

  5. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead together with the high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud model time series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host machine, and carries out the VM migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.
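
    A minimal sketch of the migration trigger described above, assuming a plain moving-average forecaster in place of the paper's cloud-model time-series predictor; host names, bounds, and workloads are illustrative.

```python
# Hypothetical sketch of a workload-aware migration (WAM) trigger: a host is
# selected as a migration source only when its *forecast* workload also
# exceeds the upper bound, not merely on a momentary spike.

def forecast(history, window=5):
    """Predict the next workload value as the mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def select_source_host(hosts, upper=0.85, window=5):
    """Return the name of a host whose current and predicted load both exceed
    the upper bound, or None if no migration is needed."""
    for name, history in hosts.items():
        if history[-1] > upper and forecast(history, window) > upper:
            return name  # sustained overload, pick as migration source
    return None

hosts = {
    "host-a": [0.70, 0.72, 0.95, 0.60, 0.65],   # momentary peak only
    "host-b": [0.88, 0.90, 0.91, 0.93, 0.92],   # sustained overload
}
print(select_source_host(hosts))  # -> host-b
```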

  6. HEP specific benchmarks of virtual machines on multi-core CPU architectures

    International Nuclear Information System (INIS)

    Alef, M; Gable, I

    2010-01-01

    Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility; however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs and then the VMs are booted onto a single multi-core system. Benchmarks are then simultaneously executed on each VM to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.

  7. Exploiting the ALICE HLT for PROOF by scheduling of Virtual Machines

    International Nuclear Information System (INIS)

    Meoni, Marco; Boettger, Stefan; Zelnicek, Pierre; Kebschull, Udo; Lindenstruth, Volker

    2011-01-01

    The HLT (High-Level Trigger) group of the ALICE experiment at the LHC has prepared a virtual Parallel ROOT Facility (PROOF) enabled cluster (HAF - HLT Analysis Facility) for fast physics analysis, detector calibration and reconstruction of data samples. The HLT cluster currently consists of 2860 CPU cores and 175 TB of storage. Its purpose is the online filtering of the relevant part of the data produced by the particle detector. However, data taking does not run continuously, and exploiting unused cluster resources for other applications is highly desirable, improving the usage-cost ratio of the HLT cluster. As such, unused computing resources are dedicated to a PROOF-enabled virtual cluster available to the entire collaboration. This setup is especially aimed at the prototyping phase of analyses that need a high number of development iterations and a short response time, e.g. tuning of analysis cuts, calibration and alignment. HAF machines are enabled and disabled upon user request to start or complete analysis tasks. This is achieved by a virtual machine scheduling framework which dynamically assigns and migrates virtual machines running PROOF workers to unused physical resources. Using this approach we extend the HLT usage scheme to running both online and offline computing, thereby optimizing the resource usage.

  8. Virtual machine consolidation enhancement using hybrid regression algorithms

    Directory of Open Access Journals (Sweden)

    Amany Abdelsamea

    2017-11-01

    Full Text Available Cloud computing data centers are growing rapidly in both number and capacity to meet the increasing demands for highly responsive computing and massive storage. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and carbon dioxide emissions. The reason for this extremely high energy consumption is not just the quantity of computing resources and the power inefficiency of hardware, but rather lies in the inefficient usage of these resources. VM consolidation relies on live migration of VMs, that is, the capability of transferring a VM between physical servers with close to zero downtime. It is an effective way to improve the utilization of resources and increase energy efficiency in cloud data centers. VM consolidation consists of host overload/underload detection, VM selection, and VM placement. Most current VM consolidation approaches apply either heuristic-based techniques, such as static utilization thresholds and decision-making based on statistical analysis of historical data, or simply periodic adaptation of the VM allocation, and most of those algorithms rely on CPU utilization only for host overload detection. In this paper we propose using hybrid factors to enhance VM consolidation. Specifically, we developed a multiple regression algorithm that uses CPU utilization, memory utilization, and bandwidth utilization for host overload detection. The proposed algorithm, Multiple Regression Host Overload Detection (MRHOD), significantly reduces energy consumption while ensuring a high level of adherence to Service Level Agreements (SLAs), since it gives a real indication of host utilization based on three parameters (CPU, memory, and bandwidth utilization) instead of one parameter only (CPU utilization). Through simulations we show that our approach reduces power consumption by 6 times compared to single-factor algorithms using random workload. Also, using PlanetLab workload traces, we show that MRHOD improves
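
    The following sketch shows the multi-factor idea in miniature: fit a regression of overall host load on CPU, memory, and bandwidth utilization, then flag overload on the predicted value. The coefficients, threshold, and synthetic data are assumptions, not MRHOD's actual parameters.

```python
# Hedged sketch of multi-factor host overload detection in the spirit of
# MRHOD: multiple regression over three utilization dimensions instead of
# thresholding CPU alone.
import numpy as np

rng = np.random.default_rng(0)
# columns: CPU, memory, bandwidth utilization samples for one host
X = rng.uniform(0.2, 0.9, size=(200, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.02, 200)

# ordinary least squares for the regression coefficients (with intercept)
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)

def host_overloaded(cpu, mem, bw, threshold=0.8):
    """Flag the host when the regression-predicted utilization is too high."""
    predicted = coef[0] + coef[1] * cpu + coef[2] * mem + coef[3] * bw
    return predicted > threshold

print(host_overloaded(0.9, 0.85, 0.7))  # -> True for this synthetic host
```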

  9. Round-Trip Delay Estimation in OPC UA Server-Client Communication Channel

    OpenAIRE

    Nakutis, Zilvinas; Deksnys, Vytautas; Jarusevicius, Ignas; Dambrauskas, Vilius; Cincikas, Gediminas; Kriauceliunas, Alenas

    2017-01-01

    In this paper an estimation of round-trip delay (RTD) in OPC UA server-client channel was investigated in various data communication networks including Ethernet, WiFi, and 3G. Testing was carried out using the developed IoT gateway device running OPC UA server and remote computer running OPC UA client. The server and the client machines were configured to operate in Virtual Private Network powered by OpenVPN. Experimental analysis revealed that RTD values are distributed in the wide range exh...

  10. Developing Parametric Models for the Assembly of Machine Fixtures for Virtual Multiaxial CNC Machining Centers

    Science.gov (United States)

    Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.

    2018-01-01

    This paper dwells upon a variance parameterization method. Variance or dimensional parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the built tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in the real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters change. The method can also reduce the time needed to design and engineer preproduction, in particular the development of control programs for CNC equipment and control and measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.

  11. A Load Balancing Scheme Using Federate Migration Based on Virtual Machines for Cloud Simulations

    Directory of Open Access Journals (Sweden)

    Xiao Song

    2015-01-01

    Full Text Available A maturing and promising technology, Cloud computing can benefit large-scale simulations by providing on-demand, anywhere simulation services to users. In order to enable multitask and multiuser simulation systems with Cloud computing, Cloud simulation platform (CSP was proposed and developed. To use key techniques of Cloud computing such as virtualization to promote the running efficiency of large-scale military HLA systems, this paper proposes a new type of federate container, virtual machine (VM, and its dynamic migration algorithm considering both computation and communication cost. Experiments show that the migration scheme effectively improves the running efficiency of HLA system when the distributed system is not saturated.

  12. Exploiting GPUs in Virtual Machine for BioCloud

    Science.gov (United States)

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computation performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computation performance and utilize vast cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computation throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By making each VM able to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465

  13. Exploiting GPUs in Virtual Machine for BioCloud

    Directory of Open Access Journals (Sweden)

    Heeseung Jo

    2013-01-01

    Full Text Available Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computation performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computation performance and utilize vast cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computation throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By making each VM able to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.

  14. Exploiting GPUs in virtual machine for BioCloud.

    Science.gov (United States)

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have begun to be reimplemented to exploit the many cores of GPUs for better computation performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computation performance and utilize vast cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computation throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By making each VM able to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.

  15. Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach.

    Science.gov (United States)

    Pasupa, Kitsuchart; Kudisthalert, Wasu

    2018-01-01

    Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation function in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machines, random forests, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6.
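
    For context, here is a minimal sketch of the base ELM algorithm the paper builds on: hidden-layer weights are drawn at random (exactly the step CWS-ELM replaces with clustering-derived weights), and only the output weights are solved in closed form. The toy fingerprint data is an assumption.

```python
# Minimal Extreme Learning Machine sketch: random hidden layer, closed-form
# least-squares output weights. Not the paper's WS-ELM/CWS-ELM code.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))                # stand-in fingerprint features
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy active/inactive labels

n_hidden = 64
W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (the non-robust step)
b = rng.normal(size=n_hidden)

def hidden(Xm):
    return np.tanh(Xm @ W + b)                # hidden-layer activations

beta = np.linalg.pinv(hidden(X)) @ y          # output weights via pseudo-inverse
preds = (hidden(X) @ beta > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
```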

  16. Performance of machine learning methods for ligand-based virtual screening.

    Science.gov (United States)

    Plewczynski, Dariusz; Spieser, Stéphane A H; Koch, Uwe

    2009-05-01

    Computational screening of compound databases has become increasingly popular in pharmaceutical research. This review focuses on the evaluation of ligand-based virtual screening using active compounds as templates in the context of drug discovery. Ligand-based screening techniques are based on comparative molecular similarity analysis of compounds with known and unknown activity. We provide an overview of publications that have evaluated different machine learning methods, such as support vector machines, decision trees, ensemble methods such as boosting, bagging and random forests, clustering methods, neuronal networks, naïve Bayesian, data fusion methods and others.

  17. Server consolidation for heterogeneous computer clusters using Colored Petri Nets and CPN Tools

    Directory of Open Access Journals (Sweden)

    Issam Al-Azzoni

    2015-10-01

    Full Text Available In this paper, we present a new approach to server consolidation in heterogeneous computer clusters using Colored Petri Nets (CPNs). Server consolidation aims to reduce energy costs and improve resource utilization by reducing the number of servers necessary to run the existing virtual machines in the cluster. It exploits the emerging technology of live migration, which allows virtual machines to be migrated between servers without stopping the services they provide. Server consolidation approaches attempt to find migration plans that minimize the necessary size of the cluster. Our approach finds plans which not only minimize the overall number of used servers, but also minimize the total data migration overhead. The latter objective is not taken into consideration by other approaches and heuristics. We explore the use of CPN Tools in analyzing the state spaces of the CPNs. Since the state space of the CPN model can grow exponentially with the size of the cluster, we examine different techniques to generate and analyze the state space in order to find good server consolidation plans within acceptable time and computing power.

  18. A mathematical framework for virtual IMRT QA using machine learning.

    Science.gov (United States)

    Valdes, G; Scheuermann, R; Hung, C Y; Olszanski, A; Bellerive, M; Solberg, T D

    2016-07-01

    It is common practice to perform patient-specific pretreatment verifications of clinical IMRT deliveries. This process can be time-consuming and not altogether instructive, due to the myriad sources that may produce a failing result. The purpose of this study was to develop an algorithm capable of predicting IMRT QA passing rates a priori. From all treatment sites, 498 IMRT plans were planned in Eclipse version 11 and delivered using a dynamic sliding window technique on Clinac iX or TrueBeam linacs. Passing rates at 3%/3 mm local dose/distance-to-agreement (DTA) were recorded using a commercial 2D diode array. Each plan was characterized by 78 metrics that describe different aspects of plan complexity that could lead to disagreements between the calculated and measured dose. A Poisson regression with Lasso regularization was trained to learn the relation between the plan characteristics and each passing rate. Passing rates at 3%/3 mm local dose/DTA can be predicted with an error smaller than 3% for all plans analyzed. The most important metrics for describing the passing rates were determined to be the MU factor (MU per Gy), small aperture score, irregularity factor, and fraction of the plan delivered at the corners of a 40 × 40 cm field; the higher the value of these metrics, the worse the passing rates. The virtual QA process predicts IMRT passing rates with a high likelihood, allows the detection of failures due to setup errors, and is sensitive enough to detect small differences between matched linacs.
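
    A sketch of the modeling step under stated assumptions: synthetic complexity metrics and passing rates stand in for the 498 plans, and statsmodels' elastic-net fit with L1_wt=1.0 plays the role of the Lasso-regularized Poisson regression.

```python
# Hedged sketch of Lasso-regularized Poisson regression of passing rates on
# plan-complexity metrics. Metric names are placeholders for the 78 metrics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
metrics = rng.uniform(0, 1, size=(n, 4))   # e.g. MU factor, aperture score, irregularity, corner fraction
rate = np.exp(4.6 - 0.3 * metrics[:, 0] - 0.2 * metrics[:, 2])   # mean passing rate (%)
y = rng.poisson(rate)                       # observed counts around that rate

X = sm.add_constant(metrics)
model = sm.GLM(y, X, family=sm.families.Poisson())
result = model.fit_regularized(alpha=0.01, L1_wt=1.0)   # L1_wt=1.0 -> pure Lasso penalty
print(result.params)   # uninformative metrics should shrink toward zero
```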

  19. Using simulation and virtual machines to identify information assurance requirements

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2010-04-01

    The US military is changing its philosophy, approach, and technologies used for warfare. In the process of achieving this vision for high-speed, highly mobile warfare, there are a number of issues that must be addressed and solved; issues that are not addressed by commercial systems because Department of Defense (DoD) Information Technology (IT) systems operate in an environment different from the commercial world. The differences arise from the differences in the scope and skill used in attacks upon DoD systems, the interdependencies between DoD software systems used for network centric warfare (NCW), and the need to rely upon commercial software components in virtually every DoD system. As a result, while NCW promises more effective and efficient means for employing DoD resources, it also increases the vulnerability and allure of DoD systems to cyber attack. A further challenge arises due to the rapid changes in software and information assurance (IA) requirements and technologies over the course of a project. Therefore, the four challenges that must be addressed are determining how to specify the information assurance requirements for a DoD system, minimizing changes to commercial software, incorporating new system and IA requirements in a timely manner with minimal impact, and ensuring that the interdependencies between systems do not result in cyber attack vulnerabilities. In this paper, we address all four issues. In addition to addressing the four challenges outlined above, the interdependencies and interconnections between systems indicate that the IA requirements for a system must consider two important facets of a system's IA defensive capabilities. The facets are the types of IA attacks that the system must repel and the ability of a system to ensure that any IA attack that penetrates the system is contained within the system and does not spread. The IA requirements should be derived from threat assessments for the system as well as for the need to

  20. Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.

    Science.gov (United States)

    Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi

    2017-10-20

    Quality prediction of virtual views is important for free viewpoint video systems and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal to noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman rank correlation coefficient and root mean square error between the actual PSNR and the PSNR predicted by the DDT model are 0.8750 and 0.6137 on average, while for the SVM prediction model they are 0.9109 and 0.5831. The computational complexity of the SVM method is lower than that of the DDT model and the state-of-the-art methods.

  1. Simulation of Digital Control Computer of Nuclear Power Plant Based on Virtual Machine Technology

    International Nuclear Information System (INIS)

    Hou, Xue Yan; Li, Shu; Li, Qing

    2011-01-01

    Based on an analysis of the DCC (Digital Control Computer) instruction set, memory map, display controllers, and I/O system, a virtual machine of the DCC (abbr. VM DCC) has been developed. The executive and control programs, the same as those running on an NPP (Nuclear Power Plant) unit's DCC, run smoothly on the VM DCC and produce the same control results. A dual VM DCC system has been successfully applied in NPP FSS (Full Scope Simulator) training. It not only improves the FSS's fidelity but also makes maintenance easier.

  2. Mastering Citrix XenServer

    CERN Document Server

    Reed, Martez

    2014-01-01

    If you are an administrator who is looking to gain a greater understanding of how to design and implement a virtualization solution based on Citrix® XenServer®, then this book is for you. The book will serve as an excellent resource for those who are already familiar with other virtualization platforms, such as Microsoft Hyper-V or VMware vSphere.The book assumes that you have a good working knowledge of servers, networking, and storage technologies.

  3. The Design and Realization of Virtual Machine of Embedded Soft PLC Running System

    Directory of Open Access Journals (Sweden)

    Qingzhao Zeng

    2014-11-01

    Full Text Available Soft PLC is currently a focus of study in many countries. A soft PLC system consists of a developing system and a running system. The Virtual Machine is an important part of the running system, and indeed of the whole soft PLC system: it interprets and executes the intermediate code generated by the developing system and updates the I/O status of the PLC in order to carry out its control function. This paper introduces the implementation scheme and execution process of the Virtual Machine of an embedded soft PLC running system, focusing on its software implementation, including the realization of the input sampling program, the instruction execution program, and the output refresh program. In addition, an operation code matching method is put forward for the design of the instruction execution program. Finally, a test using the PowerPC/P1010 (Freescale) as the hardware platform and VxWorks as the operating system confirms the accuracy, real-time performance, and reliability of the Virtual Machine.
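
    The core of such a Virtual Machine can be pictured as an opcode-matching scan loop. The sketch below is an illustrative toy, not the paper's implementation: a tiny ladder-logic instruction set with the input-sampling, instruction-execution, and output-refresh structure the abstract describes.

```python
# Toy soft-PLC virtual machine scan cycle: match each operation code to a
# handler, evaluate the logic, then refresh the output image.

def scan_cycle(program, inputs, outputs):
    accumulator = False
    for opcode, operand in program:           # instruction execution phase
        if opcode == "LD":                    # load an input bit
            accumulator = inputs[operand]
        elif opcode == "AND":
            accumulator = accumulator and inputs[operand]
        elif opcode == "OR":
            accumulator = accumulator or inputs[operand]
        elif opcode == "OUT":                 # write result to the output image
            outputs[operand] = accumulator
        else:
            raise ValueError(f"unknown opcode {opcode}")
    return outputs

program = [("LD", "X0"), ("AND", "X1"), ("OUT", "Y0")]
print(scan_cycle(program, {"X0": True, "X1": True}, {"Y0": False}))  # {'Y0': True}
```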

  4. A Virtual Astronomical Research Machine in No Time (VARMiNT)

    Science.gov (United States)

    Beaver, John

    2012-05-01

    We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.

  5. Toward Confirming a Framework for Securing the Virtual Machine Image in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Raid Khalid Hussein

    2017-04-01

    Full Text Available The concept of cloud computing has arisen thanks to academic work in the fields of utility computing, distributed computing, virtualisation, and web services. By using cloud computing, which can be accessed from anywhere, newly launched businesses can minimise their start-up costs. Among the most important notions in the construction of cloud computing is virtualisation. While this concept brings its own security risks, these risks are not necessarily related to the cloud. The main disadvantage of using cloud computing is linked to safety and security, because anybody who chooses to employ cloud computing will use someone else's hard disk and CPU in order to sort and store data. In cloud environments, a great deal of importance is placed on guaranteeing that the virtual machine image is safe and secure. Indeed, a previous study has put forth a framework with which to protect the virtual machine image in cloud computing. As such, the present study is primarily concerned with confirming this theoretical framework so as to ultimately secure the virtual machine image in cloud computing. This will be achieved by carrying out interviews with experts in the field of cloud security.

  6. Effective Cost Mechanism for Cloudlet Retransmission and Prioritized VM Scheduling Mechanism over Broker Virtual Machine Communication Framework

    OpenAIRE

    Raj, Gaurav; Setia, Sonika

    2012-01-01

    Cloud computing is currently the most rapidly growing platform for task execution, and much research is under way to cut down cost and execution time. In this paper, we propose an efficient algorithm for the effective and fast execution of tasks assigned by users. We propose an effective communication framework between broker and virtual machine for assigning tasks and fetching results in optimal time and cost, the Broker Virtual Machine Communication Framework (BVCF). ...

  7. SVM-Prot 2016: A Web-Server for Machine Learning Prediction of Protein Functional Families from Sequence Irrespective of Similarity.

    Science.gov (United States)

    Li, Ying Hong; Xu, Jing Yu; Tao, Lin; Li, Xiao Feng; Li, Shuang; Zeng, Xian; Chen, Shang Ying; Zhang, Peng; Qin, Chu; Zhang, Cheng; Chen, Zhe; Zhu, Feng; Chen, Yu Zong

    2016-01-01

    Knowledge of protein function is important for biological, medical and therapeutic studies, but many proteins are still unknown in function. There is a need for more improved functional prediction methods. Our SVM-Prot web-server employed a machine learning method for predicting protein functional families from protein sequences irrespective of similarity, which complemented the similarity-based and other methods in predicting diverse classes of proteins, including distantly related proteins and homologous proteins of different functions. Since its publication in 2003, we have made major improvements to SVM-Prot: (1) expanded coverage from 54 to 192 functional families, (2) more diverse protein descriptors for protein representation, (3) improved predictive performance due to the use of more enriched training datasets and a greater variety of protein descriptors, (4) a newly integrated BLAST analysis option for assessing proteins in the SVM-Prot predicted functional families that are similar in sequence to a query protein, and (5) a newly added batch submission option for supporting the classification of multiple proteins. Moreover, two more machine learning approaches, K nearest neighbor and probabilistic neural networks, were added to facilitate collective assessment of protein functions by multiple methods. SVM-Prot can be accessed at http://bidd2.nus.edu.sg/cgi-bin/svmprot/svmprot.cgi.

  8. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  9. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach.

    Science.gov (United States)

    Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi

    2016-12-01

    Our purpose was to develop a new machine-learning approach (a virtual health check-up) for identifying those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied when building the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflect the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approaches. No significant differences were observed between any pair of approaches. Only small changes occurred in the AUCs after applying undersampling to build the models. We developed a virtual health check-up that predicted the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power. Copyright © 2016 Elsevier Inc. All rights reserved.
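
    A hedged re-creation of this experimental setup on synthetic data: train GBDT, RF, and LR on an imbalanced dataset with simple random undersampling of the majority class, then compare test AUCs. The dataset and class balance are assumptions, not the health check-up data.

```python
# Sketch of the three-model comparison with majority-class undersampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=6000, weights=[0.92], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# undersample the majority class down to the size of the minority class
pos = np.where(y_tr == 1)[0]
neg = np.random.default_rng(0).choice(np.where(y_tr == 0)[0], size=len(pos), replace=False)
idx = np.concatenate([pos, neg])

for clf in (GradientBoostingClassifier(), RandomForestClassifier(),
            LogisticRegression(max_iter=1000)):
    clf.fit(X_tr[idx], y_tr[idx])
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(type(clf).__name__, f"AUC = {auc:.3f}")
```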

  10. The influence of negative training set size on machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

    The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.

  11. Issues of Application of Machine Learning Models for Virtual and Real-Life Buildings

    Directory of Open Access Journals (Sweden)

    Young Min Kim

    2016-06-01

    Full Text Available The current Building Energy Performance Simulation (BEPS) tools are based on first principles. For the correct use of BEPS tools, simulationists should have an in-depth understanding of building physics, numerical methods, control logics of building systems, etc. However, it takes significant time and effort to develop a first-principles-based simulation model for existing buildings, mainly due to the laborious process of data gathering, uncertain inputs, model calibration, etc. Rather than resorting to an expert's effort, a data-driven approach (the so-called "inverse" approach) has received growing attention for the simulation of existing buildings. This paper reports a cross-comparison of three popular machine learning models (Artificial Neural Network (ANN), Support Vector Machine (SVM), and Gaussian Process (GP)) for predicting a chiller's energy consumption in a virtual and a real-life building. The predictions based on the three models are sufficiently accurate compared to the virtual and real measurements. This paper addresses the following issues for the successful development of machine learning models: reproducibility, selection of inputs, training period, outlying data obtained from the building energy management system (BEMS), and validation of the models. From the results of this comparative study, it was found that SVM has a disadvantage in computation time compared to ANN and GP. GP is the most sensitive to the training period among the three models.
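
    As a sketch of one of the three compared models, the following fits a Gaussian Process regressor to synthetic chiller data; the two input variables and the data-generating formula are illustrative assumptions, not BEMS measurements.

```python
# Minimal Gaussian Process regression sketch for chiller energy prediction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))   # stand-ins: outdoor temperature, cooling load (scaled)
y = 50 + 30 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(0, 1, 200)  # kWh

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict([[0.7, 0.5]], return_std=True)
print(f"predicted {mean[0]:.1f} kWh +/- {2 * std[0]:.1f}")  # prediction with uncertainty band
```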

  12. NExT server

    CERN Document Server

    1989-01-01

    The first website at CERN - and in the world - was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer. The website described the basic features of the web; how to access other people's documents and how to set up your own server. This NeXT machine - the original web server - is still at CERN. As part of the project to restore the first website, in 2013 CERN reinstated the world's first website to its original address.

  13. Virtual reality hardware and graphic display options for brain-machine interfaces.

    Science.gov (United States)

    Marathe, Amar R; Carey, Holle L; Taylor, Dawn M

    2008-01-15

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target-matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing.

  14. Global detection of live virtual machine migration based on cellular neural networks.

    Science.gov (United States)

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection method based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  15. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    Directory of Open Access Journals (Sweden)

    Kang Xie

    2014-01-01

    Full Text Available In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection method based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  16. Simulation of Digital Control Computer of Nuclear Power Plant Based on Virtual Machine Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Xue Yan; Li, Shu; Li, Qing [China Nuclear Power Operation Technology Co., Wuhan (China)

    2011-08-15

    Based on an analysis of the DCC (Digital Control Computer) instruction set, memory map, display controllers, and I/O system, a virtual machine of the DCC (abbr. VM DCC) has been developed. The executive and control programs, the same as those running on an NPP (Nuclear Power Plant) unit's DCC, run smoothly on the VM DCC and produce the same control results. A dual VM DCC system has been successfully applied in NPP FSS (Full Scope Simulator) training. It not only improves the FSS's fidelity but also makes maintenance easier.

  17. The Needs of Virtual Machines Implementation in Private Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Edy Kristianto

    2015-12-01

    Full Text Available The Internet of Things (IoT) has become a central goal of the development of information and communication technology. Cloud computing has a very important role in supporting the IoT, because cloud computing allows services to be provided in the form of infrastructure (IaaS), platform (PaaS), and software (SaaS) to its users. One of the fundamental services is infrastructure as a service (IaaS). This study analyzed the requirements, based on the NIST framework, that must be met to realize infrastructure as a service in the form of virtual machines built in a cloud computing environment.

  18. Thinking computers and virtual persons essays on the intentionality of machines

    CERN Document Server

    Dietrich, Eric

    1994-01-01

    Thinking Computers and Virtual Persons: Essays on the Intentionality of Machines explains how computations are meaningful and how computers can be cognitive agents like humans. This book focuses on the concept that cognition is computation.Organized into four parts encompassing 13 chapters, this book begins with an overview of the analogy between intentionality and phlogiston, the 17th-century principle of burning. This text then examines the objection to computationalism that it cannot prevent arbitrary attributions of content to the various data structures and representations involved in a c

  19. Application of virtual machine technology to real-time mapping of Thomson scattering data to flux coordinates for the LHD

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yoshida, Masanobu; Suzuki, Chihiro; Suzuki, Yasuhiro; Ida, Katsumi; Nagayama, Yoshio; Akiyama, Tsuyoshi; Kawahata, Kazuo; Narihara, Kazumichi; Tokuzawa, Tokihiko; Yamada, Ichihiro

    2012-01-01

    Highlights: ► We have developed a system that maps electron temperature profiles to flux coordinates. ► To increase performance, multiple virtual machines are used. ► Virtual machine technology is flexible when increasing the number of computers. - Abstract: This paper presents a system called “TSMAP” that maps electron temperature profiles to flux coordinates for the Large Helical Device (LHD). Considering that a flux surface is isothermal, TSMAP searches an equilibrium database for the LHD equilibrium that best fits the electron temperature profile. The equilibrium database is built through many VMEC computations of helical equilibria. Because the number of equilibria is large, the most important technical issue for realizing the TSMAP system is computational performance. Therefore, we use multiple personal computers to enhance performance when building the database for TSMAP, running the TSMAP program on virtual machines on multiple Linux computers. Virtual machine technology is flexible, allowing the number of computers to be easily increased. This paper discusses how the use of virtual machine technology enhances the performance of TSMAP calculations when multiple CPU cores are used.

  20. An efficient approach for improving virtual machine placement in cloud computing environment

    Science.gov (United States)

    Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.

    2017-11-01

    The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing that has not been properly addressed by data centre developers. Large data centres in particular struggle with power costs and greenhouse gas production, so employing power-efficient mechanisms is necessary to mitigate these effects. Virtual machine (VM) placement can be used as an effective method to reduce the power consumption of data centres. In this paper, by grouping both virtual and physical machines and taking the maximum absolute deviation into account during VM placement, both the power consumption and the service level agreement (SLA) violation in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation, reducing power consumption by about 5% compared to the modified best-fit decreasing algorithm while at the same time improving the SLA violation by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
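
    For reference, a compact sketch of the best-fit decreasing heuristic used here as the placement baseline, reduced to a single CPU dimension; capacities and demands are illustrative.

```python
# Best-fit decreasing (BFD) VM placement sketch: sort VMs by demand, place
# each on the active host whose remaining capacity fits it most tightly,
# and power on a new host only when nothing fits.

def best_fit_decreasing(vm_demands, host_capacity=1.0):
    hosts = []  # remaining capacity per powered-on host
    for demand in sorted(vm_demands, reverse=True):
        candidates = [i for i, free in enumerate(hosts) if free >= demand]
        if candidates:
            best = min(candidates, key=lambda i: hosts[i] - demand)  # tightest fit
            hosts[best] -= demand
        else:
            hosts.append(host_capacity - demand)  # power on a new host
    return hosts

print(len(best_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4, 0.6])))  # hosts used -> 3
```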

  1. Application of a virtual coordinate measuring machine for measurement uncertainty estimation of aspherical lens parameters

    International Nuclear Information System (INIS)

    Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf

    2014-01-01

    Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty for each parameter. Additionally, any cross-correlation between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters. (paper)
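
    The idea can be sketched in a few lines: perturb each measured point with the machine's error model, refit the geometry, and take the spread of the fitted parameter as its uncertainty. A centred circle's radius stands in for the asphere parameters, and the 0.1 µm noise level is an illustrative assumption, not the METAS µ-CMM error model.

```python
# Conceptual virtual-CMM Monte Carlo: repeat (perturb points -> refit) many
# times and read the parameter uncertainty off the resulting distribution.
import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 50, endpoint=False)
points = 10.0 * np.column_stack([np.cos(angles), np.sin(angles)])  # true radius 10 mm

def fit_radius(pts):
    return np.mean(np.linalg.norm(pts, axis=1))  # centred circle: mean distance to origin

radii = []
for _ in range(1000):                                       # virtual repeated measurements
    noisy = points + rng.normal(0, 0.0001, points.shape)    # 0.1 um stand-in error model
    radii.append(fit_radius(noisy))

print(f"radius = {np.mean(radii):.6f} mm, u = {np.std(radii) * 1e6:.2f} nm")
```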

  2. A Study of Applications of Machine Learning Based Classification Methods for Virtual Screening of Lead Molecules.

    Science.gov (United States)

    Vyas, Renu; Bapat, Sanket; Jain, Esha; Tambe, Sanjeev S; Karthikeyan, Muthukumarasamy; Kulkarni, Bhaskar D

    2015-01-01

    The ligand-based virtual screening of combinatorial libraries employs a number of statistical modeling and machine learning methods. A comprehensive analysis of the application of these methods for the diversity-oriented virtual screening of biological targets/drug classes is presented here. A number of classification models have been built using three types of inputs, namely structure-based descriptors, molecular fingerprints, and therapeutic category, for performing virtual screening. The activity and affinity descriptors of a set of inhibitors of four target classes, DHFR, COX, LOX and NMDA, were utilized to train a total of six classifiers, viz. Artificial Neural Network (ANN), k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), Naïve Bayes (NB), Decision Tree (DT) and Random Forest (RF). Among these classifiers, the ANN was found to be the best classifier, with an AUC of 0.9 irrespective of the target. New molecular fingerprints based on pharmacophore, toxicophore and chemophore (PTC) were used to build the ANN models for each dataset. A good accuracy of 87.27% was obtained using 296 chemophoric binary fingerprints for the COX-LOX inhibitors, compared to pharmacophoric (67.82%) and toxicophoric (70.64%) fingerprints. The methodology was validated on the classical Ames mutagenicity dataset of 4337 molecules. To evaluate it further, the selectivity and promiscuity of molecules from five drug classes, viz. anti-anginal, anti-convulsant, anti-depressant, anti-arrhythmic and anti-diabetic, were studied. The PTC fingerprints computed for each category were able to capture the drug-class-specific features using the k-NN classifier. These models can be useful for selecting optimal molecules for drug design.

  3. Machine-learning scoring functions to improve structure-based binding affinity prediction and virtual screening.

    Science.gov (United States)

    Ain, Qurrat Ul; Aleksandrova, Antoniya; Roessler, Florian D; Ballester, Pedro J

    2015-01-01

    Docking tools to predict whether and how a small molecule binds to a target can be applied if a structural model of such target is available. The reliability of docking depends, however, on the accuracy of the adopted scoring function (SF). Despite intense research over the years, improving the accuracy of SFs for structure-based binding affinity prediction or virtual screening has proven to be a challenging task for any class of method. New SFs based on modern machine-learning regression models, which do not impose a predetermined functional form and thus are able to exploit effectively much larger amounts of experimental data, have recently been introduced. These machine-learning SFs have been shown to outperform a wide range of classical SFs at both binding affinity prediction and virtual screening. The emerging picture from these studies is that the classical approach of using linear regression with a small number of expert-selected structural features can be strongly improved by a machine-learning approach based on nonlinear regression allied with comprehensive data-driven feature selection. Furthermore, the performance of classical SFs does not grow with larger training datasets and hence this performance gap is expected to widen as more training data becomes available in the future. Other topics covered in this review include predicting the reliability of a SF on a particular target class, generating synthetic data to improve predictive performance and modeling guidelines for SF development. WIREs Comput Mol Sci 2015, 5:405-424. doi: 10.1002/wcms.1225 For further resources related to this article, please visit the WIREs website.

  4. The effective use of virtualization for selection of data centers in a cloud computing environment

    Science.gov (United States)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are facilities consisting of networks of remote servers that store, access, and process data. Cloud computing is a technology in which users worldwide submit tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers need to employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of a service provider.
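
    A minimal sketch of the selection rule described above, assuming per-server virtualization energies are already known; the names and numbers are placeholders.

```python
# Energy-based data centre selection: sum the per-server virtualization
# energies and route the task to the data centre with the smallest total.

def total_energy(datacenter):
    return sum(server["vm_energy"] for server in datacenter["servers"])

def select_datacenter(datacenters):
    return min(datacenters, key=total_energy)["name"]

datacenters = [
    {"name": "dc-east", "servers": [{"vm_energy": 120.0}, {"vm_energy": 95.0}]},
    {"name": "dc-west", "servers": [{"vm_energy": 80.0}, {"vm_energy": 110.0}]},
]
print(select_datacenter(datacenters))  # -> dc-west (190.0 < 215.0)
```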

  5. Extending the features of RBMK refuelling machine simulator with a training tool based on virtual reality

    International Nuclear Information System (INIS)

    Khoudiakov, M.; Slonimsky, V.; Mitrofanov, S.

    2004-01-01

    The paper describes a continuation of the efforts of an international Russian-Norwegian joint team to improve operational safety during the refuelling process of an RBMK-type reactor by implementing a training simulator based on an innovative Virtual Reality (VR) approach. During the preceding 1st stage of the project, a display-based simulator was extended with VR models of the real Refuelling Machine (RM) and its environment in order to improve both the learning process and operational effectiveness. The simulator's challenge is to support the performance (operational activity) of RM operational staff, firstly by helping them to develop basic knowledge and skills, and also to keep skilled staff in close touch with the complex machinery of the Refuelling Machine. During the 2nd stage of the joint project, the functional scope of the VR simulator was greatly enhanced: first, by connecting it to the RBMK-unit full-scope simulator, and, second, by including a training program and a simulator model upgrade. The present 3rd stage of the project is primarily oriented towards the improvement of the training process for maintenance and operational personnel by means of the development of the Training Support Methodology and Courses (TSMC), based on Virtual Reality and enlarged 3D and process modelling functionality. The TSMC development is based on the requirements and recommendations of Russian and international regulatory bodies. The design, development and creation of a specialised VR-based Training System for RM maintenance personnel are very important for the Russian RBMK plants. The main goal is to create a powerful, autonomous VR-based simulator for training technical maintenance personnel on the Refuelling Machine. VR-based training is expected to improve the effect of training compared to the current training based on traditional methods using printed documentation. The LNPP management and the regulatory bodies supported this goal. The VR-based Training System should

  6. myChEMBL: a virtual machine implementation of open data and cheminformatics tools.

    Science.gov (United States)

    Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P

    2014-01-15

    myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.

  7. Energy Efficient Multiresource Allocation of Virtual Machine Based on PSO in Cloud Data Center

    Directory of Open Access Journals (Sweden)

    An-ping Xiong

    2014-01-01

    Full Text Available Presently, massive energy consumption in cloud data centers tends to be an escalating threat to the environment. To reduce energy consumption in the cloud data center, an energy efficient virtual machine allocation algorithm is proposed in this paper based on a proposed energy efficient multiresource allocation model and the particle swarm optimization (PSO) method. In this algorithm, the fitness function of PSO is defined as the total Euclidean distance to determine the optimal point between resource utilization and energy consumption. This algorithm can avoid falling into local optima, which is common in traditional heuristic algorithms. Compared to the traditional heuristic algorithms MBFD and MBFH, our algorithm shows significant energy savings in the cloud data center and also makes the utilization of system resources reasonable at the same time.
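
    A sketch of the proposed fitness idea, scoring a candidate allocation by its Euclidean distance to an ideal point of full resource utilization and minimal energy; the normalization to [0, 1] and the bound energy_max are assumptions, not taken from the paper:

```python
# Lower fitness = closer to the optimal utilization/energy trade-off,
# so PSO minimizes this value over candidate VM-to-host allocations.
import numpy as np

def fitness(utilization, energy, energy_max):
    """Distance to the ideal point (utilization = 1, normalized energy = 0)."""
    u = np.asarray(utilization)   # per-resource utilization in [0, 1]
    e = energy / energy_max       # normalized energy consumption
    return np.sqrt(np.sum((1.0 - u) ** 2) + e ** 2)

print(fitness([0.9, 0.8, 0.95], energy=320.0, energy_max=500.0))
```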

  8. Controlling a virtual forearm prosthesis using an adaptive and affective Human-Machine Interface.

    Science.gov (United States)

    Rezazadeh, I Mohammad; Firoozabadi, S M P; Golpayegani, S M R Hashemi; Hu, H

    2011-01-01

    This paper presents the design of an adaptable Human-Machine Interface (HMI) for controlling a virtual forearm prosthesis. Direct physical performance measures (obtained score and completion time) for the requested tasks were calculated. Furthermore, bioelectric signals from the forehead were recorded using one pair of electrodes placed on the frontal region of the subject's head to extract mental (affective) measures while performing the tasks. By employing the proposed algorithm and the above measures, the proposed HMI can adapt itself to the subject's mental states, thus improving the usability of the interface. The quantitative results from 15 subjects show that the proposed HMI achieved better physical performance measures in comparison to a conventional non-adaptive myoelectric controller (p < 0.001).

  9. Indicators of ADHD symptoms in a virtual learning context using machine learning techniques

    Directory of Open Access Journals (Sweden)

    Laura Patricia Mancera Valetts

    2015-12-01

    Full Text Available This paper presents a user model for students performing virtual learning processes. This model is used to infer the presence of Attention Deficit Hyperactivity Disorder (ADHD) indicators in a student. The user model is built considering three user characteristics, which can also be used as variables in different contexts. These variables are: behavioral conduct (BC), executive functions performance (EFP), and emotional state (ES). For inferring the ADHD symptomatic profile of a student and his/her emotional alterations, these features are used as input to a set of classification rules. Based on the testing of the proposed model, training examples are obtained. These examples are used to prepare a classification machine learning algorithm for performing, and improving, the task of profiling a student. The proposed user model can provide the first step toward adapting learning resources in e-learning platforms to people with attention problems, specifically young-adult students with ADHD.

  10. An Adaptive Method For Texture Characterization In Medical Images Implemented on a Parallel Virtual Machine

    Directory of Open Access Journals (Sweden)

    Socrates A. Mylonas

    2003-06-01

    Full Text Available This paper describes the application of a new texture characterization algorithm for the segmentation of medical ultrasound images. The morphology of these images poses significant problems for the application of traditional image processing techniques, and their analysis has been the subject of research for several years. The basis of the algorithm is an optimum signal modelling algorithm (Least Mean Squares-based), which estimates a set of parameters from small image regions. The algorithm has been converted to a structure suitable for implementation on a Parallel Virtual Machine (PVM) consisting of a Network of Workstations (NoW), to improve processing speed. Tests were initially carried out on standard textured images. This paper describes preliminary results of the application of the algorithm in texture discrimination and segmentation of medical ultrasound images. The images examined are primarily used in the diagnosis of carotid plaques, which are linked to the risk of stroke.
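
    A rough sketch of the LMS parameter estimation at the core of the algorithm, fitting linear-prediction weights for one small image region; the patch size, filter order, step size and pass count are assumptions. In the PVM/NoW setting, regions would be distributed across workstations:

```python
# LMS adaptive filter: the converged prediction weights for a patch serve
# as its texture feature vector.
import numpy as np

def lms_texture_params(region, order=4, mu=0.01, passes=5):
    """Fit weights predicting each pixel from its `order` predecessors."""
    x = region.ravel().astype(float)
    w = np.zeros(order)
    for _ in range(passes):
        for n in range(order, len(x)):
            window = x[n - order:n]
            err = x[n] - w @ window   # prediction error
            w += mu * err * window    # LMS weight update
    return w                          # texture parameters for this region

rng = np.random.default_rng(7)
patch = rng.random((8, 8))            # one small image region
print("texture parameters:", np.round(lms_texture_params(patch), 3))
```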

  11. Performance of machine-learning scoring functions in structure-based virtual screening.

    Science.gov (United States)

    Wójcikowski, Maciej; Ballester, Pedro J; Siedlecki, Pawel

    2017-04-25

    Classical scoring functions have reached a plateau in their performance in virtual screening and binding affinity prediction. Recently, machine-learning scoring functions trained on protein-ligand complexes have shown great promise in small tailored studies. They have also raised controversy, specifically concerning model overfitting and applicability to novel targets. Here we provide a new ready-to-use scoring function (RF-Score-VS) trained on 15 426 active and 893 897 inactive molecules docked to a set of 102 targets. We use the full DUD-E data sets along with three docking tools, five classical and three machine-learning scoring functions for model building and performance assessment. Our results show that RF-Score-VS can substantially improve virtual screening performance: RF-Score-VS achieves a 55.6% hit rate in the top 1%, whereas Vina achieves only 16.2% (for smaller percentages the difference is even more encouraging: in the top 0.1%, RF-Score-VS achieves an 88.6% hit rate versus 27.5% for Vina). In addition, RF-Score-VS provides a much better prediction of measured binding affinity than Vina (Pearson correlation of 0.56 and -0.18, respectively). Lastly, we tested RF-Score-VS on an independent test set from the DEKOIS benchmark and observed comparable results. We provide the full data sets to facilitate further research in this area (http://github.com/oddt/rfscorevs) as well as the ready-to-use RF-Score-VS (http://github.com/oddt/rfscorevs_binary).
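
    A hedged sketch of the rescoring idea (not the released RF-Score-VS code): a random forest trained on pose-derived features ranks compounds, and the hit rate among the top 1% is inspected. Features and labels are synthetic placeholders:

```python
# Train a random forest on docking-pose features and measure the hit rate
# among the top-ranked 1% of compounds, the metric quoted in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.random((1000, 20))   # pose features (e.g. contact descriptors)
y = (X[:, 0] + 0.2 * rng.random(1000)) > 0.6   # toy active/inactive labels

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1).fit(X, y)
scores = rf.predict_proba(X)[:, 1]             # rescored compounds
top1pct = np.argsort(scores)[::-1][: len(scores) // 100]
print("hit rate in top 1%:", y[top1pct].mean())
```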

  12. Parallelization of MCNP Monte Carlo neutron and photon transport code in parallel virtual machine and message passing interface

    International Nuclear Information System (INIS)

    Deng Li; Xie Zhongsheng

    1999-01-01

    The coupled neutron and photon transport Monte Carlo code MCNP (version 3B) has been parallelized with Parallel Virtual Machine (PVM) and the Message Passing Interface (MPI) by modifying a previous serial code. The new code has been verified by solving sample problems. The speedup increases linearly with the number of processors, and the average efficiency is up to 99% for 12 processors. (author)
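
    An illustrative message-passing sketch (in Python with mpi4py, not the MCNP code itself) of the pattern the abstract describes: each processor samples particle histories independently and a reduction combines the tallies, which is why the speedup scales almost linearly:

```python
# Toy embarrassingly parallel Monte Carlo tally.
# Run with e.g.: mpiexec -n 12 python mc_tally.py
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

histories_per_rank = 100_000
random.seed(rank)   # independent random streams per processor
local_tally = sum(1 for _ in range(histories_per_rank)
                  if random.random() < 0.3)   # toy absorption probability

total = comm.reduce(local_tally, op=MPI.SUM, root=0)
if rank == 0:
    print("absorption fraction:", total / (histories_per_rank * size))
```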

  13. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    Science.gov (United States)

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, with no JNI bridging code being demanded.

  14. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    Science.gov (United States)

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

    The cloud platform provides various services to users. More and more cloud centers provide infrastructure as their main way of operating. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to users' requirements by sharding resources with virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) placement strategy for virtual machine deployment on the cloud platform. It executes the genetic algorithm in a parallel and distributed fashion on several selected physical hosts in the first stage. It then continues to execute the genetic algorithm of the second stage with the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.
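
    A toy sketch of the two-stage scheme: independent GA runs stand in for the per-host first stage, and their merged populations seed the second stage. Both run_ga and the fitness function are placeholders, not the paper's operators or objective:

```python
# Two-stage distributed GA: merge first-stage survivors, evolve again.
import random

def run_ga(population, fitness, generations=50):
    """Toy GA: mutate-and-select over VM placement vectors (placeholder)."""
    for _ in range(generations):
        child = [g if random.random() > 0.1 else random.randrange(4)
                 for g in random.choice(population)]
        population.append(child)
        population.sort(key=fitness)
        population = population[:20]   # keep the fittest placements
    return population

fitness = lambda p: sum(p)             # stand-in for energy/QoS objective
hosts = 4                              # first stage: one GA per selected host
stage1 = [run_ga([[random.randrange(4) for _ in range(10)]
                  for _ in range(20)], fitness) for _ in range(hosts)]
seed = [p for pop in stage1 for p in pop]   # merged first-stage results
best = run_ga(seed, fitness)[0]             # second stage
print("best placement:", best)
```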

  15. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    Directory of Open Access Journals (Sweden)

    Yu-Shuang Dong

    2014-01-01

    Full Text Available The cloud platform provides various services to users. More and more cloud centers provide infrastructure as their main way of operating. To improve the utilization rate of the cloud center and to decrease operating costs, the cloud center provides services according to users' requirements by sharding resources with virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) placement strategy for virtual machine deployment on the cloud platform. It executes the genetic algorithm in a parallel and distributed fashion on several selected physical hosts in the first stage. It then continues to execute the genetic algorithm of the second stage with the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is the optimal one of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.

  16. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine.

    Science.gov (United States)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; de Esch, Iwan J P; Lusher, Scott J; Leurs, Rob; Ridder, Lars; Kooistra, Albert J; Ritschel, Tina; de Graaf, Chris

    2017-02-27

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein-ligand interaction data from proteome-wide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb).

  17. A novel artificial bee colony approach of live virtual machine migration policy using Bayes theorem.

    Science.gov (United States)

    Xu, Gaochao; Ding, Yan; Zhao, Jia; Hu, Liang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on the VM placement selection of live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm includes two parts. One combines the artificial bee colony (ABC) idea with uniform random initialization, binary search, and a Boltzmann selection policy to achieve an improved ABC-based approach with better global exploration and local exploitation abilities. The other uses Bayes' theorem to further optimize the improved ABC-based process to reach the final optimal solution faster. As a result, the whole approach achieves a longer-term efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of running and migrating VMs compared with existing research. It makes the result of live VM migration more effective and meaningful.
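
    A minimal sketch of the Boltzmann selection step mentioned above: candidate VM placements are chosen with probability proportional to exp(-cost/T), so better (lower-power) solutions dominate as the temperature T falls. The cost model and temperature value are assumptions:

```python
# Boltzmann (softmax) selection over candidate placements.
import math, random

def boltzmann_select(candidates, cost, T):
    weights = [math.exp(-cost(c) / T) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

placements = [[0, 1, 2], [1, 1, 0], [2, 0, 1]]   # toy VM-to-host maps
cost = lambda p: sum(p)                           # stand-in power model
print(boltzmann_select(placements, cost, T=0.5))
```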

  18. A Location Selection Policy of Live Virtual Machine Migration for Power Saving and Load Balancing

    Directory of Open Access Journals (Sweden)

    Jia Zhao

    2013-01-01

    Full Text Available Green cloud data centers have become a research hotspot of virtualized cloud computing architecture, and load balancing has been one of their most important goals. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, which is a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, such as the design of the genetic operators, fitness values, and elitism. We have introduced Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and have presented the specific process for obtaining the final solution; thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and achieves balancing of the system load compared with existing research. It makes the result of live VM migration more effective and meaningful.

  19. A location selection policy of live virtual machine migration for power saving and load balancing.

    Science.gov (United States)

    Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong

    2013-01-01

    Green cloud data centers have become a research hotspot of virtualized cloud computing architecture, and load balancing has been one of their most important goals. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, which is a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, such as the design of the genetic operators, fitness values, and elitism. We have introduced Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and have presented the specific process for obtaining the final solution; thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and achieves balancing of the system load compared with existing research. It makes the result of live VM migration more effective and meaningful.
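
    A small sketch of the Pareto dominance test underlying a multiobjective scheme like MOGA-LS, with power and load imbalance as the two minimized objectives; the candidate hosts and objective values are illustrative:

```python
# Pareto dominance: no worse on every objective, strictly better on one.
def dominates(a, b):
    """a, b: tuples of objective values to minimize."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

candidates = {"hostA": (120.0, 0.30),   # (power W, load imbalance)
              "hostB": (110.0, 0.25),
              "hostC": (105.0, 0.40)}
front = [h for h, obj in candidates.items()
         if not any(dominates(o, obj)
                    for o in candidates.values() if o != obj)]
print("Pareto-optimal migration targets:", front)  # -> ['hostB', 'hostC']
```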

  20. A Novel Artificial Bee Colony Approach of Live Virtual Machine Migration Policy Using Bayes Theorem

    Directory of Open Access Journals (Sweden)

    Gaochao Xu

    2013-01-01

    Full Text Available Green cloud data centers have become a research hotspot of virtualized cloud computing architecture. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on the VM placement selection of live migration for power saving. We present a novel heuristic approach called PS-ABC. Its algorithm includes two parts. One combines the artificial bee colony (ABC) idea with uniform random initialization, binary search, and a Boltzmann selection policy to achieve an improved ABC-based approach with better global exploration and local exploitation abilities. The other uses Bayes' theorem to further optimize the improved ABC-based process to reach the final optimal solution faster. As a result, the whole approach achieves a longer-term efficient optimization for power saving. The experimental results demonstrate that PS-ABC evidently reduces the total incremental power consumption and better protects the performance of running and migrating VMs compared with existing research. It makes the result of live VM migration more effective and meaningful.

  1. Virtual Machine Replication on Achieving Energy-Efficiency in a Cloud

    Directory of Open Access Journals (Sweden)

    Subrota K. Mondal

    2016-07-01

    Full Text Available The rapid growth in cloud service demand has led to the establishment of large-scale virtualized data centers in which virtual machines (VMs) are used to handle user requests for service. A user's request cannot be completed if the VM fails. Replication mechanisms can be used to mitigate the impact of failures. Further, data centers consume a large amount of energy, resulting in high operating costs and contributing to significant greenhouse gas (GHG) emissions. In this paper, we focus on the Infrastructure as a Service (IaaS) cloud, where user job requests are processed by VMs, and analyze the effectiveness of VM replication in terms of job completion time performance as well as energy consumption. Three different schemes are considered: cold, warm, and hot replication. The trade-offs between job completion time and energy consumption in the different replication schemes are characterized through comprehensive analytical models which capture VM state transitions and the associated power consumption patterns. The effectiveness of the replication schemes is demonstrated through experimental results. To verify the validity of the proposed analytical models, we extend the widely used cloud simulator CloudSim and compare the simulation results with the analytical solutions.

  2. Optimizing virtual machine placement for energy and SLA in clouds using utility functions

    Directory of Open Access Journals (Sweden)

    Abdelkhalik Mosa

    2016-10-01

    Full Text Available Cloud computing provides on-demand access to a shared pool of computing resources, which enables organizations to outsource their IT infrastructure. Cloud providers are building data centers to handle the continuous increase in cloud users' demands. Consequently, these cloud data centers consume, and have the potential to waste, substantial amounts of energy. This energy consumption increases the operational cost and the CO2 emissions. The goal of this paper is to develop an optimized energy- and SLA-aware virtual machine (VM) placement strategy that dynamically assigns VMs to Physical Machines (PMs) in cloud data centers. This placement strategy co-optimizes energy consumption and service level agreement (SLA) violations. The proposed solution adopts utility functions to formulate the VM placement problem. A genetic algorithm searches the possible VMs-to-PMs assignments with a view to finding an assignment that maximizes utility. Simulation results using CloudSim show that the proposed utility-based approach reduced the average energy consumption by approximately 6% and the overall SLA violations by more than 38%, using fewer VM migrations and PM shutdowns, compared to a well-known heuristics-based approach.
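
    A sketch of one plausible utility formulation of the kind described, combining normalized energy and SLA terms with weights; the bounds and weights below are assumptions, not the paper's parameters:

```python
# Weighted utility of a candidate VMs-to-PMs assignment; the GA would call
# this as its fitness and search for the assignment maximizing it.
def utility(energy_kwh, sla_violations, e_max=1000.0, v_max=100.0,
            w_energy=0.5, w_sla=0.5):
    u_energy = 1.0 - min(energy_kwh / e_max, 1.0)    # 1 = no energy used
    u_sla = 1.0 - min(sla_violations / v_max, 1.0)   # 1 = no violations
    return w_energy * u_energy + w_sla * u_sla

print(utility(energy_kwh=620.0, sla_violations=12))
```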

  3. Comparison of confirmed inactive and randomly selected compounds as negative training examples in support vector machine-based virtual screening.

    Science.gov (United States)

    Heikamp, Kathrin; Bajorath, Jürgen

    2013-07-22

    The choice of negative training data for machine learning is a little-explored issue in chemoinformatics. In this study, the influence of alternative sets of negative training data and different background databases on support vector machine (SVM) modeling and virtual screening has been investigated. Target-directed SVM models have been derived on the basis of differently composed training sets containing confirmed inactive molecules or randomly selected database compounds as negative training instances. These models were then applied to search background databases consisting of biological screening data or randomly assembled compounds for available hits. Negative training data were found to systematically influence compound recall in virtual screening. In addition, different background databases had a strong influence on the search results. Our findings also indicated that typical benchmark settings lead to an overestimation of SVM-based virtual screening performance compared to search conditions that are more relevant for practical applications.
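
    A hedged sketch of the experimental contrast in this study: the same SVM trained once with confirmed inactives and once with random database compounds as negatives, then compared on a fixed test set. All data below are synthetic stand-ins for the actual compound descriptors:

```python
# Compare recall when only the negative training set changes.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(3)
actives = rng.random((100, 64)) + 0.5
confirmed_inactives = rng.random((100, 64))
random_compounds = rng.random((100, 64)) * 1.2   # broader, unlabeled chemistry

test_X = np.vstack([rng.random((50, 64)) + 0.5, rng.random((50, 64))])
test_y = np.array([1] * 50 + [0] * 50)

for name, neg in [("confirmed inactives", confirmed_inactives),
                  ("random compounds", random_compounds)]:
    X = np.vstack([actives, neg])
    y = np.array([1] * len(actives) + [0] * len(neg))
    svm = SVC().fit(X, y)
    rec = recall_score(test_y, svm.predict(test_X))
    print(f"negatives = {name}: recall = {rec:.2f}")
```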

  4. Virtualization for the LHCb experiment

    International Nuclear Information System (INIS)

    Bonaccorsi, E.; Brarda, L.; Chebbi, M.; Neufeld, N.; Sborzacci, F.

    2012-01-01

    The LHCb experiment, one of the four large particle detectors at CERN, counts in its Online System more than 2000 servers and embedded systems. As a result of ever-increasing CPU performance in modern servers, many of the applications in the controls system are excellent candidates for virtualization technologies. We see virtualization as an approach to cut down costs, optimize resource usage and manage the complexity of the IT infrastructure of LHCb. Recently we have added a Kernel Virtual Machine (KVM) cluster based on Red Hat Enterprise Virtualization for Servers (RHEV), complementary to the existing Hyper-V cluster devoted only to the virtualization of Windows guests. This paper describes the architecture of our solution based on KVM and RHEV, along with its integration with the existing Hyper-V infrastructure and the Quattor cluster management tools, and in particular how we run controls applications on a virtualized infrastructure. We present performance results of both the KVM and Hyper-V solutions, problems encountered, and a description of the management tools developed for the integration with the Online cluster and the LHCb SCADA control system based on PVSS. (authors)

  5. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    Directory of Open Access Journals (Sweden)

    Fechner Nikolas

    2010-03-01

    Full Text Available Background: The virtual screening of large compound databases is an important application of structure-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine-learning-based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results: We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace in which the model gives reliable predictions from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening.

  6. Estimation of the applicability domain of kernel-based machine learning models for virtual screening.

    Science.gov (United States)

    Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas

    2010-03-11

    The virtual screening of large compound databases is an important application of structure-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine-learning-based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace in which the model gives reliable predictions from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. The proposed applicability domain formulations
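
    One plausible applicability-domain score in the spirit of these formulations (an assumption for illustration, not necessarily the authors' definition): the mean kernel similarity of a query compound to its nearest training compounds, with the least-applicable half of the screening set omitted as in the study:

```python
# Applicability score from a precomputed query-vs-training kernel matrix.
import numpy as np

def applicability_score(k_query_train, n_neighbors=5):
    """Mean kernel similarity of one query to its nearest training data."""
    return float(np.sort(k_query_train)[-n_neighbors:].mean())

rng = np.random.default_rng(4)
K = rng.random((200, 1000))          # toy query-vs-training kernel matrix
scores = np.array([applicability_score(row) for row in K])
keep = scores >= np.median(scores)   # screen only the applicable half
print("compounds retained for screening:", int(keep.sum()))
```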

  7. CPU Server

    CERN Multimedia

    The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960s. This tray is a 'dual-core' server. This means it effectively has two CPUs in it (e.g. two of your home computers minimised to fit into a single box). Also note the copper cooling fins, which help dissipate the heat.

  8. Exam 70-411 administering Windows Server 2012

    CERN Document Server

    Course, Microsoft Official Academic

    2014-01-01

    Microsoft Windows Server is a multi-purpose server designed to increase the reliability and flexibility of a network infrastructure. Windows Server is the paramount tool used by enterprises in their datacenter and desktop strategy. The most recent versions of Windows Server also provide both server and client virtualization. Its ubiquity in the enterprise results in the need for networking professionals who know how to plan, design, implement, operate, and troubleshoot networks relying on Windows Server. Microsoft Learning is preparing the next round of its Windows Server Certification program.

  9. SciServer Compute brings Analysis to Big Data in the Cloud

    Science.gov (United States)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis users are still restricted to downloading the selected data sets locally, and increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts.

  10. Machine Learning-based Virtual Screening and Its Applications to Alzheimer's Drug Discovery: A Review.

    Science.gov (United States)

    Carpenter, Kristy A; Huang, Xudong

    2018-06-07

    Virtual Screening (VS) has emerged as an important tool in the drug development process, as it conducts efficient in silico searches over millions of compounds, ultimately increasing yields of potential drug leads. As a subset of Artificial Intelligence (AI), Machine Learning (ML) is a powerful way of conducting VS for drug leads. ML for VS generally involves assembling a filtered training set of compounds, comprised of known actives and inactives. After training, the model is validated and, if sufficiently accurate, used on previously unseen databases to screen for novel compounds with the desired drug target binding activity. The study aims to review ML-based methods used for VS and their applications to Alzheimer's disease (AD) drug discovery. To update the current knowledge on ML for VS, we review thorough backgrounds, explanations, and VS applications of the following ML techniques: Naïve Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN). All techniques have found success in VS, but the future of VS is likely to lean more heavily toward the use of neural networks, and more specifically Convolutional Neural Networks (CNN), a subset of ANN that utilize convolution. We additionally conceptualize a workflow for conducting ML-based VS for potential therapeutics for AD, a complex neurodegenerative disease with no known cure or prevention. This both serves as an example of how to apply the concepts introduced earlier in the review and as a potential workflow for future implementation. Different ML techniques are powerful tools for VS, each with its own advantages and disadvantages. ML-based VS can be applied to AD drug development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  11. Virtualization, The next step for online services

    Directory of Open Access Journals (Sweden)

    Haller Piroska

    2013-06-01

    Full Text Available Virtualization allows sharing and allocating hardware resources among multiple virtual machines, thus increasing their usage rate. There are multiple solutions available today, such as VMware vSphere, Microsoft Hyper-V, Xen Server and Red Hat KVM, each with its own advantages and disadvantages. Choosing the right virtualization solution largely depends on the applications used and their resource requirements. A comparative analysis of the available virtualization solutions shows that it is essential to establish performance criteria and minimum and maximum resource-usage thresholds over a given period of time. The coexistence of different services in different virtual machines that use different amounts of resources allows a more efficient use of the available hardware resources.

  12. Machine learning-based assessment tool for imbalance and vestibular dysfunction with virtual reality rehabilitation system.

    Science.gov (United States)

    Yeh, Shih-Ching; Huang, Ming-Chun; Wang, Pa-Chun; Fang, Te-Yung; Su, Mu-Chun; Tsai, Po-Yi; Rizzo, Albert

    2014-10-01

    Dizziness is a major consequence of imbalance and vestibular dysfunction. Compared to surgery and drug treatments, balance training is non-invasive and more desirable. However, training exercises are usually tedious, and existing assessment tools are insufficient to diagnose a patient's severity rapidly. An interactive virtual reality (VR) game-based rehabilitation program that adopted Cawthorne-Cooksey exercises, together with a sensor-based measuring system, was introduced. To verify the therapeutic effect, a clinical experiment with 48 patients and 36 normal subjects was conducted. Quantified balance indices were measured and analyzed by statistical tools and a Support Vector Machine (SVM) classifier. In terms of balance indices, patients who completed the training process showed progress, and the difference between normal subjects and patients is obvious. Further analysis by the SVM classifier shows that the accuracy of recognizing the differences between patients and normal subjects is feasible, and these results can be used to evaluate patients' severity and make rapid assessments. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. Representability of algebraic topology for biomolecules in machine learning based scoring and virtual screening.

    Science.gov (United States)

    Cang, Zixuan; Mu, Lin; Wei, Guo-Wei

    2018-01-01

    This work introduces a number of algebraic topology approaches, including multi-component persistent homology, multi-level persistent homology, and electrostatic persistence for the representation, characterization, and description of small molecules and biomolecular complexes. In contrast to the conventional persistent homology, multi-component persistent homology retains critical chemical and biological information during the topological simplification of biomolecular geometric complexity. Multi-level persistent homology enables a tailored topological description of inter- and/or intra-molecular interactions of interest. Electrostatic persistence incorporates partial charge information into topological invariants. These topological methods are paired with Wasserstein distance to characterize similarities between molecules and are further integrated with a variety of machine learning algorithms, including k-nearest neighbors, ensemble of trees, and deep convolutional neural networks, to manifest their descriptive and predictive powers for protein-ligand binding analysis and virtual screening of small molecules. Extensive numerical experiments involving 4,414 protein-ligand complexes from the PDBBind database and 128,374 ligand-target and decoy-target pairs in the DUD database are performed to test respectively the scoring power and the discriminatory power of the proposed topological learning strategies. It is demonstrated that the present topological learning outperforms other existing methods in protein-ligand binding affinity prediction and ligand-decoy discrimination.
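
    A hedged sketch of the pipeline's first steps using the open-source ripser and persim packages (assumed available; this is not the authors' code): compute persistence diagrams for two toy point clouds standing in for atomic coordinates, then compare them with the Wasserstein distance used in the paper:

```python
# Persistence diagrams + Wasserstein distance between two point clouds.
import numpy as np
from ripser import ripser
from persim import wasserstein

rng = np.random.default_rng(5)
theta = np.linspace(0, 2 * np.pi, 30, endpoint=False)
# Two noisy rings (guaranteed H1 features), standing in for molecules.
mol_a = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((30, 2))
mol_b = 1.5 * np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((30, 2))

dgm_a = ripser(mol_a)['dgms'][1]   # H1 (loop) persistence diagram
dgm_b = ripser(mol_b)['dgms'][1]
print("Wasserstein distance between H1 diagrams:", wasserstein(dgm_a, dgm_b))
# Such distances (or vectorized diagrams) then feed k-NN, tree ensembles,
# or CNN models for binding-affinity prediction, as described above.
```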

  14. A four-dimensional virtual hand brain-machine interface using active dimension selection.

    Science.gov (United States)

    Rouse, Adam G

    2016-06-01

    Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
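
    A purely hypothetical sketch of the two-stage ADS idea (not the study's decoder): neural features first select which hand dimension is active, then drive velocity along only that dimension. The weights, dimensions, and gain below are invented for illustration:

```python
# Two-stage decode: (i) pick the active dimension, (ii) 1-D velocity control.
import numpy as np

def ads_decode(features, W_select, w_vel, state, gain=0.1):
    """One decode step over a vector of neural firing-rate features."""
    dim = int(np.argmax(W_select @ features))   # stage 1: dimension selection
    velocity = float(w_vel @ features)          # stage 2: velocity decode
    state[dim] += gain * velocity               # integrate along that dim only
    return state

rng = np.random.default_rng(6)
W_select = rng.standard_normal((4, 16))   # 4 grasp dimensions, 16 units
w_vel = rng.standard_normal(16)
state = np.zeros(4)
for _ in range(10):                       # simulate 10 feature samples
    state = ads_decode(rng.random(16), W_select, w_vel, state)
print("hand posture state:", np.round(state, 2))
```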

  15. Representability of algebraic topology for biomolecules in machine learning based scoring and virtual screening

    Science.gov (United States)

    Mu, Lin

    2018-01-01

    This work introduces a number of algebraic topology approaches, including multi-component persistent homology, multi-level persistent homology, and electrostatic persistence for the representation, characterization, and description of small molecules and biomolecular complexes. In contrast to the conventional persistent homology, multi-component persistent homology retains critical chemical and biological information during the topological simplification of biomolecular geometric complexity. Multi-level persistent homology enables a tailored topological description of inter- and/or intra-molecular interactions of interest. Electrostatic persistence incorporates partial charge information into topological invariants. These topological methods are paired with Wasserstein distance to characterize similarities between molecules and are further integrated with a variety of machine learning algorithms, including k-nearest neighbors, ensemble of trees, and deep convolutional neural networks, to manifest their descriptive and predictive powers for protein-ligand binding analysis and virtual screening of small molecules. Extensive numerical experiments involving 4,414 protein-ligand complexes from the PDBBind database and 128,374 ligand-target and decoy-target pairs in the DUD database are performed to test respectively the scoring power and the discriminatory power of the proposed topological learning strategies. It is demonstrated that the present topological learning outperforms other existing methods in protein-ligand binding affinity prediction and ligand-decoy discrimination. PMID:29309403

  16. A security-awareness virtual machine management scheme based on Chinese wall policy in cloud computing.

    Science.gov (United States)

    Yu, Si; Gui, Xiaolin; Lin, Jiancai; Tian, Feng; Zhao, Jianqiang; Dai, Min

    2014-01-01

    Cloud computing gets increasing attention for its capacity to free developers from infrastructure management tasks. However, recent works reveal that side channel attacks can lead to privacy leakage in the cloud. Enhancing isolation between users is an effective solution to eliminate such attacks. In this paper, to eliminate side channel attacks, we investigate the isolation enhancement scheme from the aspect of virtual machine (VM) management. A security-awareness VM management scheme (SVMS), a VM isolation enhancement scheme to defend against side channel attacks, is proposed. First, we use the aggressive conflict of interest relation (ACIR) and aggressive in ally with relation (AIAR) to describe user constraint relations. Second, based on the Chinese wall policy, we put forward four isolation rules. Third, the VM placement and migration algorithms are designed to enforce VM isolation between conflicting users. Finally, based on the normal distribution, we conduct a series of experiments to evaluate SVMS. The experimental results show that SVMS is efficient in guaranteeing isolation between VMs owned by conflicting users, while the resource utilization rate decreases, but not by much.
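
    A minimal sketch of the Chinese-wall placement test implied by the isolation rules: a VM may be placed on a host only if no VM already resident there belongs to a user in conflict with the requester. The conflict relation and host contents below are assumptions:

```python
# Chinese-wall check before VM placement.
conflicts = {("bankA", "bankB")}   # toy conflict-of-interest pairs

def in_conflict(u1, u2):
    return (u1, u2) in conflicts or (u2, u1) in conflicts

def can_place(vm_owner, host_vm_owners):
    """Allow placement only on hosts free of conflicting users' VMs."""
    return not any(in_conflict(vm_owner, o) for o in host_vm_owners)

hosts = {"h1": ["bankA", "retailer"], "h2": ["retailer"]}
print([h for h, owners in hosts.items()
       if can_place("bankB", owners)])   # -> ['h2']
```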

  17. Ghost Whisperer's Ghost in the Machine: An example of pop cultural representation of virtual worlds

    DEFF Research Database (Denmark)

    Reinhard, CarrieLynn D.

    2009-01-01

    Analysis of an episode of the CBS series "Ghost Whisperer" for how it depicts (a) what a virtual world is and (b) the tensions involved in discussing the uses and effects of a virtual world. Discussion focuses on the overriding negative reception of virtual worlds in popular culture due

  18. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-06-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.

  19. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye

    2016-01-01

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively. PMID:27271840

  20. Virtual Planning, Control, and Machining for a Modular-Based Automated Factory Operation in an Augmented Reality Environment.

    Science.gov (United States)

    Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S; Phoon, Sin Ye

    2016-06-07

    This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively.
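
    A small sketch of the axis-aligned bounding box (AABB) test that the CNC module's collision detection relies on: two boxes intersect iff their intervals overlap on every axis. The box extents below are illustrative:

```python
# AABB overlap test: interval overlap on all three axes.
def aabb_overlap(min_a, max_a, min_b, max_b):
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))

tool = ((0, 0, 50), (10, 10, 120))        # cutter bounding box (mm)
workpiece = ((5, 5, 0), (105, 105, 60))   # stock bounding box (mm)
print("collision:", aabb_overlap(tool[0], tool[1],
                                 workpiece[0], workpiece[1]))  # -> True
```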

  1. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    Science.gov (United States)

    2011-01-01

    Background: Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results: We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion: The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing. PMID:21878105

  2. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    Science.gov (United States)

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.

  3. The Development of Mobile Server for Language Courses

    OpenAIRE

    Tokumoto, Hiroko; Yoshida, Mitsunobu

    2009-01-01

    The aim of this paper is to introduce the conceptual design of the mobile server software "MY Server" for language teaching drafted by Tokumoto, and to report how this software was designed and effectively adapted to Japanese language teaching. Most current server systems for education require large-scale facilities, including high-spec server machines and professional administrators, which naturally results in big-budget projects that individual teachers or small schools cannot...

  4. Software platform virtualization in chemistry research and university teaching.

    Science.gov (United States)

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, Linux or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using virtual machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, at around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and the development of software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide.

  5. Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.

    Science.gov (United States)

    Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J

    2011-11-01

    To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Virtualisasi Server Sederhana Menggunakan Proxmox

    Directory of Open Access Journals (Sweden)

    Teguh Prasandy

    2015-05-01

    Using Proxmox as a virtual server: Proxmox provides a local desktop and several nodes. Inside those nodes, operating systems are installed according to the user's needs. IP routing is set up so that the operating systems inside Proxmox can connect to the Internet: the VirtualBox desktop IP 192.168.56.102 is used as the gateway for Proxmox and the operating systems inside it; the Proxmox IP is 192.168.56.105, the Ubuntu Linux IP is 192.168.56.109, and the Debian Linux IP is 192.168.56.108.

  7. Virtual screening approach to identifying influenza virus neuraminidase inhibitors using molecular docking combined with machine-learning-based scoring function.

    Science.gov (United States)

    Zhang, Li; Ai, Hai-Xin; Li, Shi-Meng; Qi, Meng-Yuan; Zhao, Jian; Zhao, Qi; Liu, Hong-Sheng

    2017-10-10

    In recent years, an epidemic of the highly pathogenic avian influenza H7N9 virus has persisted in China, with a high mortality rate. To develop novel anti-influenza therapies, we have constructed a machine-learning-based scoring function (RF-NA-Score) for the effective virtual screening of lead compounds targeting the viral neuraminidase (NA) protein. RF-NA-Score is more accurate than RF-Score, with a root-mean-square error of 1.46, Pearson's correlation coefficient of 0.707, and Spearman's rank correlation coefficient of 0.707 in a 5-fold cross-validation study. The performance of RF-NA-Score in a docking-based virtual screening of NA inhibitors was evaluated with a dataset containing 281 NA inhibitors and 322 noninhibitors. Compared with other docking-rescoring virtual screening strategies, rescoring with RF-NA-Score significantly improved the efficiency of virtual screening, and a strategy that averaged the scores given by RF-NA-Score, based on the binding conformations predicted with AutoDock, AutoDock Vina, and LeDock, was shown to be the best strategy. This strategy was then applied to the virtual screening of NA inhibitors in the SPECS database. The 100 selected compounds were tested in an in vitro H7N9 NA inhibition assay, and two compounds with novel scaffolds showed moderate inhibitory activities. These results indicate that RF-NA-Score improves the efficiency of virtual screening for NA inhibitors, and can be used successfully to identify new NA inhibitor scaffolds. Scoring functions specific for other drug targets could also be established with the same method.
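
    To illustrate the rescoring idea (a sketch, not the authors' actual RF-NA-Score pipeline), the snippet below trains a random-forest regressor with scikit-learn and reports the same three statistics quoted above; the feature matrix and affinities are randomly generated stand-ins for docking-derived descriptors and measured binding data:

        import numpy as np
        from scipy.stats import pearsonr, spearmanr
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.random((500, 36))                     # placeholder interaction features
        y = 4 + 6 * X[:, :5].mean(axis=1) + rng.normal(0, 0.5, 500)  # placeholder affinities

        model = RandomForestRegressor(n_estimators=500, random_state=0)
        pred = cross_val_predict(model, X, y, cv=5)   # 5-fold cross-validation, as in the paper

        rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
        print(f"RMSE={rmse:.2f}  Pearson r={pearsonr(pred, y)[0]:.3f}  "
              f"Spearman rho={spearmanr(pred, y)[0]:.3f}")

    The consensus strategy reported as best in the paper would correspond to averaging, per compound, the scores obtained from the AutoDock, AutoDock Vina, and LeDock binding conformations before ranking.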

  8. Sending servers to Morocco

    CERN Multimedia

    Joannah Caborn Wengler

    2012-01-01

    Did you know that computer centres are like people? They breathe air in and out like a person, they have to be kept at the right temperature, and they can even be organ donors. As part of a regular cycle of equipment renewal, the CERN Computer Centre has just donated 161 retired servers to universities in Morocco.   Prof. Abdeslam Hoummada and CERN DG Rolf Heuer seeing off the servers at the beginning of their journey to Morocco. “Many people don’t realise, but the Computer Centre is like a living thing. You don’t just install equipment and it runs forever. We’re continually replacing machines, broken parts and improving things like the cooling.” Wayne Salter, Leader of the IT Computing Facilities Group, watches over the Computer Centre a bit like a nurse monitoring a patient’s temperature, especially since new international recommendations for computer centre environmental conditions were released. “A new international s...

  9. Using Servers to Enhance Control System Capability

    International Nuclear Information System (INIS)

    Bickley, M.; Bowling, B. A.; Bryan, D. A.; Zeijts, J. van; White, K. S.; Witherspoon, S.

    1999-01-01

    Many traditional control systems include a distributed collection of front-end machines to control hardware. Back-end tools are used to view, modify, and record the signals generated by these front-end machines. Software servers, which form a middleware layer between the front and back ends, can improve a control system in several ways. Servers can enable on-line processing of raw data and consolidation of functionality. In many cases data retrieved from the front end must be processed in order to convert the raw data into useful information. These calculations are often redundantly performed by different programs, frequently offline. Servers can monitor the raw data and rapidly perform calculations, producing new signals which can be treated like any other control system signal and can be used by any back-end application. Algorithms can be incorporated to actively modify signal values in the control system based upon changes of other signals, essentially producing feedback in a control system. Servers thus increase the flexibility of a control system. Lastly, servers running on inexpensive UNIX workstations can relay or cache frequently needed information, reducing the load on front-end hardware by functioning as concentrators. Rather than many back-end tools connecting directly to the front-end machines, increasing the workload of these machines, they instead connect to the server. Servers like those discussed above have been used successfully at the Thomas Jefferson National Accelerator Facility to provide functionality such as beam steering, fault monitoring, storage of machine parameters, and on-line data processing. The authors discuss the potential uses of such servers, and share the results of work performed to date
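
    As a toy illustration of the concentrator and derived-signal ideas (invented names, not Jefferson Lab's actual software), the sketch below caches raw front-end reads and serves a computed signal so that many back-end clients do not re-query the hardware:

        import time

        class SignalServer:
            """Middleware layer: computes derived signals and caches raw reads."""

            def __init__(self, front_end, ttl=0.5):
                self.front_end = front_end   # object exposing read(channel) -> float
                self.ttl = ttl               # cache lifetime in seconds
                self._cache = {}

            def _read(self, channel):
                value, stamp = self._cache.get(channel, (None, 0.0))
                if time.time() - stamp > self.ttl:
                    # Stale entry: query the front-end hardware exactly once
                    value = self.front_end.read(channel)
                    self._cache[channel] = (value, time.time())
                return value

            def beam_power(self):
                # Derived signal computed on-line from raw channels;
                # "current" and "voltage" are hypothetical channel names.
                return self._read("current") * self._read("voltage")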

  10. Usage of OpenStack Virtual Machine and MATLAB HPC Add-on leads to faster turnaround

    KAUST Repository

    Van Waveren, Matthijs

    2017-03-16

    We need to run hundreds of MATLAB® simulations while changing the parameters between each simulation. These simulations need to be run sequentially, and the parameters are defined manually from one simulation to the next. This makes this type of workload unsuitable for a shared cluster. For this reason we are using a cluster running in an OpenStack® Virtual Machine and are using the MATLAB HPC Add-on for submitting jobs to the cluster. As a result we are now able to have a turnaround time for the simulations of the order of a few hours, instead of the 24 hours needed on a local workstation.

  11. Enhancing MINIX 3 Input/Output performance using a virtual machine approach

    OpenAIRE

    Pessolani, Pablo Andrés; González, César Daniel

    2010-01-01

    MINIX 3 is an open-source operating system designed to be highly reliable, flexible, and secure. The kernel is extremely small, and user processes, specialized servers and device drivers run as user-mode insulated processes. These features, the tiny amount of kernel code, and other aspects greatly enhance system reliability. The drawbacks of running device drivers in user mode are the performance penalties on input/output port access, kernel data structure access, and interrupt indirect manage...

  12. From the Symbolic Analysis of Virtual Faces to a Smiles Machine.

    Science.gov (United States)

    Ochs, Magalie; Diday, Edwin; Afonso, Filipe

    2016-02-01

    In this paper, we present an application of symbolic data processing for the design of virtual character's smiling facial expressions. A collected database of virtual character's smiles directly created by users has been explored using symbolic data analysis methods. An unsupervised analysis has enabled us to identify the morphological and dynamic characteristics of different types of smiles as well as of combinations of smiles. Based on the symbolic data analysis, to generate different smiling faces, we have developed procedures to automatically reconstitute smiling virtual faces from a point in a multidimensional space corresponding to a principal component analysis plane.
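
    The reconstitution step, from a point on a principal component analysis plane back to a full set of facial parameters, can be sketched with scikit-learn's PCA.inverse_transform; the smile corpus below is random placeholder data rather than the authors' collected database:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        smiles = rng.random((200, 12))   # rows: smiles; columns: morphological/dynamic parameters

        pca = PCA(n_components=2).fit(smiles)        # the 2-D "PCA plane"
        point = np.array([[0.8, -0.3]])              # a point chosen on that plane
        face_params = pca.inverse_transform(point)   # reconstituted facial parameters
        print(face_params.round(3))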

  13. Secure data aggregation in heterogeneous and disparate networks using stand off server architecture

    Science.gov (United States)

    Vimalathithan, S.; Sudarsan, S. D.; Seker, R.; Lenin, R. B.; Ramaswamy, S.

    2009-04-01

    The emerging global reach of technology presents myriad challenges and intricacies as Information Technology teams aim to provide anywhere, anytime, anyone access for service providers and customers alike. The world is fraught with stifling inequalities, from economic as well as socio-political perspectives. The net result has been large capability gaps between the various organizational locations that need to work together, which has raised new challenges for information security teams. Similar issues arise when mergers and acquisitions among and between organizations take place. While integrating remote business locations with mainstream operations, issues including the lack of application-level support, limited computational capabilities, communication limitations, and legal requirements can seriously complicate integration without violating the organizations' security requirements. Commonly resorted-to techniques such as IPSec, tunneling, and secure socket layer may not always be techno-economically feasible. This paper addresses such security issues by introducing an intermediate server, called a stand-off server, between the corporate central server and remote sites. We present techniques such as break-before-make connections, breaking the connection after transfer, and multiple virtual machine instances with different operating systems, all built on the stand-off-server concept. Our experiments show that the proposed solution provides sufficient isolation of the central server/site from attacks arising out of weak communication and/or computing links and is simple to implement.
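
    A minimal sketch of the break-before-make discipline follows, assuming the stand-off server relays data between a weakly trusted remote site and the central server; the hostnames, ports, and one-line request protocol are hypothetical:

        import socket

        REMOTE = ("remote-site.example.org", 9000)     # hypothetical endpoints
        CENTRAL = ("central-server.example.org", 9001)

        def fetch(addr, request=b"GET\n", bufsize=4096):
            """Open a connection, exchange data, and close it before any
            other connection exists (break before make)."""
            with socket.create_connection(addr, timeout=10) as s:
                s.sendall(request)
                chunks = []
                while (chunk := s.recv(bufsize)):
                    chunks.append(chunk)
            return b"".join(chunks)                    # socket closed here

        def relay_once():
            payload = fetch(REMOTE)                    # link to the weak site is now closed
            with socket.create_connection(CENTRAL, timeout=10) as s:
                s.sendall(payload)                     # only now talk to the central server

    At no point does the stand-off server hold live connections to both sides, which is what isolates the central site from a compromised link.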

  14. Environment server. Digital field information archival technology

    International Nuclear Information System (INIS)

    Kita, Nobuyuki; Kita, Yasuyo; Yang, Hai-quan

    2002-01-01

    For the safe operation of nuclear power plants, it is important to store various information about plants for a long period and to visualize the stored information as desired. The system called Environment Server was developed to realize this. In this paper, the general concept of Environment Server is explained, and its partial implementation for archiving the image information gathered by inspection mobile robots into a virtual world and visualizing it is described. An extension of Environment Server for supporting attention sharing is also briefly introduced. (author)

  15. Planificación del proceso de fresado de una pieza compleja utilizando una máquina herramienta virtual//Milling process planning of a complex workpiece using a virtual machine tool

    Directory of Open Access Journals (Sweden)

    Jorge‐Andrés García‐Barbosa

    2014-08-01

    Full Text Available We designed and successfully manufactured a complex experimental piece composed of surfaces with zero, positive and negative curvature. The machining process was planned and executed using ball-nose end mills on a vertical machining center equipped with a fourth, external rotational axis. For planning, simulation and verification of the process, a virtual model of the available machine tool and its accessories was developed in a commercial computer-aided machining system. The virtual setup of the manufacturing system was implemented, and the process was verified and adjusted until good performance was observed. This confirmed the advantages of using the recent virtual methods offered by several computer-aided machining systems for process simulation, especially for complex components processed on machine tools with more than three axes. Keywords: virtual machine tools, process planning, machining of complex parts, process simulation and verification, multi-axis machining.

  16. MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation

    Energy Technology Data Exchange (ETDEWEB)

    Valdes, G; Scheuermann, R; Solberg, T [University of Pennsylvania, Philadelphia, PA (United States); Chan, M; Deasy, J [Memorial Sloan-Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA model was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining measurements (19) had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low-dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of the area receiving less than 50% of the total CU, fraction of the area receiving dose from the penumbra, weighted average irregularity factor, and duty cycle, among others. Conclusion: We have demonstrated that Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications for the current IMRT process.

  17. MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation

    International Nuclear Information System (INIS)

    Valdes, G; Scheuermann, R; Solberg, T; Chan, M; Deasy, J

    2016-01-01

    Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA model was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining measurements (19) had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low-dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of the area receiving less than 50% of the total CU, fraction of the area receiving dose from the penumbra, weighted average irregularity factor, and duty cycle, among others. Conclusion: We have demonstrated that Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications for the current IMRT process.
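
    A Lasso-regularized Poisson regression of the kind used above can be sketched with statsmodels' elastic-net-penalized GLM (L1_wt=1.0 gives a pure Lasso penalty); the complexity-metric matrix and passing rates below are random placeholders, and the study's observation weights are omitted for brevity:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        X = sm.add_constant(rng.random((416, 90)))   # 90 placeholder complexity metrics
        y = np.clip(100 - 10 * X[:, 1] + rng.normal(0, 1, 416), 0, 100)  # passing rates

        model = sm.GLM(y, X, family=sm.families.Poisson())
        fit = model.fit_regularized(alpha=0.01, L1_wt=1.0)  # pure L1 (Lasso) penalty
        # Lasso drives most coefficients to zero, leaving the informative metrics
        print("non-zero coefficients:", int(np.sum(np.abs(fit.params) > 1e-8)))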

  18. Integrated Real-Virtuality System and Environments for Advanced Control System Developers and Machines Builders

    OpenAIRE

    Hussein, Mohamed

    2008-01-01

    The pace of technological change is increasing, and sophisticated customer-driven markets are forcing rapid machine evolution, increasing complexity and quality, and faster response. To survive and thrive in these markets, machine builders/suppliers require absolute customer and market orientation, focusing on rapid provision of solutions rather than products. Their production systems will need to accommodate unpredictable changes while maintaining financial and operational efficiency with ...

  19. Virtual network computing: cross-platform remote display and collaboration software.

    Science.gov (United States)

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, each unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.

  20. Web server attack analyzer

    OpenAIRE

    Mižišin, Michal

    2013-01-01

    Web server attack analyzer - Abstract The goal of this work was to create a prototype analyzer of injection-flaw attacks on a web server. The proposed solution combines the capabilities of a web application firewall and a web server log analyzer. Analysis is based on configurable signatures defined by regular expressions. This paper begins with a summary of web attacks, followed by an analysis of detection techniques on web servers and a description and justification of the selected implementation. In the end are charact...

  1. Examining Effects of Virtual Machine Settings on Voice over Internet Protocol in a Private Cloud Environment

    Science.gov (United States)

    Liao, Yuan

    2011-01-01

    The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…

  2. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    Science.gov (United States)

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  3. The Development and Evaluation of a Virtual Radiotherapy Treatment Machine Using an Immersive Visualisation Environment

    Science.gov (United States)

    Bridge, P.; Appleyard, R. M.; Ward, J. W.; Philips, R.; Beavis, A. W.

    2007-01-01

    Due to the lengthy learning process associated with complicated clinical techniques, undergraduate radiotherapy students can struggle to access sufficient time or patients to gain the level of expertise they require. By developing a hybrid virtual environment with real controls, it was hoped that group learning of these techniques could take place…

  4. GeoServer cookbook

    CERN Document Server

    Iacovella, Stefano

    2014-01-01

    This book is ideal for GIS experts, developers, and system administrators who have had a first glance at GeoServer and who are eager to explore all its features in order to configure professional map servers. Basic knowledge of GIS and GeoServer is required.

  5. Dynamically allocated virtual clustering management system

    Science.gov (United States)

    Marcus, Kelvin; Cannata, Jess

    2013-05-01

    The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks, where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, so only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers, and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shut down their clusters.

  6. Formalizing the Safety of Java, the Java Virtual Machine and Java Card

    NARCIS (Netherlands)

    Hartel, Pieter H.; Moreau, Luc

    2001-01-01

    We review the existing literature on Java safety, emphasizing formal approaches, and the impact of Java safety on small footprint devices such as smart cards. The conclusion is that while a lot of good work has been done, a more concerted effort is needed to build a coherent set of machine readable formal models of the whole of Java and its implementation.

  7. Employing a virtual reality tool to explicate tacit knowledge of machine operations

    NARCIS (Netherlands)

    Vasenev, Alexandr; Hartmann, Timo; Doree, Andries G.; Hassani, F.

    2013-01-01

    The quality and durability of asphalted roads strongly depend on the final step in the road construction process: the profiling and compaction of the freshly spread asphalt. During compaction, machine operators continuously make decisions on how to proceed with the compaction, accounting for

  8. Using an open-source PACS virtual machine for a digital angiography unit: methods and initial impressions.

    Science.gov (United States)

    Kagadis, George C; Alexakos, Christos; Langer, Steve G; French, Todd

    2012-02-01

    The productivity gains, diagnostic benefit, and enhanced data availability to clinicians enabled by picture archiving and communication systems (PACS) are no longer in doubt. However, commercial PACS offerings are often extremely expensive initially and require ongoing support contracts with vendors to maintain them. Recently, several open-source offerings have become available that put PACS within reach of more users. However, they can be resource-intensive to install, and it can be difficult to ensure that they have room for future growth, both in computational and storage capacity. An alternate approach, which we describe herein, is to build PACS on virtual machines that can be moved from smaller to larger hardware as needed, in a just-in-time manner. This leverages the cost benefits of Moore's Law for both storage and compute costs. We describe the approach and current results in this paper.

  9. Using a Virtual Tablet Machine to Improve Student Understanding of the Complex Processes Involved in Tablet Manufacturing.

    Science.gov (United States)

    Mattsson, Sofia; Sjöström, Hans-Erik; Englund, Claire

    2016-06-25

    Objective. To develop and implement a virtual tablet machine simulation to aid distance students' understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that the simulation, together with the online seminar and the writing of the report, was beneficial to their learning process. Conclusion. According to students' perceptions, the use of the tablet simulation contributed to their understanding of the compaction process.

  10. Mastering Lync Server 2010

    CERN Document Server

    Winters, Nathan

    2012-01-01

    An in-depth guide on the leading Unified Communications platform Microsoft Lync Server 2010 maximizes communication capabilities in the workplace like no other Unified Communications (UC) solution. Written by experts who know Lync Server inside and out, this comprehensive guide shows you step by step how to administer the newest and most robust version of Lync Server. Along with clear and detailed instructions, learning is aided by exercise problems and real-world examples of established Lync Server environments. You'll gain the skills you need to effectively deploy Lync Server 2010 and be on

  11. Virtual machines & volunteer computing: Experience from LHC@Home: Test4Theory project

    CERN Document Server

    Lombraña González, Daniel; Blomer, Jakob; Buncic, Predrag; Harutyunyan, Artem; Marquina, Miguel; Segal, Ben; Skands, Peter; Karneyeu, Anton

    2012-01-01

    Volunteer desktop grids are nowadays becoming more and more powerful thanks to improved high end components: multi-core CPUs, larger RAM memories and hard disks, better network connectivity and bandwidth, etc. As a result, desktop grid systems can run more complex experiments or simulations, but some problems remain: the heterogeneity of hardware architectures and software (library dependencies, code length, big repositories, etc.) make it very difficult for researchers and developers to deploy and maintain a software stack for all the available platforms. In this paper, the employment of virtualization is shown to be the key to solve these problems. It provides a homogeneous layer allowing researchers to focus their efforts on running their experiments. Inside virtual custom execution environments, researchers can control and deploy very complex experiments or simulations running on heterogeneous grids of high-end computers. The following work presents the latest results from CERN’s LHC@home Test4Theory p...

  12. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries

    Directory of Open Access Journals (Sweden)

    Han Bucong

    2012-11-01

    Full Text Available Abstract Background Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and appearances of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. Results We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%-95.01% of inhibitors and 99.81%-99.90% of non-inhibitors in 5-fold cross validation studies. SVM trained by the 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem, 1,496 (0.89%) of 168K MDDR, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. Conclusions SVM showed comparable yield and reduced false-hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitors, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.

  13. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries.

    Science.gov (United States)

    Han, Bucong; Ma, Xiaohua; Zhao, Ruiying; Zhang, Jingxian; Wei, Xiaona; Liu, Xianghui; Liu, Xin; Zhang, Cunlong; Tan, Chunyan; Jiang, Yuyang; Chen, Yuzong

    2012-11-23

    Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and appearances of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%-95.01% of inhibitors and 99.81%-99.90% of non-inhibitors in 5-fold cross validation studies. SVM trained by the 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem, 1,496 (0.89%) of 168K MDDR, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. SVM showed comparable yield and reduced false hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitors, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.
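
    A schematic of such an SVM screening filter, with the strong class imbalance handled through class weights, might look as follows in scikit-learn; the descriptor matrix is synthetic, and the real study used its own molecular descriptors and far larger libraries:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        X = np.vstack([rng.normal(0.5, 1, (200, 64)),     # stand-in "inhibitors"
                       rng.normal(-0.5, 1, (2000, 64))])  # stand-in "non-inhibitors"
        y = np.array([1] * 200 + [0] * 2000)

        # class_weight="balanced" keeps the rare inhibitor class from being
        # swamped by the majority non-inhibitors.
        clf = SVC(kernel="rbf", C=10.0, gamma="scale", class_weight="balanced")
        print(cross_val_score(clf, X, y, cv=5, scoring="recall").round(3))  # inhibitor yield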

  14. How the choice of Operating System can affect databases on a Virtual Machine

    OpenAIRE

    Karlsson, Jan; Eriksson, Patrik

    2014-01-01

    As databases grow in size, the need for optimizing databases is becoming a necessity. Choosing the right operating system to support your database becomes paramount to ensure that the database is fully utilized. Furthermore with the virtualization of operating systems becoming more commonplace, we find ourselves with more choices than we ever faced before. This paper demonstrates why the choice of operating system plays an integral part in deciding the right database for your system in a virt...

  15. New Web Server - the Java Version of Tempest - Produced

    Science.gov (United States)

    York, David W.; Ponyik, Joseph G.

    2000-01-01

    A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.

  16. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    Directory of Open Access Journals (Sweden)

    Yukun Huang

    2014-01-01

    Full Text Available JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental result shows that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  17. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    Science.gov (United States)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet, delivering application resources and data to users based on their demand. The basis of cloud computing is the consumer-provider model: the cloud provider supplies resources that consumers access through the cloud computing model in order to build applications based on their demand. A cloud data center is a bulk of resources on a shared-pool architecture for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On one hand there is a huge number of resources, and on the other hand a huge number of requests has to be served effectively. Therefore, the resource allocation policy and scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy with a monitor component. The monitor component helps to increase cloud resource utilization by managing the Hungarian algorithm, monitoring its state and altering its state based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
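
    The assignment step of the Hungarian algorithm is available off the shelf; the sketch below (Python/SciPy rather than the paper's CloudSim/Java setting) assigns VM requests to candidate hosts so that the total placement cost is minimized, using an invented cost matrix:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # cost[i, j]: cost of placing VM request i on host j
        # (e.g., expected load imbalance); numbers are illustrative only.
        cost = np.array([
            [4.0, 2.0, 8.0],
            [4.0, 3.0, 7.0],
            [3.0, 1.0, 6.0],
        ])

        vm_idx, host_idx = linear_sum_assignment(cost)   # Hungarian algorithm
        for vm, host in zip(vm_idx, host_idx):
            print(f"VM {vm} -> host {host} (cost {cost[vm, host]})")
        print("total cost:", cost[vm_idx, host_idx].sum())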

  18. Combinatorial support vector machines approach for virtual screening of selective multi-target serotonin reuptake inhibitors from large compound libraries.

    Science.gov (United States)

    Shi, Z; Ma, X H; Qin, C; Jia, J; Jiang, Y Y; Tan, C Y; Chen, Y Z

    2012-02-01

    Selective multi-target serotonin reuptake inhibitors enhance antidepressant efficacy. Their discovery can be facilitated by multiple methods, including in silico ones. In this study, we developed and tested an in silico method, combinatorial support vector machines (COMBI-SVMs), for virtual screening (VS) of multi-target serotonin reuptake inhibitors covering seven target pairs (the serotonin transporter paired with the noradrenaline transporter, H(3) receptor, 5-HT(1A) receptor, 5-HT(1B) receptor, 5-HT(2C) receptor, melanocortin 4 receptor and neurokinin 1 receptor, respectively) from large compound libraries. COMBI-SVMs trained with 917-1951 individual target inhibitors correctly identified 22-83.3% (majority >31.1%) of the 6-216 dual inhibitors collected from literature as independent testing sets. COMBI-SVMs showed moderate to good target selectivity, misclassifying as dual inhibitors 2.2-29.8% of the individual-target inhibitors; the identified virtual hits correlate with the reported effects of their predicted targets. COMBI-SVM is potentially useful for searching selective multi-target agents without explicit knowledge of these agents. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Commonality and Variability Analysis for Xenon Family of Separation Virtual Machine Monitors (CVAX)

    Science.gov (United States)

    2017-07-18

    ... the sponsor (e.g., military, intelligence community, other government, commercial, medical) and upon the type of system (e.g., application in the ... loads).
    • Machine memory: Xen's terminology for hardware memory present on a chip.
    • Misuse case (abuse case): an attacker-product interaction that the ... on connections between domains.
    • Physical memory: Xen's terminology, short for pseudo-physical memory; physical memory is the Xen term for the ...

  20. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    Science.gov (United States)

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A realistic facial animation system driven by multiple inputs and based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color values of the input image and Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.

  1. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    Science.gov (United States)

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to making high quality predictions on various data sets and in different compound optimization scenarios.

  2. Upnp-Based Discovery And Management Of Hypervisors And Virtual Machines

    Directory of Open Access Journals (Sweden)

    Sławomir Zieliński

    2011-01-01

    Full Text Available The paper introduces a Universal Plug and Play based discovery and management toolkit that facilitates collaboration between cloud infrastructure providers and users. The presented tools construct a unified hierarchy of devices and their management-related services that represents the current deployment of users' (virtual) infrastructures in the provider's (physical) infrastructure, as well as the management interfaces of the respective devices. The hierarchy can be used to enhance the capabilities of the provider's infrastructure management system. To maintain user independence, the set of management operations exposed by a particular device is always defined by the device owner (either the provider or the user).
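
    The discovery half of UPnP rests on SSDP multicast search. The sketch below issues a standard M-SEARCH request over UDP and prints whichever devices answer; it demonstrates the protocol mechanics only and is not the authors' toolkit:

        import socket

        MSEARCH = "\r\n".join([
            "M-SEARCH * HTTP/1.1",
            "HOST: 239.255.255.250:1900",
            'MAN: "ssdp:discover"',
            "MX: 2",
            "ST: ssdp:all",   # search for all device and service types
            "", "",
        ]).encode()

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.settimeout(3)
        sock.sendto(MSEARCH, ("239.255.255.250", 1900))
        try:
            while True:
                data, addr = sock.recvfrom(65507)
                print(addr, data.split(b"\r\n")[0])   # status line of each response
        except socket.timeout:
            pass
        finally:
            sock.close()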

  3. Improving virtual screening predictive accuracy of Human kallikrein 5 inhibitors using machine learning models.

    Science.gov (United States)

    Fang, Xingang; Bagui, Sikha; Bagui, Subhash

    2017-08-01

    The readily available high-throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. From the thousands of available molecular descriptors developed to encode useful chemical information representing the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structure-activity relationship (QSAR) model. For the development of a systematic descriptor selection strategy, we need an understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the Human kallikrein 5 (hK5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross-validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK5) target suggested a feasible descriptor/model selection strategy for similar targets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  5. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  6. Formalising the Safety of Java, the Java Virtual Machine and Java Card

    OpenAIRE

    Hartel, Pieter H.; Moreau, Luc

    2001-01-01

    We review the existing literature on Java safety, emphasizing formal approaches, and the impact of Java safety on small footprint devices such as smart cards. The conclusion is that while a lot of good work has been done, a more concerted effort is needed to build a coherent set of machine readable formal models of the whole of Java and its implementation. This is a formidable task but we believe it is essential to building trust in Java safety, and thence to achieve ITSEC level 6 or Common C...

  7. A study on constructing a machine-maintenance training system based on virtual reality technology

    International Nuclear Information System (INIS)

    Ishii, H.; Kashiwa, K.; Tezuka, T.; Yoshikawa, H.

    1997-01-01

    The development of a VR based training system are presented for teaching disassembling procedures of mechanical machines used in nuclear power plant. The methods of Petri net model for describing trainees' plausible actions in the disassembling process and reduce a right sequence of action sequence are developed as well as realization of the related Petri net editor and the demonstration of the developed VR based training system was demonstrated by example practice of disassembly simulation of check valve. Moreover, the needed future works are also discussed

  8. Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system

    International Nuclear Information System (INIS)

    Pezzi, M; Favaro, M; Gregori, D; Ricci, P P; Sapunenko, V

    2014-01-01

    In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on use, in an automated way. For several years the Tier1 has used Quattor [2], a server provisioning tool, which is currently used in production. Nevertheless, we have recently started a comparison study involving other tools able to provide specific server installation and configuration features and also offer a proper, fully customizable solution as an alternative to Quattor. Our choice at the moment fell on an integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for the server provisioning and management operations. The tool should provide the following properties in order to replicate and gradually improve the current system features: a system check for storage-specific constraints, such as a kernel module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configurations; scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the new system's suitability for the INFN-T1 environment.

  9. Virtual screening for cytochromes p450: successes of machine learning filters.

    Science.gov (United States)

    Burton, Julien; Ijjaali, Ismail; Petitet, François; Michel, André; Vercauteren, Daniel P

    2009-05-01

    Cytochromes P450 (CYPs) are crucial targets when predicting the ADME properties (absorption, distribution, metabolism, and excretion) of drugs in development. In particular, CYP-mediated drug-drug interactions are responsible for major failures in the drug design process. Accurate and robust screening filters are thus needed to predict interactions of potent compounds with CYPs as early as possible in the process. In recent years, more and more 3D structures of various CYP isoforms have been solved, opening the gate to accurate structure-based studies of interactions. Nevertheless, the ligand-based approach still remains popular. This success can be explained by the growing number of available data and the satisfying performance of existing machine learning (ML) methods. The aim of this contribution is to give an overview of the recent achievements in ML applications to CYP datasets. In particular, popular methods such as support vector machines, decision trees, artificial neural networks, k-nearest neighbors, and partial least squares will be compared, as well as the quality of the datasets and the descriptors used. Consensus of different methods will also be discussed. Often reaching 90% accuracy, the models will be analyzed to highlight the key descriptors permitting the good prediction of CYP binding.

  10. Virtualization and cloud computing in dentistry.

    Science.gov (United States)

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or on a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.

  11. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, Linux analysis software runs on a MacBook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  12. Disk Storage Server

    CERN Multimedia

    This model was a disk storage server used in the Data Centre up until 2012. Each tray contains a hard disk drive (see the 5TB hard disk drive on the main disk display section - this actually fits into one of the trays). There are 16 trays in all per server. Hundreds of these servers are mounted on racks in the Data Centre.

  13. Group-Server Queues

    OpenAIRE

    Li, Quan-Lin; Ma, Jing-Yu; Xie, Mingzhou; Xia, Li

    2017-01-01

    By analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting Group-Server Queues, and establishes two representative group-server queues through loss networks and impatient customers, respectively. Model descriptions and the necessary interpretation are given for these two group-server queues. A simple mathematical discussion is also provided, and simulations are made to study the expected queue lengths, the expected sojourn times ...
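
    The paper's group-server models build on loss networks and impatient customers, which are beyond a quick sketch; as a toy illustration of how expected sojourn times are estimated by simulation in this family of models, here is a plain FCFS M/M/c simulator (all parameters below are arbitrary, not taken from the paper):

      import heapq
      import random

      def mmc_sojourn(lam, mu, c, n_customers, seed=1):
          """Average sojourn time in an FCFS M/M/c queue, estimated by simulation."""
          random.seed(seed)
          free_at = [0.0] * c                        # time at which each server next becomes free
          t = total = 0.0
          for _ in range(n_customers):
              t += random.expovariate(lam)           # Poisson arrivals
              start = max(t, heapq.heappop(free_at)) # wait for the earliest free server
              done = start + random.expovariate(mu)  # exponential service
              heapq.heappush(free_at, done)
              total += done - t
          return total / n_customers

      print(mmc_sojourn(lam=8.0, mu=1.0, c=10, n_customers=200_000))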

  14. NeuroDebian Virtual Machine Deployment Facilitates Trainee-Driven Bedside Neuroimaging Research.

    Science.gov (United States)

    Cohen, Alexander; Kenney-Jung, Daniel; Botha, Hugo; Tillema, Jan-Mendelt

    2017-01-01

    Freely available software, derived from the past 2 decades of neuroimaging research, is significantly more flexible for research purposes than presently available clinical tools. Here, we describe and demonstrate the utility of rapidly deployable analysis software to facilitate trainee-driven translational neuroimaging research. A recipe and video tutorial were created to guide the creation of a NeuroDebian-based virtual computer that conforms to current neuroimaging research standards and can exist within a HIPAA-compliant system. This allows for retrieval of clinical imaging data, conversion to standard file formats, and rapid visualization and quantification of individual patients' cortical and subcortical anatomy. As an example, we apply this pipeline to a pediatric patient's data to illustrate the advantages of research-derived neuroimaging tools in asking quantitative questions "at the bedside." Our goal is to provide a path of entry for trainees to become familiar with common neuroimaging tools and foster an increased interest in translational research.

  15. Efficient operating system level virtualization techniques for cloud resources

    Science.gov (United States)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology which provides Infrastructure, Platform and Software as services. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so it is the need of the hour to make resources available on demand to satisfy user requirements. The technique by which resources - namely storage, processing power, memory and network or I/O - are abstracted is known as virtualization. Various virtualization techniques are available for executing operating systems: full system virtualization and para virtualization. In full virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In para virtualization, the OS must be modified to run in parallel with other OSes, and for the guest OS to access the hardware, the host OS must provide a virtual machine interface. OS virtualization has many advantages, such as transparent application migration, server consolidation, online OS maintenance and improved security. This paper briefs both virtualization techniques and discusses the issues in OS-level virtualization.

  16. SciServer: An Online Collaborative Environment for Big Data in Research and Education

    Science.gov (United States)

    Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr

    2017-01-01

    For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences. SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. SciServer Compute’s virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called “SciServer Courseware,” which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways. SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast

  17. Virtual overlay metrology for fault detection supported with integrated metrology and machine learning

    Science.gov (United States)

    Lee, Hong-Goo; Schmitt-Weaver, Emil; Kim, Min-Suk; Han, Sang-Jun; Kim, Myoung-Soo; Kwon, Won-Taik; Park, Sung-Ki; Ryan, Kevin; Theeuwes, Thomas; Sun, Kyu-Tae; Lim, Young-Wan; Slotboom, Daan; Kubis, Michael; Staecker, Jens

    2015-03-01

    While semiconductor manufacturing moves toward the 7nm node for logic and 15nm node for memory, an increased emphasis has been placed on reducing the influence known contributors have on the on-product overlay budget. With a machine learning technique known as function approximation, we use a neural network to gain insight into how known contributors, such as those collected with scanner metrology, influence the on-product overlay budget. The result is a sufficiently trained function that can approximate overlay for all wafers exposed with the lithography system. As a real-world application, inline metrology can be used to measure overlay for a few wafers while the trained function approximates overlay vector maps for the entire lot of wafers. With the approximated overlay vector maps for all wafers coming off the track, a process engineer can redirect wafers or lots with overlay signatures outside the standard population to offline metrology for excursion validation. With this added flexibility, engineers will be given more opportunities to catch wafers that need to be reworked, resulting in improved yield. The quality of the corrections derived from measured overlay metrology feedback can be improved by using the approximated overlay to trigger which wafers should or shouldn't be measured inline. As a development or integration engineer, the approximated overlay can be used to gain insight into lots and wafers used for design of experiments (DOE) troubleshooting. In this paper we will present the results of a case study that follows the machine learning function approximation approach to data analysis, with production overlay measured on an inline metrology system at SK hynix.
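
    A hedged sketch of the function-approximation idea: a small neural-network regressor is trained to map per-wafer scanner metrology channels to overlay, then scored on held-out wafers. The twelve input channels and the linear ground truth below are synthetic placeholders, not scanner data:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(42)
      X = rng.normal(size=(2000, 12))              # synthetic per-wafer scanner metrology channels
      w = rng.normal(size=12)
      y = X @ w + 0.1 * rng.normal(size=2000)      # synthetic per-wafer overlay response

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
      net.fit(X_tr, y_tr)
      print("R^2 on held-out wafers:", net.score(X_te, y_te))
      # wafers whose approximated overlay falls outside the population would be
      # routed to inline/offline metrology for excursion validation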

  18. Machine Learning Consensus Scoring Improves Performance Across Targets in Structure-Based Virtual Screening.

    Science.gov (United States)

    Ericksen, Spencer S; Wu, Haozhen; Zhang, Huikun; Michael, Lauren A; Newton, Michael A; Hoffmann, F Michael; Wildman, Scott A

    2017-07-24

    In structure-based virtual screening, compound ranking through a consensus of scores from a variety of docking programs or scoring functions, rather than ranking by scores from a single program, provides better predictive performance and reduces target performance variability. Here we compare traditional consensus scoring methods with a novel, unsupervised gradient boosting approach. We also observed increased score variation among active ligands and developed a statistical mixture model consensus score based on combining score means and variances. To evaluate performance, we used the common performance metrics ROCAUC and EF1 on 21 benchmark targets from DUD-E. Traditional consensus methods, such as taking the mean of quantile normalized docking scores, outperformed individual docking methods and are more robust to target variation. The mixture model and gradient boosting provided further improvements over the traditional consensus methods. These methods are readily applicable to new targets in academic research and overcome the potentially poor performance of using a single docking method on a new target.
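
    The traditional baseline named here - taking the mean of quantile-normalized docking scores - fits in a few lines. In this sketch the three score arrays are synthetic, mapping each program's scores to empirical quantiles is used as one simple way to put them on a common scale, and lower scores are assumed to be better:

      import numpy as np
      from scipy.stats import rankdata

      def to_quantiles(scores):
          # empirical quantiles in (0, 1): a simple common scale for heterogeneous scores
          return (rankdata(scores) - 0.5) / len(scores)

      rng = np.random.default_rng(7)
      raw = {                                    # synthetic scores from three docking programs
          "progA": rng.normal(size=1000),
          "progB": rng.gumbel(size=1000),
          "progC": rng.normal(loc=1.0, size=1000),
      }
      consensus = np.mean([to_quantiles(s) for s in raw.values()], axis=0)
      top_hits = np.argsort(consensus)[:50]      # lowest consensus first, assuming lower = better
      print(top_hits[:10])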

  19. Virtualize Me!

    Science.gov (United States)

    Waters, John K.

    2009-01-01

    John Abdelmalak, director of technology for the School District of the Chathams, was pretty sure it was time to jump on the virtualization bandwagon last year when he invited Dell to conduct a readiness assessment of his district's servers. When he saw just how little of their capacity was being used, he lost all doubt. Abdelmalak is one of many…

  20. A Heuristic Placement Selection of Live Virtual Machine Migration for Energy-Saving in Cloud Computing Environment

    Science.gov (United States)

    Zhao, Jia; Hu, Liang; Ding, Yan; Xu, Gaochao; Hu, Ming

    2014-01-01

    The field of live VM (virtual machine) migration has been a hotspot problem in green cloud computing. The live VM migration problem divides into two research aspects: live VM migration mechanism and live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach called PS-ES is presented. Its main idea includes two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other is that it uses probability theory and mathematical statistics, and once again utilizes the SA idea, to process the data obtained from the improved PSO-based process and derive the final solution. The whole approach thus achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with random migration and optimal migration. As a result, the proposed PS-ES approach can make the results of live VM migration events more effective and valuable. PMID:25251339
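
    PS-ES itself couples a PSO variant with SA-style acceptance and a statistical post-processing step; reproducing it faithfully is out of scope here. The sketch below shows only the simulated-annealing half on a toy VM-placement problem, with an assumed idle-plus-linear host power model and made-up wattages:

      import math
      import random

      random.seed(0)
      N_VMS, N_HOSTS = 20, 6
      load = [random.uniform(0.05, 0.35) for _ in range(N_VMS)]   # CPU demand per VM

      def power(assign):
          # assumed model: each powered-on host draws 100 W idle plus 100 W at full load;
          # a large penalty forbids overloading a host
          util = [0.0] * N_HOSTS
          for vm, host in enumerate(assign):
              util[host] += load[vm]
          return (sum(100 + 100 * u for u in util if u > 0)
                  + 1e6 * sum(max(0.0, u - 1.0) for u in util))

      current = [random.randrange(N_HOSTS) for _ in range(N_VMS)]
      best, temp = current[:], 50.0
      for _ in range(5000):
          cand = current[:]
          cand[random.randrange(N_VMS)] = random.randrange(N_HOSTS)  # move one VM
          delta = power(cand) - power(current)
          if delta < 0 or random.random() < math.exp(-delta / temp):
              current = cand                     # SA acceptance: sometimes take uphill moves
              if power(current) < power(best):
                  best = current[:]
          temp *= 0.999                          # cooling schedule
      print("estimated farm power (W):", round(power(best), 1))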

  1. On the Parallel Elliptic Single/Multigrid Solutions about Aligned and Nonaligned Bodies Using the Virtual Machine for Multiprocessors

    Directory of Open Access Journals (Sweden)

    A. Averbuch

    1994-01-01

    Full Text Available Parallel elliptic single/multigrid solutions around an aligned and nonaligned body are presented and implemented on two multi-user and single-user shared memory multiprocessors (Sequent Symmetry and MOS) and on a distributed memory multiprocessor (a Transputer network). Our parallel implementation uses the Virtual Machine for Multi-Processors (VMMP), a software package that provides a coherent set of services for explicitly parallel application programs running on diverse multiple instruction multiple data (MIMD) multiprocessors, both shared memory and message passing. VMMP is intended to simplify parallel program writing and to promote portable and efficient programming. Furthermore, it ensures high portability of application programs by implementing the same services on all target multiprocessors. The performance of our algorithm is investigated in detail. It is seen to fit well the above architectures when the number of processors is less than the maximal number of grid points along the axes. In general, the efficiency in the nonaligned case is higher than in the aligned case. Alignment overhead is observed to be up to 200% in the shared-memory case and up to 65% in the message-passing case. We have demonstrated that when using VMMP, the portability of the algorithms is straightforward and efficient.

  2. Detection of Stress Levels from Biosignals Measured in Virtual Reality Environments Using a Kernel-Based Extreme Learning Machine.

    Science.gov (United States)

    Cho, Dongrae; Ham, Jinsil; Oh, Jooyoung; Park, Jeanho; Kim, Sayup; Lee, Nak-Kyu; Lee, Boreom

    2017-10-24

    Virtual reality (VR) is a computer technique that creates an artificial environment composed of realistic images, sounds, and other sensations. Many researchers have used VR devices to generate various stimuli, and have utilized them to perform experiments or to provide treatment. In this study, the participants performed mental tasks using a VR device while physiological signals were measured: a photoplethysmogram (PPG), electrodermal activity (EDA), and skin temperature (SKT). In general, stress is an important factor that can influence the autonomic nervous system (ANS). Heart-rate variability (HRV) is known to be related to ANS activity, so we used an HRV derived from the PPG peak interval. In addition, the peak characteristics of the skin conductance (SC) from EDA and SKT variation can also reflect ANS activity; we utilized them as well. Then, we applied a kernel-based extreme learning machine (K-ELM) to classify the stress levels induced by the VR task, reflecting five different stress situations: baseline, mild stress, moderate stress, severe stress, and recovery. Twelve healthy subjects voluntarily participated in the study. Three physiological signals were measured in the stress environment generated by the VR device. As a result, the average classification accuracy was over 95% using K-ELM and the integrated feature set (IT = HRV + SC + SKT). In addition, the proposed algorithm can be embedded in a microcontroller chip, since the K-ELM algorithm has a very short computation time. Therefore, a compact wearable device classifying stress levels from physiological signals can be developed.
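
    One common form of the K-ELM classifier named above has a simple closed form: with kernel matrix Omega and one-hot targets T, the output weights are beta = (Omega + I/C)^-1 T. A minimal NumPy sketch, using random stand-ins for the HRV/SC/SKT features and the five stress-level labels:

      import numpy as np

      def rbf(A, B, gamma=0.5):
          d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
          return np.exp(-gamma * d)

      rng = np.random.default_rng(3)
      X = rng.normal(size=(300, 6))             # stand-ins for HRV + SC + SKT features
      y = rng.integers(0, 5, size=300)          # five stress levels
      T = np.eye(5)[y]                          # one-hot targets

      C = 10.0                                  # regularization constant
      beta = np.linalg.solve(rbf(X, X) + np.eye(len(X)) / C, T)   # (Omega + I/C)^-1 T

      def predict(X_new):
          return (rbf(X_new, X) @ beta).argmax(axis=1)

      print("training accuracy:", (predict(X) == y).mean())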

  3. A heuristic placement selection of live virtual machine migration for energy-saving in cloud computing environment.

    Science.gov (United States)

    Zhao, Jia; Hu, Liang; Ding, Yan; Xu, Gaochao; Hu, Ming

    2014-01-01

    The field of live VM (virtual machine) migration has been a hotspot problem in green cloud computing. The live VM migration problem divides into two research aspects: live VM migration mechanism and live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach called PS-ES is presented. Its main idea includes two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other is that it uses probability theory and mathematical statistics, and once again utilizes the SA idea, to process the data obtained from the improved PSO-based process and derive the final solution. The whole approach thus achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with random migration and optimal migration. As a result, the proposed PS-ES approach can make the results of live VM migration events more effective and valuable.

  4. A heuristic placement selection of live virtual machine migration for energy-saving in cloud computing environment.

    Directory of Open Access Journals (Sweden)

    Jia Zhao

    Full Text Available The field of live VM (virtual machine) migration has been a hotspot problem in green cloud computing. The live VM migration problem divides into two research aspects: live VM migration mechanism and live VM migration policy. Meanwhile, with the development of energy-aware computing, we have focused on the VM placement selection of live migration, namely the live VM migration policy for energy saving. In this paper, a novel heuristic approach called PS-ES is presented. Its main idea includes two parts. One is that it combines the PSO (particle swarm optimization) idea with the SA (simulated annealing) idea to achieve an improved PSO-based approach with better global search ability. The other is that it uses probability theory and mathematical statistics, and once again utilizes the SA idea, to process the data obtained from the improved PSO-based process and derive the final solution. The whole approach thus achieves a long-term optimization for energy saving, as it considers not only the optimization of the current problem scenario but also that of future ones. The experimental results demonstrate that PS-ES evidently reduces the total incremental energy consumption and better protects the performance of VM running and migration compared with random migration and optimal migration. As a result, the proposed PS-ES approach can make the results of live VM migration events more effective and valuable.

  5. Virtualization for the LHCb Online system

    International Nuclear Information System (INIS)

    Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko

    2011-01-01

    Virtualization has long been advertised by the IT industry as a way to cut costs, optimise resource usage and manage the complexity of large data centres. The great number and huge heterogeneity of hardware, both industrial and custom-made, has up to now led to reluctance in adopting virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and manageability of the whole system. We have evaluated the available hypervisors / virtualization solutions and find that the Microsoft HV technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with a special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing - the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, to the needs of virtual machines, which allows us to migrate services on physical machines smoothly to the virtualized infrastructure. The procedures for migration are also described. In the final part of the document we describe our recent R&D activities aimed at replacing the SAN backend for virtualization with a cheaper iSCSI solution - this will allow us to move all servers and related services to the virtualized infrastructure, except the ones doing hardware control via non-commodity PCI plug-in cards.

  6. The influence of the negative-positive ratio and screening database size on the performance of machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Bojarski, Andrzej J

    2017-01-01

    The machine learning-based virtual screening of molecular databases is a commonly used approach to identify hits. However, many aspects associated with training predictive models can influence the final performance and, consequently, the number of hits found. Thus, we performed a systematic study of the simultaneous influence of the proportion of negatives to positives in the testing set, the size of screening databases and the type of molecular representations on the effectiveness of classification. The results obtained for eight protein targets, five machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest), two types of molecular fingerprints (MACCS and CDK FP) and eight screening databases with different numbers of molecules confirmed our previous findings that increases in the ratio of negative to positive training instances greatly influenced most of the investigated parameters of the ML methods in simulated virtual screening experiments. However, the performance of screening was shown to also be highly dependent on the molecular library dimension. Generally, with the increasing size of the screened database, the optimal training ratio also increased, and this ratio can be rationalized using the proposed cost-effectiveness threshold approach. To increase the performance of machine learning-based virtual screening, the training set should be constructed in a way that considers the size of the screening database.
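
    The central experiment - varying the negative:positive training ratio and watching precision and recall move on a large, hit-poor screening set - can be miniaturized as follows. The Gaussian "actives" and "inactives" and the ratios are invented for illustration:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import precision_score, recall_score

      rng = np.random.default_rng(1)

      def sample(n_pos, n_neg):
          X_pos = rng.normal(loc=0.6, size=(n_pos, 32))   # invented "active" distribution
          X_neg = rng.normal(loc=0.0, size=(n_neg, 32))   # invented "inactive" distribution
          return np.vstack([X_pos, X_neg]), np.r_[np.ones(n_pos), np.zeros(n_neg)]

      X_test, y_test = sample(100, 10_000)      # screening library: actives are rare
      for ratio in (1, 5, 20):                  # negatives per positive in the training set
          X_tr, y_tr = sample(200, 200 * ratio)
          clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
          pred = clf.predict(X_test)
          print(f"neg:pos = {ratio}:1  precision = {precision_score(y_test, pred):.2f}"
                f"  recall = {recall_score(y_test, pred):.2f}")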

  7. Linux Server Security

    CERN Document Server

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  8. Web Server Embedded System

    Directory of Open Access Journals (Sweden)

    Adharul Muttaqin

    2014-07-01

    Full Text Available Embedded systems are currently of particular concern in computer technology; various Linux operating systems and web servers have been prepared to support embedded systems, and one of the applications that can operate on an embedded system is a web server. The selection of a web server for the embedded environment is still rarely examined, so this study was conducted with a focus on two web server applications whose main feature is offering "lightness" in CPU and memory consumption: Light HTTPD and Tiny HTTPD. Using the thread parameters (users, ramp-up periods, and loop count) in a stress test of the embedded system, this study offers an answer to which of Light HTTPD and Tiny HTTPD better fits embedded use on a BeagleBoard, judged by CPU and memory consumption. The results show that, in terms of CPU consumption on the BeagleBoard embedded system, Light HTTPD is recommended over Tiny HTTPD, because there is a very significant difference in CPU load between the two web services. Keywords: embedded system, web server

  9. Learning Zimbra Server essentials

    CERN Document Server

    Kouka, Abdelmonam

    2013-01-01

    A standard tutorial approach that guides readers through all of the intricacies of the Zimbra Server. If you are any kind of Zimbra user, from newbie to expert, this book will be useful for you and will show you how to set up a Zimbra server. If you are an IT administrator or consultant who is exploring the idea of adopting, or has already adopted, Zimbra as your mail server, then this book is for you. No prior knowledge of Zimbra is required.

  10. Energy Efficiency in Small Server Rooms: Field Surveys and Findings

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Iris; Greenberg, Steve; Mahdavi, Roozbeh; Brown, Richard; Tschudi, William

    2014-08-11

    Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, in what are commonly referred to as small server rooms, which comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that consider energy efficiency a goal to minimize business operating costs, small server rooms typically are not similarly motivated. They are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions, and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- to low-cost measures, such as raising cooling set points and better airflow management, to more involved but cost-effective measures, including server consolidation and virtualization, and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers, rather than server rooms. Backup power requirements and IT and cooling efficiency should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation and the implementation of energy efficiency measures in small server rooms.

  11. Server hardware trends

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk will cover the status of current and upcoming offerings in server platforms, focusing mainly on the processing and storage parts. Alternative solutions like Open Compute (OCP) will be covered briefly.

  12. Locating Hidden Servers

    National Research Council Canada - National Science Library

    Oeverlier, Lasse; Syverson, Paul F

    2006-01-01

    .... Announced properties include server resistance to distributed DoS. Both the EFF and Reporters Without Borders have issued guides that describe using hidden services via Tor to protect the safety of dissidents as well as to resist censorship...

  13. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury.

    Science.gov (United States)

    Fong, Kenneth N K; Chow, Kathy Y Y; Chan, Bianca C H; Lam, Kino C K; Lee, Jeff C K; Li, Teresa H Y; Yan, Elaine W H; Wong, Asta T Y

    2010-04-30

    This study aimed to examine the usability of a newly designed virtual reality (VR) environment simulating the operation of an automated teller machine (ATM) for assessment and training. Part I involved evaluation of the sensitivity and specificity of a non-immersive VR program simulating an ATM (VR-ATM). Part II consisted of a clinical trial providing baseline and post-intervention outcome assessments. A rehabilitation hospital and university-based teaching facilities were used as the setting. A total of 24 persons in the community with acquired brain injury (ABI)--14 in Part I and 10 in Part II--made up the participants in the study. In Part I, participants were randomized to receive instruction in either an "early" or a "late" VR-ATM program and were assessed using both the VR program and a real ATM. In Part II, participants were assigned in matched pairs to either VR training or computer-assisted instruction (CAI) teaching programs for six 1-hour sessions over a three-week period. Two behavioral checklists based on activity analysis of cash withdrawals and money transfers using a real ATM were used to measure average reaction time, percentage of incorrect responses, level of cues required, and time spent as generated by the VR system; also used was the Neurobehavioral Cognitive Status Examination. The sensitivity of the VR-ATM was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. For cash withdrawals, the average reaction time of the VR group was significantly shorter than that of the CAI group (p = 0.021). We found no significant differences in average reaction time or accuracy between groups for money transfers, although we did note positive improvement for the VR-ATM group. We found the VR-ATM to be usable as a valid assessment and training tool for relearning the use of ATMs prior to real-life practice in persons with ABI.

  14. Client Server design and implementation issues in the Accelerator Control System environment

    International Nuclear Information System (INIS)

    Sathe, S.; Hoff, L.; Clifford, T.

    1995-01-01

    In distributed system communication software design, the Client Server model has been widely used. This paper addresses the design and implementation issues of such a model, particularly when used in Accelerator Control Systems. In designing the Client Server model, one needs to decide how the services will be defined for a server, what types of messages the server will respond to, which data formats will be used for the network transactions, and how the server will be located by the client. Special consideration needs to be given to error handling, both on the server and client side. Since the server is usually located on a machine other than the client, easy and informative server diagnostic capability is required. The higher level of abstraction provided by the Client Server model simplifies application writing; however, fine control over network parameters is essential to improve performance. The above design issues and implementation trade-offs are discussed in this paper.

  15. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays often work with more than one cloud provider. By spreading cloud deployment across multiple service providers, an enterprise creates room for competitive pricing that minimizes the burden on its spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered, with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and then combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms, and explore the steps in the assessment of server reliability.
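
    The layered composition can be made concrete with an assumption the abstract does not spell out: if the application, virtualization and server layers must all be up and fail independently, the layer reliabilities simply multiply (a series system). A three-line sketch with invented numbers:

      # if the three layers fail independently and all must be up, reliabilities
      # multiply; the values below are assumptions, not the paper's data
      r_app, r_virt, r_server = 0.995, 0.990, 0.980
      r_system = r_app * r_virt * r_server
      print(f"multi-cloud application reliability: {r_system:.4f}")   # -> 0.9653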

  16. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury

    Directory of Open Access Journals (Sweden)

    Li Teresa HY

    2010-04-01

    Full Text Available Abstract Objective This study aimed to examine the usability of a newly designed virtual reality (VR) environment simulating the operation of an automated teller machine (ATM) for assessment and training. Design Part I involved evaluation of the sensitivity and specificity of a non-immersive VR program simulating an ATM (VR-ATM). Part II consisted of a clinical trial providing baseline and post-intervention outcome assessments. Setting A rehabilitation hospital and university-based teaching facilities were used as the setting. Participants A total of 24 persons in the community with acquired brain injury (ABI) - 14 in Part I and 10 in Part II - made up the participants in the study. Interventions In Part I, participants were randomized to receive instruction in either an "early" or a "late" VR-ATM program and were assessed using both the VR program and a real ATM. In Part II, participants were assigned in matched pairs to either VR training or computer-assisted instruction (CAI) teaching programs for six 1-hour sessions over a three-week period. Outcome Measures Two behavioral checklists based on activity analysis of cash withdrawals and money transfers using a real ATM were used to measure average reaction time, percentage of incorrect responses, level of cues required, and time spent as generated by the VR system; also used was the Neurobehavioral Cognitive Status Examination. Results The sensitivity of the VR-ATM was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. For cash withdrawals, the average reaction time of the VR group was significantly shorter than that of the CAI group (p = 0.021). We found no significant differences in average reaction time or accuracy between groups for money transfers, although we did note positive improvement for the VR-ATM group. Conclusion We found the VR-ATM to be usable as a valid assessment and training tool for relearning the use of ATMs prior to real

  17. Windows Terminal Servers Orchestration

    Science.gov (United States)

    Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim

    2017-10-01

    Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and the Microsoft System Center suite enables automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration not only reduces the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes, as well as catering for workload peaks.

  18. The RNAsnp web server

    DEFF Research Database (Denmark)

    Radhakrishnan, Sabarinathan; Tafer, Hakim; Seemann, Ernst Stefan

    2013-01-01

    , are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected...... to a local mirror of the UCSC genome browser database that enables the users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at: http://rth.dk/resources/rnasnp/....

  19. Machine Learning Approaches Toward Building Predictive Models for Small Molecule Modulators of miRNA and Its Utility in Virtual Screening of Molecular Databases.

    Science.gov (United States)

    Periwal, Vinita; Scaria, Vinod

    2017-01-01

    The ubiquitous role of microRNAs (miRNAs) in a number of pathological processes has suggested that they could act as potential drug targets. RNA-binding small molecules offer an attractive means for modulating miRNA function. The availability of bioassay datasets for a variety of biological assays and molecules in the public domain provides a new opportunity to use them to create models and further utilize them in in silico virtual screening approaches to prioritize or assign potential functions to small molecules. Here, we describe a computational strategy based on machine learning for creating predictive models from high-throughput biological screens for virtual screening of small molecules with the potential to inhibit microRNAs. Such models could potentially be used for computational prioritization of small molecules before performing high-throughput biological assays.

  20. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    OpenAIRE

    Yukun Huang; Rong Chen; Jingbo Wei; Xilong Pei; Jing Cao; Prem Prakash Jayaraman; Rajiv Ranjan

    2014-01-01

    JNI on the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI on the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant co...

  1. Professional SQL Server 2005 administration

    CERN Document Server

    Knight, Brian; Snyder, Wayne; Armand, Jean-Claude; LoForte, Ross; Ji, Haidong

    2007-01-01

    SQL Server 2005 is the largest leap forward for SQL Server since its inception. With this update comes new features that will challenge even the most experienced SQL Server DBAs. Written by a team of some of the best SQL Server experts in the industry, this comprehensive tutorial shows you how to navigate the vastly changed landscape of the SQL Server administration. Drawing on their own first-hand experiences to offer you best practices, unique tips and tricks, and useful workarounds, the authors help you handle even the most difficult SQL Server 2005 administration issues, including blockin

  2. A smarter way to search, share and utilize open-spatial online data for energy R&D - Custom machine learning and GIS tools in U.S. DOE's virtual data library & laboratory, EDX

    Science.gov (United States)

    Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.

    2017-12-01

    As spatial datasets become increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study, the computing team developed a custom machine-learning, big-data computing tool designed to parse the web and return priority datasets to appropriate servers in order to develop an open-source global oil and gas infrastructure database. The results of this spatial smart-search approach were validated against expert-driven, manual search results, which required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, ftp sites and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multivariate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions from EDX with these spatio-temporal models has culminated in an integrated web-based decision-support tool. This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental

  3. A Virtual Good Idea

    Science.gov (United States)

    Bolch, Matt

    2009-01-01

    School districts across the country have always had to do more with less. Funding goes only so far, leaving administrators and IT staff to find innovative ways to save money while maintaining a high level of academic quality. Creating virtual servers accomplishes both tasks, district technology personnel say. Virtual environments not only allow…

  4. 10 Myths of Virtualization

    Science.gov (United States)

    Schaffhauser, Dian

    2012-01-01

    Half of servers in higher ed are virtualized. But that number's not high enough for Link Alander, interim vice chancellor and CIO at the Lone Star College System (Texas). He aspires to see 100 percent of the system's infrastructure requirements delivered as IT services from its own virtualized data centers or other cloud-based operators. Back in…

  5. Virtual reality for spherical images

    Science.gov (United States)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

    This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows the creation of a virtual reality 360 video player using standard OpenGL ES rendering methods. The framework provides network methods for connecting to a web server that acts as the application's resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to create an event-driven process for rendering additional content based on the video timestamp and the virtual reality head point of view.

  6. UNIX secure server : a free, secure, and functional server example

    OpenAIRE

    Sastre, Hugo

    2016-01-01

    The purpose of this thesis work was to introduce a UNIX server as a personal server but also as a starting point for investigation and development at a professional level. The objective of this thesis was to build a secure server providing not only an FTP server but also an HTTP server and a cloud system for remote backups. OpenBSD was used as the operating system. OpenBSD is a UNIX-like operating system made by hackers for hackers. The difference with other systems that might partially provid...

  7. KNBD: A Remote Kernel Block Server for Linux

    Science.gov (United States)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  8. Microsoft SQL Server 2012 bible

    CERN Document Server

    Jorgensen, Adam; LeBlanc, Patrick; Cherry, Denny; Nelson, Aaron

    2012-01-01

    Harness the powerful new SQL Server 2012 Microsoft SQL Server 2012 is the most significant update to this product since 2005, and it may change how database administrators and developers perform many aspects of their jobs. If you're a database administrator or developer, Microsoft SQL Server 2012 Bible teaches you everything you need to take full advantage of this major release. This detailed guide not only covers all the new features of SQL Server 2012, it also shows you step by step how to develop top-notch SQL Server databases and new data connections and keep your databases performing at p

  9. Windows Home Server users guide

    CERN Document Server

    Edney, Andrew

    2008-01-01

    Windows Home Server brings the idea of centralized storage, backup and computer management out of the enterprise and into the home. Windows Home Server is built for people with multiple computers at home and helps to synchronize them, keep them updated, stream media between them, and back them up centrally. Built on a similar foundation as the Microsoft server operating products, it's essentially Small Business Server for the home.This book details how to install, configure, and use Windows Home Server and explains how to connect to and manage different clients such as Windows XP, Windows Vist

  10. Mastering Microsoft Exchange Server 2010

    CERN Document Server

    McBee, Jim

    2010-01-01

    A top-selling guide to Exchange Server-now fully updated for Exchange Server 2010. Keep your Microsoft messaging system up to date and protected with the very newest version, Exchange Server 2010, and this comprehensive guide. Whether you're upgrading from Exchange Server 2007 SP1 or earlier, installing for the first time, or migrating from another system, this step-by-step guide provides the hands-on instruction, practical application, and real-world advice you need.: Explains Microsoft Exchange Server 2010, the latest release of Microsoft's messaging system that protects against spam and vir

  11. MLViS: A Web Tool for Machine Learning-Based Virtual Screening in Early-Phase of Drug Discovery and Development.

    Science.gov (United States)

    Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer

    2015-01-01

    Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like from nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool that can classify molecules as drug-like or nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble and other algorithms. To construct this tool, the performances of twenty-three different machine learning algorithms were first compared on ten different measures; then, the ten best-performing algorithms were selected based on principal component and hierarchical cluster analysis results. Besides classification, this application also has the ability to create heat maps and dendrograms for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/.
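
    The model-selection step described - scoring twenty-three algorithms on ten measures, then using principal components and hierarchical clustering to keep the best performers - can be sketched as below. The performance table is random, and picking the cluster with the higher first-principal-component score is one plausible reading of the selection rule, not the authors' exact procedure:

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      perf = rng.uniform(0.5, 1.0, size=(23, 10))   # 23 algorithms x 10 performance measures (synthetic)
      scores = PCA(n_components=1).fit_transform(perf).ravel()
      if np.corrcoef(scores, perf.mean(axis=1))[0, 1] < 0:
          scores = -scores                          # orient PC1 so that larger = better overall
      groups = fcluster(linkage(perf, method="ward"), t=2, criterion="maxclust")
      best_cluster = max(set(groups), key=lambda g: scores[groups == g].mean())
      print("candidate best algorithms:", np.where(groups == best_cluster)[0])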

  12. Usage of Thin-Client/Server Architecture in Computer Aided Education

    Science.gov (United States)

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  13. Implementing VMware vCenter Server

    CERN Document Server

    Kuminsky, Konstantin

    2013-01-01

    This book is a practical, hands-on guide that will help you learn everything you need to know to administer your environment with VMware vCenter Server. Throughout the book, there are best practices and useful tips and tricks which can be used for day-to-day tasks.If you are an administrator or a technician starting with VMware, with little or no knowledge of virtualization products, this book is ideal for you. Even if you are an IT professional looking to expand your existing environment, you will be able to use this book to help you improve the management of these environments. IT managers w

  14. A virtual network computer's optical storage virtualization scheme

    Science.gov (United States)

    Wang, Jianzong; Hu, Huaixiang; Wan, Jiguang; Wang, Peng

    2008-12-01

    In this paper, we present the architecture and implementation of a virtual network computer's (VNC) optical storage virtualization scheme called VOSV. Its task is to manage the mapping of virtual optical storage to physical optical storage, a technique known as optical storage virtualization. The design of VOSV targets the optical storage resources of different clients and servers that exhibit high read-sharing patterns. VOSV uses several schemes, such as a two-level cache mechanism, a VNC-server embedded module and the iSCSI protocol, to improve performance. The results measured on the prototype are encouraging, indicating that VOSV provides high I/O performance.

  15. KFC Server: interactive forecasting of protein interaction hot spots.

    Science.gov (United States)

    Darnell, Steven J; LeGault, Laura; Mitchell, Julie C

    2008-07-01

    The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model - a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user-submitted protein-protein or protein-DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots, and predicts if the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer able to quickly highlight predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org.

  16. SQL Server Integration Services

    CERN Document Server

    Hamilton, Bill

    2007-01-01

    SQL Server 2005 Integration Services (SSIS) lets you build high-performance data integration solutions. SSIS solutions wrap sophisticated workflows around tasks that extract, transform, and load (ETL) data from and to a wide variety of data sources. This Short Cut begins with an overview of key SSIS concepts, capabilities, standard workflow and ETL elements, the development environment, execution, deployment, and migration from Data Transformation Services (DTS). Next, you'll see how to apply the concepts you've learned through hands-on examples of common integration scenarios. Once you've

  17. Research on Web-Based Networked Virtual Instrument System

    International Nuclear Information System (INIS)

    Tang, B P; Xu, C; He, Q Y; Lu, D

    2006-01-01

    The web-based networked virtual instrument (NVI) system is designed using object-oriented methodology (OOM). The architecture of the NVI system consists of two major parts: client-web server interaction and instrument server-virtual instrument (VI) communication. The web server communicates with the instrument server and the clients connected to it over the Internet, and it handles identifying users, managing the connections between users and the instrument server, and adding, removing and configuring VI information. The instrument server handles setting the parameters of a VI, checking the VI's status and saving the VI's status information to the database. The NVI system is required to be a general-purpose measurement system that is easy to maintain, adapt and extend. Virtual instruments are connected to the instrument server, and clients can remotely configure and operate these virtual instruments. An application of the NVI system is given at the end of the paper.

  18. Mastering Microsoft Exchange Server 2013

    CERN Document Server

    Elfassy, David

    2013-01-01

    The bestselling guide to Exchange Server, fully updated for the newest version Microsoft Exchange Server 2013 is touted as a solution for lowering the total cost of ownership, whether deployed on-premises or in the cloud. Like the earlier editions, this comprehensive guide covers every aspect of installing, configuring, and managing this multifaceted collaboration system. It offers Windows systems administrators and consultants a complete tutorial and reference, ideal for anyone installing Exchange Server for the first time or those migrating from an earlier Exchange Server version.Microsoft

  19. Microsoft Windows Server Administration Essentials

    CERN Document Server

    Carpenter, Tom

    2011-01-01

    The core concepts and technologies you need to administer a Windows Server OS Administering a Windows operating system (OS) can be a difficult topic to grasp, particularly if you are new to the field of IT. This full-color resource serves as an approachable introduction to understanding how to install a server, the various roles of a server, and how server performance and maintenance impacts a network. With a special focus placed on the new Microsoft Technology Associate (MTA) certificate, the straightforward, easy-to-understand tone is ideal for anyone new to computer administration looking t

  20. Server-side Statistics Scripting in PHP

    Directory of Open Access Journals (Sweden)

    Jan de Leeuw

    1997-06-01

    Full Text Available On the UCLA Statistics WWW server there are a large number of demos and calculators that can be used in statistics teaching and research. Some of these demos require substantial amounts of computation; others mainly use graphics. These calculators and demos are implemented in various different ways, reflecting developments in WWW-based computing. As usual, one of the main choices is between doing the work on the client side (i.e. in the browser) or on the server side (i.e. on our WWW server). Obviously, client-side computation puts fewer demands on the server. On the other hand, it requires that the client download Java applets, or install plugins and/or helpers. If JavaScript is used, client-side computations will generally be slow. We also have to assume that the client is installed properly, and has the required capabilities. Requiring too much on the client side has caused browsing machines such as Netscape Communicator to grow beyond all reasonable bounds, both in size and RAM requirements. Moreover, requiring Java and JavaScript rules out such excellent browsers as Lynx or Emacs W3. For server-side computing, we can configure the server and its resources ourselves, and we need not worry about browser capabilities and configuration. Nothing needs to be downloaded, except the usual HTML pages and graphics. In the same way as on the client side, there is a scripting solution, where code is interpreted, or an object-code solution using compiled code. For server-side scripting, we use embedded languages, such as PHP/FI. The scripts in the HTML pages are interpreted by a CGI program, and the output of the CGI program is sent to the clients. Of course the CGI program is compiled, but the statistics procedures will usually be interpreted, because PHP/FI does not have the appropriate functions in its scripting language. This will tend to be slow, because embedded languages do not deal efficiently with loops and similar constructs. Thus a first

  1. Windows server cookbook for Windows server 2003 and Windows 2000

    CERN Document Server

    Allen, Robbie

    2005-01-01

    This practical reference guide offers hundreds of useful tasks for managing Windows 2000 and Windows Server 2003, Microsoft's latest server. These concise, on-the-job solutions to common problems are certain to save you many hours of time searching through Microsoft documentation. Topics include files, event logs, security, DHCP, DNS, backup/restore, and more

  2. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.

    Science.gov (United States)

    Karimi, Kamran; Vize, Peter D

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.
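
    The per-service decoupling described above can be inspected on any libvirt-managed virtualization host; a sketch assuming the libvirt Python bindings, with hypothetical domain names standing in for Xenbase's actual virtual machines.

        import libvirt

        # Enumerate the per-service VMs on a libvirt-managed host.
        conn = libvirt.open("qemu:///system")
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            print(f"{dom.name():20s} {state}")  # e.g. gbrowse-vm, blast-vm, db-vm
        conn.close()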

  3. Virtualization in control system environment

    International Nuclear Information System (INIS)

    Shen, L.R.; Liu, D.K.; Wan, T.M.

    2012-01-01

    In a large-scale distributed control system, many common services compose an environment for the entire control system, such as servers for the common software base library, application servers, archive servers and so on. This paper describes a virtualization realization of a control system environment, covering virtualization of the servers, storage, network and applications of the control system. With a virtualization instance of the EPICS-based control system environment built with VMware vSphere v4, we tested the whole functionality of this virtualized environment in the SSRF control system, including the common servers for NFS, NIS, NTP, booting and the EPICS base and extension library tools. We also applied virtualization to application servers such as the Archive, Alarm and EPICS gateway servers and all of the network-based IOCs. In particular, we successfully tested high availability and VMotion for an EPICS asynchronous IOC under the different VLAN configurations of the current SSRF control system network. (authors)

  4. Optimal control of a server farm

    NARCIS (Netherlands)

    Adan, I.J.B.F.; Kulkarni, V.G.; Wijk, van A.C.C.

    2013-01-01

    We consider a server farm consisting of ample exponential servers, that serve a Poisson stream of arriving customers. Each server can be either busy, idle or off. An arriving customer will immediately occupy an idle server, if there is one, and otherwise, an off server will be turned on and start

  5. Server farms with setup costs

    NARCIS (Netherlands)

    Gandhi, A.; Harchol-Balter, M.; Adan, I.J.B.F.

    2010-01-01

    In this paper we consider server farms with a setup cost. This model is common in manufacturing systems and data centers, where there is a cost to turn servers on. Setup costs always take the form of a time delay, and sometimes there is additionally a power penalty, as in the case of data centers.
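
    The effect of the setup delay in the two models above can be made concrete with a small simulation; the sketch below is a single-server simplification with illustrative rates, in which the server switches off whenever it idles, and a job arriving to an empty system must wait for an exponential setup time before service begins.

        import random

        def simulate_mm1_setup(lam, mu, alpha, n_jobs=200_000, seed=1):
            # M/M/1 with switch-off: a job that finds the system empty pays
            # an Exp(alpha) setup time before its service can start.
            rng = random.Random(seed)
            t = 0.0          # arrival time of the current job
            d_prev = 0.0     # departure time of the previous job
            total_resp = 0.0
            for _ in range(n_jobs):
                t += rng.expovariate(lam)
                if t >= d_prev:                       # system empty: pay setup
                    start = t + rng.expovariate(alpha)
                else:                                 # server busy: queue
                    start = d_prev
                d_prev = start + rng.expovariate(mu)
                total_resp += d_prev - t
            return total_resp / n_jobs

        print(simulate_mm1_setup(lam=0.5, mu=1.0, alpha=2.0))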

  6. Identification of human flap endonuclease 1 (FEN1) inhibitors using a machine learning based consensus virtual screening.

    Science.gov (United States)

    Deshmukh, Amit Laxmikant; Chandra, Sharat; Singh, Deependra Kumar; Siddiqi, Mohammad Imran; Banerjee, Dibyendu

    2017-07-25

    Human flap endonuclease 1 (FEN1) is an enzyme that is indispensable for DNA replication and repair processes, and inhibition of its flap cleavage activity results in increased cellular sensitivity to DNA damaging agents (cisplatin, temozolomide, MMS, etc.), with the potential to improve cancer prognosis. Reports of the high expression levels of FEN1 in several cancer cells support the idea that FEN1 inhibitors may target cancer cells with minimum side effects on normal cells. In this study, we used large publicly available, high-throughput screening data of small molecule compounds targeted against FEN1. Two machine learning algorithms, Support Vector Machine (SVM) and Random Forest (RF), were utilized to generate four classification models from a large PubChem bioassay dataset containing probable FEN1 inhibitors and non-inhibitors. We also investigated the influence of randomly selected ZINC-database compounds as negative data on the outcome of classification modelling. The results show that the SVM model with inactive compounds was superior to RF, with a Matthews correlation coefficient (MCC) of 0.67 for the test set. A Maybridge database containing approximately 53 000 compounds was screened and the five top-ranking compounds were selected for enzyme and cell-based in vitro screening. The compound JFD00950 was identified as a novel FEN1 inhibitor with in vitro inhibition of flap cleavage activity as well as cytotoxic activity against a colon cancer cell line, DLD-1.
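
    A sketch of the classification workflow described above using scikit-learn, with synthetic stand-in data (real inputs would be fingerprint vectors of the PubChem bioassay compounds, labelled inhibitor or non-inhibitor):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import matthews_corrcoef
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # Toy stand-in for fingerprint features of bioassay compounds.
        rng = np.random.default_rng(0)
        X = rng.random((1000, 128))                 # 1000 compounds, 128 features
        y = (X[:, 0] + 0.3 * rng.standard_normal(1000) > 0.5).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        for model in (SVC(kernel="rbf"),
                      RandomForestClassifier(n_estimators=300, random_state=0)):
            model.fit(X_tr, y_tr)
            mcc = matthews_corrcoef(y_te, model.predict(X_te))
            print(type(model).__name__, round(mcc, 3))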

  7. Designing communication and remote controlling of virtual instrument network system

    Science.gov (United States)

    Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian

    2005-01-01

    In this paper, a virtual instrument network over the LAN, and ultimately remote control of virtual instruments, is realized based on virtual instruments and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. The paper introduces the structure of the LabWindows-based virtual instrument network in detail, along with the essential techniques involved: the design of the network communication application, the client/server programming model, the realization of communication between remote PCs and the server, the transfer of workstation control, and the server program. The virtual instrument network can also be connected to the wider Internet. The application of these techniques in an electronic-measurement virtual instrument network that has already been built verifies their practical value. Experiments and applications validate that this design is effective.

  8. Designing communication and remote controlling of virtual instrument network system

    International Nuclear Information System (INIS)

    Lei Lin; Wang Houjun; Zhou Xue; Zhou Wenjian

    2005-01-01

    In this paper, a virtual instrument network over the LAN, and ultimately remote control of virtual instruments, is realized based on virtual instruments and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. The paper introduces the structure of the LabWindows-based virtual instrument network in detail, along with the essential techniques involved: the design of the network communication application, the client/server programming model, the realization of communication between remote PCs and the server, the transfer of workstation control, and the server program. The virtual instrument network can also be connected to the wider Internet. The application of these techniques in an electronic-measurement virtual instrument network that has already been built verifies their practical value. Experiments and applications validate that this design is effective.
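
    The client/server split used in such an instrument network can be sketched with plain sockets; below is a minimal Python command server (the SCPI-style commands and the port are illustrative, not taken from the paper).

        import socket
        import threading

        def handle(conn):
            # Answer simple SCPI-style queries from a remote client.
            with conn, conn.makefile() as lines:
                for line in lines:
                    cmd = line.strip()
                    if cmd == "*IDN?":
                        conn.sendall(b"VirtualDMM,0.1\n")
                    elif cmd == "MEAS:VOLT?":
                        conn.sendall(b"+1.2345E+00\n")

        def serve(host="0.0.0.0", port=5025):
            # One thread per client, so several workstations can share the instrument.
            with socket.create_server((host, port)) as srv:
                while True:
                    conn, _ = srv.accept()
                    threading.Thread(target=handle, args=(conn,), daemon=True).start()

        if __name__ == "__main__":
            serve()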

  9. iRODS-Based Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    Science.gov (United States)

    Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.

    2011-12-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A virtual climate data server is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage NetCDF, HDF, and GeoTIFF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service, a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.
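
    A flavour of the iRODS collection-building described above, assuming the python-irodsclient package; the host, zone, credentials, paths and metadata below are all hypothetical.

        from irods.session import iRODSSession

        # Register a data product in the grid and attach descriptive metadata.
        with iRODSSession(host="vcds.example.org", port=1247, user="climate",
                          password="secret", zone="nccsZone") as session:
            session.data_objects.put("MOD08_D3.hdf",
                                     "/nccsZone/home/climate/MOD08_D3.hdf")
            obj = session.data_objects.get("/nccsZone/home/climate/MOD08_D3.hdf")
            obj.metadata.add("product", "MODIS Atmosphere")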

  10. CernVM Co-Pilot: a Framework for Orchestrating Virtual Machines Running Applications of LHC Experiments on the Cloud

    International Nuclear Information System (INIS)

    Harutyunyan, A; Sánchez, C Aguado; Blomer, J; Buncic, P

    2011-01-01

    CernVM Co-Pilot is a framework for the delivery and execution of the workload on remote computing resources. It consists of components which are developed to ease the integration of geographically distributed resources (such as commercial or academic computing clouds, or the machines of users participating in volunteer computing projects) into existing computing grid infrastructures. The Co-Pilot framework can also be used to build an ad-hoc computing infrastructure on top of distributed resources. In this paper we present the architecture of the Co-Pilot framework, describe how it is used to execute the jobs of the ALICE and ATLAS experiments, as well as to run the Monte-Carlo simulation application of CERN Theoretical Physics Group.

  11. NEOS Server 4.0 Administrative Guide

    OpenAIRE

    Dolan, Elizabeth D.

    2001-01-01

    The NEOS Server 4.0 provides a general Internet-based client/server as a link between users and software applications. The administrative guide covers the fundamental principles behind the operation of the NEOS Server, installation and trouble-shooting of the Server software, and implementation details of potential interest to a NEOS Server administrator. The guide also discusses making new software applications available through the Server, including areas of concern to remote solver adminis...

  12. Medical video server construction.

    Science.gov (United States)

    Dańda, Jacek; Juszkiewicz, Krzysztof; Leszczuk, Mikołaj; Loziak, Krzysztof; Papir, Zdzisław; Sikora, Marek; Watza, Rafal

    2003-01-01

    The paper discusses two implementation options for a Digital Video Library, a repository used for archiving, accessing, and browsing of video medical records. Two crucial issues to be decided on are a video compression format and a video streaming platform. The paper presents numerous decision factors that have to be taken into account. The compression formats being compared are DICOM, as a format representative for medical applications, both MPEGs, and several new formats targeted at IP networking. The comparison includes transmission rates supported, compression rates, and options for controlling the compression process. The second part of the paper presents the ISDN technique as a solution for provisioning of tele-consultation services between medical parties that are accessing resources uploaded to a digital video library. There are several backbone techniques available (like corporate LANs/WANs, leased lines or even radio/satellite links); however, the availability of network resources for hospitals was the prevailing choice criterion pointing to ISDN solutions. Another way to provide access to the Digital Video Library is based on radio frequency domain solutions. The paper describes the possibilities of both wireless and cellular networks' data transmission services to be used as a medical video server transport layer. For the cellular-network based solution two communication techniques are used: Circuit Switched Data and Packet Switched Data.

  13. Target-specific support vector machine scoring in structure-based virtual screening: computational validation, in vitro testing in kinases, and effects on lung cancer cell proliferation.

    Science.gov (United States)

    Li, Liwei; Khanna, May; Jo, Inha; Wang, Fang; Ashpole, Nicole M; Hudmon, Andy; Meroueh, Samy O

    2011-04-25

    We assess the performance of our previously reported structure-based support vector machine target-specific scoring function across 41 targets, 40 of them from the Directory of Useful Decoys (DUD). The area under the curve of receiver operating characteristic plots (ROC-AUC) revealed that scoring with SVM-SP resulted in consistently better enrichment over all target families, outperforming Glide and other scoring functions, most notably among kinases. In addition, SVM-SP performance showed little variation among protein classes, exhibited excellent performance in a test case using a homology model, and in some cases showed high enrichment even with few structures used to train a model. We put SVM-SP to the test by virtually screening 1125 compounds against two kinases, EGFR and CaMKII. Among the top 25 EGFR compounds, three compounds (1-3) inhibited kinase activity in vitro with IC₅₀ of 58, 2, and 10 μM. In cell cultures, compounds 1-3 inhibited non-small-cell lung carcinoma (H1299) cancer cell proliferation, with similar IC₅₀ values for compound 3. For CaMKII, one compound among the 20 tested inhibited kinase activity in a dose-dependent manner with an IC₅₀ of 48 μM. These results are encouraging given that our in-house library consists of compounds that emerged from virtual screening of other targets with pockets that are different from typical ATP binding sites found in kinases. In light of the importance of kinases in chemical biology, these findings could have implications in future efforts to identify chemical probes of kinases within the human kinome.
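
    The ROC-AUC enrichment metric used above is easy to reproduce once actives and decoys have been scored; a toy sketch with scikit-learn (the labels and scores are made-up illustrations, with lower scores meaning better predicted binding):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # 1 = known active, 0 = DUD-style decoy; lower score = better binding.
        labels = np.array([1, 1, 1, 0, 0, 0, 0, 0])
        scores = np.array([-9.1, -8.4, -6.0, -7.2, -5.5, -4.8, -4.1, -3.9])
        auc = roc_auc_score(labels, -scores)  # negate so better scores rank first
        print(f"ROC-AUC: {auc:.3f}")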

  14. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    Science.gov (United States)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014; Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367), tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as can derived imagery colour-combination products, dynamically generated and accessed also through the OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the open-source NASA WorldWind virtual globe (e.g. Hogan, 2011) as visualisation engine, and the array database rasdaman Community Edition as the core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible on http://planetserver.eu. All its code base is going to be available on GitHub, on
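
    Queries of the kind described above are posed to the WCPS endpoint over HTTP; a sketch assuming the requests package and a rasdaman/petascope-style service (the endpoint, coverage and band names are invented for illustration, not actual PlanetServer identifiers).

        import requests

        ENDPOINT = "http://planetserver.eu/rasdaman/ows"  # illustrative URL
        # A WCPS band-ratio query of the kind used for spectral indices.
        query = ('for $c in (mars_crism_cube) return encode('
                 '($c.band_233 - $c.band_13) / ($c.band_233 + $c.band_13), "png")')
        resp = requests.get(ENDPOINT, params={
            "service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": query})
        with open("index.png", "wb") as f:
            f.write(resp.content)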

  15. CERN Document Server (CDS): Introduction

    CERN Multimedia

    CERN. Geneva; Costa, Flavio

    2017-01-01

    A short online tutorial introducing the CERN Document Server (CDS). Basic functionality description, the notion of Revisions and the CDS test environment. Links: CDS Production environment CDS Test environment  

  16. The internet, virtual communities and threats to confidentiality ...

    African Journals Online (AJOL)

    Internet list servers and chat groups gather doctors together in virtual space to exchange views on clinical and professional issues. This paper focuses on the last of these Internet applications, beginning with a description of the 'virtual community' that the list servers and chat groups constitute. It demonstrates how various ...

  17. High-Performance Tiled WMS and KML Web Server

    Science.gov (United States)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
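
    Clients retrieve imagery from such a server with ordinary WMS GetMap requests; a sketch assuming the requests package (the endpoint, layer name and grid-aligned bounding box are illustrative, since a tiled WMS only answers requests that match its tile grid).

        import requests

        # Fetch one tile; a tiled WMS only answers grid-aligned requests.
        params = {
            "service": "WMS", "version": "1.1.1", "request": "GetMap",
            "layers": "global_mosaic", "styles": "", "srs": "EPSG:4326",
            "bbox": "-180,-90,180,90", "width": 512, "height": 256,
            "format": "image/jpeg",
        }
        tile = requests.get("https://wms.example.org/wms", params=params)
        with open("tile.jpg", "wb") as f:
            f.write(tile.content)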

  18. Trends in Virtualized User Environments

    Directory of Open Access Journals (Sweden)

    Diane Barrett

    2008-06-01

    Full Text Available Virtualized environments can make forensic investigation more difficult. Technological advances in virtualization tools essentially turn removable media into a PC that can be carried around in a pocket or around the neck. Running operating systems and applications this way leaves very little trace on the host system. This paper explores the newest methods for virtualized environments and the implications they have on the world of forensics. It begins by describing and differentiating between software and hardware virtualization. It then moves on to explain the various methods used for server and desktop virtualization. Next, it explains how virtualization affects the basic forensic process. Finally, it describes the common methods to find virtualization artifacts and identify virtual activities that affect the examination process of certain virtualized user environments.

  19. Windows Server 2012 R2 administrator cookbook

    CERN Document Server

    Krause, Jordan

    2015-01-01

    This book is intended for system administrators and IT professionals with experience in Windows Server 2008 or Windows Server 2012 environments who are looking to acquire the skills and knowledge necessary to manage and maintain the core infrastructure required for a Windows Server 2012 and Windows Server 2012 R2 environment.

  20. Mac OS X Lion Server For Dummies

    CERN Document Server

    Rizzo, John

    2011-01-01

    The perfect guide to help administrators set up Apple's Mac OS X Lion Server With the overwhelming popularity of the iPhone and iPad, more Macs are appearing in corporate settings. The newest version of Mac Server is the ideal way to administer a Mac network. This friendly guide explains to both Windows and Mac administrators how to set up and configure the server, including services such as iCal Server, Podcast Producer, Wiki Server, Spotlight Server, iChat Server, File Sharing, Mail Services, and support for iPhone and iPad. It explains how to secure, administer, and troubleshoot the networ

  1. QlikView Server and Publisher

    CERN Document Server

    Redmond, Stephen

    2014-01-01

    This is a comprehensive guide with a step-by-step approach that enables you to host and manage servers using QlikView Server and QlikView Publisher. If you are a server administrator wanting to learn how to deploy QlikView Server for server management, analysis and testing, and QlikView Publisher for publishing of business content, then this is the perfect book for you. No prior experience with QlikView is expected.

  2. CyberWalk : a web-based distributed virtual walkthrough environment.

    OpenAIRE

    Chim, J.; Lau, R. W. H.; Leong, H. V.; Si, A.

    2003-01-01

    A distributed virtual walkthrough environment allows users connected to the geometry server to walk through a specific place of interest, without having to travel physically. This place of interest may be a virtual museum, virtual library or virtual university. There are two basic approaches to distribute the virtual environment from the geometry server to the clients, complete replication and on-demand transmission. Although the on-demand transmission approach saves waiting time and optimizes...

  3. Predicting subject-driven actions and sensory experience in a virtual world with relevance vector machine regression of fMRI data.

    Science.gov (United States)

    Valente, Giancarlo; De Martino, Federico; Esposito, Fabrizio; Goebel, Rainer; Formisano, Elia

    2011-05-15

    In this work we illustrate the approach of the Maastricht Brain Imaging Center to the PBAIC 2007 competition, where participants had to predict, based on fMRI measurements of brain activity, subject-driven actions and sensory experience in a virtual world. After standard pre-processing (slice scan time correction, motion correction), we generated rating predictions based on linear Relevance Vector Machine (RVM) learning from all brain voxels. Spatial and temporal filtering of the time series was optimized rating by rating. For some of the ratings (e.g. Instructions, Hits, Faces, Velocity), linear RVM regression was accurate and very consistent within and between subjects. For other ratings (e.g. Arousal, Valence) results were less satisfactory. Our approach ranked second overall. To investigate the role of different brain regions in ratings prediction we generated predictive maps, i.e. maps of the weighted contribution of each voxel to the predicted rating. These maps generally included (but were not limited to) "specialized" regions which are consistent with results from conventional neuroimaging studies and known functional neuroanatomy. In conclusion, Sparse Bayesian Learning models, such as RVM, appear to be a valuable approach to the multivariate regression of fMRI time series. The implementation of the Automatic Relevance Determination criterion is particularly suitable and provides good generalization, despite the limited number of samples typically available in fMRI. Predictive maps allow disclosing multi-voxel patterns of brain activity that predict perceptual and behavioral subjective experience. Copyright © 2010 Elsevier Inc. All rights reserved.
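
    scikit-learn does not ship an RVM, but its ARDRegression applies the same Automatic Relevance Determination idea mentioned above; a toy sketch with synthetic stand-ins for voxel time series and ratings (all shapes and names are illustrative).

        import numpy as np
        from sklearn.linear_model import ARDRegression

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 500))      # time points x "voxels"
        w_true = np.zeros(500)
        w_true[:5] = 1.0                         # only a few informative voxels
        y = X @ w_true + 0.5 * rng.standard_normal(200)  # behavioural rating

        model = ARDRegression().fit(X[:150], y[:150])
        r = np.corrcoef(model.predict(X[150:]), y[150:])[0, 1]
        print("test correlation:", round(r, 3))
        # model.coef_ plays the role of a predictive map over the voxels.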

  4. Virtualizing Microsoft Tier 1 Applications with VMware vSphere 4

    CERN Document Server

    Windom, Charles A; Fontana, Alex

    2010-01-01

    Virtualize mission-critical Microsoft applications. How do you safely deploy Tier 1 apps in virtual environments? In this in-depth guide, VMware insiders Charles A. Windom, Hemant Gaidhani, and Alex Fontana show you how. Focusing on Microsoft applications, they guide you step by step through a Proof of Concept for virtualizing Windows Server, Active Directory, Internet Information Services, Exchange Server, SQL Server, SharePoint Server, and Remote Desktop Services—all on the VMware vSphere 4 platform. You'll find out what to consider for each application before you virtualize it, and learn ho

  5. Professional Team Foundation Server 2012

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2012-01-01

    A comprehensive guide to using Microsoft Team Foundation Server 2012 Team Foundation Server has become the leading Microsoft productivity tool for software management, and this book covers what developers need to know to use it effectively. Fully revised for the new features of TFS 2012, it provides developers and software project managers with step-by-step instructions and even assists those who are studying for the TFS 2012 certification exam. You'll find a broad overview of TFS, thorough coverage of core functions, a look at extensibility options, and more, written by Microsoft ins

  6. GeoServer beginner's guide

    CERN Document Server

    Youngblood, Brian

    2013-01-01

    Step-by-step instructions are included and the needs of a beginner are totally satisfied by the book. The book consists of plenty of examples with accompanying screenshots and code for an easy learning curve. You are a web developer with knowledge of server side scripting, and have experience with installing applications on the server. You have a desire to want more than Google maps, by offering dynamically built maps on your site with your latest geospatial data stored in MySQL, PostGIS, MsSQL or Oracle. If this is the case, this book is meant for you.

  7. Professional Team Foundation Server 2010

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2011-01-01

    Authoritative guide to TFS 2010 from a dream team of Microsoft insiders and MVPs! Microsoft Visual Studio Team Foundation Server (TFS) has evolved until it is now an essential tool for Microsoft's Application Lifecycle Management suite of productivity tools, enabling collaboration within and among software development teams. By 2011, TFS will replace Microsoft's leading source control system, Visual SourceSafe, resulting in an even greater demand for information about it. Professional Team Foundation Server 2010, written by an accomplished team of Microsoft insiders and Microsoft MVPs, provides

  8. Learning SQL Server Reporting Services 2012

    CERN Document Server

    Krishnaswamy, Jayaram

    2013-01-01

    The book is packed with clear instructions and plenty of screenshots, providing all the support and guidance you will need as you begin to generate reports with SQL Server 2012 Reporting Services.This book is for those who are new to SQL Server Reporting Services 2012 and aspiring to create and deploy cutting edge reports. This book is for report developers, report authors, ad-hoc report authors and model developers, and Report Server and SharePoint Server Integrated Report Server administrators. Minimal knowledge of SQL Server is assumed and SharePoint experience would be helpful.

  9. Open client/server computing and middleware

    CERN Document Server

    Simon, Alan R

    2014-01-01

    Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and how client/server computing is being done. This book analyzes an in-depth set of case studies about two different open client/server development environments, Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include open systems and client/server computing, next-generation client/server architectures, principles of middleware, and an overview of ProtoGen+. The ViewPaint environment

  10. Beginning Microsoft SQL Server 2012 Programming

    CERN Document Server

    Atkinson, Paul

    2012-01-01

    Get up to speed on the extensive changes to the newest release of Microsoft SQL Server The 2012 release of Microsoft SQL Server changes how you develop applications for SQL Server. With this comprehensive resource, SQL Server authority Robert Vieira presents the fundamentals of database design and SQL concepts, and then shows you how to apply these concepts using the updated SQL Server. Published to coincide with the 2012 release, Beginning Microsoft SQL Server 2012 Programming begins with a quick overview of database design basics and the SQL query language and then quickly proceeds to sho

  11. A virtual computing infrastructure for TS-CV SCADA systems

    CERN Document Server

    Poulsen, S

    2008-01-01

    In modern data centres, it is an emerging trend to operate and manage computers as software components or logical resources and not as physical machines. This technique is known as "virtualisation" and the new computers are referred to as "virtual machines" (VMs). Multiple VMs can be consolidated on a single hardware platform and managed in ways that are not possible with physical machines. However, this is not yet widely practiced for control system deployment. In TS-CV, a collection of VMs or a "virtual infrastructure" has been installed since 2005 for SCADA systems, PLC program development, and alarm transmission. This makes it possible to consolidate distributed, heterogeneous operating systems and applications on a limited number of standardised high-performance servers in the Central Control Room (CCR). More generally, virtualisation assists in offering continuous computing services for controls and maintaining performance and assuring quality. Implementing our systems in a vi...

  12. Building server capabilities in China

    DEFF Research Database (Denmark)

    Adeyemi, Oluseyi; Slepniov, Dmitrij; Wæhrens, Brian Vejrum

    2012-01-01

    The purpose of this paper is to further our understanding of multinational companies building server capabilities in China. The paper is based on the cases of two western companies with operations in China. The findings highlight a number of common patterns in the 1) managerial challenges related...

  13. Client-server password recovery

    NARCIS (Netherlands)

    Chmielewski, Ł.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the

  14. Client-Server Password Recovery

    NARCIS (Netherlands)

    Chmielewski, L.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect – people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the

  15. Team Foundation Server 2013 customization

    CERN Document Server

    Beeming, Gordon

    2014-01-01

    This book utilizes a tutorial based approach, focused on the practical customization of key features of the Team Foundation Server for collaborative enterprise software projects.This practical guide is intended for those who want to extend TFS. This book is for intermediate users who have an understanding of TFS, and basic coding skills will be required for the more complex customizations.

  16. The INTERLISP Virtual Machine Specification,

    Science.gov (United States)

    1976-09-01

  17. Web Server Configuration for an Academic Intranet

    National Research Council Canada - National Science Library

    Baltzis, Stamatios

    2000-01-01

    .... One of the factors that boosted this ability was the evolution of the Web Servers. Using web server technology, one can connect and exchange information with the most remote places all over the...

  18. Implementation of SRPT Scheduling in Web Servers

    National Research Council Canada - National Science Library

    Harchol-Balter, Mor

    2000-01-01

    .... Experiments use the Linux operating system and the Flash web server. All experiments are repeated under a range of server loads and under both trace-based workloads and those generated by a Web workload generator...

  19. Locating Nearby Copies of Replicated Internet Servers

    National Research Council Canada - National Science Library

    Guyton, James D; Schwartz, Michael F

    1995-01-01

    In this paper we consider the problem of choosing among a collection of replicated servers focusing on the question of how to make choices that segregate client/server traffic according to network topology...

  20. A polling model with an autonomous server

    NARCIS (Netherlands)

    de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    Polling models are used as an analytical performance tool in several application areas. In these models, the focus often is on controlling the operation of the server as to optimize some performance measure. For several applications, controlling the server is not an issue as the server moves

  1. NRSAS: Nuclear Receptor Structure Analysis Servers.

    NARCIS (Netherlands)

    Bettler, E.J.M.; Krause, R.; Horn, F.; Vriend, G.

    2003-01-01

    We present a coherent series of servers that can perform a large number of structure analyses on nuclear hormone receptors. These servers are part of the NucleaRDB project, which provides a powerful information system for nuclear hormone receptors. The computations performed by the servers include

  2. Going virtual: popular trend or real prospect for enterprise information systems

    CSIR Research Space (South Africa)

    Carroll, M

    2010-06-01

    Full Text Available Organisations are faced with a number of challenges and issues in decentralised, multiple-server, physical, non-virtualized IT environments. Virtualization in recent years has had a significant impact on computing environments and has introduced...

  3. Security Implications of Virtualization: A Literature Study

    NARCIS (Netherlands)

    van Cleeff, A.; Pieters, Wolter; Wieringa, Roelf J.

    2009-01-01

    Server virtualization is a key technology for today's data centers, allowing dedicated hardware to be turned into resources that can be used on demand.However, in spite of its important role, the overall security impact of virtualization is not well understood.To remedy this situation, we have

  4. PENGEMBANGAN ANTIVIRUS BERBASIS CLIENT SERVER

    Directory of Open Access Journals (Sweden)

    Richki Hardi

    2015-07-01

    Full Text Available The era of globalization is one in which computer viruses have grown rapidly, becoming not merely a subject of academic research but a common problem for computer users worldwide. The losses they cause are amplified by the increasingly widespread use of the Internet as a global communication line between computer users around the world, according to CSI/FBI survey results. Along with this progress, computer viruses have undergone an evolution in form, characteristics and distribution media, such as worms, spyware, Trojan horses and other malicious code. Through the development of a client-server based antivirus, users can easily determine the behavior of viruses and worms, learn which part of an operating system is being attacked, and rely on the system as a fast and trustworthy scanning engine that recognizes viruses while economizing on memory management.

  5. CERN servers go to Mexico

    CERN Multimedia

    Stefania Pandolfi

    2015-01-01

    On Wednesday, 26 August, 384 servers from the CERN Computing Centre were donated to the Faculty of Science in Physics and Mathematics (FCFM) and the Mesoamerican Centre for Theoretical Physics (MCTP) at the University of Chiapas, Mexico.   CERN’s Director-General, Rolf Heuer, met the Mexican representatives in an official ceremony in Building 133, where the servers were prepared for shipment. From left to right: Frédéric Hemmer, CERN IT Department Head; Raúl Heredia Acosta, Deputy Permanent Representative of Mexico to the United Nations and International Organizations in Geneva; Jorge Castro-Valle Kuehne, Ambassador of Mexico to the Swiss Confederation and the Principality of Liechtenstein; Rolf Heuer, CERN Director-General; Luis Roberto Flores Castillo, President of the Swiss Chapter of the Global Network of Qualified Mexicans Abroad; Virginia Romero Tellez, Coordinator of Institutional Relations of the Swiss Chapter of the Global Network of Qualified Me...

  6. PostgreSQL server programming

    CERN Document Server

    Krosing, Hannu

    2013-01-01

    This practical guide leads you through numerous aspects of working with PostgreSQL. Step by step examples allow you to easily set up and extend PostgreSQL. ""PostgreSQL Server Programming"" is for moderate to advanced PostgreSQL database professionals. To get the best understanding of this book, you should have general experience in writing SQL, a basic idea of query tuning, and some coding experience in a language of your choice.

  7. The implementation of virtualization technology in EAST data system

    International Nuclear Information System (INIS)

    Wang, Feng; Sun, Xiaoyang; Li, Shi; Wang, Yong; Xiao, Bingjia; Chang, Sidi

    2014-01-01

    Highlights: • Server virtualization based on XenServer has been used in the EAST data center for common servers and the software development platform. • Application virtualization based on XenApp has been demonstrated in EAST to provide an easy and unified data browsing method. • Desktop virtualization based on XenDesktop has been adopted in the new EAST central control room. - Abstract: Virtualization technology is currently very popular in many fields, offering advantages such as reduced costs, unified management, mobile applications and cross-platform support. We have implemented virtualization technology in the EAST control and data system. There are four primary kinds of virtualization technology providers, including VMware, Citrix, Microsoft Hyper-V as well as open source solutions. We have chosen the Citrix solution to implement our virtualization system, which mainly includes three aspects. Firstly, we adopt XenServer to realize virtual servers for the EAST data management and service system. Secondly, we use XenApp to realize a cross-platform system for unified data access. Thirdly, in order to simplify the management of the client computers, we adopt XenDesktop to realize virtual desktops for the new central control room. The details of the implementation are described in this paper

  8. Manipulating E-Mail Server Feedback for Spam Prevention

    Directory of Open Access Journals (Sweden)

    O. A. Okunade

    2017-08-01

    Full Text Available The cyber criminals who infect machines with bots are not the same as the spammers who rent botnets to distribute their messages. The activities of these spammers account for the majority of spam email traffic on the Internet. Once their botnets and campaigns are identified, it is not enough to keep filtering the spam emails; it is necessary to deploy techniques that carry the fight to the spammers' end. It is observed that spammers also take server feedback into account (for example, to detect and remove non-existent recipients from email address lists). We can take advantage of this observation by returning fake information, thereby poisoning the server feedback on which the spammers rely. The results of this paper show that by sending misleading information to a spammer, it is possible to prevent recipients from receiving subsequent spam emails from that same spammer.
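
    The feedback-poisoning idea can be sketched with the aiosmtpd package: the server below accepts every RCPT TO, real or fake, so a probing spammer learns nothing from its responses (a minimal sketch, not the authors' implementation; hostname and port are arbitrary).

        from aiosmtpd.controller import Controller

        class PoisonedFeedback:
            # Accept every recipient, real or not, so address lists cannot
            # be cleaned against this server's responses.
            async def handle_RCPT(self, server, session, envelope, address,
                                  rcpt_options):
                envelope.rcpt_tos.append(address)
                return "250 OK"

            async def handle_DATA(self, server, session, envelope):
                return "250 Message accepted for delivery"

        controller = Controller(PoisonedFeedback(), hostname="127.0.0.1", port=8025)
        controller.start()
        input("SMTP sink running on port 8025; press Enter to stop\n")
        controller.stop()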

  9. Mastering Microsoft Windows Server 2008 R2

    CERN Document Server

    Minasi, Mark; Finn, Aidan

    2010-01-01

    The one book you absolutely need to get up and running with Windows Server 2008 R2. One of the world's leading Windows authorities and top-selling author Mark Minasi explores every nook and cranny of the latest version of Microsoft's flagship network operating system, Windows Server 2008 R2, giving you the most in-depth coverage in any book on the market.: Focuses on Windows Server 2008 R2, the newest version of Microsoft's Windows' server line of operating system, and the ideal server for new Windows 7 clients; Author Mark Minasi is one of the world's leading Windows authorities and h

  10. Professional Microsoft SQL Server 2012 Administration

    CERN Document Server

    Jorgensen, Adam; LoForte, Ross; Knight, Brian

    2012-01-01

    An essential how-to guide for experienced DBAs on the most significant product release since 2005! Microsoft SQL Server 2012 will have major changes throughout the SQL Server and will impact how DBAs administer the database. With this book, a team of well-known SQL Server experts introduces the many new features of the most recent version of SQL Server and deciphers how these changes will affect the methods that administrators have been using for years. Loaded with unique tips, tricks, and workarounds for handling the most difficult SQL Server admin issues, this how-to guide deciphers topics s

  11. The weighted 2-server problem

    Czech Academy of Sciences Publication Activity Database

    Chrobak, M.; Sgall, Jiří

    2004-01-01

    Roč. 324, 2-3 (2004), s. 289-319 ISSN 0304-3975 R&D Projects: GA MŠk ME 103; GA MŠk ME 476; GA ČR GA201/01/1195; GA MŠk LN00A056; GA AV ČR IAA1019901; GA AV ČR IAA1019401 Institutional research plan: CEZ:AV0Z1019905 Keywords: online algorithms * k-server problem Subject RIV: BA - General Mathematics Impact factor: 0.676, year: 2004

  12. Measuring SIP proxy server performance

    CERN Document Server

    Subramanian, Sureshkumar V

    2013-01-01

    Internet Protocol (IP) telephony is an alternative to the traditional Public Switched Telephone Networks (PSTN), and the Session Initiation Protocol (SIP) is quickly becoming a popular signaling protocol for VoIP-based applications. SIP is a peer-to-peer multimedia signaling protocol standardized by the Internet Engineering Task Force (IETF), and it plays a vital role in providing IP telephony services through its use of the SIP Proxy Server (SPS), a software application that provides call routing services by parsing and forwarding all the incoming SIP packets in an IP telephony network.SIP Pr

  13. Virtual Memory Introspection Framework for Cyber Threat Detection in Virtual Environment

    Directory of Open Access Journals (Sweden)

    Himanshu Upadhyay

    2018-01-01

    Full Text Available In today's information-based world, it is increasingly important to safeguard the data owned by any organization, be it intellectual property or personal information. With the ever increasing sophistication of malware, it is imperative to come up with automated and advanced methods of attack vector recognition and isolation. Existing methods are not dynamic enough to adapt to the behavioral complexity of new malware. Widely used operating systems, especially Linux, enjoy a popular perception of being more secure than other operating systems (e.g. Windows), but this is not necessarily true. The open source nature of the Linux operating system is a double-edged sword; the fact that malicious actors have full access to the kernel code does not reassure the IT world about Linux's vulnerabilities. Recent widely reported hacking attacks on reputable organizations have mostly been on Linux servers. Most new malware is able to neutralize existing defenses on the Linux operating system. A radical solution for malware detection is needed, one which cannot be detected and damaged by malicious code. In this paper, we propose a novel framework design that uses virtualization to isolate and monitor Linux environments. The framework uses the well-known Xen hypervisor to host server environments and uses a Virtual Memory Introspection framework to capture process behavior. The behavioral data is analyzed using sophisticated machine learning algorithms to flag potential cyber threats. The framework can be enhanced to have self-healing properties: any compromised hosts are immediately replaced by their uncompromised versions, limiting the exposure to the wider enterprise network.

  14. Agreements in Virtual Organizations

    Science.gov (United States)

    Pankowska, Malgorzata

    This chapter is an attempt to explain the important impact that contract theory has on the concept of the virtual organization. The author believes that not enough research has been conducted to transfer theoretical foundations for networking to the phenomena of virtual organizations and the open autonomic computing environment, so as to ensure their controllability and management. The main research problem of this chapter is to explain the significance of agreements for the governance of virtual organizations. The first part of the chapter explains the differences between virtual machines and virtual organizations, and then describes the significance of the former for the development of the latter. Next, virtual organization development tendencies are presented and problems of IT governance in a highly distributed organizational environment are discussed. The last part of the chapter covers the analysis of contract and agreement management for governance in open computing environments.

  15. A Framework For Fault Tolerance In Virtualized Servers

    Science.gov (United States)

    2016-06-01

    effects into the system. Decreases in performance, the expansion in the total system size and weight, and a hike in the system cost can be counted in... benefit also shines out in terms of reliability. ... How Data Guard Synchronizes Standby Databases: Primary and standby databases in Oracle Data

  16. HDF-EOS Web Server

    Science.gov (United States)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.

  17. CERN servers donated to Ghana

    CERN Multimedia

    CERN Bulletin

    2012-01-01

    Cutting-edge research requires a constantly high performance of the computing equipment. At the CERN Computing Centre, computers typically need to be replaced after about four years of use. However, while servers may be withdrawn from cutting-edge use, they are still good for other uses elsewhere. This week, 220 servers and 30 routers were donated to the Kwame Nkrumah University of Science and Technology (KNUST) in Ghana.   “KNUST will provide a good home for these computers. The university has also developed a plan for using them to develop scientific collaboration with CERN,” said John Ellis, a professor at King’s College London and a visiting professor in CERN’s Theory Group.  John Ellis was heavily involved in building the relationship with Ghana, which started in 2006 when a Ghanaian participated in the CERN openlab student programme. Since 2007 CERN has hosted Ghanaians especially from KNUST in the framework of the CERN Summer Student Progr...

  18. Home media server content management

    Science.gov (United States)

    Tokmakoff, Andrew A.; van Vliet, Harry

    2001-07-01

    With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of convergence of the triad: communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy to use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut. Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline explorative user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.

  19. Passive Detection of Misbehaving Name Servers

    Science.gov (United States)

    2013-10-01

    name servers that changed IP address five or more times in a month. Solid red line indicates those servers possibly linked to pharmaceutical scams. ... malicious and stated that fast-flux hosting “is considered one of the most serious threats to online activities today” [ICANN 2008, p. 2]. ... that time, apparently independent of filters on name-server flux, a large number of pharmaceutical scams were taken down. These scams apparently

  20. PERANCANGAN MAIL SERVER ZIMBRA MENGGUNAKAN TEKNOLOGI VIRTUALISASI STUDI KASUS : SMK PANCAKARYA KOTA TANGERANG

    Directory of Open Access Journals (Sweden)

    Heru Prasetiawan

    2017-05-01

    Full Text Available The development of information technology is growing rapidly, spurring the emergence of new technologies that are constantly evolving and that are more reliable, efficient, economical and powerful than their predecessors. Electronic mail (email) is a form of communication and correspondence composed electronically on a computer system and transmitted across a computer network to another computer. A mail server is needed to support communication via email. In this work, the Zimbra mail server is implemented using virtualization technology on Proxmox, a Debian-based Linux distribution, with SLES (SUSE Linux Enterprise Server) as the guest operating system. The research was conducted at an institution that already had computer networking facilities, so it complements the institution's need for a mail server. The result is a virtualized mail server that provides a web-based mail client as well as antivirus and antispam facilities.

  1. Mastering Microsoft Windows Small Business Server 2008

    CERN Document Server

    Johnson, Steven

    2010-01-01

    A complete, winning approach to the number one small business solution. Do you have 75 or fewer users or devices on your small-business network? Find out how to integrate everything you need for your mini-enterprise with Microsoft's new Windows Server 2008 Small Business Server, a custom collection of server and management technologies designed to help small operations run smoothly without a giant IT department. This comprehensive guide shows you how to master all SBS components as well as handle integration with other Microsoft technologies.: Focuses on Windows Server 2008 Small Business Serv

  2. Mastering Windows Server 2008 Networking Foundations

    CERN Document Server

    Minasi, Mark; Mueller, John Paul

    2011-01-01

    Find in-depth coverage of general networking concepts and basic instruction on Windows Server 2008 installation and management including active directory, DNS, Windows storage, and TCP/IP and IPv4 networking basics in Mastering Windows Server 2008 Networking Foundations. One of three new books by best-selling author Mark Minasi, this guide explains what servers do, how basic networking works (IP basics and DNS/WINS basics), and the fundamentals of the under-the-hood technologies that support staff must understand. Learn how to install Windows Server 2008 and build a simple network, security co

  3. National Medical Terminology Server in Korea

    Science.gov (United States)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

    Interoperable EHR (Electronic Health Record) necessitates at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local hospitals, from primary to tertiary care. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  4. A tandem queue with delayed server release

    OpenAIRE

    Nawijn, W.M.

    1997-01-01

    We consider a tandem queue with two stations. The first station is an s-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to receive service at an exponential single server, while in the meantime he blocks his server in station 1 until he completes service in station 2, whereupon the server in station 1 is released. An analysis of the generating function of the simultaneous probability di...

  5. Microsoft Windows Server 2012 administration instant reference

    CERN Document Server

    Hester, Matthew

    2013-01-01

    Fast, accurate answers for common Windows Server questions Serving as a perfect companion to all Windows Server books, this reference provides you with quick and easily searchable solutions to day-to-day challenges of Microsoft's newest version of Windows Server. Using helpful design features such as thumb tabs, tables of contents, and special heading treatments, this resource boasts a smooth and seamless approach to finding information. Plus, quick-reference tables and lists provide additional on-the-spot answers. Covers such key topics as server roles and functionality, u

  6. EarthServer - 3D Visualization on the Web

    Science.gov (United States)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the Earth data visualization client on a wide range of platforms with very different software and hardware capabilities, such as smart phones (e.g. iOS, Android), different desktop systems, etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies.
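
    As a hedged illustration of the kind of request such a browser-based client issues, the sketch below submits a WCPS query over WCS 2.0 to a hypothetical rasdaman/petascope-style endpoint (the URL and coverage name are invented); the server performs the time slicing and returns a small encoded image instead of the full datacube.

        import requests  # third-party HTTP client

        ENDPOINT = "https://example.org/rasdaman/ows"  # hypothetical service

        # WCPS: slice a 3-D coverage at one time step, encode the result as PNG.
        query = ('for $c in (TemperatureCube) '
                 'return encode($c[ansi("2012-06-01T00:00:00Z")], "png")')

        resp = requests.get(ENDPOINT, params={
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",
            "query": query,
        })
        resp.raise_for_status()
        with open("slice.png", "wb") as f:
            f.write(resp.content)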

  7. Application of support vector machine to three-dimensional shape-based virtual screening using comprehensive three-dimensional molecular shape overlay with known inhibitors.

    Science.gov (United States)

    Sato, Tomohiro; Yuki, Hitomi; Takaya, Daisuke; Sasaki, Shunta; Tanaka, Akiko; Honma, Teruki

    2012-04-23

    In this study, machine learning using support vector machine was combined with three-dimensional (3D) molecular shape overlay, to improve the screening efficiency. Since the 3D molecular shape overlay does not use fingerprints or descriptors to compare two compounds, unlike 2D similarity methods, the application of machine learning to a 3D shape-based method has not been extensively investigated. The 3D similarity profile of a compound is defined as the array of 3D shape similarities with multiple known active compounds of the target protein and is used as the explanatory variable of support vector machine. As the measures of 3D shape similarity for our new prediction models, the prediction performances of the 3D shape similarity metrics implemented in ROCS, such as ShapeTanimoto and ScaledColor, were validated, using the known inhibitors of 15 target proteins derived from the ChEMBL database. The learning models based on the 3D similarity profiles stably outperformed the original ROCS when more than 10 known inhibitors were available as the queries. The results demonstrated the advantages of combining machine learning with the 3D similarity profile to process the 3D shape information of plural active compounds.
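
    A minimal sketch of this learning setup, with random placeholder values standing in for the ROCS similarity scores (scikit-learn assumed): each compound is represented by its 3D similarity profile against the known actives, and an SVM trained on those profiles ranks a screening library.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # One row per training compound, one column per reference inhibitor;
        # real entries would be 3-D shape similarities (e.g. ShapeTanimoto).
        X_train = rng.random((200, 15))
        y_train = rng.integers(0, 2, 200)      # 1 = active, 0 = decoy

        model = SVC(kernel="rbf", probability=True)
        model.fit(X_train, y_train)

        # Rank a screening library by predicted probability of activity.
        X_screen = rng.random((1000, 15))
        scores = model.predict_proba(X_screen)[:, 1]
        ranked = np.argsort(scores)[::-1]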

  8. First Indico Virtual Event

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The first Indico virtual event will take place on February 4th at 15:00 and will focus on two main topics: the release of Indico v1.2, and the migration of the OO Indico backend database (ZODB) to a more standard DBMS. It will be fully virtual, using the CERN Vidyo service, and will foster discussions between developers and administrators of Indico servers worldwide. Connections to the virtual room will be open, but attendees are encouraged to register for the event, in order to be informed of any changes to the organisation. If you would like to add a topic of discussion or propose a contribution yourself, please let us know at indico-team@cern.ch. Connection to Vidyo: Vidyo connection details are available here. CERN Vidyo service documentation can be found here. First-time users are encouraged to try the service before connecting to the real event.

  9. Essential Mac OS X panther server administration integrating Mac OS X server into heterogeneous networks

    CERN Document Server

    Bartosh, Michael

    2004-01-01

    If you've ever wondered how to safely manipulate Mac OS X Panther Server's many underlying configuration files or needed to explain AFP permission mapping--this book's for you. From the command line to Apple's graphical tools, the book provides insight into this powerful server software. Topics covered include installation, deployment, server management, web application services, data gathering, and more

  10. The ASDEX Upgrade Parameter Server

    Energy Technology Data Exchange (ETDEWEB)

    Neu, Gregor, E-mail: gregor.neu@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Gräter, Alex [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); Lüddecke, Klaus [Unlimited Computer Systems, Seeshaupter Str. 15, 82393 Iffeldorf (Germany); Rapson, Christopher J.; Raupp, Gerhard; Treutterer, Wolfgang; Zasche, Dietrich; Zehetbauer, Thomas [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany)

    2015-10-15

    Highlights: • We describe our main tool in the plasma control configuration process. • Parameter access and computation are configurable with XML files. • Simple implementation of in situ tests by rerouting requests to test data. • Pulse specific overriding of parameters. - Abstract: Concepts for the configuration of plant systems and plasma control of modern devices such as ITER and W7-X are based on global data structures, or “pulse schedules” or “experiment programs”, which specify all physics characteristics (waveforms for controlled actuators and plasma quantities) and all technical characteristics of the plant systems (diagnostics and actuator operation settings) for a planned pulse. At ASDEX Upgrade we use a different approach. We observed that the physics characteristics driving the discharge control system (DCS) are frequently modified on a pulse-to-pulse basis. Plant system operation, however, relies on technical standard settings, or “basic configurations”, to provide guaranteed resources or services, which evolve according to longer-term session or campaign operation schedules. This is why AUG manages technical configuration items separately from physics items. Consistent computation of the DCS configuration requires access to all this physics and technical data, which includes the discharge programme (DP), settings of actuator systems and real-time diagnostics, the current system state, and a database of static parameters. A Parameter Server provides a unified view on all these parameter sets and acts as the central point of access. We describe the functionality and architecture of the Parameter Server and its embedding into the control environment.
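
    The layered lookup at the heart of such a server can be sketched in a few lines: pulse-specific overrides shadow the discharge programme, which shadows the technical basic configurations and static defaults. The keys and layers below are illustrative, not the AUG schema; an in-situ test could be modelled by pushing a test-data layer onto the front of the chain.

        from collections import ChainMap

        static_db      = {"coil/max_current": 80e3, "gas/valve": "D2"}
        basic_config   = {"gas/valve": "D2", "diag/interferometer/rate": 10e3}
        discharge_prog = {"plasma/Ip_waveform": [(0.0, 0.0), (1.0, 0.8e6)]}
        pulse_override = {"diag/interferometer/rate": 20e3}  # this pulse only

        class ParameterServer:
            """Single point of access resolving parameters layer by layer."""
            def __init__(self, *layers):
                self._view = ChainMap(*layers)   # earliest layer wins

            def get(self, name):
                return self._view[name]

        ps = ParameterServer(pulse_override, discharge_prog, basic_config, static_db)
        print(ps.get("diag/interferometer/rate"))  # 20000.0: pulse override wins
        print(ps.get("gas/valve"))                 # falls through to basic config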

  11. NCI's Distributed Geospatial Data Server

    Science.gov (United States)

    Larraondo, P. R.; Evans, B. J. K.; Antony, J.

    2016-12-01

    Earth systems, environmental and geophysical datasets are an extremely valuable source of information about the state and evolution of the Earth. However, different disciplines and applications require this data to be post-processed in different ways before it can be used. For researchers experimenting with algorithms across large datasets or combining multiple data sets, the traditional approach of batch data processing and storing all the output for later analysis rapidly becomes unfeasible, and often requires additional work to publish for others to use. Recent developments in distributed computing using interactive access to significant cloud infrastructure open the door for new ways of processing data on demand, hence alleviating the need for storage space for each individual copy of each product. The Australian National Computational Infrastructure (NCI) has developed a highly distributed geospatial data server which supports interactive processing of large geospatial data products, including satellite Earth Observation data and global model data, using flexible user-defined functions. This system dynamically and efficiently distributes the required computations among cloud nodes and thus provides a scalable analysis capability. In many cases this completely alleviates the need to preprocess and store the data as products. This system presents a standards-compliant interface, allowing ready accessibility for users of the data. Typical data wrangling problems such as handling different file formats and data types, or harmonising the coordinate projections or temporal and spatial resolutions, can now be handled automatically by this service. The geospatial data server exposes functionality for specifying how the data should be aggregated and transformed. The resulting products can be served using several standards such as the Open Geospatial Consortium's (OGC) Web Map Service (WMS) or Web Feature Service (WFS), Open Street Map tiles, or raw binary arrays.

  12. ROLE OF VIRTUALIZATION IN CLOUD COMPUTING

    OpenAIRE

    Avneet kaur; Dr. Gaurav Gupta; Dr. Gurjit Singh Bhathal

    2017-01-01

    Cloud Computing is the fundamental change happening in the field of Information Technology. Virtualization is the key component of cloud computing. With the use of virtualization, cloud computing brings about not only convenience and efficiency benefits, but also great challenges in the field of data security and privacy protection. In this paper, we discuss virtualization and the architecture of virtualization technology, as well as the Virtual Machine Monitor (VMM). Further, we discuss ...

  13. A polling model with an autonomous server

    NARCIS (Netherlands)

    de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    2009-01-01

    This paper considers polling systems with an autonomous server that remains at a queue for an exponential amount of time before moving to a next queue, incurring a generally distributed switch-over time. The server remains at a queue until the exponential visit time expires, also when the queue is empty.

  14. A tandem queue with delayed server release

    NARCIS (Netherlands)

    Nawijn, W.M.

    1997-01-01

    We consider a tandem queue with two stations. The first station is an s-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to receive service at an exponential single server, while in the meantime he is blocking his server in station 1 until he completes service in station 2, whereupon the server in station 1 is released.

  15. Tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2007-01-01

    We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the so-called slow-down network, see [2]. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of jobs in the second queue exceeds a certain threshold.

  16. Personalized Pseudonyms for Servers in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiao Qiuyu

    2017-10-01

    A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: We design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud's tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”), a persistent pseudonym for a tenant server that can be used by a single client to access the server, whose real identity is protected by the cloud from both passive and active network attackers. When instantiated for TLS-based access to web servers, our design works with all major browsers and requires no additional client-side software and minimal changes to the client user experience. Moreover, changes to tenant servers can be hidden in supporting software (operating systems and web-programming frameworks) without imposing on web-content development. Perhaps most notably, our system boosts privacy with minimal impact to web-browsing performance, after some initial setup during a user's first access to each web server.
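
    One illustrative way (not necessarily the paper's exact construction) to mint a persistent per-client pseudonym is to key a MAC with a secret held only by the cloud operator, so the same client always resolves the same opaque name while an observer cannot link it to the tenant.

        import hmac, hashlib, base64

        CLOUD_KEY = b"secret-known-only-to-the-cloud-operator"  # illustrative

        def popsicl_name(client_id: str, tenant_server: str) -> str:
            """Stable, opaque, DNS-safe pseudonym for a (client, tenant) pair."""
            tag = hmac.new(CLOUD_KEY, f"{client_id}|{tenant_server}".encode(),
                           hashlib.sha256).digest()
            label = base64.b32encode(tag[:10]).decode().lower()
            return f"{label}.cloud.example.com"

        print(popsicl_name("alice-browser-7", "shop.tenant.example"))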

  17. Building mail server on distributed computing system

    International Nuclear Information System (INIS)

    Akihiro Shibata; Osamu Hamada; Tomoko Oshikubo; Takashi Sasaki

    2001-01-01

    Electronic mail has become an indispensable function in daily work, and server stability and performance are required. Using DCE and DFS we have built a distributed electronic mail server; that is, servers such as SMTP and IMAP are distributed symmetrically and provide seamless access.

  18. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...

  19. Optimization of parameter calculation in the production history matching process using Parallel Virtual Machine (PVM); Otimizacao do calculo de parametros no processo de ajuste de historicos de producao usando PVM

    Energy Technology Data Exchange (ETDEWEB)

    Vargas Cuervo, Carlos Hernan

    1997-03-01

    The main objective of this work is to develop a methodology to optimize the simultaneous computation of two parameters in the process of production history matching. This work describes a procedure to minimize an objective function established to find the values of the parameters which are modified in the process. The parameters are chosen after a sensitivity analysis. Two optimization methods are tested: a Region Search Method (MBR) and the Polytope Method. Both are direct search methods which do not require the function derivative. The software PVM (Parallel Virtual Machine) is used to parallelize the simulation runs, allowing the acceleration of the process and the search for multiple solutions. The methodology is validated on two reservoir models: one homogeneous and the other heterogeneous. The advantages of each method and of the parallelization are also presented. (author)
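
    PVM itself is rarely deployed today; as a rough modern stand-in for farming out independent simulation runs, the sketch below uses Python's multiprocessing, with a dummy misfit function in place of the reservoir simulator (parameter names and values are invented).

        from multiprocessing import Pool

        def run_simulation(params):
            """Placeholder for one simulator run returning the history-match misfit."""
            permeability, porosity = params
            return (permeability - 150.0) ** 2 + 40.0 * (porosity - 0.22) ** 2

        if __name__ == "__main__":
            candidates = [(k, phi) for k in (50, 100, 150, 200)
                                   for phi in (0.18, 0.22, 0.26)]
            with Pool(processes=4) as pool:       # 4 concurrent worker tasks
                misfits = pool.map(run_simulation, candidates)
            best = min(zip(misfits, candidates))
            print("best misfit %.2f at params %s" % best)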

  20. Bringing it All Together: NODC's Geoportal Server as an Integration Tool for Interoperable Data Services

    Science.gov (United States)

    Casey, K. S.; Li, Y.

    2011-12-01

    The US National Oceanographic Data Center (NODC) has implemented numerous interoperable data technologies in recent years to enhance the discovery, understanding, and use of the vast quantities of data in the NODC archives. These services include OPeNDAP's Hyrax server, Unidata's THREDDS Data Server (TDS), NOAA's Live Access Server (LAS), and most recently the ESRI ArcGIS Server. Combined, these technologies enable NODC to provide access to its data holdings and products through most of the commonly-used standardized web services like the Data Access Protocol (DAP) and the Open Geospatial Consortium suite of services such as the Web Mapping Service (WMS) and Web Coverage Service (WCS). Despite the strong demand for and use of these services, the acronym-rich environment of services can also result in confusion for producers of data to the NODC archives, for consumers of data from the NODC archives, and for the data stewards at the archives as well. The situation is further complicated by the fact that NODC also maintains some ad hoc services like WODselect, and that not all services can be applied to all of the tens of thousands of collections in the NODC archive; where once every data set was available only through FTP and HTTP servers, now many are also available from the LAS, TDS, Hyrax, and ArcGIS Server. To bring order and clarity to this potentially confusing collection of services, NODC deployed the Geoportal Server into its Archive Management System as an integrating technology that brings together its various data access, visualization, and discovery services as well as its overall metadata management workflows. While providing an enhanced web-based interface for more integrated human-to-machine discovery and access, the deployment also enables NODC for the first time to support a robust set of machine-to-machine discovery services such as the Catalog Service for the Web (CS/W), OpenSearch, and Search and Retrieval via URL (SRU).

  1. Virtually teaching virtual leadership

    DEFF Research Database (Denmark)

    Henriksen, Thomas Duus; Nielsen, Rikke Kristine; Børgesen, Kenneth

    2017-01-01

    This paper seeks to investigate the challenges to virtual collaboration and leadership on the basis of findings from a virtual course on collaboration and leadership. The course used for this experiment was designed as a practical approach, which allowed participants to experience curriculum phenomena. This experimental course provided insights into the challenges involved in virtual processes, and those experiences were used for addressing the challenges that virtual leadership is confronted with. Emphasis was placed on the reduction of undesired virtual distance and its consequences through affinity building. We found that student scepticism appeared when a breakdown resulted in increasing virtual distance, and this raises questions on how leaders might translate or upgrade their understandings of leadership to handle such increased distance through affinity building.

  2. Server for experimental data from LHD

    International Nuclear Information System (INIS)

    Emoto, M.; Ohdachi, S.; Watanabe, K.; Sudo, S.; Nagayama, Y.

    2006-01-01

    In order to unify various types of data, the Kaiseki Server was developed. This server provides physical experimental data of Large Helical Device (LHD) experiments. Many types of data acquisition systems are currently in operation, producing files in various formats. Therefore, it has been difficult to analyze different types of acquired data at the same time, because each system's data must be read in a particular manner. To facilitate the usage of this data by researchers, the authors have developed a new server system, which provides a unified data format and a unique data retrieval interface. Although the Kaiseki Server satisfied the initial demand, new requests arose from researchers, one of which was remote usage of the server. The current system cannot be used remotely because of security issues. Another request was group ownership, i.e., users belonging to the same group should have equal access to data. To satisfy these demands, the authors modified the server. However, since other requests may arise in the future, the new system must be flexible so that it can satisfy future demands. Therefore, the authors decided to develop a new server using a three-tier structure.

  3. Optimizing queries in SQL Server 2008

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2010-05-01

    Starting from the need to develop efficient IT systems, we intend to review the optimization methods and tools that can be used by SQL Server database administrators and developers of applications based on Microsoft technology, focusing on the latest version of the proprietary DBMS, SQL Server 2008. We reflect on the objectives to be considered in improving the performance of SQL Server instances, tackle the most commonly used techniques for analyzing and optimizing queries, and describe the “Optimize for ad hoc workloads”, “Plan Freezing” and “Optimize for unknown” new options, accompanied by relevant code examples.

  4. Personalized Pseudonyms for Servers in the Cloud

    OpenAIRE

    Xiao Qiuyu; Reiter Michael K.; Zhang Yinqian

    2017-01-01

    A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: We design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud’s tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”), ...

  5. Getting started with SQL Server 2014 administration

    CERN Document Server

    Ellis, Gethyn

    2014-01-01

    This is an easy-to-follow, hands-on tutorial that includes real-world examples of SQL Server 2014's new features. Each chapter is explained in a step-by-step manner which guides you to implement the new technology. If you want to create a highly efficient database server then this book is for you. This book is for database professionals and system administrators who want to use the added features of SQL Server 2014 to create a hybrid environment, which is both highly available and allows you to get the best performance from your databases.

  6. Markerless client-server augmented reality system with natural features

    Science.gov (United States)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The viewer is shown an image in front of their eyes with the head-mounted display. The front-facing camera is used to capture video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is the realization of augmented reality with natural features instead of markers, which addresses the limitations of markers: they are only black and white, are inapplicable under some environmental conditions, and in particular fail when the marker is partially occluded. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.
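
    A hedged sketch of the length-prefixed framing such a socket link might use to ship encoded video frames between the workstation and the display client (the framing and helper names are illustrative, not taken from the paper).

        import socket
        import struct

        def send_frame(sock: socket.socket, frame: bytes) -> None:
            # 4-byte big-endian length header, then the encoded frame.
            sock.sendall(struct.pack("!I", len(frame)) + frame)

        def _recv_exact(sock: socket.socket, n: int) -> bytes:
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("socket closed mid-frame")
                buf += chunk
            return buf

        def recv_frame(sock: socket.socket) -> bytes:
            (length,) = struct.unpack("!I", _recv_exact(sock, 4))
            return _recv_exact(sock, length)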

  7. Mastering Windows Server 2012 R2

    CERN Document Server

    Minasi, Mark; Booth, Christian; Butler, Robert; McCabe, John; Panek, Robert; Rice, Michael; Roth, Stefan

    2013-01-01

    Check out the new Hyper-V, find new and easier ways to remotely connect back into the office, or learn all about Storage Spaces-these are just a few of the features in Windows Server 2012 R2 that are explained in this updated edition from Windows authority Mark Minasi and a team of Windows Server experts led by Kevin Greene. This book gets you up to speed on all of the new features and functions of Windows Server, and includes real-world scenarios to put them in perspective. If you're a system administrator upgrading to, migrating to, or managing Windows Server 2012 R2, you'll find what you need in this guide.

  8. Asymmetric Telecollaboration in Virtual Reality

    OpenAIRE

    PORSSUT, Thibault; CHARDONNET, Jean-Rémy

    2017-01-01

    We present a first study where we combine two asymmetric virtual reality systems for telecollaboration purposes: a CAVE system and a head-mounted display (HMD), using a server-client type architecture. Experiments on a puzzle game in limited time, alone and in collaboration, show that combining asymmetric systems reduces cognitive load. Moreover, the participants reported preferring working in collaboration and were more efficient in collaboration. These results ...

  9. Cloudified Mobility and Bandwidth Prediction in Virtualized LTE Networks

    NARCIS (Netherlands)

    Zhao, Zongliang; Karimzadeh Motallebi Azar, Morteza; Braun, Torsten; Pras, Aiko; van den Berg, Hans Leo

    Network Function Virtualization involves implementing network functions (e.g., virtualized LTE components) in software that can run on a range of industry-standard server hardware, and that can be migrated or instantiated on demand. A prediction service hosted on cloud infrastructures enables consumers to obtain mobility and bandwidth predictions on demand.

  10. Markov queue game with virtual reality strategies | Nwobi-Okoye ...

    African Journals Online (AJOL)

    A non-cooperative Markov game with several unique characteristics is introduced. Some of these characteristics include: the existence of a single-phase multi-server queuing model and Markovian transition matrix/matrices for each game, and the introduction of virtual situations (virtual reality) or dummies to improve the chances ...

  11. Designing a Virtual-Reality-Based, Gamelike Math Learning Environment

    Science.gov (United States)

    Xu, Xinhao; Ke, Fengfeng

    2016-01-01

    This exploratory study examined the design issues related to a virtual-reality-based, gamelike learning environment (VRGLE) developed via OpenSimulator, an open-source virtual reality server. The researchers collected qualitative data to examine the VRGLE's usability, playability, and content integration for math learning. They found it important…

  12. Conversation Threads Hidden within Email Server Logs

    Science.gov (United States)

    Palus, Sebastian; Kazienko, Przemysław

    Email server logs contain records of all email exchanged through the server. Often we would like to analyze those emails not separately but in conversation threads, especially when we need to analyze a social network extracted from those email logs. Unfortunately, each mail is a separate record, and those records are not tied to each other in any obvious way. In this paper a method for discussion thread extraction is proposed, together with experiments on two different data sets - Enron and WrUT.
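
    A simple baseline in the spirit of the paper: group messages whose subjects match after stripping reply/forward prefixes (real systems would additionally exploit In-Reply-To/References headers and timing).

        import re
        from collections import defaultdict

        PREFIX = re.compile(r"^\s*(re|fw|fwd)\s*:\s*", re.IGNORECASE)

        def normalize(subject: str) -> str:
            while PREFIX.match(subject):
                subject = PREFIX.sub("", subject, count=1)
            return subject.strip().lower()

        def threads(messages):
            """messages: iterable of (msg_id, subject) pairs."""
            groups = defaultdict(list)
            for msg_id, subject in messages:
                groups[normalize(subject)].append(msg_id)
            return dict(groups)

        log = [(1, "Budget 2024"), (2, "RE: Budget 2024"), (3, "Fwd: Re: Budget 2024")]
        print(threads(log))   # {'budget 2024': [1, 2, 3]}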

  13. Microsoft SQL Server OLAP Solution - A Survey

    OpenAIRE

    Badiozamany, Sobhan

    2010-01-01

    Microsoft SQL Server 2008 offers technologies for performing On-Line Analytical Processing (OLAP) directly on data stored in data warehouses, instead of moving the data into some offline OLAP tool. This brings certain benefits, such as elimination of data copying and better integration with the DBMS compared with off-line OLAP tools. This report reviews SQL Server support for OLAP, solution architectures, and the tools and components involved. Standard storage options are discussed but the focus of ...

  14. Solution for an Improved WEB Server

    Directory of Open Access Journals (Sweden)

    George PECHERLE

    2009-12-01

    We want to present a solution with maximum performance from a web server, in terms of the services that the server provides. We do not always know what tools to use or how to configure what we have in order to get what we need. Keeping the Internet-related services you provide in working condition can sometimes be a real challenge. And with the increasing demand for Internet services, we need to come up with solutions to problems that occur every day.

  15. Analysis of the Macroscopic Behavior of Server Systems in the Internet Environment

    Directory of Open Access Journals (Sweden)

    Yusuke Tanimura

    2017-11-01

    Elasticity is one of the key features of cloud-hosted services built on virtualization technology. To utilize the elasticity of cloud environments, administrators should accurately capture the operational status of server systems, which changes constantly as service requests arrive irregularly. However, it is difficult to detect in advance that operating services are heading toward an undesirable state. In this paper, we focus on the management of server systems, including cloud systems, and propose a new method for detecting the signs of undesirable scenarios before the system becomes overloaded as a result of various causes. In this method, a measure that utilizes the fluctuation of the macroscopic operational state observed in the server system is introduced. The proposed measure has the property of increasing drastically before the server system enters an undesirable state. Using the proposed measure, we realize a function that detects that the server system is falling into an overload scenario, and we demonstrate its effectiveness through experiments.
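
    The paper's statistic is not reproduced here, but the general idea, watching the fluctuation of a macroscopic load metric and flagging a jump well above its recent baseline, can be sketched as follows (window size and factor are invented).

        from collections import deque
        from statistics import pvariance

        class FluctuationMonitor:
            def __init__(self, window=60, factor=4.0):
                self.samples = deque(maxlen=window)
                self.baseline = None
                self.factor = factor

            def update(self, load: float) -> bool:
                """Feed one load sample; True signals an overload warning."""
                self.samples.append(load)
                if len(self.samples) < self.samples.maxlen:
                    return False
                var = pvariance(self.samples)
                if self.baseline is None:
                    self.baseline = var
                    return False
                alarm = var > self.factor * self.baseline
                # adapt the baseline slowly to normal operating conditions
                self.baseline = 0.95 * self.baseline + 0.05 * var
                return alarm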

  16. Beginning SQL Server Modeling Model-driven Application Development in SQL Server

    CERN Document Server

    Weller, Bart

    2010-01-01

    Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. Beginning SQL Server Modeling will help you gain a comprehensive understanding of how to apply DSLs and other modeling components in the development of SQL Server implementations. Most importantly, after reading the book and working through the examples, you will have considerable experience using SQL Server Modeling.

  17. Security challenges for virtualization in cloud

    International Nuclear Information System (INIS)

    Tayab, A.

    2015-01-01

    Virtualization is a model that is growing rapidly in the IT industry. Virtualization provides more than one logical resource on a single physical machine. Infrastructures use cloud services, and, building on virtualization, cloud computing is also a rapidly growing model in the IT industry. Cloud provider and cloud user both remain ignorant of each other's security. Since virtualization and cloud computing are rapidly expanding and becoming more and more complex in infrastructure, more security is required to protect them from potential attacks and security threats. Virtualization provides various benefits in terms of hardware utilization, resource protection, remote access, and other resources. This paper intends to discuss the common security exploits in virtualized environments and focuses on security threats from the attacker's perspective. The paper discusses the major areas of the virtualized model environment, addresses the security concerns, and finally presents a solution for secure virtualization in IT infrastructure and for protecting the intercommunication of virtual machines. (author)

  18. Machine Shop Grinding Machines.

    Science.gov (United States)

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  19. A Web-Server of Cell Type Discrimination System

    Directory of Open Access Journals (Sweden)

    Anyou Wang

    2014-01-01

    Discriminating cell types is a daily request for stem cell biologists. However, there is not a user-friendly system available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web server of a cell type discrimination system, to discriminate the three cell types and their subtypes, like fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication regarding cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, running under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models for analyzing data, and present results to users. This framework is flexible and easy to extend for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types, like cancer cells.

  20. Virtual Experience: The Impact of Mediated Communication in a Democratic Society.

    Science.gov (United States)

    Swartz, James D.; Hatcher, Tim

    1996-01-01

    Defines virtual experience as machine-based experience, of which virtual reality is a subconcept. Topics include a history of virtual experience; criticism of the influence of machine-mediated experiences such as computer games; virtual reality environments; and Heidegger's views on technology. (LRW)

  1. An adversarial queueing model for online server routing

    NARCIS (Netherlands)

    Bonifaci, V.

    2007-01-01

    In an online server routing problem, a vehicle or server moves in a network in order to process incoming requests at the nodes. Online server routing problems have been thoroughly studied using competitive analysis. We propose a new model for online server routing, based on adversarial queueing theory.

  2. Security Implications of Virtualization: A Literature Study

    NARCIS (Netherlands)

    van Cleeff, A.; Pieters, Wolter; Wieringa, Roelf J.

    2009-01-01

    Data centers accumulate corporate and personal data at a rapid pace. Driven by economy of scale and the high bandwidth of today's network connections, more and more businesses and individuals store their data remotely. Server virtualization is an important technology to facilitate this process.

  3. A Heuristic Task Scheduling Algorithm for Heterogeneous Virtual Clusters

    Directory of Open Access Journals (Sweden)

    Weiwei Lin

    2016-01-01

    Cloud computing provides on-demand computing and storage services with high performance and high scalability. However, the rising energy consumption of cloud data centers has become a prominent problem. In this paper, we first introduce an energy-aware framework for task scheduling in virtual clusters. The framework consists of a task resource requirements prediction module, an energy estimate module, and a scheduler with a task buffer. Secondly, based on this framework, we propose a virtual machine power efficiency-aware greedy scheduling algorithm (VPEGS). As a heuristic algorithm, VPEGS estimates task energy by considering factors including task resource demands, VM power efficiency, and server workload before scheduling tasks in a greedy manner. We simulated a heterogeneous VM cluster and conducted experiments to evaluate the effectiveness of VPEGS. Simulation results show that VPEGS effectively reduced total energy consumption by more than 20% without producing large scheduling overheads. With a similar heuristic ideology, it outperformed Min-Min and RASA with respect to energy saving by about 29% and 28%, respectively.
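
    A toy rendition of the greedy placement idea (the energy model is invented for illustration; the real VPEGS estimator is more detailed).

        def estimate_energy(task, vm):
            # Demand (CPU-seconds) divided by useful work per joule,
            # inflated on already-busy hosts.
            return task["cpu_s"] / vm["perf_per_watt"] * (1.0 + vm["host_load"])

        def vpegs_schedule(tasks, vms):
            plan = []
            for task in sorted(tasks, key=lambda t: -t["cpu_s"]):  # big tasks first
                best = min(vms, key=lambda vm: estimate_energy(task, vm))
                plan.append((task["id"], best["id"]))
                best["host_load"] += 0.1        # crude feedback of the added load
            return plan

        vms = [{"id": "vm1", "perf_per_watt": 2.0, "host_load": 0.3},
               {"id": "vm2", "perf_per_watt": 1.2, "host_load": 0.1}]
        tasks = [{"id": "t1", "cpu_s": 500}, {"id": "t2", "cpu_s": 120}]
        print(vpegs_schedule(tasks, vms))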

  4. Virtual colonoscopy

    Science.gov (United States)

    Colonoscopy - virtual; CT colonography; Computed tomographic colonography; Colography - virtual ... Differences between virtual and conventional colonoscopy include: VC can view the colon from many different angles. This is not as easy ...

  5. The Machine / Job Features Mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Alef, M. [KIT, Karlsruhe; Cass, T. [CERN; Keijser, J. J. [NIKHEF, Amsterdam; McNab, A. [Manchester U.; Roiser, S. [CERN; Schwickerath, U. [CERN; Sfiligoi, I. [Fermilab

    2017-11-22

    Within the HEPiX virtualization group and the Worldwide LHC Computing Grid's Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta-information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a web server. This paper describes the final version of the specification from 2016, which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
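
    A minimal consumer of the mechanism might look like the sketch below; per the specification, the MACHINEFEATURES and JOBFEATURES environment variables point either to a directory of one-value-per-file keys or to an HTTP(S) base URL (key names such as hs06 and wall_limit_secs follow the spec; consult the HSF note for the full list).

        import os
        import urllib.request

        def mjf_read(base_var: str, key: str):
            base = os.environ.get(base_var)
            if base is None:
                return None                      # mechanism not provided here
            if base.startswith(("http://", "https://")):
                with urllib.request.urlopen(f"{base}/{key}") as r:
                    return r.read().decode().strip()
            path = os.path.join(base, key)
            if os.path.exists(path):
                with open(path) as f:
                    return f.read().strip()
            return None

        print("HS06 power of this host:", mjf_read("MACHINEFEATURES", "hs06"))
        print("Wall-time limit (s):    ", mjf_read("JOBFEATURES", "wall_limit_secs"))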

  6. Installing and Testing a Server Operating System

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2003-08-01

    The paper is based on the experience of the author with FreeBSD server operating system administration on three servers in use under the academicdirect.ro domain. The paper describes a set of installation, preparation, and administration aspects of a FreeBSD server. The first issue of the paper is the installation procedure of the FreeBSD operating system on the i386 computer architecture. Discussed problems are boot disk preparation and use, hard disk partitioning, and operating system installation using an existing network topology and an internet connection. The second issue is the optimization procedure of the operating system and the installation and configuration of server services. Discussed problems are kernel and services configuration, and system and services optimization. The third issue is about client-server applications. Using operating system utility calls, we present an original application which displays the system information in a friendly web interface. An original program designed for molecular structure analysis was adapted for system performance comparisons, and it serves for a discussion of Pentium, Pentium II and Pentium III processor computation speeds. The last issue of the paper discusses the installation and configuration aspects of dial-in service on a UNIX-based operating system. The discussion includes the configuration of serial ports and the ppp and pppd services, and the use of the ppp and tun devices.

  7. Analysis of Virtual Sensors for Predicting Aircraft Fuel Consumption

    Data.gov (United States)

    National Aeronautics and Space Administration — Previous research described the use of machine learning algorithms to predict aircraft fuel consumption. This technique, known as Virtual Sensors, models fuel...

  8. Change in brain activity through virtual reality-based brain-machine communication in a chronic tetraplegic subject with muscular dystrophy.

    Science.gov (United States)

    Hashimoto, Yasunari; Ushiba, Junichi; Kimura, Akio; Liu, Meigen; Tomita, Yutaka

    2010-09-16

    For severely paralyzed people, a brain-computer interface (BCI) provides a way of re-establishing communication. Although subjects with muscular dystrophy (MD) appear to be potential BCI users, the actual long-term effects of BCI use on brain activities in MD subjects have yet to be clarified. To investigate these effects, we followed BCI use by a chronic tetraplegic subject with MD over 5 months. The topographic changes in an electroencephalogram (EEG) after long-term use of the virtual reality (VR)-based BCI were also assessed. Our originally developed BCI system was used to classify an EEG recorded over the sensorimotor cortex in real time and estimate the user's motor intention (MI) in 3 different limb movements: feet, left hand, and right hand. An avatar in the internet-based VR was controlled in accordance with the results of the EEG classification by the BCI. The subject was trained to control his avatar via the BCI by strolling in the VR for 1 hour a day and then continued the same training twice a month at his home. After the training, the error rate of the EEG classification decreased from 40% to 28%. The subject successfully walked around in the VR using only his MI and chatted with other users through a voice-chat function embedded in the internet-based VR. With this improvement in BCI control, event-related desynchronization (ERD) following MI was significantly enhanced (p < 0.01) for feet MI (from -29% to -55%), left-hand MI (from -23% to -42%), and right-hand MI (from -22% to -51%). These results show that our subject with severe MD was able to learn to control his EEG signal and communicate with other users through use of VR navigation and suggest that an internet-based VR has the potential to provide paralyzed people with the opportunity for easy communication.

  9. Change in brain activity through virtual reality-based brain-machine communication in a chronic tetraplegic subject with muscular dystrophy

    Directory of Open Access Journals (Sweden)

    Liu Meigen

    2010-09-01

    Background: For severely paralyzed people, a brain-computer interface (BCI) provides a way of re-establishing communication. Although subjects with muscular dystrophy (MD) appear to be potential BCI users, the actual long-term effects of BCI use on brain activities in MD subjects have yet to be clarified. To investigate these effects, we followed BCI use by a chronic tetraplegic subject with MD over 5 months. The topographic changes in an electroencephalogram (EEG) after long-term use of the virtual reality (VR)-based BCI were also assessed. Our originally developed BCI system was used to classify an EEG recorded over the sensorimotor cortex in real time and estimate the user's motor intention (MI) in 3 different limb movements: feet, left hand, and right hand. An avatar in the internet-based VR was controlled in accordance with the results of the EEG classification by the BCI. The subject was trained to control his avatar via the BCI by strolling in the VR for 1 hour a day and then continued the same training twice a month at his home. Results: After the training, the error rate of the EEG classification decreased from 40% to 28%. The subject successfully walked around in the VR using only his MI and chatted with other users through a voice-chat function embedded in the internet-based VR. With this improvement in BCI control, event-related desynchronization (ERD) following MI was significantly enhanced (p < 0.01). Conclusions: These results show that our subject with severe MD was able to learn to control his EEG signal and communicate with other users through use of VR navigation, and suggest that an internet-based VR has the potential to provide paralyzed people with the opportunity for easy communication.

  10. Can machine learning on learner analytics produce a predictive model on student performance?

    OpenAIRE

    Busch, John; Hanna, Philip; O'Neill, Ian; McGowan, Aidan; Collins, Matthew

    2017-01-01

    The aim of this research is to analyse past student learner analytics, using machine learning algorithms, from students who had undertaken a web development and programming module. Specifically using the access and error web server logs from each student's web server provides deeper learner-analytics data. The web server logs every file access and error access from a browser, so in turn each data file can be directly related to a student's engagement level and assessment strategy. Each log holds several...
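
    A sketch of how such log-derived features might be computed, assuming Apache-style common log lines and one access log per student (both assumptions for illustration).

        import re
        from collections import Counter

        LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
                          r'"(?P<req>[^"]*)" (?P<status>\d{3}) ')

        def engagement(access_log_path):
            """Daily hit and error counts as a crude engagement signal."""
            hits, errors = Counter(), Counter()
            with open(access_log_path) as f:
                for line in f:
                    m = LINE.match(line)
                    if not m:
                        continue
                    day = m.group("ts").split(":")[0]   # e.g. 12/Mar/2017
                    hits[day] += 1
                    if m.group("status").startswith(("4", "5")):
                        errors[day] += 1
            return hits, errors

        # Example with a hypothetical per-student log file:
        # hits, errors = engagement("student42_access.log")
        # print("active days:", len(hits), "days with errors:", len(errors))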

  11. Energy-efficient server management; Energieeffizientes Servermanagement

    Energy Technology Data Exchange (ETDEWEB)

    Sauter, B.

    2003-07-01

    This final report for the Swiss Federal Office of Energy (SFOE) presents the results of a project that aimed to develop an automatic shut-down system for the servers used in typical electronic data processing installations to be found in small and medium-sized enterprises. The purpose of shutting down these computers - the saving of energy - is discussed. The development of a shutdown unit on the basis of a web-server that automatically shuts down the servers connected to it and then interrupts their power supply is described. The functions of the unit, including pre-set times for switching on and off, remote operation via the Internet and its interaction with clients connected to it are discussed. Examples of the system's user interface are presented.

  12. Macroscopic transport by synthetic molecular machines

    NARCIS (Netherlands)

    Berna, J; Leigh, DA; Lubomska, M; Mendoza, SM; Perez, EM; Rudolf, P; Teobaldi, G; Zerbetto, F

    Nature uses molecular motors and machines in virtually every significant biological process, but demonstrating that simpler artificial structures operating through the same gross mechanisms can be interfaced with - and perform physical tasks in - the macroscopic world represents a significant hurdle

  13. Synthetic hardware performance analysis in virtualized cloud environment for healthcare organization.

    Science.gov (United States)

    Tan, Chee-Heng; Teh, Ying-Wah

    2013-08-01

    The main obstacles to mass adoption of cloud computing for database operations in healthcare organizations are data security and privacy issues. In this paper, it is shown that IT services, particularly hardware performance evaluation in virtual machines, can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilize hypothetical data from the TPC-H benchmark to achieve two objectives. First, the underlying hardware performance and consistency are monitored via a control system, which is constructed using TPC-H queries. Second, a mechanism to construct stress-testing scenarios in the host is envisaged, using a single TPC-H query or a combination of them, so that it can be verified whether the virtual machine is still capable of serving critical transactions at the resource threshold point. This threshold point uses server run queue size as its input parameter and serves two purposes: it provides the boundary threshold to the control system, so that periodic learning on the synthetic data sets for performance evaluation does not reach the host's constraint level; and, when the host undergoes a hardware change, stress-testing scenarios are simulated in the host by loading it up to this resource threshold level, for subsequent response-time verification on real and critical transactions.
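
    The run-queue threshold check could be driven by a small watchdog like this sketch, which uses the 1-minute load average as a run-queue proxy; the threshold, target, and polling values are invented.

        import os
        import time

        RUNQ_THRESHOLD = 8.0      # calibrated per host
        RESPONSE_TARGET = 2.0     # seconds, for the critical transaction

        def critical_transaction() -> float:
            start = time.monotonic()
            # ... issue the reference query against the database here ...
            return time.monotonic() - start

        for _ in range(120):                      # poll for up to ~10 minutes
            runq = os.getloadavg()[0]             # Unix run-queue proxy
            if runq < RUNQ_THRESHOLD:
                time.sleep(5)
                continue
            elapsed = critical_transaction()
            print(f"at threshold: response {elapsed:.2f}s, "
                  f"ok={elapsed <= RESPONSE_TARGET}")
            break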

  14. Software for virtual accelerator designing

    International Nuclear Information System (INIS)

    Kulabukhova, N.; Ivanov, A.; Korkhov, V.; Lazarev, A.

    2012-01-01

    The article discusses appropriate technologies for software implementation of the Virtual Accelerator. The Virtual Accelerator is considered as a set of services and tools enabling transparent execution of computational software for modeling beam dynamics in accelerators on distributed computing resources. Distributed storage and information processing facilities utilized by the Virtual Accelerator make use of the Service-Oriented Architecture (SOA) according to a cloud computing paradigm. Control system tool-kits (such as EPICS and TANGO), computing modules (including high-performance computing), realization of the GUI with existing frameworks, and visualization of the data are discussed in the paper. The presented research consists of a software analysis for realization of interaction between all levels of the Virtual Accelerator and some samples of middleware implementation. A set of servers and clusters at St. Petersburg State University forms the infrastructure of the computing environment for Virtual Accelerator design. The use of component-oriented technology to realize the interaction between Virtual Accelerator levels is proposed. The article concludes with an overview and substantiation of the choice of technologies that will be used for design and implementation of the Virtual Accelerator. (authors)

  15. Professional Microsoft SQL Server 2012 Integration Services

    CERN Document Server

    Knight, Brian; Moss, Jessica M; Davis, Mike; Rock, Chris

    2012-01-01

    An in-depth look at the radical changes in the newest release of SSIS. Microsoft SQL Server 2012 Integration Services (SSIS) builds on the revolutionary database product suite first introduced in 2005. With this crucial resource, you will explore how this newest release serves as a powerful tool for performing extraction, transformation, and load (ETL) operations. A team of SQL Server experts deciphers this complex topic and provides detailed coverage of the new features of the 2012 product release. In addition to technical updates and additions, the authors present you with a new set of SSIS best practices.

  16. Windows Server® 2008 Inside Out

    CERN Document Server

    Stanek, William R

    2009-01-01

    Learn how to conquer Windows Server 2008-from the inside out! Designed for system administrators, this definitive resource features hundreds of timesaving solutions, expert insights, troubleshooting tips, and workarounds for administering Windows Server 2008-all in concise, fast-answer format. You will learn how to perform upgrades and migrations, automate deployments, implement security features, manage software updates and patches, administer users and accounts, manage Active Directory® directory services, and more. With INSIDE OUT, you'll discover the best and fastest ways to perform core administrative tasks.

  17. On the single-server retrial queue

    Directory of Open Access Journals (Sweden)

    Djellab Natalia V.

    2006-01-01

    In this work, we review the stochastic decomposition for the number of customers in M/G/1 retrial queues with a reliable server and with a server subject to breakdowns, which has been the subject of investigation in the literature. Using the decomposition property of M/G/1 retrial queues with breakdowns, which holds under the exponential assumption for retrial times, as an approximation in the non-exponential case, we consider an approximate solution for the steady-state queue size distribution.
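
    For orientation, the classical decomposition in the reliable-server, exponential-retrial case can be written as follows (a standard form, summarized here rather than quoted from the paper):

        \Pi(z) \;=\; \frac{(1-\rho)\,(1-z)\,\beta\!\big(\lambda(1-z)\big)}{\beta\!\big(\lambda(1-z)\big)-z}\;\Phi(z),

    where \lambda is the arrival rate, \beta the Laplace-Stieltjes transform of the service time, \rho = \lambda E[S] < 1, and \Phi(z) the generating function of the number of customers in orbit given an idle server; the first factor is the Pollaczek-Khinchine queue-length transform of the ordinary M/G/1 queue.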

  18. Effect of training data size and noise level on support vector machines virtual screening of genotoxic compounds from large compound libraries.

    Science.gov (United States)

    Kumar, Pankaj; Ma, Xiaohua; Liu, Xianghui; Jia, Jia; Bucong, Han; Xue, Ying; Li, Ze Rong; Yang, Sheng Yong; Wei, Yu Quan; Chen, Yu Zong

    2011-05-01

    Various in vitro and in-silico methods have been used for drug genotoxicity tests, which show limited genotoxicity (GT+) and non-genotoxicity (GT-) identification rates. New methods and combinatorial approaches have been explored for enhanced collective identification capability. The rates of in-silco methods may be further improved by significantly diversified training data enriched by the large number of recently reported GT+ and GT- compounds, but a major concern is the increased noise levels arising from high false-positive rates of in vitro data. In this work, we evaluated the effect of training data size and noise level on the performance of support vector machines (SVM) method known to tolerate high noise levels in training data. Two SVMs of different diversity/noise levels were developed and tested. H-SVM trained by higher diversity higher noise data (GT+ in any in vivo or in vitro test) outperforms L-SVM trained by lower noise lower diversity data (GT+ in in vivo or Ames test only). H-SVM trained by 4,763 GT+ compounds reported before 2008 and 8,232 GT- compounds excluding clinical trial drugs correctly identified 81.6% of the 38 GT+ compounds reported since 2008, predicted 83.1% of the 2,008 clinical trial drugs as GT-, and 23.96% of 168 K MDDR and 27.23% of 17.86M PubChem compounds as GT+. These are comparable to the 43.1-51.9% GT+ and 75-93% GT- rates of existing in-silico methods, 58.8% GT+ and 79% GT- rates of Ames method, and the estimated percentages of 23% in vivo and 31-33% in vitro GT+ compounds in the "universe of chemicals". There is a substantial level of agreement between H-SVM and L-SVM predicted GT+ and GT- MDDR compounds and the prediction from TOPKAT. SVM showed good potential in identifying GT+ compounds from large compound libraries based on higher diversity and higher noise training data.

  19. BUILDING A LINUX-BASED SERVER ON THE LAN NETWORK OF THE INFORMATION SYSTEMS LABORATORY, INFORMATION TECHNOLOGY DEPARTMENT, POLITEKNIK NEGERI PADANG

    Directory of Open Access Journals (Sweden)

    Fifi Rasyidah

    2014-03-01

    The Information Systems Laboratory of the Information Technology Department at Politeknik Negeri Padang has 30 computers as educational facilities to support the learning process. All of the computers are used at the same time in a learning session, which makes it difficult to monitor each student's activities. To provide a solution for the lecturer, the writer constructs a server using the Linux operating system and clients using the Windows operating system, for which a Samba file server is needed. By using Samba, the lecturer is able to share data and to use the server as data storage media. Besides that, the writer also uses VNC (Virtual Network Computing) to simplify the process of monitoring and supervising the clients. Based on the results of the experiments, it can be concluded that the Samba file server can be used after some configuration is applied to certain files, and that VNC can control the entire client. The writer suggests using the latest version of the Samba file server, which has more features than the previous one, and applying the VNC configuration on Ubuntu Linux, since the service is available there. Keywords: Samba File Server, VNC, Ubuntu installation

  20. Design and Construction of Wireless Control System for Drilling Machine

    Directory of Open Access Journals (Sweden)

    Nang Su Moan Hsam

    2015-06-01

    Drilling machines are used for boring holes in various materials in woodworking, metalworking, construction, and do-it-yourself projects. When a machine operates for a long time its temperature increases, so we need to control the temperature of the machine, and a lubrication system needs to be applied to reduce the temperature. Thanks to improvements in technology, the system can be controlled over a wireless network. This control system uses Windows Communication Foundation (WCF), a recent service-oriented technology, to control all drilling machines in an industrial setting simultaneously. All drilling machines start working when they receive a command from the server. After a machine has been running for a long time, its temperature gradually increases. The system uses an LM35 temperature sensor to measure the temperature. When the temperature exceeds the safe level programmed in the host server, the controller at the server commands the system to reduce the motor speed and to apply lubrication at the tip and edges of the drill. The command from the server is received by the client and sent to the PIC. In this control system a PIC microcontroller is used as an interface between the client computer and the machine. The motor speed is controlled with PWM, and a water pump system is used for lubrication. The control system is designed and simulated with a 12 V DC motor, an LM35 sensor, an LCD display, and a relay which opens the water container to spray water between the drill and the workpiece. The host server controls the drilling machines that are overheating by selecting the IP address of the client connected to each machine.
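
    An illustrative control step mirroring the described behaviour (the 10 mV per degree Celsius LM35 scale is the device's usual convention; the threshold, duty steps, and ADC parameters are invented).

        SAFE_C = 55.0   # safe temperature level programmed in the host server

        def read_lm35_celsius(adc_counts, vref=5.0, bits=10):
            volts = adc_counts * vref / (2 ** bits - 1)
            return volts / 0.010          # LM35: 10 mV per degree Celsius

        def control_step(adc_counts, duty):
            """Return (new PWM duty, pump relay state, temperature)."""
            temp = read_lm35_celsius(adc_counts)
            pump_on = temp > SAFE_C
            if pump_on:
                duty = max(0.3, duty - 0.1)   # slow the motor while cooling
            return duty, pump_on, temp

        print(control_step(120, duty=0.9))    # ~58.7 C -> slow down, pump on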

  1. A new man-machine-interface at BESSY

    International Nuclear Information System (INIS)

    Mueller, R.; Doll, H.D.; Donasch, I.J.; Marxen, H.; Pause, H.

    1991-01-01

    A UIMS (user interface management system) has been developed that is completely based on non-proprietary software. The central part of the UIMS is a set of processes (mappers) that act as universal X clients for each specified X server. The mapper (graphic server) and the applications (graphic clients) exchange requests through an event-driven interface; the communication protocol is free of any graphical information. The most powerful mapper client is a form interpreter that can be programmed to act as an equipment access server. The mapper and the form interpreter allow control panels and synoptic views of the machine to be composed with statements in a simple and comprehensible UIDL (user interface definition language)

  2. Virtualization in the Operations Environments

    Science.gov (United States)

    Pitts, Lee; Lankford, Kim; Felton, Larry; Pruitt, Robert

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less": more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the advantages and disadvantages of virtualization in all of the environments associated with the software and systems development-to-operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.

  3. Microsoft Exchange Server PowerShell cookbook

    CERN Document Server

    Andersson, Jonas

    2015-01-01

    This book is for messaging professionals who want to build real-world scripts with Windows PowerShell 5 and the Exchange Management Shell. If you are a network or systems administrator responsible for managing and maintaining Exchange Server 2013, you will find this highly useful.

  4. Client/Server Architecture Promises Radical Changes.

    Science.gov (United States)

    Freeman, Grey; York, Jerry

    1991-01-01

    This article discusses the emergence of the client/server paradigm for the delivery of computer applications, its emergence in response to the proliferation of microcomputers and local area networks, the applicability of the model in academic institutions, and its implications for college campus information technology organizations. (Author/DB)

  5. Implementing bioinformatic workflows within the bioextract server

    Science.gov (United States)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  6. Solarwinds Server & Application Monitor deployment and administration

    CERN Document Server

    Brant, Justin

    2013-01-01

    A concise and practical guide to using SolarWinds Server & Application Monitor. If you are an IT professional, from entry-level technician to advanced network or system administrator, who is new to network monitoring services and/or SolarWinds SAM, this book is ideal for you.

  7. Creating a Data Warehouse using SQL Server

    DEFF Research Database (Denmark)

    Sørensen, Jens Otto; Alnor, Karl

    1999-01-01

    In this paper we construct a Star Join Schema and show how this schema can be created using the basic tools delivered with SQL Server 7.0. Major objectives are to keep the operational database unchanged so that data loading can be done without disturbing the business logic of the operational...
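
    The paper builds its schema with SQL Server 7.0 tools; purely to illustrate the star join shape itself (one fact table keyed to surrounding dimension tables), here is a self-contained sketch using Python's built-in sqlite3, with invented table and column names:

```python
# A minimal star join schema: dimension tables around one fact table.
# Table and column names are illustrative, not taken from the paper.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_store   (store_id INTEGER PRIMARY KEY, city TEXT, region TEXT);
CREATE TABLE fact_sales (
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    store_id   INTEGER REFERENCES dim_store(store_id),
    quantity   INTEGER,
    amount     REAL
);
""")

# A star join: the fact table joined to each dimension it references.
rows = conn.execute("""
SELECT d.year, p.category, s.region, SUM(f.amount)
FROM fact_sales f
JOIN dim_date d    ON f.date_id = d.date_id
JOIN dim_product p ON f.product_id = p.product_id
JOIN dim_store s   ON f.store_id = s.store_id
GROUP BY d.year, p.category, s.region
""").fetchall()
print(rows)  # empty until the warehouse is loaded
```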

  8. Mastering SQL Server 2014 data mining

    CERN Document Server

    Bassan, Amarpreet Singh

    2014-01-01

    If you are a developer who is working on data mining for large companies and would like to enhance your knowledge of SQL Server Data Mining Suite, this book is for you. Whether you are brand new to data mining or are a seasoned expert, you will be able to master the skills needed to build a data mining solution.

  9. Data center virtualization and its economic implications for the companies

    Directory of Open Access Journals (Sweden)

    Cristian STEFAN

    2009-05-01

    Full Text Available In the current situation of the economic crisis, when companies target budget cuts in a context of explosive data growth, the IT community must evaluate potential technology developments not only on their technical advantages, but on their economic effects as well. More than ever, the old cliché "doing more things with fewer resources" is true today. Many IT companies started building very large facilities, called data centers (DCs) or Internet DCs (IDCs), which provide businesses a wide range of solutions for systems deployment and operation. In recent years, IT departments around the world have moved from data center and infrastructure consolidation to virtualization. Data center virtualization is the process of aligning available resources with the actual needs of the offered services, moving from physical servers to virtual servers, sharing and provisioning servers, networks, storage, and applications. By taking advantage of three basic innovations (virtualization, tiered storage architectures, and dynamic provisioning software) an organization can achieve greater efficiencies in its current computing environment. Such a unified computing architecture offers end-to-end virtualization; all structures are optimized for virtualized environments, from the CPU to the aggregation layer. In combination with embedded management, this new approach increases responsiveness and reduces the opportunities for human error, improving consistency and reducing server and network deployment times.

  10. Virtual Prototyping at CERN

    Science.gov (United States)

    Gennaro, Silvano De

    The VENUS (Virtual Environment Navigation in the Underground Sites) project is probably the largest Virtual Reality application to engineering design in the world. VENUS is just over one year old and offers a fully immersive and stereoscopic "flythru" of the LHC pits for the proposed experiments, including the experimental area equipment and the surface models that are being prepared for a territorial impact study. VENUS' Virtual Prototypes are an ideal replacement for the wooden models traditionally built for past CERN machines: they are generated directly from the EUCLID CAD files, therefore they are totally reliable, they can be updated in a matter of minutes, and they allow designers to explore them from inside, at one-to-one scale. Navigation can be performed on the computer screen, on a large stereoscopic projection screen, or in immersive conditions, with a helmet and 3D mouse. By using specialised collision detection software, the computer can find optimal paths to lower each detector part into the pits and move it to its destination, letting us visualize the whole assembly process. During construction, these paths can be fed to a robot controller, which can operate the bridge cranes and build the LHC almost without human intervention. VENUS is currently developing a multiplatform VR browser that will let the whole HEP community access LHC's Virtual Prototypes over the web.

  11. ANALYSIS OF VIRTUAL ENVIRONMENT BENEFIT IN E-LEARNING

    Directory of Open Access Journals (Sweden)

    NOVÁK, Martin

    2013-06-01

    Full Text Available This article analyses the benefits of a virtual environment for improving the e-learning process. The virtual environment was created within the project 'Virtualization' at the Faculty of Economics and Administration, University of Pardubice. The aim of this project was to eliminate the disproportion in free access to licensed software between part-time and full-time students. The research was carried out within selected subjects of the study programme System Engineering and Informatics; the subjects covered informatics, applied informatics, control, and decision making. Students' results, feedback from an electronic questionnaire, and virtual server usage logs were compared and analysed. Based on an analysis of virtualization options, the virtual environment was implemented using Microsoft Terminal Server.

  12. RANCANG BANGUN PERANGKAT LUNAK MANAJEMEN DATABASE SQL SERVER BERBASIS WEB

    Directory of Open Access Journals (Sweden)

    Muchammad Husni

    2005-01-01

    Full Text Available Microsoft SQL Server is a client/server desktop database server application: it has a client component, which displays and manipulates data, and a server component, which stores, retrieves, and secures the database. Management operations on all database servers in the network are performed by the database administrator using SQL Server's main administrative tool, Enterprise Manager. As a result, the database administrator can only perform these operations on a computer on which Microsoft SQL Server has been installed. In this study, a web-based application was designed with ASP.Net to manage the database server. The application uses ADO.NET, which relies on Transact-SQL and stored procedures on the server, to perform database management operations on a SQL database server and display the results on the web. The database administrator can run the web application from any computer on the network and connect to the SQL database server using a web browser. This makes it easier for the administrator to carry out their tasks without having to use the server computer. Keywords: Transact-SQL, ASP.Net, ADO.NET, SQL Server
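
    The paper's implementation is ASP.NET with ADO.NET; as a rough analogue of the same pattern (management operations issued remotely as Transact-SQL and stored-procedure calls rather than from the server console), here is a Python sketch using pyodbc with a placeholder connection string:

```python
# Rough analogue of the paper's pattern: management operations are issued
# remotely as Transact-SQL / stored-procedure calls, not run on the server
# console. Requires the pyodbc package and a reachable SQL Server; the
# connection string below is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db-host;DATABASE=master;UID=admin;PWD=secret",
    autocommit=True,  # DDL such as CREATE DATABASE cannot run in a transaction
)
cursor = conn.cursor()

# A management query via a system stored procedure: list all databases.
cursor.execute("EXEC sp_databases")
for row in cursor.fetchall():
    print(row.DATABASE_NAME)

# A management operation expressed as plain Transact-SQL.
cursor.execute("CREATE DATABASE inventory")
```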

  13. Factory Virtual Environment Development for Augmented and Virtual Reality

    OpenAIRE

    M. Gregor; J. Polcar; P. Horejsi; M. Simon

    2015-01-01

    Machine visualization is an area of interest with fast and progressive development. We present a method of machine visualization which will be applicable in real industrial conditions according to current needs and demands. Real factory data were obtained in a newly built research plant. Methods described in this paper were validated on a case study. Input data were processed and the virtual environment was created. The environment contains information about dimensions, s...

  14. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview on current sustainable machining. Its chapters cover the concept in economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful on both undergraduate and postgraduate levels and it is of interest to all those working with manufacturing and machining technology.

  15. Comparison of Certification Authority Roles in Windows Server 2003 and Windows Server 2008

    Directory of Open Access Journals (Sweden)

    A. I. Luchnik

    2011-03-01

    Full Text Available An analysis of the Certification Authority components of Microsoft server operating systems was conducted. Based on the results, the main directions of development of certification authorities and PKI are highlighted.

  16. Material and Virtuality

    DEFF Research Database (Denmark)

    Kruse Aagaard, Anders

    2015-01-01

    Through tangible experiments this paper discusses the dialogues between digital architectural drawing and the process of materialisation. The paper sets up the spans between virtual and actual, and control and uncertainty, making these oppositions a combined space where information between a digital... world and a physical world can interchange. The paper suggests an approach where an overlapping of virtuality and the tangible material output from digital fabrication machines creates a method of using materialisation tools as instruments to connect the reality of materials to an exploring process...... through these experiments is both tangible and directly connected to real actions in digital drawing or material processing, but also the base for theoretical contemplations of the relation between virtual and actual, and control and uncertainty....

  17. Network characteristics for server selection in online games

    Science.gov (United States)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability: latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or racing games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
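
    As a toy illustration of the selection criterion studied here (a group should prefer servers that keep both the group's latency and the latency spread across members low), consider the following sketch; the latency samples and scoring rule are invented:

```python
# Toy group-based server selection: prefer servers with low median latency
# AND low spread across the group's members (fairness).
from statistics import median

# latencies[server] = per-player round-trip times in milliseconds (invented)
latencies = {
    "server-a": [35, 40, 180, 38],
    "server-b": [80, 85, 90, 95],
    "server-c": [20, 250, 30, 40],
}

def score(samples, spread_weight=1.0):
    """Lower is better: median latency plus a penalty for unfairness."""
    spread = max(samples) - min(samples)
    return median(samples) + spread_weight * spread

best = min(latencies, key=lambda s: score(latencies[s]))
for name, samples in latencies.items():
    print(name, "score:", score(samples))
print("selected:", best)  # server-b wins despite a higher median: it is fair
```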

  18. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity of the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group
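
    The paper's network architecture and monitoring data are not reproduced in the record; a minimal sketch of the general approach (a recurrent network trained on sliding windows of CPU, network, and memory usage to flag an upcoming error) might look like the following Keras code with synthetic data:

```python
# Sketch: recurrent network over resource-usage windows (synthetic data).
# Requires TensorFlow; architecture and window size are illustrative choices.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
WINDOW, FEATURES = 30, 3  # 30 time steps of (cpu, network, memory) usage

# Synthetic monitoring data: failure risk rises with sustained high CPU.
X = rng.uniform(0, 1, size=(1000, WINDOW, FEATURES)).astype("float32")
y = (X[:, -10:, 0].mean(axis=1) > 0.7).astype("float32")  # 1 = error expected

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Predict the error probability for the most recent window of measurements.
print(model.predict(X[:1], verbose=0))
```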

  19. A Two-Tier Energy-Aware Resource Management for Virtualized Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2016-01-01

    Full Text Available The economic costs of electric power make up the most significant part of a data center's total cost; thus energy conservation is an important issue in cloud computing systems. One well-known technique to reduce energy consumption is the consolidation of Virtual Machines (VMs). However, it may sacrifice some energy savings and Quality of Service (QoS) under dynamic workloads. Fortunately, Dynamic Frequency and Voltage Scaling (DVFS) is an efficient technique for saving energy in dynamic environments. In this paper, combined with DVFS technology, we propose a cooperative two-tier energy-aware management method comprising local DVFS control and global VM deployment. The DVFS controller adjusts the frequencies of the homogeneous processors in each server at run time based on practical energy prediction. On the other hand, the Global Scheduler assigns VMs to the designated servers in cooperation with the local DVFS controllers. The final evaluation results demonstrate the effectiveness of our two-tier method in energy saving.
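
    As a toy sketch of the two tiers described above, the local controller below picks the lowest frequency that keeps predicted utilization under a target, while the global scheduler places each VM on the server whose resulting utilization is lowest; all frequencies, capacities, and loads are invented:

```python
# Toy two-tier energy-aware management: local DVFS + global VM placement.
# Frequencies, capacities, and loads are invented for illustration.
FREQ_LEVELS_GHZ = [1.2, 1.8, 2.4, 3.0]   # per-server DVFS steps

class Server:
    def __init__(self, name, capacity_at_max=100.0):
        self.name = name
        self.capacity_at_max = capacity_at_max  # load units at top frequency
        self.load = 0.0

    def utilization_at(self, freq):
        cap = self.capacity_at_max * freq / FREQ_LEVELS_GHZ[-1]
        return self.load / cap

    def local_dvfs(self, target_util=0.8):
        """Local tier: lowest frequency keeping utilization under target."""
        for f in FREQ_LEVELS_GHZ:
            if self.utilization_at(f) <= target_util:
                return f
        return FREQ_LEVELS_GHZ[-1]

def global_place(servers, vm_load):
    """Global tier: place the VM where resulting utilization is lowest."""
    best = min(servers, key=lambda s: (s.load + vm_load) / s.capacity_at_max)
    best.load += vm_load
    return best

servers = [Server("s1"), Server("s2")]
for vm in [30.0, 50.0, 20.0]:
    host = global_place(servers, vm)
    print(f"VM({vm}) -> {host.name}, DVFS={host.local_dvfs()} GHz")
```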

  20. Single server queueing networks with varying service times and renewal input

    Directory of Open Access Journals (Sweden)

    Pierre Le Gall

    2000-01-01

    Full Text Available Using recent results on tandem queues and queueing networks with renewal input, when successive service times of the same customer vary (and when the busy periods are frequently not broken up, as in large networks), the local queueing delay of a single-server queueing network is evaluated using the new concepts of virtual and actual delay, respectively. It appears that, because of an important property due to the underlying tandem queue effect, the usual queueing standards (related to long queues) cannot protect against significant overloads in the buffers caused by a possible "agglutination phenomenon" (related to short queues). Usual network management methods and traffic simulation methods should be revised, and should monitor the loads of the partial traffic streams (and not only the server load).
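
    The virtual/actual delay analysis itself is not reproduced here, but the basic object of study, the waiting time of successive customers at a single server under renewal input, follows the classical Lindley recursion W(n+1) = max(0, W(n) + S(n) - A(n+1)). A small simulation sketch with exponential interarrival and service times (an illustrative traffic model, not the paper's):

```python
# Waiting times at a single server under renewal input, via the Lindley
# recursion W[n+1] = max(0, W[n] + S[n] - A[n+1]); the exponential
# distributions are illustrative, not the paper's traffic model.
import random

random.seed(42)
N = 100_000
LAM, MU = 0.8, 1.0  # arrival rate, service rate (utilization 0.8)

w, total = 0.0, 0.0
for _ in range(N):
    service = random.expovariate(MU)        # varying service time S[n]
    interarrival = random.expovariate(LAM)  # renewal input A[n+1]
    w = max(0.0, w + service - interarrival)
    total += w

print(f"mean waiting time: {total / N:.3f}")
# For M/M/1 the theoretical mean wait is rho/(mu - lam) = 0.8/0.2 = 4.0.
```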

  1. Client/server approach to image capturing

    Science.gov (United States)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum-scanners with photo multiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven

  2. Construction of a nuclear data server using TCP/IP

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko; Sakai, Osamu [Kyushu Univ., Fukuoka (Japan)

    1997-03-01

    We construct a nuclear data server which provides data in the evaluated nuclear data library through the network by means of TCP/IP. The client is not necessarily a user but a computer program. Two examples with a prototype server program are demonstrated, the first is data transfer from the server to a user, and the second is to a computer program. (author)
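
    The abstract specifies only that the server answers TCP/IP requests and that the client may be a program rather than a user; a minimal sketch of that pattern with Python's standard library follows, with an invented request format and placeholder data:

```python
# Minimal TCP data server: a program or a user (e.g. via telnet/netcat)
# sends a key and receives the matching record. Protocol and data invented.
import socketserver

NUCLEAR_DATA = {  # placeholder records, not a real evaluated library
    "U-235": "fission cross section: ...",
    "Fe-56": "capture cross section: ...",
}

class DataHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One request per line: the client sends a nuclide key.
        key = self.rfile.readline().decode().strip()
        reply = NUCLEAR_DATA.get(key, "unknown nuclide")
        self.wfile.write((reply + "\n").encode())

if __name__ == "__main__":
    with socketserver.TCPServer(("localhost", 9090), DataHandler) as srv:
        srv.serve_forever()
```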

  3. On-line single server dial-a-ride problems

    NARCIS (Netherlands)

    Feuerstein, E.; Stougie, L.

    1998-01-01

    In this paper, results on the dial-a-ride problem with a single server are presented. Requests for rides consist of two points in a metric space, a source and a destination. A ride has to be made by the server from the source to the destination. The server travels at unit speed in the metric space.
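
    The paper's competitive analysis is not reproduced here; as a baseline illustration of the setting, the sketch below runs a naive greedy online strategy on the line metric, always serving the released request whose source is nearest to the server's current position (requests and metric are invented):

```python
# Online single-server dial-a-ride on the real line, greedy baseline:
# serve the nearest released request first. The server moves at unit
# speed; each request is (release_time, source, destination).
requests = [(0.0, 5.0, 2.0), (1.0, -3.0, -1.0), (4.0, 6.0, 9.0)]

pos, t = 0.0, 0.0
pending = sorted(requests)  # kept ordered by release time
done = []

while pending:
    released = [r for r in pending if r[0] <= t]
    if not released:               # idle until the next request appears
        t = pending[0][0]
        continue
    ride = min(released, key=lambda r: abs(r[1] - pos))
    rel, src, dst = ride
    t += abs(src - pos) + abs(dst - src)  # travel to source, then destination
    pos = dst
    done.append((ride, t))
    pending.remove(ride)

for ride, finish in done:
    print(f"request {ride} completed at t={finish:.1f}")
```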

  4. JAFA: a protein function annotation meta-server

    DEFF Research Database (Denmark)

    Friedberg, Iddo; Harder, Tim; Godzik, Adam

    2006-01-01

    Annotations, or JAFA server. JAFA queries several function prediction servers with a protein sequence and assembles the returned predictions in a legible, non-redundant format. In this manner, JAFA combines the predictions of several servers to provide a comprehensive view of what are the predicted functions...
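
    JAFA's actual endpoints and response formats are not given in the record; the meta-server pattern it describes (fan a query out to several prediction services, then merge the returned terms into one non-redundant view) can be sketched as follows, with stub functions standing in for the real servers:

```python
# Meta-server pattern: fan a query out to several predictors, merge results.
# The three "servers" are stubs standing in for real prediction services.
from concurrent.futures import ThreadPoolExecutor

def server_a(seq): return {"kinase activity", "ATP binding"}
def server_b(seq): return {"ATP binding", "protein phosphorylation"}
def server_c(seq): return {"kinase activity"}

PREDICTORS = [server_a, server_b, server_c]

def annotate(sequence: str) -> dict:
    """Query all predictors in parallel; count how many support each term."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda f: f(sequence), PREDICTORS))
    merged = {}
    for terms in results:
        for term in terms:
            merged[term] = merged.get(term, 0) + 1
    return merged  # term -> number of supporting servers

print(annotate("MSTNPKPQRKTKRNTNRRPQDVK"))
```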

  5. Virtual Reality

    Science.gov (United States)

    1993-04-01

    Virtual Reality, by James F. Dailey, Lieutenant Colonel, US... This paper reviews the exciting field of virtual reality. The author describes the basic concepts of virtual reality and finds that its numerous potential benefits to society could revolutionize everyday life. The various components that make up a virtual reality system are described in detail.

  6. Virtual reality in surgical training.

    Science.gov (United States)

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  7. Virtual Meteorological Center

    Directory of Open Access Journals (Sweden)

    Marius Brinzila

    2007-10-01

    Full Text Available A computer-based virtual meteorological center with the possibility of transmitting information over the Internet is presented. Environmental data are collected with a logging field meteorological station. The station collects and automatically saves data on air temperature, relative humidity, pressure, wind speed and direction, rainfall, solar radiation, and air quality. It can also perform sensor tests, analyze historical data, and evaluate statistical information. The novelty of the system is that it can publish data over the Internet using LabVIEW Web Server capabilities and deliver a video signal to the School TV network. The system also performs redundant measurements of temperature and humidity, and was improved using new sensors and an original signal conditioning module.

  8. Virtual Box

    DEFF Research Database (Denmark)

    Davis, Hilary; Skov, Mikael B.; Stougaard, Malthe

    2007-01-01

    This paper reports on the design, implementation and initial evaluation of Virtual Box. Virtual Box attempts to create a physical and engaging context in order to support reciprocal interactions with expressive content. An implemented version of Virtual Box is evaluated in a location-aware environment...

  9. Energy Servers Deliver Clean, Affordable Power

    Science.gov (United States)

    2010-01-01

    K.R. Sridhar developed a fuel cell device for Ames Research Center that could use solar power to split water into oxygen for breathing and hydrogen for fuel on Mars. Sridhar saw the potential of the technology, when reversed, to create clean energy on Earth. He founded Bloom Energy, of Sunnyvale, California, to advance the technology. Today, the Bloom Energy Server is providing cost-effective, environmentally friendly energy to a host of companies such as eBay, Google, and The Coca-Cola Company. Bloom's NASA-derived Energy Servers generate energy that is about 67-percent cleaner than a typical coal-fired power plant when using fossil fuels and 100-percent cleaner with renewable fuels.

  10. Reporting with Microsoft SQL Server 2012

    CERN Document Server

    Serra, James

    2014-01-01

    This is a step-by-step tutorial that deals with the Microsoft SQL Server 2012 reporting tools: SSRS and Power View. If you are a BI developer, consultant, or architect who wishes to learn how to use SSRS and Power View, and wants to understand the best use for each tool, then this book will get you up and running quickly. No prior experience is required with either tool!

  11. Descriptors of server capabilities in China

    DEFF Research Database (Denmark)

    Adeyemi, Oluseyi; Slepniov, Dmitrij; Wæhrens, Brian Vejrum

    are relevant to determine subsidiary roles and as an indication of the capabilities required. These descriptors are identified through an extensive literature review and validated by case studies of two Danish multinational companies' subsidiaries operating in China. They provided the empirical basis...... China, with the huge market potential it possesses, is an important issue for subsidiaries of western multinational companies. The objective of this paper is therefore to strengthen researchers' and practitioners' perspectives on the descriptors of server capabilities. The descriptors

  12. Instant Debian build a web server

    CERN Document Server

    Parrella, Jose Miguel

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. A concise guide full of step-by-step recipes to teach you how to install and configure a Debian web server. This is an ideal book if you are an administrator on a Development Operations team or in infrastructure management who is passionate about Linux and web applications but has no previous experience with Debian or APT-based systems.

  13. SQL Server 2012 reporting services blueprints

    CERN Document Server

    Ribunal, Marlon

    2013-01-01

    Follow the fictional John Kirkland through a series of real-world reporting challenges based on actual business conditions. Use his detailed blueprints to develop your own reports for every requirement.This book is for report developers, data analysts, and database administrators struggling to master the complex world of effective reporting in SQL Server 2012. Knowledge of how data sources and data sets work will greatly help readers to speed through the tutorials.

  14. TwiddleNet: Smartphones as Personal Servers

    OpenAIRE

    Gurminder, Singh; Center for the Study of Mobile Devices and Communications

    2012-01-01

    TwiddleNet uses smartphones as personal servers to enable instant content capture and dissemination for first responders. It supports the information sharing needs of first responders in the early stages of an emergency response operation. In TwiddleNet, content, once captured, is automatically tagged and disseminated using one of the several networking channels available in smartphones. TwiddleNet pays special attention to minimizing the equipment, network set-up time, and content...

  15. Preprint server seeks way to halt plagiarists

    CERN Multimedia

    Giles, J

    2003-01-01

    "An unusual case of plagiarism has struck ArXiv, the popular physics preprint server at Cornell University in Ithaca, New York, resulting in the withdrawal of 22 papers...The plagiarism case traces its origins to June 2002, when Yasushi Watanabe, a high-energy physicist at the Tokyo Insitute of Technology, was contacted by Ramy Noboulsi, who said he was a mathematical physicist" (1 page)

  16. Metastability of Queuing Networks with Mobile Servers

    Science.gov (United States)

    Baccelli, F.; Rybko, A.; Shlosman, S.; Vladimirov, A.

    2018-04-01

    We study symmetric queuing networks with moving servers and FIFO service discipline. The mean-field limit dynamics demonstrates unexpected behavior which we attribute to the metastability phenomenon. Large enough finite symmetric networks on regular graphs are proved to be transient for arbitrarily small inflow rates. However, the limiting non-linear Markov process possesses at least two stationary solutions. The proof of transience is based on martingale techniques.

  17. Development of Virtual Reality Cycling Simulator

    OpenAIRE

    Schramka, Filip; Arisona, Stefan; Joos, Michael; Erath, Alexander

    2017-01-01

    This paper presents a cycling simulator implemented using consumer virtual reality hardware and additional off-the-shelf sensors. Challenges like real-time motion tracking within the performance requirements of state-of-the-art virtual reality are successfully mastered. Data retrieved from digital motion processors is sent over Bluetooth to a render machine running Unity3D. By processing this data, a bicycle is mapped into virtual space. Physically correct behaviour is simulated and high quali...

  18. Evaluation of a server-client architecture for accelerator modeling and simulation

    International Nuclear Information System (INIS)

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-01-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several reasons, including CPU throughput and availability, the complexity of the required modeling problem, and the presentation of computation results. With the advent of faster computer hardware and advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and client/server architectures, focusing on design and implementation issues, performance, and general utility for accelerator modeling demands.

  19. Windows Server 2012 vulnerabilities and security

    Directory of Open Access Journals (Sweden)

    Gabriel R. López

    2015-09-01

    Full Text Available This investigation analyses the history of vulnerabilities in the base system Windows Server 2012, highlighting the most critical vulnerabilities reported every 4 months from its release until the date of the research, organized by vulnerability type based on the NIST classification. Next, given the official vulnerabilities of the system, the authors show how a critical vulnerability is treated by Microsoft in order to counter the security flaw. Then, the authors present the recommended security approaches for Windows Server 2012, which focus on the baseline software provided by Microsoft; update, patch, and change management; hardening practices; and the application of Active Directory Rights Management Services (AD RMS). AD RMS is considered an important feature since it can protect the system, even if it is compromised, using access lists at the document level. Finally, a survey of the state of the art in Windows Server 2012 security analyses solutions from third-party vendors offering security products for the base system studied here. The authors' recommended solution is from the security vendor Symantec, presented with its successful features along with characteristics the authors consider could be improved in future versions of the security solution.

  20. Securing SQL Server Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2012-01-01

    Written by Denny Cherry, a Microsoft MVP for the SQL Server product, a Microsoft Certified Master for SQL Server 2008, and one of the biggest names in SQL Server today, Securing SQL Server, Second Edition explores the potential attack vectors someone can use to break into your SQL Server database as well as how to protect your database from these attacks. In this book, you will learn how to properly secure your database from both internal and external threats using best practices and specific tricks the author uses in his role as an independent consultant while working on some of the largest