Minasi, Mark; Mueller, John Paul
Find in-depth coverage of general networking concepts and basic instruction on Windows Server 2008 installation and management, including Active Directory, DNS, Windows storage, and TCP/IP and IPv4 networking basics, in Mastering Windows Server 2008 Networking Foundations. One of three new books by best-selling author Mark Minasi, this guide explains what servers do, how basic networking works (IP basics and DNS/WINS basics), and the fundamentals of the under-the-hood technologies that support staff must understand. Learn how to install Windows Server 2008 and build a simple network, security co
Katipamula, Srinivas; Lutes, Robert G.; Ngo, Hung; Underhill, Ronald M.
In FY13, Pacific Northwest National Laboratory (PNNL), with funding from the Department of Energy's (DOE's) Building Technologies Office (BTO), designed, prototyped and tested a transactional network platform to support energy, operational and financial transactions between any networked entities (equipment, organizations, buildings, grid, etc.). Initially, in FY13, the concept demonstrated transactions between packaged rooftop air conditioners and heat pump units (RTUs) and the electric grid using applications or "agents" that reside on the platform, on the equipment, on a local building controller or in the Cloud. The transactional network project is a multi-lab effort, with Oak Ridge National Laboratory (ORNL) and Lawrence Berkeley National Laboratory (LBNL) also contributing. PNNL coordinated the project and was responsible for the development of the transactional network (TN) platform and three different applications associated with RTUs. This document describes two applications, or "agents," in detail and also summarizes the platform. The TN platform details are described in a companion document.
M.A.A. Boon (Marko); R.D. van der Mei (Rob); E.M.M. Winands
We study a queueing network with a single shared server, that serves the queues in a cyclic order according to the gated service discipline. External customers arrive at the queues according to independent Poisson processes. After completing service, a customer either leaves the
This paper considers the main tasks and problems of server virtualization. The practical value of virtualization in a corporate network, as well as the advantages and disadvantages of applying server virtualization, are also considered.
Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well-understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability: latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or race games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
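The latency and fairness criteria above can be sketched as a small selection routine. The server names, latency values, and the 100 ms playability threshold below are illustrative assumptions, not figures from the study:

```python
# Sketch: rating candidate game servers for a group of players.
# A real tool would measure RTTs with probe packets, as the
# study's emulated clients did; these numbers are made up.

def group_suitability(latencies_ms, threshold_ms=100.0):
    """Return (playable, unfairness) for one server.

    playable   -- True only if every group member is under the threshold
    unfairness -- spread between best- and worst-connected player
    """
    playable = max(latencies_ms) <= threshold_ms
    unfairness = max(latencies_ms) - min(latencies_ms)
    return playable, unfairness

def best_server(servers, threshold_ms=100.0):
    """Pick the playable server with the smallest unfairness."""
    candidates = []
    for name, lats in servers.items():
        ok, spread = group_suitability(lats, threshold_ms)
        if ok:
            candidates.append((spread, name))
    return min(candidates)[1] if candidates else None

servers = {
    "us-east": [35.0, 48.0, 90.0],   # playable, 55 ms spread
    "eu-west": [60.0, 70.0, 140.0],  # one player over threshold
    "us-west": [50.0, 55.0, 62.0],   # playable, 12 ms spread
}
print(best_server(servers))  # us-west
```

As the group grows, `max(latencies_ms)` and the spread can only increase, which mirrors the paper's finding that larger groups have fewer acceptable servers.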
B. Zhang (Bo); S.C. Borst (Sem); M.I. Reiman
We consider the server scheduling problem in hybrid P2P networks in the context of a fluid model. Specifically, we examine how to allocate the limited amount of server upload capacity among competing swarms over time in order to optimize the download performance experienced by users. For
The article presents the main aspects for configuring the httpd file in Solaris Unix operating system and the facilities by using the Qt cross-platform application for the Web server administration. The considerations are available for the configuring of the DNS server Bind 8 and 9.
Matlab is one of the most advanced development tools for applications in engineering practice. From our point of view, the most important component is the image processing toolbox, which offers many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, also in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on JavaServer Pages (JSP) with the Tomcat server as the servlet container. In the presented software implementation, we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with a set of input data, an output structure with numerical results, and a Matlab web figure. Any function prepared in that manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly with the image processing. The complete JSP page can be run by the Tomcat server. The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When the analysis is initialized, the input data and image are sent to the servlet on Tomcat. When the analysis is done, the client obtains the graphical results as an image with marked recognized cells, together with the quantitative output. Additionally, the results are stored in a server
Marshall, David; McCrory, Dave
Executives of IT organizations are compelled to quickly implement server virtualization solutions because of significant cost savings. However, most IT professionals tasked with deploying virtualization solutions have little or no experience with the technology. This creates a high demand for information on virtualization and how to properly implement it in a datacenter. Advanced Server Virtualization: VMware® and Microsoft® Platforms in the Virtual Data Center focuses on the core knowledge needed to evaluate, implement, and maintain an environment that is using server virtualization. This boo
Windows 2012 Server Network Security provides the most in-depth guide to deploying and maintaining a secure Windows network. The book drills down into all the new features of Windows 2012 and provides practical, hands-on methods for securing your Windows systems and networks, including: secure remote access; network vulnerabilities and mitigations; DHCP installation and configuration; MAC filtering; DNS server security; WINS installation and configuration; securing wired and wireless connections; and the Windows personal firewall
Dandy Pramana Hostiadi
Electronic mail is a fundamental communication model in the era of globalization: nearly every form of registration of data or information requires an email address. The use of email cannot be separated from abuse (such as password stealing and mail spoofing) by some parties, so email communication needs a form of security. Communication security on mail servers such as the Zimbra mail server is already well implemented, for example through the use of SSL certificates, but this security is still standard: once a user name and password are obtained by a third party, the email content can be read easily (in cryptographic terms, as plaintext). The research reported here applied the Pretty Good Privacy (PGP) method to secure email communication, focusing on the email content by encrypting the mail text along with the attachment files, using the Zimbra mail server as the mail engine. The results show that PGP is able to secure email content, both text and attachments; the attachment file size is larger when PGP is used, and the mail header is changed from the standard mail header.
Boon, M.A.A; van der Mei, R.D.; Winands, E.M.M.
We study a queueing network with a single shared server that serves the queues in a cyclic order. External customers arrive at the queues according to independent Poisson processes. After completing service, a customer either leaves the system or is routed to another queue. This model is very
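A minimal sketch of the gated cyclic discipline described above, assuming instantaneous service and a deterministic routing map (the model itself uses Poisson arrivals and probabilistic routing):

```python
from collections import deque

# Toy sketch of gated cyclic service with customer routing.
# Under the gated discipline, only customers present when the
# server arrives at a queue ("behind the gate") are served on
# that visit; later arrivals wait for the next cycle.

def gated_cycle(queues, route):
    """Serve every queue once in cyclic order; return customers served.

    route[i] is the queue a customer served at queue i is routed to,
    or None if the customer leaves the system.
    """
    served = 0
    for i, q in enumerate(queues):
        gate = len(q)             # close the gate on arrival
        for _ in range(gate):
            customer = q.popleft()
            served += 1
            nxt = route.get(i)    # None means leave the system
            if nxt is not None:
                queues[nxt].append(customer)
    return served

queues = [deque(["a", "b"]), deque(["c"]), deque()]
route = {0: None, 1: 0, 2: None}   # queue 1 feeds back to queue 0

served = gated_cycle(queues, route)
print(served)           # 3
print(len(queues[0]))   # 1: "c" was routed back, waits for next cycle
```

The routed customer joining queue 0 only after the server has moved on is exactly what makes routing plus gating interesting analytically: internal arrivals see the gate closed just like external ones.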
Liu, Lu; Antonopoulos, Nick
Peer-to-peer (P2P) networks have attracted attention worldwide with their great success in file sharing networks (such as Napster, Gnutella, Freenet, BitTorrent, Kazaa, and JXTA). An explosive increase in the popularity of P2P networks has been witnessed among millions of Internet users. In this chapter, an investigation of network architecture evolution, from client-server to P2P networking, is given, underlining the benefits and the potential problems of existing approaches and providing an essential theoretical base for driving the future generation of distributed systems.
If you are a developer with BeagleBone experience and want to learn how to use it to set up a network and file server, then this book is ideal for you. To make the most of this book, you should be comfortable with the Linux operating system and know how to install software from the Internet, but you do not have to be a network guru.
Karlsson, Christer; Skold, Martin
The article examines the strategic issues involved in the deployment of product platform development in an industrial network. The move entails identifying the types and characteristics of generically different product platform strategies and clarifying strategic motives and differences. Number...
A.A. Ketut Agung Cahyawan W
Until now, a network administrator has had to be in the server room to power on the servers there, or to check whether the server room temperature is adequate for the servers to work optimally. Problems arise because the server room is usually located quite far away and must always be kept locked for security reasons. In this research, a control and monitoring system was designed that can power on servers remotely while also monitoring the server room temperature, raising or lowering the air-conditioner temperature, and switching the air conditioner off and on. The design is based on the Arduino Duemilanove and the Arduino Ethernet Shield, an open-source electronics kit platform. With this system, a network administrator can control the server room remotely.
Herring, Ralph H.; Tefend, Linda L.
The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.
Balmer, Steven R; Irvine, Cynthia E
This paper examines the architectural and security impact of using commercially available, popular terminal servers to support thin clients within the context of a high assurance multilevel network...
Quinlan, Jason J.; Raca, Darijo; Zahran, Ahmed H.; Khalid, Ahmed; Ramakrishnan, K. K.; Sreenan, Cormac J.
In this demonstration we present a platform that encompasses all of the components required to realistically evaluate the performance of Dynamic Adaptive Streaming over HTTP (DASH) over a real-time NS-3 simulated network. Our platform consists of a network-attached storage server with DASH video clips and a simulated LTE network which utilises the NS-3 LTE module provided by the LENA project. We stream to clients running an open-source player with a choice of adaptation algorithms. By providi...
Nelson, Elizabeth K; Piehler, Britt; Eckels, Josh; Rauch, Adam; Bellew, Matthew; Hussey, Peter; Ramsay, Sarah; Nathe, Cory; Lum, Karl; Krouse, Kevin; Stearns, David; Connolly, Brian; Skillman, Tom; Igra, Mark
Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely. To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) Submitting specimens requests across collaborating organizations (ii) Graphically defining new experimental data types, metadata and wizards for data collection (iii) Transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database (iv) Securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays (v) Interacting dynamically with external data sources (vi) Tracking study participants and cohorts over time (vii) Developing custom interfaces using client libraries (viii) Authoring custom visualizations in a built-in R scripting environment. Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350 organizations. It tracks
I. Rimac; S.C. Borst (Sem); A. Walid
Content distribution networks are experiencing tremendous growth, in terms of traffic volume, scope, and diversity, fueled by several technological advances and competing paradigms. Traditional client/server architectures as deployed in the majority of today's commercial networks provide
Cheung, Kei-Hoi; Hager, Janet; Pan, Deyun; Srivastava, Ranjana; Mane, Shrikant; Li, Yuli; Miller, Perry; Williams, Kenneth R
We have developed a universal web server application (KARMA) that allows comparison and annotation of user-defined pairs of microarray platforms based on diverse types of genome annotation data (across different species) collected from multiple sources. The application is an effective tool for diverse microarray platforms, including arrays that are provided by (i) the Keck Microarray Resource at Yale, (ii) commercially available Affymetrix GeneChips and spotted arrays and (iii) custom arrays made by individual academics. The tool provides a web interface that allows users to input pairs of test files that represent diverse array platforms for either single or multiple species. The program dynamically identifies analogous DNA fragments spotted or synthesized on multiple microarray platforms based on the following types of information: (i) NCBI-Unigene identifiers, if the platforms being compared are within the same species or (ii) NCBI-Homologene data, if they are cross-species. The single-species comparison is implemented based on set operations: intersection, union and difference. Other forms of retrievable annotation data, including LocusLink, SwissProt and Gene Ontology (GO), are collected from multiple remote sites and stored in an integrated fashion using an Oracle database. The KARMA database, which is updated periodically, is available on line at the following URL: http://ymd.med.yale.edu/karma/cgi-bin/karma.pl.
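The single-species comparison described above reduces to standard set algebra once probes on each platform are mapped to shared identifiers (UniGene in KARMA). The UniGene-style IDs below are invented for illustration:

```python
# Sketch of KARMA-style single-species comparison: each platform is
# represented by the set of UniGene identifiers its probes map to,
# and the comparison is set intersection, union, and difference.

platform_a = {"Hs.1", "Hs.2", "Hs.3", "Hs.7"}
platform_b = {"Hs.2", "Hs.3", "Hs.9"}

shared   = platform_a & platform_b   # intersection: probes on both arrays
combined = platform_a | platform_b   # union: probes on either array
a_only   = platform_a - platform_b   # difference: unique to platform A

print(sorted(shared))   # ['Hs.2', 'Hs.3']
print(sorted(a_only))   # ['Hs.1', 'Hs.7']
```

For the cross-species case the same operations would run after first translating both sides through HomoloGene groups rather than matching identifiers directly.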
Davis, G.; Foley, S.; Battistuz, B.; Eakins, J.; Vernon, F. L.; Astiz, L.
The Array Network Facility is charged with the acquisition and processing of seismic data from the Earthscope USArray experiment. High resolution data from 400 seismic sensors is streamed in near real-time to the ANF at UCSD in La Jolla, CA where it is automatically processed by machine and reviewed by analysts before being externally distributed to other data centers, including the IRIS Data Management Center. Data streams include six channels of 24-bit seismic data at 40 samples per second and over twenty channels of state-of-health data at 1 sample per second per station. The sheer volume of data acquired and processed overwhelms the capabilities of any one affordable server system. Due to the relatively small buffers on-site (typically four hours) at the seismic stations, it is vital that the real-time systems remain online and acquiring data around the clock in order to meet data distribution requirements in a timely manner. Although the ANF does not have a 24x7x365 operations staff, the logistical difficulty of retrieving data from often remote locations after it expires from the on-site buffers requires the real-time systems to automatically recover from server failures without immediate operator intervention. To accomplish these goals, the ANF has implemented a five node Sun Solaris Cluster with acquisition and processing tasks shared by a mixture of integer and floating point processing units (Sun T2000 and V240/V245 systems). This configuration is an improvement over the typical regional network data center for a number of reasons: - By implementing a shared storage architecture, acquisition, processing, and distribution can be split between multiple systems working on the same data set, thus limiting the impact of a particularly resource-intensive task on the acquisition system. - The Solaris Cluster software monitors the health of the cluster nodes and provides the ability to automatically fail over processes from a failed node to a healthy node.
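The automatic-failover behaviour described above can be sketched as a simple reassignment policy. Solaris Cluster implements this (and much more) at the OS level; the node names, task names, and health flags here are invented:

```python
# Minimal sketch of the failover idea behind the ANF cluster: a
# monitor observes node health and reassigns tasks owned by a
# failed node to a healthy peer, so acquisition continues without
# operator intervention.

def fail_over(assignments, healthy):
    """Move every task owned by an unhealthy node to a healthy one."""
    spares = [n for n in sorted(healthy) if healthy[n]]
    if not spares:
        raise RuntimeError("no healthy node available")
    moved = {}
    for task, node in assignments.items():
        if healthy.get(node, False):
            moved[task] = node            # owner is fine: leave it
        else:
            moved[task] = spares[0]       # simplest policy: first spare
    return moved

assignments = {"acquire": "node1", "process": "node2", "distribute": "node3"}
healthy = {"node1": True, "node2": False, "node3": True}

new_assignments = fail_over(assignments, healthy)
print(new_assignments)
```

The shared-storage architecture is what makes this cheap: because every node sees the same data set, moving a task is a matter of restarting the process elsewhere, not copying data.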
Vesely, Martin; Baron, Thomas; Le Meur, Jean-Yves; Simko, Tibor; GreyNet, Grey Literature Network Service
In this paper we present a technology for networked information services, developed at the CERN Document Server (CDS) research group, called the CERN Document Server Software (CDSware). Standardization of networked information services in the field of grey literature has recently become a subject of intensive research in the digital library community. The current state-of-the-art in this area effectively makes it possible to provide various networked information services, such as information brokerin...
Hansen, Morten Tranberg; Kusy, Branislav
Design and development of wireless sensor network applications adds an additional layer of complexity to traditional computer systems. The developer needs to be an expert in resource-constrained embedded devices as well as traditional desktop computers. We propose TinyInventor, an open-source development environment that takes a holistic approach to implementing sensor network applications. Users build applications using the drag-and-drop visual programming language Open Blocks, a language that Google selected for its App Inventor for Android. TinyInventor uses cross-platform programming concepts, such as threads and common network operations, to provide a unified environment from which it generates application binaries for the respective platforms. We demonstrate through an application example that TinyInventor is both simple to use and powerful in expressing complex applications.
Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia
We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache, in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays in the L2 cache. Experiments show that under high concurrency, our optimizations improve the throughput of TUX by up to 40% and the number of requests serviced at the time of failure by 21%.
... paramount. One way to provide this support is to create a Local Area Network (LAN) in which the workstations are positioned at the deployed location while the servers are maintained at a Main Operating Base (MOB...
Web applications, databases and advanced mobile platforms can facilitate real-time data acquisition for effective monitoring in intelligent agriculture. To make aquaculture production more automated and efficient, this paper presents an application for a wireless network and the Android platform that interacts with an advanced control system based on Apache, SQL Server and Java to collect and monitor variables used in aquaculture. Testing and application show that the system is stable, has a good price-performance ratio and good mobility, and is easy to operate; it has strong practicality and good application prospects.
Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei
This study aims to develop a therapeutic drug monitoring (TDM) network server of tacrolimus for Chinese renal transplant patients, which can help doctors manage patients' information and provide three levels of predictions. The database management system MySQL was employed to build and manage the database of patients' and doctors' information, and hypertext mark-up language (HTML) and Java server pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above program languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. It is shown that the network server has the basic functions for database management and three levels of prediction to aid doctors in optimizing the regimen of tacrolimus for Chinese renal transplant patients.
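The Bayesian individual-estimation step described above can be sketched for a single parameter. The steady-state relation C = dose_rate / CL, the prior, the grid search, and all numbers below are illustrative assumptions, not the paper's tacrolimus model:

```python
import math

# Sketch of MAP (maximum a posteriori) estimation of one PK
# parameter, clearance CL, from one observed trough concentration.
# The posterior combines a residual-error likelihood with a
# log-normal population prior; minimizing the negative log
# posterior gives the individual estimate.

def neg_log_posterior(cl, obs_c, dose_rate, pop_cl, omega, sigma):
    pred = dose_rate / cl                               # toy model
    log_lik   = ((obs_c - pred) / sigma) ** 2           # residual error
    log_prior = ((math.log(cl) - math.log(pop_cl)) / omega) ** 2
    return log_lik + log_prior                          # up to constants

def map_estimate(obs_c, dose_rate, pop_cl=20.0, omega=0.3, sigma=0.5):
    """Grid search for the MAP clearance (a real server would use
    a numerical optimizer instead of a grid)."""
    grid = [pop_cl * (0.5 + 0.001 * i) for i in range(1500)]
    return min(grid, key=lambda cl: neg_log_posterior(
        cl, obs_c, dose_rate, pop_cl, omega, sigma))

# Data alone would suggest CL = 200 / 8 = 25 L/h; the prior
# (population mean 20 L/h) shrinks the estimate slightly toward 20.
cl_hat = map_estimate(obs_c=8.0, dose_rate=200.0)
print(cl_hat)
```

The shrinkage toward the population mean is the point of the Bayesian step: with sparse TDM data, the population model stabilizes individual estimates.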
Kim, Hyung Joon
(multielectrode array) or nanowire electrode array to study electrophysiology in neuronal networks. Also, "diode-like" microgrooves to control the number of neuronal processes are embedded in this platform. Chapter 6 concludes with a possible future direction of this work. Interfacing micro/nanotechnology with primary neuron culture would open many doors in fundamental neuroscience research and also biomedical innovation.
The Raspberry Pi is a small-sized computer, but it can function like an ordinary computer. Because it can function like a regular PC, it is also possible to run a web server application on the Raspberry Pi. This paper reports results from testing the feasibility and performance of running a web server on the Raspberry Pi. The test was conducted on the three currently most popular web servers: Apache, Nginx, and Lighttpd. The parameters used to evaluate the feasibility and performance of these web servers were maximum request and reply time. The results from the test showed that it is feasible to run all three web servers on the Raspberry Pi, but Nginx gave the best performance, followed by Lighttpd and Apache. Keywords: Raspberry Pi, web server, Apache, Lighttpd, Nginx, web server performance
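The reply-time measurement described above can be sketched with the Python standard library. Here the target is a throwaway local server rather than Apache, Nginx, or Lighttpd on a Raspberry Pi; dedicated load generators would be used against the real servers:

```python
import threading
import time
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

# Sketch of a reply-time benchmark: issue repeated HTTP requests
# against a server and record how long each reply takes.

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep benchmark output clean
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)   # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

reply_times = []
for _ in range(20):
    t0 = time.perf_counter()
    with urllib.request.urlopen(url) as r:
        r.read()
    reply_times.append(time.perf_counter() - t0)
server.shutdown()

print("max reply time: %.1f ms" % (1000 * max(reply_times)))
```

The maximum (rather than mean) reply time is the interesting statistic on a constrained device like the Pi, since it exposes the worst-case stalls under load.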
The fast development of multimedia devices and services creates the need to increase the transport capacity of packet networks. OSPF-TE uses both information about the network topology and the link utilization when finding a routing path. Accordingly, it can find a path even in cases when shortest-path routing would cause overloaded links and dropped packets. In this paper we develop a platform for capacity reservation in IP networks. We implement the OSPF-TE protocol as an extension of the existing OSPF. In addition, the basic functionalities of the reservation protocol and the user interface are implemented. We present the simulation environment for the verification of our implementation and for the analysis of various routing algorithms based on the information conveyed by OSPF-TE.
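The idea of routing on both topology and link utilization can be sketched as a capacity-pruned shortest-path search. The topology, costs, and capacities below are invented, and real OSPF-TE carries this information in opaque LSAs rather than a Python dict:

```python
import heapq

# Sketch of the OSPF-TE routing idea: links whose residual capacity
# cannot carry the requested bandwidth are pruned before running
# Dijkstra, so the chosen path avoids overloaded links that plain
# shortest-path OSPF would happily use.

def te_path(graph, src, dst, demand):
    """graph[u][v] = (cost, residual_capacity); return path or None."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, (cost, cap) in graph.get(u, {}).items():
            if cap < demand:          # TE constraint: skip loaded links
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                   # no path with enough capacity
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

graph = {
    "A": {"B": (1, 40), "C": (1, 100)},
    "B": {"D": (1, 40)},
    "C": {"D": (2, 100)},
}
print(te_path(graph, "A", "D", demand=10))  # ['A', 'B', 'D']
print(te_path(graph, "A", "D", demand=50))  # ['A', 'C', 'D']
```

For a small demand the ordinary shortest path A-B-D wins; once the demand exceeds the residual capacity on A-B, the search falls back to the longer but feasible A-C-D, which is exactly the behaviour the abstract attributes to OSPF-TE.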
If you are an administrator who is looking to gain a greater understanding of how to design and implement a virtualization solution based on Citrix® XenServer®, then this book is for you. The book will serve as an excellent resource for those who are already familiar with other virtualization platforms, such as Microsoft Hyper-V or VMware vSphere.The book assumes that you have a good working knowledge of servers, networking, and storage technologies.
S. A. Dudin
A multi-server queueing system with two types of customers is considered as a model of a cell in a mobile network. Some of the servers are reserved for service of first-type customers only. Customers who do not receive service can make repeated attempts. All the system parameters, including the total number of servers and the number of reserved servers, are influenced by a random environment. The process of the system states is constructed, the ergodicity condition is derived, and the stationary distribution of the system states is computed. Formulas for the main performance measures of the system are presented.
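The server-reservation rule in this kind of model can be sketched as an admission check. Repeated attempts (retries) and the random environment that modulates the number of servers are left out of this sketch:

```python
# Sketch of the admission rule: of total_servers servers, `reserved`
# are kept for type-1 customers only. A type-2 arrival is accepted
# only if serving it still leaves the reserve untouched; a blocked
# customer would retry later ("repeated attempts" in the model).

def admit(customer_type, busy, total_servers, reserved):
    free = total_servers - busy
    if customer_type == 1:
        return free > 0          # type 1 may take any free server
    return free > reserved       # type 2 must not dip into the reserve

# 5 servers, 2 reserved, 3 busy -> 2 free: only type 1 gets in.
print(admit(1, busy=3, total_servers=5, reserved=2))  # True
print(admit(2, busy=3, total_servers=5, reserved=2))  # False
```

In a cellular-network reading, type 1 would be handover calls and type 2 fresh calls, so the reserve protects calls already in progress at the cost of blocking new ones.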
Madsen, Henrik; Albu, Răzvan-Daniel; Felea, Ioan
result of which is that its functionality becomes totally inaccessible or hard to access for clients) requires measuring the capacity of a server at any given time. This measurement is highly complex, if not impossible. There are several variables which we can measure on a running system, such as: CPU...
Full Text Available Peer-to-peer has entered the public limelight over the last few years. Several research projects are underway on peer-to-peer technologies, but no definitive conclusion is currently available. Compared with traditional Internet technologies, peer-to-peer has the potential to realize highly scalable, extensible, and efficient distributed applications. This is because its basic functions realize resource discovery, resource sharing, and load balancing in a highly distributed manner. An easy prediction is the emergence of an environment in which many sensors, people, and many different kinds of objects exist, move, and communicate with one another. Peer-to-peer is one of the most important and suitable technologies for such networking since it supports discovery mechanisms, simple one-to-one communication between devices, free and extensible distribution of resources, and distributed search to handle the enormous number of resources. The purpose of this study is to explore a universal peer-to-peer network architecture that will allow various devices to communicate with one another across various networks. We have been designing architecture and protocols for realizing peer-to-peer networking among various devices. We are currently designing APIs that are available for various peer-to-peer applications and are implementing a prototype called "Jupiter" as a peer-to-peer networking platform over heterogeneous networks.
Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun
This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoor agricultural production environment utilizing Wireless Sensor Network (WSN) technology. The proposed system collects outdoor environmental and soil information through WSN-based environmental and soil sensors, image information through CCTVs, and location information through GPS modules. The collected information is stored in a database by the agricultural environment monitoring server, which consists of a sensor manager that manages information collected from the WSN sensors, an image information manager that manages image information collected from the CCTVs, and a GPS manager that processes the location information of the system, and is then provided to producers. In addition, a solar-cell-based power supply is implemented for the server system so that it can be used in agricultural environments with insufficient power infrastructure. The system can monitor outdoor environmental information remotely, and its use can be expected to contribute to increased crop yields and improved quality in the agricultural field by supporting the decision making of crop producers through analysis of the collected information.
Gou Zhao Xia
Full Text Available With the rapid development of online education, network-based teaching platforms, as a new instructional mode, have become a hot topic in online teaching. In this paper, the online teaching situation and its existing problems are analyzed by comparing network teaching platforms with traditional classroom teaching. Strategies for network teaching management are then presented, together with a case study focusing on the characteristics of Blackboard as applied to network teaching management.
Wu, Jianmin; Mao, Xizeng; Cai, Tao; Luo, Jingchu; Wei, Liping
There is an increasing need to automatically annotate a set of genes or proteins (from genome sequencing, DNA microarray analysis or protein 2D gel experiments) using controlled vocabularies and identify the pathways involved, especially the statistically enriched pathways. We have previously demonstrated the KEGG Orthology (KO) as an effective alternative controlled vocabulary and developed a standalone KO-Based Annotation System (KOBAS). Here we report a KOBAS server with a friendly web-based user interface and enhanced functionalities. The server can support input by nucleotide or amino acid sequences or by sequence identifiers in popular databases and can annotate the input with KO terms and KEGG pathways by BLAST sequence similarity or directly ID mapping to genes with known annotations. The server can then identify both frequent and statistically enriched pathways, offering the choices of four statistical tests and the option of multiple testing correction. The server also has a 'User Space' in which frequent users may store and manage their data and results online. We demonstrate the usability of the server by finding statistically enriched pathways in a set of upregulated genes in Alzheimer's Disease (AD) hippocampal cornu ammonis 1 (CA1). KOBAS server can be accessed at http://kobas.cbi.pku.edu.cn.
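The statistical enrichment step can be illustrated with a one-sided hypergeometric test, a classical choice for pathway enrichment (the server offers a choice of four tests; which four is not stated here, so this is an assumed example with made-up gene counts):

```python
from math import comb

def hypergeom_enrichment_p(k, n, K, N):
    """One-sided hypergeometric p-value: the probability of seeing >= k
    genes from a pathway of size K in a sample of n genes drawn without
    replacement from a background of N genes.
    """
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(n, K) + 1)
    ) / comb(N, n)

# 5 of 20 input genes hit a 100-gene pathway, against a 20000-gene background:
p = hypergeom_enrichment_p(k=5, n=20, K=100, N=20000)
print(f"p = {p:.2e}")
```

The expected number of hits by chance is only 20 * 100 / 20000 = 0.1, so observing five is highly significant; a server like KOBAS would then apply a multiple-testing correction across all tested pathways.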
Vingelmann, Peter; Pedersen, Morten Videbæk; Fitzek, Frank
This paper looks into the implementation details of random linear network coding on the Apple iPhone and iPod Touch mobile platforms for multimedia distribution. Previous implementations of network coding on this platform failed to achieve a throughput which is sufficient to saturate the WLAN...
Boon, M.A.A.; van der Mei, R.D.; Winands, E.M.M.
We study a queueing network with a single shared server that serves the queues in a cyclic order. External customers arrive at the queues according to independent Poisson processes. After completing service, a customer either leaves the system or is routed to another queue. This model is very
Oleksii O. Kaplun
Full Text Available The problem of network modernization and of updating the software and hardware of educational information systems is topical in the modern period of rapid information technology development. The article analyzes the server applications and network topology of the Institute of Information Technology and Learning Tools of the National Academy of Pedagogical Sciences of Ukraine and sets out methods for their improvement. The article presents the results of a modernization implemented to increase network efficiency and reliability and to decrease response time in the Institute's network information systems, and gives diagrams of the network topology before the upgrade and after the optimization and upgrade processes were finished.
Pierre Le Gall
Full Text Available Using recent results on tandem queues and queueing networks with renewal input, when successive service times of the same customer are varying (and when the busy periods are frequently not broken up, as in large networks), the local queueing delay of a single server queueing network is evaluated utilizing new concepts of virtual and actual delays, respectively. It appears that, because of an important property due to the underlying tandem queue effect, the usual queueing standards (related to long queues) cannot protect against significant overloads in the buffers due to a possible agglutination phenomenon (related to short queues). Usual network management methods and traffic simulation methods should be revised, and should monitor the loads of the partial traffic streams (and not only the server load).
Haack, Jereme N.; Katipamula, Srinivas; Akyol, Bora A.; Lutes, Robert G.
In FY13, Pacific Northwest National Laboratory (PNNL) with funding from the Department of Energy’s (DOE’s) Building Technologies Office (BTO) designed, prototyped and tested a transactional network platform. The platform is intended to support energy, operational and financial transactions between any networked entities (equipment, organizations, buildings, grid, etc.). Initially, in FY13, the concept demonstrated transactions between packaged rooftop units (RTUs) and the electric grid using applications or “agents” that reside on the platform, on the equipment, on local building controller or in the Cloud. This document describes the core of the transactional network platform, the Volttron Lite™ software and associated services hosted on the platform. Future enhancements are also discussed. The appendix of the document provides examples of how to use the various services hosted on the platform.
Background Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists’ capacity to use these immunoassays to evaluate human clinical trials. Results The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose–response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. Conclusions Unlike other tools tailored for
Eckels, Josh; Nathe, Cory; Nelson, Elizabeth K; Shoemaker, Sara G; Nostrand, Elizabeth Van; Yates, Nicole L; Ashley, Vicki C; Harris, Linda J; Bollenbeck, Mark; Fong, Youyi; Tomaras, Georgia D; Piehler, Britt
Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists' capacity to use these immunoassays to evaluate human clinical trials. The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose-response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. Unlike other tools tailored for Luminex immunoassays, LabKey Server
Dutta, R.; Bentum, Marinus Jan; van der Zee, Ronan A.R.; Kokkeler, Andre B.J.
Wireless sensor networks are predicted to be the most versatile, popular and useful technology in the near future. A large number of applications are targeted which will hugely benefit from a network of tiny computers with few sensors, radio communication platform, intelligent networking and
Futral, William T; Greene, James
... focus on helping the platform designer and operating system vendor (OSV) implement the technology on a hardware or software platform. There are, however, a small amount of engineering, marketing and positi...
Zhang, Xiao-Fei; Ou-Yang, Le; Zhao, Xing-Ming; Yan, Hong
Understanding how the structure of gene dependency network changes between two patient-specific groups is an important task for genomic research. Although many computational approaches have been proposed to undertake this task, most of them estimate correlation networks from group-specific gene expression data independently without considering the common structure shared between different groups. In addition, with the development of high-throughput technologies, we can collect gene expression profiles of same patients from multiple platforms. Therefore, inferring differential networks by considering cross-platform gene expression profiles will improve the reliability of network inference. We introduce a two dimensional joint graphical lasso (TDJGL) model to simultaneously estimate group-specific gene dependency networks from gene expression profiles collected from different platforms and infer differential networks. TDJGL can borrow strength across different patient groups and data platforms to improve the accuracy of estimated networks. Simulation studies demonstrate that TDJGL provides more accurate estimates of gene networks and differential networks than previous competing approaches. We apply TDJGL to the PI3K/AKT/mTOR pathway in ovarian tumors to build differential networks associated with platinum resistance. The hub genes of our inferred differential networks are significantly enriched with known platinum resistance-related genes and include potential platinum resistance-related genes.
The Server and Agent-based Active Network Management (SAAM) architecture was initially designed to work with the next generation Internet where increasingly sophisticated applications will require QoS guarantees...
Pierre Le Gall
Full Text Available We consider the stochastic behavior of networks of single server queues when successive service times of a given customer are highly correlated. The study is conducted in two particular cases: (1) networks in heavy traffic, and (2) networks in which all successive service times have the same value (for a given customer), in order to avoid the possibility of breaking up the busy periods. We then show how the local queueing delay (for an arbitrary customer) can be derived through an equivalent tandem queue, on the condition that one other local queueing delay is added: the jitter delay due to the independence of partial traffic streams.
Puelma, Tomas; Araus, Viviana; Canales, Javier; Vidal, Elena A; Cabello, Juan M; Soto, Alvaro; Gutiérrez, Rodrigo A
GENIUS is a user-friendly web server that uses a novel machine learning algorithm to infer functional gene networks focused on specific genes and experimental conditions that are relevant to biological functions of interest. These functions may have different levels of complexity, from specific biological processes to complex traits that involve several interacting processes. GENIUS also enriches the network with new genes related to the biological function of interest, with accuracies comparable to highly discriminative Support Vector Machine methods. GENIUS currently supports eight model organisms and is freely available for public use at http://networks.bio.puc.cl/genius . email@example.com. Supplementary data are available at Bioinformatics online.
Febrian Wahyu Christanto; Mohammad Sani Suprayogi
Network management is the branch of computer networking concerned with resource allocation, optimization, and network security. Monitoring server resources is one of the activities of network management, in which the monitored servers, whether physical or virtual, must keep their services available. Universitas Semarang has implemented cloud computing but has difficulties in monitoring its running virtual servers. Syst...
Knowledge Management Systems have been actively promoted for decades within organizations but have frequently failed to be used. Recently, deployments of enterprise social networking platforms used for knowledge management have become commonplace. These platforms help harness the knowledge of workers by serving as repositories of knowledge as well…
Sørensen, Jens Otto; Alnor, Karl
In this paper we construct a Star Join Schema and show how this schema can be created using the basic tools delivered with SQL Server 7.0. Major objectives are to keep the operational database unchanged so that data loading can be done without disturbing the business logic of the operational database. The operational database is an expanded version of the Pubs database.
Lim, Jeongheui; Kim, Sang-Yoon; Kim, Sungmin; Eo, Hae-Seok; Kim, Chang-Bae; Paek, Woon Kee; Kim, Won; Bhak, Jong
DNA barcoding provides a rapid, accurate, and standardized method for species-level identification using short DNA sequences. Such a standardized identification method is useful for mapping all the species on Earth, particularly when DNA sequencing technology is cheaply available. There are many nations in Asia with many biodiversity resources that need to be mapped and registered in databases. We have built a general DNA barcode data processing system, BioBarcode, with open source software - which is a general purpose database and server. It uses mySQL RDBMS 5.0, BLAST2, and Apache httpd server. An exemplary database of BioBarcode has around 11,300 specimen entries (including GenBank data) and registers the biological species to map their genetic relationships. The BioBarcode database contains a chromatogram viewer which improves the performance in DNA sequence analyses. Asia has a very high degree of biodiversity and the BioBarcode database server system aims to provide an efficient bioinformatics protocol that can be freely used by Asian researchers and research organizations interested in DNA barcoding. The BioBarcode promotes the rapid acquisition of biological species DNA sequence data that meet global standards by providing specialized services, and provides useful tools that will make barcoding cheaper and faster in the biodiversity community such as standardization, depository, management, and analysis of DNA barcode data. The system can be downloaded upon request, and an exemplary server has been constructed with which to build an Asian biodiversity system http://www.asianbarcode.org.
Azodolmolky, Siamak; Petersen, Martin Nordal; Fagertun, Anna Manolova
the lightweight system virtualization, which is recently supported in modern operating systems, in this work we present the architecture of a Software-Defined Network (SDN) emulation platform for transport optical networks and investigate its usage in a use-case scenario. To the best of our knowledge, this is the first time that an SDN-based emulation platform has been proposed for modeling and performance evaluation of optical networks. Coupled with the recent trend of extending SDN towards transport (optical) networks, the presented tool can facilitate the evaluation of innovative ideas before actual implementation...
Smoly, Ilan Y; Lerman, Eugene; Ziv-Ukelson, Michal; Yeger-Lotem, Esti
Network motifs are small topological patterns that recur in a network significantly more often than expected by chance. Their identification emerged as a powerful approach for uncovering the design principles underlying complex networks. However, available tools for network motif analysis typically require download and execution of computationally intensive software on a local computer. We present MotifNet, the first open-access web-server for network motif analysis. MotifNet allows researchers to analyze integrated networks, where nodes and edges may be labeled, and to search for motifs of up to eight nodes. The output motifs are presented graphically and the user can interactively filter them by their significance, number of instances, node and edge labels, and node identities, and view their instances. MotifNet also allows the user to distinguish between motifs that are centered on specific nodes and motifs that recur in distinct parts of the network. MotifNet is freely available at http://netbio.bgu.ac.il/motifnet . The website was implemented using ReactJs and supports all major browsers. The server interface was implemented in Python with data stored on a MySQL database. firstname.lastname@example.org or email@example.com. Supplementary data are available at Bioinformatics online.
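The core operation behind such a server can be illustrated by counting one classic 3-node motif, the feed-forward loop (a -> b, a -> c, b -> c). This brute-force sketch is only feasible for small networks and is not MotifNet's algorithm, which must scale to motifs of up to eight nodes:

```python
from itertools import permutations

def count_feed_forward_loops(edges):
    """Count instances of the feed-forward loop motif in a directed graph
    given as a list of (source, target) edges, by checking every ordered
    triple of nodes against the pattern a->b, a->c, b->c.
    """
    edge_set = set(edges)
    nodes = {n for e in edges for n in e}
    count = 0
    for a, b, c in permutations(nodes, 3):
        if (a, b) in edge_set and (a, c) in edge_set and (b, c) in edge_set:
            count += 1
    return count

# x regulates y and z; y also regulates z: one feed-forward loop.
print(count_feed_forward_loops([("x", "y"), ("x", "z"), ("y", "z")]))  # -> 1
```

Calling a pattern a motif additionally requires comparing such counts against randomized networks to establish statistical significance, which is the part the server automates.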
Von Solms, S
Full Text Available of the NS. The various aspects of the NS are then discussed. A. Topology: It can be seen from Figure 1 that the developed NS comprises multiple network sections, namely Internal User Networks/Local Area Networks (LANs) connected... This provides a realistic platform which is isolated, more controlled and more predictable than implementation across live networks. In this paper we discuss the development of such a network simulation environment, called a network simulator (NS...
Full Text Available Peer-to-Peer (P2P) networking in a mobile learning environment has become a popular topic of research. One of the new emerging research ideas is the ability to combine a P2P network with a server-based network to form a strong, efficient, portable and compatible network infrastructure. This paper describes a unique mobile network architecture, which reflects on-campus students' need for a mobile learning environment. This can be achieved by combining two different networks, client-server and peer-to-peer ad-hoc, to form a solid and secure network. This is accomplished by employing one peer within the ad-hoc network to act as an agent-peer to facilitate communication and information sharing between the two networks. It can be implemented without any major changes to the current network technologies, and can combine any wireless protocols such as GPRS, Wi-Fi, Bluetooth, and 3G.
Shen, Tzu-Chiang; Ovando, Nicolás; Bartsch, Marcelo; Simmond, Max; Vélez, Gastón; Robles, Manuel; Soto, Rubén; Ibsen, Jorge; Saldias, Christian
ALMA is the first astronomical project being constructed and operated under an industrial approach, due to the huge number of elements involved. In order to achieve maximum throughput during the engineering and scientific commissioning phase, several production lines have been established to work in parallel. This decision required modifications to the original system architecture, in which all the elements are controlled and operated within a unique Standard Test Environment (STE). Advances in the network industry, together with the maturity of the virtualization paradigm, allow us to provide a solution which can replicate the STE infrastructure without changing its network address definition. This is only possible with the Virtual Routing and Forwarding (VRF) and Virtual LAN (VLAN) concepts. The solution allows dynamic reconfiguration of antennas and other hardware across the production lines with minimum time and zero human intervention in the cabling. We also push virtualization even further: classical rack-mount servers are being replaced and consolidated by blade servers, on top of which virtualized servers are centrally administered with VMware ESX. Hardware costs and system administration effort will be reduced considerably. This mechanism has been established and operated successfully during the last two years. This experience gave us the confidence to propose a solution to divide the main operation array into subarrays using the same concept, which will introduce great flexibility and efficiency for ALMA operations and may eventually simplify the ALMA core observing software, since there will be no need to deal with subarray complexity at the software level.
Winoto, Basuki; Wardani, Deni
File, as an important aspect of personal computer usage, can be managed personally by the owner or centralized in file storage servers. File storage servers in a Windows network environment use a specific communication protocol called SMB. This protocol, besides being used by Windows Server, can also be used by a server application called Samba which runs on Linux. Therefore, file storage server migration from Windows Server to Linux Server is possible.
Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia
Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpouring of omics datasets that capture these changes. However, separate analyses of these various data only provide fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent needs for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server (http://mergeomics.idre.ucla.edu/). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA) can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed real-time to yield biologically interpretable results, which can be viewed online and downloaded for later use. Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators
Full Text Available Application development platforms are among the most important environments in the IT industry, and there are a variety of them. Although native development enables applications to be optimized, various languages and software development kits need to be acquired according to the device. The coexistence of many smart devices and platforms has rendered the native development approach time and cost consuming. Cross-platform development emerged as a response to these issues. These platforms generate applications for multiple devices based on web languages. Nevertheless, development still requires additional implementation in a native language because of the coverage and functions of the supported application programming interfaces (APIs). Wearable devices have recently attracted considerable attention. These devices only support Bluetooth-based inter-device communication, thereby making communication and device control impossible beyond a certain range. We propose the Network Application Agent (NetApp-Agent) in order to overcome these issues. NetApp-Agent, based on Cordova, is a wearable device control platform for the development of network applications; it controls input/output functions of smartphones and wearable/IoT devices through the Cordova and native APIs, and enables device control and information exchange by external users by offering a self-defined API. We confirmed the efficiency of the proposed platform through experiments and a qualitative assessment of its implementation.
rates, whereas optical networks can offer much higher data rates but only provide fixed connection structures. Their complementary characteristics make the integration of the two networks a promising trend for next generation networks. With combined strengths, the converged network will provide both high data rate services and connectivity at anytime and anywhere. One major challenge in the interworking is how to achieve seamless integration. There are many aspects involved in designing an integrated control platform, such as QoS provisioning, mobility, and resiliency. This dissertation introduces...
Pai, V.S.; Aron, M.; Banga, G.; Svendsen, M.; Druschel, P.; Zwaenepoel, W.; Nahum, E.
We consider cluster-based network servers in which a front-end directs incoming requests to one of a number of back-ends. Specifically, we consider content-based request distribution: the front-end uses the content requested, in addition to information about the load on the back-end nodes, to choose which back-end will handle this request. Content-based request distribution can improve locality in the back-ends’ main memory caches, increase secondary storage scalability by partitioning the se...
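A front-end of this kind can be sketched as a dispatcher that keeps requests for the same content on the same back-end (improving back-end cache locality) unless that back-end is overloaded. This toy sketch illustrates the policy family, not the paper's exact distribution algorithm; the class and threshold names are invented for the example:

```python
from hashlib import sha256

class FrontEnd:
    """Toy content-aware dispatcher: requests for the same URL go to the
    same back-end, except that a back-end whose outstanding load exceeds
    `overload` is bypassed in favor of the least loaded one.
    """
    def __init__(self, backends, overload=10):
        self.load = {b: 0 for b in backends}
        self.overload = overload

    def dispatch(self, url):
        backends = sorted(self.load)
        # Stable content-based assignment via a hash of the requested URL.
        target = backends[int(sha256(url.encode()).hexdigest(), 16) % len(backends)]
        if self.load[target] >= self.overload:   # shed to the least loaded node
            target = min(backends, key=self.load.get)
        self.load[target] += 1
        return target

fe = FrontEnd(["be1", "be2"])
first = fe.dispatch("/index.html")
assert fe.dispatch("/index.html") == first   # same content -> same back-end
```

The load check is what separates content-based distribution from pure hashing: locality is preserved in the common case, but hot content cannot pin a single back-end indefinitely.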
Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique
This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose and configurable RISC processor which is embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other to control the network. This protocol is widely used for connecting smart sensors, actuators, and microsystems in embedded real-time systems in different application domains (e.g., industrial, automotive, domotic), although it can easily be replaced by any other protocol because of the inherent characteristics of FPGA-based technology.
Piovesan, Damiano; Minervini, Giovanni; Tosatto, Silvio C E
Residue interaction networks (RINs) are an alternative way of representing protein structures, where nodes are residues and arcs are physico-chemical interactions. RINs have been extensively and successfully used for analysing mutation effects, protein folding, domain-domain communication and catalytic activity. Here we present RING 2.0, a new version of the RING software for the identification of covalent and non-covalent bonds in protein structures, including π-π stacking and π-cation interactions. RING 2.0 is extremely fast and generates both intra- and inter-chain interactions, including solvent and ligand atoms. The generated networks are very accurate and reliable thanks to a complex empirical re-parameterization of distance thresholds performed on the entire Protein Data Bank. By default, RING output is generated with optimal parameters, but the web server provides an exhaustive interface to customize the calculation. The network can be visualized directly in the browser or in Cytoscape. Alternatively, the RING-Viz script for Pymol allows visualizing the interactions at the atomic level in the structure. The web server and RING-Viz, together with an extensive help and tutorial, are available from URL: http://protein.bio.unipd.it/ring. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Ziegler, C.; Schilling, D. L.
Two networks consisting of single-server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalences that exist between the two networks are derived. This leads to the introduction of an important concept in delay decomposition: it is shown that the waiting time experienced by a customer can be decomposed into two basic components, called self delay and interference delay.
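The decomposition stated in the abstract can be written out schematically (the symbols below are our own illustrative notation, not the authors'):

```latex
% Total waiting time W of a customer split into two components:
%   W_s : self delay         -- delay the customer would incur from its
%                               own traffic stream alone
%   W_i : interference delay -- additional delay caused by the other
%                               streams sharing the same server
W = W_s + W_i
```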
Eyal, Eran; Lum, Gengkon; Bahar, Ivet
The anisotropic network model (ANM) is one of the simplest yet most powerful tools for exploring protein dynamics. Its main utility is to predict and visualize the collective motions of large complexes and assemblies near their equilibrium structures. The ANM server, introduced by us in 2006, helped make this tool more accessible to non-expert users. We now provide a new version (ANM 2.0), which allows inclusion of nucleic acids and ligands in the network model and thus enables the investigation of the collective motions of protein-DNA/RNA and protein-ligand systems. The new version offers the flexibility of defining the system nodes and the interaction types and cutoffs. It also includes extensive improvements in hardware, software and graphical interfaces. ANM 2.0 is available at http://anm.csb.pitt.edu. © The Author 2015. Published by Oxford University Press.
Chao, I-Chun; Lee, Kang B; Candell, Richard; Proctor, Frederick; Shen, Chien-Chung; Lin, Shinn-Yan
End-to-end latency is critical to many distributed applications and services that run over computer networks. There has been a dramatic push to adopt wireless networking technologies and protocols (such as WiFi, ZigBee, WirelessHART, Bluetooth, ISA100.11a, etc.) in time-critical applications. Examples of such applications include industrial automation, telecommunications, power utilities, and financial services. While performance measurement of wired networks has been extensively studied, measuring and quantifying the performance of wireless networks faces new challenges and demands different approaches and techniques. In this paper, we describe the design of a measurement platform based on the technologies of software-defined radio (SDR) and the IEEE 1588 Precision Time Protocol (PTP) for evaluating the performance of wireless networks.
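The core idea behind a PTP-based latency platform is that once both endpoints' clocks are synchronized, one-way latency is simply the difference between the receive and transmit timestamps. A minimal sketch (function name and timestamp values are our own hypothetical illustration, not part of the described platform):

```python
# Illustrative sketch: one-way latency from PTP-synchronized timestamps.
# Assumes both clocks are already disciplined by IEEE 1588, so their
# timestamps share a common timebase; values here are hypothetical.

def one_way_latency_us(tx_timestamp_ns, rx_timestamp_ns):
    """Return one-way latency in microseconds from nanosecond timestamps."""
    return (rx_timestamp_ns - tx_timestamp_ns) / 1000.0

# The transmitter stamps each packet at send time; the receiver stamps
# it on arrival. With synchronized clocks the difference is the latency.
tx_ns = 1_000_000_000        # hypothetical send timestamp (ns)
rx_ns = 1_000_452_300        # hypothetical receive timestamp (ns)
print(one_way_latency_us(tx_ns, rx_ns))  # 452.3
```

In a real SDR-based platform the timestamps would come from hardware timestamping close to the antenna, which is what removes host-stack jitter from the measurement.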
An in-depth guide on the leading Unified Communications platform Microsoft Lync Server 2010 maximizes communication capabilities in the workplace like no other Unified Communications (UC) solution. Written by experts who know Lync Server inside and out, this comprehensive guide shows you step by step how to administer the newest and most robust version of Lync Server. Along with clear and detailed instructions, learning is aided by exercise problems and real-world examples of established Lync Server environments. You'll gain the skills you need to effectively deploy Lync Server 2010 and be on
Ye, Juanjuan; Shang, Fei; Yu, Chuang
At present, most research on wireless vision sensor networks (WVSNs) remains at the software-simulation stage, and very few verification platforms for WVSNs are available for use. This situation seriously restricts the transition of WVSNs from theoretical research to practical application, so it is necessary to study the construction of WVSN verification platforms. This paper combines a wireless transceiver module, a visual-information acquisition module and a power acquisition module to design a high-performance wireless vision sensor node built around an ARM11 microprocessor, and selects AODV as the routing protocol to set up a verification platform for WVSNs called AdvanWorks. Experiments show that AdvanWorks can successfully perform image acquisition, coding and wireless transmission, and can obtain the effective distance parameters between nodes, which lays a good foundation for follow-up applications of WVSNs.
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF LABOR Employment and Training Administration Hewlett Packard Company; Enterprise Storage Servers and Networking... May 20, 2013 in response to a petition filed on behalf of workers of Hewlett Packard Company...
This paper proposes an agent-based intelligent platform to model and support parallel and concurrent negotiations among organizations acting in the same industrial market. The underlying complexity is to model the dynamic environment where multi-attribute and multi-participant negotiations are racing over a set of heterogeneous resources. The metaphor of Interaction Abstract Machines (IAMs) is used to model the parallelism and the non-deterministic aspects of the negotiation processes that occur in Collaborative Networked Environments.
Van Leeuwen, Brian P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldridge, John M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Wireless networking and mobile communications are increasing around the world and in all sectors of our lives. With increasing use, the density and complexity of these systems grow, with more base stations and advanced protocols to enable higher data throughput. The security of data transported over wireless networks must also evolve with the advances in technologies enabling more capable wireless networks. However, means for analyzing the effectiveness of security approaches and implementations used on wireless networks are lacking. More specifically, a capability to analyze the lower-layer protocols (i.e., the Link and Physical layers) is a major challenge. An analysis approach that incorporates protocol implementations without the need for RF emissions is necessary. In this research paper, several emulation tools and custom extensions that enable an analysis platform to perform cyber security analysis of lower-layer wireless networks are presented. A use case of a published exploit in the 802.11 (i.e., WiFi) protocol family is provided to demonstrate the effectiveness of the described emulation platform.
Salour, Michael M.; Batayneh, Marwan; Figueroa, Luis
With the rapid growth of the Internet, bandwidth demand for data traffic continues to explode. In addition, emerging and future applications are becoming more and more network-centric. With the proliferation of data communication platforms and data-intensive applications (e.g., cloud computing), high-bandwidth content such as video clips dominating the Internet, and social networking tools, a networking technology that can scale the Internet's capability (particularly its bandwidth) by two to three orders of magnitude is very desirable. As the limits of Moore's law are approached, optical mesh networks based on wavelength-division multiplexing (WDM) have the ability to satisfy the large- and scalable-bandwidth requirements of our future backbone telecommunication networks. This trend is also affecting other special-purpose systems in applications such as mobile platforms, automobiles, aircraft, ships, tanks, and micro unmanned air vehicles (UAVs), which are becoming independent systems roaming the sky while sensing data, processing, making decisions, and even communicating and networking with other heterogeneous systems. Recently, WDM optical technologies have seen advances in transmission speeds, switching technologies, routing protocols, and control systems. Such advances have made WDM optical technology an appealing choice for the design of future Internet architectures. Along these lines, scientists across the entire spectrum of network architectures, from the physical layer to applications, have been working on developing devices and communication protocols that can take full advantage of the rapid advances in WDM technology. Nevertheless, the focus has always been on large-scale telecommunication networks that span hundreds and even thousands of miles. Given these advances, we investigate the vision and applicability of integrating the traditionally large-scale WDM optical networks into miniaturized mobile platforms such as UAVs. We explain
Pedersen, Morten Videbæk; Heide, Janus; Vingelmann, Peter
This paper investigates the possibility of multimedia content distribution over multiple mobile platforms forming wireless peer-to-peer networks. State-of-the-art mobile networks are centralized and base-station or access-point oriented. Current developments break ground for device to device...
In recent years, the number of subscribers of social network services such as Facebook and Twitter has increased rapidly. In accordance with the increasing popularity of social network services, concerns about user privacy are also growing. Existing social network services have a centralized structure in which a service provider collects all of the user's profile and logs until the end of the connection. The information collected is typically useful for commercial purposes, but may lead to a serious violation of user privacy. The user's profile can be compromised for malicious purposes, and may even become a tool of surveillance. In this paper, we remove the centralized structure to prevent the service provider from collecting all users' information indiscriminately, and present a decentralized structure using web hosting servers. The service provider provides only the service applications to web hosting companies, and the user selects a web hosting company that he trusts. Thus, the user's information is distributed, and the user's privacy is guaranteed against the service provider.
In this paper, a new conclusion based on a rotating parabolic model and a different scheme for a laser communication networking antenna system are put forward. Based on the rotating parabolic antenna, a new theory of its optical properties has been derived, which can realize large-dynamic-range, duplex networking communications among multiple platforms over the full 360° azimuth and pitch range. Meanwhile, depending on the operation mode of the system, multiple mathematical optimization models have been established. Tracking communication range, emission energy efficiency and receiving energy efficiency have been analyzed and optimized. The relationships among the upper and lower apertures, the lens-unit aperture, the focal length of the lens unit and the focal length of the rotating paraboloid have been analyzed. The tracking pitch range and emission energy utilization have been derived theoretically and optimized, and the networking link between the energy receiver and transmitter has been analyzed. Taking some parameters of this new system into the calculation with MATLAB software, the optimized results can be applied in communication engineering systems. The interior of the rotating paraboloid forms a hollow structure, which is exploited for a miniaturized, lightweight design and realizes duplex communication over a wide range and distance. Circular-orbit guidance is the modern approach used in the dynamic tracking system. The new theory and optical antenna system have widespread application value.
In this thesis, two platforms for simulating artificial neural networks are discussed: MIMD-parallel processor systems as an execution platform and neurosimulators as a research and development platform. Because of the parallelism encountered in neural networks, distributed processor systems seem to
Tuan Anh Nguyen
Sensitivity assessment of availability for data center networks (DCNs) is of paramount importance in the design and management of cloud-computing-based businesses. Previous work has presented performance modeling and analysis of a fat-tree-based DCN using queuing theory. In this paper, we present a comprehensive availability modeling and sensitivity analysis of a DCell-based DCN with server virtualization for business continuity using stochastic reward nets (SRNs). We use SRNs in modeling to capture complex behaviors and dependencies of the system in detail. The models take into account (i) two DCell configurations, composed respectively of two and three physical hosts in a DCell0 unit, (ii) failure modes and corresponding recovery behaviors of hosts, switches, and VMs, and the VM live migration mechanism within and between DCell0s, and (iii) dependencies between subsystems (e.g., between a host and VMs, and between switches and VMs in the same DCell0). The constructed SRN models are analyzed in detail with regard to various metrics of interest to investigate the system's characteristics. A comprehensive sensitivity analysis of system availability is carried out in consideration of the major impacting parameters in order to observe the system's complicated behaviors and find the bottlenecks of system availability. The analysis results show the availability improvement, fault-tolerance capability, and business continuity of DCNs complying with the DCell network topology. This study provides a basis for the design and management of DCNs for business continuity.
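The availability metrics such models produce ultimately reduce to steady-state probabilities of being "up". A much simpler back-of-the-envelope version, using MTTF/MTTR for individual components and a series combination for components that must all be up, is sketched below; the function names and the numbers are our own hypothetical illustration, not values from the SRN models in the abstract:

```python
# Illustrative sketch: steady-state availability from MTTF/MTTR, and a
# series combination for components that must all be up. This is a gross
# simplification of the SRN models described above; all numbers are
# hypothetical.

def availability(mttf_h, mttr_h):
    """Steady-state availability of one repairable component (hours)."""
    return mttf_h / (mttf_h + mttr_h)

def series(*avails):
    """Availability of a system whose components must all be up."""
    a = 1.0
    for x in avails:
        a *= x
    return a

host = availability(mttf_h=2000.0, mttr_h=2.0)
switch = availability(mttf_h=5000.0, mttr_h=1.0)
print(round(series(host, switch), 6))
```

The value of SRN-based models over this sketch is precisely that they capture the dependencies (host/VM, switch/VM) and repair policies that make real availability deviate from such independent-component products.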
This talk will cover the status of the current and upcoming offers on server platforms, focusing mainly on the processing and storage parts. Alternative solutions like Open Compute (OCP) will be quickly covered.
Beginning SQL Server 2008 Administration is essential for anyone wishing to learn about implementing and managing a SQL Server 2008 database. From college students, to experienced database administrators from other platforms, to those already familiar with SQL Server and wanting to fill in some gaps of knowledge, this book will bring all readers up to speed on the enterprise platform Microsoft SQL Server 2008. * Clearly describes relational database concepts * Explains the SQL Server database engine and supporting tools * Shows various database maintenance scenarios. What you'll learn: * Understand c
Morón, M J; Luque, J R; Botella, A A; Cuberos, E J; Casilari, E; Díaz-Estrella, A
A prototype of a system based on a Bluetooth Body Area Network (BAN) for continuous and wireless telemonitoring of patients' biosignals is presented. Smart phones and Java (J2ME) have been selected as the platform to build the central node of the patient's BAN. A midlet running on the smart phone compiles information about the patient's location and health status. The midlet encrypts this information and retransmits it to the server through 802.11 or GPRS/UMTS. Besides, when an alerting condition is detected, the midlet generates an MMS and an SMS to be sent to the patient's relatives and to the physician, respectively. Additionally, the system gives physicians the possibility of configuring the BAN's parameters remotely, from a PC or even a smart phone.
Pierre Le Gall
To evaluate the local actual queueing delay in general single-server queueing networks with non-correlated successive service times for the same customer, we start from a recent work using the tandem queue effect, when two successive local arrivals are not separated by premature departures. In that case, two assumptions were made: busy periods are not broken up, and there are limited variations of successive service times. These assumptions are given up after two stages have been crossed. The local arrivals become indistinguishable with respect to the sojourn time inside a given busy period. It is then proved that the local sojourn time of this tandem queue effect may be considered as the sum of two components: the first (independent of the local interarrival time), corresponding to the case where, upstream, successive service times are supposed to be identical to the local service time; and the second (negligible after 2 or 3 stages have been crossed), depending on local interarrival times, which increase because of broken-up busy periods. The consequence is the possible occurrence of an agglutination phenomenon of indistinguishable customers in the buffers (when there are limited premature departures), due to a stronger impact of long service times upon the local actual queueing delay, which is not consistent with the traditional concept of a local traffic source generating only distinguishable customers.
Yan, Zining; Wang, Yunhan; Shao, Shijiao; Li, Boquan
This paper discusses the cloud desktop technology, virtualization technology and penetration testing technology used in the network attack and defense training platform, and introduces the design and implementation process of the network attack and defense training platform. This paper focuses on the cloud desktop construction scheme based on B/S structure, and aims to enhance the flexibility and convenience of network attack and defense training platform.
Pedro Ángel Luna Ariza
A look at the future of Educational Inspection demands attention to its internal processes of organization and functioning. The model of an inspector whose work is carried out apart from the rest of his colleagues has expired. The current tools provided by the new information and communication technologies facilitate our daily work and improve the possibilities of coordination within teams and services. The possibility for the education inspectors of Andalusia to work as a network using the Inspectio Platform is a good example of this. After a normative foundation and a conceptual approximation, we try to explain this tool, using a descriptive and practical methodology.
Voice quality in VoIP communication depends on many factors, one of which is the quality of the server. Choosing a suitable PC platform or server is the main issue in developing a VoIP network. Poor server performance, or a server not matched to the number of users, will degrade sound quality or even prevent connections between users. Tests were carried out on the performance of the Linksys WRT54GL wireless access point used as a VoIP server, to determine how many VoIP calls a wireless access point acting as a VoIP server is able to handle and how long the server needs to process each SIP signal and RTP packet. Based on the test results, the VoIP server on the wireless access point serves VoIP communication well for a small number of calls, so it is worth implementing for small-scale use. Using the Native Bridging method for media handling on the server can increase the number of calls served by about 3 to 7 times compared with other methods. Keywords— VoIP, Asterisk, Access Point, WRT54GL, OpenWRT, Performance
Barnett, William; Conlon, Mike; Eichmann, David; Kibbe, Warren; Falk-Krzesinski, Holly; Halaas, Michael; Johnson, Layne; Meeks, Eric; Mitchell, Donald; Schleyer, Titus; Stallings, Sarah; Warden, Michael; Kahlon, Maninder
Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard data-exchange model and resistance of universities to share information about their faculty have presented barriers to forming an institutionally supported national network. This case report describes an initiative, which, in only 6 months, achieved interoperability among seven major research-networking products at 28 universities by taking an approach that focused on addressing institutional concerns and encouraging their participation. With this necessary groundwork in place, the second phase of this effort can begin, which will expand the network's functionality and focus on the end users. PMID:22037890
Cummings, J; Aisen, P; Barton, R; Bork, J; Doody, R; Dwyer, J; Egan, J C; Feldman, H; Lappin, D; Truyen, L; Salloway, S; Sperling, R; Vradenburg, G
Alzheimer's disease (AD) drug development is costly, time-consuming, and inefficient. Trial site functions, trial design, and patient recruitment for trials all require improvement. The Global Alzheimer Platform (GAP) was initiated in response to these challenges. Four GAP work streams evolved in the US to address different trial challenges: 1) registry-to-cohort web-based recruitment; 2) clinical trial site activation and site network construction (GAP-NET); 3) adaptive proof-of-concept clinical trial design; and 4) finance and fund raising. GAP-NET proposes to establish a standardized network of continuously funded trial sites that are highly qualified to perform trials (with established clinical, biomarker, and imaging capability; certified raters; and a sophisticated management system). GAP-NET will conduct trials for academic and biopharma industry partners using standardized instrument versions and administration. Collaboration with the Innovative Medicines Initiative (IMI) European Prevention of Alzheimer's Disease (EPAD) program, the Canadian Consortium on Neurodegeneration in Aging (CCNA) and other similar international initiatives will allow conduct of global trials. GAP-NET aims to increase trial efficiency and quality, decrease trial redundancy, accelerate cohort development and trial recruitment, and decrease trial costs. The value proposition for sites includes stable funding and uniform training and trial execution; the value to trial sponsors is decreased trial costs, reduced time to execute trials, and enhanced data quality. The value for patients and society is the more rapid availability of new treatments for AD.
The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet-forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single-tenant and multitenant scenarios. From the results of the evaluation, it is possible to highlight potentials and limitations of running NFV on OpenStack.
UTM is an application which integrates many security features into a single hardware platform. The reason behind this research is to build a system that protects the network of St. Bellarminus school. The research method used is the spiral method, whereby development of the application is continuous and the application can be modified easily if a new version of the security tools is implemented in the application, or if a better security tool becomes available. The outcome of the system is very good, because it protects the network with a cross-platform firewall, an Intrusion Detection System, a Proxy Server, and email protection against viruses and spam. In conclusion, the application achieves high effectiveness at low cost and is very useful for monitoring and configuring the network of St. Bellarminus school. Keywords: security network, unified threat management, anti virus, server, proxy, firewall
Vimalathithan, S.; Sudarsan, S. D.; Seker, R.; Lenin, R. B.; Ramaswamy, S.
The emerging global reach of technology presents myriad challenges and intricacies as Information Technology teams aim to provide anywhere, anytime, anyone access for service providers and customers alike. The world is fraught with stifling inequalities, both from an economic as well as a socio-political perspective. The net result has been large capability gaps between various organizational locations that need to work together, which has raised new challenges for information security teams. Similar issues arise when mergers and acquisitions among and between organizations take place. While integrating remote business locations with mainstream operations, one or more issues, including lack of application-level support, limited computational capabilities, communication limitations, and legal requirements, cause a serious impediment, complicating integration without violating the organizations' security requirements. Commonly used techniques such as IPSec, tunneling, and secure socket layer may not always be techno-economically feasible. This paper addresses such security issues by introducing an intermediate server, called a stand-off server, between the corporate central server and the remote sites. We present techniques such as break-before-make connection, breaking the connection after transfer, and multiple virtual machine instances with different operating systems, using the concept of a stand-off server. Our experiments show that the proposed solution provides sufficient isolation of the central server/site from attacks arising out of weak communication and/or computing links and is simple to implement.
Involved in many diseases such as cancer, diabetes, and neurodegenerative, inflammatory and respiratory disorders, G-protein-coupled receptors (GPCRs) are among the most frequent targets of therapeutic drugs. It is time-consuming and expensive to determine whether a drug and a GPCR interact with each other in a cellular network purely by means of experimental techniques. Although some computational methods have been developed in this regard based on knowledge of the 3D (three-dimensional) structure of the protein, their usage is unfortunately quite limited because the 3D structures of most GPCRs are still unknown. To overcome this situation, a sequence-based classifier, called "iGPCR-drug", was developed to predict the interactions between GPCRs and drugs in cellular networking. In the predictor, the drug compound is formulated by a 2D (two-dimensional) fingerprint via a 256D vector, the GPCR by the PseAAC (pseudo amino acid composition) generated with grey model theory, and the prediction engine is operated by the fuzzy K-nearest neighbour algorithm. Moreover, a user-friendly web server for iGPCR-drug was established at http://www.jci-bioinfo.cn/iGPCR-Drug/. For the convenience of most experimental scientists, a step-by-step guide is provided on how to use the web server to get the desired results without the need to follow the complicated math equations presented in this paper just for its integrity. The overall success rate achieved by iGPCR-drug via the jackknife test was 85.5%, which is remarkably higher than the rate of the existing peer method developed in 2010, although no web server was ever established for it. It is anticipated that iGPCR-Drug may become a useful high-throughput tool for both basic research and drug development, and that the approach presented here can also be extended to study other drug-target interaction networks.
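The fuzzy K-nearest-neighbour rule named in the abstract assigns a query sample soft class-membership scores weighted by inverse distance to its k nearest training samples, rather than a hard vote. A minimal sketch of that rule is below; the feature vectors, labels and parameter values are toy assumptions of ours, not the authors' GPCR/drug data or exact formulation:

```python
# Illustrative sketch of a fuzzy K-nearest-neighbour rule of the kind
# used as the prediction engine in iGPCR-drug. All data and parameters
# here are hypothetical toy values.
import math

def fuzzy_knn(query, samples, labels, k=3, m=2.0):
    """Return normalized class-membership scores for `query`.

    Each of the k nearest neighbours contributes a weight proportional
    to 1 / d^(2/(m-1)), the classic fuzzy-KNN distance weighting.
    """
    nearest = sorted(
        (math.dist(query, s), lab) for s, lab in zip(samples, labels)
    )[:k]
    scores = {}
    for d, lab in nearest:
        w = 1.0 / max(d, 1e-9) ** (2.0 / (m - 1.0))
        scores[lab] = scores.get(lab, 0.0) + w
    total = sum(scores.values())
    return {lab: s / total for lab, s in scores.items()}

# Toy 2D feature vectors standing in for fingerprint/PseAAC features.
samples = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = ["non-interacting", "non-interacting", "interacting", "interacting"]
print(fuzzy_knn((0.95, 1.0), samples, labels, k=3))
```

In the real predictor the feature vectors are the 256D drug fingerprint and the PseAAC representation of the GPCR sequence, not 2D points.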
González, Apolinar; Aquino, Raúl; Mata, Walter; Ochoa, Alberto; Saldaña, Pedro; Edwards, Arthur
Because battery-powered nodes are required in wireless sensor networks and energy consumption represents an important design consideration, alternative energy sources are needed to provide more effective and optimal function. The main goal of this work is to present an energy-harvesting wireless sensor network platform, the Open Wireless Sensor node (WiSe). The design and implementation of the solar-powered wireless platform are described, including the hardware architecture, firmware, and a POSIX real-time kernel. A sleep and wake-up strategy was implemented to prolong the lifetime of the wireless sensor network. This platform was developed as a tool for researchers investigating wireless sensor networks or for system integrators.
Ou-Yang, Le; Zhang, Xiao-Fei; Wu, Min; Li, Xiao-Li
Recovering gene regulatory networks and exploring the network rewiring between two different disease states are important for revealing the mechanisms behind disease progression. The advent of high-throughput experimental techniques has enabled the possibility of inferring gene regulatory networks and differential networks using computational methods. However, most of existing differential network analysis methods are designed for single-platform data analysis and assume that differences between networks are driven by individual edges. Therefore, they cannot take into account the common information shared across different data platforms and may fail in identifying driver genes that lead to the change of network. In this study, we develop a node-based multi-view differential network analysis model to simultaneously estimate multiple gene regulatory networks and their differences from multi-platform gene expression data. Our model can leverage the strength across multiple data platforms to improve the accuracy of network inference and differential network estimation. Simulation studies demonstrate that our model can obtain more accurate estimations of gene regulatory networks and differential networks than other existing state-of-the-art models. We apply our model on TCGA ovarian cancer samples to identify network rewiring associated with drug resistance. We observe from our experiments that the hub nodes of our identified differential networks include known drug resistance-related genes and potential targets that are useful to improve the treatment of drug resistant tumors. Copyright © 2017 Elsevier Inc. All rights reserved.
Srinivasan, Nikhil; Damsgaard, Jan
Social media have diffused into the everyday lives of many but still pose challenges to individuals regarding use of these platforms. This paper explores the multiple manners in which social media platforms get employed by individuals, based on an examination of four vignettes generated by interviewing individuals within a university context. An analysis of the vignettes and individual use behaviors highlights the tension between network-based adoption of social media platforms and the constraints that the network places on individual use of the platform.
Currently, many people suffer from arrhythmia or hypoxia, which are abnormal health conditions. Arrhythmia occurs when a person has an irregular or abnormal heart rate, while hypoxia is realized when there is a deficiency in oxygen reaching the tissues. When a person suffers from arrhythmia, there is the possibility that the person has cardiovascular disease. A low oxygen level eventually leads to organ failure, which can result in death. To prevent such conditions, a mobile physiological measurement platform has been proposed in this paper. This system will continuously monitor the heart rate and the oxygen level of a patient. The proposed system is mainly beneficial because the medical staff or the caregiver can provide care to patients without being in close proximity. In this way, multiple patients can be treated by the physician at the same time. In this paper, two main physiological signals, the electrocardiogram (ECG) and the photoplethysmogram (PPG), are recorded to measure the heart rate (in beats per minute) and the peripheral capillary oxygen saturation level or SpO2 (in percent) of the patient. This is done by using a convenient graphical user interface (GUI) in the Matrix Laboratory (MATLAB). Pre-processing of the bio-medical signals is done in the GUI and the calculated results are saved as text files in the current directory of MATLAB. We further propose an Android application, which will display the physiological parameters after the text files have been accessed via a wireless network. The heart rate and the oxygen level can both be monitored via this application. In case the results show an abnormal reading, the physician is notified immediately via text messaging. Keywords: ECG, PPG, SpO2, GUI, MATLAB, Android, Android App
Deb, Somnath (Inventor); Ghoshal, Sudipto (Inventor); Malepati, Venkata N. (Inventor); Kleinman, David L. (Inventor); Cavanaugh, Kevin F. (Inventor)
A network-based diagnosis server for monitoring and diagnosing a system, the server being remote from the system it is observing, comprises a sensor for generating signals indicative of a characteristic of a component of the system, a network-interfaced sensor agent coupled to the sensor for receiving signals therefrom, a broker module coupled to the network for sending signals to and receiving signals from the sensor agent, a handler application connected to the broker module for transmitting signals to and receiving signals therefrom, and a reasoner application in communication with the handler application for processing and responding to signals received from the handler application, wherein the sensor agent, broker module, handler application, and reasoner application operate simultaneously relative to each other, such that the diagnosis server of the present invention performs continuous monitoring and diagnosing of said components of the system in real time. The diagnosis server is readily adaptable to various different systems.
A.A. Ketut Agung Cahyawan W
Until now, a network administrator has had to be physically present in the server room to power on the servers there, or to check whether the room temperature is suitable for the servers to operate optimally. Problems arise because the server room is usually located quite far away and must always be kept locked for security reasons. This research designs a control and monitoring system that can power on servers remotely while also monitoring the server room temperature, raising or lowering...
Purpose: This paper aims to introduce an enterprise-wide Web 2.0 learning support platform--SNAP, developed at Victoria University in Melbourne, Australia. Design/methodology/approach: Pointing to the evolution of the social web, the paper discusses the potential for the development of e-learning platforms that employ constructivist, connectivist,…
Cheung, Kit; Schultz, Simon R; Luk, Wayne
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs...
Aranki, N.; Tawel, R.
In this paper we present an FPGA-based reconfigurable computing platform for prototyping and evaluating advanced neural-network-based applications for control and diagnostics in automotive sub-systems.
Mandal, S.; Hegde, G.; Gupta, K.G.
This paper deals with the prediction of stress resultant deflections of a fixed offshore platform under varying environmental loading conditions using neural networks. The manual estimation of stress resultants under varying loading conditions involves tedious...
...using data mining technology and distributed parallel computing methods, an active distribution network security monitoring system model is established based on the PDMiner big data mining platform...
Taslidere, E.; Cohen, F. S.; Reisman, F. K.
This paper presents the use of wireless sensor networks (WSNs) in educational research as a platform for enhanced pedagogical learning. The aim here with the use of a WSN platform was to go beyond the implementation stage to the real-life application stage, i.e., linking the implementation to real-life applications, where abstract theory and…
Arthur Edwards; Alberto Ochoa; Pedro Saldaña; Walter Mata; Raúl Aquino; Apolinar González
Because battery-powered nodes are required in wireless sensor networks and energy consumption represents an important design consideration, alternate energy sources are needed to provide more effective and optimal function. The main goal of this work is to present an energy harvesting wireless sensor network platform, the Open Wireless Sensor node (WiSe). The design and implementation of the solar powered wireless platform is described including the hardware architecture, firmware, and a POSIX Real-Time Kernel. A sleep and wake-up strategy was implemented to prolong the lifetime of the wireless sensor network. This platform was developed as a tool for researchers investigating wireless sensor networks or for system integrators.
The perfect guide to help administrators set up Apple's Mac OS X Lion Server With the overwhelming popularity of the iPhone and iPad, more Macs are appearing in corporate settings. The newest version of Mac Server is the ideal way to administer a Mac network. This friendly guide explains to both Windows and Mac administrators how to set up and configure the server, including services such as iCal Server, Podcast Producer, Wiki Server, Spotlight Server, iChat Server, File Sharing, Mail Services, and support for iPhone and iPad. It explains how to secure, administer, and troubleshoot the networ
Karami, Mojtaba; Rangzan, Kazem; Saberi, Azim
With the emergence of air-borne and space-borne hyperspectral sensors, spectroscopic measurements are gaining more importance in remote sensing. Therefore, the amount of available spectral reference data is constantly increasing. This rapid increase often exhibits poor data management, which leads to the ultimate isolation of data on disk storage. Spectral data without a precise description of the target, methods, environment, and sampling geometry cannot be used by other researchers. Moreover, existing spectral data (even when accompanied by good documentation) become virtually invisible or unreachable to researchers. Providing documentation and a data-sharing framework for spectral data, in which researchers are able to search for or share spectral data and documentation, would definitely improve the data lifetime. Relational Database Management Systems (RDBMS) are the main candidates for spectral data management and their efficiency is proven by many studies and applications to date. In this study, a new approach to spectral data administration is presented based on the spatial identity of spectral samples. This method benefits from the scalability and performance of RDBMS for storage of spectral data, but uses GIS servers to provide users with interactive maps as an interface to the system. The spectral files, photographs and descriptive data are considered as belongings of a geospatial object. A spectral processing unit is responsible for evaluating metadata quality and performing routine spectral processing tasks for newly-added data. As a result, using internet browser software, users are able to visually examine the availability of data and/or search for data based on the descriptive attributes associated with it. The proposed system is scalable and, besides giving users a good sense of what data are available in the database, it facilitates the participation of spectral reference data in producing geoinformation.
Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel; Marzolf, Athneal D
This invention relates generally to robotic systems and is specifically designed for a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system allows a process for sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.
Distance education has been an important development tendency and learning platform with the emphasis of lifelong learning in society. Networked learning and teaching is a main characteristic of distance education, which inevitably needs to transmit large amounts of private data among students, teachers and the education platform. To protect the security of data transmission and storage, a networked security strategy was proposed. The security strategy is based on the technologies of intrusion detection and digital signature. An intrusion detection model was established in accordance with the main tasks of the distance education platform. The encryption process of the digital signature was illustrated along with the information flow of the distance education platform. The paper offers an effective reference for solving security problems of distance education platforms.
In this paper we describe the design, key features and results obtained from the development of a generic platform usable for sensor network applications operational in the ISM band. The goal was to create an open source low cost platform suitable for use in educational environment. The platform should allow students to easily grasp the fundamentals of wireless sensor networks so special attention was paid to basic concepts related to their functioning. Two versions of this platform were designed, the first one being a proof of concept and the second one more adequate to field test and measurements. Practical aspects of implementation such as network protocol, power consumption, processing speed, media access are discussed.
Dinaker Babu Bollini
The sliding window algorithm proposed for determining an optimal sliding window does not consider the waiting times of call setup requests of a mobile station queued at a Mobile Switching Centre (MSC) in the Global System for Mobile (GSM) Communication Network. This study proposes a model integrating the sliding window algorithm with a single-server finite queuing model, referred to as the integrated model, for measuring the realistic throughput of an MSC while considering the waiting times of call setup requests. It assumes that an MSC can process one call setup request at a time. It is useful in determining an optimal sliding window size that maximizes the realistic throughput of an MSC. Though the model assumes that an MSC can process one call setup request at a time, its scope can be extended to measuring the realistic throughput of an MSC that can process multiple call setup requests at a time.
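The integrated model summarized above — a single-server finite queue fed by a sliding window — can be sketched numerically. Assuming Poisson call-setup arrivals, exponential service, and a window size K acting as the system capacity (an M/M/1/K model; the paper's exact formulation may differ), the sketch below searches for the largest window whose mean time in system stays acceptable. All rates and limits here are hypothetical.

```python
def mm1k_metrics(lam, mu, K):
    """Throughput and mean time in system for an M/M/1/K queue
    (K = window size = maximum number of queued call-setup requests)."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    total = sum(weights)
    p = [w / total for w in weights]          # steady-state probabilities
    thr = lam * (1 - p[K])                    # accepted-request throughput
    L = sum(n * p[n] for n in range(K + 1))   # mean number in system
    W = L / thr                               # mean time in system (Little's law)
    return thr, W

# Illustrative search: largest window whose mean wait stays below a limit.
lam, mu, w_max = 0.9, 1.0, 5.0
best_K = max(K for K in range(1, 50) if mm1k_metrics(lam, mu, K)[1] <= w_max)
```

Throughput alone grows with K, so a waiting-time bound (or a cost on delay) is what makes the window size a genuine trade-off.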
After a brief presentation of the DNS and BIND standard for Unix platforms, the paper presents an application whose principal objective is the configuration of the BIND 9 DNS server. The general objectives of the application are presented, followed by a description of the details of designing the program.
Network virtualization is a method of providing virtual instances of physical networks. Virtualized networks are widely used with virtualized servers, forming a powerful dynamically reconfigurable platform. In this paper we discuss the impact of network virtualization on overall system availability. We describe a system reflecting the network architecture usually deployed in today's data centres. The proposed system is modelled using Markov chains and fault trees. We compare the availability of a virtualized system using a standard physical network with the availability of a virtualized system using a virtualized network. Network virtualization introduces a new software layer into the network architecture. The proposed availability model integrates software failures in addition to hardware failures. Based on the estimated numerical failure rates, we analyse the system's availability.
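The availability comparison described above can be illustrated with a minimal steady-state model. Treating each layer as an independent repairable component in series — the system is up only when every layer is up — adding a network virtualization software layer multiplies in one more availability factor below one. The MTTF/MTTR figures below are invented for illustration and are not the paper's estimates.

```python
def availability(mttf, mttr):
    # Steady-state availability of a repairable component
    # (two-state Markov chain: up/down with exponential holding times).
    return mttf / (mttf + mttr)

# Hypothetical MTTF/MTTR values in hours -- illustrative only.
hw_network  = availability(50_000, 8.0)    # physical switches and links
hypervisor  = availability(20_000, 1.0)    # server virtualization layer
virt_net_sw = availability(30_000, 0.5)    # added network virtualization software

# Series structure (fault-tree OR of failures): all layers must be up.
phys_net_system = hw_network * hypervisor
virt_net_system = hw_network * hypervisor * virt_net_sw
```

The virtualized-network variant is strictly less available in this series model; a fuller Markov model would also capture repair dependencies and common-cause failures.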
Hansen, Mads Stenhuus; Jensen, P.; Soldatos, J.
An important issue for the implementation of an agent system, which controls a telecommunications network, is to enable low-level access of the network devices by the agent platform, bypassing the control logic inherent in them. This issue has been coped with successfully in the IMPACT project...
Seyedali Hosseininezhad; Victor C. M. Leung
Heterogeneous wireless networks are capable of providing customers with better services while service providers can offer more applications to more customers with lower costs. To provide services, some applications rely on existing servers in the network. In a vehicular ad-hoc network (VANET) some mobile nodes may function as servers. Due to high mobility of nodes and short lifetime of links, server-to-client and server-to-server communications become challenging. In this paper we propose to ...
Raje, Manali; Mukhopadhyay, Debajyoti
A large amount of electronic data is generated in cloud computing every day. Efficient maintenance of this data requires proper services. Hence, a method to collect data securely, by protecting it and developing backups, is presented. The objective is to provide an Auto Response Server and better solutions for data backup and restoration using the Cloud. Data can be collected and sent to a centralized repository in a platform-independent format without any network consideration. This data can then be used accor...
Mikóczy, E.; Kotuliak, I.; Deventer, M.O. van
This article presents a comparison of main characteristics of the Next Generation Networks (NGN) and Future Generation Internet (FGI). The aim is to discuss and compare two approaches to Future Networks (FN) and services: the evolution of NGN, and the revolutionary approach of a new FGI. We present
This study aims to build a traffic monitoring application that can help a network administrator monitor servers anytime and anywhere by using SMS. The things to be monitored are data traffic and the server network connection. A literature study and a field study were done before designing the application. The result is an application that can send and receive SMS to/from the network administrator, check the connection to the server, and respond to the network administrator in a relatively short time when the connec...
Cognitive radio technology has received wide attention for its ability to sense and use idle frequencies. IEEE 802.22 WRAN, the first standard to adopt cognitive radio technology, is characterized by spectrum sensing and wireless data transmission. As far as wireless transmission is concerned, the availability and implementation of a mature and robust physical layer algorithm are essential to high performance. For the physical layer of WRAN using OFDMA technology, this paper proposes a synchronization algorithm and at the same time provides a public platform for the improvement and verification of that new algorithm. The simulation results show that the performance of the platform is highly close to the theoretical value.
development in the area of secure mobile computing recently, including the development of commercial off-the-shelf (COTS) Android secure platforms such as... Filesystem encryption: from Android 3.0 onwards, full filesystem encryption (using AES128 and SHA256) is supported [Android Security Overview 2013]...gapped computer. In order to mitigate some of these issues related to the provisioning of devices, we built AOSP with the SE Android MMAC changes
Bernat Vercher, Jesús; Perez Marin, Santiago; Gonzalez Lucas, Agustin; Sorribas Mollon, Rafael; Villarrubia Grande, Luis; Campoy Cervera, Luis M.; Hernández Gómez, Luis Alfonso
Ubiquitous Sensor Network (USN) concept describes the integration of heterogeneous and geographically dispersed Wireless Sensor and Actuator Networks (WS&AN) into rich information infrastructures for accurate representation and access to different dynamic user’s physical contexts. This relatively new concept envisions future Sensor-Based Services leading to market disruptive innovations in a broad range of application domains, mainly personal (lifestyle assistants), community (professional us...
Von Solms, S
Flexible, open source network emulation tools can provide network researchers with significant benefits regarding network behaviour and performance. The evaluation of these networks can benefit greatly from the integration of realistic, network...
Course, Microsoft Official Academic
Microsoft Windows Server is a multi-purpose server designed to increase reliability and flexibility of a network infrastructure. Windows Server is the paramount tool used by enterprises in their datacenter and desktop strategy. The most recent versions of Windows Server also provide both server and client virtualization. Its ubiquity in the enterprise results in the need for networking professionals who know how to plan, design, implement, operate, and troubleshoot networks relying on Windows Server. Microsoft Learning is preparing the next round of its Windows Server Certification program
Xing, Fangyuan; Wang, Honghuan; Yin, Hongxi; Li, Ming; Luo, Shenzi; Wu, Chenguang
With the extensive application of cloud computing and data centres, as well as the constantly emerging services, big data with burst characteristics has brought huge challenges to optical networks. Consequently, the software defined optical network (SDON), which combines optical networks with software defined networking (SDN), has attracted much attention. In this paper, an OpenFlow-enabled optical node employed in optical cross-connects (OXC) and reconfigurable optical add/drop multiplexers (ROADM) is proposed. An open-source OpenFlow controller is extended with routing strategies. In addition, an experiment platform based on the OpenFlow protocol for software defined optical networks is designed. The feasibility and availability of the OpenFlow-enabled optical nodes and the extended OpenFlow controller are validated by connectivity tests, protection switching and load balancing experiments on this test platform.
Grigoriev, M. [Fermilab; DeMar, P. [Fermilab; Tierney, B. [LBL, Berkeley; Lake, A. [LBL, Berkeley; Metzger, J. [LBL, Berkeley; Frey, M. [Bucknell U.; Calyam, P. [Ohio State U.
The E-Center is a social collaborative web-based platform for assisting network users in understanding network conditions across network paths of interest to them. It is designed to give a user the necessary tools to isolate, identify, and resolve network performance-related problems. E-Center provides network path information on a link-by-link level, as well as from an end-to-end perspective. In addition to providing current and recent network path data, E-Center is intended to provide a social media environment for users to share issues, ideas, concerns, and problems. The product has a modular design that accommodates integration of other network services that make use of the same network path and performance data.
Terminology and multilingualism have been among the main focuses of the Athena Project. Linked Heritage, as a legacy of this project, also deals with terminology and brings theory to practice by applying the recommendations given in the Athena Project. Linked Heritage, as a direct follow-up of these recommendations on terminology and multilingualism, is currently working on the development of a Terminology Management Platform (TMP). This platform will allow any cultural institution to register, SKOSify and manage its terminology in a collaborative way. The Terminology Management Platform will provide a network of multilingual and cross-domain terminologies.
The purpose of this research is to analyze and design a network between a head office, a branch office, and company mobile users, which can be used to increase the performance and effectiveness of the company in doing its business processes. Three main methods were used in this research: library study, analysis, and design. The library study method was done by searching theoretical sources, knowledge, and other information from books, articles in the library, and internet pages. The analysis method was done by observing the company network and interviewing to acquire a description of the current business process and identify problems which can be solved by using network technology. Meanwhile, the design method was done by making a network topology diagram, determining the elements needed to design a VPN technology, then suggesting a configuration system, and testing whether the suggested system could run well or not. The result is that the network between the head and branch offices and the mobile users can be connected successfully using VPN technology. In conclusion, the connected network between the head and branch offices creates a centralized company database, and the suggested VPN network has run well by encapsulating the data packages that are sent. Keywords: network, Virtual Private Network (VPN), library study, analysis, design
Muniz, Frederico B.; Araújo, Luciano V.; Nunes, Fátima L. S.
Computer-aided diagnosis systems using medical images and three-dimensional models as input data have greatly expanded and developed, but in terms of building suitable image databases to assess them, the challenge remains. Although there are some image databases available for this purpose, they are generally limited to certain types of exams or contain a limited number of medical cases. The objective of this work is to present the concepts and the development of a collaborative platform for sharing medical images and three-dimensional models, providing a resource to share and increase the number of images available for researchers. The collaborative cloud platform, called CATALYZER, aims to increase the availability and sharing of graphic objects, including 3D images, and their reports that are essential for research related to medical images. A survey conducted with researchers and health professionals indicated that this could be an innovative approach in the creation of medical image databases, providing a wider variety of cases together with a considerable amount of shared information among its users.
The core concepts and technologies you need to administer a Windows Server OS Administering a Windows operating system (OS) can be a difficult topic to grasp, particularly if you are new to the field of IT. This full-color resource serves as an approachable introduction to understanding how to install a server, the various roles of a server, and how server performance and maintenance impacts a network. With a special focus placed on the new Microsoft Technology Associate (MTA) certificate, the straightforward, easy-to-understand tone is ideal for anyone new to computer administration looking t
Jiao, Zheng; Ma, Kun
The investigation shows that the difficulties students encounter in the optics course are mainly due to the abstractness of the course content, and the difficulty of showing descriptions of physical phenomena and processes in classroom teaching. We consider integrating information technology with classroom teaching. Teachers can set up course websites and create more teaching resources, such as videos of experimental processes, designs of simulated optical paths, mock demonstrations of optical phenomena, and so on. Teachers can use courseware to link to the resources of the website platform and display the related resources to the students. After class, students are also able to learn through the website, which is helpful to their study.
Most public cloud providers today treat computing and storage resources as the user's main demand, making it difficult for users to deploy complex networks in the public cloud. This paper proposes a virtual cloud platform with the network as the core demand of the user, which can provide the user with the capacity for free network architecture as well as all kinds of virtual resources. Networks are isolated by port groups of the virtual distributed switch, and data forwarding and access control between different network segments are implemented by virtual machines loading a soft-routing system. This paper also studies the management interface for the network architecture and a uniform way to access the remote desktops of virtual resources on the web, hoping to provide some new ideas for the Network as a Service model.
Calient Networks, a provider of intelligent all-optical switching systems and software, will team with the California Institute for Telecommunications and Information Technology (Cal-(IT)2) and the University of Illinois at Chicago (UIC) on development of the "OptIPuter," a powerful distributed cyber-infrastructure project designed to support data-intensive scientific research and collaboration (1/2 page).
Kerczewski, Robert J.; Bhasin, Kul B.; Fabian, Theodore P.; Griner, James H.; Kachmar, Brian A.; Richard, Alan M.
The continuing technological advances in satellite communications and global networking have resulted in commercial systems that now can potentially provide capabilities for communications with space-based science platforms. This reduces the need for expensive government owned communications infrastructures to support space science missions while simultaneously making available better service to the end users. An interactive, high data rate Internet type connection through commercial space communications networks would enable authorized researchers anywhere to control space-based experiments in near real time and obtain experimental results immediately. A space based communications network architecture consisting of satellite constellations connecting orbiting space science platforms to ground users can be developed to provide this service. The unresolved technical issues presented by this scenario are the subject of research at NASA's Glenn Research Center in Cleveland, Ohio. Assessment of network architectures, identification of required new or improved technologies, and investigation of data communications protocols are being performed through testbed and satellite experiments and laboratory simulations.
This article presents a comparison of the main characteristics of the Next Generation Networks (NGN) and Future Generation Internet (FGI). The aim is to discuss and compare two approaches to Future Networks (FN) and services: the evolution of NGN, and the revolutionary approach of a new FGI. We present both frameworks from the services point of view as they are delivered to the end-user, as well as from the architectural point of view. We compare selected properties of both approaches to explain commonalities and differences. Their challenges are similar: managing the quality of experience, mobility, security, scalability and providing openness to applications. Based on this comparison, we evaluate possible areas for future convergence in the approach of the two architectures to the Future Network concept. Our analysis shows that despite their different backgrounds, the internet's FGI and telco's NGN are not that different after all. The convergence of the two approaches therefore seems the only logical way forward.
J. L. Xu
This paper takes the graduate course "Theories and Methods of Environmental Geography" as an example and describes the practice of network-assisted teaching based on the Blackboard platform. It objectively analyzes and summarizes the key results and innovations of the practical process, discusses the existing problems and offers some suggestions in accordance with the current teaching situation, and is expected to provide a reference for the construction of other network courses.
This paper outlines a comprehensive model to increase system efficiency, preserve network bandwidth, monitor incoming and outgoing packets, ensure the security of confidential files and reduce power wastage in an organization. This model illustrates the use and potential application of a Network Analysis Tool (NAT) in a multi-computer set-up of any scale. The model is designed to run in the background and not hamper any currently executing applications, while using minimum system resources. I...
Yuko Kamiya; Toshihiko Shimokawa; Fuminori Tanizaki; Norihiko Yoshida
When providing broadband content, providing sufficient network bandwidth is important. Existing content delivery networks have mainly focused on increasing network bandwidth statically and are therefore not flexible. In this paper, we propose Soarin, a novel content delivery system that increases network bandwidth dynamically by deploying delivery servers over a wide area. Moreover, Soarin can use various server deployment policies to deploy delivery servers; it can decide which server is suitable for...
"Chiaro Networks, the developer of true infrastructure-class Internet Protocol/Multi-Protocol Label Switching (IP/MPLS) platforms, today announced that its Enstara router has been selected by the European Organization for Nuclear Research (CERN) for its DataTAG project" (1 page)
De Jong, Tim; Fuertes, Alba; Schmeits, Tally; Specht, Marcus; Koper, Rob
De Jong, T., Fuertes, A., Schmeits, T., Specht, M., & Koper, R. (2009). A Contextualised Multi-Platform Framework to Support Blended Learning Scenarios in Learning Networks. In D. Goh (Ed.), Multiplatform E-Learning Systems and Technologies: Mobile Devices for Ubiquitous ICT-Based Education (pp.
Renaud, S; Tomas, J; Lewis, N; Bornat, Y; Daouzli, A; Rudolph, M; Destexhe, A; Saïghi, S
Many hardware-based solutions now exist for the simulation of bio-like neural networks. Less conventional than software-based systems, these types of simulators generally combine digital and analog forms of computation. In this paper we present a mixed hardware-software platform, specifically designed for the simulation of spiking neural networks, using conductance-based models of neurons and synaptic connections with dynamic adaptation rules (Spike-Timing-Dependent Plasticity). The neurons and networks are configurable, and are computed in 'biological real time', by which we mean that the difference between simulated time and simulation time is guaranteed to be lower than 50 μs. After presenting the issues and context involved in the design and use of hardware-based spiking neural networks, we describe the analog neuromimetic integrated circuits which form the core of the platform. We then explain the organization and computation principles of the modules within the platform, and present experimental results which validate the system. Designed as a tool for computational neuroscience, the platform is exploited in collaborative research projects together with neurobiology and computer science partners.
To prepare business communication undergraduates for a changing work world and to engage today's tech-savvy students, many instructors have embraced social media by incorporating its use in the classroom. This article describes AxeCorp, a fictional company headquartered on the immersive social networking platform, Second Life, and one particular…
Mizuno, Shinya; Iwamoto, Shogo; Seki, Mutsumi; Yamaki, Naokazu
In recent social experiments, rental motorbikes and rental bicycles have been arranged at nodes, and environments where users can ride these bikes have been improved. When people borrow bikes, they return them to nearby nodes. Some experiments have been conducted using the models of Hamachari of Yokohama, the Niigata Rental Cycle, and Bicing. However, from these experiments, the effectiveness of distributing bikes was unclear, and many models were discontinued midway. Thus, we need to consider whether these models are effectively designed to represent the distribution system. Therefore, we construct a model to arrange the nodes for distributing bikes using a queueing network. To adopt realistic values for our model, we use the Google Maps application program interface. Thus, we can easily obtain values of distance and transit time between nodes in various places in the world. Moreover, we apply the distribution of a population to a gravity model and we compute the effective transition probability for this queueing network. If the arrangement of the nodes and number of bikes at each node is known, we can precisely design the system. We illustrate our system using convenience stores as nodes and optimize the node configuration. As a result, we can optimize simultaneously the number of nodes, node places, and number of bikes for each node, and we can construct a base for a rental cycle business to use our system.
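As an illustration of the gravity-model step described in this abstract, the sketch below computes transition probabilities for a queueing network of bike-sharing nodes. The node populations, distances, and the exact normalization convention are hypothetical assumptions, not taken from the paper.

```python
def gravity_transition_matrix(populations, distances):
    """Build a row-stochastic transition matrix for a queueing network of
    bike-sharing nodes using a gravity model: the attraction from node i
    to node j is taken proportional to P_i * P_j / d_ij^2."""
    n = len(populations)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        weights = [
            populations[i] * populations[j] / distances[i][j] ** 2 if i != j else 0.0
            for j in range(n)
        ]
        total = sum(weights)
        for j in range(n):
            P[i][j] = weights[j] / total
    return P

# Hypothetical example: three convenience-store nodes with populations and
# pairwise distances (km) such as Google Maps could supply.
pops = [1200, 800, 500]
dist = [[0, 2.0, 4.0],
        [2.0, 0, 3.0],
        [4.0, 3.0, 0]]
P = gravity_transition_matrix(pops, dist)
```

In this toy instance the nearer, more populous node 1 attracts a larger share of trips from node 0 than the distant node 2 does, which is the qualitative behavior a gravity model is meant to capture.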
With the advancement of computing and network virtualization technology, the networking research community has shown great interest in network emulation. Compared with network simulation, network emulation can provide more relevant and comprehensive details. In this paper, EmuStack, a large-scale real-time emulation platform for Delay Tolerant Networks (DTNs), is proposed. EmuStack aims to make network emulation as simple as network simulation. Based on OpenStack, distributed synchronous emulation modules are developed to enable EmuStack to implement synchronous, dynamic, precise, and real-time network emulation. Meanwhile, the lightweight approach of using Docker container technology and network namespaces allows EmuStack to support a large-scale topology (up to hundreds of nodes) with only several physical nodes. In addition, EmuStack integrates the Linux Traffic Control (TC) tools with OpenStack to manage and emulate virtual link characteristics, including variable bandwidth, delay, loss, jitter, reordering, and duplication. Finally, experiences with our initial implementation suggest the ability to run and debug experimental network protocols in real time. EmuStack could bring a qualitative change to network research.
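The link impairments this abstract attributes to Linux Traffic Control are typically applied through the netem queueing discipline. The sketch below builds such a `tc` command line; the device name and parameter values are hypothetical, and the actual EmuStack/OpenStack integration is not shown.

```python
def netem_command(dev, delay_ms=None, jitter_ms=None, loss_pct=None,
                  duplicate_pct=None, reorder_pct=None):
    """Build a `tc qdisc` command line that applies netem impairments
    (delay, jitter, loss, duplication, reordering) to a network device."""
    parts = ["tc", "qdisc", "add", "dev", dev, "root", "netem"]
    if delay_ms is not None:
        parts += ["delay", f"{delay_ms}ms"]
        if jitter_ms is not None:
            parts += [f"{jitter_ms}ms"]
    if loss_pct is not None:
        parts += ["loss", f"{loss_pct}%"]
    if duplicate_pct is not None:
        parts += ["duplicate", f"{duplicate_pct}%"]
    if reorder_pct is not None:
        parts += ["reorder", f"{reorder_pct}%"]
    return " ".join(parts)

# A DTN-like link: 200 ms +/- 50 ms delay with 1% loss on a virtual interface
cmd = netem_command("veth0", delay_ms=200, jitter_ms=50, loss_pct=1.0)
```

Running the generated command requires root privileges; note that netem only reorders packets when a delay is also configured.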
Plesea, Lucian; Wood, James F.
This software is a simple yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of OGC WMS 1.1.1 as a fastCGI client, using the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are done on a back-end server. The server has explicit support for a colocated tiled WMS, including rapid response to black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back-end support allows great flexibility in data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use and, depending on the storage format used, it has better performance than other available implementations. The WMS server 2.0 is a high-performance WMS implementation due to the fastCGI architecture. The use of a GDAL data back end allows for great flexibility. The configuration is relatively simple, based on a single XML file. It provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
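A WMS 1.1.1 GetMap request, as served by software like the above, is a plain HTTP query string. The sketch below composes one; the parameter names are fixed by the OGC WMS 1.1.1 specification, but the endpoint, layer name, and bounding box are made-up examples.

```python
from urllib.parse import urlencode

def getmap_url(base, layers, bbox, width, height,
               srs="EPSG:4326", fmt="image/jpeg"):
    """Compose an OGC WMS 1.1.1 GetMap URL. bbox is (minx, miny, maxx, maxy)
    in the units of the requested SRS."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layers,
        "STYLES": "",
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base + "?" + urlencode(params)

# Hypothetical request for a whole-earth mosaic at 1024x512 pixels
url = getmap_url("http://example.org/wms", "global_mosaic",
                 (-180, -90, 180, 90), 1024, 512)
```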
Nitsch, Daniela; Tranchevent, Léon-Charles; Goncalves, Joana P.
PINTA (available at http://www.esat.kuleuven.be/pinta/; this web site is free and open to all users and there is no login requirement) is a web resource for the prioritization of candidate genes based on the differential expression of their neighborhood in a genome-wide protein–protein interaction...... and is available for five species (human, mouse, rat, worm and yeast). As input data, PINTA only requires disease-specific expression data, whereas various platforms (e.g. Affymetrix) are supported. As a result, PINTA computes a gene ranking and presents the results as a table that can easily be browsed...
Benioff, Ron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bazilian, Morgan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cox, Sadie [National Renewable Energy Lab. (NREL), Golden, CO (United States); Uriarte, Caroline [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kecman, Ana [United Nations Industrial Development Organization, Vienna (Austria); De Simone, Giuseppe [United Nations Industrial Development Organization, Vienna (Austria); Kitaoka, Kazuki [United Nations Industrial Development Organization, Vienna (Austria); Ploutakhina, Marina [United Nations Industrial Development Organization, Vienna (Austria); Radka, M. [United Nations Environment Programme, Nairobi (Kenya)
Considerable effort has been made to address the transition to a low-carbon economy. A key focus of these efforts has been the development of national low-emission development strategies (LEDS). One enabler of these plans is the existence of well-functioning national, regional and international low-emission development networks and knowledge platforms. To better understand the role of LEDS, we examine this area in relation to network theory. We present a review of strengths and weaknesses of existing LEDS networks that builds on the findings of a study conducted by the Coordinated Low Emission Assistance Network (CLEAN). Based on the insights from theory and a mapping of the climate-related network space, we identify opportunities for further refinement of LEDS networks.
Syed Tahir Hussain Rizvi; Denis Patti; Tomas Björklund; Gianpiero Cabodi; Gianluca Francini
The realization of a deep neural architecture on a mobile platform is challenging, but can open up a number of possibilities for visual analysis applications. A neural network can be realized on a mobile platform by exploiting the computational power of the embedded GPU and simplifying the flow of a neural architecture trained on the desktop workstation or a GPU server. This paper presents an embedded platform-based Italian license plate detection and recognition system using deep neural clas...
This book is for Windows network administrators, analysts, or architects with a grasp of the basic operations of Active Directory who are looking for a book that goes beyond rudimentary operations. However, all of the concepts are explained from the g
Dimitrios D. Piromalis
In this paper, the architecture of a versatile networking and control platform for Light-Emitting Diode (LED) lighting applications is presented, based on embedded wireless and wired networking technologies. All the possible power and control signal distribution topologies of the lighting fixtures are examined, with particular focus on dynamic lighting applications and design metrics such as cost, required wiring installation expenses and maintenance complexity. The proposed platform is optimized for applications where the grouping of LED-based lighting fixtures into clusters is essential, as well as their synchronization. With such an approach, the distributed control and synchronization of LED lighting fixture clusters is performed through a versatile network that uses the single-wire Local Interconnect Network (LIN) bus. The proposed networking platform is presented in terms of its physical layer architecture, its data protocol configuration, and its functionality for smart control. As a proof of concept, the design of a LED lighting fixture together with a LIN-to-IEEE802.15.4/ZigBee data gateway is presented.
Background: Reconstruction of gene and/or protein networks from automated analysis of the literature is one of the current targets of text mining in biomedical research. Some user-friendly tools already perform this analysis on precompiled databases of abstracts of scientific papers. Other tools allow expert users to elaborate and analyze the full content of a corpus of scientific documents. However, to our knowledge, no user-friendly tool is available that simultaneously analyzes the latest set of scientific documents available online and reconstructs the set of genes referenced in those documents. Results: This article presents such a tool, Biblio-MetReS, and compares its functioning and results to those of other widely used user-friendly applications (iHOP, STRING). Under similar conditions, Biblio-MetReS creates networks that are comparable to those of other user-friendly tools. Furthermore, analysis of full-text documents provides more complete reconstructions than those that result from using only the abstract of the document. Conclusions: Literature-based automated network reconstruction is still far from providing complete reconstructions of molecular networks. However, its value as an auxiliary tool is high and it will increase as standards for reporting biological entities and relationships become more widely accepted and enforced. Biblio-MetReS can be downloaded from http://metres.udl.cat/. It provides an easy-to-use environment for researchers to reconstruct their networks of interest from an always up-to-date set of scientific documents.
Wireless network technology has been around for a long time and has continued to develop, but this technology depends on existing network infrastructure. This becomes a weakness when the network infrastructure is disrupted, because any communication passing through that infrastructure will not reach its destination. Mobile Ad-hoc Network (MANET) technology was created in anticipation of network infrastructure outages. In a MANET, communication does not require network infrastructure because every node in the network is mobile. To test the capabilities of MANET, this study applies the File Transfer Protocol (FTP) as the medium for file-transfer data communication implemented on a MANET. The tests that were carried out show that file transfer works well when applied on a MANET.
We describe a Peer-to-Peer (P2P) network that is designed to support Video on Demand (VoD) services. This network is based on a video-file sharing mechanism that classifies peers according to the window (segment) of the file that they are downloading. This classification easily allows identifying peers that are able to share windows among them, so one of our major contributions is the definition of a mechanism that could be implemented to efficiently distribute video content in future 5G networks. Considering that cooperation among peers can be insufficient to guarantee appropriate system performance, we also propose that this network be assisted by upload bandwidth from servers; since these resources represent an extra cost to the service provider, especially in mobile networks, we complement our work by defining a scheme that efficiently allocates them only to those peers that are in windows with resource scarcity (we call it the prioritized windows distribution scheme). On the basis of a fluid model and a Markov chain, we also developed a methodology that allows us to select the system parameter values (e.g., window sizes or minimum server upload bandwidth) that satisfy a set of Quality of Experience (QoE) parameters.
"Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .
Kim, Kangil; Park, Yoo Min; Yoon, Hyun C.; Yang, Sang Sik
Osteoarthritis (OA) is one of the most common human diseases, and the occurrence of OA is likely to increase as the population ages. The diagnosis of OA is based on patient-relevant measures, structural measures, and measurement of biomarkers that are released through joint metabolism. Traditionally, radiography or magnetic resonance imaging (MRI) is used to diagnose OA and predict its course. However, diagnostic imaging in OA provides only indirect information on pathology and treatment response. Sensing of OA based on the detection of biomarkers significantly improves the accuracy and sensitivity of diagnosis and reduces the cost compared with that of radiography or MRI. In our former study, we proposed a microfluidic platform to detect a biomarker of OA, but that platform could detect only one biomarker because it had a single microfluidic channel. In this report, we propose a microfluidic platform that can detect several biomarkers. The proposed platform has three layers. The bottom layer has gold patterns on a Si substrate for optical sensing. The middle and top layers were fabricated from polydimethylsiloxane (PDMS) using soft lithography. The middle layer has four channels connecting the top layer to the bottom layer. The top layer consists of one sample injection inlet and four antibody injection inlets. To this end, we designed a flow-balanced microfluidic network using the analogy between electric and hydraulic systems. The designed microfluidic network was confirmed by finite element model (FEM) analysis using COMSOL FEMLAB. To verify the efficiency of the fabricated platform, an optical sensing test was performed to detect a biomarker of OA using a fluorescence microscope. We used cartilage oligomeric matrix protein (COMP) as the biomarker because it reflects specific changes in joint tissues. The platform successfully detected various concentrations of COMP (0, 100, 500, 1000 ng/ml) at each chamber. The effectiveness of the microfluidic platform was verified
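The electric-hydraulic analogy used to balance the four-channel network can be illustrated as follows: a wide rectangular microchannel behaves like a resistor with R ≈ 12μL/(w·h³) (valid for w ≫ h), with pressure drop playing the role of voltage and flow rate the role of current. All dimensions and fluid properties below are hypothetical, not taken from the paper.

```python
def hydraulic_resistance(mu, L, w, h):
    """Approximate hydraulic resistance of a wide rectangular channel
    (w >> h): R = 12*mu*L / (w*h^3), the analogue of electrical R = V/I."""
    return 12 * mu * L / (w * h ** 3)

def parallel_flows(delta_p, resistances):
    """Flow through each of several parallel channels sharing a common
    pressure drop: Q_i = dP / R_i (Ohm's-law analogue)."""
    return [delta_p / r for r in resistances]

mu = 1.0e-3   # viscosity of water, Pa*s
dp = 1.0e3    # applied pressure drop, Pa
# Four channels with identical geometry receive identical flow -- the
# flow-balancing design goal for the four detection chambers.
R = [hydraulic_resistance(mu, L=0.01, w=200e-6, h=50e-6) for _ in range(4)]
Q = parallel_flows(dp, R)
```

The strong h³ dependence is why channel height is the dominant tuning knob: doubling h cuts the resistance by a factor of eight.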
Zhao, Tao; Karamcheti, Vijay
Future scalable, high throughput, and high performance applications are likely to execute on platforms constructed by clustering multiple autonomous distributed servers, with resource access governed...
A. M. Vasilev
The paper presents an agent substitution algorithm for a dataflow network implemented on the Smart-M3 platform. Such a substitution allows control and computational context to be transferred from an unexpectedly disconnected agent to a programmable substitute agent for the period of the first agent's absence from the network. It also guarantees integrity of the information flow, i.e. the functioning of all dependent services is not disrupted after the agent disconnects. When the agent returns to the network, the reverse substitution occurs, again keeping the integrity of the information flow. The paper gives a description of the dataflow network implementation and of the structure of the substitution mechanism on the Smart-M3 platform. A detailed description of the substitution algorithm, including the initialization, registration, and bidirectional substitution phases, is given. The proposed substitution algorithm was implemented by the authors in the substitution mechanism as a part of the RedSIB semantic information broker on the Smart-M3 platform.
Grieder, T.; Huser, A.
As a result of this work, sample texts, so-called performance sheets, have been drawn up for the invitation to tender for IT devices. As a supplement to the standard technical requirements, such as computer performance, memory capacity, etc., these texts cover the aspects of energy efficiency. The performance sheets can be enclosed with the invitations to tender as an appendix, or be used directly as text modules. They are supplemented by explanatory texts, which give information regarding technical terms, labels and possible technical realizations. Performance sheets and explanatory texts are included in the appendix to this report. The goal of these activities is to exert pressure on the market, which should ultimately lead to more efficient units. In addition, however, these texts should serve to make the offices placing the invitations to tender more aware of the energy efficiency aspect. Energy saving functions are fairly common for PCs and monitors nowadays. Reference to proved technical realisations can be made in the performance sheets. The situation is more difficult for servers. Although some technical solutions have been initiated, very little is known about practical applications. Further activities are necessary here. (author)
Jain, Madhu; Mittal, Ragini
The ever-increasing demand of subscribers has put pressure on the capacity of wireless networks around the world. To utilize these scarce resources, in the present paper we propose an optimal allocation scheme for an integrated wireless/cellular model with handoff priority and handoff guarantee services. The suggested algorithm optimally allocates the resources in each cell and dynamically adjusts thresholds to control admission. To give priority to handoff calls over new calls, the provision of guard channels and a subrating scheme is taken into consideration. A handoff voice call may balk and renege from the system while waiting in the buffer. An iterative algorithm is implemented to generate the arrival rate of handoff calls in each cell. Various performance indices are established in terms of steady-state probabilities. A sensitivity analysis has also been carried out to examine the tractability of the algorithms and to explore the effects of system descriptors on the performance indices.
de Vera, David Díaz Pardo; Izquierdo, Álvaro Sigüenza; Vercher, Jesús Bernat; Gómez, Luis Alfonso Hernández
Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678
[Snippet from sections "C. Network Architecture" and "D. Design Characteristics" (Server-Based BYOD Network Architecture)] From a scalability perspective, the server-based network architecture must... landscape and has empowered employees to conduct work-related business from the comfort of their own phone, tablet, or other personal electronic device
We present a simulation platform for access selection algorithms in heterogeneous wireless networks, called “ABCDecision”. The simulator implements the different parts of an Always Best Connected (ABC) system, including the Access Technology Selector (ATS), Radio Access Networks (RANs), and users. After describing the architecture of the simulator, we give an overview of existing decision algorithms for access selection. We then propose a new selection algorithm for heterogeneous networks and run a set of simulations to evaluate the performance of the proposed algorithm in comparison with the existing ones. The performance results, in terms of occupancy rate, show that our algorithm achieves a load-balancing distribution between networks by taking into consideration the capacities of the available cells.
Active distribution networks are characterized by complex structure, high DG penetration, large load fluctuations, and strict control requirements. Their operational data exhibit high volume, high velocity, diversity and value. For active distribution network data processing, based on cloud computing theory and using data mining technology and distributed parallel computing methods, an active distribution network security monitoring system model is established on the PDMiner big data mining platform. The processing of historical data and of real-time fault data are studied respectively. The results show that the system processes historical data for risk zoning, development planning and operation-state evaluation, and processes fault data for fault analysis and handling, providing a basis for distribution network security. The system is verified by a simulation example.
Narang, Pankaj; Khan, Shawez; Hemrom, Anmol Jaywant; Lynn, Andrew Michael
Metabolic reactions have been extensively studied and compiled over the last century. These have provided a theoretical base to implement models, simulations of which are used to identify drug targets and optimize metabolic throughput at a systemic level. While tools for the perturbation of metabolic networks are available, their applications are limited and restricted as they require varied dependencies and often a commercial platform for full functionality. We have developed MetaNET, an open-source, user-friendly, platform-independent and web-accessible resource consisting of several pre-defined workflows for metabolic network analysis. MetaNET incorporates a range of functions which can be combined to produce different simulations related to metabolic networks. These include: (i) optimization of an objective function for the wild-type strain and gene/catalyst/reaction knock-out/knock-down analysis using flux balance analysis; (ii) flux variability analysis; (iii) chemical species participation; (iv) cycle and extreme path identification; and (v) choke point reaction analysis to facilitate identification of potential drug targets. The platform is built using custom scripts along with the open-source Galaxy workflow and the Systems Biology Research Tool as components. Pre-defined workflows are available for common processes, and an exhaustive list of over 50 functions is provided for user-defined workflows. MetaNET, available at http://metanet.osdd.net, provides a user-friendly, rich interface allowing the analysis of genome-scale metabolic networks under various genetic and environmental conditions. The framework permits the storage of previous results, the ability to repeat analyses and share results with other users over the internet, and the ability to run different tools simultaneously using pre-defined workflows and user-created custom workflows.
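Choke point reaction analysis, one of the workflows this abstract lists, can be sketched in a few lines: a choke point is a reaction that is the sole consumer or sole producer of some metabolite, which makes it a candidate drug target. The toy network below is hypothetical and is not MetaNET's implementation.

```python
def choke_points(reactions):
    """reactions: dict mapping reaction name -> (consumed, produced) sets of
    metabolites. A reaction is a choke point if it is the only consumer or
    the only producer of at least one metabolite."""
    consumers, producers = {}, {}
    for rxn, (cons, prod) in reactions.items():
        for m in cons:
            consumers.setdefault(m, set()).add(rxn)
        for m in prod:
            producers.setdefault(m, set()).add(rxn)
    chokes = set()
    for table in (consumers, producers):
        for rxns in table.values():
            if len(rxns) == 1:       # unique consumer or unique producer
                chokes |= rxns
    return chokes

# Hypothetical toy network: R1 is the only producer of B and
# R2 its only consumer, while A and C each have two neighbors.
toy = {
    "R1": ({"A"}, {"B"}),
    "R2": ({"B"}, {"C"}),
    "R3": ({"A"}, {"C"}),
}
cp = choke_points(toy)
```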
Shi, Heyuan; Song, Xiaoyu; Gu, Ming; Sun, Jiaguang
The vehicular participatory sensing network (VPSN) is now becoming more and more prevalent and has shown great potential in various applications. A general VPSN consists of many tasks from task publishers, trading platforms and a crowd of participants. Some literature treats publishers and the trading platform as a whole, which is impractical since they are two independent economic entities with respective purposes. For a trading platform in a market, the purpose is to maximize profit by selecting tasks and recruiting participants who satisfy the requirements of the accepted tasks, rather than to improve the quality of each task. This scheduling problem for a trading platform consists of two parts: which tasks should be selected, and which participants should be recruited? In this paper, we investigate the scheduling problem in vehicular participatory sensing with the predictable mobility of each vehicle. A genetic-based trading scheduling algorithm (GTSA) is proposed to solve the scheduling problem. Experiments with a realistic dataset of taxi trajectories demonstrate that the GTSA algorithm is efficient for trading platforms to gain considerable profit in a VPSN.
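A genetic-based scheduler in the spirit of GTSA can be sketched as a GA over bit-strings, where bit i selects task i and fitness is the profit of the selected tasks subject to a recruiting-budget constraint. The task values, costs, and GA parameters below are all hypothetical assumptions, not taken from the paper.

```python
import random

def ga_select_tasks(profits, costs, budget, pop_size=40, gens=60, seed=1):
    """Maximize total profit of selected tasks with total recruiting cost
    within budget, using a simple generational GA with elitism, tournament
    selection, one-point crossover and bit-flip mutation. Infeasible
    genomes (over budget) score zero."""
    rng = random.Random(seed)
    n = len(profits)

    def fitness(genome):
        cost = sum(c for g, c in zip(genome, costs) if g)
        return sum(p for g, p in zip(genome, profits) if g) if cost <= budget else 0

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                                  # elitism: keep best-so-far
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)    # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                       # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best, fitness(best)

# Hypothetical task market: profit and recruiting cost per task
profits = [10, 7, 5, 12, 3]
costs   = [4, 3, 2, 6, 1]
best, profit = ga_select_tasks(profits, costs, budget=10)
```

The real GTSA additionally models participant recruitment and predictable vehicle mobility; this sketch shows only the select-tasks-under-budget skeleton.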
Chowdhury, A. K. M. Rezaul Haque; Tavangar, Amirhossein; Tan, Bo; Venkatakrishnan, Krishnan
Carbon nanomaterials have been investigated for various biomedical applications. In most cases, however, these nanomaterials must be functionalized biologically or chemically due to their biological inertness or possible cytotoxicity. Here, we report the development of a new carbon nanomaterial with a bioactive phase that significantly promotes cell adhesion. We synthesize the bioactive phase by introducing self-assembled nanotopography and altered nano-chemistry to graphite substrates using ultrafast laser. To the best of our knowledge, this is the first time that such a cytophilic bio-carbon is developed in a single step without requiring subsequent biological/chemical treatments. By controlling the nano-network concentration and chemistry, we develop platforms with different degrees of cell cytophilicity. We study quantitatively and qualitatively the cell response to nano-network platforms with NIH-3T3 fibroblasts. The findings from the in vitro study indicate that the platforms possess excellent biocompatibility and promote cell adhesion considerably. The study of the cell morphology shows a healthy attachment of cells with a well-spread shape, overextended actin filaments, and morphological symmetry, which is indicative of a high cellular interaction with the nano-network. The developed nanomaterial possesses great biocompatibility and considerably stimulates cell adhesion and subsequent cell proliferation, thus offering a promising path toward engineering various biomedical devices. PMID:28287138
Lahrmann, Harry; Agerholm, Niels; Juhl, Jens
This paper presents the project entitled “ITS Platform North Denmark” which is used as a test platform for Intelligent Transportation System (ITS) solutions. The platform consists of a newly developed GNSS/GPRS On Board Unit (OBU) to be installed in 500 cars, a backend server and a specially...
Carreras, P.; Elani, Y.; Law, R. V.; Brooks, N. J.; Seddon, J. M.; Ces, O.
Droplet interface bilayer (DIB) networks are emerging as a cornerstone technology for the bottom up construction of cell-like and tissue-like structures and bio-devices. They are an exciting and versatile model-membrane platform, seeing increasing use in the disciplines of synthetic biology, chemical biology, and membrane biophysics. DIBs are formed when lipid-coated water-in-oil droplets are brought together—oil is excluded from the interface, resulting in a bilayer. Perhaps the greatest feature of the DIB platform is the ability to generate bilayer networks by connecting multiple droplets together, which can in turn be used in applications ranging from tissue mimics, multicellular models, and bio-devices. For such applications, the construction and release of DIB networks of defined size and composition on-demand is crucial. We have developed a droplet-based microfluidic method for the generation of different sized DIB networks (300–1500 pl droplets) on-chip. We do this by employing a droplet-on-rails strategy where droplets are guided down designated paths of a chip with the aid of microfabricated grooves or “rails,” and droplets of set sizes are selectively directed to specific rails using auxiliary flows. In this way we can uniquely produce parallel bilayer networks of defined sizes. By trapping several droplets in a rail, extended DIB networks containing up to 20 sequential bilayers could be constructed. The trapped DIB arrays can be composed of different lipid types and can be released on-demand and regenerated within seconds. We show that chemical signals can be propagated across the bio-network by transplanting enzymatic reaction cascades for inter-droplet communication. PMID:26759638
Truong, K. P.; Griffin, D.; Maini, E.; Rio, M.
This paper presents a new method for selection between replicated servers distributed over a wide area, allowing application and network providers to trade-off costs with quality-of-service for their users. First, we create a novel utility framework that factors in quality of service metrics. Then we design a polynomial optimization algorithm to allocate user service requests to servers based on the utility while satisfying transit cost constraint. We then describe an efficient - low overhead...
Cheung, Kit; Schultz, Simon R.; Luk, Wayne
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
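The Izhikevich model named above can be sketched in a few lines of plain Python (parameters are the standard published regular-spiking values; the FPGA datapath itself is not modelled here):

```python
# Euler integration of the Izhikevich spiking-neuron model, one of the
# neuronal models NeuroFlow supports. a, b, c, d are the standard
# regular-spiking parameters; the input current I is an arbitrary choice.

def izhikevich(I=10.0, t_ms=200, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u = c, b * c          # membrane potential and recovery variable
    spikes = []
    for step in range(int(t_ms / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike threshold reached: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich()
print(len(spikes), "spikes in 200 ms")
```

Platforms like NeuroFlow evaluate exactly this kind of per-neuron update in parallel across many hardware pipelines, which is where the reported speedups come from.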
Kawashima, Ryota; Ji, Yusheng; Maruyama, Katsumi
Networking technologies have recently been evolving and network applications are now expected to support flexible composition of upper-layer network services, such as security, QoS, or personal firewall. We propose a multi-platform framework called FreeNA that extends existing applications by incorporating the services based on user definitions. This extension does not require users to modify their systems at all. Therefore, FreeNA is valuable for experimental system usage. We implemented FreeNA on both Linux and Microsoft Windows operating systems, and evaluated their functionality and performance. In this paper, we describe the design and implementation of FreeNA including details on how to insert network services into existing applications and how to create services in a multi-platform environment. We also give an example implementation of a service with SSL, a functionality comparison with relevant systems, and our performance evaluation results. The results show that FreeNA offers finer configurability, composability, and usability than other similar systems. We also show that the throughput degradation of transparent service insertion is 2% at most compared with a method of directly inserting such services into applications.
Chowdhury, A K M Rezaul Haque; Tan, Bo; Venkatakrishnan, Krishnan
Carbon nanomaterials have emerged as promising materials in cancer diagnosis and therapy. Carbon nanomaterials/nanostructures (C-C molecular structure) act as a carrier/skeleton and require further surface modification through functionalization with chemicals or biomolecules to attain a cell response. We report the synthesis of a novel carbon nanoribbon network (CNRN) platform that possesses a combination of C-C and C-O bond architecture. The bioactive CNRN showed enhanced ability for cell adhesion. Most importantly, it induced opposite cell responses from healthy cells and cancerous cells: cytophilic to fibroblasts but cytotoxic to HeLa cells. Ultrafast laser ionization under ambient conditions transforms the non-bioresponsive C-C bonds of graphite into C-C and C-O bonds, forming a self-assembled CNRN platform. The morphology, nanochemistry, and functionality of the fabricated CNRN platforms in modulating fibroblast and HeLa adhesion and proliferation were investigated. The results of in vitro studies suggested that the CNRN platforms not only attracted but also actively accelerated the adhesion and proliferation of both fibroblasts and HeLa cells. The proliferation rates of fibroblasts and HeLa cells are 91 and 98 times greater, respectively, than on a native graphite substrate. The morphology of the cells over a period of 24 to 48 h revealed that the CNRN platform induced an apoptosis-like cytotoxic function on HeLa cells, whereas fibroblasts experienced a cytophilic effect and formed a tissue-like structure. The degree of cytotoxic or cytophilic effect can be further enhanced by adjusting parameters such as the ratio of C-C bonds to C-O bonds, the nanoribbon width, and the nanovoid porosity of the CNRN platforms, which can be tuned by careful control of laser ionization. In a nutshell, for the first time, pristine carbon nanostructures free from biochemical functionalization demonstrate a dual function: cytophilic to fibroblast cells and cytotoxic to HeLa cells.
Eduardo Paciência Godoy
A current trend in the agricultural area is the development of mobile robots and autonomous vehicles for precision agriculture (PA). One of the major challenges in the design of these robots is the development of the electronic architecture for the control of the devices. In a joint project among research institutions and a private company in Brazil, a multifunctional robotic platform for information acquisition in PA is being designed. This platform has as its main characteristics four-wheel propulsion and independent steering, adjustable width, a span of 1.80 m in height, a diesel engine, a hydraulic system, and a CAN-based networked control system (NCS). This paper presents an NCS solution for platform guidance by distributed control of the four-wheel hydraulic steering. The control strategy, centered on robot-manipulator control theory, is based on the difference between the desired and actual position, also considering the angular speed of the wheels. The results demonstrate that the NCS was simple and efficient, providing suitable steering performance for platform guidance. Despite its simplicity, the NCS solution also overcame several control challenges verified in the robot guidance system design, such as the hydraulic system delay, nonlinearities in the steering actuators, and inertia in the steering system due to the friction of different terrains.
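A toy version of the described control strategy, with the command driven by the difference between desired and actual wheel angle and the actuator's angular-speed limit enforced (the gain, rate limit and timestep are invented, not the platform's real parameters):

```python
# Proportional steering loop with a rate limit, in the spirit of the
# wheel-steering control described: command proportional to position error,
# clipped to the hydraulic actuator's maximum angular speed.

def steer(actual, desired, kp=2.0, max_rate=0.5, dt=0.05, steps=200):
    history = [actual]
    for _ in range(steps):
        rate = kp * (desired - actual)                 # error-driven command
        rate = max(-max_rate, min(max_rate, rate))     # actuator rate limit
        actual += rate * dt
        history.append(actual)
    return history

trace = steer(actual=0.0, desired=0.3)   # radians, illustrative
print(round(trace[-1], 3))
```

Delays and actuator nonlinearities, which the paper reports handling over the CAN network, would appear here as extra dynamics between the commanded and applied rate.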
Holm, Liisa; Laakso, Laura M
The Dali server (http://ekhidna2.biocenter.helsinki.fi/dali) is a network service for comparing protein structures in 3D. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The Dali server has been running in various places for over 20 years and is used routinely by crystallographers on newly solved structures. The latest update of the server provides enhanced analytics for the study of sequence and structure conservation. The server performs three types of structure comparisons: (i) Protein Data Bank (PDB) search compares one query structure against those in the PDB and returns a list of similar structures; (ii) pairwise comparison compares one query structure against a list of structures specified by the user; and (iii) all against all structure comparison returns a structural similarity matrix, a dendrogram and a multidimensional scaling projection of a set of structures specified by the user. Structural superimpositions are visualized using the Java-free WebGL viewer PV. The structural alignment view is enhanced by sequence similarity searches against Uniprot. The combined structure-sequence alignment information is compressed to a stack of aligned sequence logos. In the stack, each structure is structurally aligned to the query protein and represented by a sequence logo. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
In the development of information technology, information is obtained quickly through the computer-network technology known as the Internet. Bandwidth use for Internet access can be optimized by using a proxy server; one such proxy server is Squid. Deploying Squid as a proxy server requires choosing the server's operating system, and its best-performing platform is not yet known. It is therefore necessary to analyze the performance of the Squid proxy server on different op...
Suh, Kyo; Smith, Timothy; Linhoff, Michelle
Increasing numbers of people are managing their social networks on mobile information and communication technology (ICT) platforms. This study materializes these social relationships by leveraging spatial and networked information for sharing excess capacity to reduce the environmental impacts associated with "last-mile" package delivery systems from online purchases, particularly in low population density settings. Alternative package pickup location systems (PLS), such as a kiosk on a public transit platform or in a grocery store, have been suggested as effective strategies for reducing package travel miles and greenhouse gas emissions, compared to current door-to-door delivery models (CDS). However, our results suggest that a pickup location delivery system operating in a suburban setting may actually increase travel miles and emissions. Only once a social network is employed to assist in package pickup (SPLS) are significant reductions in the last-mile delivery distance and carbon emissions observed across both urban and suburban settings. Implications for logistics management's decades-long focus on improving efficiencies of dedicated distribution systems through specialization, as well as for public policy targeting carbon emissions of the transport sector are discussed.
Wang, Jia-Hong; Zhao, Ling-Feng; Lin, Pei; Su, Xiao-Rong; Chen, Shi-Jun; Huang, Li-Qiang; Wang, Hua-Feng; Zhang, Hai; Hu, Zhen-Fu; Yao, Kai-Tai; Huang, Zhong-Xi
Identifying biological functions and molecular networks in a gene list and how the genes may relate to various topics is of considerable value to biomedical researchers. Here, we present a web-based text-mining server, GenCLiP 2.0, which can analyze human genes with enriched keywords and molecular interactions. Compared with other similar tools, GenCLiP 2.0 offers two unique features: (i) analysis of gene functions with free terms (i.e. any terms in the literature) generated by literature mining or provided by the user and (ii) accurate identification and integration of comprehensive molecular interactions from Medline abstracts, to construct molecular networks and subnetworks related to the free terms. http://ci.smu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: firstname.lastname@example.org.
Dagan, Noa; Beskin, Daniel; Brezis, Mayer; Reis, Ben Y
Social networking sites (SNSs) such as Facebook have the potential to enhance online public health interventions, in part, as they provide social exposure and reinforcement. The objective of the study was to evaluate whether social exposure provided by SNSs enhances the effects of online public health interventions. As a sample intervention, we developed Food Hero, an online platform for nutritional education in which players feed a virtual character according to their own nutritional needs and complete a set of virtual sport challenges. The platform was developed in 2 versions: a "private version" in which a user can see only his or her own score, and a "social version" in which a user can see other players' scores, including preexisting Facebook friends. We assessed changes in participants' nutritional knowledge using 4 quiz scores and 3 menu-assembly scores. Monitoring feeding and exercising attempts assessed engagement with the platform. The 2 versions of the platform were randomly assigned between a study group (30 members receiving the social version) and a control group (33 members, private version). The study group's performance on the quizzes gradually increased over time, relative to that of the control group, becoming significantly higher by the fourth quiz (P=.02). Furthermore, the study group's menu-assembly scores improved over time compared to the first score, whereas the control group's performance deteriorated. Study group members spent an average of 3:40 minutes assembling each menu compared to 2:50 minutes in the control group, and performed an average of 1.58 daily sport challenges, compared to 1.21 in the control group (P=.03). This work focused on isolating the SNSs' social effects in order to help guide future online interventions. Our results indicate that the social exposure provided by SNSs is associated with increased engagement and learning in an online nutritional educational platform.
The paper presents selected server platforms based on free and open-source licenses, coherent with the standards of the Open Geospatial Consortium. The presented programs are evaluated in the context of the INSPIRE Directive. The first part describes the requirements of the Directive; afterwards, the pros and cons of each platform in meeting these demands are presented. This article answers the question of whether free software can provide interoperable network services in accordance with the requirements of the INSPIRE Directive, presenting application examples and practical tips on the use of the particular programs. Keywords: GIS, INSPIRE, free software, OGC, geoportal, network services, GeoServer, deegree, GeoNetwork
We adopt the direction towards new educational and pedagogic paradigms, where learning is a process of emergence and co-evolution of the individual, the social group and the wider society. In this direction, Service-Oriented Architectures are becoming a popular system paradigm for e-learning. In this article, we present our research and development efforts to provide a social-networking learning platform for developing services which address the personal learning needs of the users and enable them to create value. We also present the specific characteristics of our community-driven service framework and discuss our approach in comparison to other similar approaches and frameworks.
Blanke, M.; Nielsen, Jens Frederik Dalsgaard; Degre, T.
…added value to the Commission's activities on R&D and on the implementation of results. NEPTUNE will be a platform for the cooperation of large and small research institutes and universities, as well as for organized knowledge transfer between research and users or industry. Many institutes… For the support of the objectives of NEPTUNE, the association is developing the NEPTUNE Information Network. A pilot demonstration based on World Wide Web technology on the Internet has been established. Two NEPTUNE servers, on the premises of ISL in Bremen and NTUA in Athens, can be addressed via URL=http://www.isl.uni-bremen.de/NEPTUNE/ and URL=http://www.maritime.deslab.naval.ntua.gr/neptune/framelayout.html. The pilot will be enlarged regarding both the number of NEPTUNE servers and the scope of information provided by the various servers. The implementation and operation of such a European Waterborne Information Network…
Munteanu, Cristian R; Pedreira, Nieves; Dorado, Julián; Pazos, Alejandro; Pérez-Montoto, Lázaro G; Ubeira, Florencio M; González-Díaz, Humberto
Lectins (Ls) play an important role in many diseases, including different types of cancer and parasitic infections. Interestingly, the Protein Data Bank (PDB) contains more than 3000 protein 3D structures with unknown function. Thus, we can, in principle, discover new Ls by mining non-annotated structures from the PDB or other sources. However, there are no general models to predict new biologically relevant Ls based on 3D chemical structures. We used the MARCH-INSIDE software to calculate the Markov-Shannon 3D electrostatic entropy parameters for the complex networks of protein structure of 2200 different protein 3D structures, including 1200 Ls. We performed a Linear Discriminant Analysis (LDA) using these parameters as inputs in order to seek a new Quantitative Structure-Activity Relationship (QSAR) model able to discriminate the 3D structures of Ls from those of other proteins. We implemented this predictor in the web server named LECTINPred, freely available at http://bio-aims.udc.es/LECTINPred.php. This web server showed the following goodness-of-fit statistics: Sensitivity=96.7% (for Ls), Specificity=87.6% (non-active proteins), and Accuracy=92.5% (for all proteins), considering the training and external prediction series together. In operation mode 1, we illustrated the use of this server by performing a data mining of the PDB: we predicted L-scores for more than 2000 proteins with unknown function and selected the top-scored ones as possible lectins. In operation mode 2, users can carry out an automatic retrieval of protein structures from the PDB; LECTINPred can also upload 3D structural models generated with structure-prediction tools like LOMETS or PHYRE2. The new Ls are expected to be of relevance as cancer biomarkers or useful in parasite vaccine design. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
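To give a feel for the kind of two-class linear separation an LDA model performs, here is a minimal one-dimensional discriminant in plain Python. The descriptor values are synthetic stand-ins, not real MARCH-INSIDE entropy parameters, and this is a toy Fisher-style threshold rather than the server's actual multivariate model:

```python
# Minimal 1-D linear discriminant: place a decision threshold between two
# class means, weighted so it sits closer to the tighter class. Illustrates
# the lectin / non-lectin separation idea only; data is synthetic.

from statistics import mean, pvariance

def fit_threshold(pos, neg):
    m1, m2 = mean(pos), mean(neg)
    s1, s2 = pvariance(pos), pvariance(neg)
    # Variance-weighted midpoint: as s1 -> 0 the threshold moves toward m1.
    return (m1 * s2 + m2 * s1) / (s1 + s2)

lectin_scores = [2.1, 2.4, 2.2, 2.6]   # synthetic descriptor values ("lectin" class)
other_scores = [1.0, 1.2, 0.9, 1.1]    # synthetic values ("non-lectin" class)
t = fit_threshold(lectin_scores, other_scores)

def predict(x):
    return "lectin" if x > t else "non-lectin"

print(round(t, 3), predict(2.3), predict(1.05))
```

Sensitivity, specificity and accuracy as reported for LECTINPred are then just the per-class and overall fractions of correct calls against such a threshold.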
Jereczek, Grzegorz Edmund; for the ATLAS collaboration
The recent trends in software-defined networking (SDN) and network function virtualization (NFV) are boosting the advance of software-based packet processing and forwarding on commodity servers. Although performance has traditionally been the challenge of this approach, this situation changes with modern server platforms. High performance load balancers, proxies, virtual switches and other network functions can be now implemented in software and not limited to specialized commercial hardware, thus reducing cost and increasing the flexibility. In this paper we design a lossless software-based switch for high bandwidth data acquisition (DAQ) networks, using the ATLAS experiment at CERN as a case study. We prove that it can effectively solve the incast pathology arising from the many-to-one communication pattern present in DAQ networks by providing extremely high buffering capabilities. We evaluate this on a commodity server equipped with twelve 10 Gbps Ethernet interfaces providing a total bandwidth of 120 Gbps...
Background: Human life can be further improved if diseases and disorders can be predicted before they become dangerous, by correctly recognizing signals from the human body; to make disease detection more precise, various body signals need to be measured simultaneously in a synchronized manner. Objective: This research aims at developing an integrated system for measuring four signals (EEG, ECG, respiration, and PPG) and simultaneously producing synchronized signals on a wireless body sensor network. Design: We designed and implemented a platform for multiple bio-signals using Bluetooth communication. Results: First, we developed a prototype board and verified the signals from the sensor platform using frequency responses and quantities. Next, we designed and implemented a lightweight, ultra-compact, low-cost, low-power-consumption printed circuit board. Conclusion: A synchronous multi-body-sensor platform is expected to be very useful in telemedicine and emergency rescue scenarios. Furthermore, this system is expected to be able to analyze the mutual effects among body signals.
Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.
We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the so-called slow-down network. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of…
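The slow-down mechanism is easy to simulate directly. Below is a small event-driven sketch of the two-node tandem in which server 1 drops to a reduced rate whenever queue 2 is long; all rates and the threshold are assumed values for illustration, not taken from the paper:

```python
# Event-driven simulation of a two-node tandem queue with slow-down:
# Poisson arrivals at rate lam, exponential service at node 1 (rate mu1,
# reduced to mu1_slow when queue 2 holds >= threshold jobs) and node 2 (mu2).

import random

def simulate(lam=0.5, mu1=1.0, mu1_slow=0.6, mu2=1.0, threshold=3,
             events=200_000, seed=1):
    rng = random.Random(seed)
    q1 = q2 = 0
    area1 = area2 = t = 0.0
    for _ in range(events):
        r1 = mu1_slow if q2 >= threshold else mu1       # slow-down rule
        rates = [lam, r1 if q1 else 0.0, mu2 if q2 else 0.0]
        total = sum(rates)
        dt = rng.expovariate(total)
        area1 += q1 * dt; area2 += q2 * dt; t += dt     # time-average bookkeeping
        u = rng.random() * total
        if u < rates[0]:
            q1 += 1                  # external arrival at node 1
        elif u < rates[0] + rates[1]:
            q1 -= 1; q2 += 1         # service completion at node 1, job moves on
        else:
            q2 -= 1                  # departure from node 2
    return area1 / t, area2 / t      # time-average queue lengths

l1, l2 = simulate()
print(round(l1, 2), round(l2, 2))
```

Plain simulation like this is far too slow for genuinely rare events, which is why the paper studies dedicated rare-event techniques for this model.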
The offshore floating wind turbine (OFWT) has been a challenging research topic because of the high-quality wind power available offshore and the complex load environment. This paper focuses on variable torque control of an offshore wind turbine on a Spar floating platform. The control objective in the below-rated wind-speed region is to optimize the output power by tracking the optimal tip-speed ratio and ideal power curve. To address the external disturbances and nonlinear uncertain dynamics of the OFWT, caused by the proximity to load centers and strong wave coupling, this paper proposes an advanced radial basis function (RBF) neural network approach for torque control of the OFWT system at speeds lower than the rated wind speed. The robust RBF neural network weight adaptive rules are derived from a Lyapunov stability analysis. The proposed control approach is tested and compared with the NREL baseline controller using the “NREL offshore 5 MW wind turbine” model mounted on a Spar floating platform, run in FAST and Matlab/Simulink, operating in the below-rated wind-speed condition. The simulation results show better performance in tracking the optimal output power curve and hence fuller utilization of the available wind energy.
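The forward pass of such an RBF network is simple: Gaussian basis functions over the tracking error, combined with output weights that the Lyapunov-derived rules would adapt online. The sketch below shows only the forward pass; centres, widths and weights are illustrative, and the adaptation law is omitted:

```python
# Forward pass of a radial-basis-function network as used in adaptive
# torque control: Gaussian activations over the input (e.g. tracking
# error), linearly combined by output weights. Weight adaptation from the
# Lyapunov analysis is not modelled here.

import math

def rbf_output(x, centres, width, weights):
    phi = [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centres]
    return sum(w * p for w, p in zip(weights, phi))

centres = [-1.0, -0.5, 0.0, 0.5, 1.0]   # basis-function centres over the error range
weights = [0.2, 0.5, 0.0, -0.5, -0.2]   # in practice updated by the adaptive law
torque_correction = rbf_output(0.3, centres, width=0.4, weights=weights)
print(round(torque_correction, 4))
```

With symmetric centres and antisymmetric weights, the network output is an odd function of the error, which is a natural shape for a corrective torque term.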
Cheali, Peam; Gernaey, Krist; Sin, Gürkan
This study presents the development of an expanded biorefinery processing network for producing biofuels that combines biochemical and thermochemical conversion platforms. The expanded network is coupled to a framework that uses a superstructure-based optimization approach to generate and compare … of 72 processing intervals. This superstructure was integrated with an earlier developed superstructure for biochemical conversion routes, thereby forming a formidable number of biorefinery alternatives. The expanded network was demonstrated to be versatile and useful as a decision support tool…
Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Pascual, Jerónimo; Mora-Martínez, José
The application of Information Technologies to Precision Agriculture methods has clear benefits. Precision Agriculture optimises production efficiency, increases quality, minimises environmental impact and reduces the use of resources (energy, water); however, different barriers have delayed its wide adoption. Among the main barriers are expensive equipment, difficulty of operation and maintenance, and sensor-network standards that are still under development. Nowadays, new technological developments in embedded devices (hardware and communication protocols), the evolution of Internet technologies (Internet of Things) and ubiquitous computing (Ubiquitous Sensor Networks) allow the development of less expensive systems that are easier to control, install and maintain, using standard protocols with low power consumption. This work develops and tests a low-cost sensor/actuator network platform, based on the Internet of Things, integrating machine-to-machine and human-machine-interface protocols. Edge computing uses this multi-protocol approach to develop control processes in Precision Agriculture scenarios. A greenhouse with hydroponic crop production was developed and tested using Ubiquitous Sensor Network monitoring and edge control under the Internet of Things paradigm. The experimental results showed that Internet technologies and Smart Object Communication Patterns can be combined to encourage the development of Precision Agriculture, and demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched.
Kawano, Toshihiko; Sakai, Osamu [Kyushu Univ., Fukuoka (Japan)
We construct a nuclear data server which provides data from the evaluated nuclear data library over the network by means of TCP/IP. The client is not necessarily a human user but may be a computer program. Two examples with a prototype server program are demonstrated: in the first, data are transferred from the server to a user; in the second, to a computer program. (author)
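The essential pattern, a TCP service that either a person or a program can query for a named record, can be sketched with the Python standard library. The line-based protocol and the data records below are invented for illustration; real evaluated nuclear data files are far richer:

```python
# Minimal TCP data service in the spirit described: a client sends a
# nuclide name, the server returns the matching record. Protocol and data
# are illustrative, not an actual nuclear data format.

import socket
import socketserver
import threading

DATA = {"U-235": "fission cross-section record ...",
        "Fe-56": "elastic scattering record ..."}

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        key = self.rfile.readline().decode().strip()
        self.wfile.write((DATA.get(key, "NOT FOUND") + "\n").encode())

def query(port, nuclide):
    """Programmatic client: one request, one reply line."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall((nuclide + "\n").encode())
        return s.makefile().readline().strip()

server = socketserver.TCPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

ok = query(port, "U-235")
missing = query(port, "H-1")
print(ok, "|", missing)
server.shutdown()
```

The same `query` function could just as easily be called from inside a physics code, which is the point the abstract makes about program-to-program transfer.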
The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960s. This tray is a 'dual-core' server, meaning it effectively has two CPUs in it (e.g. two of your home computers minimised to fit into a single box). Also note the copper cooling fins, which help dissipate the heat.
Veronica Windha Mahyastuty
Technology development and socio-economic transformation have increased the demand for 5G cellular networks, which are expected to deliver information quickly and support the many use cases emerging from a variety of applications. One use case on the 5G network is massive machine-type communication (MTC), of which the wireless sensor network (WSN) is a typical application. The challenges faced by a 5G cellular network are how to model an architecture/topology to support WSNs and how to solve the energy-consumption efficiency problem in WSNs. To overcome these challenges, a high-altitude platform (HAP) system integrated with a WSN using the Low Energy Adaptive Clustering Hierarchy (LEACH) routing protocol is implemented. The HAP system is designed for use at a 20-km altitude, with 1,000 sensor nodes, and the topologies used are those with and without clustering. The system was simulated using MATLAB. Simulations were performed to analyze the energy consumption, the number of dead nodes, and the average total packets sent to the HAP for the non-clustered and clustered topologies. Simulation results showed that the clustered topology could reduce energy consumption and the number of dead nodes while increasing the total packets sent to the HAP.
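The clustered topology rotates cluster heads using the standard LEACH election rule: in round r, a node that has not yet served as head in the current epoch becomes a cluster head with probability T(r). The sketch below uses the textbook formula with assumed parameters, not the paper's exact configuration:

```python
# LEACH cluster-head election threshold: p is the desired fraction of
# heads per round, and the epoch length is 1/p rounds, so every node
# serves as head exactly once per epoch on average.

import random

def leach_threshold(p, r):
    epoch = int(round(1 / p))
    return p / (1 - p * (r % epoch))

def elect_heads(n_nodes, p, r, rng):
    """Each eligible node independently draws against the threshold."""
    t = leach_threshold(p, r)
    return [i for i in range(n_nodes) if rng.random() < t]

rng = random.Random(42)
heads = elect_heads(n_nodes=100, p=0.05, r=0, rng=rng)
print(len(heads), "cluster heads elected in round 0")
```

The threshold rises as the epoch progresses (reaching 1 in the last round), which guarantees every node takes its turn as head and is what spreads the energy drain evenly.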
Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly consist of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as computing the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time, usually with manual intervention. The EarthServer approach is to perform a semantic-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built on rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data…
"Sorrento Networks, a supplier of optical transport networking equipment for carriers and enterprises worldwide, today announced that SWITCH successfully completed 10 Gbps BER tests on the 220 km Zurich to Manno and 360 km Zurich to Geneva links in September and November 2003, using Sorrento's GigaMux DWDM system" (1/2 page).
SQL Server is the most widely used database platform in the world, and a large percentage of these databases are not properly secured, exposing sensitive customer and business data to attack. In Securing SQL Server, Third Edition, you will learn about the potential attack vectors that can be used to break into SQL Server databases as well as how to protect databases from these attacks. In this book, Denny Cherry - a Microsoft SQL MVP and one of the biggest names in SQL Server - will teach you how to properly secure a SQL Server database from internal and external threats using best practic
Wevers, Nienke R.; van Vught, Remko; Wilschut, Karlijn J.; Nicolas, Arnaud; Chiang, Chiwan; Lanz, Henriette L.; Trietsch, Sebastiaan J.; Joore, Jos; Vulto, Paul
With great advances in the field of in vitro brain modelling, the challenge is now to implement these technologies for development and evaluation of new drug candidates. Here we demonstrate a method for culturing three-dimensional networks of spontaneously active neurons and supporting glial cells in a microfluidic platform. The high-throughput nature of the platform in combination with its compatibility with all standard laboratory equipment allows for parallel evaluation of compound effects. PMID:27934939
A considerable and growing fraction of servers, especially web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve the privacy of clients from network attackers residing between the clients and the cloud: we design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud's tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced "popsicle"), a persistent pseudonym for a tenant server that can be used by a single client to access the server, whose real identity is protected by the cloud from both passive and active network attackers. When instantiated for TLS-based access to web servers, our design works with all major browsers and requires no additional client-side software and minimal changes to the client user experience. Moreover, changes to tenant servers can be hidden in supporting software (operating systems and web-programming frameworks) without imposing on web-content development. Perhaps most notably, our system boosts privacy with minimal impact to web-browsing performance, after some initial setup during a user's first access to each web server.
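The pseudonym idea above can be illustrated with a small sketch: a keyed MAC lets the cloud derive a label that is stable for one (client, tenant) pair but unlinkable across clients without the key. The function name, key handling, and label format here are illustrative assumptions, not the paper's actual PoPSiCl construction.

```python
import hmac
import hashlib

def popsicl_label(cloud_key: bytes, client_id: str, tenant: str) -> str:
    """Derive a persistent per-(client, tenant) pseudonym label.

    Sketch only: a keyed HMAC makes the label stable for one client but
    unlinkable across clients without the cloud's key. The real PoPSiCl
    construction in the paper differs in its details.
    """
    mac = hmac.new(cloud_key, f"{client_id}|{tenant}".encode(), hashlib.sha256)
    return mac.hexdigest()[:16]  # short, DNS-safe hex label

# Same (client, tenant) pair -> same pseudonym; different clients get
# labels that cannot be linked to each other or to the tenant's real name.
label_1 = popsicl_label(b"cloud-secret", "client-1", "shop.example")
label_2 = popsicl_label(b"cloud-secret", "client-2", "shop.example")
```

A label like this could serve as the host component of a per-client TLS name, which is the role the pseudonym plays in the design described above.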
Recent advances in networking and communications have removed the restrictions of time and space in information services. Context-aware service systems can support predefined services in accordance with user requests regardless of time and space. However, due to their architectural limitations, recent systems are not flexible enough to provide device-independent services from multiple service providers. Recently, researchers have focused on a new service paradigm characterized by high mobility, service continuity, and green characteristics. In line with these efforts, improved context-aware service platforms have been suggested that enable the platform to manage contexts and provide adaptive services for multiple users and locations. However, these platforms can only support limited continuity and mobility. In other words, the existing systems cannot support seamless service provision among different service providers with respect to changes in mobility, situation, device, and network. Furthermore, the existing context-aware service platform relies heavily on always-on infrastructure, which inevitably leads to high energy consumption. Therefore, we propose a new concept of context-aware networking and communications, namely a zone-aware service platform. The proposed platform autonomously reconfigures the infrastructure and maintains a service session interacting with the middleware to support cost- and energy-efficient pervasive services for smart-home sustainability.
... Part I. JSP Application Basics. 1. Introducing JavaServer Pages: What Is JavaServer Pages? Why Use JSP? What You Need to Get Started...
Network virtualization technology is regarded as one of the gradual schemes for network architecture evolution. With the development of network functions virtualization, operators are making substantial efforts to achieve router virtualization using general-purpose servers. To ensure high performance, a virtual router platform usually adopts a cluster of general-purpose servers, which can also be regarded as a special cloud computing environment. However, due to frequent creation and deletion of router instances, such a platform may generate considerable resource fragmentation that prevents it from establishing new router instances. To solve this "resource fragmentation problem," we first propose VR-Cluster, which introduces two extra function planes: a switching plane and a resource management plane. The switching plane is mainly used to support seamless migration of router instances without packet loss; the resource management plane can dynamically move router instances from one server to another using VR-mapping algorithms. Besides, three VR-mapping algorithms, first-fit, best-fit, and worst-fit, are proposed based on VR-Cluster. Finally, we build a VR-Cluster prototype system using general-purpose x86 servers, evaluate its migration time, and further analyze the advantages and disadvantages of the proposed VR-mapping algorithms in solving the resource fragmentation problem.
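The three VR-mapping heuristics named above are variants of classic bin-packing placement rules. A minimal sketch, assuming each server is summarized by a single free-capacity scalar (the paper's algorithms consider richer resource vectors):

```python
def map_vr(servers, demand, policy="first"):
    """Place a new virtual-router instance on a server.

    `servers` maps server name -> free capacity (a single scalar here, an
    illustrative simplification). Returns the chosen server name, or None
    when fragmentation leaves no single server with enough room.
    """
    feasible = {s: free for s, free in servers.items() if free >= demand}
    if not feasible:
        return None  # resource fragmentation: no server can host the instance
    if policy == "first":
        return next(iter(feasible))              # first feasible server
    if policy == "best":
        return min(feasible, key=feasible.get)   # tightest fit, least waste
    if policy == "worst":
        return max(feasible, key=feasible.get)   # loosest fit, spreads load
    raise ValueError(f"unknown policy: {policy}")
```

Best-fit minimizes leftover fragments per placement, while worst-fit keeps large contiguous capacity available, which is the trade-off the evaluation above examines.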
Gertsen, Frank; Høgsaa, Asger; Tollestrup, Christian H. T.
The area of interest is the development of a potentially new complementary industry-university component, which has been labelled 'SuperWiseNet' for the context of academic entrepreneurial programs. The SuperWiseNet is a network-based platform for interaction between students of entrepreneurship and an experience-enriched forum consisting of teachers, industry experts, entrepreneurs and specialty consultants. The interaction unfolds as a series of workshops facilitating the progression of the student teams' four-month project work. The students of concern are enrolled in the international Entrepreneurial Engineering Master's Program at Aalborg University, Denmark (120 ECTS credits). The paper will describe and elaborate on the functioning of SuperWiseNet, including a discussion of advantages for students, faculty, and industry/externals as well as some challenges with the concept.
Meng, X.; Deng, Y.; Li, H.; Yao, L.; Shi, J.
With the acceleration of China's informatization process, our party and government have taken substantive strides in advancing the development and application of digital technology, which promotes the evolution of e-government and its informatization. Meanwhile, as a service mode based on innovative resources, cloud computing can connect huge resource pools together to provide a variety of IT services, and has become a relatively mature technical pattern with further studies and massive practical applications. Based on cloud computing technology and the national e-government network platform, the "National Natural Resources and Geospatial Database (NRGD)" project integrated and transformed natural resources and geospatial information dispersed across various sectors and regions, established a logically unified and physically dispersed fundamental database, and developed a national integrated information database system supporting main e-government applications. Cross-sector e-government applications and services are realized to provide long-term, stable and standardized natural resources and geospatial fundamental information products and services for national e-government and public users.
Vingelmann, Peter; Fitzek, Frank; Pedersen, Morten Videbæk
This work presents the implementation of synchronized multimedia streaming for the Apple iPhone platform. The idea is to stream multimedia content from a single source to multiple receivers with direct or multihop connections to the source. First we look into existing solutions for video streaming on the iPhone that use point-to-point architectures. After acknowledging their limitations, we propose a solution based on network coding to efficiently and reliably deliver the multimedia content to many devices in a synchronized manner. Then we introduce an application that implements this technique on the iPhone. We also present our testbed, which consists of 16 iPod Touch devices to showcase the capabilities of our application.
The objective of this paper is to provide a new simulator framework for mobile WSNs that emulates a sensor node on a laptop, i.e., the laptop models and replaces a sensor node within a network. This platform can implement different WSN routing protocols to simulate and validate newly developed protocols in terms of energy consumption, packet loss rate, delivery ratio, mobility support, connectivity and number of exchanged messages in real time. To evaluate the performance of Mobi-Sim, we implement two popular protocols in it (LEACH-M and mobile-sink LEACH) and compare its results to TOSSIM. We then propose another clustering-based routing protocol and compare it to LEACH-M.
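Since the simulator implements LEACH-family protocols, the cluster-head election rule is worth recalling. The sketch below shows the standard LEACH threshold formula; Mobi-Sim's own variants (LEACH-M, mobile sink) differ in details not shown here:

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """Standard LEACH cluster-head election threshold T(n) for round r.

    p is the desired fraction of cluster heads. Nodes that have not served
    as head in the last 1/p rounds elect themselves when a uniform random
    draw falls below T(n); by the last round of each cycle T(n) reaches 1,
    so every remaining node is forced to serve.
    """
    return p / (1 - p * (r % round(1 / p)))

def elects_itself(p: float, r: int, rng=random.random) -> bool:
    """One node's randomized election decision for round r."""
    return rng() < leach_threshold(p, r)
```

With p = 0.1 the threshold starts at 0.1 in round 0 and climbs to 1.0 by round 9, which rotates the energy-expensive head role evenly, the property the energy-consumption comparisons above measure.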
Lee, Minho; Heo, Eunyoung; Lim, Heesook; Lee, Jun Young; Weon, Sangho; Chae, Hoseok; Hwang, Hee; Yoo, Sooyoung
We aimed to develop a common health information exchange (HIE) platform that can provide integrated services for implementing the HIE infrastructure in addition to guidelines for participating in an HIE network in South Korea. By exploiting the Health Level 7 (HL7) Clinical Document Architecture (CDA) and Integrating the Healthcare Enterprise (IHE) Cross-enterprise Document Sharing-b (XDS.b) profile, we defined the architectural model, exchanging data items and their standardization, messaging standards, and privacy and security guidelines, for a secure, nationwide, interoperable HIE. We then developed a service-oriented common HIE platform to minimize the effort and difficulty of fulfilling the standard requirements for participating in the HIE network. The common platform supports open application program interfaces (APIs) for implementing a document registry, a document repository, a document consumer, and a master patient index. It could also be used for testing environments for the implementation of standard requirements. As the initial phase of implementing a nationwide HIE network in South Korea, we built a regional network for workers' compensation (WC) hospitals and their collaborating clinics to share referral and care record summaries to ensure the continuity of care for industrially injured workers, using the common HIE platform and verifying the feasibility of our technologies. We expect to expand the HIE network on a national scale with rapid support for implementing HL7 and IHE standards in South Korea.
Heterogeneous wireless networks are capable of providing customers with better services, while service providers can offer more applications to more customers at lower cost. To provide services, some applications rely on existing servers in the network. In a vehicular ad-hoc network (VANET), some mobile nodes may function as servers. Due to the high mobility of nodes and the short lifetime of links, server-to-client and server-to-server communications become challenging. In this paper we propose to enhance the performance of server selection by taking link reliability into consideration in the server selection mechanism, thereby avoiding extra client-to-server hand-offs and reducing the need for server-to-server synchronization. As a case study we focus on a location management service in a heterogeneous VANET. We provide a routing algorithm for transactions between location servers and mobile nodes. We assume that location servers are vehicles equipped with at least one long-range and one short-range radio interface, whereas regular nodes (clients) are only equipped with a short-range radio interface. The primary goal of our design is to minimize hand-offs between location servers while limiting the delays of location updates. Taking advantage of vehicle mobility patterns, we propose a mobility-aware server selection scheme and show that it can reduce the number of hand-offs and yet avoid large delays during location updates. We present simulation results to show that the proposed scheme significantly lowers the signaling costs and the rate of server hand-offs by increasing the connection lifetimes between clients and servers.
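The link-reliability idea can be sketched as follows: estimate the residual lifetime of each client-server link from positions and speeds, and prefer the server whose link is predicted to live longest. This 1-D constant-speed road model is an illustrative assumption, not the paper's exact mobility model:

```python
def link_lifetime(pos_c, vel_c, pos_s, vel_s, radio_range):
    """Residual lifetime of a client-server link in a 1-D road model.

    Assumes the server is currently within range and speeds are constant:
    the link lives until the gap first exceeds the radio range.
    """
    gap = pos_s - pos_c
    rel = vel_s - vel_c
    if rel == 0:
        return float("inf")  # gap never changes: link never breaks
    # first future time at which |gap + rel * t| equals the radio range
    crossings = [(radio_range - gap) / rel, (-radio_range - gap) / rel]
    future = [t for t in crossings if t > 0]
    return min(future) if future else 0.0

def pick_server(client, servers, radio_range):
    """Prefer the server whose predicted link lifetime is longest,
    reducing client-to-server hand-offs (the selection goal above)."""
    pos_c, vel_c = client
    return max(servers,
               key=lambda s: link_lifetime(pos_c, vel_c, s[0], s[1], radio_range))
```

A server moving at the same speed as the client keeps a constant gap and so scores an unbounded lifetime, capturing why selecting servers with similar mobility patterns reduces hand-offs.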
The author presents an idea of remote management of applications using mobile devices. The proposed architecture consists of applications that run in the Java Virtual Machine environment and use JMX technology for representing resources, mobile clients that run on the Java ME platform, and a proxy server. Access to JMX mechanisms requires an implementation of the RMI protocol. Unfortunately the Java ME platform does not define a proper API, which is why a proxy server is needed to represent JMX services for non-RMI mobile clients. The role of the proxy server is two-directional translation between text descriptions of MBeans and remote method invocations. Advantages of the proposed solution: easy extensibility and platform independence.
Finestead, Arlan; Yeager, Nancy
The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high-speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be deployed to form a cost-effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding Ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster across a higher-bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck. UniTree sites that would require a high ratio of file creations and deletions to reads and writes would run into this bottleneck. It is possible to improve UniTree Name Server performance by bypassing the UniTree LibUnix library altogether, communicating directly with the UniTree Name Server and optimizing creations. Although testing was performed in a less-than-ideal environment, the performance statistics stated in this paper should give end-users a realistic idea of the performance they can expect from this type of setup.
Cummings, J.; Aisen, P.; Barton, R.; Bork, J.; Doody, R.; Dwyer, J.; Egan, J.C.; Feldman, H.; Lappin, D.; Truyen, L.; Salloway, S.; Sperling, R.; Vradenburg, G.
Alzheimer’s disease (AD) drug development is costly, time-consuming, and inefficient. Trial site functions, trial design, and patient recruitment for trials all require improvement. The Global Alzheimer Platform (GAP) was initiated in response to these challenges. Four GAP work streams evolved in the US to address different trial challenges: 1) registry-to-cohort web-based recruitment; 2) clinical trial site activation and site network construction (GAP-NET); 3) adaptive proof-of-concept clinical trial design; and 4) finance and fund raising. GAP-NET proposes to establish a standardized network of continuously funded trial sites that are highly qualified to perform trials (with established clinical, biomarker, and imaging capability; certified raters; and a sophisticated management system). GAP-NET will conduct trials for academic and biopharma industry partners using standardized instrument versions and administration. Collaboration with the Innovative Medicines Initiative (IMI) European Prevention of Alzheimer’s Disease (EPAD) program, the Canadian Consortium on Neurodegeneration in Aging (CCNA) and other similar international initiatives will allow conduct of global trials. GAP-NET aims to increase trial efficiency and quality, decrease trial redundancy, accelerate cohort development and trial recruitment, and decrease trial costs. The value proposition for sites includes stable funding and uniform training and trial execution; the value to trial sponsors is decreased trial costs, reduced time to execute trials, and enhanced data quality. The value for patients and society is the more rapid availability of new treatments for AD. PMID:28459045
Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
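The effect of limited weight precision studied above can be reproduced with a simple uniform quantizer: with b bits there are 2**b representable levels spanning the weight range. This is a generic sketch, not the article's hardware-aware training mechanism:

```python
def quantize(weights, bits):
    """Round weights onto a uniform grid with 2**bits levels.

    The grid spans [-max|w|, +max|w|], mimicking the limited synaptic
    precision of spike-based hardware. Generic sketch; the adapted
    training loop described in the article is more involved.
    """
    levels = 2 ** bits
    w_max = max(abs(w) for w in weights) or 1.0  # avoid a zero-width grid
    step = 2 * w_max / (levels - 1)
    out = []
    for w in weights:
        idx = round((w + w_max) / step)          # nearest grid index
        idx = min(max(idx, 0), levels - 1)       # clip to representable range
        out.append(-w_max + idx * step)
    return out
```

At bits=2 every weight collapses onto one of four values, which is roughly the regime the article reports spiking DBNs can still tolerate.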
Lin, Yu-Tzu; Chen, Ming-Puu; Chang, Chia-Hu; Chang, Pu-Chen
The benefits of social learning have been recognized by existing research. To explore knowledge distribution in social learning and its effects on learning achievement, we developed a social learning platform and explored students' behaviors of peer interactions by the proposed algorithms based on social network analysis. An empirical study was…
Muhammad Ilyas Syarif
Energy efficiency and stream data mining in Wireless Sensor Networks (WSNs) are very interesting issues to discuss. Routing protocol technology and resource-awareness can be used to improve energy efficiency. In this paper we merge routing protocol technology using Distance Vector routing and a Resource-Aware (RA) framework on heterogeneous wireless sensor networks by combining the Sun SPOT and Imote2 wireless sensor network platforms. RA performs resource monitoring of the battery, memory and CPU load more optimally and efficiently. The process uses Light-Weight Clustering (LWC) and Light-Weight Frequent Item (LWF) mining. The results show that by adapting resource-awareness in wireless sensor networks, the lifetime of the wireless sensors improves by up to ±16.62%.
The development of IP-based technology contributes to the development of telecommunication and information technology. One application of IP-based technology is multicast streaming, as part of broadcasting. The streaming process is performed by accessing the Telkom-2 broadcast through the AKATEL LAN network; the server then forwards it to clients using the multicast IP system. Multicast IP uses class-D addresses, which are able to deliver data packages in real time. In a multicast system, the server sends only one data package to several clients at the same transmission speed. The Telkom-2 broadcast is accessed before being sent as data packages. The server accesses the Telkom-2 broadcast using a parabolic antenna and a Hughes modem, then forwards it to clients through the AKATEL LAN network. Clients must connect to the server via the AKATEL LAN network and have VLC player installed in order to access the Telkom-2 broadcast.
Menasce, Daniel A.; Singhal, Mukesh
This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.
Microsoft Virtual Server 2005 consistently proves to be worth its weight in gold, with new implementations thought up every day. With this product now a free download from Microsoft, scores of new users are able to experience what the power of virtualization can do for their networks. This guide is aimed at network administrators who are interested in ways that Virtual Server 2005 can be implemented in their organizations in order to save money and increase network productivity. It contains information on setting up a virtual network, virtual consolidation, virtual security, virtual honeypo
Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Muroyama, Masanori
Robot tactile sensation can enhance human–robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as “sensor platform LSI”) as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated. PMID:29061954
Hardware/software (HW/SW) co-simulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW co-simulation platform is used to ease debugging and verification of very large-scale integration (VLSI) designs. To accelerate the computation of the gesture recognition technique, an HW/SW implementation using field-programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel design of the memory controller in the Verilog Hardware Description Language (Verilog HDL) to reduce memory consumption and the load on the processor; (2) the testing part of the neural network algorithm is hardwired to improve speed and performance. American Sign Language gesture recognition is chosen to verify the performance of the approach. Several experiments were carried out on four databases of the gestures (alphabet signs A to Z). (3) The major benefit of this design is that it takes only a few milliseconds to recognize a hand gesture, which makes it computationally more efficient.
Navia, Marlon; Campelo, José Carlos; Bonastre, Alberto; Ors, Rafael
Monitoring is one of the best ways to evaluate the behavior of computer systems. When the monitored system is a distributed system, such as a wireless sensor network (WSN), the monitoring operation must also be distributed, providing a distributed trace for further analysis. The temporal sequence of the events registered by the distributed monitoring platform (DMP) must be correctly established to provide cause-effect relationships between them, so the logs obtained in different monitor nodes must be synchronized. Many of the synchronization mechanisms applied to DMPs consist of adjusting the internal clocks of the nodes to the same value as a reference time. However, these mechanisms can create an incoherent event sequence. This article presents a new method to achieve global synchronization of the traces obtained in a DMP. It is based on periodic synchronization signals that are received by the monitor nodes and logged along with the recorded events. This mechanism processes all traces and generates a global post-synchronized trace by scaling all registered times proportionally according to the synchronization signals. It is intended to be a simple but efficient offline mechanism. Its application in a WSN-DMP demonstrates that it guarantees a correct ordering of the events, avoiding the aforementioned issues.
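The proportional-scaling step can be sketched as a linear map between two successive synchronization signals: the beacons' local and global times anchor the transformation, and events logged between them are interpolated. Piecewise handling of a long beacon sequence is omitted here for brevity:

```python
def to_global(local_t, local_syncs, global_syncs):
    """Map a locally timestamped event onto the global timeline.

    `local_syncs` and `global_syncs` hold the local and global times of two
    successive synchronization signals logged by one monitor node; events
    between them are scaled proportionally. Sketch of the offline mechanism;
    a full trace would be processed beacon pair by beacon pair.
    """
    l0, l1 = local_syncs
    g0, g1 = global_syncs
    scale = (g1 - g0) / (l1 - l0)    # corrects clock-rate (drift) differences
    return g0 + (local_t - l0) * scale
```

Because the map is monotonically increasing within each beacon interval, events that a node logged in order stay in order after post-synchronization, which is the coherence property the article emphasizes.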
Young people today are growing up in a digitalized environment. What challenges do they face in navigating this content-rich, symbolic environment? In this article, the researcher reviews university students' perceptions of media literacy by examining the use of social networking (SN) platforms in academic settings. The researcher distributed 1200 surveys, evenly split between Chinese and UAE students, of which 998 were returned and analyzed. The findings reveal that while many students believe that media literacy should become a priority in modern curricula, this urgency is not felt by the majority of students. The researcher reviews current views and methodologies in the literature related to media literacy and its status in current pedagogy. The study draws from gravitation theory to place the use of SN tools within a broader background of communication. The Uses and Gratifications Theory is also invoked to explain how SN was made attractive to campus activists and protesters in the two countries.
China's astronomical exploration of the Antarctic region has been initiated and is moving forward, with an R&D roadmap identifying each progressive step. Over the past several years China has set up the Kunlun station at Antarctic Dome A, and the Chinese Small Telescope ARray (CSTAR) has already been up and running regularly. In addition, the Antarctic Schmidt Telescope AST3_1 was transported to the area in 2011 and has recently been placed in service, with larger telescopes predictably to come. The Antarctic region is one of the few best sites left on Earth for astronomical observation, yet it has among the worst conditions for human survival and activity. To meet such a tough challenge it is essential to establish an efficient and reliable means of remote access for routine telescope observation. This paper outlines the remote communication for CSTAR and AST3_1, and further proposes an intercontinental network control platform for the Chinese Antarctic telescope array with fully automatic remote control and robotic observation and management. A number of technical issues for telescope access, such as unattended operation, the bandwidth of Iridium satellite transmission, and the means of reliable and secure communication, among other things, are reviewed and analyzed.
SQL Server 2008 is the latest update to Microsoft's flagship database management system. This is the largest update since SQL Server 2005. SQL Server 2008 is a much more significant update than SQL Server 2005, because it brings increased ability to deliver data across more platforms, and thus many different types of devices. New functionality also allows for easy storage and retrieval of digitized images and video. These attributes address the recent explosion in the popularity of web-based video and server and desktop virtualization. The Real MCTS SQL Server 2008 Exam 70-432 Prep Kit prepare
Dezhgosha, Kamyar; Marcus, Robert; Brewster, Stephen
The goal of this project is to find cost-effective and efficient strategies/solutions to integrate existing databases, manage network, and improve productivity of users in a move towards client/server and Integrated Desktop Environment (IDE) at NASA LeRC. The project consisted of two tasks as follows: (1) Data collection, and (2) Database Development/Integration. Under task 1, survey questionnaires and a database were developed. Also, an investigation on commercially available tools for automated data-collection and net-management was performed. As requirements evolved, the main focus has been task 2 which involved the following subtasks: (1) Data gathering/analysis of database user requirements, (2) Database analysis and design, making recommendations for modification of existing data structures into relational database or proposing a common interface to access heterogeneous databases(INFOMAN system, CCNS equipment list, CCNS software list, USERMAN, and other databases), (3) Establishment of a client/server test bed at Central State University (CSU), (4) Investigation of multi-database integration technologies/ products for IDE at NASA LeRC, and (5) Development of prototypes using CASE tools (Object/View) for representative scenarios accessing multi-databases and tables in a client/server environment. Both CSU and NASA LeRC have benefited from this project. CSU team investigated and prototyped cost-effective/practical solutions to facilitate NASA LeRC move to a more productive environment. CSU students utilized new products and gained skills that could be a great resource for future needs of NASA.
Foundations of SQL Server 2008 R2 Business Intelligence introduces the entire exciting gamut of business intelligence tools included with SQL Server 2008. Microsoft has designed SQL Server 2008 to be more than just a database. It's a complete business intelligence (BI) platform. The database is at its core, and surrounding the core are tools for data mining, modeling, reporting, analyzing, charting, and integration with other enterprise-level software packages. SQL Server 2008 puts an incredible amount of BI functionality at your disposal. But how do you take advantage of it? That's what this
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187
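The abstract does not give the balancing algorithm itself; as a rough, hypothetical illustration only, here is a minimal Python sketch of a dispatcher that combines periodic load reports ("active feedback") with user location. All class and server names, and the locality bonus, are invented for the example.

```python
class Server:
    def __init__(self, name, region):
        self.name = name
        self.region = region
        self.load = 0.0  # updated by active feedback reports

def report_load(server, load):
    """Active feedback: each server periodically reports its current load."""
    server.load = load

def dispatch(servers, client_region, locality_bonus=0.2):
    """Pick the server with the lowest effective load; servers in the
    client's region get a small bonus so nearby nodes are preferred."""
    def effective_load(s):
        return s.load - (locality_bonus if s.region == client_region else 0.0)
    return min(servers, key=effective_load)

servers = [Server("s1", "north"), Server("s2", "south"), Server("s3", "south")]
report_load(servers[0], 0.9)
report_load(servers[1], 0.5)
report_load(servers[2], 0.4)
choice = dispatch(servers, "south")
```

A real implementation would also decay stale reports and redirect long-lived streaming sessions, as the paper's service redirection algorithm does.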
Server usage in small and medium-sized companies: an empirical investigation into the actual need for network servers during the night and over weekends/holidays in small and medium-sized companies in the German-speaking part of Switzerland; Servernutzung in Klein- und Mittelbetrieben: eine empirische Untersuchung zum effektiven Bedarf von Netzwerk-Servern in der Nacht und an Wochenenden/Feiertagen in Klein- und Mittelbetrieben in der Deutschschweiz
Gubler, M.; Peters, M.
In order to acquire a representative statement about the effective need for network servers at night and at weekends/on public holidays in small and medium-sized companies, a telephone survey was conducted among 400 relevant companies in German-speaking Switzerland. Results: 1) Nowadays, around 80% of small and medium-sized companies in German-speaking Switzerland have an electronic data processing network. 2) The majority of those who have a network leave all their servers running at night (94%) and at the weekend/on public holidays (90%), although about one quarter of the units left on do nothing during the night and almost half do nothing at weekends. 3) About two thirds of the servers which function and carry out tasks during the night need less than three hours to do so. 4) An automatic switching on and off system would be welcomed: 57% of the respondents think the possibility of switching a server on automatically at a specific time or for a specific event would be good, and 46% welcomed the possibility of automatically switching a server off when not in use, combined with the ability to switch it back on from the workplace. 5) In contrast, the reasons for rejecting these possibilities were rarely based on practical or technical matters but were caused by uncertainties, habits and previously formed convictions, or by doubts about the technical feasibility. Conclusion: There is a great deal of room for manoeuvre in the introduction of an automatic switch-on and switch-off system for network servers. Whether this room for manoeuvre can be utilized depends, alongside the provision of technologically mature solutions, on the success of efforts to convince those responsible of its technical feasibility, its ecological desirability and its economic benefits. (author)
Paganelli, Federica; Spinicci, Emilio; Giuli, Dino
Continuous care models for chronic diseases pose several technology-oriented challenges for home-based continuous care, where assistance services rely on a close collaboration among different stakeholders such as health operators, patient relatives, and social community members. Here we describe the Emilia Romagna Mobile Health Assistance Network (ERMHAN), a multichannel context-aware service platform designed to support care networks in cooperating and sharing information with the goal of improving patient quality of life. In order to meet extensibility and flexibility requirements, this platform has been developed through ontology-based context-aware computing and a service-oriented approach. We also provide some preliminary results of performance analysis and user survey activity. PMID:18695739
Interdomain Routing. C. New Approaches for Routing ... Internet routing protocols, such as OSPF or RIP. It also aims to provide better performance in avoiding congestion and in achieving fairness than other ... Best-Effort networks treat packets equally during congestion, so packets are dropped arbitrarily. This might bottleneck an application that is sensitive
On Wednesday, 26 August, 384 servers from the CERN Computing Centre were donated to the Faculty of Science in Physics and Mathematics (FCFM) and the Mesoamerican Centre for Theoretical Physics (MCTP) at the University of Chiapas, Mexico. CERN’s Director-General, Rolf Heuer, met the Mexican representatives in an official ceremony in Building 133, where the servers were prepared for shipment. From left to right: Frédéric Hemmer, CERN IT Department Head; Raúl Heredia Acosta, Deputy Permanent Representative of Mexico to the United Nations and International Organizations in Geneva; Jorge Castro-Valle Kuehne, Ambassador of Mexico to the Swiss Confederation and the Principality of Liechtenstein; Rolf Heuer, CERN Director-General; Luis Roberto Flores Castillo, President of the Swiss Chapter of the Global Network of Qualified Mexicans Abroad; Virginia Romero Tellez, Coordinator of Institutional Relations of the Swiss Chapter of the Global Network of Qualified Me...
This book is ideal for GIS experts, developers, and system administrators who have had a first glance at GeoServer and who are eager to explore all its features in order to configure professional map servers. Basic knowledge of GIS and GeoServer is required.
Today's rapid growth of technology has made people increasingly familiar with the Internet. Web servers are required to respond quickly in order to serve a large number of user requests. A single-server architecture cannot handle the load of web server traffic as the number of users increases, causing the server to fail to serve requests. A server cluster with a load balancer is used to divide the load evenly and so optimize web server performance. The server cluster system was designed using six virtual machines in the VirtualBox application: two load-balancer servers, two application servers and two database servers. The system definition outlines the system requirements and network topology, and the system design describes the hardware and software requirements specification. Analysis and testing were conducted to determine the performance of the designed system. The result of this research is a design of virtual servers that can serve a large number of user requests. Test results showed a maximum capacity of 240 connections when all servers are up, 180 connections when one application server is down and 220 connections when one database server is down. The optimal results were 180 connections with all servers up, 150 connections with one application server down and 160 connections with one database server down.
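The measured drop in capacity when a node fails (240 connections with all servers up, fewer with one down) follows naturally from a balancer that routes only to healthy backends. A hypothetical Python sketch, with invented names and per-server capacities:

```python
class Backend:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # max concurrent connections
        self.up = True
        self.active = 0

def total_capacity(backends):
    """Cluster capacity counts healthy backends only, which is why the
    measured maximum drops when an application or database server fails."""
    return sum(b.capacity for b in backends if b.up)

def route(backends):
    """Send the request to the healthy backend with the most free slots;
    return None when every healthy backend is saturated."""
    healthy = [b for b in backends if b.up and b.active < b.capacity]
    if not healthy:
        return None
    target = max(healthy, key=lambda b: b.capacity - b.active)
    target.active += 1
    return target

apps = [Backend("app1", 120), Backend("app2", 120)]
cap_all = total_capacity(apps)   # both servers up
apps[0].up = False               # one application server fails
cap_degraded = total_capacity(apps)
first = route(apps)              # must go to the surviving server
```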
A concise and practical guide to using SolarWinds Server & Application Monitor. If you are an IT professional, anywhere from an entry-level technician to a more advanced network or system administrator, who is new to network monitoring services and/or SolarWinds SAM, this book is ideal for you.
Dharmalingam, Kalaiarul; Collier, Martin
Replication of web content in the Internet has been found to improve service response time, performance and reliability offered by web services. When working with such distributed server systems, the location of servers with respect to client nodes is found to affect service response time perceived by clients in addition to server load conditions. This is due to the characteristics of the network path segments through which client requests get routed. Hence, a number of researchers have advoc...
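The selection rule the authors describe, weighing a replica's network path against its load, can be sketched hypothetically in Python (the weighting and field names are invented for illustration):

```python
def select_replica(replicas, alpha=0.5):
    """Score each replica by a weighted sum of normalized path latency to
    the client and current server load; lower is better. The weight
    'alpha' and the 100 ms normalization are assumptions for the demo."""
    def score(r):
        return alpha * r["latency_ms"] / 100.0 + (1 - alpha) * r["load"]
    return min(replicas, key=score)

replicas = [
    {"name": "near-busy", "latency_ms": 10, "load": 0.9},
    {"name": "far-idle",  "latency_ms": 80, "load": 0.1},
]
best = select_replica(replicas)
```

With these numbers the heavily loaded nearby replica loses to the idle distant one, illustrating why load conditions matter in addition to proximity.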
S. I. Balandin
The paper describes the implementation of dataflow networks based on the Smart-M3 platform for use cases related to the Internet of Things. The mechanism for automatic substitution of computational agents created on top of the Smart-M3 platform is described. The paper reviews concurrency issues of the developed solution regarding the Smart-M3 platform, as well as in the broader context of the Internet of Things.
Teguh Prasandy; Whisnumurti Adhiwibowo
The server is the most important part of a network, because it is the center of various data and applications. According to Bayu et al. (2010), a server's task is to serve every need. Servers today most often act as web servers, because such a server contains web applications and a database and is used to serve client applications accessed through a browser. The aims of this research are to understand the use of Proxmox on a server and to understand IP routing on virtu...
Simonetti, Franco L; Teppa, Elin; Chernomoretz, Ariel; Nielsen, Morten; Marino Buslje, Cristina
MISTIC (mutual information server to infer coevolution) is a web server for graphical representation of the information contained within an MSA (multiple sequence alignment) and a complete analysis tool for mutual information networks in protein families. The server outputs a graphical visualization of several information-related quantities using a circos representation. This provides an integrated view of the MSA in terms of (i) the mutual information (MI) between residue pairs, (ii) sequence conservation and (iii) the residue cumulative and proximity MI scores. Further, an interactive interface to explore and characterize the MI network is provided. Several tools are offered for selecting subsets of nodes from the network for visualization. Node coloring can be set to match different attributes, such as conservation, cumulative MI, proximity MI and secondary structure. Finally, a zip file containing all results can be downloaded. The server is available at http://mistic.leloir.org.ar. In summary, MISTIC allows for a comprehensive, compact, visually rich view of the information contained within an MSA in a manner not offered by any other publicly available web server. In particular, the use of circos representations of MI networks and the visualization of the cumulative MI and proximity MI concepts are novel.
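MISTIC's central quantity, the mutual information between a pair of alignment columns, can be illustrated with a toy computation. This is the plain frequency estimator of MI, not necessarily MISTIC's exact corrected variant; the example columns are invented.

```python
from collections import Counter
from math import log2

def column_mi(col_a, col_b):
    """Mutual information between two alignment columns, estimated from
    observed residue frequencies:
    MI = sum over (a, b) of p(a,b) * log2(p(a,b) / (p(a) * p(b)))."""
    n = len(col_a)
    pa = Counter(col_a)
    pb = Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), count in pab.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Perfectly covarying columns: knowing one residue determines the other.
mi_high = column_mi("AAGG", "TTCC")
# A conserved column carries no information about its partner.
mi_low = column_mi("AAAA", "TCTC")
```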
Bleier, T.; Kappler, K. N.; Schneider, D.
QuakeFinder (QF) is a humanitarian research and development project attempting to characterize Earth-emitted electromagnetic (EM) signals as potential precursors to earthquakes. Beginning in 2005, QF designed, built, deployed and now maintains an array of 165 remote monitoring stations in 6 countries (US/California, Taiwan, Greece, Indonesia, Peru and Chile). Having amassed approximately 70 TB of data covering more than 140 earthquakes (M4+), QF is focused on the data analysis and signal processing algorithms in our effort to enable a forecasting capability. QF's autonomous stations, located along major fault lines, collect and transmit electromagnetic readings from 3-axis induction magnetometers and positive/negative ion sensors, a geophone, as well as various station health statuses and local conditions. The induction magnetometers, oriented N-S, E-W and vertically, have a 40 nT range and 1 pT sensitivity. Data are continuously collected at 50 samples/sec (sps), GPS time-stamped and transmitted, primarily through cell phone networks, to our data center in Palo Alto, California. The induction magnetometers routinely detect subtle geomagnetic and ionospheric disturbances as observed worldwide. QF seeks to make available both historic data and the array platform to strategic partners in EM-related research and operations fields. The QF system will be described in detail with examples of local and regional geomagnetic activity. The stations are robust and will undergo a system-level upgrade in the near future. Domestically, QF maintains 98% 'up time' among the 120 stations in California, while internationally the metric is typically near 80%. Irregular cell phone reception is chief among the reasons for outages, although little data has been lost as the stations can store up to 90 days of data. These data are retrieved by QF personnel or, when communication is reestablished, the QF data ingest process automatically updates the database. Planned station upgrades
Vincent, Jonathan; Martre, Pierre; Gouriou, Benjamin; Ravel, Catherine; Dai, Zhanwu; Petit, Jean-Marc; Pailloux, Marie
With the increasing amount of -omics data available, a particular effort has to be made to provide suitable analysis tools. A major challenge is that of unraveling the molecular regulatory networks (RNs) from massive and heterogeneous datasets. Here we describe RulNet, a web-oriented platform dedicated to the inference and analysis of regulatory networks from qualitative and quantitative -omics data by means of rule discovery. Queries for rule discovery can be written in an extended form of the RQL query language, which has a syntax similar to SQL. RulNet also offers users interactive features that progressively adjust and refine the inferred networks. In this paper, we present a functional characterization of RulNet and compare inferred networks with correlation-based approaches. The performance of RulNet has been evaluated using the three benchmark datasets used for the transcriptional network inference challenge DREAM5. Overall, RulNet performed as well as the best methods that participated in this challenge, and it behaved more consistently when compared across the three datasets. Finally, we assessed the suitability of RulNet to analyze experimental -omics data and to infer regulatory networks involved in the response to nitrogen and sulfur supply in wheat (Triticum aestivum L.) grains. The results highlight putative actors governing the response to nitrogen and sulfur supply in wheat grains. We evaluate the main characteristics and features of RulNet as an all-in-one solution for RN inference, visualization and editing. Simple yet powerful RulNet queries allowed RNs involved in the adaptation of wheat grain to N and S supply to be discovered. We demonstrate the effectiveness and suitability of RulNet as a platform for the analysis of RNs involving different types of -omics data. The results are promising since they are consistent with what was previously established by the scientific community.
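RulNet's actual RQL syntax is not given in the abstract and is not reproduced here. As a generic, hypothetical illustration of the quantities underlying rule discovery, the support and confidence of a candidate rule over a toy -omics table can be computed like this (gene names and thresholds are invented):

```python
def rule_metrics(rows, antecedent, consequent):
    """Support and confidence of the rule 'antecedent => consequent' over a
    list of observation dicts, the basic quantities behind rule discovery."""
    matches_a = [r for r in rows if antecedent(r)]
    matches_both = [r for r in matches_a if consequent(r)]
    support = len(matches_both) / len(rows)
    confidence = len(matches_both) / len(matches_a) if matches_a else 0.0
    return support, confidence

# Toy expression table for two hypothetical genes.
rows = [
    {"geneA": "high", "geneB": "high"},
    {"geneA": "high", "geneB": "high"},
    {"geneA": "high", "geneB": "low"},
    {"geneA": "low",  "geneB": "low"},
]
support, confidence = rule_metrics(
    rows,
    antecedent=lambda r: r["geneA"] == "high",
    consequent=lambda r: r["geneB"] == "high",
)
```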
Subramanian, Sureshkumar V
Internet Protocol (IP) telephony is an alternative to the traditional Public Switched Telephone Network (PSTN), and the Session Initiation Protocol (SIP) is quickly becoming a popular signaling protocol for VoIP-based applications. SIP is a peer-to-peer multimedia signaling protocol standardized by the Internet Engineering Task Force (IETF), and it plays a vital role in providing IP telephony services through its use of the SIP Proxy Server (SPS), a software application that provides call routing services by parsing and forwarding all the incoming SIP packets in an IP telephony network. SIP Pr
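An SPS's first step, parsing the request line of each incoming SIP message before deciding where to forward it, can be sketched minimally in Python. This is illustrative only; a real proxy must also parse headers such as Via and Route per RFC 3261.

```python
def parse_sip_request_line(line):
    """Split a SIP request line, e.g. 'INVITE sip:bob@example.com SIP/2.0',
    into the fields a proxy needs before forwarding the request."""
    method, request_uri, version = line.split(" ", 2)
    if not version.startswith("SIP/"):
        raise ValueError("not a SIP request line")
    return {"method": method, "uri": request_uri, "version": version}

req = parse_sip_request_line("INVITE sip:bob@example.com SIP/2.0")
```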
Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.
Virtualization provides the opportunity to continue to do "more with less"---more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages and any disadvantages of virtualization in all of the environments associated with software and systems development to operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
van Foreest, N.D.; Mandjes, M.R.H.; van Ommeren, Jan C.W.; Scheinhardt, Willem R.W.
We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a 'blocking threshold'. In addition, in variant 2 the first server decreases its service rate when the second queue exceeds a
N.D. van Foreest; M.R.H. Mandjes (Michel); J.C.W. van Ommeren; W.R.W. Scheinhardt (Werner)
We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a 'blocking threshold'. In addition, in variant 2 the first server decreases its service rate when the second queue
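The blocking mechanism common to both variants can be illustrated with a crude discrete-time simulation. All rates are invented, and this only demonstrates the structural property that the second queue can never exceed the threshold; it is not the papers' analysis.

```python
import random

def simulate_tandem(threshold, n_steps=10000, lam=0.5, mu1=0.6, mu2=0.55, seed=1):
    """Discrete-time sketch of the two-station tandem: each step, an
    arrival joins queue 1 with prob. lam; server 1 completes a job (moving
    it to queue 2) with prob. mu1 *unless* queue 2 has reached the blocking
    threshold; server 2 completes a job with prob. mu2. Returns the peak
    length observed at queue 2."""
    rng = random.Random(seed)
    q1 = q2 = 0
    max_q2 = 0
    for _ in range(n_steps):
        if rng.random() < lam:
            q1 += 1
        if q1 > 0 and q2 < threshold and rng.random() < mu1:
            q1 -= 1
            q2 += 1
        if q2 > 0 and rng.random() < mu2:
            q2 -= 1
        max_q2 = max(max_q2, q2)
    return max_q2

peak = simulate_tandem(threshold=5)
```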
Eguibar, Vicente Rodriguez
Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. The approach is tutorial in manner, guiding users in an orderly way toward virtualization. This book is conceived for system administrators and advanced PC enthusiasts who want to venture into the virtualization world. Although this book starts from scratch, knowledge of server Operating Systems, LANs and networking has to be in place. A good background in server administration is desirable, including networking service
Celicourt, P.; Sam, R.; Piasecki, M.
Global phenomena such as climate change and large scale environmental degradation require the collection of accurate environmental data at detailed spatial and temporal scales from which knowledge and actionable insights can be derived using data science methods. Despite significant advances in sensor network technologies, sensors and sensor network deployment remains a labor-intensive, time consuming, cumbersome and expensive task. These factors demonstrate why environmental data collection remains a challenge especially in developing countries where technical infrastructure, expertise and pecuniary resources are scarce. In addition, they also demonstrate the reason why dense and long-term environmental data collection has been historically quite difficult. Moreover, hydrometeorological data collection efforts usually overlook the (critically important) inclusion of a standards-based system for storing, managing, organizing, indexing, documenting and sharing sensor data. We are developing a cross-platform software framework using the Python programming language that will allow us to develop a low cost end-to-end (from sensor to publication) system for hydrometeorological conditions monitoring. The software framework contains provision for sensor, sensor platforms, calibration and network protocols description, sensor programming, data storage, data publication and visualization and more importantly data retrieval in a desired unit system. It is being tested on the Raspberry Pi microcomputer as end node and a laptop PC as the base station in a wireless setting.
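The framework's code is not shown here; as a hypothetical sketch, in the project's chosen language Python, of the "data retrieval in a desired unit system" provision (function names and the conversion table are invented; the conversion factors themselves are standard):

```python
# Observations are stored in a base unit and converted on the way out.
CONVERSIONS = {
    ("degC", "degF"): lambda v: v * 9 / 5 + 32,   # Celsius to Fahrenheit
    ("mm", "in"): lambda v: v / 25.4,             # millimetres to inches
}

def retrieve(series, stored_unit, desired_unit):
    """Return the stored series converted to the caller's desired unit."""
    if stored_unit == desired_unit:
        return list(series)
    convert = CONVERSIONS[(stored_unit, desired_unit)]
    return [convert(v) for v in series]

temps_f = retrieve([0.0, 100.0], "degC", "degF")
```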
In the era of globalization, computer viruses have grown rapidly; they are no longer merely a subject of academic research but a common problem for computer users around the world. The resulting losses are growing with the increasingly widespread use of the Internet as a global communication line between computer users, as shown by the results of the CSI/FB survey. Along with this progress, computer viruses have evolved in form, characteristics and distribution media, for example worms, spyware, Trojan horses and malcode programs. Through the development of a client-server based antivirus, users can easily observe the behavior of viruses and worms, learn which part of an operating system is being attacked, and rely on the network-based client-server antivirus as a fast and reliable scanning engine that recognizes viruses while economizing on memory.
Making Everything Easier! Mac OS® X Snow Leopard Server For Dummies. Learn to: set up and configure a Mac network with Snow Leopard Server; administer, secure, and troubleshoot the network; incorporate a Mac subnet into a Windows Active Directory® domain; take advantage of Unix® power and security. John Rizzo. Want to set up and administer a network even if you don't have an IT department? Read on! Like everything Mac, Snow Leopard Server was designed to be easy to set up and use. Still, there are so many options and features that this book will save you heaps of time and effort. It wa
Veale, Hilary J; Sacks-Davis, Rachel; Weaver, Emma Rn; Pedrana, Alisa E; Stoové, Mark A; Hellard, Margaret E
Online social networking platforms such as Facebook and Twitter have grown rapidly in popularity, with opportunities for interaction enhancing their health promotion potential. Such platforms are being used for sexual health promotion but with varying success in reaching and engaging users. We aimed to identify Facebook and Twitter profiles that were able to engage large numbers of users, and to identify strategies used to successfully attract and engage users in sexual health promotion on these platforms. We identified active Facebook (n = 60) and Twitter (n = 40) profiles undertaking sexual health promotion through a previous systematic review, and assessed profile activity over a one-month period. Quantitative measures of numbers of friends and followers (reach) and social media interactions were assessed, and composite scores used to give profiles an 'engagement success' ranking. Associations between host activity, reach and interaction metrics were explored. Content of the top ten ranked Facebook and Twitter profiles was analysed using a thematic framework and compared with five poorly performing profiles to identify strategies for successful user engagement. Profiles that were able to successfully engage large numbers of users were more active and had higher levels of interaction per user than lower-ranked profiles. Strategies used by the top ten ranked profiles included: making regular posts/tweets (median 46 posts or 124 tweets/month for top-ranked profiles versus six posts or six tweets for poorly-performing profiles); individualised interaction with users (85% of top-ranked profiles versus 0% for poorly-performing profiles); and encouraging interaction and conversation by posing questions (100% versus 40%). Uploading multimedia material (80% versus 30%) and highlighting celebrity involvement (70% versus 10%) were also key strategies. Successful online engagement on social networking platforms can be measured through quantitative (user numbers and
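The study's exact composite formula is not given in the abstract; as a hypothetical illustration, an 'engagement success' ranking can combine normalized reach with per-user interaction rate (the equal weighting and profile names below are invented):

```python
def engagement_rank(profiles):
    """Rank profiles by a composite of reach (followers) and interaction
    rate per follower, each normalized against the best profile in the set."""
    max_reach = max(p["followers"] for p in profiles)
    max_inter = max(p["interactions"] / p["followers"] for p in profiles)
    def score(p):
        reach = p["followers"] / max_reach
        inter = (p["interactions"] / p["followers"]) / max_inter
        return (reach + inter) / 2
    return sorted(profiles, key=score, reverse=True)

profiles = [
    {"name": "big-quiet",    "followers": 10000, "interactions": 100},
    {"name": "small-lively", "followers": 1000,  "interactions": 400},
]
ranked = engagement_rank(profiles)
```

With these numbers the smaller but highly interactive profile outranks the large quiet one, mirroring the study's finding that interaction per user matters, not reach alone.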
Gorton, Ian; Liu, Yan; Trivedi, Nihar
Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality to scale an image based on the load of the server and network connection speed. The experimental evaluation demonstrates the performance gains possible by adaptive behavior and the low overhead introduced by ASF.
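The case study's adaptation decision, degrading image resolution and quality under server load or slow links, can be sketched hypothetically in Python. The thresholds and return values are invented for illustration and are not ASF's actual control components.

```python
def choose_rendering(server_load, bandwidth_mbps):
    """Pick an image resolution/quality tier from current server load
    (0.0-1.0) and the client's network connection speed."""
    if server_load > 0.8 or bandwidth_mbps < 1.0:
        return {"resolution": "low", "quality": 40}
    if server_load > 0.5 or bandwidth_mbps < 5.0:
        return {"resolution": "medium", "quality": 70}
    return {"resolution": "full", "quality": 95}

plan = choose_rendering(server_load=0.9, bandwidth_mbps=10.0)
```

Keeping this rule in a separate control component, outside the image-serving business logic, is the separation of concerns ASF advocates.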
Kress, V. C.; Ghiorso, M. S.
The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser- based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed
The goal of the project was to develop and deploy an Android platform architecture for a turn-based MMORPG. The task included the development of the architecture which should satisfy a client-server model, provide all necessary functionality to interact with a remote server machine and add interactivity through the design and styling of the graphical user interface. In addition to that, the development process involved the engineering of the game world model and the integration of the Observe...
Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly are made up from coverage data, according to ISO and OGC defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The transatlantic EarthServer initiative, running from 2011 through 2014, has united 11 partners to establish Big Earth Data Analytics. A key ingredient has been flexibility for users to ask whatever they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level, standards-based query languages which unify data and metadata search in a simple, yet powerful way. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantic-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is comprised of rasdaman, the pioneering and leading Array DBMS built for any-size multi-dimensional raster data, being extended with support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly
Nguyen, Hien; Dang Tran, Frédéric; Menaud, Jean-Marc
Cloud platforms host several independent applications on a shared resource pool with the ability to allocate computing power to applications on a per-demand basis. The use of server virtualization techniques for such platforms provides great flexibility, with the ability to consolidate several virtual machines on the same physical server, to resize a virtual machine's capacity and to migrate virtual machines across physical servers. A key challenge for cloud providers is...
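The consolidation problem sketched above is often treated as bin packing. A minimal Python sketch of the first-fit-decreasing heuristic, a common approach but not necessarily the one any particular cloud provider uses (VM names and demands are invented):

```python
def consolidate(vm_demands, server_capacity):
    """First-fit decreasing: place each VM (largest CPU demand first) on
    the first server with room, opening a new server only when needed.
    Returns the server count and a vm -> server-index placement map."""
    servers = []      # remaining capacity per open server
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] -= demand
                placement[vm] = i
                break
        else:
            servers.append(server_capacity - demand)
            placement[vm] = len(servers) - 1
    return len(servers), placement

n_servers, placement = consolidate(
    {"vm1": 0.5, "vm2": 0.4, "vm3": 0.3, "vm4": 0.6}, server_capacity=1.0)
```

Here 1.8 units of total demand fit on two unit-capacity servers, illustrating the consolidation gain over one server per VM.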
This model was a disk storage server used in the Data Centre up until 2012. Each tray contains a hard disk drive (see the 5TB hard disk drive on the main disk display section - this actually fits into one of the trays). There are 16 trays in all per server. There are hundreds of these servers mounted on racks in the Data Centre, as can be seen.
Lodziņš, Gunārs Ernests
Currently, the part of the information technology sector responsible for server infrastructure is seeing huge development in the field of server virtualization on the x86 computer architecture. A prerequisite for this development is the growth in server productivity and the underutilization of available computing power. Several companies on the market are working on two virtualization architectures: hypervisor-based and hosted. This paper examines several virtualization products that use host...
Ollero, Anibal; Bernard, Markus; La Civita, Marco; van Hoesel, L.F.W.; Marron, Pedro J.; Lepley, Jason; de Andres, Eduardo
This paper presents the AWARE platform that seeks to enable the cooperation of autonomous aerial vehicles with ground wireless sensor-actuator networks comprising both static and mobile nodes carried by vehicles or people. Particularly, the paper presents the middleware, the wireless sensor network,
Ghanate, Avinash; Ramasamy, Sureshkumar; Suresh, C. G.
Engineering protein molecules with desired structure and biological functions has been an elusive goal. Development of industrially viable proteins with improved properties such as stability, catalytic activity and altered specificity by modifying the structure of an existing protein has widely been targeted through rational protein engineering. Although a range of factors contributing to thermal stability have been identified and widely researched, the in silico implementation of these as strategies directed towards enhancement of protein stability has not yet been explored extensively. A wide range of structural analysis tools is currently available for in silico protein engineering. However, these tools concentrate on only a limited number of factors or individual protein structures, resulting in cumbersome and time-consuming analysis. The iRDP web server presented here provides a unified platform comprising the iCAPS, iStability and iMutants modules. Each module addresses different facets of effective rational engineering of proteins aiming towards enhanced stability. While iCAPS aids in the selection of a target protein based on factors contributing to structural stability, iStability uniquely offers in silico implementation of known thermostabilization strategies in proteins for identification and stability prediction of potential stabilizing mutation sites. iMutants aims to assess mutants based on changes in the local interaction network and the degree of residue conservation at the mutation sites. Each module was validated using an extensively diverse dataset. The server is freely accessible at http://irdp.ncl.res.in and has no login requirements. PMID:26436543
Fang, Chin [SLAC National Accelerator Lab., Menlo Park, CA (United States)
This Technical Note describes how the Zettar team came up with a data transfer cluster design that convincingly proved the feasibility of using high-density servers for high-performance Big Data transfers. It then outlines the tests, operations, and observations that address a potential over-heating concern regarding the use of Non-Volatile Memory Host Controller Interface Specification (NVMHCI aka NVM Express or NVMe) Gen 3 PCIe SSD cards in high-density servers. Finally, it points out the possibility of developing a new generation of high-performance Science DMZ data transfer system for the data-intensive research community and commercial enterprises.
Hermans, Frans; Sartas, Murat; van Schagen, Boudy; van Asten, Piet
Multi-stakeholder platforms (MSPs) are seen as a promising vehicle to achieve agricultural development impacts. By increasing collaboration, exchange of knowledge and influence mediation among farmers, researchers and other stakeholders, MSPs supposedly enhance their ‘capacity to innovate’ and contribute to the ‘scaling of innovations’. The objective of this paper is to explore the capacity to innovate and scaling potential of three MSPs in Burundi, Rwanda and the South Kivu province located in the eastern part of Democratic Republic of Congo (DRC). In order to do this, we apply Social Network Analysis and Exponential Random Graph Modelling (ERGM) to investigate the structural properties of the collaborative, knowledge exchange and influence networks of these MSPs and compared them against value propositions derived from the innovation network literature. Results demonstrate a number of mismatches between collaboration, knowledge exchange and influence networks for effective innovation and scaling processes in all three countries: NGOs and private sector are respectively over- and under-represented in the MSP networks. Linkages between local and higher levels are weak, and influential organisations (e.g., high-level government actors) are often not part of the MSP or are not actively linked to by other organisations. Organisations with a central position in the knowledge network are more sought out for collaboration. The scaling of innovations is primarily between the same type of organisations across different administrative levels, but not between different types of organisations. The results illustrate the potential of Social Network Analysis and ERGMs to identify the strengths and limitations of MSPs in terms of achieving development impacts. PMID:28166226
Hermans, Frans; Sartas, Murat; van Schagen, Boudy; van Asten, Piet; Schut, Marc
Multi-stakeholder platforms (MSPs) are seen as a promising vehicle to achieve agricultural development impacts. By increasing collaboration, exchange of knowledge and influence mediation among farmers, researchers and other stakeholders, MSPs supposedly enhance their 'capacity to innovate' and contribute to the 'scaling of innovations'. The objective of this paper is to explore the capacity to innovate and scaling potential of three MSPs in Burundi, Rwanda and the South Kivu province located in the eastern part of Democratic Republic of Congo (DRC). In order to do this, we apply Social Network Analysis and Exponential Random Graph Modelling (ERGM) to investigate the structural properties of the collaborative, knowledge exchange and influence networks of these MSPs and compared them against value propositions derived from the innovation network literature. Results demonstrate a number of mismatches between collaboration, knowledge exchange and influence networks for effective innovation and scaling processes in all three countries: NGOs and private sector are respectively over- and under-represented in the MSP networks. Linkages between local and higher levels are weak, and influential organisations (e.g., high-level government actors) are often not part of the MSP or are not actively linked to by other organisations. Organisations with a central position in the knowledge network are more sought out for collaboration. The scaling of innovations is primarily between the same type of organisations across different administrative levels, but not between different types of organisations. The results illustrate the potential of Social Network Analysis and ERGMs to identify the strengths and limitations of MSPs in terms of achieving development impacts.
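The finding above that organisations with a central position in the knowledge network are more sought out for collaboration can be illustrated with a toy in-degree computation. The organisation names and edges below are hypothetical, and this is only a crude proxy for the Social Network Analysis and ERGM modelling the paper actually applies:

```python
from collections import Counter

# Hypothetical directed knowledge-exchange edges: (seeker, source).
# An edge means the first organisation seeks knowledge from the second.
edges = [
    ("farmer_coop", "research_inst"),
    ("ngo_a", "research_inst"),
    ("ngo_b", "research_inst"),
    ("trader_assoc", "ngo_a"),
    ("farmer_coop", "ngo_a"),
]

# In-degree = how often an organisation is sought out as a knowledge source,
# a simple proxy for centrality in the knowledge-exchange network.
in_degree = Counter(source for _, source in edges)
most_central, hits = in_degree.most_common(1)[0]
```

An ERGM would go further, testing whether such centrality patterns are statistically distinguishable from chance given the network's other structural properties.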
Veen, J.S. van der; Bastiaans, M.; Jonge, M. de; Strijkers, R.J.
This paper discusses a cloud storage platform in the defense context. The mobile and dismounted domains of defense organizations typically use devices that are light in storage, processing and communication capabilities. This means that it is difficult to store a lot of information on these devices
Utz, S.; Comunello, F.
This chapter compares the SNS use of Dutch students across time and platforms. Between 2009 (n = 194) and 2010 (n = 212), many users migrated from Hyves, the hitherto largest Dutch SNS, to Facebook. Comparisons between the two years showed that SNS use remained relatively stable over time; only
As the environment gradually deteriorates, methods of environmental risk assessment have developed from considering only a single source to considering cumulative risk sources. In accordance with the water environment features of the plain river network area, a cumulative risk assessment system for the water environment in the plain river network area was established in this paper. Its design process can be divided into three steps: (1) reasonably divided control units were chosen as the basic units for water quality management; (2) on that basis, according to the characteristics of the plain river network area, the cumulative risk indexes were selected, with index weights calculated using the entropy method and the analytic hierarchy process (AHP), which determine the risk grade of each control unit; (3) the cumulative risk assessment method is coupled to the existing water environment management platform. The platform, with a dynamic database, can realize dynamic calculation and visualization of the cumulative risk grade. In this paper, the Zhejiang area of Taihu Basin was selected as the research target, being a typical plain river network area. Thirty-five control units were divided according to the regional water environment and control sections. Taking data from the year 2011 as an example, the proposed cumulative risk assessment method was used to identify the control units in different grades, and the results demonstrated that the numbers of high-, medium-, low- and extremely low-risk control units are 13, 12, 5 and 5, respectively. It is necessary to give priority to the high-risk control units. Therefore, the cumulative risk assessment method based on control units provides an essential theoretical basis for reducing the probability of water pollution and the degree of water pollution damage.
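The entropy-weight part of the index-weighting step described above can be sketched as follows. The control-unit indicator matrix is hypothetical, and the AHP combination step is omitted:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: indicators whose values are more dispersed
    across control units (lower entropy) receive higher weight."""
    X = np.asarray(X, dtype=float)
    # normalize each indicator column to proportions
    P = X / X.sum(axis=0)
    n = X.shape[0]
    # entropy per indicator (0 * log 0 treated as 0)
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)
    d = 1.0 - E          # degree of divergence of each indicator
    return d / d.sum()   # weights sum to 1

# hypothetical matrix: rows = control units, columns = risk indicators
X = [[0.2, 30, 5],
     [0.8, 10, 9],
     [0.5, 20, 7]]
w = entropy_weights(X)
```

In the full method these entropy weights would be combined with AHP-derived subjective weights before grading each control unit.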
Wan, Jiguang; Xie, ChangSheng; Tan, Zhihu
With the increase of CD data on the internet, the CD mirror server has become a new technology. Considering the performance requirements of traditional CD mirror servers, we present a novel high-performance VCL (Virtual CD Library) server. What makes the VCL server superior are two patented technologies: a new caching architecture and an efficient network protocol specifically tailored to VCL applications. The VCL server is built on an innovative caching technology: it employs a two-level cache structure on both the client side and the server side. Instead of using existing network and file protocols such as SMB/CIFS that are generally used by existing CD servers, we have developed a set of new protocols specifically suited to the VCL environment. The new protocol is a native VCL protocol built directly on the TCP/IP protocol. The VCL protocol optimizes data transfer performance for block-level data as opposed to file-system-level data. The advantage of using a block-level native protocol is a reduced network-bandwidth requirement to transfer the same amount of data compared to a file-system-level protocol. Our experiments and independent testing have shown that VCL servers allow many more concurrent users than existing products. For very high resolution DVD videos, a VCL server with a 100 Mbps NIC supports over 10 concurrent users viewing the same or different videos simultaneously. For VCD videos, the same VCL can support over 65 concurrent users viewing videos simultaneously. For data CDs, the VCL can support over 500 concurrent data stream users.
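The two-level block cache described above is not specified in detail in the abstract. As an illustration, a minimal block-level cache with an assumed LRU replacement policy can be sketched; the block ids and data are hypothetical, and the VCL server's actual replacement policy may differ:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal block-cache sketch; LRU replacement is an assumption,
    not the VCL server's documented policy."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # block_id -> block data

    def get(self, block_id):
        if block_id not in self.store:
            return None                    # miss: fetch from next cache level
        self.store.move_to_end(block_id)   # mark as most recently used
        return self.store[block_id]

    def put(self, block_id, data):
        self.store[block_id] = data
        self.store.move_to_end(block_id)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("blk0", "CD sector data 0")
cache.put("blk1", "CD sector data 1")
cache.get("blk0")                        # touch blk0 so blk1 becomes LRU
cache.put("blk2", "CD sector data 2")    # exceeds capacity: evicts blk1
```

In a two-level design, a miss in the client-side cache would fall through to the server-side cache before hitting the CD image on disk.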
Huesch, Marco D; Galstyan, Aram; Ong, Michael K; Doctor, Jason N
To pilot public health interventions targeted at women potentially interested in maternity care via campaigns on social media (Twitter), social networks (Facebook), and online search engines (Google Search). Primary data from Twitter, Facebook, and Google Search on users of these platforms in Los Angeles between March and July 2014. Observational study measuring the responses of targeted users of Twitter, Facebook, and Google Search exposed to our sponsored messages soliciting them to start an engagement process by clicking through to a study website containing maternity care quality information for the Los Angeles market. Campaigns reached a little more than 140,000 consumers each day across the three platforms, with a little more than 400 engagements each day. Facebook and Google Search had broader reach, better engagement rates, and lower costs than Twitter. Costs to reach 1,000 targeted users were approximately in the same range as less well-targeted radio and TV advertisements, while initial engagements (a user clicking through an advertisement) cost less than $1 each. Our results suggest that commercially available online advertising platforms in wide use by other industries could play a role in targeted public health interventions. © Health Research and Educational Trust.
Delphinanto, A.; Koonen, A.M.J.; Peeters, M.E.; Hartog, F.T.H. den
The current service- and device discovery protocols are not platform- and network independent. Therefore, proxy servers will be needed to extend the range of IP-based discovery protocols to non-IP domains. We developed an architecture of a proxy that enables Universal Plug and Play (UPnP) devices
He, Fang; Chen, Xi
The accelerating accumulation and risk concentration of Chinese local financing platform debts have attracted wide attention throughout the world. Due to the network of financial exposures among institutions, the failure of several platforms or regions of systemic importance will probably trigger systemic risk and destabilize the financial system. However, the complex network of credit relationships in Chinese local financing platforms at the state level remains unknown. To fill this gap, we presented the first complex network and hierarchical cluster analysis of the credit market of Chinese local financing platforms using the 'bottom up' method from firm-level data. Based on the balance-sheet channel, we analyzed the topology and taxonomy by applying the analysis paradigm of subdominant ultra-metric space to empirical data from 2013. It is remarked that we chose to extract the network of co-financed financing platforms in order to evaluate the effect of risk contagion from platforms to the bank system. We used a new credit similarity measure, combining the factors of connectivity and size, to extract minimal spanning trees (MSTs) and hierarchical trees (HTs). We found that: (1) the degree distributions of the credit correlation backbone structure of Chinese local financing platforms are fat tailed, and the structure is unstable with respect to targeted failures; (2) the backbone is highly hierarchical, and largely explained by geographic region; (3) the credit correlation backbone structure based on connectivity and size is significantly heterogeneous; (4) key platforms and regions of systemic importance, and the contagion path of systemic risk, are obtained, which contribute to preventing systemic risk and regional risk of Chinese local financing platforms and preserving financial stability under the framework of macro-prudential supervision. Our approach of credit similarity measure provides a means of recognizing 'systemically important' institutions and regions
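Extracting an MST backbone from a credit-correlation matrix can be sketched as follows. The 4x4 correlation matrix is hypothetical, and the paper's combined connectivity-and-size similarity measure is replaced here by the common Mantegna correlation-to-distance mapping:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical credit-correlation matrix between four financing platforms.
rho = np.array([
    [1.00, 0.80, 0.20, 0.30],
    [0.80, 1.00, 0.25, 0.35],
    [0.20, 0.25, 1.00, 0.70],
    [0.30, 0.35, 0.70, 1.00],
])

# Mantegna-style mapping of correlation to distance: highly correlated
# platforms end up close together.
D = np.sqrt(2.0 * (1.0 - rho))

# The MST keeps the n-1 strongest links: the backbone of the credit network.
mst = minimum_spanning_tree(D)
n_edges = mst.nnz
```

The hierarchical tree in the paper is then obtained from the subdominant ultrametric distances implied by this backbone.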
Sun, Baitao; Zhang, Lei; Chen, Xiangzhao; Zhang, Xinghua
This paper describes a set of on-site earthquake safety evaluation systems for buildings, developed based on a network platform. The system embeds the quantitative research results completed in accordance with the provisions of Post-earthquake Field Works, Part 2: Safety Assessment of Buildings, GB18208.2-2001, further developed into an easy-to-use software platform. The system is aimed at allowing engineering professionals, civil engineering technicians or earthquake-affected victims on site to assess damaged buildings through a network after earthquakes. The authors studied the function structure, the process design of the safety evaluation module, and the hierarchical analysis algorithm module of the system in depth, and developed the general architecture design, development technology and database design of the system. Technologies such as hierarchical architecture design and Java EE were used in the system development, and MySQL5 was adopted in the database development. The result is a complete evaluation process of information collection, safety evaluation, and output of damage and safety degrees, as well as query and statistical analysis of identified buildings. The system can play a positive role in sharing expert post-earthquake experience and promoting the safety evaluation of buildings in a seismic field.
Lei Wang; Shan Zuo; Y. D. Song; Zheng Zhou
The offshore floating wind turbine (OFWT) has been a challenging research hotspot because of the high-quality wind power and complex load environment. This paper focuses on variable torque control of an offshore wind turbine on a Spar floating platform. The control objective in the below-rated wind speed region is to optimize the output power by tracking the optimal tip-speed ratio and ideal power curve. Aiming at the external disturbances and nonlinear uncertain dynamic systems of the OFWT becau...
We live in an era where typical measures towards the mitigation of environmental degradation follow the identification and recording of natural parameters closely associated with it. Current scientific knowledge may be applied to minimize the environmental impact of anthropogenic activities, whereas informatics, playing a key role in this ecosystem, offers new ways of implementing complex scientific processes for the collection, aggregation and analysis of data concerning environmental parameters. Furthermore, another related aspect to consider is the fact that almost all relevant data recordings are influenced by their given spatial characteristics. Taking all the aforementioned inputs into account, managing such a great amount of complex and remote data requires specific digital structures; these structures are typically deployed over the Web in an attempt to capitalize on existing open software platforms and modern developments in hardware technology. In this paper we present an effort to provide a technical solution based on sensing devices built on the well-known Arduino platform that operate continuously, gathering and transmitting environmental state information. Controls, the user interface and extensions of the proposed project rely on the Android mobile device platform (on both the software and hardware sides). Finally, a crucial novel aspect of our work is the fact that all the gathered data carry spatial information, which is fundamental for the successful correlation between pollutants and their place of origin. The latter is implemented by an interactive Web GIS platform providing oversight in situ and on a timeline basis.
This book is for messaging professionals who want to build real-world scripts with Windows PowerShell 5 and the Exchange Management Shell. If you are a network or systems administrator responsible for managing and maintaining Exchange Server 2013, you will find this highly useful.
Ganzinger, Matthias; Knaup, Petra
Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource efficient acquisition of their project specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper.
body motions and the other measuring the ECG and respiratory patterns. At the second layer, called the personal network layer (PNL), the wireless body...sensors on a single subject communicate with a mobile base station, which supports Linux OS and the IEEE 802.15.4 protocol. The BSL and PNL functions...scalable, and can be reconfigured on-the-fly via SPINE. At the third layer, called the global network layer (GNL), multiple PNLs communicate with a remote
Based on a review of the development and current state of CAD technology, this paper argues for the necessity of combining artificial neural networks and expert systems, and then presents an intelligent design system based on an artificial neural network. Moreover, it discusses the feasibility of realizing a design-oriented expert system development tool on the basis of the above combination. In addition, the knowledge representation strategy and method and the solving process are given in this paper.
Biffi, Emilia; Piraino, Francesco; Pedrocchi, Alessandra; Fiore, Gianfranco B; Ferrigno, Giancarlo; Redaelli, Alberto; Menegon, Andrea; Rasponi, Marco
Spatially and temporally resolved delivery of soluble factors is a key feature for pharmacological applications. In this framework, microfluidics coupled to multisite electrophysiology offers great advantages in neuropharmacology and toxicology. In this work, a microfluidic device for biochemical stimulation of neuronal networks was developed. A micro-chamber for cell culturing, previously developed and tested for long-term neuronal growth by our group, was provided with a thin wall, which partially divided the cell culture region into two sub-compartments. The device was reversibly coupled to a flat micro electrode array and used to culture primary neurons in the same microenvironment. We demonstrated that the two fluidically connected compartments were able to originate two parallel neuronal networks with similar electrophysiological activity but functionally independent. Furthermore, the device allowed connecting the outlet port to a syringe pump and transforming the static culture chamber into a perfused one. At 14 days in vitro, sub-networks were independently stimulated by continuous delivery of a test molecule, tetrodotoxin, a neurotoxin known to block action potentials. Electrical activity recordings proved the ability of the device configuration to stimulate each neuronal network selectively. The proposed microfluidic approach represents an innovative methodology to perform biological, pharmacological, and electrophysiological experiments on neuronal networks. Indeed, it allows for controlled delivery of substances to cells, and it overcomes the limitations of standard drug stimulation techniques. Finally, the twin-network configuration reduces biological variability, which has important outcomes for pharmacological and drug screening studies.
Hu, Zhenjun; Chang, Yi-Chien; Wang, Yan; Huang, Chia-Ling; Liu, Yang; Tian, Feng; Granger, Brian; Delisi, Charles
With the rapid accumulation of our knowledge on diseases, disease-related genes and drug targets, network-based analysis plays an increasingly important role in systems biology, systems pharmacology and translational science. The new release of VisANT aims to provide new functions to facilitate the convenient network analysis of diseases, therapies, genes and drugs. With improved understanding of the mechanisms of complex diseases and drug actions through network analysis, novel drug methods (e.g., drug repositioning, multi-target drugs and combination therapy) can be designed. More specifically, the new update includes (i) integrated search and navigation of disease and drug hierarchies; (ii) integrated disease-gene, therapy-drug and drug-target associations to aid network construction and filtering; (iii) annotation of genes/drugs using disease/therapy information; (iv) prediction of associated diseases/therapies for a given set of genes/drugs using enrichment analysis; (v) network transformation to support construction of versatile networks of drugs, genes, diseases and therapies; (vi) an enhanced user interface using docking windows to allow easy customization of node and edge properties, with a built-in legend node to distinguish different node types. VisANT is freely available at: http://visant.bu.edu.
The goal of this thesis was to study and test the key information security features of Microsoft's Windows Server 2008 operating system, focusing in particular on the Network Access Protection feature. The thesis process began in November 2009 and was completed at the end of October 2009. The commissioner of the thesis, Savonia University of Applied Sciences, supplied the hardware and the operating system needed for the work. This thesis is divided into two parts, the first of which, the theor...
Mui, Amy B.; Nelson, Sarah; Huang, Bruce; He, Yuhong; Wilson, Kathi
This paper describes a web-enabled learning platform providing remote access to geospatial software that extends the learning experience outside of the laboratory setting. The platform was piloted in two undergraduate courses, and includes a software server, a data server, and remote student users. The platform was designed to improve the quality…
Novel embedded applications are characterized by increasing requirements on processing performance as well as the demand for communication between several or many devices. Networked Multiprocessor Systems-on-Chip (MPSoCs) are a possible solution to cope with this increasing complexity. Such systems require a detailed exploration of both architectures and system design. An approach that allows investigating interdependencies between the system and network domains is the cooperative execution of system design tools with a network simulator. In previous work, synchronization mechanisms were developed for parallel system simulation and system/network co-simulation using the High Level Architecture (HLA). In this contribution, a methodology is presented that extends previous work with further building blocks towards a construction kit for system/network co-simulation. The methodology facilitates flexible assembly of components and adaptation to the specific needs of use cases in terms of performance and accuracy. Underlying concepts and the extensions made are discussed in detail. Benefits are substantiated by means of various benchmarks.
In vitro diagnostics (IVD) has huge potential. Primary drivers in the global market are patients' awareness of infectious diseases, the introduction of advanced molecular and tissue diagnostic tests for patient-stratified and targeted anti-cancer therapy and, last but not least, the growing geriatric population. Rapid progress in device miniaturization and information technology (IT) offers new possibilities in decentralized testing. Grand View Research Inc. expects the global market for IVD to reach US$ 74.3 billion by 2020. Hence, in 2015, the NTN Swiss Biotech, together with the driving forces of Biotechnet Switzerland, launched the 'Thematic Platform in vitro Diagnostics'.
Gustafson, Carl; Bug, William J; Nissanov, Jonathan
Three-dimensional biomedical image sets are becoming ubiquitous, along with the canonical atlases providing the necessary spatial context for analysis. To make full use of these 3D image sets, one must be able to present views for 2D display, either surface renderings or 2D cross-sections through the data. Typical display software is limited to presentations along one of the three orthogonal anatomical axes (coronal, horizontal, or sagittal). However, data sets precisely oriented along the major axes are rare. To make the fullest use of these datasets, one must reasonably match the atlas' orientation; this involves resampling the atlas in planes matched to the data set. Traditionally, this requires that the atlas and browser reside on the user's desktop; unfortunately, in addition to being monolithic programs, these tools often require substantial local resources. In this article, we describe a network-capable, client-server framework to slice and visualize 3D atlases at off-axis angles, along with an open client architecture and development kit to support integration into complex data analysis environments. Here we describe the basic architecture of a client-server 3D visualization system, consisting of a thin Java client built on a development kit, and a computationally robust, high-performance server written in ANSI C++. The Java client components (NetOStat) support arbitrary-angle viewing and run on readily available desktop computers running Mac OS X, Windows XP, or Linux as a downloadable Java Application. Using the NeuroTerrain Software Development Kit (NT-SDK), sophisticated atlas browsing can be added to any Java-compatible application requiring as little as 50 lines of Java glue code, thus making it eminently re-usable and much more accessible to programmers building more complex biomedical data analysis tools. The NT-SDK separates the interactive GUI components from the server control and monitoring, so as to support development of non-interactive applications
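The off-axis resampling such a server performs can be illustrated with a naive nearest-neighbour sketch. The volume, the plane vectors and the sampling scheme below are illustrative only, not NeuroTerrain's actual algorithm:

```python
import numpy as np

def oblique_slice(vol, center, u, v, size):
    """Nearest-neighbour resampling of a 3-D volume along an arbitrary
    plane through `center`, spanned by orthonormal vectors u and v."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    out = np.zeros((size, size))
    half = size // 2
    for i in range(size):
        for j in range(size):
            # world-space point for output pixel (i, j)
            p = center + (i - half) * u + (j - half) * v
            idx = np.round(p).astype(int)
            if all(0 <= idx[k] < vol.shape[k] for k in range(3)):
                out[i, j] = vol[tuple(idx)]
    return out

# toy 3x3x3 volume; a real atlas server would interpolate, not snap to voxels
vol = np.arange(27.0).reshape(3, 3, 3)
sl = oblique_slice(vol, np.array([1.0, 1.0, 1.0]),
                   np.array([1.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0]), 3)
```

With these axis-aligned u and v the slice reproduces an orthogonal section; off-axis viewing comes from rotating u and v, which is exactly what arbitrary-angle browsing requires of the server.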
Baumann, P.; Rossi, A. P.
The unprecedented increase of imagery, in-situ measurements, and simulation data produced by Earth (and Planetary) Science observation missions bears a rich, yet not leveraged potential for gaining insights by integrating such diverse datasets and transforming scientific questions into actual queries to data, formulated in a standardized way. The intercontinental EarthServer initiative is demonstrating new directions for flexible, scalable Earth Science services based on innovative NoSQL technology. Researchers from Europe, the US and Australia have teamed up to rigorously implement the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently of whatever efficient data structuring a server network may perform internally, users (scientists, planners, decision makers) will always see just a few datacubes they can slice and dice. EarthServer has established client and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman [3,4], enables direct interaction, including 3-D visualization, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS). Conversely, EarthServer has shaped and advanced WCS based on the experience gained. The first phase of EarthServer has advanced scalable array database technology into 150+ TB services. Currently, Petabyte datacubes are being built for ad-hoc and cross-disciplinary querying, e.g. using climate, Earth observation and ocean data. We will present the EarthServer approach, its impact on OGC / ISO / INSPIRE standardization, and its platform technology, rasdaman. References: Baumann et al. (2015) DOI: 10.1080/17538947.2014.1003106; Hogan, P. (2011) NASA World Wind, Proceedings of the 2nd International Conference on Computing for Geospatial Research.
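In KVP form, a WCS 2.0.1 GetCoverage request for slicing and dicing such a datacube is just a URL. The endpoint and coverage id below are hypothetical, and the snippet only assembles the request string, it does not contact any server:

```python
from urllib.parse import urlencode

# Hypothetical rasdaman/WCS endpoint and datacube id.
base = "https://example.org/rasdaman/ows"
params = [
    ("service", "WCS"),
    ("version", "2.0.1"),
    ("request", "GetCoverage"),
    ("coverageId", "AvgTemperatureCube"),        # hypothetical datacube
    ("subset", 'Lat(40,50)'),                    # trim: dice the cube
    ("subset", 'Long(10,20)'),
    ("subset", 'ansi("2014-01-01","2014-12-31")'),  # temporal axis
    ("format", "image/tiff"),
]
url = base + "?" + urlencode(params)
```

Each `subset` parameter restricts one axis of the datacube; server-side processing beyond slicing would use the WCS Processing extension (WCPS) rather than plain GetCoverage.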
In this paper we present a wearable system for automatic recording of the main physiological parameters of the human body: body temperature, galvanic skin response, respiration rate, blood pressure, pulse, blood oxygen content, blood glucose content, electrocardiogram (ECG), electromyogram (EMG), and patient position. To realize this system, we have developed a program that can read the data from specialized sensors and automatically save it to a file. The results can later be interpreted by comparing them with known normal values, offering specialized personnel the possibility of a primary health-status diagnosis. The data received from the wearable sensors is taken by an interface circuit that provides signal conditioning (filtering, amplification, etc.). A microcontroller controls the data acquisition; in this application we used an Arduino Uno standard development platform. The data are transferred to a PC using the serial communication port of the Arduino platform and a communications shield. The whole health-assessment process is managed by a program we developed in the Python programming language. The program provides automatic recording of the aforementioned parameters in a predetermined sequence, or of only selected parameters.
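The primary health-status check the authors describe (comparing recorded values with known normal values) can be sketched as a simple range classifier. The parameter names and normal ranges below are illustrative assumptions, not the paper's actual thresholds.

```python
# Sketch of a primary health-status check: compare recorded vital signs
# against known normal ranges. Names and ranges are illustrative assumptions.
NORMAL_RANGES = {
    "body_temperature_c": (36.1, 37.2),
    "pulse_bpm": (60, 100),
    "respiration_rate_rpm": (12, 20),
    "spo2_percent": (95, 100),
}

def assess(readings):
    """Label each reading 'low', 'normal', or 'high' against its range."""
    report = {}
    for name, value in readings.items():
        low, high = NORMAL_RANGES[name]
        if value < low:
            report[name] = "low"
        elif value > high:
            report[name] = "high"
        else:
            report[name] = "normal"
    return report
```

In a real deployment the readings dictionary would be filled from the serial stream rather than constructed by hand.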
Margreitter, Christian; Petrov, Drazen; Zagrovic, Bojan
Post-translational modifications (PTMs) play a key role in numerous cellular processes by directly affecting structure, dynamics and interaction networks of target proteins. Despite their importance, our understanding of protein PTMs at the atomistic level is still largely incomplete. Molecular dynamics (MD) simulations, which provide high-resolution insight into biomolecular function and underlying mechanisms, are in principle ideally suited to tackle this problem. However, because of the challenges associated with the development of novel MD parameters and a general lack of suitable computational tools for incorporating PTMs in target protein structures, MD simulations of post-translationally modified proteins have historically lagged significantly behind the studies of unmodified proteins. Here, we present Vienna-PTM web server (http://vienna-ptm.univie.ac.at), a platform for automated introduction of PTMs of choice to protein 3D structures (PDB files) in a user-friendly visual environment. With 256 different enzymatic and non-enzymatic PTMs available, the server performs geometrically realistic introduction of modifications at sites of interest, as well as subsequent energy minimization. Finally, the server makes available force field parameters and input files needed to run MD simulations of modified proteins within the framework of the widely used GROMOS 54A7 and 45A3 force fields and GROMACS simulation package.
Marin Perianu, Mihai; Meratnia, Nirvana; Havinga, Paul J.M.; Moreira Sá de Souza, L.; Müller, J.; Spiess, P.; Haller, S.; Riedel, T.; Decker, C.; Stromberg, G.
Massively deployed wireless sensor and actuator networks, co-existing with RFID technology, can bring clear benefits to large-scale enterprise systems, by delegating parts of the business functionality closer to the point of action. However, a major impediment in the integration process is
Duque, M.; Cando, E.; Aguinaga, A.; Llulluna, F.; Jara, N.; Moreno, T.
In this document, I propose a theory about the impact of microgrid-based systems in non-industrialized countries, with the goal of improving energy exploitation through alternative methods of clean and renewable energy generation, together with an app to manage the behavior of the micro-grids built on the NodeJS, Django and IOJS technologies. Micro-grids offer an optimal way to manage energy flow by injecting electricity directly into the networks of small urban cells, in a low-cost and readily available way. Unlike conventional systems, micro-grids can communicate among themselves to carry energy to the places with higher demand at the right moments. This system does not require energy storage, so costs are lower than for conventional systems such as fuel cells or solar panels; and even though micro-grids are independent systems, they are not isolated. The impact this analysis will generate is an improvement of the electrical network without requiring more control than an intelligent network (SMART-GRID); this can lead to up to a 20% increase in energy use in a given network, which suggests that other sources of energy generation exist. For today's needs, however, we must standardize methods and be in place to support all future technologies, and the best options are Smart Grids and Micro-Grids.
Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi
In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become more and more pressing. The reason is essentially the design of innovative neuroprostheses in which biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, providing good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just by programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network of up to 1,440 neurons, in real time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments.
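The Izhikevich model the FPGA implements reduces each neuron to two coupled equations, v' = 0.04v² + 5v + 140 − u + I and u' = a(bv − u), with a reset (v ← c, u ← u + d) whenever v reaches the 30 mV spike threshold. A minimal software sketch using simple Euler integration and the standard regular-spiking parameters; the time step and parameter values are conventional choices, not those of the paper's hardware:

```python
def izhikevich(I, steps, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler-integrate one Izhikevich neuron under constant input current I.

    Returns the list of time steps at which the neuron spiked.
    """
    v, u = c, b * c          # start at the resting state
    spikes = []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike threshold of the model
            spikes.append(t)
            v, u = c, u + d  # reset membrane potential, bump recovery variable
    return spikes
```

With no input current the neuron settles at its resting fixed point; with a sustained suprathreshold current it fires repetitively, which is the behavior the hardware network reproduces for 1,440 neurons in parallel.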
Social networking sites (SNSs) are increasingly used to communicate and to maintain relationships with people around the globe, and their usage has certainly led to incidental language gains for second language (L2) users. Language instructors are just beginning to utilize SNSs to manage their courses or to have students practice language…
Hartog, F.T.H. den; Blom, M.A.; Lageweg, C.R.; Peeters, E.M.; Schmidt, J.R.; Veer, R. van der; Veldhuis, R.N.J.; Baken, N.H.G.; Selgert, F.; Vries, A. de; Werff, M.R. van der; Tao, Q.
By developing demonstrators and performing small-scale user trials, we found various opportunities and pitfalls for deploying Personal Networks (PNs) on a commercial basis. The demonstrators were created using as many legacy devices and proven technologies as possible. They deal with applications in
Li, Peng; Zang, Weidong; Li, Yuhua; Xu, Feng; Wang, Jigang; Shi, Tieliu
Protein interactions are involved in important cellular functions and biological processes that are fundamental to all life activities. With improvements in experimental techniques and progress in research, the overall protein interaction network frameworks of several model organisms have been created through data collection and integration. However, most of the processed networks show only simple relationships without boundaries, weights or directions, which do not truly reflect biological reality. In vivo, different types of protein interactions, such as the assembly of protein complexes or phosphorylation, often have their specific functions and qualifications. Ignoring these features introduces considerable bias into network analysis and application. Therefore, we annotate the Arabidopsis proteins in the AtPID database with further information (e.g. functional annotation, subcellular localization, tissue-specific expression, phosphorylation information, SNP phenotype and mutant phenotype) and interaction qualifications (e.g. transcriptional regulation, complex assembly, functional collaboration) via further literature text mining and integration of other resources. Meanwhile, the related information is vividly displayed to users through comprehensive, newly developed display and analytical tools. The system allows the construction of tissue-specific interaction networks with display of canonical pathways. The latest updated AtPID database is available at http://www.megabionet.org/atpid/.
Small Business Administration — SBA’s Network Components & Software Inventory contains a complete inventory of all devices connected to SBA’s network including workstations, servers, routers,...
Session Initiation Protocol (SIP) is a signaling protocol that emerged with the aim of enhancing IP network capabilities in terms of complex service provision. SIP server scalability with load balancing is of great concern due to the dramatic increase in SIP service demand. Load balancing of session methods (requests/responses) together with security measures optimizes how the SIP server regulates network traffic in Voice over Internet Protocol (VoIP). Establishing a honeywall prior to the load balancer significantly reduces SIP traffic and drops inbound malicious load. In this paper, we propose the Active Least Call in SIP Server (ALC_Server) algorithm, which fulfills objectives such as congestion avoidance, improved response times, throughput, resource utilization, reduced server faults, scalability and protection of SIP calls from DoS attacks. On a test bed, the proposed two-tier architecture demonstrates that the ALC_Server method dynamically controls overload and provides robust security and uniform load distribution for SIP servers.
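The abstract does not specify the ALC_Server internals, but its "least call" dispatch builds on the classic least-active-calls policy: route each new call to the server currently holding the fewest sessions. A sketch of that underlying idea, with hypothetical names:

```python
class LeastCallBalancer:
    """Route each incoming call to the server with the fewest active calls."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # active-call count per server

    def route(self):
        # Pick the least-loaded server; ties go to the first one registered.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Call teardown (e.g. a SIP BYE) frees one slot on that server.
        self.active[server] -= 1
```

Unlike round-robin, this policy adapts to calls of unequal duration, since a server only receives new work in proportion to how quickly it finishes old work.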
Jiang, Li-Min; Yan, Hua-Guang; Meng, Jun-Xia; Yin, Zhong-Dong; Wei, Wen-Si
Based on the study of a quantitative energy-consumption reduction model, a test platform was established to test and verify the theoretical method. The experiment requires a power supply device that can produce different power quality disturbances. This paper proposes a series multi-objective VQDG which can generate typical voltage disturbances, such as flicker, sag or swell, harmonics, unbalance and their superposition, applied to the testing load. In this application, the cascaded H-bridge inverter is connected in series between the grid source and the testing load. The device has two advantages: the output disturbance voltage level is low, and the power absorbed by the load is mostly provided by the grid. Compared with devices of high power rating, the size of the capacitor of the VQDG is decreased remarkably. The device is designed and physical tests are performed to demonstrate its variety of functions. It can therefore provide the power quality disturbance signal for the simulation experiment platform for energy saving and loss reduction in the distribution network.
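The superimposed disturbances the VQDG generates (a sag combined with harmonics, for instance) can be illustrated as the sum of an attenuated fundamental and a harmonic component. The sag depth, harmonic amplitude, and 50 Hz fundamental below are illustrative assumptions, not the device's specifications:

```python
import math

def disturbed_voltage(t, v_nominal=1.0, sag_depth=0.3, h3_amp=0.05, f=50.0):
    """One phase of a test waveform: a sagged 50 Hz fundamental plus a
    superimposed 3rd harmonic (all magnitudes in per-unit)."""
    fundamental = (v_nominal - sag_depth) * math.sin(2 * math.pi * f * t)
    harmonic = h3_amp * math.sin(2 * math.pi * 3 * f * t)
    return fundamental + harmonic
```

Because the series inverter only has to synthesize this small disturbance component rather than the full load voltage, its required rating, and hence its capacitor size, stays low, which is the advantage the abstract highlights.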
Park, Bu Kyung; Calamaro, Christina
To review the evidence to determine whether social networking sites (SNS) are effective tools for health research in the adolescent and young adult populations. Systematic review of published research articles focused on the use of SNS for youth health research. Seventeen articles were selected that met the following criteria: used SNS at any stage of the study, participants between 13 and 25 years of age, English language, and both international and national studies. Reviewers categorized the selected studies based on the way SNS were used. Uses of SNS for effectively implementing research with adolescents and young adults include (a) recruitment, (b) intervention, and (c) measurement. Four advantages of using SNS are apparent in this review: (a) ease of access to youth, (b) cost-effectiveness in recruitment, (c) ease of intervention, and (d) a reliable screening venue for mental status and high-risk behaviors. Although this literature review showed relatively minimal research to date on the use of SNS for research targeting adolescents and young adults, the impact of using SNS for health research is of considerable importance for researchers as well as participants. With careful focus, SNS can become a valuable platform to access, recruit, and deliver health interventions in a cost-effective manner to youth populations as well as hard-to-reach minority or underserved populations. The evidence demonstrates the usefulness of SNS as innovative platforms for health promotion among adolescents and young adults. © 2013 Sigma Theta Tau International.
With the development of client-server technology and multilayer architectures, system efficiency has been increasingly discussed. Lacking knowledge of the optimization methods and tools offered by DBMSs, database administrators and developers of applications based on Microsoft technologies cannot optimally design and service performant systems. In this article we review the objectives that should be considered in order to improve the performance of SQL Server instances, and we describe the techniques used to optimize queries. We also explain and illustrate the new optimization features offered by SQL Server 2008.
This thesis covers the design and testing of course material for a Windows 2008 network infrastructure course. The work was carried out for Metropolia University of Applied Sciences in spring 2010. The thesis first introduces the virtualization software used and how it works, as well as the features of Windows Server 2008. It then walks through the creation of the virtual environment and the configuration of the Windows Server 2008 servers. The configuration follows the exercises of the Windows Server 2008 infrastructure course material. The thesis...
A standard tutorial approach that guides the reader through all of the intricacies of the Zimbra Server. This book will be useful for any kind of Zimbra user, from newbies to experts who would like to learn how to set up a Zimbra server. If you are an IT administrator or consultant who is exploring the idea of adopting, or has already adopted, Zimbra as your mail server, then this book is for you. No prior knowledge of Zimbra is required.
Bauer, Michael D
Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--
In order to analyze the channel estimation performance of a near-space high altitude platform station (HAPS) in a wireless communication system, the structure and formation of HAPS are studied in this paper. The traditional Least Squares (LS) channel estimation method and the Singular Value Decomposition-Linear Minimum Mean-Squared (SVD-LMMS) channel estimation method are compared and investigated, and a novel channel estimation method and model are proposed. The channel estimation performance of HAPS is studied in depth. Simulation and theoretical analysis results show that the performance of the proposed method is better than that of the traditional methods: a lower Bit Error Rate (BER) and higher Signal-to-Noise Ratio (SNR) can be obtained by the proposed method compared with the LS and SVD-LMMS methods.
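The traditional LS estimator used as a baseline here is, in its simplest per-subcarrier form, just H_k = Y_k / X_k over known pilot symbols. A minimal sketch of that baseline (not the paper's proposed method); the channel gains and pilot values are illustrative:

```python
def ls_channel_estimate(pilots_tx, pilots_rx):
    """Per-subcarrier Least Squares channel estimate: H_k = Y_k / X_k."""
    return [y / x for x, y in zip(pilots_tx, pilots_rx)]

# Noise-free sanity check: with Y = H * X the LS estimator recovers H exactly.
H = [1 + 0.5j, 0.8 - 0.2j]          # assumed channel gains
X = [1 + 0j, -1 + 0j]               # known pilot symbols (BPSK)
Y = [h * x for h, x in zip(H, X)]   # received pilots
```

Under noise the LS estimate amplifies errors on weak pilots, which is why methods such as SVD-LMMS, and the paper's proposal, trade extra computation for robustness.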
The paper is based on the author's experience administering the FreeBSD server operating system on three servers in use under the academicdirect.ro domain. The paper describes a set of installation, preparation, and administration aspects of a FreeBSD server. The first issue of the paper is the installation procedure of the FreeBSD operating system on the i386 computer architecture. Discussed problems are the preparation and use of boot disks, hard disk partitioning, and operating system installation using an existing network topology and an internet connection. The second issue is the optimization procedure of the operating system, server services installation, and configuration. Discussed problems are kernel and services configuration, and system and services optimization. The third issue is about client-server applications. Using operating system utility calls, we present an original application which displays the system information in a friendly web interface. An original program designed for molecular structure analysis was adapted for system performance comparisons, and it serves for a discussion of the computation speed of Pentium, Pentium II and Pentium III processors. The last issue of the paper discusses installation and configuration aspects of a dial-in service on a UNIX-based operating system. The discussion includes the configuration of serial ports and the ppp and pppd services, and the use of the ppp and tun devices.
Mandar Lakshmikant Bhanushe
This article is a review of the CN, an online virtual learning environment. CN is more than a learning management system: it not only focuses on course content delivery and management, but takes this a step further by introducing the networking of courses and their content. In comparison with existing LMSs, which are housed within closed walls with access limited to learners and instructors within institutions and used merely to manage courses online, CN is an open, free, academic and social networking framework scalable to massive numbers of learners from any place in the world within a single environment. CN is free for all to use across the globe. With some minor improvements, CN is surely one of the most useful and helpful virtual learning technology tools available to distance learners and institutions for making learning entertaining and fruitful in achieving its learning objectives.
The Session Initiation Protocol (SIP) is a multimedia signalling protocol that has evolved into a widely adopted communication standard. The integration of SIP into existing IP networks has fostered IP networks becoming a convergence platform for both real-time and non-real-time multimedia communications. This converged platform integrates data, voice, video, presence, messaging, and conference services into a single network that offers new communication experiences for users. The open source community has contributed to SIP adoption through the development of open source software for both SIP clients and servers. In this paper, we provide a survey of open SIP systems that can be built using publicly available software. We identify SIP features for service development and programming, services and applications of a SIP-converged platform, and the most important technologies supporting SIP functionalities. We propose an advanced converged IP communication platform that uses SIP for service delivery. The platform supports audio and video calls, along with media services such as audio conferences, voicemail, presence, and instant messaging. Using SIP Application Programming Interfaces (APIs), the platform allows the deployment of advanced integrated services. The platform is implemented with open source software. Architecture components run on standardized hardware with no need for special-purpose investments.
Rong, Qinfeng; Han, Hongliang; Feng, Feng; Ma, Zhanfang
In this work, a new network nanocomposite composed of polypyrrole hydrogel (PPy hydrogel) loaded with gold nanoparticles (AuNPs) was prepared. The PPy hydrogel was synthesized directly by mixing the pyrrole monomer and phytic acid; the mixed solution gelates at once to form the hydrogel. The three-dimensional network-nanostructured PPy hydrogel not only provided a greater effective surface area, increasing the quantity of immobilized biomolecules and facilitating the transport of electrons and ions, but also exhibited improved conductivity. Meanwhile, the AuNPs electrodeposited on the PPy hydrogel further increase the specific surface area to capture a large amount of antibodies, as well as improving electron transfer. The network PPy hydrogel/Au nanocomposites were successfully employed for the fabrication of a sensitive label-free amperometric immunosensor, with carcinoembryonic antigen (CEA) used as a model protein. The proposed immunosensor exhibited a wide linear detection range from 1 fg mL-1 to 200 ng mL-1 and an ultralow limit of detection of 0.16 fg mL-1 (S/N = 3), and it also possessed good selectivity. Moreover, the detection of CEA in ten human serums showed satisfactory accuracy compared with the data determined by ELISA, indicating that the immunosensor has potential application in clinical diagnosis.
Bowman, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
A variety of Earth surface and atmospheric sources generate low frequency sound waves that can travel great distances. Despite a rich history of ground-based sensor studies, very few experiments have investigated the prospects of free floating microphone arrays at high altitudes. However, recent initiatives have shown that such networks have very low background noise and may sample an acoustic wave field that is fundamentally different than that at the Earth's surface. The experiments have been limited to at most two stations at altitude, limiting their utility in acoustic event detection and localization. We describe the deployment of five drifting microphone stations at altitudes between 21 and 24 km above sea level. The stations detected one of two regional ground-based explosions as well as the ocean microbarom while traveling almost 500 km across the American Southwest. The explosion signal consisted of multiple arrivals; signal amplitudes did not correlate with sensor elevation or source range. A sparse network method that employed curved wave front corrections was able to determine the backazimuth from the free flying network to the acoustic source. Episodic broad band signals similar to those seen on previous flights in the same region were noted as well, but their source remains unclear. Background noise levels were commensurate with those on infrasound stations in the International Monitoring System (IMS) below 2 seconds, but sensor self noise appears to dominate at higher frequencies.
Braker, Gesche; Wang, Yiming; Glessmer, Mirjam; Kirchgaessner, Amelie
The Earth Science Women's Network (ESWN; ESWNonline.org) is an international peer-mentoring network of women in the Earth Sciences, many in the early stages of their careers. ESWN's mission is to promote career development, build community, provide opportunities for informal mentoring and support, and facilitate professional collaborations. This has been accomplished via email and a listserv, on Facebook, at in-person networking events, and at professional development workshops. In an effort to facilitate international connections among women in the Earth Sciences, ESWN has developed a password-protected community webpage, supported by AGU and a National Science Foundation ADVANCE grant, where members can create an online presence and interact with each other. For example, groups help women connect with co-workers or center around a vast array of topics ranging from research interests, funding opportunities, work-life balance, teaching, scientific methods, and searching for a job to specific challenges faced by women in the Earth Sciences. Members can search past discussions and share documents such as examples of research statements, useful interview materials, or model recommendation letters. Over the last 10 years, ESWN has grown by word of mouth to include more than 1600 members working on all 7 continents. ESWN also offers professional development workshops at major geologic conferences around the world and at ESWN-hosted workshops, mostly in the United States. In 2014, ESWN offers a two-day international workshop on communication and networking skills and career development. Women working in all disciplines of the Earth Sciences, from late PhD level up to junior professors in Europe, are invited to the workshop, which will be held in Kiel, Germany. The workshop offers participants an individual personality assessment and aims at providing participants with improved communication and networking skills. The second focus will be to teach them how to
Oeverlier, Lasse; Syverson, Paul F
.... Announced properties include server resistance to distributed DoS. Both the EFF and Reporters Without Borders have issued guides that describe using hidden services via Tor to protect the safety of dissidents as well as to resist censorship...
Davidson, Louis; Machanic, Adam
This book represents a massive leap forward for developers in terms of programming options, productivity, database management and analysis. It takes a deep look at the full range of SQL Server enhancements, with meaningful and relevant examples.
Yang, Xin; He, Zhen-yu; Jiang, Xiao-bo; Lin, Mao-sheng; Zhong, Ning-shan; Hu, Jiang; Qi, Zhen-yu; Bao, Yong; Li, Qiao-qiao; Li, Bao-yue; Hu, Lian-ying; Lin, Cheng-guang; Gao, Yuan-hong; Liu, Hui; Huang, Xiao-yan; Deng, Xiao-wu; Xia, Yun-fei; Liu, Meng-zhong; Sun, Ying
To meet special demands in China and the particular needs of the radiotherapy department, a MOSAIQ Integration Platform CHN (MIP) based on the radiation therapy (RT) workflow has been developed as a supplement to the Elekta MOSAIQ system. The MIP adopts a client-server (C/S) architecture; its database is based on the Treatment Planning System (TPS) and MOSAIQ SQL Server 2008, running on the hospital local network. Five network servers form the core hardware, supplying data storage and network service based on cloud services. The core software, written in the C# programming language, is developed on the Microsoft Visual Studio platform. The MIP server can offer network services, including entry, query, statistics and printing of information, to about 200 workstations at the same time. The MIP has been implemented over the past one and a half years, practical patient-oriented functions have been developed, and the MIP now covers almost the whole radiation therapy workflow. There are 15 function modules, such as Notice, Appointment, Billing, Document Management (application/execution), and System Management. By June 2016, the data recorded in the MIP comprised 13546 patients, 13533 plan applications, 15475 RT records, 14656 RT summaries, 567048 billing records and 506612 workload records, etc. The MIP based on the RT workflow has been successfully developed and clinically implemented with real-time performance, data security and stable operation; it has proven user-friendly and significantly improves the efficiency of the department. It is key to facilitating information sharing and department management. More functions can be added or modified to further enhance its potential in research and clinical practice.
Lukitasari, Desy; Oklilas, Ahmad Fali
A virtual server is a server with high scalability and high availability, built on top of a cluster of several real servers. The real servers and the load balancer are interconnected either over a high-speed local network or in geographically separated locations. The load balancer can dispatch requests to different servers and make the parallel services of the cluster appear as a single virtual service on a single IP address, and request dispatching can use IP load...
This paper introduces web-server monitoring, explaining its importance and describing various monitoring concepts and types. A set of reasons for web-server monitoring is enumerated. Then, within a monitoring strategy, various types of monitors are presented, along with a comparison between deep and shallow monitors. The meaning and elements of the monitoring process are described, revealing that the monitoring process is largely dictated by the monitoring software package in use. A comparison between interna...
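A shallow monitor of the kind compared above checks only externally visible signals, typically availability and response time. A hedged Python sketch, with the server probe injected as a callable so the logic stands alone; the structure is illustrative, not taken from the paper:

```python
import time

def shallow_monitor(probe, threshold_s=1.0):
    """Shallow monitoring: check only availability and response time.

    `probe` is a zero-argument callable that contacts the web server and
    returns an HTTP status code (in production, e.g. a urllib request).
    """
    start = time.monotonic()
    try:
        status = probe()
    except OSError:
        return {"up": False, "status": None, "slow": None}
    elapsed = time.monotonic() - start
    return {"up": 200 <= status < 300, "status": status,
            "slow": elapsed > threshold_s}
```

A deep monitor would go further, inspecting server-side metrics such as queue lengths, worker counts, or log contents, which is the distinction the paper draws.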
Eckels, Joshua; Hussey, Peter; Nelson, Elizabeth K; Myers, Tamra; Rauch, Adam; Bellew, Matthew; Connolly, Brian; Law, Wendy; Eng, Jimmy K; Katz, Jonathan; McIntosh, Martin; Mallick, Parag; Igra, Mark
LabKey Server (formerly CPAS, the Computational Proteomics Analysis System) provides a Web-based platform for mining data from liquid chromatography-tandem mass spectrometry (LC-MS/MS) proteomic experiments. This open source platform supports systematic proteomic analyses and secure data management, integration, and sharing. LabKey Server incorporates several tools currently used in proteomic analysis, including the X! Tandem search engine, the ProteoWizard toolkit, and the PeptideProphet and ProteinProphet data mining tools. These tools and others are integrated into LabKey Server, which provides an extensible architecture for developing high-throughput biological applications. The LabKey Server analysis pipeline acts on data in standardized file formats, so that researchers may use LabKey Server with other search engines, including Mascot or SEQUEST, that follow a standardized format for reporting search engine results. Supported builds of LabKey Server are freely available at http://www.labkey.com/. Documentation and source code are available under the Apache License 2.0 at http://www.labkey.org. © 2011 by John Wiley & Sons, Inc.
For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.
The UNESCO office in Venice (the Regional Bureau for Science and Culture in Europe) has promoted, in collaboration with the Italian Agency for New Technologies, Energy, and the Environment (ENEA), an e-learning project on renewable energy: the DESIRE-net project (Development and Sustainability with International Renewable Energies network). The project's aim is to share the best available knowledge on renewable energies among all the countries that have joined the project and to exploit this knowledge at every level. Currently the project involves 30 Eastern European and Southern Mediterranean countries as well as Australia, Indonesia, and China.
Most remote controllers for entrance gates operate in the license-free 433 MHz or 868 MHz bands. However, this technology limits user comfort, as bi-directional communication is usually not established. Greater comfort in controlling entrance gates can be achieved by employing the GSM network to transmit commands and messages between the gate controller and the user. In this case, only a conventional GSM cellular phone is needed to control the gate. A description of such a controller, based on a GSM module and an Arduino controller, is provided in this paper.
Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit
With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…
Schmidt-Kloiber, Astrid; De Wever, Aaike; Bremerich, Vanessa; Strackbein, Jörg; Hering, Daniel; Jähnig, Sonja; Kiesel, Jens; Martens, Koen; Tockner, Klement
Species distribution data are crucial for improving our understanding of biodiversity and its threats. This is especially the case for freshwater environments, which are heavily affected by the global biodiversity crisis. Currently, a huge body of freshwater biodiversity data is difficult to access, because systematic data publishing practices have not yet been adopted by the freshwater research community. The Freshwater Information Platform (FIP; www.freshwaterplatform.eu) - initiated through the BioFresh project - aims at pooling freshwater-related research information from a variety of projects and initiatives to make it easily accessible for scientists, water managers and conservationists as well as the interested public. It consists of several major components, three of which we specifically address here: (1) the Freshwater Biodiversity Data Portal, which aims at mobilising freshwater biodiversity data and making them available online. Datasets in the portal are described and documented in (2) the Freshwater Metadatabase and published as open access articles in the Freshwater Metadata Journal. The use of collected datasets for large-scale analyses and models is demonstrated in (3) the Global Freshwater Biodiversity Atlas, which publishes interactive online maps featuring research results on freshwater biodiversity, resources, threats and conservation priorities. Here we present the main components of the FIP as tools to streamline open access freshwater data publication, arguing that this will improve the capacity to protect and manage freshwater biodiversity in the face of global change.
Laan-Luijkx, I.T. van der; Karstens, U.; Steinbach, J.; Gerbig, C.; Sirignano, C.; Neubert, R.E.M.; Laan, S. van der; Meijer, H.A.J.
We report results from our atmospheric flask sampling network for three European sites: Lutjewad in the Netherlands, Mace Head in Ireland and the North Sea F3 platform. The air samples from these stations are analyzed for their CO2 and O2 concentrations. In this paper we present the CO2 and O2 data
Abu Riza Sudiyatmoko
Software Defined Networking (SDN) is a new paradigm in network systems. The basic concept of SDN is the separation of the control and forwarding layers into different devices, and this separation is what distinguishes SDN from conventional networks. SDN also introduces network topology virtualisation and allows administrators to customize the control plane. With the OpenFlow protocol applied to SDN, there is an opportunity to implement flow-based routing in SDN networks for distributing data from source to destination. Link-state IS-IS is a routing protocol that uses Dijkstra's algorithm to determine the best path for packet delivery. In this study, the implementation of link-state IS-IS on an SDN platform was analyzed using the RouteFlow architecture. The parameters used were throughput, delay, jitter and packet loss, as well as the performance of the controller device. Under overload conditions, with 125 Mb of background traffic, packet loss reached 1.23%, throughput was 47.6 Mbps and jitter 2.012 ms. The largest delay, around 553 ms, occurred on the 11-switch, 11-host topology. Controller memory consumption while controlling the network ranged from 25.638% to 39.04%.
Yu, Chuan-Yih; Tsui, Yin-Hao; Yian, Yi-Hwa; Sung, Ting-Yi; Hsu, Wen-Lian
The Multi-Q web server provides an automated data analysis tool for multiplexed protein quantitation based on the iTRAQ labeling method. The web server is designed as a platform that can accommodate various input data formats from search engines and mass spectrometer manufacturers. Compared to the previous stand-alone version, the new web server version provides many enhanced features and flexible options for quantitation. The workflow of the web server is represented by a quantitation wizard so that the tool is easy to use. It also provides a friendly interface that helps users configure their parameter settings before running the program. The web server generates a standard report for quantitation results. In addition, it allows users to customize their output reports and information of interest can be easily highlighted. The output also provides visualization of mass spectral data so that users can conveniently validate the results. The Multi-Q web server is a fully automated and easy to use quantitation tool that is suitable for large-scale multiplexed protein quantitation. Users can download the Multi-Q Web Server from http://ms.iis.sinica.edu.tw/Multi-Q-Web. PMID:17553828
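The kind of multiplexed quantitation Multi-Q automates can be illustrated with a small sketch: given reporter-ion intensities for the 4-plex iTRAQ channels (114-117), compute each channel's ratio relative to a reference channel. The intensity values and the ratio-to-reference convention here are illustrative assumptions, not Multi-Q's actual code or defaults.

```python
# Illustrative sketch (not Multi-Q code): iTRAQ reporter-ion ratios for one
# spectrum, computed relative to a reference channel.

def itraq_ratios(intensities, reference="114"):
    """Return each channel's intensity ratio relative to the reference."""
    ref = intensities[reference]
    return {ch: inten / ref for ch, inten in intensities.items()}

# Hypothetical reporter-ion intensities for the 4-plex channels 114-117.
spectrum = {"114": 2000.0, "115": 4000.0, "116": 1000.0, "117": 3000.0}
print(itraq_ratios(spectrum))
# {'114': 1.0, '115': 2.0, '116': 0.5, '117': 1.5}
```

In practice a tool like Multi-Q also applies isotope-impurity correction and normalization across spectra before reporting ratios.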
Ameme, Dan Selorm Kwami [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guttromson, Ross [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
This report characterizes communications network latency under various network topologies and qualities of service (QoS). The characterizations are probabilistic in nature, allowing deeper analysis of stability for Internet Protocol (IP) based feedback control systems used in grid applications. The work involves the use of Raspberry Pi computers as a proxy for a controlled resource, and an ns-3 network simulator on a Linux server, to create an experimental platform (testbed) that can be used to model wide-area grid control network communications in the smart grid. The Modbus protocol is used for information transport, and the Routing Information Protocol is used for dynamic route selection within the simulated network.
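A probabilistic latency characterization of the kind described above typically reduces to an empirical distribution of measured round-trip times. The sketch below, with invented Gaussian samples standing in for testbed measurements, shows how latency percentiles could be extracted; it is illustrative only and not taken from the report.

```python
# Illustrative only: empirical latency percentiles from a set of round-trip
# time samples, as one might collect from an ns-3 testbed run.
import random

def latency_percentiles(samples, percentiles=(50, 95, 99)):
    """Empirical latency percentiles (ms) from a list of measured RTTs."""
    s = sorted(samples)
    out = {}
    for p in percentiles:
        idx = min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))
        out[p] = s[idx]
    return out

random.seed(1)
# Hypothetical RTT samples (ms) for one simulated QoS class.
rtts = [random.gauss(40, 8) for _ in range(1000)]
stats = latency_percentiles(rtts)
```

Tail percentiles (95th, 99th) rather than the mean are what matter for the stability of feedback control loops, since a control action delayed past its deadline is effectively lost.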
Server virtualization is a widespread and well-known technology that has fundamentally changed operations in data centers. Virtual servers and data storage can be quickly and easily provisioned. On the other hand, the network requires many administrative changes and configurations that increase adoption time. A consequence of server virtualization is changed requirements for network resources; the next logical step is therefore network virtualization. The different approaches for netwo...
Knight, Brian; Snyder, Wayne; Armand, Jean-Claude; LoForte, Ross; Ji, Haidong
SQL Server 2005 is the largest leap forward for SQL Server since its inception. With this update comes new features that will challenge even the most experienced SQL Server DBAs. Written by a team of some of the best SQL Server experts in the industry, this comprehensive tutorial shows you how to navigate the vastly changed landscape of the SQL Server administration. Drawing on their own first-hand experiences to offer you best practices, unique tips and tricks, and useful workarounds, the authors help you handle even the most difficult SQL Server 2005 administration issues, including blockin
The problem of an ageing population has become serious in the past few years, as the degeneration of various physiological functions has resulted in distinct chronic diseases in the elderly. Most elderly are not willing to leave home for healthcare centers, but caring for patients at home eats up caregiver resources and can overwhelm patients' families. Besides, many chronic disease symptoms cause the elderly to visit hospitals frequently. Repeated examinations not only exhaust medical resources, but also waste patients' time and effort. To make matters worse, this healthcare approach does not appear to be as effective as expected. In response to these problems, a wireless remote home care system is designed in this study, where ZigBee is used to set up a wireless network so that users can take measurements anytime and anywhere. Using suitable measuring devices, users' physiological signals are measured, and their daily conditions are monitored by various sensors. Transferred through the ZigBee network, vital signs are analyzed on computers, which deliver distinct alerts to remind the users and their families of possible emergencies. The system could be further combined with electric appliances to remotely control the users' environmental conditions. The environmental monitoring function can be activated to transmit dynamic images of the care recipient in real time to medical personnel through the video function when emergencies occur. Meanwhile, in consideration of privacy, the video camera is turned on only when necessary. The caregiver can adjust the camera angle to a proper position and observe the current situation of the care recipient when a sensor on the care recipient or the environmental monitoring system detects exceptions. All physiological data are stored in the database for family enquiries or accurate diagnoses by medical personnel.
Li, S.; Ma, L.; Li, H.
Snap (Single Nucleotide Polymorphism Annotation Platform) is a server designed to comprehensively analyze single genes and relationships between genes based on SNPs in the human genome. The aim of the platform is to facilitate the study of SNP finding and analysis within the framework of medical...
Nakamura, S; Sakurai, K; Uchiyama, M; Yoshii, Y; Tachibana, N
SMILE (St. Luke's Medical Center Information Linkage Environment) is a HIS, a client/server system using UNIX workstations on an open network, a LAN (FDDI & 10BASE-T). It provides a multivendor environment, high performance at low cost, and a user-friendly GUI. However, the client/server architecture with a UNIX workstation does not have the same OLTP environment (e.g., a TP monitor) as a mainframe. So, our system's problems and the steps used to solve them were reviewed. Several points that will be necessary for a client/server system with UNIX workstations in the future are presented.
John R Cary; David Alexander; Johan Carlsson; Kelly Luetkemeyer; Nathaniel Sizemore
OAK-B135 Tech-X Corporation designed and developed all the networking code tying together the NTCC data server with the data client and the physics server with the data server and physics client. We were also solely responsible for the data and physics clients and the vast majority of the work on the data server. We also performed a number of other tasks.
Building a complete free and open source GIS computing and data publication platform can be a relatively easy task. This paper describes an automated deployment of such a platform using two open source software projects - GIS.lab and Gisquick. GIS.lab (http://web.gislab.io) is a project for rapid deployment of a complete, centrally managed and horizontally scalable GIS infrastructure in the local area network, data center or cloud. It provides a comprehensive set of free geospatial software seamlessly integrated into one easy-to-use system. A platform for GIS computing (in our case demonstrated on hydrological data processing) requires core components such as a geoprocessing server, a map server, and a computation engine such as GRASS GIS, SAGA, or similar GIS software. All of these components can be rapidly and automatically deployed by the GIS.lab platform. In our demonstrated solution, PyWPS is used for serving WPS processes built on top of the GRASS GIS computation platform. GIS.lab can be easily extended by other components running in Docker containers. This approach is shown with the seamless integration of Gisquick. Gisquick (http://gisquick.org) is an open source platform for publishing geospatial data in the sense of rapid sharing of QGIS projects on the web. The platform consists of a QGIS plugin, a Django-based server application, QGIS Server, and web/mobile clients. This paper shows how to easily deploy a complete open source GIS infrastructure allowing all required operations: data preparation on the desktop, data sharing, and geospatial computation as a service. It also includes data publication in the sense of OGC Web Services and, importantly, as interactive web mapping applications.
Asokan, N; Dmitrienko, Alexandra
Recently, mobile security has garnered considerable interest in both the research community and industry due to the popularity of smartphones. The current smartphone platforms are open systems that allow application development, also for malicious parties. To protect the mobile device, its user, and other mobile ecosystem stakeholders such as network operators, application execution is controlled by a platform security architecture. This book explores how such mobile platform security architectures work. We present a generic model for mobile platform security architectures: the model illustrat
VURAL, Bülent; KIZIL, Ali; UZUNOĞLU, Mehmet
The power quality (PQ) requirement is one of the most important issues for power companies and their customers. Continuously monitoring the PQ from remote and distributed centers will help to improve the PQ. In this study, a remote power quality monitoring system for low voltage sub-networks is developed using MATLAB Server Pages (MSP). MATLAB Server Pages, which is an open source technical web programming language, combines MATLAB with integrated J2EE specifications. The proposed PQ...
Wireless personal area networks (WPANs) have gained interest in the last few years, and several air interfaces have been proposed to cover WPAN applications. A multicarrier spread spectrum (MC-SS) air interface specified to achieve 130 Mbps in typical WPAN channels is presented in this paper. It operates in the 5.2 GHz ISM band and achieves a spectral efficiency of 3.25 b·s−1·Hz−1. Besides the robustness of the MC-SS approach, this air interface yields reasonable implementation complexity. This paper focuses on the hardware design and prototype of this MC-SS air interface. The prototype includes RF, baseband, and IEEE 802.15.3-compliant medium access control (MAC) features. Implementation aspects are carefully analyzed for each part of the prototype, and key hardware design issues and solutions are presented. Hardware complexity and implementation loss are compared to theoretical expectations, and flexibility is discussed. Measurement results are provided for real operating conditions.
Helikar, Tomáš; Rogers, Jim A
Background New mathematical models of complex biological structures and computer simulation software allow modelers to simulate and analyze biochemical systems in silico and form mathematical predictions. Due to this potential predictive ability, the use of these models and software has the possibility to complement laboratory investigations and help refine, or even develop, new hypotheses. However, the existing mathematical modeling techniques and simulation tools are often difficult for laboratory biologists without training in high-level mathematics to use, limiting their use to trained modelers. Results We have developed a Boolean network-based simulation and analysis software tool, ChemChains, which combines the advantages of the parameter-free nature of logical models while providing the ability for users to interact with their models in a continuous manner, similar to the way laboratory biologists interact with laboratory data. ChemChains allows users to simulate models in an automatic fashion under tens of thousands of different external environments, as well as perform various mutational studies. Conclusion ChemChains combines the advantages of logical and continuous modeling and provides a way for laboratory biologists to perform in silico experiments on mathematical models easily, a necessary component of laboratory research in the systems biology era. PMID:19500393
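The logical modeling approach described above can be made concrete with a minimal synchronous Boolean network. The three-node network, its node names, and its update rules below are invented purely for illustration; this is a generic sketch of the modeling style ChemChains supports, not ChemChains code.

```python
# Minimal synchronous Boolean network, illustrating the kind of logical
# model ChemChains simulates. Nodes and rules are invented for illustration.

def step(state, external):
    """One synchronous update of a toy three-node signaling cascade."""
    return {
        "receptor": external["ligand"],                   # driven by the environment
        "kinase":   state["receptor"],                    # activated by the receptor
        "tf":       state["kinase"] and not state["tf"],  # toy negative feedback
    }

state = {"receptor": False, "kinase": False, "tf": False}
for _ in range(4):                      # simulate four steps with ligand present
    state = step(state, {"ligand": True})
# receptor and kinase switch on; tf oscillates under the feedback rule
```

The parameter-free appeal of such models is visible here: the dynamics follow entirely from the wiring and the Boolean rules, with no rate constants to fit.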
Ullman, Richard; Bane, Bob; Yang, Jingli
A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file; convert the metadata from ODL to Extensible Markup Language (XML); reformat the XML metadata into human-readable Hypertext Markup Language (HTML); publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer; and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-science data.
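The metadata conversion chain described above can be sketched in a few lines of Python. A plain dict stands in for parsed ODL, and the field names (ShortName, VersionID) are hypothetical granule attributes; this illustrates the ODL-to-XML-to-HTML idea, not the actual GSFC tools.

```python
# Hedged sketch of the publishing chain: metadata (a dict standing in for
# parsed ODL) -> XML -> simple HTML table. Field names are hypothetical.
import xml.etree.ElementTree as ET

def metadata_to_xml(meta):
    """Serialize a flat metadata dict as a small XML document."""
    root = ET.Element("metadata")
    for key, value in meta.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def xml_to_html(xml_text):
    """Render the XML metadata as a human-readable HTML table."""
    root = ET.fromstring(xml_text)
    rows = "".join(f"<tr><td>{c.tag}</td><td>{c.text}</td></tr>" for c in root)
    return f"<table>{rows}</table>"

meta = {"ShortName": "MOD021KM", "VersionID": 5}  # hypothetical granule fields
xml_doc = metadata_to_xml(meta)
html = xml_to_html(xml_doc)
```

The remaining steps in the chain (copying files to the Web and OPeNDAP servers, submitting to the Clearinghouse) are deployment actions rather than transformations, which is why a shell script is a natural fit for the real pipeline.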
Furfari, Francesco; et al.
Deliverable 2.2 is part of work package 2 - "Open source AAL Platform and Implementation" - whose main objective is to design, configure and implement an operational universAAL platform. This platform will be available in the Developer Depot and can be deployed to execution platforms like mobile phones, laptops, high-performance servers, etc. Special attention is given to operational issues such as reliability, security, interoperability and maintainability as perceived by service and applica...
The first website at CERN - and in the world - was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer. The website described the basic features of the web; how to access other people's documents and how to set up your own server. This NeXT machine - the original web server - is still at CERN. As part of the project to restore the first website, in 2013 CERN reinstated the world's first website to its original address.
Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim
Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and Microsoft System Center suite enable automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration does not only reduce the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes as well as catering for workload peaks.
In this article, I inquire into Facebook’s development as a platform by situating it within the transformation of social network sites into social media platforms. I explore this shift with a historical perspective on what I refer to as platformization, or the rise of the platform as the dominant
Video games are typically executed on Windows platforms with DirectX API and require high performance CPUs and graphics hardware. For pervasive gaming in various environments like at home, hotels, or internet cafes, it is beneficial to run games also on mobile devices and modest performance CE devices avoiding the necessity of placing a noisy workstation in the living room or costly computers/consoles in each room of a hotel. This paper presents a new cross-platform approach for distributed 3D gaming in wired/wireless local networks. We introduce the novel system architecture and protocols used to transfer the game graphics data across the network to end devices. Simultaneous execution of video games on a central server and a novel streaming approach of the 3D graphics output to multiple end devices enable the access of games on low cost set top boxes and handheld devices that natively lack the power of executing a game with high-quality graphical output.
Valentic, T. A.
The Data Transport Network is designed for the delivery of data from scientific instruments located at remote field sites with limited or unreliable communications. Originally deployed at the Sondrestrom Research Facility in Greenland over a decade ago, the system supports the real-time collection and processing of data from large instruments such as incoherent scatter radars and lidars. In recent years, the Data Transport Network has been adapted to small, low-power embedded systems controlling remote instrumentation platforms deployed throughout the Arctic. These projects include multiple buoys from the O-Buoy, IceLander and IceGoat programs, renewable energy monitoring at the Imnavait Creek and Ivotuk field sites in Alaska and remote weather observation stations in Alaska and Greenland. This presentation will discuss the common communications controller developed for these projects. Although varied in their application, each of these systems share a number of common features. Multiple instruments are attached, each of which needs to be power controlled, data sampled and files transmitted offsite. In addition, the power usage of the overall system must be minimized to handle the limited energy available from sources such as solar, wind and fuel cells. The communications links are satellite based. The buoys and weather stations utilize Iridium, necessitating the need to handle the common drop outs and high-latency, low-bandwidth nature of the link. The communications controller is an off-the-shelf, low-power, single board computer running a customized version of the Linux operating system. The Data Transport Network provides a Python-based software framework for writing individual data collection programs and supplies a number of common services for configuration, scheduling, logging, data transmission and resource management. Adding a new instrument involves writing only the necessary code for interfacing to the hardware. Individual programs communicate with the
The growth of Internet technology has led many organizations to expand their services onto the web. Initially, a single web server accessible to everyone over the Internet is used, but as the number of users accessing it grows, the traffic load on the server becomes too high. Web server optimization is therefore necessary to cope with the overload the server receives when traffic is high. The methodology of this final-project research includes a literature study, system design, and system testing. Literature sources include related reference books as well as several Internet sources. The design in this thesis uses HAProxy and Pound as load balancers for the web servers. The final stage of this research is testing the network system, so as to create a web server system that is reliable and secure. The result is a web server system that many users can access simultaneously and rapidly, with the HAProxy and Pound load-balancing setup acting as the front end, creating a web server with high performance and high availability.
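The front-end arrangement described above can be sketched as a minimal HAProxy configuration fragment. The section names, server names, and addresses below are placeholders; this is an illustrative snippet of round-robin load balancing, not the thesis's actual configuration.

```
# Hypothetical HAProxy front end distributing HTTP requests round-robin
# across two back-end web servers (all names and addresses are placeholders).
frontend web_front
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
```

The `check` keyword enables health checks, so a failed back end is removed from rotation, which is what provides the high availability the abstract mentions.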
Gustafson, Carl; Bug, William J; Nissanov, Jonathan
Background Three dimensional biomedical image sets are becoming ubiquitous, along with the canonical atlases providing the necessary spatial context for analysis. To make full use of these 3D image sets, one must be able to present views for 2D display, either surface renderings or 2D cross-sections through the data. Typical display software is limited to presentations along one of the three orthogonal anatomical axes (coronal, horizontal, or sagittal). However, data sets precisely oriented along the major axes are rare. To make fullest use of these datasets, one must reasonably match the atlas' orientation; this involves resampling the atlas in planes matched to the data set. Traditionally, this requires the atlas and browser reside on the user's desktop; unfortunately, in addition to being monolithic programs, these tools often require substantial local resources. In this article, we describe a network-capable, client-server framework to slice and visualize 3D atlases at off-axis angles, along with an open client architecture and development kit to support integration into complex data analysis environments. Results Here we describe the basic architecture of a client-server 3D visualization system, consisting of a thin Java client built on a development kit, and a computationally robust, high-performance server written in ANSI C++. The Java client components (NetOStat) support arbitrary-angle viewing and run on readily available desktop computers running Mac OS X, Windows XP, or Linux as a downloadable Java Application. Using the NeuroTerrain Software Development Kit (NT-SDK), sophisticated atlas browsing can be added to any Java-compatible application requiring as little as 50 lines of Java glue code, thus making it eminently re-useable and much more accessible to programmers building more complex, biomedical data analysis tools. The NT-SDK separates the interactive GUI components from the server control and monitoring, so as to support development of non
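The core server-side operation described above, resampling a 3D atlas along an off-axis plane, can be illustrated with a small pure-Python sketch using nearest-neighbour lookup. The volume, the plane parameters, and the function itself are generic illustrations, not NeuroTerrain code, which performs this in optimized C++.

```python
# Sketch of off-axis resampling: extract a 2D slice from a 3D volume along
# an arbitrarily tilted plane with nearest-neighbour lookup (illustrative only).

def oblique_slice(volume, origin, u, v, rows, cols):
    """Sample volume on the plane origin + r*u + c*v (nearest neighbour).

    Points are (x, y, z); the volume is indexed volume[z][y][x], and
    out-of-range coordinates are clamped to the volume boundary.
    """
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            x = origin[0] + r * u[0] + c * v[0]
            y = origin[1] + r * u[1] + c * v[1]
            z = origin[2] + r * u[2] + c * v[2]
            i, j, k = (min(max(int(round(t)), 0), n - 1)
                       for t, n in ((z, nz), (y, ny), (x, nx)))
            row.append(volume[i][j][k])
        out.append(row)
    return out

# Toy 4x4x4 volume whose voxel value equals its x index.
vol = [[[x for x in range(4)] for _ in range(4)] for _ in range(4)]
# A plane tilted slightly out of the axial orientation (z advances per row).
sl = oblique_slice(vol, origin=(0, 0, 0), u=(0, 1, 0.25), v=(1, 0, 0),
                   rows=4, cols=4)
```

A production atlas server would use trilinear rather than nearest-neighbour interpolation, but the geometry, stepping two direction vectors across the plane and sampling the volume, is the same.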
Windows Home Server brings the idea of centralized storage, backup and computer management out of the enterprise and into the home. Windows Home Server is built for people with multiple computers at home and helps to synchronize them, keep them updated, stream media between them, and back them up centrally. Built on a similar foundation as the Microsoft server operating products, it's essentially Small Business Server for the home.This book details how to install, configure, and use Windows Home Server and explains how to connect to and manage different clients such as Windows XP, Windows Vist
Jorgensen, Adam; LeBlanc, Patrick; Cherry, Denny; Nelson, Aaron
Harness the powerful new SQL Server 2012 Microsoft SQL Server 2012 is the most significant update to this product since 2005, and it may change how database administrators and developers perform many aspects of their jobs. If you're a database administrator or developer, Microsoft SQL Server 2012 Bible teaches you everything you need to take full advantage of this major release. This detailed guide not only covers all the new features of SQL Server 2012, it also shows you step by step how to develop top-notch SQL Server databases and new data connections and keep your databases performing at p
van Mulligen, E; Timmers, T
.... After a period of prototyping to assess possible alternative solutions, a system based on an indirect client-server model was implemented with help from industry. In this paper, its architecture is described together with the most important features currently covered. Based on the HERMES architecture, systems for both clinical data analysis and patient care (cardiology) are currently being developed.
The OPC UA IPbus server is a software tool intended to connect IPbus-compatible devices, such as data acquisition FPGAs, to industry-standard OPC Unified Architecture clients (e.g. the WinCC OA UA client, UaExpert). It has been the missing link between IPbus devices and the ATLAS DCS.
Rao, Nageswara S. [ORNL]; Ma, Chris Y. T. [Hang Seng Management College, Hong Kong]; He, Fei [Texas A&M University, Kingsville, TX, USA]
We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At the Nash equilibrium, we derive expressions for the expected capacity of the infrastructure, given by the number of operational servers connected to the network, for sum-form, product-form and composite utility functions.
It can be challenging to interface National Instruments LabVIEW (http://www.ni.com/labview/) with EPICS (http://www.aps.anl.gov/epics/). Such an interface is required when an instrument control program was developed in LabVIEW but also has to be part of a global control system, as is frequently the case in large accelerator facilities. The Channel Access Server is written in LabVIEW, so it works on any hardware/software platform where LabVIEW is available. It provides full server functionality, so any EPICS client can communicate with it.
Jayanty, Satya SK
Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Instant Microsoft SQL Server Analysis Services 2012 Cube Security is a practical, hands-on guide that provides a number of clear, step-by-step exercises for getting started with cube security.This book is aimed at Database Administrators, Data Architects, and Systems Administrators who are managing the SQL Server data platform. It is also beneficial for analysis services developers who already have some experience with the technology, but who want to go into more detail on advanced
Pro SQL Server 2008 Analytics provides everything you need to know to develop sophisticated and visually appealing sales and marketing dashboards using SQL Server 2008 and to integrate those dashboards with SharePoint, PerformancePoint, and other key Microsoft technologies. The book begins by addressing the many misconceptions that surround the use of Key Performance Indicators (KPIs) and giving a brief overview of the business intelligence (BI) and reporting tools that can be combined on the Microsoft platform to help you generate the results that you need. The focus of the book is to help yo
A wireless real-time image transmission system is designed using a 3G wireless communication platform and an ARM + DSP embedded system. In the 3G network environment, the embedded equipment realizes the functions of coding, acquisition, network transmission, decoding and playback. The 3G embedded image transmission system provides real-time intelligent video control as well as video compression, storage and playback, making it especially suitable for remote locations or applications where wired network transmission is unreliable. Results show that video files are transferred quickly over the 3G network, real-time H.264 video is streamed smoothly, and color distortion is low. The server can also control the client through remote intelligent units.
Nils Crabeel; Betina Campos Neves; Benedita Malheiro
This paper reports on a first step towards the implementation of a framework for remote experimentation on electric machines: the RemoteLabs platform. This project focused on the development of two main modules: the Web-based user interface and the electric machines interface. The Web application provides the user with a front-end and interacts with the back-end, the user and experiment persistent data. The electric machines interface is implemented as a distributed client-server application...
This paper traces the expansion of the Internet into Russian and Commonwealth of Independent States (CIS) libraries from basic access to the development of World Wide Web (WWW) servers. An analysis of the most representative groups of library WWW-servers arranged by projects, by corporate library network, or by geographical characteristics is…
Adler, Richard M.; Hughes, Craig S.
This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
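The decompose-dispatch-combine pattern described above can be sketched with a thread pool standing in for independent networked servers. This is an illustrative analogue, not the paper's request-broker/process-group implementation; all names and data are hypothetical.

```python
# Sketch of group-oriented request handling: decompose a client query,
# dispatch sub-queries to independent "servers", combine the results
# into a single response. The server functions here are stand-ins for
# remote database nodes reached through a request broker.
from concurrent.futures import ThreadPoolExecutor

INVENTORY_DB = {"east": {"widgets": 10}, "west": {"widgets": 7}}

def query_server(site, item):
    # Stand-in for a remote query to one server in the group.
    return INVENTORY_DB[site].get(item, 0)

def distributed_query(item, sites=("east", "west")):
    # Decompose: one sub-request per site; dispatch them concurrently.
    with ThreadPoolExecutor(max_workers=len(sites)) as pool:
        partials = pool.map(lambda s: query_server(s, item), sites)
    # Combine: a single aggregated response for the client.
    return sum(partials)

total = distributed_query("widgets")
```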
The Internet of Things (IoT) aims at bringing together large business enterprise solutions and architectures for handling the huge amount of data generated by millions of devices. To this end, IoT needs to connect various devices and provide a common platform for storage and retrieval of information without fail. However, the success of IoT depends on the novelty of the network and its capability to sustain the increasing demand from users. In this paper, a self-aware communication architecture (SACA) is proposed for sustainable networking over IoT devices. The proposed approach employs the concept of mobile fog servers, which relay using train and unmanned aerial vehicle (UAV) networks. The problem is formulated using Wald's maximum model, which is resolved by the application of a distributed node management (DNM) system and state-dependency formulations. The proposed approach is capable of providing prolonged connectivity by increasing network reliability and sustainability even in the case of failures. The effectiveness of the proposed approach is demonstrated through numerical and network simulations in terms of significant gains attained with lower delay and fewer packet losses. The proposed approach is also evaluated against Sybil, wormhole, and DDoS attacks to analyze its sustainability and probability of connectivity under unfavorable conditions.
Marian, Ludmila
CERN Document Server (CDS) is the institutional repository of the European Organization for Nuclear Research (CERN). It hosts all the research material produced at CERN, as well as multimedia and administrative documents. It currently has more than 1.5 million records grouped in more than 1000 collections. Its underlying platform is Invenio, an open source digital library system created at CERN. As the size of CDS increases, discovering useful and interesting records becomes more challenging. Therefore, the goal of this work is to create a system that supports the user in the discovery of related interesting records. To achieve this, a set of recommended records are displayed on the record page. These recommended records are based on the analyzed behavior (page views and downloads) of other users. This work will describe the methods and algorithms used for creating, implementing, and the integration with the underlying software platform, Invenio. A very important decision factor when designing a recomme...
Hadi, Eko Sasmito; Adietya, Berlian Arswendo; S.P, Firdaus
Engine room monitoring control system is monitoring and controlling main engine and auxiliary engine from long distance by powerline communication network and wireless network to ease the operator in operating the ship and save operational cost. To prevent error in programming the main engine and auxiliary engine, a simulation using instrument software is needed to know the machine characteristic. After simulation result fulfills the requirement which is approached the value of test record, i...
Gardner, Robert; The ATLAS collaboration
As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.
Gardner, R. W.; Hanushevsky, A.; Vukotic, I.; Yang, W.
As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.
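The demand-driven ("autonomous") mode described in these records can be sketched as a whole-file cache with least-recently-used eviction. This is a toy analogue of the concept, not the XRootD implementation; names are illustrative, and `fetch_from_origin` stands in for a read from the remote storage element.

```python
# Sketch of a demand-driven whole-file cache with LRU eviction, the
# autonomous mode of operation described for caching servers above.
from collections import OrderedDict

class FileCache:
    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch = fetch_from_origin
        self.store = OrderedDict()   # path -> cached file bytes
        self.misses = 0

    def read(self, path):
        if path in self.store:
            self.store.move_to_end(path)        # mark most recently used
            return self.store[path]
        self.misses += 1                        # demand-driven fill
        data = self.fetch(path)
        self.store[path] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)      # evict least recently used
        return data

cache = FileCache(capacity=2,
                  fetch_from_origin=lambda p: b"data:" + p.encode())
cache.read("/a"); cache.read("/b"); cache.read("/a")   # /a stays hot
cache.read("/c")                                       # evicts cold /b
```

A centrally managed cache (e.g. Rucio-driven) would instead pre-place files by calling `read` ahead of client demand; the eviction machinery is the same.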
The Internet has grown exponentially and is now part of our everyday life. Internet services and applications rely on back-end servers deployed on local servers and in data centers. With the growing use of data centers and cloud computing, the locations of these servers have been externalized and centralized, taking advantage of economies of scale. However, some applications need to define complex network topologies and require more than simple connectivity to the remote sites. Ther...
Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser
Technology & Learning, 2005
In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…
Puche, William S.; Sierra, Javier E.; Moreno, Gustavo A.
The convergence of new technologies in the digital world has made devices with Internet connectivity, such as televisions, smartphones, tablets, Blu-ray players, and game consoles, increasingly common. The major research centers are therefore working to improve network performance and mitigate the bottleneck phenomenon affecting capacity and high transmission rates for information and data. The HbbTV (Hybrid Broadcast Broadband TV) standard and OTT (Over the Top) technological platforms make it possible to distribute video, audio, TV, and other Internet services via devices connected directly to the cloud. A model is therefore proposed to improve the transmission capacity required by content distribution networks (CDN) for online TV, using high-capacity optical networks.
Expert SQL Server 2008 Development is aimed at SQL Server developers ready to move beyond Books Online. Author and experienced developer Alastair Aitchison shows you how to think about SQL Server development as if it were any other type of development. You'll learn to manage testing in SQL Server and to properly deal with errors and exceptions. The book also covers critical, database-centric topics such as managing concurrency and securing your data and code through proper privileges and authorization. Alastair places focus on sound development and architectural practices that will help you be
The bestselling guide to Exchange Server, fully updated for the newest version Microsoft Exchange Server 2013 is touted as a solution for lowering the total cost of ownership, whether deployed on-premises or in the cloud. Like the earlier editions, this comprehensive guide covers every aspect of installing, configuring, and managing this multifaceted collaboration system. It offers Windows systems administrators and consultants a complete tutorial and reference, ideal for anyone installing Exchange Server for the first time or those migrating from an earlier Exchange Server version.Microsoft
Chai, X; Liu, L; Xing, L [Stanford University School of Medicine, Stanford, CA (United States)]
Hammer, K.E.; Gilman, T.L.
Over the last ten years the development of PCs and workstations has changed the way computing is performed. Previously, extensive computations were performed on large high-speed mainframe machines with substantial storage capacity. Large capital and operational costs were associated with these machines. The advent of more powerful workstations has brought more computational cycles to the users at lower cost than was achieved with busy timesharing systems. However, many users still can't afford individual special-purpose hardware or gigabytes of storage. A successful distributed processing environment must share these resources. Client/Server models have been proposed to address the issues of shared resources. They are not a new idea, but their implementation has been difficult. With the introduction of SUN's public domain Remote Procedure Call (RPC) Protocol and SUN's interface generator, RPCGEN, their implementation has been made easier. SUN has developed a set of "C"-callable routines that handle the Client/Server operations. The availability of Network File System (NFS) on the SRL CRAY and the arrival of Wollongong's latest version of NFS has allowed application and information sharing between computing platforms. This paper reviews the Client/Server model with respect to SUN's RPC implementation. The discussion will focus on the RPC connection between local and remote machines, the RPC Paradigm for making remote procedure calls, and the programming levels of the RPC libraries. The paper will conclude with summaries of two applications developed at SRL using the protocol and their effect on our computing environment. These include the Nuclear Plant Analyzer and an animation of fluids using behavioral simulation of atom-like particles.
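The RPC paradigm reviewed above can be sketched in miniature: a client stub marshals the call, a server-side dispatcher unmarshals it and invokes the registered procedure — the role that RPCGEN-generated stubs play. This sketch uses JSON over an in-memory transport rather than SUN RPC/XDR over the network; all names are illustrative.

```python
# Minimal sketch of the RPC paradigm: client stub marshals a call,
# a server dispatcher unmarshals it and invokes the registered
# procedure, then marshals the reply back to the client.
import json

class RPCServer:
    def __init__(self):
        self.procedures = {}          # name -> callable

    def register(self, name, func):
        self.procedures[name] = func

    def dispatch(self, request):
        req = json.loads(request)                      # unmarshal request
        result = self.procedures[req["proc"]](*req["args"])
        return json.dumps({"result": result})          # marshal reply

class RPCClient:
    def __init__(self, transport):
        self.transport = transport    # stands in for the network hop

    def call(self, proc, *args):
        request = json.dumps({"proc": proc, "args": list(args)})
        reply = self.transport(request)
        return json.loads(reply)["result"]

server = RPCServer()
server.register("add", lambda a, b: a + b)
client = RPCClient(server.dispatch)   # in-memory "connection"
answer = client.call("add", 2, 3)
```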
Baumann, P.; Rossi, A. P.
With the unprecedented increase of orbital sensor, in-situ measurement, and simulation data there is a rich, yet not leveraged potential for getting insights from dissecting datasets and rejoining them with other datasets. Obviously, the goal is to allow users to "ask any question, any time", thereby enabling them to "build their own product on the go". One of the most influential initiatives in Big Geo Data is EarthServer, which has demonstrated new directions for flexible, scalable EO services based on innovative NewSQL technology. Researchers from Europe, the US and recently Australia have teamed up to rigorously materialize the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently from whatever efficient data structuring a server network may perform internally, users will always see just a few datacubes they can slice and dice. EarthServer has established client and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman, enables direct interaction, including 3-D visualization, what-if scenarios, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS) including the Web Coverage Processing Service (WCPS). Conversely, EarthServer has significantly shaped and advanced the OGC Big Geo Data standards landscape based on the experience gained. Phase 1 of EarthServer has advanced scalable array database technology into 100+ TB services; in phase 2, Petabyte datacubes will be built in Europe and Australia to perform ad-hoc querying and merging. Standing between EarthServer phase 1 (from 2011 through 2014) and phase 2 (from 2015 through 2018) we present the main results and outline the impact on the international standards landscape; effectively, the Big Geo Data standards established through initiative of
Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.
Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. Also, the library is designed to be able to be seamlessly integrated with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on-the-fly, from SQL code, inside the database server process. We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory
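The core idea — store fixed-size numeric arrays as binary values and run the math inside the database engine rather than in the client — can be sketched with SQLite standing in for the SQL Server extension described above. All names are illustrative, and this is a toy analogue, not the Array Library itself.

```python
# Sketch of in-server array math: fixed-size double arrays stored as
# BLOBs, with a custom function registered so it executes inside the
# SQL engine process instead of shipping the data to the client.
import sqlite3
import struct
import math

def pack(values):
    """Marshal a Python list of floats into a BLOB of doubles."""
    return struct.pack(f"{len(values)}d", *values)

def vec_norm(blob):
    """Euclidean norm; runs inside the SQL engine when invoked from SQL."""
    values = struct.unpack(f"{len(blob) // 8}d", blob)
    return math.sqrt(sum(v * v for v in values))

conn = sqlite3.connect(":memory:")
conn.create_function("vec_norm", 1, vec_norm)   # register in the engine
conn.execute("CREATE TABLE particles (id INTEGER, pos BLOB)")
conn.execute("INSERT INTO particles VALUES (1, ?)", (pack([3.0, 4.0]),))
# The norm is computed server-side; only the scalar crosses to the client.
(norm,) = conn.execute("SELECT vec_norm(pos) FROM particles").fetchone()
```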
The paper describes a system designed to manage and collect data from a network of heterogeneous sensors. It was implemented using Erlang OTP and CouchDB for maximum fault tolerance, scalability and ease of deployment. It is resistant to poor network quality, shows high tolerance for software errors and power failures, and operates on a flexible data model. Additionally, it is available to users through a Web application, which shows just how easy it is to use the server HTTP API to communicate with it. The whole platform was implemented and tested on a variety of devices like PCs, Macs, ARM-based embedded devices and Android tablets.
Nelson, Austin; Chakraborty, Sudipta; Wang, Dexin; Singh, Pawan; Cui, Qiang; Yang, Liuqing; Suryanarayanan, Siddharth
This paper presents a cyber-physical testbed, developed to investigate the complex interactions between emerging microgrid technologies such as grid-interactive power sources, control systems, and a wide variety of communication platforms and bandwidths. The cyber-physical testbed consists of three major components for testing and validation: real time models of a distribution feeder model with microgrid assets that are integrated into the National Renewable Energy Laboratory's (NREL) power hardware-in-the-loop (PHIL) platform; real-time capable network-simulator-in-the-loop (NSIL) models; and physical hardware including inverters and a simple system controller. Several load profiles and microgrid configurations were tested to examine the effect on system performance with increasing channel delays and router processing delays in the network simulator. Testing demonstrated that the controller's ability to maintain a target grid import power band was severely diminished with increasing network delays and laid the foundation for future testing of more complex cyber-physical systems.
Tuijn, Chris; Stokes, Earle
The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum-scanners with photo multiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven
Currey, J. C.; Bartle, A.
Intercalibration, validation, and data mining use cases require more efficient access to the massive volumes of observation data distributed across multiple agency data centers. The traditional paradigm of downloading large volumes of data to a centralized server or desktop computer for analysis is no longer viable. More analysis should be performed within the host data centers using server-side functions. Many comparative analysis tasks require far less than 1% of the available observation data. The Multi-Instrument Intercalibration (MIIC) Framework provides web services to find, match, filter, and aggregate multi-instrument observation data. Matching measurements from separate spacecraft in time, location, wavelength, and viewing geometry is a difficult task especially when data are distributed across multiple agency data centers. Event prediction services identify near coincident measurements with matched viewing geometries near orbit crossings using complex orbit propagation and spherical geometry calculations. The number and duration of event opportunities depend on orbit inclinations, altitude differences, and requested viewing conditions (e.g., day/night). Event observation information is passed to remote server-side functions to retrieve matched data. Data may be gridded, spatially convolved onto instantaneous field-of-views, or spectrally resampled or convolved. Narrowband instruments are routinely compared to hyperspectral instruments such as AIRS and CRIS using relative spectral response (RSR) functions. Spectral convolution within server-side functions significantly reduces the amount of hyperspectral data needed by the client. This combination of intelligent selection and server-side processing significantly reduces network traffic and data to process on local servers. OPeNDAP is a mature networking middleware already deployed at many of the Earth science data centers. Custom OPeNDAP server-side functions that provide filtering, histogram analysis (1D
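The spectral convolution step mentioned above — comparing a narrowband channel to hyperspectral data via a relative spectral response (RSR) function — amounts to an RSR-weighted mean of the spectrum. A minimal sketch, with purely illustrative wavelengths, radiances, and a hypothetical boxcar RSR (not MIIC code):

```python
# Sketch of spectral convolution with a relative spectral response:
# the band-equivalent radiance is sum(L * RSR) / sum(RSR) over the
# hyperspectral samples, computed server-side so only one number per
# footprint crosses the network.

def band_average(wavelengths, radiances, rsr):
    """RSR-weighted mean radiance over the sampled spectrum."""
    num = sum(L * rsr(w) for w, L in zip(wavelengths, radiances))
    den = sum(rsr(w) for w in wavelengths)
    return num / den

# Hypothetical boxcar RSR covering 10.5-11.5 micrometers.
boxcar = lambda w: 1.0 if 10.5 <= w <= 11.5 else 0.0
wl = [10.0, 10.6, 11.0, 11.4, 12.0]      # sample wavelengths (um)
L = [90.0, 100.0, 102.0, 104.0, 95.0]    # radiances (arbitrary units)
value = band_average(wl, L, boxcar)      # only in-band samples contribute
```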
.... Since then it has become the number one network operating system on the market. With the release of Windows 2000, Microsoft has followed through on its strategy of operating system consolidation and formed a new family of servers...
Camnasio, Maurizio; Trombetta, Laura
The new INFORMIX Universal Server product, installed at CILEA on a SUN Ultra-2 server machine, is presented. It is an object-relational database capable of efficiently storing and managing alphanumeric data, images, video, Web pages and other user-defined data within a single repository.
Gandhi, A.; Harchol-Balter, M.; Adan, I.
In this paper we consider server farms with a setup cost. This model is common in manufacturing systems and data centers, where there is a cost to turn servers on. Setup costs always take the form of a time delay, and sometimes there is additionally a power penalty, as in the case of data centers.
Madsen, Anders Østergaard; Hoser, Anna Agnieszka
A major update of the SHADE server (http://shade.ki.ku.dk) is presented. In addition to all of the previous options for estimating H-atom anisotropic displacement parameters (ADPs) that were offered by SHADE2, the newest version offers two new methods. The first method combines the original translation-libration-screw analysis with input from periodic ab initio calculations. The second method allows the user to input experimental information from spectroscopic measurements or from neutron diffraction experiments on related structures and utilize this information to evaluate ADPs of H atoms...
SQL Server 2005 Integration Services (SSIS) lets you build high-performance data integration solutions. SSIS solutions wrap sophisticated workflows around tasks that extract, transform, and load (ETL) data from and to a wide variety of data sources. This Short Cut begins with an overview of key SSIS concepts, capabilities, standard workflow and ETL elements, the development environment, execution, deployment, and migration from Data Transformation Services (DTS). Next, you'll see how to apply the concepts you've learned through hands-on examples of common integration scenarios. Once you've
The ATLAS collaboration; Lehmann Miotto, Giovanna
The planned upgrades of the experiments at the Large Hadron Collider at CERN will require higher bandwidth networks for their data acquisition systems. The network congestion problem arising from the bursty many-to-one communication pattern, typical for these systems, will become more demanding. It is questionable whether commodity TCP/IP and Ethernet technologies in their current form will be still able to effectively adapt to the bursty traffic without losing packets due to the scarcity of buffers in the networking hardware. We continue our study of the idea of lossless switching in software running on commercial-off-the-shelf servers for data acquisition systems, using the ATLAS experiment as a case study. The flexibility of design in software, performance of modern computer platforms, and buffering capabilities constrained solely by the amount of DRAM memory are a strong basis for building a network dedicated to data acquisition with commodity hardware, which can provide reliable transport in congested co...
Al-Shuwaili, A.; Simone, O.; Kliewer, J.
Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf ha...
Niturkar Priyanka; Prof. V.D.Shinde
This paper describes the design of an embedded web server based on an ARM9 processor and the Linux platform. It analyses the hardware configuration and software implementation for monitoring and controlling systems or devices. The user can monitor and control temperature and smoke information. It consists of an application program written in "C" for accessing data through the serial port and updating the web page, and the porting of a Linux 2.6.3x kernel with the application program onto the ARM9 board and booting it fro...
Yip, Kevin Y; Yu, Haiyuan; Kim, Philip M; Schultz, Martin; Gerstein, Mark
Biological processes involve complex networks of interactions between molecules. Various large-scale experiments and curation efforts have led to preliminary versions of complete cellular networks for a number of organisms. To grapple with these networks, we developed TopNet-like Yale Network Analyzer (tYNA), a Web system for managing, comparing and mining multiple networks, both directed and undirected. tYNA efficiently implements methods that have proven useful in network analysis, including identifying defective cliques, finding small network motifs (such as feed-forward loops), calculating global statistics (such as the clustering coefficient and eccentricity), and identifying hubs and bottlenecks. It also allows one to manage a large number of private and public networks using a flexible tagging system, to filter them based on a variety of criteria, and to visualize them through an interactive graphical interface. A number of commonly used biological datasets have been pre-loaded into tYNA, standardized and grouped into different categories. The tYNA system can be accessed at http://networks.gersteinlab.org/tyna. The source code, JavaDoc API and WSDL can also be downloaded from the website. tYNA can also be accessed from the Cytoscape software using a plugin.
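Two of the network statistics tYNA computes — the clustering coefficient and the feed-forward loop motif — can be sketched on a toy directed graph. Plain dicts and sets stand in for tYNA's network store; the edge data and names are purely illustrative.

```python
# Sketch of two network statistics mentioned above: the local clustering
# coefficient of a node, and a count of feed-forward loop motifs
# (a->b, b->c, a->c) in a small directed graph.
from itertools import combinations, permutations

edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
nodes = {n for e in edges for n in e}

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = ({v for u, v in edges if u == node}
            | {u for u, v in edges if v == node})
    pairs = list(combinations(nbrs, 2))
    if not pairs:
        return 0.0
    linked = sum((u, v) in edges or (v, u) in edges for u, v in pairs)
    return linked / len(pairs)

def feed_forward_loops():
    """Count ordered triples (a, b, c) with edges a->b, b->c and a->c."""
    return sum((a, b) in edges and (b, c) in edges and (a, c) in edges
               for a, b, c in permutations(nodes, 3))

cc = clustering("a")       # a's neighbors are b and c, which are linked
ffl = feed_forward_loops()
```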
Eko Sasmito Hadi
An engine room monitoring control system monitors and controls the main engine and auxiliary engine from a long distance via a powerline communication network and wireless network, to ease the operator in operating the ship and save operational cost. To prevent errors in programming the main engine and auxiliary engine, a simulation using instrument software is needed to know the machine characteristics. After the simulation result fulfills the requirement, approaching the values of the test record, it can be applied to the real machine. In this study, several steps were carried out: getting to know the type of main engine and auxiliary engine whose performance will be simulated and programmed; getting the test records of the main engine and auxiliary engine; getting to know how the main engine and auxiliary engine work; making a simulation of the main engine and auxiliary engine work system; doing a monitoring control simulation by powerline communication and wireless network; and comparing the results between the simulation and the test records of the main engine and auxiliary engine. Engine programming can be set after the simulation result meets the requirement. The simulation result fulfills the requirement: the differences between the machine simulation using software instruments and the test records of the main engine and auxiliary engine are around 1%-2%. If the engine room monitoring control system by wireless and powerline communication is applied in the ship, the ship owner will gain advantages because it will prolong ship durability and allow monitoring the operation of the main engine or auxiliary engine from a long distance, while the operator will find it easier to operate the ship. The only disadvantage is the higher cost
Joannah Caborn Wengler
Did you know that computer centres are like people? They breathe air in and out like a person, they have to be kept at the right temperature, and they can even be organ donors. As part of a regular cycle of equipment renewal, the CERN Computer Centre has just donated 161 retired servers to universities in Morocco. Prof. Abdeslam Hoummada and CERN DG Rolf Heuer seeing off the servers at the beginning of their journey to Morocco. “Many people don’t realise, but the Computer Centre is like a living thing. You don’t just install equipment and it runs forever. We’re continually replacing machines, broken parts and improving things like the cooling.” Wayne Salter, Leader of the IT Computing Facilities Group, watches over the Computer Centre a bit like a nurse monitoring a patient’s temperature, especially since new international recommendations for computer centre environmental conditions were released. “A new international s...
Cross, J. N.; Meinig, C.; Mordy, C. W.; Lawrence-Slavas, N.; Cokelet, E. D.; Jenkins, R.; Tabisola, H. M.; Stabeno, P. J.
New autonomous sensors have dramatically increased the resolution and accuracy of oceanographic data collection, enabling rapid sampling over extremely fine scales. Innovative new autonomous platforms like floats, gliders, drones, and crawling moorings leverage the full potential of these new sensors by extending spatiotemporal reach across varied environments. During 2015 and 2016, the Innovative Technology for Arctic Exploration Program at the Pacific Marine Environmental Laboratory tested several new types of fully autonomous platforms with increased speed, durability, and power and payload capacity designed to deliver cutting-edge ecosystem assessment sensors to remote or inaccessible environments. The Expendable Ice-Tracking (EXIT) float developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) is moored near bottom during the ice-free season and released on an autonomous timer beneath the ice during the following winter. The float collects a rapid profile during ascent, and continues to collect critical, poorly-accessible under-ice data until melt, when data is transmitted via satellite. The autonomous Oculus sub-surface glider developed by the University of Washington and PMEL has a large power and payload capacity and an enhanced buoyancy engine. This 'coastal truck' is designed for the rapid water column ascent required by optical imaging systems. The Saildrone is a solar and wind powered ocean unmanned surface vessel (USV) developed by Saildrone, Inc. in partnership with PMEL. This large-payload (200 lbs), fast (1-7 kts), durable (46 kts winds) platform was equipped with 15 sensors designed for ecosystem assessment during 2016, including passive and active acoustic systems specially redesigned for autonomous vehicle deployments. The sensors deployed on these platforms achieved rigorous accuracy and precision standards. These innovative platforms provide new sampling capabilities and cost efficiencies in high-resolution sensor deployment.
de Vries, Andrie
Many astronomers working in the field of AstroInformatics write code as part of their work. Although the programming language of choice is Python, a small number (8%) use R. R has its specific strengths in the domain of statistics, and is often viewed as limited in the size of data it can handle. However, Microsoft R Server is a product that removes these limitations by being able to process much larger amounts of data. I present some highlights of R Server, by illustrating how to fit a convolutional neural network using R. The specific task is to classify galaxies, using only images extracted from the Sloan Digital Skyserver.
Ozer, Ekin; Feng, Maria Q.; Feng, Dongming
This paper presents an innovative structural health monitoring (SHM) platform in terms of how it integrates smartphone sensors, the web, and crowdsourcing. The ubiquity of smartphones has provided an opportunity to create low-cost sensor networks for SHM. Crowdsourcing has given rise to citizen initiatives becoming a vast source of inexpensive, valuable but heterogeneous data. Previously, the authors have investigated the reliability of smartphone accelerometers for vibration-based SHM. This paper takes a step further to integrate mobile sensing and web-based computing for a prospective crowdsourcing-based SHM platform. An iOS application was developed to enable citizens to measure structural vibration and upload the data to a server with smartphones. A web-based platform was developed to collect and process the data automatically and store the processed data, such as modal properties of the structure, for long-term SHM purposes. Finally, the integrated mobile and web-based platforms were tested to collect the low-amplitude ambient vibration data of a bridge structure. Possible sources of uncertainties related to citizens were investigated, including the phone location, coupling conditions, and sampling duration. The field test results showed that the vibration data acquired by smartphones operated by citizens without expertise are useful for identifying structural modal properties with high accuracy. This platform can be further developed into an automated, smart, sustainable, cost-free system for long-term monitoring of structural integrity of spatially distributed urban infrastructure. Citizen Sensors for SHM will be a novel participatory sensing platform in the way that it offers hybrid solutions to transitional crowdsourcing parameters. PMID:26102490
Jongsawat, Nipat; Tungkasthan, Anunucha; Premchaiswadi, Wichian
GeNIe (Graphical Network Interface) is designed for a Windows environment and works well on a Windows platform, but it cannot be run on a web or Internet-based platform, which limits its use on a worldwide basis. Another limitation is that it does not support real-time data processing. To overcome the limitations of GeNIe, the SMILE web application was designed and implemented on the client/server architecture mentioned in Section 3. GeNIe is an outer shell of SMILE. SMILE...
Gilbert E. Pérez
To avoid the high cost and arduous effort usually associated with field analysis of Wireless Sensor Networks (WSN), Modeling and Simulation (M&S) is used to predict the behavior and performance of the network. However, the simulation models utilized to imitate real-life networks are often general-purpose. Therefore, they are less likely to provide accurate predictions for different real-life networks. In this paper, a comparison methodology based on hypothesis testing is proposed to evaluate and compare simulation output versus real-life network measurements. Performance-related parameters such as traffic generation rates and goodput rates for a small WSN are considered. To execute the comparison methodology, a "Comparison Tool" composed of MATLAB scripts is developed and used. The comparison tool demonstrates the need for model verification and the analysis of good agreement between the simulation and empirical measurements.
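As a hedged illustration of the kind of statistical comparison such a methodology performs (the paper's tool is MATLAB scripts; this Python sketch and its sample values are invented), simulated and measured goodput samples can be compared with Welch's t statistic:

```python
# Illustrative sketch: Welch's t statistic between simulated and measured
# goodput samples. All sample values are invented for demonstration.
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    return (mean(x) - mean(y)) / (vx + vy) ** 0.5

sim  = [10.1, 9.8, 10.3, 10.0, 9.9]   # simulated goodput (kbps), invented
real = [9.7, 9.9, 9.6, 10.0, 9.8]     # measured goodput (kbps), invented
t = welch_t(sim, real)
```

A small |t| suggests the simulation output and the field measurements agree; a large |t| (relative to the t distribution's critical value) flags a model that needs verification.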
This book is intended for system administrators and IT professionals with experience in Windows Server 2008 or Windows Server 2012 environments who are looking to acquire the skills and knowledge necessary to manage and maintain the core infrastructure required for a Windows Server 2012 and Windows Server 2012 R2 environment.
This paper deals with determining the capacity supply for virtualized servers. First, a server is modeled as a queue based on a Markov chain. Then, the effect of server virtualization on the capacity supply will be analyzed with the distribution function of the server load.
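A minimal concrete instance of a Markov-chain queue model for a server is the M/M/1 queue; this is an assumption for illustration, and the paper's actual model may differ:

```python
# Minimal M/M/1 sketch (an assumption; the paper's Markov model may differ):
# steady-state utilisation and load distribution for a single-server queue
# with Poisson arrivals and exponential service times.

def mm1_metrics(arrival_rate, service_rate):
    rho = arrival_rate / service_rate          # server utilisation
    assert rho < 1, "queue is unstable"
    mean_in_system = rho / (1 - rho)           # expected number of jobs
    def p_n(n):                                # P(n jobs in system)
        return (1 - rho) * rho ** n
    return rho, mean_in_system, p_n

# Invented rates: 8 requests/s arriving, 10 requests/s service capacity.
rho, L, p = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
```

The geometric distribution p(n) plays the role of the server-load distribution function: it gives the probability of each load level, from which a capacity supply can be sized for a target overload probability.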
For installing many sensors in a limited space with a limited computing resource, the digitization of the sensor output at the site of sensation has advantages such as a small amount of wiring, low signal interference and high scalability. For this purpose, we have developed a dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) chip (referred to as “sensor platform LSI”) for bus-networked Micro-Electro-Mechanical-Systems (MEMS)-LSI integrated sensors. In this LSI, collision avoidance, adaptation and event-driven functions are simply implemented to relieve data collision and congestion in asynchronous serial bus communication. In this study, we developed a network system with 48 sensor platform LSIs based on a Printed Circuit Board (PCB) in a backbone bus topology with a bus length of 2.4 m. We evaluated the serial communication performance when 48 LSIs operated simultaneously with the adaptation function. The number of data packets received from each LSI was almost identical, and the average sampling frequency of 384 capacitance channels (eight for each LSI) was 73.66 Hz.
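The reported figures imply a straightforward aggregate throughput, which can be checked back-of-envelope (all values taken from the abstract):

```python
# Back-of-envelope check of the reported figures: 48 sensor platform LSIs,
# each with 8 capacitance channels, sampled at an average of 73.66 Hz over
# the shared asynchronous serial bus.
n_lsi, channels_per_lsi, avg_rate_hz = 48, 8, 73.66

total_channels = n_lsi * channels_per_lsi          # 384 channels in total
samples_per_second = total_channels * avg_rate_hz  # aggregate sample rate
```

So the bus carries roughly 28,000 capacitance samples per second in aggregate, which is the load the collision-avoidance and adaptation functions have to sustain.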
In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.
Faden, J.; Vandegriff, J. D.; Weigel, R. S.
Autoplot was introduced in 2008 as an easy-to-use plotting tool for the space physics community. It reads data from a variety of file resources, such as CDF and HDF files, and a number of specialized data servers, such as the PDS/PPI's DIT-DOS, CDAWeb, and the University of Iowa's RPWG Das2Server. Each of these servers has optimized methods for transmitting data to display in Autoplot, but requires coordination and specialized software to work, limiting Autoplot's ability to access new servers and datasets. Likewise, groups who would like software to access their APIs must either write their own clients, or publish a specification document in hopes that people will write clients. The HAPI specification was written so that a simple, standard API could be used by both Autoplot and server implementations, to remove these barriers to the free flow of time series data. Autoplot's software for communicating with HAPI servers is presented, showing the user interface scientists will use, and how data servers might implement the HAPI specification to provide access to their data. This will also include instructions on how Autoplot is installed and used on desktop computers, and used to view data from the RBSP, Juno, and other missions.
Daidone, Alessandro; Renier, Thibault; Bondavalli, Andrea
Server replication is a common fault-tolerance strategy to improve transaction dependability for services in communications networks. In distributed architectures, fault-diagnosis and recovery are implemented via the interaction of the server replicas with the clients and other entities...... such as enhanced name servers. Such architectures provide an increased number of redundancy configuration choices. The influence of a (wide area) network connection can be quite significant and induce trade-offs between dependability and user-perceived performance. This paper develops a quantitative stochastic...... model using stochastic activity networks (SAN) for the evaluation of performance and dependability metrics of a generic transaction-based service implemented on a distributed replication architecture. The composite SAN model can be easily adapted to a wide range of client-server applications deployed...
CERN relies on OPC Server implementations from 3rd-party device vendors to provide a software interface to their respective hardware. Each time a vendor releases a new OPC Server version, it is regression tested internally to verify that existing functionality has not been inadvertently broken during the process of adding new features. In addition, bugs and problems must be communicated to the vendors in a reliable and portable way. This presentation covers the automated test approach used at CERN to cover both cases: scripts are written in a domain-specific language specifically created for describing OPC tests, and executed by a custom software engine driving the OPC Server implementation.
Masood-Al-Farooq, Basit A
This book is an easy-to-follow, comprehensive guide that is full of hands-on examples, which you can follow to successfully design, build, and deploy mission-critical database applications with SQL Server 2014. If you are a database developer, architect, or administrator who wants to learn how to design, implement, and deliver a successful database solution with SQL Server 2014, then this book is for you. Existing users of Microsoft SQL Server will also benefit from this book as they will learn what's new in the latest version.
Fragments of a technical report on the Sporadic Server algorithm survive here: a distribution notice (available from the National Technical Information Service, U.S. Department of Commerce, Springfield VA 22161), a trademark disclaimer ("Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder"), a partial table of contents (1. Introduction; 1.1. Background; 2. The Sporadic Server Algorithm; 2.1. SS Algorithm ...), and a note from Section 4.3.3 ("The Sporadic Server") that aperiodic tasks should be unregistered from the sporadic server if sporadic service is ever terminated (e.g., during a mode change).
This is a comprehensive guide with a step-by-step approach that enables you to host and manage servers using QlikView Server and QlikView Publisher. If you are a server administrator wanting to learn how to deploy QlikView Server for server management, analysis and testing, and QlikView Publisher for publishing of business content, then this is the perfect book for you. No prior experience with QlikView is expected.
Allaucca P, J.J.; Picon C, C.; Zaharia B, M. [Departamento de Radioterapia, Instituto de Enfermedades Neoplasicas, Av. Angamos Este 2520, Lima 34 (Peru)
A hardware and software system has been designed for a radiotherapy department. It runs on a Novell Network operating system platform, sharing the existing resources and those of the server; it is centralized, multi-user and offers greater safety. It resolves a variety of problems and calculation needs, patient procedures and administration; it is very fast and versatile, and contains a set of menus and options which may be selected with the mouse, the arrow keys or shortcut keys. (Author)
CERN. Geneva; Costa, Flavio
A short online tutorial introducing the CERN Document Server (CDS). Basic functionality description, the notion of Revisions and the CDS test environment. Links: CDS Production environment CDS Test environment
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 2.1 ENERGY STAR Program Requirements for Enterprise Servers that are effective as of...
Pepe, A.; Baron, T.; M Gracco; J.Y. Le Meur; Robinson, N; Simko, T.; Vesely, M.
CERN as the international European Organization for Nuclear Research has been involved since its early beginnings with the open dissemination of scientific results. The dissemination started by free paper distribution of preprints by CERN Library and continued electronically via FTP bulletin boards, the World Wide Web to the current OAI-compliant CERN Document Server. CERN Document Server Software (CDSware) is a suite of applications which provides the framework and tools for building and ma...
Henwood, Ruth; Patten, Gabriela; Barnett, Whitney; Hwang, Bella; Metcalf, Carol; Hacking, Damian; Wilkinson, Lynne
Médecins Sans Frontières supports human immunodeficiency virus (HIV)-infected youth, aged 12-25 years, at a clinic in Khayelitsha, South Africa. Patients are enrolled in youth clubs, and provided with a virtual chat room, using the cell-phone-based social networking platform, MXit, to support members between monthly/bimonthly club meetings. The acceptability and uptake of MXit was assessed. MXit was facilitated by lay counsellors, was password protected, and participants could enter and leave at will. Club members were asked to complete self-administered questionnaires and participate in two focus-group discussions. In total, 60 club members completed the questionnaire, and 12 participated in the focus groups. Fifty-eight percent were aged 23-25 years, 63% were female and 83% had a cell phone. Sixty percent had used MXit before, with 38% having used it in the past month. Sixty-five percent were aware of the chat room and 39% knew how to access it. Thirty-four percent used the chat room at least once, 20% had visited the chat room in the past month, and 29% had used MXit to have private conversations with other club members. Fifty-seven percent used the chat room to get advice, and 84% of all respondents felt that offering a service outside the youth club meetings was important and would like to see one continue. The cost of using social media platforms was an issue with some, as well as the need for anonymity. Preference for other platforms, logistical obstacles, or loss of interest contributed to non-use. Reported usage of the MXit chat room was low, but participants indicated acceptance of the programme and their desire to interact with their peers through social media. Suggestions to improve the platform included accessible chat histories, using more popular platforms such as Facebook or WhatsApp, and having topical discussions where pertinent information for youth is provided.
Strijkers, R.J.; Meulenhoff, P.J.
The invention enables placement and use of a network node function in a second network node instead of using the network node function in a first network node. The network node function is e.g. a server function or a router function. The second network node is typically located in or close to the
Syed Tahir Hussain Rizvi
The realization of a deep neural architecture on a mobile platform is challenging, but can open up a number of possibilities for visual analysis applications. A neural network can be realized on a mobile platform by exploiting the computational power of the embedded GPU and simplifying the flow of a neural architecture trained on the desktop workstation or a GPU server. This paper presents an embedded platform-based Italian license plate detection and recognition system using deep neural classifiers. In this work, trained parameters of a highly precise automatic license plate recognition (ALPR) system are imported and used to replicate the same neural classifiers on a Nvidia Shield K1 tablet. A CUDA-based framework is used to realize these neural networks. The flow of the trained architecture is simplified to perform the license plate recognition in real-time. Results show that the tasks of plate and character detection and localization can be performed in real-time on a mobile platform by simplifying the flow of the trained architecture. However, the accuracy of the simplified architecture would be decreased accordingly.
An Artificial Neural Network (ANN)-based pattern recognition technique is used for ensuring the reliable evaluation of responses from an array of Zinc Oxide (ZnO)-based sensors comprising pure ZnO nano-rods and composites of ZnO–SnO2. All the sensors were fabricated in the lab. The paper first reports the development of an artificial neural network based model for successfully recognizing different concentrations of hydrogen, methane and carbon monoxide. A feed-forward back-propagation neural network was used for the classification of the gases at critical concentrations. The optimized ANN algorithm is then embedded in the microcontroller-based circuit and finally verified under lab conditions.
Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific
Recurrent neural networks (RNN) have been widely applied to many sequential tagging tasks such as natural language processing (NLP) and time series analysis, and it has been proved that RNNs work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which relies on many unrealistic hypotheses. Our model is built based on events (user requests), which are the root cause of server performance. We predict the performance of the servers using RNN-LSTM by analyzing the logs of servers in a data center, which contain users' access sequences. Previous work on workload prediction could not generate detailed simulated workload, which is useful in testing the working condition of servers. Our method provides a new way to reproduce user request sequences to solve this problem by using RNN-LSTM. Experiment results show that our models achieve good performance in generating load and predicting performance on a data set logged from an online service. We did experiments with the nginx web server and the mysql database server, and our methods can be easily applied to other servers in the data center.
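A hedged sketch of the data preparation such an approach needs (not the paper's code; the request counts are invented): a server load series is turned into fixed-length windows with one-step-ahead targets, on which an LSTM would then be trained. The framework code for the LSTM itself is omitted.

```python
# Generic sketch of sequence-to-target windowing for one-step-ahead server
# load prediction. An RNN-LSTM would consume these (window, target) pairs.

def make_windows(series, window):
    """Return (inputs, targets) for one-step-ahead prediction."""
    inputs, targets = [], []
    for i in range(len(series) - window):
        inputs.append(series[i:i + window])
        targets.append(series[i + window])
    return inputs, targets

# Invented per-minute request counts for illustration.
load = [120, 135, 150, 160, 155, 140, 130]
X, y = make_windows(load, window=3)
```

Each training pair asks the model to predict the next minute's request count from the preceding three; the same windowing applied in generation mode lets the trained model emit a simulated request sequence step by step.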
Suman Dutta; Shouman Barua; Jishu Sen
Availability is one of the most important concerns in the networking world. For any high-availability network, we need to maintain 99.99999% availability. That is why it is one of the most important factors to find out the single point of failure in the network architecture and eliminate that single point of failure (SPOF) from the physical network and logical network. SPOF in our server infrastructure has been analysed in terms of communicating with the router for ...
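The availability target quoted above translates directly into a permitted downtime budget; a quick sketch (generic arithmetic, not from the paper):

```python
# What an availability percentage means in concrete downtime per year.
# 99.99999% ("seven nines") allows only a few seconds of outage annually.

def downtime_seconds_per_year(availability_percent):
    seconds_per_year = 365.25 * 24 * 3600
    return (1 - availability_percent / 100) * seconds_per_year

dt = downtime_seconds_per_year(99.99999)
```

At 99.99999% availability the budget is roughly three seconds of downtime per year, which is why any single point of failure, whose repair alone typically takes far longer than that, must be eliminated from both the physical and the logical network.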
Paraskevas, Michael; Zarouchas, Thomas; Angelopoulos, Panagiotis; Perikos, Isidoros
Nowadays the growing need for highly qualified computer science educators in modern educational environments is commonplace. This study examines the potential use of the Greek School Network (GSN) to provide a robust and comprehensive e-training course for computer science educators in order to efficiently exploit advanced IT services and establish a…
LeMoyne, Robert; Mastroianni, Timothy
Natural gait consists of synchronous and rhythmic patterns for both the lower and upper limb. People with hemiplegia can experience reduced arm swing, which can negatively impact the quality of gait. Wearable and wireless sensors, such as through a smartphone, have demonstrated the ability to quantify various features of gait. With a software application the smartphone (iPhone) can function as a wireless gyroscope platform capable of conveying a gyroscope signal recording as an email attachment by wireless connectivity to the Internet. The gyroscope signal recordings of the affected hemiplegic arm with reduced arm swing and the unaffected arm are post-processed into a feature set for machine learning. Using a multilayer perceptron neural network, a considerable degree of classification accuracy is attained to distinguish between the affected hemiplegic arm with reduced arm swing and the unaffected arm.
Staykova, Kalina Stefanova; Damsgaard, Jan
This research paper presents an initial attempt to introduce and explain the emergence of a new phenomenon, which we refer to as platform constellations. Functioning as highly modular systems, platform constellations are collections of highly connected platforms which co-exist in parallel...
The book is packed with clear instructions and plenty of screenshots, providing all the support and guidance you will need as you begin to generate reports with SQL Server 2012 Reporting Services.This book is for those who are new to SQL Server Reporting Services 2012 and aspiring to create and deploy cutting edge reports. This book is for report developers, report authors, ad-hoc report authors and model developers, and Report Server and SharePoint Server Integrated Report Server administrators. Minimal knowledge of SQL Server is assumed and SharePoint experience would be helpful.
Simon, Alan R
Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and how client/server computing is being done. This book analyzes an in-depth set of case studies about two different open client/server development environments, Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include open systems and client/server computing, next-generation client/server architectures, principles of middleware, and an overview of ProtoGen+. The ViewPaint environment
Implementing Citrix XenServer Quick Starter is a practical, hands-on guide that will help you get started with the Citrix XenServer Virtualization technology with easy-to-follow instructions.Implementing Citrix XenServer Quick Starter is for system administrators who have little to no information on virtualization and specifically Citrix XenServer Virtualization. If you're managing a lot of physical servers and are tired of installing, deploying, updating, and managing physical machines on a daily basis over and over again, then you should probably explore your option of XenServer Virtualizati
Get up to speed on the extensive changes to the newest release of Microsoft SQL Server. The 2012 release of Microsoft SQL Server changes how you develop applications for SQL Server. With this comprehensive resource, SQL Server authority Robert Vieira presents the fundamentals of database design and SQL concepts, and then shows you how to apply these concepts using the updated SQL Server. Published for the 2012 release, Beginning Microsoft SQL Server 2012 Programming begins with a quick overview of database design basics and the SQL query language and then quickly proceeds to sho
Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)
Parallel-vector supercomputers have been the workhorses of high performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
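The price-performance ratio (PPR) used above is simply FLOPS divided by price; a quick check in Python (the system price below is derived from the stated FLOPS and PPR figures, not quoted in the abstract):

```python
# PPR = delivered FLOPS per dollar of system price. The C90 figures
# (460 MFLOPS, 160 FLOPS/$) come from the abstract; the price is implied.

def ppr(flops, price_dollars):
    return flops / price_dollars

c90_flops = 460e6                      # 460 MFLOPS, from the abstract
c90_ppr   = 160.0                      # FLOPS per dollar, from the abstract
implied_price = c90_flops / c90_ppr    # derived, not stated in the text
```

This makes the trade-off concrete: a workstation delivering far fewer absolute FLOPS can still win on PPR if its price is proportionally lower, which is the abstract's argument for the IBM and SGI systems.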
Cheng, Ronghai; Leung, Ross Ka-Kit; Chen, Yao; Pan, Yidan; Tong, Yin; Li, Zhoufang; Ning, Luwen; Ling, Xuefeng B; He, Jiankui
We present Virtual Pharmacist, a web-based platform that takes common types of high-throughput data, namely microarray SNP genotyping data, FASTQ and Variant Call Format (VCF) files as inputs, and reports potential drug responses in terms of efficacy, dosage and toxicity at a glance. Batch submission facilitates multivariate analysis or data mining of targeted groups. Individual analysis consists of a report that is readily comprehensible to patients and practitioners who have basic knowledge in pharmacology, a table that summarizes variants and potentially affected drug responses according to the US Food and Drug Administration pharmacogenomic biomarker labeled drug list and PharmGKB, and a visualization of a gene-drug-target network. Group analysis provides the distribution of the variants and potentially affected drug responses of a target group, a sample-gene variant count table, and a sample-drug count table. Our analysis of genomes from the 1000 Genomes Project underlines the potentially differential drug responses among different human populations. Even within the same population, the findings from Watson's genome highlight the importance of personalized medicine. Virtual Pharmacist can be accessed freely at http://www.sustc-genome.org.cn/vp or installed as a local web server. The codes and documentation are available at the GitHub repository (https://github.com/VirtualPharmacist/vp). Administrators can download the source codes to customize access settings for further development.
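Since the platform accepts VCF files as one of its inputs, a minimal sketch of pulling the fixed fields out of a single VCF 4.x data line is shown below; the coordinates and rsID in the example are illustrative, not taken from the paper.

```python
def parse_vcf_line(line):
    """Split one tab-delimited VCF data record into its fixed leading fields."""
    chrom, pos, vid, ref, alt = line.rstrip("\n").split("\t")[:5]
    return {"chrom": chrom, "pos": int(pos), "id": vid, "ref": ref, "alt": alt}

# Hypothetical record in CHROM POS ID REF ALT QUAL FILTER INFO order.
record = parse_vcf_line("22\t42522613\trs1065852\tG\tA\t.\tPASS\t.")
print(record["id"], record["pos"])  # rs1065852 42522613
```

A real pipeline would also handle header lines (starting with `#`) and the genotype columns, which this sketch omits.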
Navratil, Vincent; de Chassey, Benoit; Combe, Chantal Rabourdin; Lotteau, Vincent
Comprehensive understanding of molecular mechanisms underlying viral infection is a major challenge towards the discovery of new antiviral drugs and susceptibility factors of human diseases. New advances in the field are expected from systems-level modelling and integration of the incessant torrent of high-throughput "-omics" data. Here, we describe the Human Infectome protein interaction Network, a novel systems virology model of a virtual virus-infected human cell concerning 110 viruses. This in silico model was applied to comprehensively explore the molecular relationships between viruses and their associated diseases. This was done by merging virus-host and host-host physical protein-protein interactomes with the set of genes essential for viral replication and involved in human genetic diseases. This systems-level approach provides strong evidence that viral proteomes target a wide range of functional and inter-connected modules of proteins as well as highly central and bridging proteins within the human interactome. The high centrality of targeted proteins was correlated with their essentiality for viruses' lifecycles, using functional genomic RNAi data. A stealth attack of viruses on proteins bridging cellular functions was demonstrated by simulation of cellular network perturbations, a property that could be essential in the molecular aetiology of some human diseases. Networking the Human Infectome and Diseasome unravels the connectivity of viruses to a wide range of diseases and profiled the molecular basis of Hepatitis C Virus-induced diseases as well as 38 new candidate genetic predisposition factors involved in type 1 diabetes mellitus. The Human Infectome and Diseasome Networks described here provide a unique gateway towards the comprehensive modelling and analysis of the systems-level properties associated with viral infection as well as candidate genes potentially involved in the molecular aetiology of human diseases.
Wang, Jie; Lin, Chung-Chih; Yu, Yan-Shuo; Yu, Tsang-Chu
The goal of this study is to use wireless sensor technologies to develop a smart clothes service platform for health monitoring. Our platform consists of smart clothes, a sensor node, a gateway server, and a health cloud. The smart clothes have fabric electrodes to detect electrocardiography (ECG) signals. The sensor node improves the accuracy of QRS-complex detection by morphology analysis and reduces power consumption through power-saving transmission functionality. The gateway server prov...
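The abstract above mentions improving QRS-complex detection via morphology analysis; as a loose illustration only (not the authors' algorithm), a thresholded local-maximum detector over a sampled ECG trace can be sketched as:

```python
def detect_qrs(samples, threshold):
    """Toy QRS detection: indices of local maxima above an amplitude threshold."""
    peaks = []
    for i in range(1, len(samples) - 1):
        if samples[i] > threshold and samples[i] >= samples[i - 1] and samples[i] > samples[i + 1]:
            peaks.append(i)
    return peaks

# Synthetic trace: two "R waves" riding on low-amplitude noise.
ecg = [0.1, 0.0, 1.2, 0.1, 0.05, 0.0, 1.1, 0.2, 0.0]
print(detect_qrs(ecg, threshold=0.8))  # [2, 6]
```

Real morphology analysis additionally considers slope, width, and refractory periods, which this sketch deliberately leaves out.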
Sone, M; Sasaki, M; Oikawa, H; Yoshioka, K; Ehara, S; Tamakawa, Y
This paper will attempt to examine the industry requirements for shared network data storage and sustained high-speed (tens to hundreds to thousands of megabytes per second) network data serving via the NFS and FTP protocol suite. It will discuss the current structural and architectural impediments to achieving these sorts of data rates cost-effectively today on many general-purpose servers, and will describe an architecture and resulting product family that addresses these problems. The sustained performance levels that were achieved in the lab will be shown, as well as a discussion of early customer experiences utilizing both the HIPPI-IP and ATM OC3-IP network interfaces.
Lahrmann, Harry; Agerholm, Niels; Juhl, Jens
This paper presents the ITS Platform Northern Denmark, an open platform for testing ITS solutions. The platform consists of a newly developed GNSS/GPRS On Board Unit installed in nearly 500 cars, a backend server, and a specially designed digital road map for ITS applications. The platform is open for third-party applications. This paper presents the platform's potentials and explains a series of test applications developed on the platform. Moreover, a number of new projects planned for the ITS Platform are introduced.
Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. The cloud provides unlimited computation power, memory, storage and, especially, collaboration opportunities. Cloud-enabled robots are divided into two categories: standalone and networked robots. This article surveys cloud robotic platforms, and standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM) and monitoring.
Chevalier, Scott [Indiana Univ., Bloomington, IN (United States). International Networks; Schopf, Jennifer M. [Indiana Univ., Bloomington, IN (United States). International Networks; Miller, Kenneth [Pennsylvania State Univ., University Park, PA (United States). Telecommunications and Networking Services; Zurawski, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Sciences Network
Today's science collaborations depend on reliable, high-performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve using measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets. The perfSONAR framework facilitates network measurement, making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends. This work explores low-cost alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly, and reduced capital and operating expenditures. Finally, we present candidate platforms and a testing scenario that evaluated the relative merits of four types of small-form-factor equipment to deliver accurate performance measurements.
Manage crypto / security plan / IATOs / accreditation process; manage VTC conference servers and VTC equipment at all sites; provide instrumentation to... Joint experimentation; upgrade and maintain technological currency of deployable network assets; maintain Joint / Strategic network gateway (DISN-LES...
Farzan, Faranak; Atluri, Sravya; Frehlich, Matthew; Dhami, Prabhjot; Kleffner, Killian; Price, Rae; Lam, Raymond W; Frey, Benicio N; Milev, Roumen; Ravindran, Arun; McAndrews, Mary Pat; Wong, Willy; Blumberger, Daniel; Daskalakis, Zafiris J; Vila-Rodriguez, Fidel; Alonso, Esther; Brenner, Colleen A; Liotti, Mario; Dharsee, Moyez; Arnott, Stephen R; Evans, Kenneth R; Rotzinger, Susan; Kennedy, Sidney H
Subsequent to global initiatives in mapping the human brain and investigations of neurobiological markers for brain disorders, the number of multi-site studies involving the collection and sharing of large volumes of brain data, including electroencephalography (EEG), has been increasing. Among the complexities of conducting multi-site studies and increasing the shelf life of biological data beyond the original study are timely standardization and documentation of relevant study parameters. We present the insights gained and guidelines established within the EEG working group of the Canadian Biomarker Integration Network in Depression (CAN-BIND). CAN-BIND is a multi-site, multi-investigator, and multi-project network supported by the Ontario Brain Institute with access to Brain-CODE, an informatics platform that hosts a multitude of biological data across a growing list of brain pathologies. We describe our approaches and insights on documenting and standardizing parameters across the study design, data collection, monitoring, analysis, integration, knowledge-translation, and data archiving phases of CAN-BIND projects. We introduce a custom-built EEG toolbox to track data preprocessing with open-access for the scientific community. We also evaluate the impact of variation in equipment setup on the accuracy of acquired data. Collectively, this work is intended to inspire establishing comprehensive and standardized guidelines for multi-site studies.
... Experiments use the Linux operating system and the Flash web server. All experiments are repeated under a range of server loads and under both trace-based workloads and those generated by a Web workload generator...
... One of the factors that boosted this ability was the evolution of web servers. Using web server technology, one can connect and exchange information with the most remote places all over the...
Segawa, Katsunori; Nakano, Tatsuya; Saito, Yoshiro
An updated version of the National Institute of Health Sciences Computer Network System (NIHS-NET) is described. In order to reduce its electric power consumption, the main server system was rebuilt using virtual machine technology. The services that each machine provided in the previous network system had to be maintained as much as possible, so an individual server was constructed for each service, because a virtual server often shows reduced performance compared with a physical server. As a result, though the number of virtual servers increased and network communication among the servers became more complicated, the conventional services were maintained and the security level was in fact improved, along with savings in electrical power. The updated NIHS-NET bears multiple security countermeasures. To make maximal use of these measures, awareness of network security by all users is expected.
Zhou, Tingting; Zhang, Tong; Zhang, Rui; Lou, Zheng; Deng, Jianan; Wang, Lili
Development of high-performance room-temperature sensors remains a grand challenge despite high demand in practical applications. Metal oxide semiconductors (MOSs) have many advantages over alternatives due to their easy functionalization, high surface area, and low cost. However, they typically need a high working temperature during the sensing process. Here, a p-type sensing layer is reported, consisting of pore-rich dumbbell-like Co3O4 particles (DP-Co3O4) with intrinsically high catalytic activity. The gas sensor (GS) based on the DP-Co3O4 catalyst exhibits ultrahigh NH3 sensing activity, along with excellent stability over NH3 GSs based on other structures, in a room-temperature working environment. In addition, the unique pore-rich structure and high catalytic activity of DP-Co3O4 endow a fast gas-diffusion rate and high sensitivity at room temperature. Taken together, the findings in this work highlight the merit of integrating highly active materials in p-type materials, offering a framework for developing high-sensitivity room-temperature sensing platforms.
"Internet Scanner 5.2 User Guide for Windows NT", Internet Security Systems, Inc., 1998. "SBIR Topic AF97-043 Network Security Visualization"... to the Server application to import into the NSV system database data that gets queried from ISS Internet Security Scanner 5.4. Objective #5 was... Internet Security Scanner scan of a live network and imported through a Cartridge component. The data was accessed through the Server component and...
Background: Legumes (Leguminosae or Fabaceae) play a major role in agriculture. Transcriptomics studies in the model legume species, Medicago truncatula, are instrumental in helping to formulate hypotheses about the role of legume genes. With the rapid growth of publicly available Affymetrix GeneChip Medicago Genome Array data from a great range of tissues, cell types, growth conditions, and stress treatments, the legume research community desires an effective bioinformatics system to aid efforts to interpret the Medicago genome through functional genomics. We developed the Medicago truncatula Gene Expression Atlas (MtGEA) web server for this purpose. Description: The MtGEA web server is a centralized platform for analyzing the Medicago transcriptome. Currently, the web server hosts gene expression data from 156 Affymetrix GeneChip Medicago genome arrays in 64 different experiments, covering a broad range of developmental and environmental conditions. The server enables flexible, multifaceted analyses of transcript data and provides a range of additional information about genes, including different types of annotation and links to the genome sequence, which help users formulate hypotheses about gene function. Transcript data can be accessed using Affymetrix probe identification number, DNA sequence, gene name, functional description in natural language, GO and KEGG annotation terms, and InterPro domain number. Transcripts can also be discovered through co-expression or differential expression analysis. Flexible tools to select a subset of experiments and to visualize and compare expression profiles of multiple genes have been implemented. Data can be downloaded, in part or in full, in a tabular form compatible with common analytical and visualization software. The web server will be updated on a regular basis to incorporate new gene expression data and genome annotation, and is accessible
Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes
EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or add-ons. Additionally, we are able to run the Earth data visualization client on a wide range of different platforms with very different software and hardware requirements, such as smartphones (e.g. iOS, Android), different desktop systems, etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. Underlying the EarthServer web client
Jelmert, Stian Opsahl
Master's thesis in network and system administration. Today, most literature about services in system administration concerns conventional services like email servers. How could one monitor and analyze a scenario where the service in question is a game server? As these two services are technologically different, conventional monitoring tools may miss vital information in the context of game servers. This thesis focuses on developing a monitoring system for a game server in order to...
Ferraris, M.; Frixione, P.; Squarcia, S.
In this paper the basic ideas of NORMA (Network Oriented Radiological and Medical Archive) are discussed. NORMA is an original project built by a team of physicists in collaboration with radiologists in order to select the best treatment planning in radiotherapy. It allows physicians and health physicists, working in different places, to discuss interesting clinical cases while visualizing the same diagnostic images at the same time and highlighting zones of interest (tumors and organs at risk). NORMA has a client/server architecture in order to be platform independent. Applying World Wide Web technologies, it can be easily used by people with no specific computer knowledge, providing verbose help to guide the user through the right steps of execution. The client side is an applet, while the server side is a Java application. In order to optimize execution, the project also includes a proprietary protocol, lying over the TCP/IP suite, that organizes data exchanges and control messages. Diagnostic images are retrieved from a relational database or from a standard DICOM (Digital Imaging and Communications in Medicine) PACS through the DICOM-WWW gateway, allowing connection of the usual Web browsers, used by the NORMA system, to DICOM applications via the HTTP protocol. Browser requests are sent to the gateway from the Web server through CGI (Common Gateway Interface). DICOM software translates the requests into DICOM messages and organizes the communication with the remote DICOM application.
While carrying out formative assessment activities over social network services (SNS), it has been noted that personalized notifications run a high risk of "the important post getting lost" in the notification feed. To highlight this problem, this paper compares, within a posttest-only quasi-experiment, a total of 104 first-year undergraduate students, all prospective ICT teachers, in two groups. A formative assessment system in the ubiquitous learning context is delivered over an SNS in both groups. In the first group, the SNS was used for the entire assessment task. In the second group, the questions were delivered and responses received over mobile-phone SMS messages, while the SNS was used solely for providing feedback. The cases were compared in terms of voluntary participation rates and academic success. Both response rates and academic success were significantly higher in the SMS group. When asked their reasons for not responding to questions, the SNS-only group frequently reported "not noticing the questions being sent". This may indicate a flaw in message design when using social networks as LMSs. Sensible use of push messages is advised.
Hoepner, Petra; Eckert, Klaus-Peter
Within the European HARP project, a Java-based Open Platform has been specified and implemented to support trustworthy distributed applications for health. Emphasis was put on security services for enabling both communication and application security. The Open Platform is Web-based and comprises the Client environment, Web/Application server, as well as Database and Archive servers. Servlets composed and executed according to the user's authorisation create signed XML messages. From those messages, user-role-related applets are generated. The technical details of the realisation are presented. Possible future enhancements for user-centric, adaptable services based on next-generation mobile service environments are outlined.
The book is an example-based, hands-on guide in which you will learn how to make a game from scratch and how to develop games on the iOS platform. If you have great ideas for games and want to learn iOS game development, then this book is the right choice for you. Being familiar with iOS development is a plus, but is not mandatory. You will gradually get to grips with the new Sprite Kit framework with the help of this book.
de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.
Polling models are used as an analytical performance tool in several application areas. In these models, the focus often is on controlling the operation of the server as to optimize some performance measure. For several applications, controlling the server is not an issue as the server moves
Basil El Jundi
Many insects use the pattern of polarized light in the sky for spatial orientation and navigation. We have investigated the polarization vision system in the desert locust. To create a common platform for anatomical studies on polarization vision pathways, Kurylas et al. (2008) have generated a three-dimensional (3D) standard brain from confocal microscopy image stacks of 10 male brains, using two different standardization methods, the Iterative Shape Averaging (ISA) procedure and the Virtual Insect Brain (VIB) protocol. Comparison of both standardization methods showed that the VIB standard is ideal for comparative volume analysis of neuropils, whereas the ISA standard is the method of choice to analyze the morphology and connectivity of neurons. The central complex is a key processing stage for polarization information in the locust brain. To investigate neuronal connections between diverse central-complex neurons, we generated a higher-resolution standard atlas of the central complex and surrounding areas, using the ISA method based on brain sections from 20 individual central complexes. To explore the usefulness of this atlas, two central-complex neurons, a polarization-sensitive columnar neuron (type CPU1a) and a tangential neuron that is activated during flight, the giant fan-shaped (GFS) neuron, were reconstructed three-dimensionally from brain sections. To examine whether the GFS neuron is a candidate to contribute synaptic input to the CPU1a neuron, we registered both neurons into the standardized central complex. Visualization of both neurons revealed a potential connection of the CPU1a and GFS neurons in layer II of the upper division of the central body.
Blankenship, Ed; Holliday, Grant; Keller, Brian
Authoritative guide to TFS 2010 from a dream team of Microsoft insiders and MVPs! Microsoft Visual Studio Team Foundation Server (TFS) has evolved until it is now an essential tool in Microsoft's Application Lifecycle Management suite of productivity tools, enabling collaboration within and among software development teams. By 2011, TFS will replace Microsoft's leading source control system, Visual SourceSafe, resulting in an even greater demand for information about it. Professional Team Foundation Server 2010, written by an accomplished team of Microsoft insiders and Microsoft MVPs, provides
Blankenship, Ed; Holliday, Grant; Keller, Brian
A comprehensive guide to using Microsoft Team Foundation Server 2012 Team Foundation Server has become the leading Microsoft productivity tool for software management, and this book covers what developers need to know to use it effectively. Fully revised for the new features of TFS 2012, it provides developers and software project managers with step-by-step instructions and even assists those who are studying for the TFS 2012 certification exam. You'll find a broad overview of TFS, thorough coverage of core functions, a look at extensibility options, and more, written by Microsoft ins
Step-by-step instructions are included, and the needs of a beginner are fully satisfied by the book. The book consists of plenty of examples with accompanying screenshots and code for an easy learning curve. You are a web developer with knowledge of server-side scripting and experience installing applications on the server. You want more than Google Maps, offering dynamically built maps on your site with your latest geospatial data stored in MySQL, PostGIS, MSSQL, or Oracle. If this is the case, this book is meant for you.
To address the problem of vulnerable people getting lost or going missing, a track-record system is designed. The Android mobile terminal is used as the platform and, with the help of the Auto Navi Map Android SDK positioning function, realizes positioning-data acquisition on mobile terminals; Apache Tomcat and a MySQL database are used to build a server with a C/S (client/server) architecture. The mobile terminal interacts with the server through JSON data transmission over the HTTP protocol, and the server saves the relevant information provided by the mobile terminal through JDBC to the corresponding tables in the database. The system can be used to track the movements of family and friends; compared with a PC terminal, it is not only more flexible, convenient and fast, but also real-time and efficient. Through testing, all functions worked normally.
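The terminal-to-server exchange described above uses JSON over HTTP; a minimal sketch of building and decoding such a position report follows (the field names are hypothetical, not taken from the system's actual schema).

```python
import json

def make_position_report(user_id, lat, lng, timestamp):
    """Serialize a location fix as the JSON body the terminal would POST over HTTP."""
    return json.dumps({"userId": user_id, "lat": lat, "lng": lng, "ts": timestamp})

payload = make_position_report("u1001", 39.9042, 116.4074, "2018-05-01T12:00:00Z")
decoded = json.loads(payload)  # the server side parses the same structure
print(decoded["lat"])  # 39.9042
```

On the server side the decoded fields would then be written to the corresponding database table, e.g. via a parameterized INSERT through JDBC as the abstract describes.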
Mobile agents are programs that can move from one site to another in a network, carrying their data and state. Mobile agents are expected to be an essential tool in pervasive computing. In a multi-platform environment, it is important to communicate with mobile agents using only their universal or logical names, not their physical locations. Moreover, in an ad-hoc network environment, an agent can migrate autonomously and communicate with other agents on demand. It is difficult for a mobile agent to correctly track the locations of other agents, because each agent processes its task while moving through the network. In order to realize on-demand mutual communication among mobile agents without any centralized servers, we propose a new information sharing mechanism within mobile agents. The method is completely peer-based and requires no agent servers to manage mobile agent locations. Therefore, a mobile agent can find another mobile agent, communicate with it and share the information stored in that agent without any knowledge of the target mobile agent's location. The basic idea of the mechanism is the introduction of the Agent Ring, Agent Chain and Shadow Agent. With this mechanism, each agent can communicate with other agents in a server-less environment, which is suitable for ad-hoc agent networks, and an agent system can manage agent search and communication efficiently.
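As a rough illustration of the server-less lookup idea behind an Agent Ring (the paper's Agent Chain and Shadow Agent mechanics are not reproduced here), walking a ring of peers until the logically named agent is reached can be sketched as:

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.next = None  # successor in the Agent Ring

def find(start, target_name, max_hops):
    """Walk the ring from `start` until the named agent is reached or hops run out."""
    agent, hops = start, 0
    while agent.name != target_name and hops < max_hops:
        agent, hops = agent.next, hops + 1
    return agent if agent.name == target_name else None

# Three agents linked into a ring: a -> b -> c -> a.
a, b, c = Agent("a"), Agent("b"), Agent("c")
a.next, b.next, c.next = b, c, a
print(find(a, "c", max_hops=3).name)  # c
```

The point of the sketch is that the lookup needs only each peer's local successor link, never a central location server.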
Berchet, Antoine; Zink, Katrin; Arfire, Adrian; Marjovi, Ali; Martinoli, Alcherio; Emmenegger, Lukas; Brunner, Dominik
As the fraction of people living in urban areas is rapidly increasing worldwide, the impact of air quality on human health in cities is a growing concern not only in developing countries but also in Europe despite the achievements of European air quality legislation. One obstacle to the quantitative assessment of the connections between health and air quality is the very high temporal and spatial variability of air pollutant concentrations within cities. Yet, an important issue for obtaining accurate and spatially highly resolved air pollution data is the trade-off between the high costs of accurate air pollution sensors and the number of such devices required for succinctly monitoring a given geographical area. The OpenSense 2 project aims at establishing air quality data at very high temporal and spatial resolution in the cities of Lausanne and Zurich in Switzerland in order to provide reliable information for epidemiologic studies and for the design of air pollution controls and urban planning. Towards this goal, observations from both stationary reference monitoring stations and low-cost mobile sensors (including sensing platforms anchored on public transport vehicles) are combined with high-resolution air quality modeling throughout the two cities. As a first step, we simulate the 3-dimensional, high-resolution dispersion and distribution of key pollutants using the GRAMM/GRAL modeling system. The GRAMM meteorological meso-scale model calculates wind fields at 100 m resolution accounting for the complex topography and land use within and around the two cities. GRAMM outputs are then used to drive the building-resolving dispersion model GRAL at 5-10m resolution. Further key inputs for GRAL are high resolution emission inventories and the 3-D building structure which are available for both cities. Here, in order to evaluate the ability of the GRAMM/GRAL modeling system to reproduce air pollutant distributions within the two cities of Lausanne and Zurich, we
Jorgensen, Adam; LoForte, Ross; Knight, Brian
An essential how-to guide for experienced DBAs on the most significant product release since 2005! Microsoft SQL Server 2012 introduces major changes throughout SQL Server that will impact how DBAs administer the database. With this book, a team of well-known SQL Server experts introduces the many new features of the most recent version of SQL Server and deciphers how these changes will affect the methods that administrators have been using for years. Loaded with unique tips, tricks, and workarounds for handling the most difficult SQL Server admin issues, this how-to guide deciphers topics s
Gambini, Fabrizio; Pintus, Paolo; Faralli, Stefano; Chiesa, Marco; Preve, Giovan Battista; Cerutti, Isabella; Andriolli, Nicola
A 24-port packaged multi-microring optical network-on-chip has been tested for simultaneous co- and counter-propagating transmissions at the same wavelength at 10 Gbps. In the co-propagating scenario, communications over up to five hops with one interfering signal have been tested, together with transmissions impaired by up to three interfering signals. In the counter-propagating scenario, the device performance has been investigated exploiting the ring resonators in both shared-source and shared-destination configurations. The spectral characterization is in good agreement with the theoretical results. Bit-error-rate measurements indicate power penalties at BER = 10^-9 limited to (i) 0.5 dB in the co-propagating scenarios, independently of the number of interfering transmissions, (ii) 0.8 dB in the counter-propagating scenario with the shared-source configuration, and (iii) 2 dB in the counter-propagating scenario with the shared-destination configuration.
Oprea, Tudor I; Nielsen, Sonny Kim; Ursu, Oleg; Yang, Jeremy J; Taboureau, Olivier; Mathias, Stephen L; Kouskoumvekaki, Irene; Sklar, Larry A; Bologa, Cristian G
Finding new uses for old drugs is a strategy embraced by the pharmaceutical industry, with increasing participation from the academic sector. Drug repurposing efforts focus on identifying novel modes of action, but not in a systematic manner. With intensive data mining and curation, we aim to apply bio- and cheminformatics tools using the DRUGS database, containing 3,837 unique small molecules annotated on 1,750 proteins. These are likely to serve as drug targets and antitargets (i.e., associated with side effects, SE). The academic community, the pharmaceutical sector and clinicians alike could benefit from an integrated, semantic-web compliant computer-aided drug repurposing (CADR) effort, one that would enable deep data mining of associations between approved drugs (D), targets (T), clinical outcomes (CO) and SE. We report preliminary results from text mining and multivariate statistics, based on 7,684 approved drug labels, ADL (Dailymed) via text mining. From the ADL corresponding to 988 unique drugs, the "adverse reactions" section was mapped onto 174 SE, then clustered via principal component analysis into a 5x5 self-organizing map that was integrated into a Cytoscape network of SE-D-T-CO. This type of data can be used to streamline drug repurposing and may result in novel insights that can lead to the identification of novel drug actions.
van Cayseele, P.; Reynaerts, J.
We introduce an analytical framework close to the canonical model of platform competition investigated by Rochet and Tirole (2006) to study pricing decisions in two-sided markets when two or more platforms are needed simultaneously for the successful completion of a transaction. The model developed
Under this project SETECS performed the research, created the design, and built the initial prototype of three groups of security technologies: (a) a middleware security platform, (b) Web services security, and (c) a group security system. The results of the project indicate that the three types of security technologies can be used either individually or in combination, which enables effective and rapid deployment of a number of secure applications in open networking environments. The middleware security platform represents a set of object-oriented security components providing various functions to handle basic cryptography, X.509 certificates, S/MIME and PKCS #7 encapsulation formats, secure communication protocols, and smart cards. The platform has been designed in the form of security engines, including a Registration Engine, a Certification Engine, an Authorization Engine, and a Secure Group Applications Engine. By creating a middleware security platform consisting of multiple independent components, the following advantages have been achieved: object orientation, modularity, simplified development and testing, portability, and simplified extension. The middleware security platform has been fully designed and a preliminary Java-based prototype has been created for the Microsoft Windows operating system. The Web services security system designed in the project consists of technologies and applications that provide authentication (i.e., single sign-on), authorization, and federation of identities in an open networking environment. The system is based on the OASIS SAML and XACML standards for secure Web services. Its topology comprises three major components: the Domain Security Server (DSS), which is the main building block of the system; the Secure Application Server (SAS); and the Secure Client. In addition to the SAML and XACML engines, the authorization system consists of two sets of components: an Authorization Administration System and an Authorization Enforcement System. Federation of identities in multi
Wong, Yen F.; Kegege, Obadiah; Schaire, Scott H.; Bussey, George; Altunc, Serhat; Zhang, Yuwen; Patel, Chitra
National Aeronautics and Space Administration (NASA) CubeSat missions are expected to grow rapidly in the next decade. Higher-data-rate CubeSats are transitioning away from Amateur Radio bands to higher frequency bands. A high-level communication architecture for future space-to-ground CubeSat communication was proposed within NASA Goddard Space Flight Center. This architecture addresses CubeSat direct-to-ground communication, CubeSat to Tracking and Data Relay Satellite System (TDRSS) communication, CubeSat constellation with mothership direct-to-ground communication, and CubeSat constellation with mothership communication through K-Band Single Access (KSA). A study has been performed to explore this communication architecture, through simulations, analyses, and identifying technologies, to develop the optimum communication concepts for CubeSat communications. This paper will present details of the simulation and analysis that include CubeSat swarm, daughter ship-mother ship constellation, Near Earth Network (NEN) S- and X-band direct-to-ground link, TDRS Multiple Access (MA) array vs. Single Access mode, notional transceiver/antenna configurations, ground asset configurations, and Code Division Multiple Access (CDMA) signal trades for the daughter-mother CubeSat constellation inter-satellite crosslink. Results of the Space Science X-band 10 MHz maximum achievable data rate study will be summarized. An assessment of the Technology Readiness Level (TRL) of current CubeSat communication technology capabilities will be presented. Compatibility testing of the CubeSat transceiver through the NEN and the Space Network (SN) will be discussed. Based on the analyses, signal trade studies, and technology assessments, the functional design and performance requirements as well as operation concepts for future CubeSat end-to-end communications will be derived.
This book utilizes a tutorial-based approach, focused on the practical customization of key features of Team Foundation Server for collaborative enterprise software projects. This practical guide is intended for those who want to extend TFS. It is for intermediate users who have an understanding of TFS; basic coding skills will be required for the more complex customizations.
Chmielewski, Ł.; Hoepman, J.H.; Rossum, P. van
Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the
Adeyemi, Oluseyi; Slepniov, Dmitrij; Wæhrens, Brian Vejrum
The purpose of this paper is to further our understanding of multinational companies building server capabilities in China. The paper is based on the cases of two western companies with operations in China. The findings highlight a number of common patterns in the 1) managerial challenges related...
Although multimedia compression formats and protocols to stream such content have been around for a long time, there has been limited success in the adoption of open standards for streaming over IP (Internet Protocol) networks. The elements of such an end-to-end system will be introduced, outlining the responsibilities of each element. The technical and financial challenges in building a viable multimedia streaming end-to-end system will be analyzed in detail in this paper, outlining some solutions and areas for further research. Also, the recent migration to IP in backend video delivery network infrastructures has made it possible to use IP-based media streaming solutions in non-IP last-mile access networks, such as cable and wireless networks, in addition to DSL networks. The advantages of using IP streaming solutions in such networks will be outlined. However, there is a different set of challenges posed by such applications. The real-time constraints are acute in each element of the media delivery end-to-end system. Meeting these real-time constraints in general-purpose, non-real-time server systems is quite demanding. Quality of service, resource management, session management, fail-over, reliability, and cost are some important but challenging requirements in such systems. These will also be analyzed with suggested solutions. Content protection and rights management requirements are also very challenging for open-standards-based multimedia delivery systems. Interoperability unfortunately interferes with security in most current-day systems. Some approaches to solve the interoperability problems will also be presented. The requirements, challenges, and possible solutions for delivering broadcast, on-demand, and interactive video delivery applications for IP-based media streaming systems will be analyzed in detail.
Full Text Available ABSTRACT Today's approach to handling network problems is not effective, because remediation only begins once a user reports that the service they use has failed; in general, an administrator assumes the server is fine as long as no user of its services complains. An SMS gateway server was designed and implemented so that notifications are sent directly to the administrator without involving the users, allowing the administrator to take action before users file complaints. Nagios is used as the monitoring server, producing status notifications for each server. Servers are checked every 10 seconds, and a notification is issued when a server's problem state has not changed within a 180-second window. Gammu serves as the SMS gateway that delivers the notification to the administrator. The information passed from Nagios to Gammu is formatted as text and sent via SMS, up to 160 characters, containing the IP address of the failing server. Notifications are sent only within the days and hours specified in the Nagios system configuration files, so as not to disturb the administrator outside working hours. Keywords: SMS, Nagios, Monitoring, Gammu, Notification
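The polling and notification rule described above (10-second checks, an alert once a problem state has persisted for 180 seconds, and a 160-character SMS carrying the failing server's IP address) can be sketched as follows. This is an illustrative sketch with hypothetical helper names, not the actual Nagios or Gammu configuration:

```python
CHECK_INTERVAL = 10   # seconds between server checks
NOTIFY_AFTER = 180    # notify once a problem state has persisted this long
SMS_LIMIT = 160       # single-SMS character limit

def build_sms(ip_address: str, state: str) -> str:
    """Format the alert text and truncate it to fit in one SMS."""
    text = f"ALERT: server {ip_address} is {state}"
    return text[:SMS_LIMIT]

def should_notify(down_since: float, now: float) -> bool:
    """Send an SMS only when the problem has persisted for NOTIFY_AFTER seconds."""
    return (now - down_since) >= NOTIFY_AFTER

# simulate a server that goes DOWN at t=0, with checks every 10 s:
# the first check that triggers a notification is at t=180
alerts = [t for t in range(0, 300, CHECK_INTERVAL) if should_notify(0, t)]
```

The thresholds mirror the figures in the abstract; in the real system they would live in the Nagios object configuration, with Gammu invoked as the notification command.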
This final project discusses how to build a proxy server in a LAN. The LAN is built on a star topology, with the server machine acting as both the gateway server and the proxy server, so that no additional router device is needed to serve as the gateway. The proxy server is built using transparent mode, so client machines do not need to configure the proxy server port in their web browsers. The resul...
Maneta, M. P.; Johnson, L.; Kimball, J. S.
Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state-of-the-art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource-intensive remote sensing data, as well as hydrologic and other ancillary information, occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a lightweight `app` that
Johnson, Lee F.; Maneta, Marco P.; Kimball, John S.
Water cycle extremes such as droughts and floods present a challenge for water managers and for policy makers responsible for the administration of water supplies in agricultural regions. In addition to the inherent uncertainties associated with forecasting extreme weather events, water planners need to anticipate water demands and water user behavior in atypical circumstances. This requires the use of decision support systems capable of simulating agricultural water demand with the latest available data. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. In previous work we have demonstrated novel methodologies to use satellite-based observational technologies, in conjunction with hydro-economic models and state-of-the-art data assimilation methods, to enable robust regional assessment and prediction of drought impacts on agricultural production, water resources, and land allocation. These methods create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents. The methods can be driven with information from existing satellite-derived operational products, such as the Satellite Irrigation Management Support system (SIMS) operational over California, the Cropland Data Layer (CDL), and using a modified light-use efficiency algorithm to retrieve crop yield from the synergistic use of MODIS and Landsat imagery. Here we present an integration of this modeling framework in a client-server architecture based on the Hydra platform. Assimilation and processing of resource-intensive remote sensing data, as well as hydrologic and other ancillary information, occur on the server side. This information is processed and summarized as attributes in water demand nodes that are part of a vector description of the water distribution network. With this architecture, our decision support system becomes a lightweight 'app' that
Buchan, Daniel W A; Jones, David T
In this paper, we present the results for the MetaPSICOV2 contact prediction server in the CASP12 community experiment (http://predictioncenter.org). Over the 35 assessed Free Modelling target domains the MetaPSICOV2 server achieved a mean precision of 43.27%, a substantial increase relative to the server's performance in the CASP11 experiment. In the following paper, we discuss improvements to the MetaPSICOV2 server, covering both changes to the neural network and attempts to integrate contact predictions on a domain basis into the prediction pipeline. We also discuss some limitations in the CASP12 assessment which may have overestimated the performance of our method. © 2017 The Authors Proteins: Structure, Function and Bioinformatics Published by Wiley Periodicals, Inc.
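For context, the per-target precision that CASP averages into figures like the 43.27% reported above is simply the fraction of predicted residue-residue contacts that are present in the native structure. A minimal sketch of that per-target computation (toy residue pairs, not CASP data):

```python
def contact_precision(predicted, true_contacts):
    """Fraction of predicted residue pairs that are true contacts.

    predicted:     list of (i, j) residue-index pairs put forward by the server
    true_contacts: set of (i, j) pairs observed in the native structure
    """
    if not predicted:
        return 0.0
    hits = sum(1 for pair in predicted if pair in true_contacts)
    return hits / len(predicted)

# toy example: 3 of the 4 predicted pairs are real contacts -> precision 0.75
pred = [(3, 27), (5, 40), (8, 61), (10, 70)]
native = {(3, 27), (5, 40), (8, 61), (12, 80)}
```

In CASP assessment the predicted list is typically restricted to the top-scoring pairs (e.g. the top L/5 by confidence, L being the domain length) before precision is computed.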
Diaz, Philip; Harris, W. C.
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object-relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
Yan, Bin; Wang, Panwen; Wang, Junwen; Boheler, Kenneth R
Integration and analysis of high content omics data have been critical to the investigation of molecule interactions (e.g., DNA-protein, protein-protein, chemical-protein) in biological systems. Human proteomic strategies that provide enriched information on cell surface proteins can be utilized for repurposing of drug targets and discovery of disease biomarkers. Although several published resources have proved useful to the analysis of these interactions, our newly developed web-based platform Targets-search has the capability of integrating multiple types of omics data to unravel their association with diverse molecule interactions and disease. Here, we describe how to use Targets-search, for the integrated and systemic exploitation of surface proteins to identify potential drug targets, which can further be used to analyze gene regulation, protein networks, and possible biomarkers for diseases and cancers. To illustrate this process, we have taken data from Ewing's sarcoma to identify surface proteins differentially expressed in Ewing's sarcoma cells. These surface proteins were then analyzed to determine which ones were known drug targets. The information suggested putative targets for drug repurposing and subsequent analyses illustrated their regulation by the transcription factor EWSR1.
Gille, Christoph; Fähling, Michael; Weyand, Birgit; Wieland, Thomas; Gille, Andreas
Alignment-Annotator is a novel web service designed to generate interactive views of annotated nucleotide and amino acid sequence alignments (i) de novo and (ii) embedded in other software. All computations are performed server-side. Interactivity is implemented in HTML5, a language native to web browsers. The alignment is initially displayed using default settings and can be modified with the graphical user interfaces. For example, individual sequences can be reordered or deleted using drag and drop, amino acid color code schemes can be applied, and annotations can be added. Annotations can be made manually or imported (from BioDAS servers, UniProt, the Catalytic Site Atlas and the PDB). Some edits take immediate effect while others require server interaction and may take a few seconds to execute. The final alignment document can be downloaded as a zip archive containing the HTML files. Because of the use of HTML, the resulting interactive alignment can be viewed on any platform, including Windows, Mac OS X, Linux, Android and iOS, in any standard web browser. Importantly, neither plugins nor Java are required, and therefore Alignment-Annotator represents the first interactive browser-based alignment visualization. Available at http://www.bioinformatics.org/strap/aa/ and http://strap.charite.de/aa/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Bart-Pedersen, S; CERN. Geneva. BE Department
CERN’s Proton Synchrotron (CPS) has been fitted with a new Trajectory Measurement System (TMS). Analogue signals from forty Beam Position Monitors (BPM) are digitized at 125 MS/s, and then further treated in the digital domain to derive the positions of all individual particle bunches on the fly. Large FPGAs are used to handle the digital processing. The system fits in fourteen plug-in modules distributed over three half-width cPCI crates that store data in circular buffers. They are connected to a Linux computer by means of a private Gigabit Ethernet segment. Dedicated server software, running under Linux, knits the system into a coherent whole. The corresponding low-level software using FESA (BPMOPS class) was implemented while respecting the standard interface for beam position measurements. The BPMOPS server publishes values on request after data extraction and conversion from the TMS server. This software is running on a VME Lynx-OS platform and through dedicated electronics it can therefore control th...
Teach yourself to use SQL Server 2008 Analysis Services for business intelligence, one step at a time. You'll start by building your understanding of the business intelligence platform enabled by SQL Server and the Microsoft Office System, highlighting the role of Analysis Services. Then, you'll create a simple multidimensional OLAP cube and progressively add features to help improve, secure, deploy, and maintain an Analysis Services database. You'll explore core Analysis Services 2008 features and capabilities, including dimension, cube, and aggregation design wizards; a new attribute relatio
Zakhor, Avideh; Henzinger, Thomas; Trevedi, Kishor; Ammar, Mostafa; Lynch, Nancy; Shin, Kang
.... We have focused on multimedia delivery in traditional client-server architectures, both in the case of the Internet and wireless networks, as well as on peer-to-peer content delivery and on mobile ad-hoc networks...
Yan, Huan; Gao, Deyun; Su, Wei; Foh, Chuan Heng; Zhang, Hongke; Vasilakos, Athanasios V
The in-network caching strategy in named data networking can not only reduce the unnecessary fetching of content from the original content server deep in the core network but also improve the user response...
Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.
Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, generally speaking, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OPeNDAP, ...), the server is designed to manage plugins: - StorageUnits: which enable reading specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format). - FrontDesks: which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OPeNDAP). In between, a third type of plugin may be inserted: - TransformationUnits: which enable ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to meters below the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron FrontDesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observations & Measurements and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner-interoperability level enables to capitalize ocean business expertise in software development without being indentured to
Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well-studied by the machine learning community. However, the potentials of available methods remain largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific for cytometry data. To bridge this gap, we present Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely-used and robust R platform allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/.
Many western companies have moved part of their operations to China in order to take advantage of cheap resources and/or to gain access to a high-potential market. Depending on motive, offshore facilities usually start either as “sales-only”, selling products exported by headquarters, or “production-only”, exporting parts and components back to headquarters for sales in the home country. In the course of time, the role of offshore subsidiaries in a company’s operations network tends to change and, with that, the capabilities of the subsidiaries. Focusing on Danish subsidiaries in China, the objective of this project is to identify and explain trajectories of offshore subsidiary capability development. Given the nature of this objective, the chief methodology is longitudinal, partly retrospective, partly real-time, case studies.
Choudhury, Gagan L.
Modern communication networks carry several grades of data, voice and video sessions typically using single-service or multi-service platforms employing IP, ATM or MPLS protocol mechanisms. It is well established that in many instances the session duration may have a heavy-tailed distribution [1-2]. We explore the impact of such distributions on the response time performance of user sessions. We concentrate mainly on a single output link (potentially a bottleneck on the data path) of a multi-service platform. First-come-first-served and processor sharing type scheduling mechanisms are considered (weighted fair queueing and weighted round robin are implementable approximations to generalized processor sharing). The output link is modeled as a single-server (no limit on individual session rate) or multiple servers (rate limit on individual sessions either inherently as for CBR applications or for congestion avoidance as in a cable access network). Also, the impacts of bandwidth differences between input and output links are considered. It is observed that in some cases, heavy-tailed session durations have significant impacts but those impacts may be effectively neutralized using appropriate scheduling or rate control mechanisms.
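A common way to model the heavy-tailed session durations discussed above is a Pareto distribution, whose tail decays polynomially rather than exponentially. The sketch below (illustrative, not taken from the paper) draws samples by inverting the CDF F(x) = 1 - (xm/x)^alpha:

```python
import random

def pareto_duration(alpha, xm=1.0, u=None):
    """Inverse-CDF sample from Pareto(alpha, xm): x = xm / (1 - u)^(1/alpha).

    For alpha <= 2 the variance is infinite, and for alpha <= 1 even the mean
    diverges -- the regimes in which FCFS response times degrade badly, which
    is why processor-sharing disciplines are attractive for such traffic.
    The optional u lets callers supply a fixed uniform draw for testing.
    """
    if u is None:
        u = random.random()
    return xm / (1.0 - u) ** (1.0 / alpha)

# the median of Pareto(alpha, xm) is xm * 2^(1/alpha)
median_est = pareto_duration(2.0, xm=1.0, u=0.5)
```

Feeding such durations into an FCFS queue simulation, as opposed to exponential durations with the same mean, is a simple way to reproduce the qualitative effects the paper studies.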
Lee, Chengming; Chen, Rongshun
Recently, saving cooling power in servers by controlling the fan speed has attracted considerable attention because of the increasing demand for high-density servers. This paper presents an optimal self-tuning proportional-integral-derivative (PID) controller, combining a PID neural network (PIDNN) with fan-power-based optimization of the transient-state temperature response in the time domain, for a server fan cooling system. Because the thermal model of the cooling system is nonlinear and complex, a server mockup system simulating a 1U rack server was constructed, and a fan power model was created using a third-order nonlinear curve fit to determine the cooling power consumed by the fan speed control. A PIDNN with a time-domain criterion is used to tune all PID gains online and optimally. The proposed controller was validated through step-response experiments as the server moved from the low to the high power state. The results show that up to 14% of a server's fan cooling power can be saved if the fan control permits a slight overshoot in the temperature response of the electronic components, which may provide a time-saving strategy for tuning the PID controller that governs the server fan speed during low fan power consumption.
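The controller at the core of the approach above is a standard discrete PID loop; a minimal sketch follows. The gains and setpoint here are illustrative only, and the paper's PIDNN-based online tuning is not reproduced:

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch; the paper tunes
    the gains online with a PID neural network, which is not shown here)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        # standard discrete PID: proportional + integrated + differenced error
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# e.g. a component at 50 C against a hypothetical 45 C setpoint:
# the negative output would be mapped to increased fan effort (more cooling)
fan_cmd = PID(kp=2.0, ki=0.5, kd=0.0, dt=1.0).update(45.0, 50.0)  # -12.5
```

In the paper's setting the controller output drives fan speed, and the optimization trades a slight temperature overshoot for lower fan power.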
This practical guide leads you through numerous aspects of working with PostgreSQL. Step-by-step examples allow you to easily set up and extend PostgreSQL. "PostgreSQL Server Programming" is for moderate to advanced PostgreSQL database professionals. To get the best understanding of this book, you should have general experience in writing SQL, a basic idea of query tuning, and some coding experience in a language of your choice.
Ziegler, C.; Schilling, D. L.
Two networks consisting of single-server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalences that exist between the two networks are derived. This leads to the introduction of an important concept in delay decomposition. It is shown that the waiting time experienced by a customer can be decomposed into two basic components, called self-delay and interference delay.
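In a FIFO single-server queue with constant service time, the waiting time evolves by Lindley's recursion, W[n+1] = max(0, W[n] + s - A[n]), where s is the service time and A[n] the nth interarrival gap. The sketch below computes these waiting times for an arbitrary arrival stream; it is a generic illustration, and the paper's self-delay/interference-delay decomposition is not reproduced:

```python
def waiting_times(interarrivals, service_time):
    """Lindley recursion for a FIFO single-server queue with constant service:
    W[n+1] = max(0, W[n] + s - A[n]).

    interarrivals: gaps A[0], A[1], ... between successive customer arrivals
    service_time:  the constant service time s
    Returns the waiting time of each customer, starting from an empty system.
    """
    w = [0.0]
    for gap in interarrivals:
        w.append(max(0.0, w[-1] + service_time - gap))
    return w

# service time 2: a closely-following customer (gap 1) waits 1,
# while a gap of 3 lets the server drain the backlog entirely
waits = waiting_times([1.0, 3.0, 1.0], 2.0)  # [0.0, 1.0, 0.0, 1.0]
```

Any general interarrival distribution from the paper's setting can be plugged in by sampling the `interarrivals` list from it.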
Vassallo, Keith; Garg, Lalit; 2nd International Conference on Computers and Management
This paper provides an overview of the technologies currently (2016) available and in development which allow the development of cross-platform applications. Both server-side and client-side applications are considered, as well as applications for web, desktop and mobile devices such as smartphones and tablets. A web-based approach is recommended for the development of truly cross-platform applications across devices and operating systems. Topics discussed include the contemporary backgroun...