WorldWideScience

Sample records for critically loaded multi-server

  1. Optimizing the Loads of multi-player online game Servers using Markov Chains

    DEFF Research Database (Denmark)

    Saeed, Aamir; Olsen, Rasmus Løvenstein; Pedersen, Jens Myrup

    2015-01-01

    that is created due to the load balancing of servers. Load balancing among servers is sensitive to correct status information. Markov-based load prediction is introduced in this paper to predict the load of under-loaded servers, based on the arrival (μ) and departure (λ) rates of players. The prediction based...... that needs to be considered when developing a load balancing algorithm, namely the reliability of the information that is shared. Simulation results show that Markov-based prediction of load information performed better than normal load status information sharing....
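
    The birth-death view of server load sketched above can be made concrete. The following is an illustrative reconstruction, not the paper's implementation: the player count on a server is modeled as a Markov chain that increases at the arrival rate and decreases in proportion to the current population, and the stationary distribution gives the load prediction. The rates and capacity below are assumed values.

```python
# Illustrative reconstruction (not the paper's code): the number of
# players on a game server as a birth-death Markov chain. Players
# arrive at `arrival_rate` and each departs independently at
# `departure_rate`; the chain is truncated at `capacity`.

def stationary_load_distribution(arrival_rate, departure_rate, capacity):
    """Stationary probabilities pi[n] of n players being on the server."""
    weights = [1.0]
    for n in range(1, capacity + 1):
        # Detailed balance: pi[n] = pi[n-1] * arrival_rate / (n * departure_rate)
        weights.append(weights[-1] * arrival_rate / (n * departure_rate))
    total = sum(weights)
    return [w / total for w in weights]

def expected_load(arrival_rate, departure_rate, capacity):
    """Long-run expected number of players, used as the load prediction."""
    pi = stationary_load_distribution(arrival_rate, departure_rate, capacity)
    return sum(n * p for n, p in enumerate(pi))
```

    For example, with an arrival rate of 2 players per minute and a mean session length of 1 minute, the predicted long-run load is about 2 players.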

  2. Analisis Perbandingan Load Balancing Web Server Tunggal Dengan Web Server Cluster Menggunakan Linux Virtual Server

    OpenAIRE

    Lukitasari, Desy; Oklilas, Ahmad Fali

    2010-01-01

    A virtual server is a server with high scalability and high availability built on top of a cluster of several real servers. The real servers and the load balancer are interconnected either over a high-speed local network or over geographically dispersed links. The load balancer can dispatch requests to different servers, making the parallel services of a cluster appear at a single IP address, and request dispatching can use IP load...

  3. Analisis Kinerja Penerapan Container untuk Load Balancing Web Server

    Directory of Open Access Journals (Sweden)

    Muhammad Agung Nugroho

    2016-12-01

    Containers are a recent virtualization technology. Containers make it easier for system administrators to manage applications on a server. Docker containers can be used to build, ship, and run applications written in different programming languages, at any layer. An application can be packaged in a container, and the application can then run in any environment, anywhere. Containers can also be used for load balancing by leveraging HAProxy. Load balancing can address the problem of a web server overloaded with requests; it is one method to increase the scalability of a web server while reducing its workload. The experiments applied request loads to a single container and to multiple containers and compared their performance. The performance analysis uses processor, memory, and service-process metrics. The tests were run on a Raspberry Pi. The results show that multiple containers can be used to develop a load balancing method; the tests indicate that the Raspberry Pi's performance can be optimal because the processor load is shared.

  4. A Secured Load Mitigation and Distribution Scheme for Securing SIP Server

    Directory of Open Access Journals (Sweden)

    Vennila Ganesan

    2017-01-01

    Managing the performance of the Session Initiation Protocol (SIP) server under heavy load conditions is a critical task in a Voice over Internet Protocol (VoIP) network. In this paper, a two-tier model is proposed for the security, load mitigation, and distribution issues of the SIP server. In the first tier, the proposed handler segregates and drops the malicious traffic. The second tier provides uniform load distribution, using the least session termination time (LSTT) algorithm. Besides, the mean session termination time is minimized by reducing the waiting time of the SIP messages. The efficiency of the LSTT algorithm is evaluated on an experimental test bed, both with and without the handler. The experimental results establish that the proposed two-tier model improves the throughput and the CPU utilization. It also reduces the response time and error rate while preserving the quality of multimedia session delivery. This two-tier model provides robust security, dynamic load distribution, appropriate server selection, and session synchronization.

  5. Middleware for multi-client and multi-server mobile applications

    NARCIS (Netherlands)

    Rocha, B.P.S.; Rezende, C.G.; Loureiro, A.A.F.

    2007-01-01

    With the popularization of mobile computing, many developers have faced problems due to the great heterogeneity of devices. To address this issue, we present in this work a middleware for multi-client and multi-server mobile applications. We assume that the middleware at the server side has no resource...

  6. Two Stage Secure Dynamic Load Balancing Architecture for SIP Server Clusters

    Directory of Open Access Journals (Sweden)

    G. Vennila

    2014-08-01

    Session Initiation Protocol (SIP) is a signaling protocol that emerged with an aim to enhance IP network capabilities in terms of complex service provision. SIP server scalability with load balancing is of great concern due to the dramatic increase in SIP service demand. Load balancing of session methods (request/response) and security measures optimize the SIP server to regulate network traffic in Voice over Internet Protocol (VoIP). Establishing a honeywall prior to the load balancer significantly reduces SIP traffic and drops inbound malicious load. In this paper, we propose the Active Least Call in SIP Server (ALC_Server) algorithm, which fulfills objectives such as congestion avoidance, improved response times, throughput, resource utilization, reduced server faults, scalability, and protection of SIP calls from DoS attacks. From the test bed, the proposed two-tier architecture demonstrates that the ALC_Server method dynamically controls overload and provides robust security and uniform load distribution for SIP servers.

  7. Implementasi Cluster Server pada Raspberry Pi dengan Menggunakan Metode Load Balancing

    Directory of Open Access Journals (Sweden)

    Ridho Habi Putra

    2016-06-01

    A server is an important part of a service in a computer network, and the server's role can determine the quality of that service. Server failure can be caused by several factors, including hardware damage, the network system, and the power supply. One solution to overcome server failure in a computer network is server clustering. The aim of this research is to measure the capability of the Raspberry Pi (Raspi) when used as a web server. The Raspberry Pi used is a Raspberry Pi 2 Model B with an ARM Cortex-A7 processor running at 900 MHz and 1 GB of RAM, running the Linux Debian Wheezy operating system. The setup uses four Raspberry Pi devices: two Raspis serve as web servers, while the other two serve as the load balancer and the database server. The cluster is built using a load balancing method in which the server load is spread evenly across the nodes. Testing compares the performance of a Raspberry Pi handling traffic alone, without a load balancer, against Raspberry Pis using a load balancer to spread the load across the cluster members.

  8. MultiSETTER: web server for multiple RNA structure comparison.

    Science.gov (United States)

    Čech, Petr; Hoksza, David; Svozil, Daniel

    2015-08-12

    Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user-friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as a list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for multiple RNA structure alignment. The MultiSETTER server offers visual inspection of an alignment in 3D space, which may reveal structural and functional relationships not captured by other multiple alignment methods based either on sequence or on secondary structure motifs.

  9. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. By spreading cloud deployment across multiple service providers, it creates space for competitive prices that minimize the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered, with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and combined to get the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.
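
    The layered combination step can be illustrated with a series-reliability sketch. The series model is our assumption, not necessarily the paper's exact combination rule: the application is taken to be up only if the application, virtualization, and server layers are all up, so layer reliabilities multiply.

```python
# Series-model sketch (our assumption of the combination rule): the
# multi-cloud application works only if every layer works, so layer
# reliabilities multiply. The example values are illustrative.

def system_reliability(layer_reliabilities):
    """Product of per-layer reliabilities under a series model."""
    r = 1.0
    for rel in layer_reliabilities:
        r *= rel
    return r

# e.g. application 0.99, virtualization 0.98, server 0.97:
# system_reliability([0.99, 0.98, 0.97]) -> about 0.941
```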

  10. Deep Recurrent Model for Server Load and Performance Prediction in Data Center

    Directory of Open Access Journals (Sweden)

    Zheng Huang

    2017-01-01

    Recurrent neural networks (RNNs) have been widely applied to many sequential tagging tasks, such as natural language processing (NLP) and time series analysis, and it has been proved that RNNs work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which requires many unrealistic hypotheses. Our model is built based on events (user requests), which are the root cause of server performance. We predict the performance of the servers using RNN-LSTM by analyzing the logs of servers in a data center, which contain users' access sequences. Previous work on workload prediction could not generate detailed simulated workloads, which are useful in testing the working condition of servers. Our method provides a new way to reproduce user request sequences to solve this problem by using RNN-LSTM. Experimental results show that our models perform well in generating load and predicting performance on a data set logged by an online service. We ran experiments with the nginx web server and the MySQL database server, and our methods can be easily applied to other servers in a data center.
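
    The log-to-training-data step implied above can be sketched as follows. This is an illustrative preprocessing helper under our own naming, and the window length is an assumed choice; the paper's pipeline may differ: a per-interval request-count series is cut into fixed-length input windows, each paired with the next count as the prediction target for an RNN-LSTM regressor.

```python
# Illustrative preprocessing helper (our naming; the window length is
# an assumption): slice a per-interval request-count series into
# fixed-length inputs, each paired with the next count as the target
# for an RNN-LSTM regressor.

def make_windows(series, window=4):
    """Return (inputs, targets) lists of sliding windows over `series`."""
    inputs, targets = [], []
    for i in range(len(series) - window):
        inputs.append(series[i:i + window])
        targets.append(series[i + window])
    return inputs, targets
```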

  11. An efficient biometric and password-based remote user authentication using smart card for Telecare Medical Information Systems in multi-server environment.

    Science.gov (United States)

    Maitra, Tanmoy; Giri, Debasis

    2014-12-01

    Medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to visit a doctor in a critical or urgent period can communicate with a doctor through a medical server via the internet from home. An authentication mechanism is needed in TMIS to hide the secret information of both parties, namely a server and a patient. Recent research includes a patient's biometric information as well as a password to design remote user authentication schemes that enhance the security level. In a single-server environment, one server is responsible for providing services to all authorized remote patients. However, a problem arises if a patient wishes to access several branch servers: he/she needs to register with each branch server individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for the multi-server environment. In this paper, we show that in their scheme, a non-registered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in the multi-server environment, where patients register once with a root telecare server called the registration center (RC) to get services from all telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.

  12. Multi-server blind quantum computation over collective-noise channels

    Science.gov (United States)

    Xiao, Min; Liu, Lin; Song, Xiuli

    2018-03-01

    Blind quantum computation (BQC) enables ordinary clients to securely outsource their computation tasks to costly quantum servers. Besides two essential properties, namely correctness and blindness, practical BQC protocols should also keep clients as classical as possible and tolerate faults from nonideal quantum channels. In this paper, using logical Bell states as the quantum resource, we propose multi-server BQC protocols over a collective-dephasing noise channel and a collective-rotation noise channel, respectively. The proposed protocols permit a completely or almost classical client, meet the correctness and blindness requirements of a BQC protocol, and are typically practical BQC protocols.

  13. An exact solution for the state probabilities of the multi-class, multi-server queue with preemptive priorities

    NARCIS (Netherlands)

    Sleptchenko, Andrei; van Harten, Aart; van der Heijden, Matthijs C.

    2005-01-01

    We consider a multi-class, multi-server queueing system with preemptive priorities. We distinguish two groups of priority classes that consist of multiple customer types, each having their own arrival and service rate. We assume Poisson arrival processes and exponentially distributed service times.
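
    One well-known consequence of preemptive priorities, consistent with the model above, is that the highest-priority group effectively sees an M/M/c queue of its own, so its delay probability is given by the Erlang C formula. Below is a minimal sketch of that standard formula, offered as background rather than the paper's exact state-probability solution.

```python
# Standard Erlang C formula (background, not the paper's exact
# state-probability solution): probability that an arriving
# highest-priority customer must wait in an M/M/c queue.

from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """P(wait > 0) for M/M/c; requires offered load / servers < 1."""
    a = arrival_rate / service_rate       # offered load in Erlangs
    rho = a / servers
    assert rho < 1, "queue must be stable"
    below = sum(a**k / factorial(k) for k in range(servers))
    top = a**servers / (factorial(servers) * (1 - rho))
    return top / (below + top)
```

    With a single server the formula reduces to the utilization itself, e.g. erlang_c(0.5, 1.0, 1) gives 0.5.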

  14. PENERAPAN ARSITEKTUR MULTI-TIER DENGAN DCOM DALAM SUATU SISTEM INFORMASI

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Implementing an information system using a two-tier architecture suffers from several critical issues: component reuse, scalability, maintenance, and data security. The multi-tiered client/server architecture, using DCOM technology, provides a good resolution to these problems. The software was built using Delphi 4 Client/Server Suite and Microsoft SQL Server 7.0 as the database server software. The multi-tiered application is partitioned into three tiers: the client application, which provides presentation services; the server application, which provides application services; and the database server, which provides database services. This multi-tiered application software can be built in two models: a client/server Windows model and a client/server Web model with ActiveX Form technology. This research found that building a multi-tiered architecture with DCOM technology provides many benefits, such as centralized application logic in the middle tier, thin client applications, distribution of the data-processing load across several machines, increased security through the ability to hide data, and fast maintenance without installing database drivers on every client.

  15. X-Switch: An Efficient, Multi-User, Multi-Language Web Application Server

    Directory of Open Access Journals (Sweden)

    Mayumbo Nyirenda

    2010-07-01

    Web applications are usually installed on and accessed through a Web server. For security reasons, these Web servers generally provide very few privileges to Web applications, defaulting to executing them in the realm of a guest account. In addition, performance is often a problem, as Web applications may need to be reinitialised with each access. Various solutions have been designed to address these security and performance issues, mostly independently of one another, but most have been language- or system-specific. The X-Switch system is proposed as an alternative Web application execution environment, with more secure user-based resource management, persistent application interpreters and support for arbitrary languages/interpreters. Thus it provides a general-purpose environment for developing and deploying Web applications. The X-Switch system's experimental results demonstrated that it can achieve a high level of performance. Furthermore, it was shown that X-Switch can provide functionality matching that of existing Web application servers, but with the added benefit of multi-user support. Finally, the X-Switch system showed that it is feasible to completely separate the deployment platform from the application code, thus ensuring that the developer does not need to modify his/her code to make it compatible with the deployment platform.

  16. Cryptanalysis and improvement of a biometrics-based authentication and key agreement scheme for multi-server environments.

    Science.gov (United States)

    Yang, Li; Zheng, Zhiming

    2018-01-01

    With advancements in wireless technologies, the study of biometrics-based multi-server authenticated key agreement schemes has gained a lot of momentum. Recently, Wang et al. presented a three-factor authentication protocol with key agreement and claimed that their scheme was resistant to several prominent attacks. Unfortunately, this paper indicates that their protocol is still vulnerable to the user impersonation attack, the privileged insider attack and the server spoofing attack. Furthermore, their protocol cannot provide perfect forward secrecy. As a remedy for these problems, we propose a biometrics-based authentication and key agreement scheme for multi-server environments. Compared with various related schemes, our protocol achieves stronger security and provides more functionality properties. Besides, the proposed protocol shows satisfactory performance in terms of storage requirement, communication overhead and computational cost. Thus, our protocol is suitable for expert systems and other multi-server architectures. Consequently, the proposed protocol is more appropriate for distributed networks.

  18. An Improvement of Robust Biometrics-Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards.

    Science.gov (United States)

    Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho

    2015-01-01

    In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that arises from users' management of different identities and passwords. For this reason, numerous user authentication schemes designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider and user impersonation attacks, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties.

  19. Model of load balancing using reliable algorithm with multi-agent system

    Science.gov (United States)

    Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.

    2017-04-01

    Massive technology development tracks the growth of internet users, which increases network traffic activity and thus the load on the system. The use of a reliable algorithm and mobile agents in distributed load balancing is a viable solution to handle the load issue in a large-scale system. A mobile agent works to collect resource information and can migrate according to a given task. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. In the system overview, the methodology consisted of defining the identification system, specification requirements, network topology and design of the system infrastructure. The simulation issued 1800 requests over 10 s from users to the servers and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and then comparing them with an existing method. Results of the performed simulation show that the LFB method with a mobile agent can perform load balancing efficiently across all backend servers without bottlenecks, with low risk of server overload, and reliably.
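
    The least-time-first-byte selection rule can be sketched as follows. The function names and the exponential smoothing are our assumptions; in the paper the measurement is combined with status reports carried by mobile agents. Requests go to the backend with the smallest recently observed time-to-first-byte.

```python
# Sketch of least-time-first-byte backend selection (function names and
# the exponential smoothing are our assumptions, not the paper's code).

def pick_backend(ttfb_ms):
    """Choose the backend with the smallest measured time-to-first-byte.

    ttfb_ms: dict mapping backend id -> last smoothed TTFB in milliseconds.
    """
    return min(ttfb_ms, key=ttfb_ms.get)

def update_ttfb(ttfb_ms, backend, sample_ms, alpha=0.3):
    """Fold a new TTFB measurement into an exponentially weighted average."""
    prev = ttfb_ms.get(backend, sample_ms)
    ttfb_ms[backend] = (1 - alpha) * prev + alpha * sample_ms
```

    The smoothing keeps one slow response from immediately disqualifying an otherwise fast backend.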

  1. Defense strategies for cloud computing multi-site server infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL; Ma, Chris Y. T. [Hang Seng Management College, Hong Kong; He, Fei [Texas A&M University, Kingsville, TX, USA

    2018-01-01

    We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and can also be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects at two levels using: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure, given by the number of operational servers connected to the network, for sum-form, product-form and composite utility functions.

  2. Simple bounds and monotonicity results for finite multi-server exponential tandem queues

    NARCIS (Netherlands)

    Dijk, van N.M.; Wal, van der J.

    1989-01-01

    Simple and computationally attractive lower and upper bounds are presented for the call congestion of finite multi-server exponential tandem queues, such as those representing multi-server loss or delay stations. Numerical computations indicate a potential usefulness of the bounds for quick engineering purposes. The bounds correspond to...
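
    The call congestion of a single multi-server loss station, the building block of such tandem networks, is classically computed with the Erlang B recursion. A small sketch of that standard result, offered only as background to the bounds above:

```python
# Erlang B recursion for the blocking probability of an M/M/c/c loss
# station (standard background; the paper's bounds concern tandem
# networks built from such stations).

def erlang_b(offered_load, servers):
    """Blocking probability via B(0) = 1, B(n) = a*B / (n + a*B)."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

    The recursion is numerically stable even for large server counts, unlike the direct factorial form.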

  3. Bearing load distribution studies in a multi bearing rotor system and a remote computing method based on the internet

    International Nuclear Information System (INIS)

    Yang, Zhao Jian; Peng, Ze Jun; Kim, Seock Sam

    2004-01-01

    A model in the form of a Bearing Load Distribution (BLD) matrix in the Multi Bearing Rotor System (MBRS) is established by a transfer matrix equation with consideration of bearing load, elevation and uniform load distribution. The concept of Bearing Load Sensitivity (BLS) is proposed and matrices for load and elevation sensitivity are obtained. In order to share MBRS design resources on the internet with remote customers, the basic principle of Remote Computing (RC) based on the internet is introduced; the RC of the BLD and BLS is achieved by Microsoft Active Server Pages (ASP) technology.

  4. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    Science.gov (United States)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, it is becoming a challenge for the proliferation of electronic commerce services to provide performance guarantees under extreme overload. This paper describes a real-time optimization modeling and scheduling approach for performance guarantee of electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. A robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.

  5. Network characteristics for server selection in online games

    Science.gov (United States)

    Claypool, Mark

    2008-01-01

    Online gameplay is impacted by the network characteristics of players connected to the same server. Unfortunately, the network characteristics of online game servers are not well-understood, particularly for groups that wish to play together on the same server. As a step towards a remedy, this paper presents analysis of an extensive set of measurements of game servers on the Internet. Over the course of many months, actual Internet game servers were queried simultaneously by twenty-five emulated game clients, with both servers and clients spread out on the Internet. The data provides statistics on the uptime and populations of game servers over a month-long period and an in-depth look at the suitability of game servers for multi-player server selection, concentrating on characteristics critical to playability--latency and fairness. Analysis finds most game servers have latencies suitable for third-person and omnipresent games, such as real-time strategy, sports and role-playing games, providing numerous server choices for game players. However, far fewer game servers have the low latencies required for first-person games, such as shooters or race games. In all cases, groups that wish to play together have a greatly reduced set of servers from which to choose because of inherent unfairness in server latencies, and server selection is particularly limited as the group size increases. These results hold across different game types and even across different generations of games. The data should be useful for game developers and network researchers who seek to improve game server selection, whether for single or multiple players.
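
    The group-selection difficulty described above can be illustrated with a toy scoring rule. This is entirely our construction, not the paper's methodology: for each candidate server, score the group by its worst member latency (playability), then by the latency spread between members (fairness) as a tie-breaker.

```python
# Toy group-selection rule (our construction, for illustration):
# rank candidate servers by the worst member latency, then by the
# spread between members as a fairness tie-breaker.

def best_server_for_group(latencies):
    """latencies: dict server id -> list of per-member latencies (ms)."""
    def score(server):
        ls = latencies[server]
        return (max(ls), max(ls) - min(ls))
    return min(latencies, key=score)
```

    For example, a server with member latencies of 50 ms and 60 ms beats one with 30 ms and 90 ms, even though the latter has the single fastest connection.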

  6. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography.

    Science.gov (United States)

    Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young

    2016-01-01

    Biometric-based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric-based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation in this paper proves that Lu et al.'s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric-based authentication with key-agreement protocol for multi-server architectures based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols.

  7. Paying for Express Checkout: Competition and Price Discrimination in Multi-Server Queuing Systems

    Science.gov (United States)

    Deck, Cary; Kimbrough, Erik O.; Mongrain, Steeve

    2014-01-01

    We model competition between two firms selling identical goods to customers who arrive in the market stochastically. Shoppers choose where to purchase based upon both price and the time cost associated with waiting for service. One seller provides two separate queues, each with its own server, while the other seller has a single queue and server. We explore the market impact of the multi-server seller engaging in waiting-cost-based price discrimination by charging a premium for express checkout. Specifically, we analyze this situation computationally and through the use of controlled laboratory experiments. We find that this form of price discrimination is harmful to sellers and beneficial to consumers. When the two-queue seller offers express checkout for impatient customers, the single-queue seller focuses on the patient shoppers, thereby driving down prices and profits while increasing consumer surplus. PMID:24667809

  8. Paying for express checkout: competition and price discrimination in multi-server queuing systems.

    Directory of Open Access Journals (Sweden)

    Cary Deck

    Full Text Available We model competition between two firms selling identical goods to customers who arrive in the market stochastically. Shoppers choose where to purchase based upon both price and the time cost associated with waiting for service. One seller provides two separate queues, each with its own server, while the other seller has a single queue and server. We explore the market impact of the multi-server seller engaging in waiting-cost-based price discrimination by charging a premium for express checkout. Specifically, we analyze this situation computationally and through the use of controlled laboratory experiments. We find that this form of price discrimination is harmful to sellers and beneficial to consumers. When the two-queue seller offers express checkout for impatient customers, the single-queue seller focuses on the patient shoppers, thereby driving down prices and profits while increasing consumer surplus.

  9. PENGUKURAN KINERJA ROUND-ROBIN SCHEDULER UNTUK LINUX VIRTUAL SERVER PADA KASUS WEB SERVER

    Directory of Open Access Journals (Sweden)

    Royyana Muslim Ijtihadie

    2005-07-01

    Full Text Available With the growing number of internet users and the adoption of the internet in everyday life, data traffic on the Internet has increased significantly. In step with this, the workload of the servers providing services on the Internet has also risen considerably, so that a server can become overloaded at some point. To address this, a server-cluster configuration using the concept of load balancing is applied. A load-balancing server applies an algorithm to divide the work, and the round-robin algorithm has been used in the Linux Virtual Server. This study measures the performance of a Linux Virtual Server that uses the round-robin algorithm to schedule the distribution of load across servers. Performance is measured from the side of clients accessing the web server: the number of requests completed per second (requests per second), the time to complete a single request, and the resulting throughput. The experiments show that using LVS improves performance, increasing the number of requests served per second.
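    The round-robin dispatch measured in this study is simple to sketch. A minimal illustration (the server addresses are hypothetical, and real LVS performs this selection in the kernel):

    ```python
    from itertools import cycle

    class RoundRobinScheduler:
        """Hand each incoming request to the next real server in a fixed
        rotation, as the LVS round-robin scheduler does."""

        def __init__(self, servers):
            self._rotation = cycle(servers)

        def dispatch(self):
            # Return the server that should handle the next request.
            return next(self._rotation)

    # Three hypothetical real servers behind one virtual address.
    scheduler = RoundRobinScheduler(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    assigned = [scheduler.dispatch() for _ in range(6)]
    ```

    Over six requests each server receives exactly two, which is the even distribution whose effect on requests per second and throughput the benchmark measures.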

  10. ANALYSIS OF MULTI-SERVER QUEUEING SYSTEM WITH PREEMPTIVE PRIORITY AND REPEATED CALLS

    Directory of Open Access Journals (Sweden)

    S. A. Dudin

    2015-01-01

    Full Text Available A multi-server retrial queueing system with no buffer and two types of customers is analyzed as a model of a cognitive radio system. Customers of type 1 have preemptive priority. Customers of both types arrive according to Markovian Arrival Processes. Service times have an exponential distribution with a parameter depending on the customer type. Type 2 customers are admitted for service only if the number of busy servers is less than a predefined threshold. Rejected type 2 customers retry for service. The existence condition for the stationary mode of system operation is derived. Formulas for computing key performance measures of the system are presented.
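    The threshold admission rule described in the abstract can be sketched as a single predicate; the function below is an illustrative reading of that rule, not the authors' code:

    ```python
    def admit(customer_type, busy_servers, total_servers, threshold):
        """Admission rule as read from the abstract: type-1 (priority)
        customers are accepted whenever a server is free; type-2 customers
        are accepted only while fewer than `threshold` servers are busy,
        and otherwise join the retrial orbit to try again later."""
        if customer_type == 1:
            return busy_servers < total_servers
        return busy_servers < threshold
    ```

    With 5 servers and a threshold of 3, a type-2 arrival is rejected as soon as 3 servers are busy, while a type-1 arrival is still accepted until all 5 are occupied; this reserves capacity for the priority stream.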

  11. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    Directory of Open Access Journals (Sweden)

    Chengqi Wang

    Full Text Available With the security requirements of networks, biometric authentication schemes applied in multi-server environments have become more crucial and more widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme which is based on the cryptanalysis of Mishra et al.'s scheme. The informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including some features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost. It is thus more appropriate for practical applications in remote distributed networks.

  12. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    Science.gov (United States)

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the security requirements of networks, biometric authentication schemes applied in multi-server environments have become more crucial and more widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme which is based on the cryptanalysis of Mishra et al.’s scheme. The informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including some features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost. It is thus more appropriate for practical applications in remote distributed networks. PMID:26866606

  13. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    Science.gov (United States)

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the security requirements of networks, biometric authentication schemes applied in multi-server environments have become more crucial and more widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme which is based on the cryptanalysis of Mishra et al.'s scheme. The informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including some features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost. It is thus more appropriate for practical applications in remote distributed networks.

  14. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography

    Science.gov (United States)

    Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young

    2016-01-01

    Biometric based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation of this paper proves that Lu et al.’s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and the performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.’s protocol and existing similar protocols. PMID:27163786

  15. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography.

    Directory of Open Access Journals (Sweden)

    Alavalapati Goutham Reddy

    Full Text Available Biometric based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation of this paper proves that Lu et al.'s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and the performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols.

  16. Critical Axial Load

    Directory of Open Access Journals (Sweden)

    Walt Wells

    2008-01-01

    Full Text Available Our objective in this paper is to solve a second order differential equation for a long, simply supported column member subjected to a lateral axial load using Heun's numerical method. We will use the solution to find the critical load at which the column member will fail due to buckling. We will calculate this load using Euler's derived analytical approach for an exact solution, as well as Euler's Numerical Method. We will then compare the three calculated values to see how much they deviate from one another. During the critical load calculation, it will be necessary to calculate the moment of inertia for the column member.
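    The analytical part of the comparison is straightforward to reproduce; the column material and dimensions below are hypothetical, chosen only to illustrate the two formulas involved:

    ```python
    import math

    def euler_critical_load(E, I, L):
        """Euler's analytical buckling load for a long, simply supported
        (pin-ended) column: P_cr = pi^2 * E * I / L^2."""
        return math.pi ** 2 * E * I / L ** 2

    def moment_of_inertia_circle(d):
        """Second moment of area of a solid circular cross-section:
        I = pi * d^4 / 64."""
        return math.pi * d ** 4 / 64

    # Hypothetical steel column: E = 200 GPa, 50 mm diameter, 3 m long.
    I = moment_of_inertia_circle(0.050)          # m^4
    P_cr = euler_critical_load(200e9, I, 3.0)    # N, roughly 67 kN here
    ```

    A Heun's-method solution of the column's differential equation would then be checked against `P_cr` to see how far the numerical estimates deviate from the exact value.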

  17. Robust biometrics based authentication and key agreement scheme for multi-server environments using smart cards.

    Science.gov (United States)

    Lu, Yanrong; Li, Lixiang; Yang, Xing; Yang, Yixian

    2015-01-01

    Biometric authentication schemes using smart cards have attracted much attention in multi-server environments. Several schemes of this type were proposed in the past. However, many of them were found to have some design flaws. This paper concentrates on the security weaknesses of the three-factor authentication scheme by Mishra et al. After careful analysis, we find their scheme does not really resist replay attacks and fails to provide an efficient password change phase. We further propose an improvement of Mishra et al.'s scheme with the purpose of preventing the security threats to their scheme. We demonstrate that the proposed scheme provides strong authentication against several attacks, including the attacks shown against the original scheme. In addition, we compare the performance and functionality with other multi-server authenticated key agreement schemes.

  18. Robust biometrics based authentication and key agreement scheme for multi-server environments using smart cards.

    Directory of Open Access Journals (Sweden)

    Yanrong Lu

    Full Text Available Biometric authentication schemes using smart cards have attracted much attention in multi-server environments. Several schemes of this type were proposed in the past. However, many of them were found to have some design flaws. This paper concentrates on the security weaknesses of the three-factor authentication scheme by Mishra et al. After careful analysis, we find their scheme does not really resist replay attacks and fails to provide an efficient password change phase. We further propose an improvement of Mishra et al.'s scheme with the purpose of preventing the security threats to their scheme. We demonstrate that the proposed scheme provides strong authentication against several attacks, including the attacks shown against the original scheme. In addition, we compare the performance and functionality with other multi-server authenticated key agreement schemes.

  19. Weaknesses of a dynamic identity based authentication protocol for multi-server architecture

    OpenAIRE

    Han, Weiwei

    2012-01-01

    Recently, Li et al. proposed a dynamic identity based authentication protocol for multi-server architecture. They claimed their protocol is secure and can withstand various attacks. But we found some security loopholes in the protocol. Accordingly, the current paper demonstrates that Li et al.'s protocol is vulnerable to the replay attack, the password guessing attack and the masquerade attack.

  20. A Capacity Supply Model for Virtualized Servers

    Directory of Open Access Journals (Sweden)

    Alexander PINNOW

    2009-01-01

    Full Text Available This paper deals with determining the capacity supply for virtualized servers. First, a server is modeled as a queue based on a Markov chain. Then, the effect of server virtualization on the capacity supply will be analyzed with the distribution function of the server load.
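    As an illustration of modeling a server as a Markov-chain queue, the simplest case (an M/M/1 queue) has a closed-form distribution function for the server load; this sketch is generic and not the paper's specific capacity-supply model:

    ```python
    def mm1_load_cdf(arrival_rate, service_rate, n):
        """Stationary distribution function of an M/M/1 queue, viewed as a
        birth-death Markov chain: P(load <= n) = 1 - rho**(n + 1), where
        rho = arrival_rate / service_rate is the utilisation (must be < 1)."""
        rho = arrival_rate / service_rate
        if rho >= 1:
            raise ValueError("unstable queue: arrival rate must be below service rate")
        return 1 - rho ** (n + 1)

    # With lambda = 1 and mu = 2, the server is idle half the time.
    p_idle = mm1_load_cdf(1.0, 2.0, 0)
    ```

    Sizing a virtualized server then amounts to picking the capacity whose load distribution keeps P(load > n) below an agreed service level.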

  1. Nitrogen critical loads using biodiversity-related critical limits

    International Nuclear Information System (INIS)

    Posch, Maximilian; Aherne, Julian; Hettelingh, Jean-Paul

    2011-01-01

    Critical loads are widely used in the effects-based assessment of emission reduction policies. While the impacts of acidification have diminished, there is increasing concern regarding the effects of nitrogen deposition on terrestrial ecosystems. In this context much attention has been focussed on empirical critical loads as well as simulations with linked geochemistry-vegetation models. Surprisingly little attention has been paid to adapt the widely used simple mass balance approach. This approach has the well-established benefit of easy regional applicability, while incorporating specified critical chemical criteria to protect specified receptors. As plant occurrence/biodiversity is related to both the nutrient and acidity status of an ecosystem, a single abiotic factor (chemical criterion) is not sufficient. Rather than an upper limit for deposition (i.e., critical load), linked nutrient nitrogen and acidity chemical criteria for plant occurrence result in an 'optimal' nitrogen and sulphur deposition envelope. - Highlights: → Mass balance critical load approaches for nutrient nitrogen remain useful. → Biodiversity-related limits are related to nutrient and acidity status. → Nutrient and acidity chemical criteria lead to optimal deposition envelopes. → Optimal loads support effects-based emission reduction policies. - Biodiversity-related critical limits lead to optimal nitrogen and sulphur deposition envelopes for plant species or species compositions.
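    For reference, the "widely used simple mass balance approach" mentioned above is usually written as below for nutrient nitrogen; the notation follows common mapping-manual usage and is a sketch, not a formula quoted from this paper:

    ```latex
    % Critical load of nutrient nitrogen: the deposition that immobilisation,
    % net uptake, acceptable leaching and denitrification can jointly absorb.
    CL_{\mathrm{nut}}(N) = N_{\mathrm{i}} + N_{\mathrm{u}} + N_{\mathrm{le(acc)}} + N_{\mathrm{de}}
    ```

    The paper's point is that replacing the single acceptable-leaching criterion with linked nutrient and acidity criteria turns this one deposition limit into an envelope of acceptable nitrogen and sulphur depositions.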

  2. Exam 70-411 administering Windows Server 2012

    CERN Document Server

    Course, Microsoft Official Academic

    2014-01-01

    Microsoft Windows Server is a multi-purpose server designed to increase reliability and flexibility of  a network infrastructure. Windows Server is the paramount tool used by enterprises in their datacenter and desktop strategy. The most recent versions of Windows Server also provide both server and client virtualization. Its ubiquity in the enterprise results in the need for networking professionals who know how to plan, design, implement, operate, and troubleshoot networks relying on Windows Server. Microsoft Learning is preparing the next round of its Windows Server Certification program

  3. Web Server Embedded System

    Directory of Open Access Journals (Sweden)

    Adharul Muttaqin

    2014-07-01

    Full Text Available Embedded systems currently receive particular attention in computer technology; several Linux operating systems and a variety of web servers have been prepared to support embedded systems, and one application that can run on an embedded system is a web server. The choice of web server for embedded environments is still rarely studied, so this research focuses on two web server applications whose main feature is "light" consumption of CPU and memory: Light HTTPD and Tiny HTTPD. Using the thread parameters (users, ramp-up period, and loop count) in a stress test of the embedded system, this study determines which of Light HTTPD and Tiny HTTPD is better suited to an embedded system on a BeagleBoard, in terms of CPU and memory consumption. The results show that, with respect to CPU consumption on the BeagleBoard embedded system, Light HTTPD is recommended over Tiny HTTPD because there is a very significant difference in CPU load between the two web services. Keywords: embedded system, web server

  4. On delay adjustment for dynamic load balancing in distributed virtual environments.

    Science.gov (United States)

    Deng, Yunhua; Lau, Rynson W H

    2012-04-01

    Distributed virtual environments (DVEs) have become very popular in recent years, due to the rapid growth of applications such as massive multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in the load of the servers due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that our proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead.
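    One plausible reading of a delay adjustment scheme (an illustration only, not the authors' actual algorithm) is to extrapolate each stale load report by the server's recently observed rate of change before comparing servers:

    ```python
    def adjusted_load(reported_load, load_change_rate, delay_s):
        """Extrapolate a load report that took `delay_s` seconds to arrive,
        using the server's recently observed rate of load change."""
        return reported_load + load_change_rate * delay_s

    # Hypothetical reports: (reported load, load change per second, delay in s).
    reports = {"server_a": (40.0, 5.0, 2.0), "server_b": (45.0, -3.0, 2.0)}
    target = min(reports, key=lambda name: adjusted_load(*reports[name]))
    ```

    Here server_a looks lighter on paper (40 vs 45), but once the 2 s reporting delay is accounted for the balancer prefers server_b, whose load is falling; acting on the raw reports would have sent new players to the server that is actually busier.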

  5. The concept of target and critical loads

    International Nuclear Information System (INIS)

    Grigal, D.F.

    1991-09-01

    Target and critical loads were initially developed for assessment and control of acidic deposition, but are being considered for other air pollutants such as ozone and air-borne toxic compounds. These loads are based on thresholds, with damage assumed to occur above some defined level of deposition. Many of the historically proposed targets for acidic deposition were based on arbitrary interpretations of data. The concept of critical loads has recently separated from that of target loads. A critical load is the amount of pollutant deposition, determined by technical analysis, above which there is a specific deleterious ecological effect. A target load is the deposition, determined by political agreement, above which unacceptable ecological damage occurs; it may be greater than the critical load because of political or economic considerations, or less to conservatively account for uncertainty in the estimation of the critical load. Recent definitions of critical loads include recognition that each kind of ecosystem and effect may require a different load. Geographic regions contain a mosaic of aquatic and terrestrial resources. If precise knowledge leads to different critical loads for each system, then how is the regional target load established? For better or worse, target and critical loads are likely to be used to regulate air pollutants. The philosophy of their establishment as thresholds, their quantitative validity, and their application in regulation all require careful examination. 36 refs., 3 figs

  6. Cluster Server IPTV dengan Penjadwalan Algoritma Round Robin

    Directory of Open Access Journals (Sweden)

    Didik Aribowo

    2016-03-01

    Full Text Available With the rapid development of information technology, the number of users connected to the internet has also increased. A single server that continually receives requests from many users will, slowly but surely, become overloaded and crash, leaving requests unserved. A cluster architecture can be built using the concept of network load balancing, which allows data processing to be shared among several computers. This study uses the round-robin scheduling algorithm as an alternative solution to the problem of server overload, which can affect the performance of an IPTV system. The request volumes used in this study are 5000, 15000, 25000, and 50000 requests. With this method, the performance of the scheduling algorithm is observed through the following parameters: throughput, response time, reply connections, and error connections, in order to identify the best scheduling algorithm for optimizing the IPTV server cluster. Load balancing automatically reduces the workload of each server so that no server is overloaded, allows the servers to use the available bandwidth more effectively, and provides fast access to the hosted web service. Implementing a web-server cluster with a load-balancing scheme maintains system availability and provides sufficient scalability to keep serving every user request.
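    The four parameters the study observes can be aggregated per benchmark run as below; the function and the figures are hypothetical, shown only to make the metrics concrete:

    ```python
    def summarize_run(total_requests, duration_s, per_request_s, error_count):
        """Aggregate the metrics compared across scheduling runs: throughput
        (requests per second), mean time per request, and reply/error counts."""
        return {
            "requests_per_second": total_requests / duration_s,
            "avg_time_per_request_s": sum(per_request_s) / len(per_request_s),
            "reply_connections": total_requests - error_count,
            "error_connections": error_count,
        }

    # A hypothetical 5000-request run completed in 25 s with 12 failed connections.
    stats = summarize_run(5000, 25.0, [0.004, 0.005, 0.006], 12)
    ```

    Comparing these dictionaries across the 5000/15000/25000/50000-request runs is what lets the study rank scheduling algorithms.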

  7. Multi-Capacity Load Cell Concept

    Directory of Open Access Journals (Sweden)

    Seif. M. OSMAN

    2014-09-01

    Full Text Available Force measuring systems are usually used to calibrate force generating systems, and it is not advisable to use a load cell to measure forces below 10 % of its nominal capacity. Several load cells are therefore required to offer on-site calibration covering different ranges, which leads to difficulties in handling: several carrying cases are needed for the extra weight, in addition to the cost of purchasing several load cells. This article introduces a new concept for designing a multi-capacity load cell as a new force standard in the field of force measurement. This multi-capacity load cell will replace a set of load cells, reducing the total cost and simplifying handling procedures.

  8. Peak load-impulse characterization of critical pulse loads in structural dynamics

    International Nuclear Information System (INIS)

    Abrahamson, G.R.; Lindberg, H.E.

    1975-01-01

    In presenting the characterization scheme, some general features are described first. A detailed analysis is given for the rigid-plastic system of one degree of freedom to illustrate the calculation of critical load curves in terms of peak load and impulse. This is followed by the presentation of critical load curves for uniformly loaded rigid-plastic beams and plates and for dynamic buckling of cylindrical shells under uniform lateral loads. The peak load-impulse characterization of critical pulse loads is compared with the dynamic load factor characterization, and some aspects of the history of the peak load-pulse scheme are presented. (orig./HP) [de

  9. Professional Microsoft SQL Server 2012 Administration

    CERN Document Server

    Jorgensen, Adam; LoForte, Ross; Knight, Brian

    2012-01-01

    An essential how-to guide for experienced DBAs on the most significant product release since 2005! Microsoft SQL Server 2012 will have major changes throughout the SQL Server and will impact how DBAs administer the database. With this book, a team of well-known SQL Server experts introduces the many new features of the most recent version of SQL Server and deciphers how these changes will affect the methods that administrators have been using for years. Loaded with unique tips, tricks, and workarounds for handling the most difficult SQL Server admin issues, this how-to guide deciphers topics s

  10. National implementation of the UNECE convention on long-range transboundary air pollution (effects). Pt. 2. Impacts and risk estimation, critical loads, biodiversity, dynamic modelling, critical level violation, material corrosion; Nationale Umsetzung UNECE-Luftreinhaltekonvention (Wirkungen). T. 2. Wirkungen und Risikoabschaetzungen Critical Loads, Biodiversitaet, Dynamische Modellierung, Critical Levels Ueberschreitungen, Materialkorrosion

    Energy Technology Data Exchange (ETDEWEB)

    Gauger, Thomas [Bundesforschungsanstalt fuer Landwirtschaft, Braunschweig (DE). Inst. fuer Agraroekologie (FAL-AOE); Stuttgart Univ. (Germany). Inst. fuer Navigation; Haenel, Hans-Dieter; Roesemann, Claus [Bundesforschungsanstalt fuer Landwirtschaft, Braunschweig (DE). Inst. fuer Agraroekologie (FAL-AOE); Nagel, Hans-Dieter; Becker, Rolf; Kraft, Philipp; Schlutow, Angela; Schuetze, Gudrun; Weigelt-Kirchner, Regine [OeKO-DATA Gesellschaft fuer Oekosystemanalyse und Umweltdatenmanagement mbH, Strausberg (Germany); Anshelm, Frank [Geotechnik Suedwest Frey Marx GbR, Bietigheim-Bissingen (Germany)

    2008-09-15

    The report on the implementation of the UNECE convention on long-range transboundary air pollution Pt. 2 covers the following issues: The tasks of the NFC (National Focal Center) Germany including the ICP (international cooperative program) modeling and mapping and the expert panel for heavy metals. Results of the work for the multi-component protocol cover the initial data for the calculation of the critical loads following the mass balance method, critical loads for acid deposition, critical loads for nitrogen input, and critical load violations (sulfur, nitrogen). The results of work for the heavy metal protocol cover methodology development and recommendations for ICP modeling and mapping in accordance with international development, contributions of the expert group/task force on heavy metals (WGSR), data sets on the critical loads for lead, cadmium and mercury, and critical load violations (Pb, Cd, Hg). The results of work on the inclusion of biodiversity (BERN) cover data compilation, acquisition and integration concerning ecosystems, model validation and verification, and the possible interpretation frame following the coupling with dynamic modeling. The future development and utilization of dynamic modeling covers model comparison, applicability, the preparation of a national data set, and preparations concerning the interface to the BERN model.

  11. 4DGeoBrowser: A Web-Based Data Browser and Server for Accessing and Analyzing Multi-Disciplinary Data

    National Research Council Canada - National Science Library

    Lerner, Steven

    2001-01-01

    .... Once the information is loaded onto a Geobrowser server, the investigator-user is able to log in to the website and use a set of data access and analysis tools to search, plot, and display this information...

  12. BEBAN JARINGAN SAAT MENGAKSES EMAIL DARI BEBERAPA MAIL SERVER

    Directory of Open Access Journals (Sweden)

    Husni Thamrin

    2017-01-01

    Full Text Available Expensive internet facilities require prudence in their use, both as a source of information and as a communication medium. This paper discusses observations of the perceived network bandwidth load when accessing several mail servers using a webmail application. The mail servers in question comprise three commercial servers and two non-commercial servers. Traffic while downloading the home page, while logged in, while opening email, and during idle and logout was recorded with the Wireshark sniffer. Observations under various situations and scenarios indicate that accessing Yahoo email places a very high load on the network, while SquirrelMail places a very low load compared with the other mail servers. For an institution, using a local (institutional) mail server is highly recommended in the context of bandwidth savings.

  13. Implementation of SRPT Scheduling in Web Servers

    National Research Council Canada - National Science Library

    Harchol-Balter, Mor

    2000-01-01

    .... Experiments use the Linux operating system and the Flash web server. All experiments are repeated under a range of server loads and under both trace-based workloads and those generated by a Web workload generator...
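    SRPT itself can be sketched with a priority queue keyed on remaining work (for static web content, the bytes still to be sent); this is an illustration of the policy, not the Flash server's implementation:

    ```python
    import heapq

    class SRPTScheduler:
        """Shortest-Remaining-Processing-Time: always serve next the pending
        request with the least remaining work, so short responses are not
        stuck behind long ones."""

        def __init__(self):
            self._heap = []  # (remaining_bytes, request_id) min-heap

        def add(self, request_id, remaining_bytes):
            heapq.heappush(self._heap, (remaining_bytes, request_id))

        def next_request(self):
            remaining_bytes, request_id = heapq.heappop(self._heap)
            return request_id

    # Hypothetical pending responses of different sizes.
    srpt = SRPTScheduler()
    srpt.add("big-image", 500_000)
    srpt.add("small-page", 8_000)
    srpt.add("stylesheet", 30_000)
    ```

    The small page is served first even though the big image arrived earlier, which is why SRPT cuts mean response time under heavy load.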

  14. Information Interpretation Code For Providing Secure Data Integrity On Multi-Server Cloud Infrastructure

    OpenAIRE

    Sathiya Moorthy Srinivsan; Chandrasekar Chaillah

    2014-01-01

Data security is one of the biggest concerns in the cloud computing environment. Although the advantages of storing data in a cloud computing environment are considerable, a problem arises related to data loss. CyberLiveApp (CLA) supports secure application development between multiple users, letting cloud users distinguish their viewing privileges while storing data. But CyberLiveApp failed to integrate the system with certain cloud-based computing environments on multi-server. En...

  15. Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers

    Science.gov (United States)

    Tumer, K.; Lawson, J.

    2003-01-01

Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are based on heuristics which do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents who make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.

  16. Two Coupled Queues with Vastly Different Arrival Rates: Critical Loading Case

    Directory of Open Access Journals (Sweden)

    Charles Knessl

    2011-01-01

Full Text Available We consider two coupled queues with a generalized processor sharing service discipline. The second queue has a much smaller Poisson arrival rate than the first queue, while the customer service times are of comparable magnitude. The processor sharing server devotes most of its resources to the first queue, except when it is empty. The fraction of resources devoted to the second queue is small, of the same order as the ratio of the arrival rates. We assume that the primary queue is heavily loaded and that the secondary queue is critically loaded. If we let the small arrival rate to the secondary queue be O(ε), where 0≤ε≪1, then in this asymptotic limit the number of customers in the first queue will be large, of order O(ε^-1), while that in the second queue will be somewhat smaller, of order O(ε^-1/2). We obtain a two-dimensional diffusion approximation for this model and explicitly solve for the joint steady-state probability distribution of the numbers of customers in the two queues. This work complements that in (Morrison, 2010), in which the second queue was assumed to be heavily or lightly loaded, leading to mean queue lengths that were O(ε^-1) or O(1), respectively.
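The heavy-traffic scaling claimed above can be made concrete in the far simpler M/M/1 setting (a minimal numerical sketch only, not the paper's coupled-queue diffusion model): with utilization ρ = 1 − ε, the mean number in system ρ/(1 − ρ) grows like O(ε^-1).

```python
def mm1_mean_queue_length(rho: float) -> float:
    """Mean number of customers in a stable M/M/1 queue (requires rho < 1)."""
    if not 0.0 <= rho < 1.0:
        raise ValueError("rho must be in [0, 1)")
    return rho / (1.0 - rho)

# With rho = 1 - eps, the product eps * L stays O(1), i.e. L itself is O(1/eps).
for eps in (0.1, 0.01, 0.001):
    L = mm1_mean_queue_length(1.0 - eps)
    print(f"eps={eps:g}  L={L:.1f}  eps*L={eps * L:.3f}")
```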

  17. Characteristics and Energy Use of Volume Servers in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shehabi, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ganeshalingam, M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, L. -B. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lim, B. [Fraunhofer Center for Sustainable Energy Systems, Boston, MA (United States); Roth, K. [Fraunhofer Center for Sustainable Energy Systems, Boston, MA (United States); Tsao, A. [Navigant Consulting Inc., Chicago, IL (United States)

    2017-11-01

    Servers’ field energy use remains poorly understood, given heterogeneous computing loads, configurable hardware and software, and operation over a wide range of management practices. This paper explores various characteristics of 1- and 2-socket volume servers that affect energy consumption, and quantifies the difference in power demand between higher-performing SPEC and ENERGY STAR servers and our best understanding of a typical server operating today. We first establish general characteristics of the U.S. installed base of volume servers from existing IDC data and the literature, before presenting information on server hardware configurations from data collection events at a major online retail website. We then compare cumulative distribution functions of server idle power across three separate datasets and explain the differences between them via examination of the hardware characteristics to which power draw is most sensitive. We find that idle server power demand is significantly higher than ENERGY STAR benchmarks and the industry-released energy use documented in SPEC, and that SPEC server configurations—and likely the associated power-scaling trends—are atypical of volume servers. Next, we examine recent trends in server power draw among high-performing servers across their full load range to consider how representative these trends are of all volume servers before inputting weighted average idle power load values into a recently published model of national server energy use. Finally, we present results from two surveys of IT managers (n=216) and IT vendors (n=178) that illustrate the prevalence of more-efficient equipment and operational practices in server rooms and closets; these findings highlight opportunities to improve the energy efficiency of the U.S. server stock.

  18. Design, Implementation and Testing of a Tiny Multi-Threaded DNS64 Server

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-03-01

Full Text Available DNS64 is going to be an important service (together with NAT64) in the upcoming years of the IPv6 transition, enabling clients having only IPv6 addresses to reach servers having only IPv4 addresses (the majority of the servers on the Internet today). This paper describes the design, implementation and functional testing of MTD64, a flexible, easy to use, multi-threaded DNS64 proxy published as free software under the GPLv2 license. All the theoretical background is introduced, including the DNS message format, the operation of the DNS64 plus NAT64 solution and the construction of the IPv4-embedded IPv6 addresses. Our design decisions are fully disclosed, from the high-level ones to the details. The implementation is introduced at a high level only, as the details can be found in the developer documentation. The most important parts of a thorough functional testing are included, as well as the results of a basic performance comparison with BIND.
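The IPv4-embedded IPv6 address construction mentioned above follows RFC 6052; for the well-known /96 prefix 64:ff9b::/96 the synthesis reduces to a bitwise OR. The sketch below illustrates the address layout only and is not MTD64's code.

```python
import ipaddress

WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4: str,
                    prefix: ipaddress.IPv6Network = WELL_KNOWN_PREFIX) -> str:
    """Embed an IPv4 address in an IPv6 /96 prefix (the RFC 6052 /96 layout)."""
    if prefix.prefixlen != 96:
        raise ValueError("only the /96 embedding is sketched here")
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(prefix.network_address) | int(v4)))

print(synthesize_aaaa("192.0.2.1"))  # -> 64:ff9b::c000:201
```

A DNS64 proxy would place such a synthesized address in the AAAA answer it returns for an IPv4-only destination.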

  19. Optimal routing of IP packets to multi-homed servers

    International Nuclear Information System (INIS)

    Swartz, K.L.

    1992-08-01

Multi-homing, or direct attachment to multiple networks, offers both performance and availability benefits for important servers on busy networks. Exploiting these benefits to their fullest requires a modicum of routing knowledge in the clients. Careful policy control must also be reflected in the routing used within the network to make the best use of specialized and often scarce resources. While relatively straightforward in theory, this problem becomes much more difficult to solve in a real network containing often intractable implementations from a variety of vendors. This paper presents an analysis of the problem and proposes a useful solution for a typical campus network. Application of this solution at the Stanford Linear Accelerator Center is studied, and the problems and pitfalls encountered are discussed, as are the workarounds used to make the system work in the real world.

  20. Fatigue damage assessment under multi-axial non-proportional cyclic loading

    International Nuclear Information System (INIS)

    Mohta, Keshav; Gupta, Suneel K.; Jadhav, P.A.; Bhasin, V.; Vijayan, P.K.

    2016-01-01

Detailed fatigue analysis is carried out for class I Nuclear Power Plant (NPP) components to rule out fatigue failure during their design lifetime. The ASME Boiler and Pressure Vessel Code, Section III NB, provides two schemes for fatigue assessment: one for fixed principal directions (proportional) loading and the other for varying principal directions (non-proportional) loading conditions. Recent literature on multi-axial fatigue tests has revealed lower fatigue lives under non-proportional loading conditions. In an attempt to understand the loading parameter lowering the fatigue life, a finite element based study has been carried out. Here, fatigue damage in a tube has been correlated with the applied axial-to-shear strain ratio and the phase difference between them. The FE analysis used the Chaboche nonlinear kinematic hardening rule to model the material's realistic cyclic plastic deformation behavior. The ASME alternating stress intensity (based on linear elastic FEA) and the plastic strain energy dissipation (based on elastic-plastic FEA) have been considered to assess the per-cycle fatigue damage. The study has revealed that the ASME criterion predicts lower alternating stress intensity (fatigue damage parameter S_alt) for some cases of non-proportional loading than that predicted for the corresponding proportional loading case. However, the actual fatigue damage is higher in non-proportional loading than in the corresponding proportional loading case. Further, the fatigue damage of an NPP component under realistic multi-axial cyclic loading conditions has been assessed using some popular critical plane based models vis-à-vis the ASME Sec. III criteria. (author)

  1. Analysis of a multi-server queueing model of ABR

    Directory of Open Access Journals (Sweden)

    R. Núñez-Queija

    1998-01-01

Full Text Available In this paper we present a queueing model for the performance analysis of Available Bit Rate (ABR) traffic in Asynchronous Transfer Mode (ATM) networks. We consider a multi-channel service station with two types of customers, denoted by high priority and low priority customers. In principle, high priority customers have preemptive priority over low priority customers, except on a fixed number of channels that are reserved for low priority traffic. The arrivals occur according to two independent Poisson processes, and service times are assumed to be exponentially distributed. Each high priority customer requires a single server, whereas low priority customers are served in processor sharing fashion. We derive the joint distribution of the numbers of customers (of both types) in the system in steady state. Numerical results illustrate the effect of high priority traffic on the service performance of low priority traffic.
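The two-class model above is solved analytically in the paper; as a much simpler classical building block for multi-channel stations, the Erlang-B recursion gives the blocking probability of an M/M/c/c loss system (a hedged illustration of the setting, not the ABR model itself).

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/c/c loss station via the numerically
    stable Erlang-B recursion: B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)),
    where a is the offered load in Erlangs."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

Adding channels while holding the offered load fixed strictly reduces blocking, which is the intuition behind reserving a few channels for the low priority class.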

  2. Using Servers to Enhance Control System Capability

    International Nuclear Information System (INIS)

    Bickley, M.; Bowling, B. A.; Bryan, D. A.; Zeijts, J. van; White, K. S.; Witherspoon, S.

    1999-01-01

Many traditional control systems include a distributed collection of front end machines to control hardware. Back end tools are used to view, modify, and record the signals generated by these front end machines. Software servers, which are a middleware layer between the front and back ends, can improve a control system in several ways. Servers can enable on-line processing of raw data and consolidation of functionality. In many cases data retrieved from the front end must be processed in order to convert the raw data into useful information. These calculations are often redundantly performed by different programs, frequently offline. Servers can monitor the raw data and rapidly perform calculations, producing new signals which can be treated like any other control system signal and can be used by any back end application. Algorithms can be incorporated to actively modify signal values in the control system based upon changes of other signals, essentially producing feedback in a control system. Servers thus increase the flexibility of a control system. Lastly, servers running on inexpensive UNIX workstations can relay or cache frequently needed information, reducing the load on front end hardware by functioning as concentrators. Rather than many back end tools connecting directly to the front end machines, increasing the work load of these machines, they instead connect to the server. Servers like those discussed above have been used successfully at the Thomas Jefferson National Accelerator Facility to provide functionality such as beam steering, fault monitoring, storage of machine parameters, and on-line data processing. The authors discuss the potential uses of such servers and share the results of work performed to date.

  3. Analisis Perbandingan Unjuk Kerja Sistem Penyeimbang Beban Web Server dengan HAProxy dan Pound Links

    Directory of Open Access Journals (Sweden)

    Dite Ardian

    2013-04-01

Full Text Available The development of internet technology has led many organizations to expand their website services. Initially a single web server, accessible to everyone through the Internet, was used, but when very many users access the web server, the traffic load on it becomes excessive. Optimization of the web server is therefore necessary to cope with the overload it experiences when traffic is high. The methodology of this final-project research includes a literature study, system design, and testing of the system. Sources include related reference books as well as several internet sources. This thesis uses HAProxy and Pound Links for load balancing of web servers. The final stage of this research is testing the network system, so as to create a web server system that is reliable and safe. The result is a web server system that can be accessed rapidly by many users simultaneously, as the HAProxy and Pound Links load-balancing front end improves web server performance, creating a web server setup with high performance and high availability.
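The two balancing policies such front ends typically offer can be sketched in a few lines (illustrative Python only, not HAProxy's or Pound's actual implementation):

```python
import itertools
from typing import Dict, Iterator, List

def round_robin(servers: List[str]) -> Iterator[str]:
    """Hand out back-end servers in a fixed cycle (the 'roundrobin' idea)."""
    return itertools.cycle(servers)

def least_connections(active: Dict[str, int]) -> str:
    """Pick the back end with the fewest active connections (the 'leastconn'
    idea); `active` maps server name to its current connection count."""
    return min(active, key=active.get)
```

Round robin spreads requests evenly regardless of state, while least-connections adapts when some requests are much slower than others.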

  4. RNA-TVcurve: a Web server for RNA secondary structure comparison based on a multi-scale similarity of its triple vector curve representation.

    Science.gov (United States)

    Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin

    2017-01-21

RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step to understand and interpret their functional relationship. The majority of functional RNAs show conserved secondary structures, rather than sequence conservation. Algorithms relying on sequence-based features alone usually have limitations in their prediction performance; hence, integrating RNA structure features is very critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free. The alignment-free algorithms of RNA comparison usually have lower time complexity than alignment-based algorithms. An alignment-free RNA comparison algorithm was proposed, in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), of the RNA sequence and corresponding secondary structure features is provided. A multi-scale similarity score of two given RNAs was then designed based on wavelet decomposition of their numerical representation. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of an RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The inputs of the web server require RNA primary sequences, while the corresponding secondary structures are optional. For primary sequences alone, the web server can compute the secondary structures using the free energy minimization algorithm of the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNA structure comparison. The comparison results with two popular RNA
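The multi-scale idea behind the wavelet step can be sketched with the simplest wavelet, the Haar average pyramid: compare two numerical curves scale by scale rather than only pointwise. This is a toy stand-in; the paper's actual decomposition and TV-curve representation are different.

```python
def haar_levels(x):
    """One-dimensional Haar approximation pyramid: the signal plus its
    successive pairwise averages, coarsest last.  The length of x must be
    a power of two for this sketch."""
    levels = [list(x)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(prev[i] + prev[i + 1]) / 2.0
                       for i in range(0, len(prev), 2)])
    return levels

def multiscale_distance(a, b):
    """Average per-scale mean absolute difference of the Haar pyramids
    (an illustrative multi-scale similarity measure, not the paper's score)."""
    la, lb = haar_levels(a), haar_levels(b)
    per_scale = [sum(abs(u - v) for u, v in zip(p, q)) / len(p)
                 for p, q in zip(la, lb)]
    return sum(per_scale) / len(per_scale)
```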

  5. Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)

    Science.gov (United States)

    Blasch, Erik

    2015-06-01

    Assessment of multi-intelligence fusion techniques includes credibility of algorithm performance, quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF against single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points; we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data; so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective which needs to be aligned with objective machine metrics.

  6. Vertical Load Distribution for Cloud Computing via Multiple Implementation Options

    Science.gov (United States)

    Phan, Thomas; Li, Wen-Syan

Cloud computing looks to deliver software as a provisioned service to end users, but the underlying infrastructure must be sufficiently scalable and robust. In our work, we focus on large-scale enterprise cloud systems and examine how enterprises may use a service-oriented architecture (SOA) to provide a streamlined interface to their business processes. To scale up the business processes, each SOA tier usually deploys multiple servers for load distribution and fault tolerance, a scenario which we term horizontal load distribution. One limitation of this approach is that load cannot be distributed further when all servers in the same tier are loaded. In complex multi-tiered SOA systems, a single business process may actually be implemented by multiple different computation pathways among the tiers, each with different components, in order to provide resilience and scalability. Such multiple implementation options give opportunities for vertical load distribution across tiers. In this chapter, we look at a novel request routing framework for SOA-based enterprise computing with multiple implementation options that takes into account the options of both horizontal and vertical load distribution.
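One plausible reading of vertical load distribution can be sketched as a minimax routing rule: among the alternative implementation pathways of a business process, send the request along the pathway whose busiest tier is least loaded. The pathway/tier schema below is an assumption for illustration, not the chapter's actual framework.

```python
from typing import Dict, List

def pick_pathway(pathways: Dict[str, List[str]],
                 tier_load: Dict[str, float]) -> str:
    """Route a request along the implementation pathway whose most-loaded
    tier is least loaded (a minimax rule over the vertical alternatives).
    `pathways` maps a pathway name to the tiers it traverses; `tier_load`
    maps a tier name to its current utilization in [0, 1]."""
    return min(pathways, key=lambda p: max(tier_load[t] for t in pathways[p]))
```

With this rule, a pathway through a saturated tier is avoided even if its other tiers are idle, which is exactly the situation horizontal balancing alone cannot fix.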

  7. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    Science.gov (United States)

    Jain, Madhu; Meena, Rakesh Kumar

    2018-03-01

A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is done on the basis of a bi-level threshold policy for the activation of the servers. A server returns to render repair service when the pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, throughput, etc., are derived to determine the performance of the machining system. To provide computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is considered to develop an adaptive neuro-fuzzy inference system (ANFIS). The validation of the numerical results obtained by the Runge-Kutta approach is also facilitated by computational results generated by ANFIS.
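The solution approach can be sketched on a toy instance: the Kolmogorov forward equations dp/dt = pQ of a small birth-death machine-repair chain (a single repairman, with none of the paper's thresholds or vacations) integrated with a classical fourth-order Runge-Kutta step.

```python
def machine_repair_generator(m: int, lam: float, mu: float):
    """Generator matrix Q for a toy machine-repair chain: state n is the
    number of failed machines out of m, each failing at rate lam, with a
    single repairman of rate mu (far simpler than the paper's model)."""
    q = [[0.0] * (m + 1) for _ in range(m + 1)]
    for n in range(m + 1):
        fail = (m - n) * lam            # one more machine breaks down
        repair = mu if n > 0 else 0.0   # the repairman fixes one machine
        if n < m:
            q[n][n + 1] = fail
        if n > 0:
            q[n][n - 1] = repair
        q[n][n] = -(fail + repair)
    return q

def rk4_step(p, q, h):
    """One classical Runge-Kutta 4 step of dp/dt = p Q."""
    def deriv(v):
        return [sum(v[i] * q[i][j] for i in range(len(v)))
                for j in range(len(v))]
    k1 = deriv(p)
    k2 = deriv([p[i] + 0.5 * h * k1[i] for i in range(len(p))])
    k3 = deriv([p[i] + 0.5 * h * k2[i] for i in range(len(p))])
    k4 = deriv([p[i] + h * k3[i] for i in range(len(p))])
    return [p[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(len(p))]

Q = machine_repair_generator(3, 0.5, 2.0)
p = [1.0, 0.0, 0.0, 0.0]       # start with all machines working
for _ in range(5000):          # integrate to t = 50, close to steady state
    p = rk4_step(p, Q, 0.01)
```

For a birth-death chain the transient solution should settle onto the detailed-balance steady state, which gives an easy sanity check on the integrator.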

  8. Calculation and mapping of critical loads in Europe: Status report 1993

    International Nuclear Information System (INIS)

    Downing, R.J.; Hettelingh, J.P.; De Smet, P.A.M.

    1993-01-01

The work of the RIVM Coordination Center for Effects (CCE) and National Focal Centers (NFCs) for Mapping over the past two years is summarized. The primary task of the critical loads mapping program during this period was to compute and map critical loads of sulphur in Europe. Efforts were undertaken to enhance the scientific foundations and policy relevance of the critical load program, and to foster consensus among producers and users of this information by means of three workshops. The applied calculation methods are described, as well as the resulting critical loads maps, based upon the outcomes of the workshops. Chapter 2 contains the most recent maps (May 1993) of the critical load of acidity as well as the critical load of sulphur and critical sulphur deposition, which are derived from the critical load of acidity. The chapter also contains maps of the sulphur deposition in Europe in 1980 and 1990, and the resulting exceedances. In chapter 3 the methods and equations used to derive the maps of critical loads and exceedances of acidity and sulphur are described, with emphasis on the advances in the calculation methods used since the first European critical loads maps were produced in 1991. In chapter 4 the methods to be used to compute and map critical loads in the future are presented. In chapter 5 an overview of the data inputs is given, along with the methods of data handling performed by the CCE to produce the current European maps of critical loads. In chapter 6 the results of an uncertainty analysis are described, which was performed on the critical loads computation methodology to assess the reliability of the computation results and the importance of the various input variables. Chapter 7 provides some conclusions and recommendations resulting from the critical load mapping activities. In Appendix 1 the reports of the workshops can be found, with additional maps of critical loads and background variables in Appendix 2. 15 figs., 11 tabs., 156 refs.

  9. Design and implementation of streaming media server cluster based on FFMpeg.

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.
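The location-then-load dispatch described above might be sketched as follows; the server schema and field names are illustrative assumptions, not the paper's implementation.

```python
from typing import Dict

def assign_server(servers: Dict[str, dict], user_region: str) -> str:
    """Prefer streaming servers in the user's region; among the candidates,
    pick the least loaded one.  `servers` maps a server name to a dict with
    illustrative "region" and "load" fields reported via feedback."""
    local = {n: s for n, s in servers.items() if s["region"] == user_region}
    pool = local or servers            # fall back to all servers if no local one
    return min(pool, key=lambda n: pool[n]["load"])
```

A redirection front end would then answer the client with the chosen server's address, so the streaming data itself bypasses the dispatcher.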

  11. Multi-scenario evaluation and specification of electromagnetic loads on ITER vacuum vessel

    International Nuclear Information System (INIS)

    Rozov, Vladimir; Martinez, J.-M.; Portafaix, C.; Sannazzaro, G.

    2014-01-01

Highlights: • We present the results of multi-scenario analysis of EM loads on ITER vacuum vessel (VV). • The differentiation of models provides an economical way to perform a large amount of calculations. • Functional approximation is proposed for distributed data/FE/numerical results specification. • Examples of specification of the load profiles by trigonometric polynomials (DHT) are given. • Principles of accounting for toroidal asymmetry at EM interactions in tokamak are considered. - Abstract: The electro-magnetic (EM) transients cause mechanical forces, which represent one of the most critical loads for the ITER vacuum vessel (VV). The paper is focused on the results of multi-scenario analysis and systematization of these EM loads, including specifically addressed pressures on shells and the net vertical force. The proposed mathematical model and computational technology, based on the use of integral parameters and operational analysis methods, enabled qualitative and quantitative analysis of the problem, time-efficient computations and systematic assessment of a large number of scenarios. The obtained estimates, found envelopes and peak values exemplify the principal loads on the VV and provide a database to support engineering load specifications. Special attention is given to the challenge of specification and documenting of the results in a form suitable for using the data in engineering applications. The practical aspects of specification of distributed data, such as experimental and finite-element (FE) results, by analytical interpolants are discussed. The example of functional approximation of the load profiles by trigonometric polynomials based on discrete Hartley transform (DHT) is given
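The discrete Hartley transform used above for the trigonometric approximation of load profiles can be sketched directly. The O(N²) form below uses cas(t) = cos t + sin t and the standard fact that applying the DHT twice returns N times the original vector (illustrative only, not the specification code itself).

```python
import math

def dht(x):
    """Direct O(N^2) discrete Hartley transform:
    H[k] = sum_n x[n] * cas(2*pi*k*n/N), with cas(t) = cos(t) + sin(t).
    Applying dht twice gives N times the input, so the same routine also
    inverts the transform (after dividing by N)."""
    n = len(x)
    return [sum(x[j] * (math.cos(2.0 * math.pi * k * j / n) +
                        math.sin(2.0 * math.pi * k * j / n))
                for j in range(n))
            for k in range(n)]
```

Truncating the Hartley coefficients and inverting gives exactly the kind of compact trigonometric-polynomial description of a sampled load profile that the abstract refers to.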

  12. Multi-scenario evaluation and specification of electromagnetic loads on ITER vacuum vessel

    Energy Technology Data Exchange (ETDEWEB)

    Rozov, Vladimir, E-mail: vladimir.rozov@iter.org; Martinez, J.-M.; Portafaix, C.; Sannazzaro, G.

    2014-10-15

Highlights: • We present the results of multi-scenario analysis of EM loads on ITER vacuum vessel (VV). • The differentiation of models provides an economical way to perform a large amount of calculations. • Functional approximation is proposed for distributed data/FE/numerical results specification. • Examples of specification of the load profiles by trigonometric polynomials (DHT) are given. • Principles of accounting for toroidal asymmetry at EM interactions in tokamak are considered. - Abstract: The electro-magnetic (EM) transients cause mechanical forces, which represent one of the most critical loads for the ITER vacuum vessel (VV). The paper is focused on the results of multi-scenario analysis and systematization of these EM loads, including specifically addressed pressures on shells and the net vertical force. The proposed mathematical model and computational technology, based on the use of integral parameters and operational analysis methods, enabled qualitative and quantitative analysis of the problem, time-efficient computations and systematic assessment of a large number of scenarios. The obtained estimates, found envelopes and peak values exemplify the principal loads on the VV and provide a database to support engineering load specifications. Special attention is given to the challenge of specification and documenting of the results in a form suitable for using the data in engineering applications. The practical aspects of specification of distributed data, such as experimental and finite-element (FE) results, by analytical interpolants are discussed. The example of functional approximation of the load profiles by trigonometric polynomials based on discrete Hartley transform (DHT) is given.

  13. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers with the Linux operating system, and application servers. The web application is intended for administrators of these systems, as an aid to better understanding the current state, load, and operation of the individual components of the server systems.

  14. Server-Aided Verification Signature with Privacy for Mobile Computing

    Directory of Open Access Journals (Sweden)

    Lingling Xu

    2015-01-01

Full Text Available With the development of wireless technology, much data communication and processing has been conducted on mobile devices with wireless connections. Although mobile devices will improve in absolute capability, they will always be resource-poor relative to static ones, and therefore cannot process some expensive computational tasks with their constrained computational resources. To address this problem, server-aided computing has been studied, in which power-constrained mobile devices can outsource some expensive computations to a server with powerful resources in order to reduce their computational load. However, in existing server-aided verification signature schemes, the server can learn some information about the message-signature pair to be verified, which is undesirable especially when the message includes some secret information. In this paper, we mainly study server-aided verification signatures with privacy, in which the message-signature pair to be verified can be protected from the server. Two definitions of privacy for server-aided verification signatures are presented under collusion attacks between the server and the signer. Then, based on existing signatures, two concrete server-aided verification signature schemes with privacy are proposed, both of which are proved secure.
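The schemes themselves are cryptographic, but the underlying principle, that checking a server's answer should be much cheaper than producing it, can be illustrated with Freivalds' classical randomized verification of an outsourced matrix product (a toy analogy for server-aided computation, not the paper's signature scheme).

```python
import random

def freivalds_check(a, b, c, trials=30, seed=42):
    """Randomized check that c == a*b for n x n integer matrices, using
    O(n^2) work per trial instead of recomputing the O(n^3) product.
    A wrong c escapes detection with probability at most 2**-trials."""
    rng = random.Random(seed)
    n = len(a)
    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in range(n)]
        br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False   # the server's answer is definitely wrong
    return True            # correct with high probability
```

The client only ever multiplies matrices by vectors, mirroring the asymmetry the paper exploits: heavy work at the server, light verification at the mobile device.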

  15. Critical loads - assessment of uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Barkman, A.

    1998-10-01

The effects of data uncertainty in applications of the critical loads concept were investigated on different spatial resolutions in Sweden and the northern Czech Republic. Critical loads of acidity (CL) were calculated for Sweden using the biogeochemical model PROFILE. Three methods with different structural complexity were used to estimate the adverse effects of SO₂ concentrations in the northern Czech Republic. Data uncertainties in the calculated critical loads/levels and exceedances (EX) were assessed using Monte Carlo simulations. Uncertainties within cumulative distribution functions (CDF) were aggregated by accounting for the overlap between site-specific confidence intervals. Aggregation of data uncertainties within CDFs resulted in lower CL and higher EX best estimates in comparison with percentiles represented by individual sites. Data uncertainties were consequently found to advocate larger deposition reductions to achieve non-exceedance based on low critical load estimates on 150 x 150 km resolution. Input data were found to impair the level of differentiation between geographical units at all investigated resolutions. Aggregation of data uncertainty within CDFs involved more constrained confidence intervals for a given percentile. Differentiation as well as identification of grid cells on 150 x 150 km resolution subjected to EX was generally improved. Calculation of the probability of EX was shown to preserve the possibility to differentiate between geographical units. Re-aggregation of the 95%-ile EX on 50 x 50 km resolution generally increased the confidence interval for each percentile. Significant relationships were found between forest decline and the three methods addressing risks induced by SO₂ concentrations. Modifying SO₂ concentrations by accounting for the length of the vegetation period was found to constitute the most useful trade-off between structural complexity, data availability and effects of data uncertainty. Data
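A toy version of the Monte Carlo uncertainty propagation described above: treat the critical load estimate as a random variable and estimate the probability of exceedance by sampling. The normal distribution and the parameters are illustrative assumptions, far simpler than the PROFILE inputs.

```python
import random

def exceedance_probability(deposition, cl_mean, cl_sd, n=20000, seed=1):
    """Monte Carlo estimate of P(deposition > critical load) when the
    critical load is modelled as Normal(cl_mean, cl_sd).  The choice of
    distribution and all parameters here are illustrative only."""
    rng = random.Random(seed)
    hits = sum(deposition > rng.gauss(cl_mean, cl_sd) for _ in range(n))
    return hits / n
```

Reporting this probability, rather than a single exceeded/not-exceeded flag, is the idea the abstract credits with preserving differentiation between geographical units.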

  16. Applying Stochastic Metaheuristics to the Problem of Data Management in a Multi-Tenant Database Cluster

    Directory of Open Access Journals (Sweden)

    E. A. Boytsov

    2014-01-01

    Full Text Available A multi-tenant database cluster is a concept of a data-storage subsystem for cloud applications with the multi-tenant architecture. The cluster is a set of relational database servers with a single entry point, combined into one unit by a cluster controller. The system is aimed at applications developed according to the Software as a Service (SaaS) paradigm and places tenants on database servers so as to provide their isolation, data backup and the most effective use of available computational power. One of the most important problems for such a system is the effective distribution of data across servers, which affects the load on individual cluster nodes and fault tolerance. This paper considers a data-management approach based on a load-balancing quality measure function. This function is used during the initial placement of new tenants and also during placement optimization steps. Standard metaheuristic optimization schemes such as simulated annealing and tabu search are used to find a better tenant placement.
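A simulated annealing search of the kind named above can be sketched as follows; the quality measure (variance of per-server load), tenant sizes, server count and cooling schedule are all invented for illustration and are not the paper's actual function:

```python
import math
import random

def placement_cost(placement, tenant_loads, n_servers):
    """Assumed quality measure: variance of total load across servers
    (lower variance = better balance)."""
    loads = [0.0] * n_servers
    for tenant, server in enumerate(placement):
        loads[server] += tenant_loads[tenant]
    mean = sum(loads) / n_servers
    return sum((l - mean) ** 2 for l in loads) / n_servers

def anneal(tenant_loads, n_servers, steps=5000, t0=100.0, seed=1):
    rng = random.Random(seed)
    placement = [rng.randrange(n_servers) for _ in tenant_loads]
    cost = placement_cost(placement, tenant_loads, n_servers)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9      # linear cooling schedule
        i = rng.randrange(len(placement))
        old = placement[i]
        placement[i] = rng.randrange(n_servers)  # move one tenant
        new_cost = placement_cost(placement, tenant_loads, n_servers)
        # accept improvements always; accept worse moves with Boltzmann probability
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / t):
            placement[i] = old                   # reject the move
        else:
            cost = new_cost
    return placement, cost

tenant_loads = [5, 3, 8, 2, 7, 4, 6, 1, 9, 5]    # made-up tenant sizes
placement, cost = anneal(tenant_loads, n_servers=3)
```

Tabu search would differ only in the acceptance rule (a memory of recently visited moves instead of a temperature).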

  17. CACHING DATA STORED IN SQL SERVER FOR OPTIMIZING THE PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2016-12-01

    Full Text Available This paper presents the architecture of a web site together with different techniques used to optimize the performance of loading web content. The architecture presented here is for an e-commerce site developed on Windows with MVC, IIS and Microsoft SQL Server. Caching data is a technique used by browsers, by web servers themselves, or by proxy servers. Caching is done without the knowledge of users, yet users must still be provided with the most recent information from the server. This means that the caching mechanism has to be aware of any modification of data on the server. Different kinds of product-related information are presented on an e-commerce site, such as images, product codes, descriptions, properties or stock
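The modification-aware caching requirement above can be sketched language-neutrally; this is an illustrative design (not Microsoft's implementation) in which each cache entry carries a version stamp and is reloaded as soon as the source's version moves on:

```python
# Illustrative sketch: a cache that stays aware of server-side modifications
# by storing a version stamp with each entry. All names are hypothetical.

class VersionedCache:
    def __init__(self, fetch, current_version):
        self._fetch = fetch                  # loads data from the "server"
        self._version_of = current_version   # reports the source's version
        self._store = {}                     # key -> (version, value)

    def get(self, key):
        version = self._version_of(key)
        cached = self._store.get(key)
        if cached and cached[0] == version:
            return cached[1]                 # cache hit: data still fresh
        value = self._fetch(key)             # miss or stale: reload
        self._store[key] = (version, value)
        return value

# Simulated "database": product descriptions with a modification counter.
db = {"p1": ("Blue mug", 1)}
calls = []

cache = VersionedCache(
    fetch=lambda k: (calls.append(k), db[k][0])[1],
    current_version=lambda k: db[k][1],
)

a = cache.get("p1")          # first read goes to the database
b = cache.get("p1")          # second read is served from the cache
db["p1"] = ("Red mug", 2)    # server-side modification bumps the version
c = cache.get("p1")          # stale entry detected, reloaded
```

HTTP-level caching (ETag/Last-Modified validation between browser, proxy and IIS) follows the same freshness-check pattern.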

  18. Usage of Thin-Client/Server Architecture in Computer Aided Education

    Science.gov (United States)

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  19. Critical loads for vegetation. Definition, use and limits

    International Nuclear Information System (INIS)

    Thimonier, A.; Dupouey, J.L.

    1993-01-01

    Vegetation is a key compartment of ecosystems. It contains a large part of the biodiversity at the species level. For the evaluation of critical loads, we have to separate different receptors: lower plants (algae, fungi, lichens and mosses) and vascular plants. Trees must be distinguished due to their economic value. We analyze the different changes that pollution produces on vegetation: the state of health of individuals, changes in the biology and genetics at the population level, changes in the biodiversity or the specific composition at the community level. Calculation of critical loads is based on observational or experimental studies, in more or less controlled environments. However, they cannot yet be obtained through models of vegetation changes. Some results have been acquired at the European level, mainly for critical loads for nitrogen, but these results have come mostly from Northern Europe. Moreover, only heathlands and acidic forests have been studied in depth. Critical loads for a lot of environment types are still unknown. Lower plants and interactions between vegetation and animals need more investigation

  20. Improvements to the National Transport Code Collaboration Data Server

    Science.gov (United States)

    Alexander, David A.

    2001-10-01

    The data server of the National Transport Code Collaboration Project provides a universal network interface to interpolated or raw transport data accessible by a universal set of names. Data can be acquired from a local copy of the International Multi-Tokamak (ITER) profile database as well as from TRANSP trees of MDSplus data systems on the net. Data is provided to the user's network client via a CORBA interface, thus providing stateful data server instances, which have the advantage of remembering the desired interpolation, data set, etc. This paper will review the status and discuss the recent improvements made to the data server, such as the modularization of the data server and the addition of HDF5 and MDSplus data file writing capability.

  1. Load allocation of power plant using multi echelon economic dispatch

    Science.gov (United States)

    Wahyuda, Santosa, Budi; Rusdiansyah, Ahmad

    2017-11-01

    In this paper, the allocation of power plant load, which is usually done with a single echelon as in load flow calculations, is expanded into a multi-echelon formulation. A plant load allocation model based on the integration of economic dispatch and the multi-echelon problem is proposed. The resulting model is called Single Objective Multi Echelon Economic Dispatch (SOME ED). This model allows the distribution of electrical power to be modeled in more detail at the transmission and distribution substations along the existing network. In an interconnected system, the distance between a plant and the load center is usually large; therefore, the loss in this model is treated as a function of distance. The advantage of this model is its capability of allocating electrical loads properly, as well as providing economic dispatch information with the flexibility of the electric power system that results from using multiple echelons. In this model, flexibility can be viewed from two sides, namely the supply and demand sides, so that the security of the power system is maintained. The model was tested on a small artificial data set, and the results demonstrated good performance. The model remains open to further development, considering integration with renewable energy, multi-objective formulations with environmental issues, and application to larger-scale cases.
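The core idea of an economic dispatch with distance-dependent loss can be sketched as below; the quadratic cost curves, distances and loss coefficient are invented for illustration and are not the SOME ED model itself:

```python
# Hedged sketch of a single-objective economic dispatch where transmission
# loss grows with distance to the load center, as the abstract proposes.

def dispatch(demand, plants, step=1.0):
    """Brute-force dispatch of two plants minimizing total fuel cost; each
    plant must generate its delivered share plus a distance-dependent loss."""
    (a1, b1, d1), (a2, b2, d2) = plants   # (quadratic, linear, distance km)
    loss_coeff = 0.0001                   # assumed per-km loss factor
    best = None
    p1 = 0.0
    while p1 <= demand:
        g1 = p1 * (1 + loss_coeff * d1)             # generation of plant 1
        g2 = (demand - p1) * (1 + loss_coeff * d2)  # generation of plant 2
        cost = a1 * g1**2 + b1 * g1 + a2 * g2**2 + b2 * g2
        if best is None or cost < best[0]:
            best = (cost, g1, g2)
        p1 += step
    return best

# Plant 1 is cheaper per MW^2 but farther terms are hypothetical throughout.
cost, g1, g2 = dispatch(demand=100.0, plants=[(0.01, 2.0, 50), (0.02, 1.5, 200)])
```

Because losses are modeled, total generation `g1 + g2` necessarily exceeds the delivered demand.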

  2. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    Science.gov (United States)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.
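The "mix & match" query-language access described above can be illustrated with a WCPS-style request that computes a vegetation index server-side and ships back only the encoded result; the coverage name `ModisScene` and the band names `red` and `nir` are hypothetical, not actual EarthServer datasets:

```
for c in ( ModisScene )
return encode(
  ( c.nir - c.red ) / ( c.nir + c.red ),
  "image/tiff"
)
```

The per-pixel arithmetic runs inside the array database, so only the final GeoTIFF crosses the network, which is the point of server-side Big Earth Data Analytics.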

  3. (m, M) Machining system with two unreliable servers, mixed spares and common-cause failure

    OpenAIRE

    Jain, Madhu; Mittal, Ragini; Kumari, Rekha

    2015-01-01

    This paper deals with a multi-component machine repair model having provision of warm standby units and a repair facility consisting of two heterogeneous servers (primary and secondary) to repair the failed units. The failure of operating and standby units may occur individually or due to some common cause. The primary server may fail partially, followed by full failure, whereas the secondary server faces complete failure only. The lifetimes of servers and operating/standby units and their re...

  4. [Radiology information system using HTML, JavaScript, and Web server].

    Science.gov (United States)

    Sone, M; Sasaki, M; Oikawa, H; Yoshioka, K; Ehara, S; Tamakawa, Y

    1997-12-01

    We have developed a radiology information system using intranet techniques, including hypertext markup language, JavaScript, and Web server. JavaScript made it possible to develop an easy-to-use application, as well as to reduce network traffic and load on the server. The system we have developed is inexpensive and flexible, and its development and maintenance are much easier than with the previous system.

  5. Towards Big Earth Data Analytics: The EarthServer Approach

    Science.gov (United States)

    Baumann, Peter

    2013-04-01

    Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly are made up of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask for what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantic-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is comprised of rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data

  6. Improved materials management through client/server computing

    International Nuclear Information System (INIS)

    Brooks, D.; Neilsen, E.; Reagan, R.; Simmons, D.

    1992-01-01

    This paper reports that materials management and procurement impacts every organization within an electric utility from power generation to customer service. An efficient material management and procurement system can help improve productivity and minimize operating costs. It is no longer sufficient to simply automate materials management using inventory control systems. Smart companies are building centralized data warehouses and use the client/server style of computing to provide real time data access. This paper describes how Alabama Power Company, Southern Company Services and Digital Equipment Corporation transformed two existing applications, a purchase order application within DEC's ALL-IN-1 environment and a materials management application within an IBM CICS environment, into a data warehouse - client/server application. An application server is used to overcome incompatibilities between computing environments and provide easy, real-time access to information residing in multi-vendor environments

  7. Professional Microsoft SQL Server 2012 Integration Services

    CERN Document Server

    Knight, Brian; Moss, Jessica M; Davis, Mike; Rock, Chris

    2012-01-01

    An in-depth look at the radical changes to the newest release of SSIS. Microsoft SQL Server 2012 Integration Services (SSIS) builds on the revolutionary database product suite first introduced in 2005. With this crucial resource, you will explore how this newest release serves as a powerful tool for performing extraction, transformation, and load (ETL) operations. A team of SQL Server experts deciphers this complex topic and provides detailed coverage of the new features of the 2012 product release. In addition to technical updates and additions, the authors present you with a new set of SSIS b

  8. Perancangan dan Pengujian Load Balancing dan Failover Menggunakan NginX

    Directory of Open Access Journals (Sweden)

    Rahmad Dani

    2017-06-01

    Full Text Available Web sites with high traffic can impose a heavy workload on the server side, which in turn degrades server performance and can even cause total system failure. One solution to this problem is to apply load balancing and failover techniques. Load balancing is a technology for distributing load across several servers, ensuring that no single server becomes overloaded. Failover, meanwhile, is the ability of a system to switch to a backup system if the main system fails. In this study, load balancing with a failover technique is implemented on the Ubuntu operating system. The core software used in this study is Nginx and KeepAlived: Nginx serves as the load balancer, while KeepAlived implements the failover technique. Several scenarios were prepared to test the designed load balancing system, with testing carried out using the JMeter software. Based on the tests performed, the designed system successfully distributed the request load and continued to operate despite failures of the load balancer server or of the backend servers. Moreover, in several tests, the use of load balancing was shown to reduce response time and increase throughput, improving overall system performance. Based on these results, a load balancing and failover system using Nginx can serve as a solution for web server systems hosting high-traffic web sites.
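The Nginx side of such a setup can be sketched as below; the upstream name, addresses and failover conditions are invented for illustration and are not taken from the study:

```nginx
# Minimal load-balancing sketch (hypothetical addresses and names).
upstream backend {
    server 192.168.1.11:80;
    server 192.168.1.12:80;
    server 192.168.1.13:80 backup;   # used only if the other servers fail
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # retry the next backend when one errors out or times out
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

KeepAlived would sit in front of two such load balancer hosts, moving a virtual IP between them so the balancer itself is not a single point of failure.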

  9. Critical traffic loading for the design of prestressed concrete bridge

    International Nuclear Information System (INIS)

    Hassan, M.I.U.

    2009-01-01

    A study has been carried out to determine critical traffic loadings for the design of bridge superstructures. A prestressed concrete girder bridge already constructed in Lahore is selected for the analysis as an example. Standard traffic loadings according to AASHTO (American Association of State Highway and Transportation Officials) and Pakistan Highway Standards are used for this purpose. These include (1) HL-93 truck, (2) lane and (3) tandem loadings, in addition to (4) military tank loading, (5) Class-A, (6) Class-B and (7) Class-AA loading, (8) NLC (National Logistic Cell) and (9) Volvo truck loadings. The bridge superstructure, including the transom beam, is analyzed using ASD (Allowable Stress Design) and LRFD (Load and Resistance Factor Design) provisions of the AASHTO specifications. For the analysis, one longer and one shorter span are selected. This includes the analysis of the bridge deck; interior and exterior girders; a typical transom beam; and a pier. Dead and live load determination is carried out using both computer-aided and manual calculations. Evaluation of the traffic loadings is carried out for all bridge components to find the critical loading. HL-93 loading turns out to be the most critical loading, and where this loading is not critical, as in the case of bridge decks, a factor of 1.15 is introduced to make the governing loading equivalent to HL-93 loading. SAP2000 and MS Excel are employed for the analysis of the bridge superstructure subjected to this loading. Internal forces are obtained for the structural elements of the bridge for all traffic loadings mentioned. It is concluded that HL-93 loading can be used for the design of prestressed concrete girder bridges. Bridge design authorities like the NHA (National Highway Authority) and various city development authorities are using different standard traffic loadings. A number of suggestions are made from the results of the research work related to traffic loadings and method of design. These recommendations may be

  10. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays

  11. WebSpy: An Architecture for Monitoring Web Server Availability in a Multi-Platform Environment

    Directory of Open Access Journals (Sweden)

    Madhan Mohan Thirukonda

    2002-01-01

    Full Text Available For an electronic business (e-business, customer satisfaction can be the difference between long-term success and short-term failure. Customer satisfaction is highly impacted by Web server availability, as customers expect a Web site to be available twenty-four hours a day and seven days a week. Unfortunately, unscheduled Web server downtime is often beyond the control of the organization. What is needed is an effective means of identifying and recovering from Web server downtime in order to minimize the negative impact on the customer. An automated architecture, called WebSpy, has been developed to notify administration and to take immediate action when Web server downtime is detected. This paper describes the WebSpy architecture and differentiates it from other popular Web monitoring tools. The results of a case study are presented as a means of demonstrating WebSpy's effectiveness in monitoring Web server availability.

  12. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample of server-type IT equipment from different manufacturers by measuring power at the server power supply cords. The results are specific to the equipment and methods used; however, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed from commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for a manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify server efficiency for three different brands. Efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry-standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a

  13. TBI server: a web server for predicting ion effects in RNA folding.

    Science.gov (United States)

    Zhu, Yuhong; He, Zhaojian; Chen, Shi-Jie

    2015-01-01

    Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding by including ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis for ion effects in RNA folding including the ion-dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  14. TBI server: a web server for predicting ion effects in RNA folding.

    Directory of Open Access Journals (Sweden)

    Yuhong Zhu

    Full Text Available Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of the RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding by including ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis of ion effects in RNA folding, including the ion dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  15. Multi-Agent Software Engineering

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2014-01-01

    This paper proposes an alarm-monitoring system for people based on multi-agent technology using maps. The system monitors the users' physical context using their mobile phones. The agents on the mobile phones are responsible for collecting, processing and sending data to the server; they determine the parameters of their environment via sensors. On the other side, a set of agents on the server stores this data and checks the preconditions of the restrictions associated with the user, in order to trigger the appropriate alarms. These alarms are sent not only to the user, who is alerted so as to avoid the violated restriction, but also to his supervisor. The proposed system is a general-purpose alarm system that can be used in different critical application areas. It has been applied to monitoring workers at radiation sites, so that these workers can carry out their tasks in radiation environments safely
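The server-side check-and-notify flow described above can be sketched as follows; the restriction names, threshold values and recipient identifiers are hypothetical, not taken from the paper:

```python
# Illustrative sketch of the alarm flow: a server-side agent compares a
# mobile agent's sensor reading against per-user restrictions and notifies
# both the user and the supervisor. All names and numbers are invented.

def check_restrictions(reading, restrictions):
    """Return the names of all restrictions whose limit is exceeded."""
    alarms = []
    for name, limit in restrictions.items():
        if reading.get(name, 0) > limit:
            alarms.append(name)
    return alarms

def notify(alarms, user, supervisor):
    """Each alarm goes both to the monitored user and to their supervisor."""
    return [(recipient, alarm)
            for alarm in alarms
            for recipient in (user, supervisor)]

# Mobile-agent reading from a (hypothetical) radiation-site sensor.
reading = {"dose_rate_uSv_h": 12.5, "temperature_c": 24.0}
restrictions = {"dose_rate_uSv_h": 10.0, "temperature_c": 45.0}

messages = notify(check_restrictions(reading, restrictions),
                  "worker_7", "supervisor_2")
```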

  16. Microsoft® Exchange Server 2007 Administrator's Companion

    CERN Document Server

    Glenn, Walter; Maher, Joshua

    2009-01-01

    Get your mission-critical messaging and collaboration systems up and running with the essential guide to deploying and managing Exchange Server 2007, now updated for SP1. This comprehensive administrator's reference covers the full range of server and client deployments, unified communications, security features, performance optimization, troubleshooting, and disaster recovery. It also includes four chapters on security policy, tools, and techniques to help protect messaging systems from viruses, spam, and phishing. Written by expert authors Walter Glenn and Scott Lowe, this reference deliver

  17. Mapping critical levels/loads for the Slovak Republic. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Zavodsky, D.; Babiakova, G.; Mitosinkova, M. [and others]

    1996-08-01

    As a part of the Agreement on Environmental Cooperation between Norway and Slovakia, a project "Mapping Critical Levels/Loads for Slovakia" was established. This report presents the final project results. Critical loads for forests, surface waters and ground waters and their exceedances were calculated by means of the steady-state mass balance model PROFILE for soils, and the steady-state water chemistry method for waters. A grid distance of 10 km was used. Because sulphur deposition has been decreasing, the exceedances of the critical load of acidity and the critical sulphur deposition of forest soils decreased from 1990 to 1995. Practically no acidity exceedances for surface water or ground water were found in 1995. The critical level of ozone for forests was exceeded all over Slovakia. In the Tatra mountains the exceedance was over 25000 ppb.h. 23 refs., 3 figs., 3 tabs.

  18. Mass balance approaches to assess critical loads and target loads of heavy metals for terrestrial and aquatic ecosystems

    NARCIS (Netherlands)

    Vries, de W.; Groenenberg, J.E.; Posch, M.

    2015-01-01

    Critical loads of heavy metals address not only ecotoxicological effects on organisms in soils and surface waters, but also food quality in view of public health. A critical load for metals is the load resulting at steady state in a metal concentration in a compartment (e.g. soil solution, surface

  19. Critical acidity loads in France; Charges critiques d'acidité en France

    Energy Technology Data Exchange (ETDEWEB)

    Probst, A.; Party, J.P.; Fevrier, C. [Centre de Geochimie de la Surface (UPR 06251 du CNRS), 67 - Strasbourg (France); Dambrine, E. [Centre de Recherches Forestieres, INRA, 45 - Orleans (France); Thomas, A.L.; King, D. [Institut National de Recherches Agronomique (INRA), 45 - ORDON (France); Stussi, J.M. [Centre National de la Recherche Scientifique (CNRS), 54 - Vandoeuvre-les-Nancy (France)

    1997-12-31

    Based on results from several systematic forest and surface water monitoring programs carried out in various parts of France as well as elsewhere in Europe, acidity critical loads have been calculated for soils and surface waters. Critical loads are presented for waters and soils in crystalline mountainous regions such as the Ardennes, Vosges and Massif Central; links with geochemistry, ecosystems and tree species are discussed, and perspectives are given for the calculation of acid and nitrogen critical loads for the whole of France

  20. [Mapping Critical Loads of Heavy Metals for Soil Based on Different Environmental Effects].

    Science.gov (United States)

    Shi, Ya-xing; Wu, Shao-hua; Zhou, Sheng-lu; Wang, Chun-hui; Chen, Hao

    2015-12-01

    China's rapid industrialization and urbanization have caused a growing problem of soil heavy metal pollution, threatening the environment and human health. Therefore, prevention and management of heavy metal pollution become particularly important. Critical loads of heavy metals are an important management tool that can be utilized to prevent the occurrence of heavy metal pollution. Our study was based on three cases: status balance, water environmental effects and health risks. We used the steady-state mass balance equation to calculate the critical loads of Cd, Cu, Pb and Zn at different effect levels and to analyze the values and spatial variation of the critical loads. In addition, we used the annual input fluxes of heavy metals of the agro-ecosystem in the Yangtze River delta and China to estimate the proportion of area with exceedance of critical loads. The results demonstrated that the critical load value of Cd was the minimum, while the values of Cu and Zn were larger. There were spatial differences among the critical loads of the four elements in the study area: low critical load areas mainly occurred in woodland, high-value areas were distributed in the east and southwest of the study area, while medium and medium-high values mainly occurred in farmland. Comparing the input fluxes of heavy metals, we found that Pb and Zn exceeded the critical loads under the different environmental effects in more than 90% of the study area. Critical load exceedance of Cd mainly occurred under the status balance and the water environmental effect, while Cu exceedance occurred under the status balance and water environmental effect, with a higher proportion of exceeded area. The critical loads of heavy metals at different effect levels in this study could serve as a reference for effective control of the emissions of heavy metals and for preventing the occurrence of heavy metal pollution.
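A commonly quoted simplified form of such a steady-state mass balance for a metal M, as found in the general critical-loads literature (a sketch, not necessarily the exact equation used in this study), is:

```latex
% Critical load of metal M: net removal by biomass uptake plus the
% tolerable (critical) leaching flux from the considered soil layer
CL(M) = M_{u} + M_{le(\mathrm{crit})},
\qquad
M_{le(\mathrm{crit})} = c_{\mathrm{crit}} \cdot Q
```

where \(M_{u}\) is the net metal uptake removed by harvest, \(c_{\mathrm{crit}}\) is the critical limit concentration in the soil solution (chosen differently for status balance, water quality or health risk effects), and \(Q\) is the water flux leaching from the soil layer.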

  1. Creating a Data Warehouse using SQL Server

    DEFF Research Database (Denmark)

    Sørensen, Jens Otto; Alnor, Karl

    1999-01-01

    In this paper we construct a Star Join Schema and show how this schema can be created using the basic tools delivered with SQL Server 7.0. Major objectives are to keep the operational database unchanged so that data loading can be done without disturbing the business logic of the operational
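A minimal star join schema of the kind buildable with basic SQL Server tools might look like the sketch below; all table and column names are invented for illustration and are not the paper's actual schema:

```sql
-- Hypothetical star join schema: a central fact table surrounded by
-- dimension tables, each joined via a surrogate key.
CREATE TABLE DimDate (
    DateKey      INT PRIMARY KEY,
    CalendarDate DATETIME,
    MonthName    VARCHAR(20)
);

CREATE TABLE DimProduct (
    ProductKey   INT PRIMARY KEY,
    ProductName  VARCHAR(100)
);

-- Fact table: one row per sale; foreign keys point at the dimensions.
CREATE TABLE FactSales (
    DateKey      INT NOT NULL REFERENCES DimDate(DateKey),
    ProductKey   INT NOT NULL REFERENCES DimProduct(ProductKey),
    Quantity     INT,
    Amount       MONEY
);
```

Loading such a schema from the untouched operational database is then a matter of periodic extract queries feeding the dimension tables first and the fact table last, so the foreign keys always resolve.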

  2. Potential nitrogen critical loads for northern Great Plains grassland vegetation

    Science.gov (United States)

    Symstad, Amy J.; Smith, Anine T.; Newton, Wesley E.; Knapp, Alan K.

    2015-01-01

    The National Park Service is concerned that increasing atmospheric nitrogen deposition caused by fossil fuel combustion and agricultural activities could adversely affect the northern Great Plains (NGP) ecosystems in its trust. The critical load concept facilitates communication between scientists and policy makers or land managers by translating the complex effects of air pollution on ecosystems into concrete numbers that can be used to inform air quality targets. A critical load is the exposure level below which significant harmful effects on sensitive elements of the environment do not occur. A recent review of the literature suggested that the nitrogen critical load for Great Plains vegetation is 10-25 kg N/ha/yr. For comparison, current atmospheric nitrogen deposition in NGP National Park Service (NPS) units ranges from ~4 kg N/ha/yr in the west to ~13 kg N/ha/yr in the east. The suggested critical load, however, was derived from studies far outside the NGP, and from experiments investigating nitrogen loads substantially higher than current atmospheric deposition in the region. Therefore, to better determine the nitrogen critical load for sensitive elements in NGP parks, we conducted a four-year field experiment in three northern Great Plains vegetation types at Badlands and Wind Cave National Parks. The vegetation types were chosen because of their importance in NGP parks, their expected sensitivity to nitrogen addition, and to span a range of natural fertility. In the experiment, we added nitrogen at rates ranging from below current atmospheric deposition (2.5 kg N/ha/yr) to far above those levels but commensurate with earlier experiments (100 kg N/ha/yr). We measured the response of a variety of vegetation and soil characteristics shown to be sensitive to nitrogen addition in other studies, including plant biomass production, plant tissue nitrogen concentration, plant species richness and composition, non-native species abundance, and soil inorganic

  3. PENGGUNAAN KONEKSI CORBA DENGAN PEMROGRAMAN MIDAS MULTI-TIER APPLICATION DALAM SISTEM RESERVASI HOTEL

    Directory of Open Access Journals (Sweden)

    Irwan Kristanto Julistiono

    2001-01-01

    Full Text Available This paper describes a multi-tier system using CORBA technology for a hotel reservation program, accessible both from a web browser and from a client program. The client software connects to the application server over a CORBA connection, and the application server connects to SQL Server 7.0 via ODBC. There are two types of client: a web client and a Delphi client. The web browser client is built with Delphi ActiveX Form technology, in which the application is developed like a regular form, although it has shortcomings in its integration with HTML. Besides being extensible, a multi-tier CORBA application of this kind can be designed with multiple database servers, multiple middle-tier servers, and multiple clients, so that the whole system can be integrated. The weaknesses of this approach are the complexity of CORBA itself, which makes the system difficult to understand, and the need, in the multi-tier design, for a dedicated procedure to determine which server each client should use.

  4. Mapping critical loads in Europe in the framework of the UN/CEE

    International Nuclear Information System (INIS)

    Hettelingh, J.P.

    1993-01-01

    Critical loads for acidity, sulphur and nitrogen have been computed and geographically mapped for Europe. Critical loads are compared to actual deposition of acidity and of sulphur. Results show that parts of central and north-west Europe receive 20 times or more acidity than their ecosystems' critical loads, thus affecting long-term sustainability. The Regional Acidification INformation and Simulation (RAINS) model is used to assess two emission-reduction scenarios. The first scenario describes currently applied reductions, whereas the second assesses the application of maximum feasible reductions of SO2 and NOx. The latter scenario significantly reduces the area of Europe where critical loads are exceeded. In general, it is shown that a pan-European policy is essential for obtaining an efficient reduction of acidic emissions throughout Europe. For France in particular, it is concluded that the exceedance of critical loads for acidity is largely due to ammonia

  5. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  6. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  7. Windows Server 2012 vulnerabilities and security

    Directory of Open Access Journals (Sweden)

    Gabriel R. López

    2015-09-01

    Full Text Available This investigation analyses the history of the vulnerabilities of the Windows Server 2012 base system, highlighting the most critical vulnerabilities reported every four months from its release to the date of the research, organized by vulnerability type according to the NIST classification. Next, given the official vulnerabilities of the system, the authors show how a critical vulnerability is treated by Microsoft in order to counter the security flaw. The authors then present the recommended security approaches for Windows Server 2012, which focus on the baseline software given by Microsoft; update, patch and change management; hardening practices; and the application of Active Directory Rights Management Services (AD RMS). AD RMS is considered an important feature since it can protect the system even when it is compromised, using access lists at the document level. Finally, a state-of-the-art review of Windows Server 2012 security analyses solutions from third-party vendors that offer security products for the base system studied here. As a recommended solution, the authors present the security vendor Symantec, with its successful features as well as characteristics that the authors consider could be improved in future versions of the security solution.

  8. UNIX secure server : a free, secure, and functional server example

    OpenAIRE

    Sastre, Hugo

    2016-01-01

    The purpose of this thesis work was to introduce a UNIX server as a personal server and also as a starting point for investigation and development at a professional level. The objective of this thesis was to build a secure server providing not only an FTP server but also an HTTP server and a cloud system for remote backups. OpenBSD was used as the operating system. OpenBSD is a UNIX-like operating system made by hackers for hackers. The difference with other systems that might partially provid...

  9. Microgrids for Service Restoration to Critical Load in a Resilient Distribution System

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yin; Liu, Chen-Ching; Schneider, Kevin P.; Tuffner, Francis K.; Ton, Dan T.

    2018-01-01

    Microgrids can act as emergency sources to serve critical loads when utility power is unavailable. This paper proposes a resiliency-based methodology that uses microgrids to restore critical loads on distribution feeders after a major disaster. Due to the limited capacity of distributed generators (DGs) within microgrids, the dynamic performance of the DGs during the restoration process becomes essential. In this paper, the stability of microgrids, limits on frequency deviation, and limits on transient voltage and current of DGs are incorporated as constraints of the critical load restoration problem. Limits on the amount of generation resources within microgrids are also considered. By introducing the concepts of a restoration tree and a load group, restoration of critical loads is transformed into a maximum coverage problem, which is a linear integer program (LIP). The restoration paths and actions for critical loads are determined by solving the LIP. A 4-feeder, 1069-bus unbalanced test system with four microgrids is used to demonstrate the effectiveness of the proposed method. The method is applied to the distribution system in Pullman, WA, resulting in a strategy that uses generators on the Washington State University campus to restore service to the hospital and City Hall in Pullman.
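The paper solves the restoration problem exactly as a linear integer program; the sketch below instead shows the classic greedy approximation for capacity-limited maximum coverage, with made-up load groups and priority weights, to illustrate the shape of the optimization.

```python
# Hedged sketch: greedy maximum-coverage heuristic for critical-load
# restoration under a microgrid generation capacity. Group names, demands,
# weights, and the capacity are all invented for illustration; the cited
# work solves the exact linear integer program instead.
def restore(load_groups, weights, capacity):
    """Pick load groups maximizing weighted restored load within capacity."""
    chosen, used = [], 0.0
    remaining = dict(load_groups)  # group -> demand (e.g. kW)
    while remaining:
        # candidate groups that still fit within the remaining capacity
        feasible = {g: d for g, d in remaining.items() if used + d <= capacity}
        if not feasible:
            break
        # greedy choice: best priority-weight per unit of demand
        best = max(feasible, key=lambda g: weights[g] / feasible[g])
        chosen.append(best)
        used += remaining.pop(best)
    return chosen, used

groups = {"hospital": 40.0, "city_hall": 25.0, "water_plant": 30.0}
priority = {"hospital": 10.0, "city_hall": 6.0, "water_plant": 7.0}
picked, load = restore(groups, priority, capacity=70.0)
print(picked, load)  # ['hospital', 'city_hall'] 65.0
```

The greedy heuristic carries a (1 - 1/e)-style guarantee for plain maximum coverage but, unlike the LIP, it cannot encode the dynamic DG constraints the paper includes.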

  10. SQL Server Integration Services

    CERN Document Server

    Hamilton, Bill

    2007-01-01

    SQL Server 2005 Integration Services (SSIS) lets you build high-performance data integration solutions. SSIS solutions wrap sophisticated workflows around tasks that extract, transform, and load (ETL) data from and to a wide variety of data sources. This Short Cut begins with an overview of key SSIS concepts, capabilities, standard workflow and ETL elements, the development environment, execution, deployment, and migration from Data Transformation Services (DTS). Next, you'll see how to apply the concepts you've learned through hands-on examples of common integration scenarios. Once you've

  11. Methods for monitoring the initial load to critical in the fast test reactor

    International Nuclear Information System (INIS)

    Johnson, D.L.

    1975-08-01

    Conventional symmetric fuel loadings for the initial loading to critical of the Fast Test Reactor (FTR) are predicted to be more time consuming than asymmetric or trisector loadings. Potentially significant time savings can be realized by the latter, since adequate intermediate assessments of neutron multiplication can be made periodically without control rod reconnection in all trisectors. Experimental simulation of both loading schemes was carried out in the Reverse Approach to Critical (RAC) experiments in the Fast Test Reactor-Engineering Mockup Critical facility. Analyses of these experiments indicated that conventional source multiplication methods can be applied for monitoring either a symmetric or asymmetric fuel loading scheme equally well provided that detection efficiency corrections are employed. Methods for refining predictions of reactivity and count rates for the stages in a load to critical were also investigated. (auth)

  12. Exceedance of critical loads and of critical limits impacts tree nutrition across Europe

    DEFF Research Database (Denmark)

    Waldner, P.; Thimonier, A.; Graf Pannatier, E.

    2015-01-01

    solution tended to be related to less favourable nutritional status. Context Forests have been exposed to elevated atmospheric deposition of acidifying and eutrophying sulphur and nitrogen compounds for decades. Critical loads have been identified, below which damage due to acidification and eutrophication...... are not expected to occur. Aims We explored the relationship between the exceedance of critical loads and inorganic nitrogen concentration, the base cation to aluminium ratio in soil solutions, as well as the nutritional status of trees. Methods We used recent data describing deposition, elemental concentrations....... Conclusion The findings support the hypothesis that elevated nitrogen and sulphur deposition can lead to imbalances in tree nutrition....

  13. Single server queueing networks with varying service times and renewal input

    Directory of Open Access Journals (Sweden)

    Pierre Le Gall

    2000-01-01

    Full Text Available Using recent results on tandem queues and queueing networks with renewal input, when successive service times of the same customer vary (and when the busy periods are frequently not broken up, as in large networks), the local queueing delay of a single-server queueing network is evaluated using the new concepts of virtual and actual delays, respectively. It appears that, because of an important property due to the underlying tandem-queue effect, the usual queueing standards (related to long queues) cannot protect against significant overloads in the buffers caused by a possible "agglutination phenomenon" (related to short queues). Usual network management methods and traffic simulation methods should be revised, and should monitor the loads of the partial traffic streams, not only the server load.
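Actual (as opposed to virtual) delays in a single-server FIFO queue can be illustrated with the standard Lindley recursion; the deterministic inter-arrival and service times below are invented to show how one long service time delays the customers behind it.

```python
# Hedged illustration: actual waiting times in a single-server FIFO queue
# via the Lindley recursion W_{n+1} = max(0, W_n + S_n - A_{n+1}), with
# made-up deterministic inter-arrival and service times.
def lindley(interarrivals, services):
    """interarrivals[k] = A_{k+1}, services[k] = S_k; returns W_0..W_n."""
    waits = [0.0]
    for a, s in zip(interarrivals, services):
        waits.append(max(0.0, waits[-1] + s - a))
    return waits

# Varying service times: one long service (4.0) builds up delay for the
# customers behind it even though the average load stays moderate.
w = lindley(interarrivals=[2.0, 2.0, 2.0, 2.0], services=[4.0, 1.0, 1.0, 1.0])
print(w)  # [0.0, 2.0, 1.0, 0.0, 0.0]
```

This is the short-queue clustering effect in miniature: the backlog created by the slow customer propagates to the next arrivals and would propagate downstream in a tandem of servers.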

  14. Critical loads of nitrogen deposition and critical levels of atmospheric ammonia for semi-natural Mediterranean evergreen woodlands

    Directory of Open Access Journals (Sweden)

    P. Pinho

    2012-03-01

    Full Text Available Nitrogen (N) has emerged in recent years as a key factor associated with global changes, with impacts on biodiversity, ecosystem functioning and human health. In order to ameliorate the effects of excessive N, safety thresholds such as critical loads (deposition fluxes) and critical levels (concentrations) can be established. Few studies have assessed these thresholds for semi-natural Mediterranean ecosystems. Our objective was therefore to determine the critical loads of N deposition and the long-term critical levels of atmospheric ammonia for semi-natural Mediterranean evergreen woodlands. We considered changes in epiphytic lichen communities, one of the most sensitive community indicators of excessive N in the atmosphere. Based on a classification of lichen species according to their tolerance to N, we grouped species into response functional groups, which we used as a tool to determine the critical loads and levels. This was done for a Mediterranean climate in evergreen cork-oak woodlands, based on the relation between lichen functional diversity and modelled N deposition (for critical loads) and measured annual atmospheric ammonia concentrations (for critical levels), evaluated downwind from a reduced-N source (a cattle barn). Modelling the highly significant relationship between lichen functional groups and annual atmospheric ammonia concentration showed the critical level to be below 1.9 μg m⁻³, in agreement with recent studies for other ecosystems. Modelling the highly significant relationship between lichen functional groups and N deposition showed that the critical load was lower than 26 kg N ha⁻¹ yr⁻¹, which is within the upper range established for other semi-natural ecosystems. Taking into account the high sensitivity of lichen communities to excessive N, these values should aid development of policies to protect Mediterranean woodlands from the initial effects of excessive N.

  15. Performance of Distributed Query Optimization in Client/Server Systems

    NARCIS (Netherlands)

    Skowronek, J.; Blanken, Henk; Wilschut, A.N.

    The design, implementation and performance of an optimizer for a nested query language is considered. The optimizer operates in a client/server environment, in particular an Intranet setting. The paper deals with the scalability challenge by tackling the load of many clients by allocating optimizer

  16. Using a multi-state recurrent neural network to optimize loading patterns in BWRs

    International Nuclear Information System (INIS)

    Ortiz, Juan Jose; Requena, Ignacio

    2004-01-01

    A Multi-State Recurrent Neural Network is used to optimize Loading Patterns (LPs) in BWRs. We propose an energy function that depends on fuel assembly positions and their nuclear cross sections to carry out the optimization. The Multi-State Recurrent Neural Network creates LPs that satisfy the Radial Power Peaking Factor and maximize the effective multiplication factor at the Beginning of the Cycle, and that also satisfy the Minimum Critical Power Ratio and Maximum Linear Heat Generation Rate at the End of the Cycle, thereby maximizing the effective multiplication factor. In order to evaluate the LPs, we used a trained back-propagation neural network to predict the parameter values instead of a reactor core simulator, which saved considerable computation time in the search process. We applied this method to find optimal LPs for five cycles of the Laguna Verde Nuclear Power Plant (LVNPP) in Mexico

  17. Low-cost workbench client / server cores for remote experiments in electronics

    OpenAIRE

    José M. M. Ferreira; Americo Dias; Paulo Sousa; Zorica Nedic; Jan Machotka; Ozdemir Gol; Andrew Nafalski

    2010-01-01

    This paper offers an open-source solution for implementing low-cost workbenches serving a wide range of remote experiments in electronics. The proposed solution comprises 1) a small (9.65 x 6.1 cm) Linux server board; 2) a server core supporting two TCP/IP communication channels, and general-purpose I/O pin drivers to interface the remote experiment hardware; and 3) a client core based on a multi-tab user interface supporting text file management to exchange experiment scripts / status informatio...

  18. Standby-Loss Elimination in Server Power Supply

    Directory of Open Access Journals (Sweden)

    Jong-Woo Kim

    2017-07-01

    Full Text Available In a server power system, a standby converter is required to provide the standby output, monitor the system's status, and communicate with the server power system. Since these functions are always required, the standby converter produces losses even when the system operates in normal mode, and these losses deteriorate the total efficiency of the system. In this paper, a new structure is proposed to eliminate the losses from the standby converter of a server power supply. The key feature of the proposed structure is that the main DC/DC converter substitutes for all of the output power of the standby converter, and the standby converter is turned off in normal mode. With the proposed structure, the losses from the standby converter can be eliminated in normal mode, leading to higher efficiency across all load conditions. Although the structure was proposed in previous work, important issues such as the steady-state analysis, the transient responses, and how to control the standby converter were not discussed; this paper addresses them in detail. The feasibility of the proposed structure has been verified with a server power system with a 400 V link voltage, a 12 V/62.5 A main output, and a 12 V/2.1 A standby output.

  19. Estimation of critical loads for radiocaesium in Fennoscandia and Northwest Russia

    International Nuclear Information System (INIS)

    Howard, B.J.; Wright, S.M.; Barnett, C.L.; Skuterud, L.; Strand, P.

    2002-01-01

    The application of the critical loads methodology for radioactive contamination of Arctic and sub-arctic ecosystems, where natural and semi-natural food products are important components of the diet of many people, is proposed and discussed. The critical load is herein defined as the amount of radionuclide deposition necessary to produce radionuclide activity concentrations in food products exceeding intervention limits. The high transfer of radiocaesium to reindeer meat gives this product the lowest critical load, even though the intervention limit is relatively high compared with other products. Ecological half-lives of radiocaesium in natural and semi-natural products are often very long, and it is therefore important to take account of contamination already present in the event of an accident affecting areas where such products are important. In particular, the long ecological half-life for radiocaesium in moose meat means that the critical load is highly sensitive to prior deposition. An example of the potential application of the method for emergency preparedness is given for the Chernobyl accident
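The definition above can be turned into arithmetic: deposition times an aggregated transfer factor gives the activity concentration in a food product, and the critical load is the deposition at which that concentration reaches the intervention limit. The transfer factors and limits below are invented for illustration and are not the paper's values.

```python
# Hedged sketch of the radiocaesium critical-load idea: deposition D
# (kBq/m2) times an aggregated transfer factor Tag (m2/kg) gives the
# activity concentration in a food product (kBq/kg). The critical load
# is the deposition at which that concentration hits the intervention
# limit. All Tag values and limits below are illustrative assumptions.
def critical_load_kBq_m2(intervention_limit_Bq_kg, tag_m2_kg):
    return intervention_limit_Bq_kg / tag_m2_kg / 1000.0  # Bq -> kBq

# Reindeer meat: very high transfer gives the lowest critical load even
# though its intervention limit is comparatively high.
cl_reindeer = critical_load_kBq_m2(3000.0, tag_m2_kg=2.0)
cl_lamb = critical_load_kBq_m2(600.0, tag_m2_kg=0.125)
print(cl_reindeer, cl_lamb)  # 1.5 4.8
```

Accounting for prior deposition (important for long ecological half-lives, e.g. moose meat) would simply subtract the residual deposition still effective in the product from the computed critical load.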

  20. Results of the critical experiments concerning OTTO loading at the critical HTR-test facility KAHTER

    International Nuclear Information System (INIS)

    Drueke, V.; Litzow, W.; Paul, N.

    1982-12-01

    Critical experiments concerning OTTO loading are described. In the KAHTER facility an OTTO loading has been simulated, therefore the original KAHTER assembly was reconstructed. The determination of critical masses and reactivity worths of control rods and of additional absorber rods in the top reflector and in the upper cavity was of main interest for comparison with reactor following calculations. Besides this, reaction rates in different energy regions were measured in the upper part of the core, in the cavity and top reflector. (orig.) [de

  1. Optimal loading and protection of multi-state systems considering performance sharing mechanism

    International Nuclear Information System (INIS)

    Xiao, Hui; Shi, Daimin; Ding, Yi; Peng, Rui

    2016-01-01

    Engineering systems are designed to carry load, and the performance of a system largely depends on how much load it carries. On the other hand, the failure rate of the system is strongly affected by its load. Besides internal failures, such as fatigue and aging processes, systems may also fail due to external impacts such as natural disasters and terrorism. In this paper, we integrate the effects of loading and of protection against external impacts for multi-state systems with a performance sharing mechanism. The objective of this research is to determine how to balance the load and protection on system elements. An availability evaluation algorithm for the proposed system is suggested, and the corresponding optimization problem is solved using genetic algorithms. - Highlights: • Performance sharing of multi-state systems is considered. • The effect of load on system elements is analyzed. • A joint optimization model of element loading and protection is formulated. • Genetic algorithms are adapted to solve the reliability optimization problem.

  2. Node Load Balance Multi-flow Opportunistic Routing in Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    Wang Tao

    2014-04-01

    Full Text Available Opportunistic routing (OR) has been proposed to improve the performance of wireless networks by exploiting multi-user diversity and the broadcast nature of the wireless medium. It involves multiple candidate forwarders to relay packets at every hop. Existing OR does not take traffic load and load balance into account, so some nodes may be overloaded while others are not, leading to a decline in network performance. In this paper, we focus on opportunistic routing selection with node load balance, which is described as a convex optimization problem. To solve the problem, by combining primal-dual and sub-gradient methods, a fully distributed Node load balance Multi-flow Opportunistic Routing algorithm (NMOR) is proposed. Under the node load balance constraint, NMOR allocates the flow rates iteratively, and the rate allocation decides the candidate forwarder selection of opportunistic routing. The simulation results show that the NMOR algorithm improves aggregate throughput by 100% and 62% compared with ETX and EAX, respectively.

  3. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    Science.gov (United States)

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
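The analytical model itself is not given in the abstract; the sketch below shows the generic form such a model might take (ideal per-GPU compute plus serialized transfer over the 1 Gbps interconnect). All constants, names, and the workload figures are invented for illustration.

```python
# Hedged sketch of a multi-GPU speedup model: total time as per-GPU
# compute plus a fixed transfer term over the server interconnect.
# All constants below are invented for illustration.
def predicted_time(work_s, n_gpus, bytes_moved, link_bps=1e9):
    compute = work_s / n_gpus              # ideal parallel compute time
    transfer = bytes_moved * 8 / link_bps  # serialized over 1 Gbps link
    return compute + transfer

t1 = predicted_time(56.0, 1, 100e6)    # single GPU, 100 MB moved
t14 = predicted_time(56.0, 14, 100e6)  # the paper's 14-GPU setup
speedup = t1 / t14
print(round(speedup, 2))  # below 14: transfer cost caps the acceleration
```

This reproduces the qualitative claim in the abstract: speedup approaches the GPU count only when compute dominates the transfer term.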

  4. Lichen-based critical loads for atmospheric nitrogen deposition in Western Oregon and Washington Forests, USA

    Energy Technology Data Exchange (ETDEWEB)

    Geiser, Linda H., E-mail: lgeiser@fs.fed.u [US Forest Service Pacific Northwest Region Air Resource Management Program, Siuslaw National Forest, PO Box 1148, Corvallis, OR 97339 (United States); Jovan, Sarah E. [US Forest Service Forest Inventory and Analysis Program, Pacific Northwest Research Station, 620 SW Main St, Suite 400, Portland, OR 97205 (United States); Glavich, Doug A. [US Forest Service Pacific Northwest Region Air Resource Management Program, Siuslaw National Forest, PO Box 1148, Corvallis, OR 97339 (United States); Porter, Matthew K. [Laboratory for Atmospheric Research, Washington State University, Pullman, WA 99164 (United States)

    2010-07-15

    Critical loads (CLs) define maximum atmospheric deposition levels apparently preventative of ecosystem harm. We present first nitrogen CLs for northwestern North America's maritime forests. Using multiple linear regression, we related epiphytic-macrolichen community composition to: 1) wet deposition from the National Atmospheric Deposition Program, 2) wet, dry, and total N deposition from the Communities Multi-Scale Air Quality model, and 3) ambient particulate N from Interagency Monitoring of Protected Visual Environments (IMPROVE). Sensitive species declines of 20-40% were associated with CLs of 1-4 and 3-9 kg N ha⁻¹ y⁻¹ in wet and total deposition. CLs increased with precipitation across the landscape, presumably from dilution or leaching of depositional N. Tight linear correlation between lichen and IMPROVE data suggests a simple screening tool for CL exceedance in US Class I areas. The total N model replicated several US and European lichen CLs and may therefore be helpful in estimating other temperate-forest lichen CLs. - Lichen-based critical loads for N deposition in western Oregon and Washington forests ranged from 3 to 9 kg ha⁻¹ y⁻¹, increasing with mean annual precipitation.

  5. Mechanical behaviors of multi-filament twist superconducting strand under tensile and cyclic loading

    Science.gov (United States)

    Wang, Xu; Li, Yingxu; Gao, Yuanwen

    2016-01-01

    The superconducting strand, the basic unit cell of cable-in-conduit conductors (CICCs), is a typical multi-filament twisted composite that is subjected to cyclic loading under operating conditions. Meanwhile, the superconducting material Nb3Sn in the strand is sensitive to strain, which is frequently related to degradation of its superconducting performance. A comprehensive study of the mechanical behavior of the strand therefore helps in understanding the superconducting performance of strained Nb3Sn strands. To address this issue, taking the LMI (internal tin) strand as an example, a three-dimensional structural finite element model of the strand, named the Multi-filament twist model and based on the real configuration of the LMI strand, is built to study the influence of the plasticity of the component materials, the twist of the filament bundle, the initial thermal residual stress, and the breakage of filaments and its evolution on the mechanical behavior of the strand. The effective properties of a superconducting filament bundle with random filament breakage, and its evolution with strain, are obtained based on the damage theory of fiber-reinforced composite materials proposed by Curtin and Zhou. From the calculation results of this model, we find that the occurrence of the hysteresis loop in the cyclic loading curve is determined by the reverse yielding of the elastic-plastic materials in the strand. Both the initial thermal residual stress in the strand and the pitch length of the filaments have significant impacts on the axial and hysteretic behavior of the strand. The damage of the filaments also markedly affects the axial mechanical behavior of the strand at large axial strain. The critical current of the strand is calculated by the scaling law using the results of the Multi-filament twist model. The predictions of the Multi-filament twist model show acceptable agreement with experiment.

  6. Application of the distributed genetic algorithm for loading pattern optimization problems

    International Nuclear Information System (INIS)

    Hashimoto, Hiroshi; Yamamoto, Akio

    2000-01-01

    The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors (PWR). Due to the stiff nature of loading pattern optimization (e.g. multi-modality and non-linearity), stochastic methods like simulated annealing or the genetic algorithm (GA) are widely applied to these problems. The basic concept of DGA is based on that of GA. However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent 'islands' and evolves them in each island. Migrations of some candidates are performed among islands with a certain period. Since candidate solutions evolve independently in each island while accepting different genes from migrants from other islands, the premature convergence seen in the traditional GA can be prevented. Because many candidate loading patterns must be evaluated in one generation of GA or DGA, parallelization of these calculations works efficiently. Parallel efficiency was measured using our optimization code, and good load balance was attained even in a heterogeneous cluster environment due to dynamic distribution of the calculation load. The optimization code is based on a client/server architecture with native TCP/IP sockets, in which the client (optimization module) and the calculation server modules exchange loading pattern objects with each other. Through a sensitivity study on the optimization parameters of DGA, a suitable set of parameters for a test problem was identified. Finally, the optimization capabilities of DGA and the traditional GA were compared on the test problem, and DGA provided better optimization results than the traditional GA. (author)
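    The island model described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual code: the chromosome encoding, operators, parameters, and the stand-in objective are all invented; in the real setting each fitness evaluation is a 3-D neutronics calculation, which is why the islands are farmed out to calculation servers.

```python
import random

def evolve(pop, fitness, gens=20, p_mut=0.2):
    """Evolve one island: binary tournament selection plus point mutation."""
    for _ in range(gens):
        nxt = []
        for _ in pop:
            a, b = random.sample(pop, 2)
            parent = a if fitness(a) < fitness(b) else b   # minimisation
            child = list(parent)
            if random.random() < p_mut:
                i = random.randrange(len(child))
                child[i] = random.randint(0, 9)
            nxt.append(tuple(child))
        pop = nxt
    return pop

def migrate(islands, fitness, k=1):
    """Ring topology: each island's k best replace the next island's k worst."""
    out = [list(isl) for isl in islands]
    for i, isl in enumerate(islands):
        migrants = sorted(isl, key=fitness)[:k]
        j = (i + 1) % len(islands)
        out[j] = sorted(out[j], key=fitness)[:len(out[j]) - k] + migrants
    return out

# Toy stand-in for a loading pattern: a vector of 8 assembly codes.
random.seed(1)
target = (3, 1, 4, 1, 5, 9, 2, 6)
fit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))

islands = [[tuple(random.randint(0, 9) for _ in range(8)) for _ in range(20)]
           for _ in range(4)]
for _ in range(10):                    # migrate every `gens` generations
    islands = [evolve(pop, fit) for pop in islands]
    islands = migrate(islands, fit)
best = min((p for isl in islands for p in isl), key=fit)
```

    Because islands only exchange a few migrants per period, each island can run on a separate calculation server with little communication, which matches the client/server layout the abstract describes.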

  7. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR

    NARCIS (Netherlands)

    Van Der Schot, Gijs; Bonvin, Alexandre M J J

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on

  8. GeoServer cookbook

    CERN Document Server

    Iacovella, Stefano

    2014-01-01

    This book is ideal for GIS experts, developers, and system administrators who have had a first glance at GeoServer and who are eager to explore all its features in order to configure professional map servers. Basic knowledge of GIS and GeoServer is required.

  9. Scaling NS-3 DCE Experiments on Multi-Core Servers

    Science.gov (United States)

    2016-06-15

    MPTCP) using the same software in DCE. In the experiment, only two wireless links (LTE and Wi-Fi) are set up to examine MPTCP, resulting in limited... performance drop on the blade server. Our investigation then turned to other straightforward measures, including the following: • We reduced the amount... simulation with varying numbers of cores and measured the run time. To pin the simulation to a specific set of cores, we switched from using...

  10. Multi-Stage Admission Control for Load Balancing in Next Generation Systems

    DEFF Research Database (Denmark)

    Mihovska, Albena D.; Anggorojati, Bayu; Luo, Jijun

    2008-01-01

    This paper describes a load-dependent multi-stage admission control suitable for next generation systems. The concept uses decision polling in entities located at different levels of the architecture hierarchy and, based on the load, activates a sequence of actions related to the admission...

  11. Deterministic methods for multi-control fuel loading optimization

    Science.gov (United States)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  12. Loading pattern optimization by multi-objective simulated annealing with screening technique

    International Nuclear Information System (INIS)

    Tong, K. P.; Hyun, C. L.; Hyung, K. J.; Chang, H. K.

    2006-01-01

    This paper presents a new multi-objective function made up of the main objective term as well as penalty terms related to the constraints. All the terms are represented in the same functional form, and the coefficient of each term is normalized so that each term has equal weighting in the subsequent simulated annealing optimization calculations. The screening technique introduced in previous work is also adopted in order to save computer time in the 3-D neutronics evaluation of trial loading patterns. For a numerical test of the new multi-objective function in loading pattern optimization, the optimum loading patterns for the initial core and the cycle 7 reload PWR core of Yonggwang Unit 4 are calculated by the simulated annealing algorithm with the screening technique. A total of 10 optimum loading patterns were obtained for the initial core through 10 independent simulated annealing optimization runs. For the cycle 7 reload core, one optimum loading pattern was obtained from a single simulated annealing optimization run. More SA optimization runs will be conducted to obtain optimum loading patterns for the cycle 7 reload core, and results will be presented in further work. (authors)
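    The structure described above, a single objective built as an equally weighted sum of normalised terms driven by simulated annealing, can be sketched generically. The normalisation-by-scale scheme, the 1-D toy problem, and all parameter values below are illustrative assumptions, not the paper's actual formulation:

```python
import math, random

def sa_minimize(terms, neighbor, x0, t0=1.0, t_end=1e-3, alpha=0.95, iters=50):
    """Simulated annealing over a multi-objective function built as an equally
    weighted sum of normalised terms (main objective plus penalty terms)."""
    def f(x):
        # each term is (function, scale); dividing by the scale stands in for
        # the paper's normalisation that gives every term equal weighting
        return sum(g(x) / s for g, s in terms)
    random.seed(0)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_end:
        for _ in range(iters):
            y = neighbor(x)
            fy = f(y)
            # accept improvements always, deteriorations with Metropolis probability
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha                       # geometric cooling schedule
    return best, fbest

# toy 1-D stand-in: a main objective term plus one constraint-violation penalty
terms = [(lambda x: (x - 3.0) ** 2, 1.0),          # main objective term
         (lambda x: max(0.0, 2.0 - x) ** 2, 1.0)]  # penalty: require x >= 2
best_x, best_f = sa_minimize(terms, lambda x: x + random.gauss(0, 0.5), x0=0.0)
```

    In the real problem each call to `f` would trigger a core neutronics evaluation, which is exactly what the screening technique is there to avoid for unpromising trial loading patterns.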

  13. Designing a scalable video-on-demand server with data sharing

    Science.gov (United States)

    Lim, Hyeran; Du, David H. C.

    2001-01-01

    As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, we propose an algorithm that finds an initial configuration by placing videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, together with an indicator of how tolerant it is to fluctuations in video demand. Although the underlying problem is NP-hard, our algorithm generates the final configuration in O(M log M) time at best, where M is the number of movies.
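    A greedy placement of this flavour can be sketched as follows. The abstract does not spell out the algorithm, so the demand-sorted greedy rule, the two resources (streaming bandwidth and storage), and the data shapes here are my assumptions, kept only to illustrate where the O(M log M) sort comes from:

```python
def place_videos(movies, disks):
    """Greedy placement sketch: handle movies in order of expected demand
    (the O(M log M) sorting step) and put each copy on the disk with the
    most spare streaming bandwidth that still has room. `movies` maps
    name -> (demand, size, copies); `disks` is a list of mutable
    [spare_bandwidth, spare_storage] pairs. Returns {name: [disk indices]}
    or None when capacity runs out (the cue to add servers/disks)."""
    layout = {}
    for name, (demand, size, copies) in sorted(
            movies.items(), key=lambda kv: -kv[1][0]):
        placed = []
        share = demand / copies          # demand is spread over the replicas
        for _ in range(copies):
            cands = [i for i, (bw, st) in enumerate(disks)
                     if st >= size and bw >= share and i not in placed]
            if not cands:
                return None
            spot = max(cands, key=lambda i: disks[i][0])  # most spare bandwidth
            disks[spot][0] -= share
            disks[spot][1] -= size
            placed.append(spot)
        layout[name] = placed
    return layout

# two disks, each with 120 units of bandwidth and 4 units of storage
disks = [[120.0, 4.0], [120.0, 4.0]]
movies = {"A": (100.0, 2.0, 2),          # hot movie, two replicas
          "B": (40.0, 1.0, 1)}
layout = place_videos(movies, disks)
```

    Spare bandwidth left on each disk after placement is one natural "indicator" of how much demand fluctuation the configuration can absorb.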

  14. LiveBench-1: continuous benchmarking of protein structure prediction servers.

    Science.gov (United States)

    Bujnicki, J M; Elofsson, A; Fischer, D; Rychlewski, L

    2001-02-01

    We present a novel, continuous approach aimed at the large-scale assessment of the performance of available fold-recognition servers. Six popular servers were investigated: PDB-Blast, FFAS, T98-lib, GenTHREADER, 3D-PSSM, and INBGU. The assessment was conducted using as prediction targets a large number of selected protein structures released from October 1999 to April 2000. A target was selected if its sequence showed no significant similarity to any of the proteins previously available in the structural database. Overall, the servers were able to produce structurally similar models for one-half of the targets, but significantly accurate sequence-structure alignments were produced for only one-third of the targets. We further classified the targets into two sets: easy and hard. We found that all servers were able to find the correct answer for the vast majority of the easy targets if a structurally similar fold was present in the server's fold libraries. However, among the hard targets--where standard methods such as PSI-BLAST fail--the most sensitive fold-recognition servers were able to produce similar models for only 40% of the cases, half of which had a significantly accurate sequence-structure alignment. Among the hard targets, the presence of updated libraries appeared to be less critical for the ranking. An "ideally combined consensus" prediction, where the results of all servers are considered, would increase the percentage of correct assignments by 50%. Each server had a number of cases with a correct assignment, where the assignments of all the other servers were wrong. This emphasizes the benefits of considering more than one server in difficult prediction tasks. The LiveBench program (http://BioInfo.PL/LiveBench) is being continued, and all interested developers are cordially invited to join.

  15. EarthServer - 3D Visualization on the Web

    Science.gov (United States)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open GeoSpatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technologies developed and deployed in EarthServer range from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of platforms with very different soft- and hardware requirements, such as smart phones (e.g. iOS, Android) and various desktop systems. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies.
Underlying the EarthServer web client

  16. Beginning SQL Server Modeling Model-driven Application Development in SQL Server

    CERN Document Server

    Weller, Bart

    2010-01-01

    Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. Beginning SQL Server Modeling will help you gain a comprehensive understanding of how to apply DSLs and other modeling components in the development of SQL Server implementations. Most importantly, after reading the book and working through the examples, you will have considerable experience using SQL M

  17. Mastering Lync Server 2010

    CERN Document Server

    Winters, Nathan

    2012-01-01

    An in-depth guide on the leading Unified Communications platform Microsoft Lync Server 2010 maximizes communication capabilities in the workplace like no other Unified Communications (UC) solution. Written by experts who know Lync Server inside and out, this comprehensive guide shows you step by step how to administer the newest and most robust version of Lync Server. Along with clear and detailed instructions, learning is aided by exercise problems and real-world examples of established Lync Server environments. You'll gain the skills you need to effectively deploy Lync Server 2010 and be on

  18. Multi-band Monopole Antennas Loaded with Metamaterial TL

    Science.gov (United States)

    Song, Zhi-jie; Liang, Jian-gang

    2015-05-01

    A novel metamaterial transmission line (TL) loaded with a complementary single Archimedean spiral resonator pair (CSASRP) is investigated and used to design a set of multi-frequency monopole antennas. The particularity is that the CSASRP, which features dual-shunt branches in the equivalent circuit model, is directly etched in the signal strip. By smartly controlling the element parameters, three antennas are designed, and one of them, covering the UMTS and Bluetooth bands, is fabricated and measured. The antenna exhibits impedance matching better than -10 dB and normal monopolar radiation patterns in the working bands of 1.9-2.22 and 2.38-2.5 GHz. Moreover, the loaded element also contributes to the radiation, which is the major advantage of this prescription over previous lumped-element loadings. The proposed antenna is also more compact than previous designs.

  19. Load Situation Awareness Design for Integration in Multi-Energy System

    DEFF Research Database (Denmark)

    Cai, Hanmin; You, Shi; Bindner, Henrik W.

    2017-01-01

    Renewable Energy Sources (RESs) have been penetrating power systems at a staggering pace in recent years. Their intermittent nature is, however, posing a great threat to system operation. Recently, active load management has been suggested as a tool to counteract these side effects. In a multi-energy system, thermal load management will benefit not only the electric network but also the district heating network. The electric heater, a common thermal load, is the main focus of this paper. A situation awareness framework for its integration into the electric and district heating networks will be proposed...

  20. Peningkatan Kinerja Siakad Menggunakan Metode Load Balancing dan Fault Tolerance Di Jaringan Kampus Universitas Halu Oleo

    Directory of Open Access Journals (Sweden)

    Alimuddin Alimuddin

    2016-01-01

    The application of a web-based academic information system (siakad) in a university is essential to improving academic services. Siakad faces many obstacles, especially in handling a high volume of access, which causes overload. Moreover, hardware or software failures can make siakad inaccessible. The solution to this problem is to use several servers and distribute the load among them. A method is needed to distribute the load evenly across the servers; load balancing with a round-robin algorithm gives siakad high scalability. To handle the failure of a server, fault tolerance is needed so that siakad's availability remains high. This research develops load balancing and fault tolerance using the Linux Virtual Server software and additional programs such as ipvsadm and heartbeat, which can increase siakad's scalability and availability. The results show that load balancing reduces response time by 5.7%, increases throughput by 37% (1.6 times), improves resource utilization 1.6 times, and avoids overload. High availability is obtained from the servers' ability to fail over, i.e., to move service to another server in the event of a failure. Thus, implementing load balancing and fault tolerance can improve siakad's service performance and avoid failures.
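    The two mechanisms combined here, round-robin dispatch and heartbeat-driven failover, can be sketched in a few lines. This is a conceptual model only; in the paper the dispatch is done in the kernel by Linux Virtual Server (ipvsadm) and the health checking by heartbeat, and the server names below are invented:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal sketch of LVS-style round-robin dispatch with failover:
    requests rotate over the real servers, and a server that fails its
    health check (heartbeat) is skipped until it recovers."""
    def __init__(self, servers):
        self.servers = servers
        self.alive = set(servers)
        self._ring = cycle(servers)

    def mark_down(self, s):
        self.alive.discard(s)

    def mark_up(self, s):
        self.alive.add(s)

    def pick(self):
        # advance the ring, skipping dead backends; give up after one full lap
        for _ in range(len(self.servers)):
            s = next(self._ring)
            if s in self.alive:
                return s
        raise RuntimeError("no backend available")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
seq = [lb.pick() for _ in range(3)]        # one full round-robin lap
lb.mark_down("web2")                        # heartbeat reports a failure
seq_after = [lb.pick() for _ in range(2)]   # web2 is now skipped
```

    Spreading requests this way bounds each backend's load, while the failover path keeps the virtual service reachable when one real server dies, which is exactly the scalability-plus-availability combination the abstract measures.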

  1. EarthServer: a Summary of Achievements in Technology, Services, and Standards

    Science.gov (United States)

    Baumann, Peter

    2015-04-01

    Big Data in the Earth sciences, the Tera- to Exabyte archives, are mostly made up of coverage data, defined by ISO and OGC as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The transatlantic EarthServer initiative, running from 2011 through 2014, has united 11 partners to establish Big Earth Data Analytics. A key ingredient has been flexibility for users to ask whatever they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level, standards-based query languages which unify data and metadata search in a simple, yet powerful way. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform comprises rasdaman, the pioneering and leading Array DBMS for any-size multi-dimensional raster data, extended with support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly

  2. Disk Storage Server

    CERN Multimedia

    This model was a disk storage server used in the Data Centre up until 2012. Each tray contains a hard disk drive (see the 5TB hard disk drive on the main disk display section - this actually fits into one of the trays). There are 16 trays in all per server. There are hundreds of these servers mounted on racks in the Data Centre, as can be seen.

  3. Group-Server Queues

    OpenAIRE

    Li, Quan-Lin; Ma, Jing-Yu; Xie, Mingzhou; Xia, Li

    2017-01-01

    By analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting Group-Server Queues, and establishes two representative group-server queues through loss networks and impatient customers, respectively. Furthermore, model descriptions and the necessary interpretation are given for these two group-server queues. Also, a simple mathematical discussion is provided, and simulations are made to study the expected queue lengths, the expected sojourn times ...
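    The queue-length and sojourn-time metrics mentioned here are, for the simplest multi-server setting, given in closed form by the Erlang C formula for an M/M/c queue. This is a standard textbook model offered as background, not the paper's group-server model:

```python
import math

def mmc_metrics(lam, mu, c):
    """Expected queue length Lq and waiting time Wq for an M/M/c queue
    (Erlang C). lam = arrival rate, mu = per-server service rate,
    c = number of servers; requires lam < c*mu for stability."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c                        # per-server utilisation
    assert rho < 1, "unstable: arrival rate exceeds total service rate"
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = a**c / math.factorial(c) * p0 / (1 - rho)   # P(wait > 0)
    lq = erlang_c * rho / (1 - rho)    # expected queue length
    wq = lq / lam                      # expected waiting time (Little's law)
    return lq, wq

lq, wq = mmc_metrics(lam=8.0, mu=1.0, c=10)   # 10 servers at 80% utilisation
```

    Such formulas give the baseline against which grouping servers (and switching idle groups off to save energy) trades extra waiting time for lower power.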

  4. Web server attack analyzer

    OpenAIRE

    Mižišin, Michal

    2013-01-01

    Web server attack analyzer - Abstract The goal of this work was to create a prototype analyzer of injection-flaw attacks on a web server. The proposed solution combines the capabilities of a web application firewall and a web server log analyzer. Analysis is based on configurable signatures defined by regular expressions. This paper begins with a summary of web attacks, followed by an analysis of detection techniques on web servers and a description and justification of the selected implementation. In the end are charact...

  5. Probabilistic model for multi-axial load combinations for wind turbines

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov

    2016-01-01

    into a periodic part and a perturbation term, where each part has a known probability distribution. The proposed model shows good agreement with simulated data under stationary conditions, and a design load envelope based on this model is comparable to the load envelope estimated using the standard procedure...... for determining contemporaneous loads. Using examples with simulated loads on a 10 MW wind turbine, the behavior of the bending moments acting on a blade section is illustrated under different conditions. The loading direction most critical for material failure is determined using a finite-element model...

  6. Implementing Citrix XenServer Quickstarter

    CERN Document Server

    Ahmed, Gohar

    2013-01-01

    Implementing Citrix XenServer Quick Starter is a practical, hands-on guide that will help you get started with the Citrix XenServer Virtualization technology with easy-to-follow instructions.Implementing Citrix XenServer Quick Starter is for system administrators who have little to no information on virtualization and specifically Citrix XenServer Virtualization. If you're managing a lot of physical servers and are tired of installing, deploying, updating, and managing physical machines on a daily basis over and over again, then you should probably explore your option of XenServer Virtualizati

  7. Surface water acidification and critical loads: exploring the F-factor

    Directory of Open Access Journals (Sweden)

    K. Bishop

    2009-11-01

    As acid deposition decreases, uncertainties in methods for calculating critical loads become more important when judgements have to be made about whether or not further emission reductions are needed. An important aspect of one type of model that has been used to calculate surface water critical loads is the empirical F-factor, which estimates the degree to which acid deposition is neutralised before it reaches a lake at any particular point in time, relative to the pre-industrial, steady-state water chemistry conditions.

    In this paper we examine how well the empirical F-functions are able to estimate pre-industrial lake chemistry as lake chemistry changes during different phases of acidification and recovery. To accomplish this, we use the dynamic, process-oriented biogeochemical model SAFE to generate a plausible time series of annual runoff chemistry for ca. 140 Swedish catchments between 1800 and 2100. These annual hydrochemistry data are then used to generate empirical F-factors that are compared to the "actual" F-factor seen in the SAFE data for each lake and year in the time series. The dynamics of the F-factor as catchments acidify and then recover are not widely recognised.

    Our results suggest that the F-factor approach worked best during the acidification phase, when soil processes buffer incoming acidity. However, the empirical functions for estimating F from contemporary lake chemistry are not well suited to the recovery phase, when the F-factor turns negative due to recovery processes in the soil. This happens when acid deposition has depleted the soil store of base cations (BC) and then acid deposition declines, reducing the leaching of base cations to levels below those in the pre-industrial era. An estimate of critical load from water chemistry during recovery and empirical F functions would therefore result in critical loads that are too low. Therefore, the empirical estimates of the F-factor are a significant source of
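    For orientation, the F-factor is commonly defined as the fraction of the acid anion increase that is neutralised by an increase in base cations between pre-industrial (subscript 0) and present-day (subscript t) conditions. The exact formulation varies between models, so take the following only as a sketch of the usual definition, not necessarily the one used in this paper:

```latex
% F-factor: fraction of the strong-acid-anion increase neutralised by
% base cations; asterisks denote sea-salt-corrected concentrations
F \;=\; \frac{[BC^{*}]_{t} - [BC^{*}]_{0}}
             {\bigl([SO_{4}^{*}] + [NO_{3}]\bigr)_{t}
              - \bigl([SO_{4}^{*}] + [NO_{3}]\bigr)_{0}}
% F = 1: deposition fully neutralised before reaching the lake
% F = 0: no neutralisation; acidity passes straight through to the water
```

    With this definition the abstract's point follows directly: during recovery the numerator can change sign while the denominator shrinks, driving F negative and outside the range the empirical functions were fitted for.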

  8. Server virtualization solutions

    OpenAIRE

    Jonasts, Gusts

    2012-01-01

    Currently, the information technology sector responsible for server infrastructure is seeing huge development in the field of server virtualization on the x86 computer architecture. Prerequisites for this development are growth in server performance and the underutilization of the available computing power. Several companies in the market are working on two virtualization architectures – hypervisor and hosted. In this paper, several virtualization products that use host...

  9. Multi-scenario electromagnetic load analysis for CFETR and EAST magnet systems

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Weiwei; Liu, Xufeng, E-mail: lxf@ipp.ac.cn; Du, Shuangsong; Song, Yuntao

    2017-01-15

    Highlights: • A multi-scenario force-calculating simulator for Tokamak magnet systems is developed using the interaction matrix method. • The simulator is applied to EM analysis of the CFETR and EAST magnet systems. • The EM loads on CFETR magnet coils at different typical scenarios, and the EM loads acting on the EAST magnet system as a function of time for different shots, are analyzed with the simulator. • Results indicate that the approach can be conveniently used for multi-scenario and real-time EM analysis of Tokamak magnet systems. - Abstract: A technology for electromagnetic (EM) analysis of the current-carrying components in tokamaks has been proposed recently (Rozov, 2013; Rozov and Alekseev, 2015). According to this method, the EM loads can be obtained by a linear transform of the given currents using a pre-computed interaction matrix. Based on this technology, a multi-scenario force-calculating simulator for Tokamak magnet systems is developed in Fortran in this paper and applied to EM analysis of the China Fusion Engineering Test Reactor (CFETR) and Experimental Advanced Superconducting Tokamak (EAST) magnet systems. The pre-computed EM interaction matrices of the CFETR and EAST magnet systems are implanted into the simulator, the EM loads on CFETR magnet coils at different typical scenarios are evaluated with it, and comparison of the results with ANSYS results validates the efficiency and accuracy of the method. Using the simulator, the EM loads acting on the EAST magnet system as a function of time for different shots are further analyzed; the results indicate that the approach can be conveniently used for real-time EM analysis of Tokamak magnet systems.
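    The interaction-matrix idea can be illustrated with a minimal sketch. The quadratic force form f_i = I_i · Σ_j g_ij · I_j and the coefficient values below are my reading of the approach for illustration only, not the exact formulation of Rozov's method or of the paper's simulator:

```python
def coil_forces(g, currents):
    """Force on each coil from a precomputed interaction matrix:
    f_i = I_i * sum_j g[i][j] * I_j. The coefficients g (force per unit
    current product, N/A^2) are computed once by a field solver; evaluating
    the loads for a new current scenario is then just this cheap product,
    which is what makes multi-scenario and real-time analysis feasible."""
    return [ii * sum(gij * ij for gij, ij in zip(row, currents))
            for row, ii in zip(g, currents)]

# two-coil illustration with made-up coefficients: no self-force,
# equal and opposite mutual forces (Newton's third law)
g = [[0.0, 2.0e-7],
     [-2.0e-7, 0.0]]
f = coil_forces(g, [1.0e3, 1.0e3])   # forces in newtons
```

    Replaying a whole discharge then amounts to calling this transform once per time point of the current waveforms, instead of re-running a finite-element model for every scenario.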

  10. Fact Sheet: Improving Energy Efficiency for Server Rooms and Closets

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Hoi Ying [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mahdavi, Rod [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Greenberg, Steve [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Rich [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tschudi, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Delforge, Pierre [Natural Resources Defense Council, New York, NY (United States); Dickerson, Joyce [Natural Resources Defense Council, New York, NY (United States)

    2012-09-01

    Is there a ghost in your IT closet? If your building has one or more IT rooms or closets containing between 5 and 50 servers, chances are that they account for a significant share of the building’s energy use (in some cases, over half!). Servers, data storage arrays, networking equipment, and the cooling and power conditioning that support them tend to draw large amounts of energy 24/7, in many cases using more energy annually than traditional building loads such as HVAC and lighting. The good news is that there are many cost-effective actions, ranging from simple to advanced, that can dramatically reduce that energy use, helping you to save money and reduce pollution.

  11. NOBAI: a web server for character coding of geometrical and statistical features in RNA structure

    Science.gov (United States)

    Knudsen, Vegeir; Caetano-Anollés, Gustavo

    2008-01-01

    The Numeration of Objects in Biology: Alignment Inferences (NOBAI) web server provides a web interface to the applications in the NOBAI software package. This software codes topological and thermodynamic information related to the secondary structure of RNA molecules as multi-state phylogenetic characters, builds character matrices directly in NEXUS format and provides sequence randomization options. The web server is an effective tool that facilitates the search for evolutionary history embedded in the structure of functional RNA molecules. The NOBAI web server is accessible at ‘http://www.manet.uiuc.edu/nobai/nobai.php’. This web site is free and open to all users and there is no login requirement. PMID:18448469

  12. MCTBI: a web server for predicting metal ion effects in RNA structures.

    Science.gov (United States)

    Sun, Li-Zhen; Zhang, Jing-Xiang; Chen, Shi-Jie

    2017-08-01

    Metal ions play critical roles in RNA structure and function. However, web servers and software packages for predicting ion effects in RNA structures are notably scarce. Furthermore, the existing web servers and software packages mainly neglect ion correlation and fluctuation effects, which are potentially important for RNAs. We here report a new web server, the MCTBI server (http://rna.physics.missouri.edu/MCTBI), for the prediction of ion effects for RNA structures. This server is based on the recently developed MCTBI, a model that can account for ion correlation and fluctuation effects for nucleic acid structures and can provide improved predictions for the effects of metal ions, especially for multivalent ions such as Mg2+, as shown by extensive theory-experiment test results. The MCTBI web server predicts metal ion binding fractions, the most probable bound ion distribution, the electrostatic free energy of the system, and the free energy components. The results provide mechanistic insights into the role of metal ions in RNA structure formation and folding stability, which is important for understanding RNA functions and the rational design of RNA structures. © 2017 Sun et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  13. Professional SQL Server 2005 administration

    CERN Document Server

    Knight, Brian; Snyder, Wayne; Armand, Jean-Claude; LoForte, Ross; Ji, Haidong

    2007-01-01

    SQL Server 2005 is the largest leap forward for SQL Server since its inception. With this update comes new features that will challenge even the most experienced SQL Server DBAs. Written by a team of some of the best SQL Server experts in the industry, this comprehensive tutorial shows you how to navigate the vastly changed landscape of the SQL Server administration. Drawing on their own first-hand experiences to offer you best practices, unique tips and tricks, and useful workarounds, the authors help you handle even the most difficult SQL Server 2005 administration issues, including blockin

  14. Server hardware dependability: effect of periodic switching on and off; Auswirkungen von periodischem Ein- und Ausschalten auf die Server-Hardware-Zuverlaessigkeit

    Energy Technology Data Exchange (ETDEWEB)

    Held, M.

    2003-07-01

    This final report discusses investigations made on behalf of the Swiss Federal Office of Energy that showed a large potential for energy savings by switching off servers during idle periods. User concerns, such as the possible effects of intermittent operation on hardware reliability, are discussed. On the basis of the RDF 2000 model evaluated in this project, the predicted failure rates of components of a typical SME (small and medium enterprise) server are presented, calculated for the three operational modes 'continuous operation', 'on and idle', and 'on and off'. The failure rate model described takes account of the influence of temperature on failure rates as well as of thermo-mechanical effects caused by changes in loading and temperature, which also have a substantial impact on the failure rates of electronic components.
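    Reliability handbooks of this kind typically model the temperature influence on failure rates with Arrhenius-type acceleration factors, while the thermo-mechanical stress from on/off cycling is treated separately with cycling-fatigue laws. The following is a generic, illustrative sketch of the Arrhenius part only; the activation energy is a placeholder, not a value taken from RDF 2000:

```python
import math

K_B = 8.617e-5                        # Boltzmann constant, eV/K

def arrhenius_factor(t_use_c, t_ref_c, ea=0.7):
    """Arrhenius acceleration of a thermally activated failure mechanism:
    the factor by which the failure rate at t_use_c exceeds that at t_ref_c
    (temperatures in deg C). ea is the activation energy in eV and is
    mechanism-dependent; 0.7 eV here is purely a placeholder."""
    t_use = t_use_c + 273.15          # convert to kelvin
    t_ref = t_ref_c + 273.15
    return math.exp(ea / K_B * (1.0 / t_ref - 1.0 / t_use))

hot = arrhenius_factor(55.0, 40.0)    # running 15 degC hotter than reference
```

    This is why "on and idle" and "continuous operation" differ mainly through component temperature, whereas "on and off" additionally brings in thermal-cycling fatigue that a pure Arrhenius term does not capture.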

  15. Considering Interactions among Multiple Criteria for the Server Selection

    Directory of Open Access Journals (Sweden)

    Vesna Čančer

    2010-06-01

    Full Text Available Decision-making about server selection is one of the multi-criteria decision-making (MCDM) processes where interactions among criteria should be considered. The paper introduces and develops some solutions for considering interactions among criteria in MCDM problems. In the frame procedure for MCDM using the group of methods based on assigning weights, special attention is given to the synthesis of the local alternatives’ values into aggregate values where mutual preferential independence between two criteria is not assumed. Firstly, we delineate how to complete the additive model into a multiplicative one with synergy and redundancy elements in the case that criteria are structured in one level and in two levels. Furthermore, we adapted the concept of the fuzzy Choquet integral to multi-attribute value theory. Studying and comparing the results of the example case of server selection obtained by both aggregation approaches, the paper highlights the advantages of the first one, since it does not require decision makers to determine the weights of all possible combinations of the criteria and it enables the further use of the most preferred MCDM methods.
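
As an illustration of the aggregation issue (a generic sketch, not the paper's exact model; weights, scores and interaction terms are all invented), the classic additive value function can be extended with pairwise interaction terms in the spirit of a 2-additive Choquet-style aggregation:

```python
# Hypothetical sketch: additive aggregation of local criterion values,
# extended with pairwise interaction terms to model synergy (positive)
# and redundancy (negative) between criteria.

def additive_value(weights, values):
    """Classic weighted sum, assuming mutual preferential independence."""
    return sum(w * v for w, v in zip(weights, values))

def interactive_value(weights, values, interactions):
    """Weighted sum plus pairwise synergy/redundancy corrections.

    interactions maps a pair of criteria indices (i, j) to an interaction
    weight: positive for synergy, negative for redundancy. Each pair
    contributes proportionally to the smaller of the two local values.
    """
    base = additive_value(weights, values)
    correction = sum(
        w_ij * min(values[i], values[j])
        for (i, j), w_ij in interactions.items()
    )
    return base + correction

# Two candidate servers scored on three criteria in [0, 1].
weights = [0.5, 0.3, 0.2]
server_a = [0.9, 0.4, 0.7]
server_b = [0.6, 0.8, 0.5]

# Criteria 0 and 1 are assumed partially redundant
# (e.g. two overlapping performance measures).
interactions = {(0, 1): -0.1}

score_a = interactive_value(weights, server_a, interactions)
score_b = interactive_value(weights, server_b, interactions)
```

Only the interaction weights for pairs that actually interact need to be elicited, which reflects the paper's argument that decision makers need not weight every possible combination of criteria.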

  16. Criticality conditions of heterogeneous energetic materials under shock loading

    Science.gov (United States)

    Nassar, Anas; Rai, Nirmal Kumar; Sen, Oishik; Udaykumar, H. S.

    2017-06-01

    Shock interaction with the microstructural heterogeneities of energetic materials can lead to the formation of locally heated regions known as hot spots. These hot spots are the potential sites where chemical reaction may be initiated. However, the ability of a hot spot to initiate chemical reaction depends on its size, shape and strength (temperature). A previous study by Tarver et al. showed that, for a given hot-spot shape (spherical, cylindrical, or planar), there exists a critical size and temperature above which reaction initiation is imminent. Tarver et al. assumed a constant temperature within the hot spot. However, meso-scale simulations show that the temperature distribution within a hot spot formed by processes such as void collapse is seldom constant. Also, the shape of a hot spot can be arbitrary. This work is a step toward developing a critical hot spot curve as a function of loading strength, duration and void morphology. To achieve this goal, meso-scale simulations are conducted on porous HMX material. The process is repeated for different loading conditions and void sizes. The hot spots formed in the process are examined for criticality depending on whether or not they ignite. A metamodel constructed from these simulation results is used to obtain criticality curves, which are compared with the critical hot spot curve of Tarver et al.

  17. QlikView Server and Publisher

    CERN Document Server

    Redmond, Stephen

    2014-01-01

    This is a comprehensive guide with a step-by-step approach that enables you to host and manage servers using QlikView Server and QlikView Publisher. If you are a server administrator wanting to learn how to deploy QlikView Server for server management, analysis and testing, and QlikView Publisher for publishing business content, then this is the perfect book for you. No prior experience with QlikView is expected.

  18. Air pollution and impact on eco-systems. Load concept/critical level and its consequences

    International Nuclear Information System (INIS)

    Elichegaray, C.

    1993-01-01

    Critical loads and critical levels can be defined, respectively, as the deposition value, or the concentration of pollutants in the atmosphere, above which adverse effects may occur on receptors such as plants, ecosystems and materials. Important research is currently being developed on critical loads and levels in the framework of the Geneva convention on transboundary air pollution. Several binding protocols have been elaborated among the European countries, Canada and the USA to reduce their emissions of sulphur, nitrogen oxides and volatile organic compounds. This article describes the critical loads and levels approach, and the way this concept is now used for the revision of the sulphur protocol. (author). 6 refs., 5 figs., 4 tabs

  19. A conceptual framework: redefining forest soils' critical acid loads under a changing climate

    Science.gov (United States)

    Steven G. McNulty; Johnny L. Boggs

    2010-01-01

    Federal agencies of several nations have or are currently developing guidelines for critical forest soil acid loads. These guidelines are used to establish regulations designed to maintain atmospheric acid inputs below levels shown to damage forests and streams. Traditionally, when the critical soil acid load exceeds the amount of acid that the ecosystem can absorb, it...

  20. Critical loads and excess loads of cadmium, copper and lead for European forest soils

    NARCIS (Netherlands)

    Reinds, G.J.; Bril, J.; Vries, de W.; Groenenberg, J.E.; Breeuwsma, A.

    1995-01-01

    Recently, concern has arisen about the impact of the dispersion of heavy metals in Europe. Therefore, a study (ESQUAD) was initiated to assess critical loads and steady-state concentrations of cadmium, copper and lead for European forest soils. The calculation methods used strongly resemble those

  1. Homemade battery-operated multi-barreled muzzle-loading gun.

    Science.gov (United States)

    Ramiah, R; Thirunavukkarasu, G

    2003-11-01

    In a recent shootout by a terrorist group against a law enforcement agency, some unusual firearms were seized. On examination, these firearms were found to be homemade, battery-operated, multi-barreled muzzle-loading guns, analogous to a repeater. Reference to battery-operated firearms is rather scanty in the literature. Hence, the unique design features, electrical circuit, and the operation system of these unusual guns are described.

  2. The development of an approach to assess critical loads of acidity for woodland habitats in Great Britain

    Directory of Open Access Journals (Sweden)

    S. J. Langan

    2004-01-01

    Full Text Available Alongside other countries that are signatories to the UNECE Convention on Long-Range Transboundary Air Pollution, the UK is committed to reducing the impact of air pollution on the environment. To advise and guide this policy in relation to atmospheric emissions of sulphur and nitrogen, a critical load approach has been developed. To assess the potential impact of these pollutants on woodland habitats, a steady state, simple mass balance model has been parameterised. For mineral soils, a Ca:Al ratio in soil solution has been used as the critical load indicator for potential damage. For peat and organic soils, critical loads have been set according to a pH criterion. Together these approaches have been used with national datasets to examine the potential scale of acidification in woodland habitats across the UK. The results can be mapped to show the spatial variability in critical loads of the three principal woodland habitat types (managed coniferous, managed broadleaved/mixed woodland and unmanaged woodland). The results suggest that there is a wide range of critical loads. The most sensitive (lowest) critical loads are associated with managed coniferous woodland, followed by unmanaged woodland on peat soils. Calculations indicate that at steady state, acid deposition inputs reported for 1995–1997 result in a large proportion of all the woodland habitats identified receiving deposition loads in excess of their critical load; i.e. critical loads are exceeded. These are discussed in relation to future modelled depositions for 2010. Whilst significant widespread negative impacts of such deposition on UK woodland habitats have not been reported, the work serves to illustrate that if acid deposition inputs were maintained and projected emissions reductions not achieved, the long-term sustainability of large areas of woodland in the UK could be compromised. Keywords: critical loads, acid deposition, acidification, woodland, simple mass balance model
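
The exceedance idea underlying the steady-state simple mass balance approach can be sketched numerically (all figures below are illustrative assumptions, not the study's values):

```python
# Minimal sketch (illustrative numbers only) of critical-load
# exceedance: a habitat's critical load is the acid deposition it can
# neutralise in the long term; exceedance is any deposition above that.

def exceedance(deposition, critical_load):
    """Exceedance in keq ha-1 yr-1; zero when deposition is tolerable."""
    return max(0.0, deposition - critical_load)

# Hypothetical critical loads of acidity for three woodland habitat types.
critical_loads = {
    "managed coniferous": 0.5,   # most sensitive: lowest critical load
    "unmanaged woodland on peat": 0.8,
    "managed broadleaved/mixed": 1.2,
}

deposition = 1.0  # illustrative acid deposition, keq ha-1 yr-1

for habitat, cl in critical_loads.items():
    ex = exceedance(deposition, cl)
    status = "exceeded" if ex > 0 else "not exceeded"
    print(f"{habitat}: critical load {status} (excess {ex:.1f})")
```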

  3. Study of load balancing technology for EAST data management

    Energy Technology Data Exchange (ETDEWEB)

    Li, Shi, E-mail: lishi@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Wang, Feng [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Xiao, Bingjia [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui (China); Yang, Fei [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Department of Computer Science, Anhui Medical University, Hefei, Anhui (China); Sun, Xiaoyang; Wang, Yong [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China)

    2014-05-15

    Highlights: • The load balancing concept is introduced into the MDSplus data service. • The new data service system based on the LVS framework and heartbeat technologies is described. • The scheduling algorithm “WLC” is used, and a software system is developed for optimizing the weights of node servers. - Abstract: With the continuous renewal and increasing number of diagnostics, the EAST tokamak routinely generates ∼3 GB of raw data per pulse of the experiment, which is transferred to a centralized data management system. In order to strengthen international cooperation, all the acquired data have been converted and stored in MDSplus servers. During operation of the data system, problems arise when many client machines connect to a single MDSplus data server. Because each server process keeps its connection open until the client closes it, many server processes occupy many network ports and consume a large amount of memory; data access becomes very slow even though the CPU is not fully utilized. To improve data management system performance, multiple MDSplus servers will be installed on a blade server to form a server cluster, realizing load balancing and high availability through LVS and heartbeat technology. This paper describes the details of the design and the test results.
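
The WLC (weighted least-connection) scheduling used by LVS can be sketched as follows; the server names and weights are illustrative, and real LVS performs this selection in the kernel rather than in application code:

```python
# Sketch of the weighted least-connection (WLC) idea used by LVS:
# route each new request to the server whose ratio of active
# connections to configured weight is smallest.

class Server:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight        # higher weight = more capacity
        self.connections = 0        # currently open connections

def pick_server(servers):
    """Return the server minimising connections/weight."""
    return min(servers, key=lambda s: s.connections / s.weight)

servers = [Server("mdsplus-1", 4), Server("mdsplus-2", 2), Server("mdsplus-3", 1)]

# Simulate a burst of long-lived client connections.
for _ in range(14):
    s = pick_server(servers)
    s.connections += 1

loads = {s.name: s.connections for s in servers}
```

With no connections closing, the burst ends up distributed in proportion to the weights (8:4:2 here), which is the behaviour that keeps long-lived MDSplus connections from piling up on one node.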

  4. A multi-objective genetic approach to domestic load scheduling in an energy management system

    International Nuclear Information System (INIS)

    Soares, Ana; Antunes, Carlos Henggeler; Oliveira, Carlos; Gomes, Álvaro

    2014-01-01

    In this paper a multi-objective genetic algorithm is used to solve a multi-objective model to optimize the time allocation of domestic loads within a planning period of 36 h, in a smart grid context. The management of controllable domestic loads is aimed at minimizing the electricity bill and the end-user’s dissatisfaction concerning two different aspects: the preferred time slots for load operation and the risk of interruption of the energy supply. The genetic algorithm is similar to the Elitist NSGA-II (Nondominated Sorting Genetic Algorithm II), in which some changes have been introduced to adapt it to the physical characteristics of the load scheduling problem and improve usability of results. The mathematical model explicitly considers economical, technical, quality of service and comfort aspects. Illustrative results are presented and the characteristics of different solutions are analyzed. - Highlights: • A genetic algorithm similar to the NSGA-II is used to solve a multi-objective model. • The optimized time allocation of domestic loads in a smart grid context is achieved. • A variable preference profile for the operation of the managed loads is included. • A safety margin is used to account for the quality of the energy services provided. • A non-dominated front with the solutions in the two-objective space is obtained
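
The elitist selection step in an NSGA-II-style algorithm rests on non-dominated sorting. As a minimal illustration (not the authors' implementation, and with made-up objective values), the following sketch filters candidate load schedules down to the Pareto front over the two objectives, electricity bill and user dissatisfaction:

```python
# Sketch of the non-dominated filtering at the heart of NSGA-II-style
# selection: keep the load schedules for which no other schedule is
# better on both objectives (minimisation of bill and dissatisfaction).

def dominates(a, b):
    """True if schedule a is at least as good as b on every objective
    and strictly better on at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(schedules):
    return [s for s in schedules
            if not any(dominates(other, s) for other in schedules)]

# (bill in EUR, dissatisfaction score) for candidate 36 h schedules.
candidates = [(4.2, 0.9), (3.8, 1.5), (4.2, 1.0), (5.0, 0.4), (3.8, 1.6)]

front = pareto_front(candidates)
```

The front contains only mutually non-dominated trade-offs; a full NSGA-II additionally ranks the dominated solutions into further fronts and applies crowding-distance selection.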

  5. Microsoft SQL Server 2012 administration real-world skills for MCSA certification and beyond (exams 70-461, 70-462, and 70-463)

    CERN Document Server

    Carpenter, Tom

    2013-01-01

    Implement, maintain, and repair SQL Server 2012 databases As the most significant update since 2008, Microsoft SQL Server 2012 boasts updates and new features that are critical to understand. Whether you manage and administer SQL Server 2012 or are planning to get your MCSA: SQL Server 2012 certification, this book is the perfect supplement to your learning and preparation. From understanding SQL Server's roles to implementing business intelligence and reporting, this practical book explores tasks and scenarios that a working SQL Server DBA faces regularly and shows you step by ste

  6. Mastering Microsoft Exchange Server 2010

    CERN Document Server

    McBee, Jim

    2010-01-01

    A top-selling guide to Exchange Server-now fully updated for Exchange Server 2010. Keep your Microsoft messaging system up to date and protected with the very newest version, Exchange Server 2010, and this comprehensive guide. Whether you're upgrading from Exchange Server 2007 SP1 or earlier, installing for the first time, or migrating from another system, this step-by-step guide provides the hands-on instruction, practical application, and real-world advice you need.: Explains Microsoft Exchange Server 2010, the latest release of Microsoft's messaging system that protects against spam and vir

  7. Optimal control of a server farm

    NARCIS (Netherlands)

    Adan, I.J.B.F.; Kulkarni, V.G.; Wijk, van A.C.C.

    2013-01-01

    We consider a server farm consisting of ample exponential servers that serve a Poisson stream of arriving customers. Each server can be either busy, idle or off. An arriving customer will immediately occupy an idle server, if there is one, and otherwise, an off server will be turned on and start
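
The arrival rule described above can be sketched as simple state bookkeeping (service completions and the optimal switching policy, the actual subject of the paper, are omitted; the state representation is an illustrative simplification):

```python
# Sketch of the assignment rule: an arriving customer takes an idle
# server if one exists; otherwise an off server is turned on. Only the
# state bookkeeping is modelled, not the exponential service times.

def arrive(state):
    """state maps 'busy'/'idle'/'off' to counts; returns the new state."""
    state = dict(state)
    if state["idle"] > 0:
        state["idle"] -= 1          # occupy an idle server immediately
    else:
        state["off"] -= 1           # turn an off server on
    state["busy"] += 1
    return state

state = {"busy": 0, "idle": 2, "off": 5}
for _ in range(4):                  # four arrivals, no departures
    state = arrive(state)
```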

  8. NEOS Server 4.0 Administrative Guide

    OpenAIRE

    Dolan, Elizabeth D.

    2001-01-01

    The NEOS Server 4.0 provides a general Internet-based client/server as a link between users and software applications. The administrative guide covers the fundamental principles behind the operation of the NEOS Server, installation and troubleshooting of the Server software, and implementation details of potential interest to a NEOS Server administrator. The guide also discusses making new software applications available through the Server, including areas of concern to remote solver adminis...

  9. Application of the critical loads approach in South Africa

    CSIR Research Space (South Africa)

    Van Tienhoven, AM

    1995-12-01

    Full Text Available South Africa is the most industrialised country in southern Africa and stands at some risk from negative pollution impacts. To the authors' knowledge, this paper presents the first attempt to apply the critical loads approach...

  10. Microsoft SQL Server 2012 bible

    CERN Document Server

    Jorgensen, Adam; LeBlanc, Patrick; Cherry, Denny; Nelson, Aaron

    2012-01-01

    Harness the powerful new SQL Server 2012 Microsoft SQL Server 2012 is the most significant update to this product since 2005, and it may change how database administrators and developers perform many aspects of their jobs. If you're a database administrator or developer, Microsoft SQL Server 2012 Bible teaches you everything you need to take full advantage of this major release. This detailed guide not only covers all the new features of SQL Server 2012, it also shows you step by step how to develop top-notch SQL Server databases and new data connections and keep your databases performing at p

  11. Windows Home Server users guide

    CERN Document Server

    Edney, Andrew

    2008-01-01

    Windows Home Server brings the idea of centralized storage, backup and computer management out of the enterprise and into the home. Windows Home Server is built for people with multiple computers at home and helps to synchronize them, keep them updated, stream media between them, and back them up centrally. Built on a similar foundation as the Microsoft server operating products, it's essentially Small Business Server for the home.This book details how to install, configure, and use Windows Home Server and explains how to connect to and manage different clients such as Windows XP, Windows Vist

  12. Linux Server Security

    CERN Document Server

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  13. Open client/server computing and middleware

    CERN Document Server

    Simon, Alan R

    2014-01-01

    Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and how client/server computing is being done.This book analyzes an in-depth set of case studies about two different open client/server development environments-Microsoft Windows and UNIX, describing the architectures, various product components, and how these environments interrelate. Topics include the open systems and client/server computing, next-generation client/server architectures, principles of middleware, and overview of ProtoGen+. The ViewPaint environment

  14. Windows server cookbook for Windows server 2003 and Windows 2000

    CERN Document Server

    Allen, Robbie

    2005-01-01

    This practical reference guide offers hundreds of useful tasks for managing Windows 2000 and Windows Server 2003, Microsoft's latest server. These concise, on-the-job solutions to common problems are certain to save you many hours of time searching through Microsoft documentation. Topics include files, event logs, security, DHCP, DNS, backup/restore, and more

  15. Multi-agent grid system Agent-GRID with dynamic load balancing of cluster nodes

    Science.gov (United States)

    Satymbekov, M. N.; Pak, I. T.; Naizabayeva, L.; Nurzhanov, Ch. A.

    2017-12-01

    This study presents a system designed for automated load balancing: it analyses the load of compute nodes and then migrates virtual machines from heavily loaded nodes to less loaded ones. The system increases the performance of cluster nodes and helps ensure timely processing of data. The grid system balances the work of cluster nodes; the relevance of the system lies in applying multi-agent balancing to such problems.
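
A minimal sketch of such threshold-driven rebalancing (node names, loads and the single-pass policy are illustrative assumptions, not the Agent-GRID implementation):

```python
# Hypothetical sketch of the balancing step: find nodes above a load
# threshold and migrate one virtual machine from each to the currently
# least-loaded node.

def rebalance(nodes, threshold=0.8):
    """nodes maps node name -> list of VM loads (fraction of capacity).
    Mutates nodes by migrating one VM off each overloaded node."""
    for name in sorted(nodes):
        if sum(nodes[name]) > threshold and len(nodes[name]) > 1:
            target = min(nodes, key=lambda n: sum(nodes[n]))
            if target != name:
                vm = min(nodes[name])          # move the cheapest VM
                nodes[name].remove(vm)
                nodes[target].append(vm)

cluster = {
    "node-a": [0.5, 0.4, 0.2],   # total 1.1 -> overloaded
    "node-b": [0.3],
    "node-c": [0.6, 0.3],
}
rebalance(cluster)
```

A real system would repeat this pass periodically and weigh the migration cost of each VM against the expected gain.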

  16. Learning SQL Server Reporting Services 2012

    CERN Document Server

    Krishnaswamy, Jayaram

    2013-01-01

    The book is packed with clear instructions and plenty of screenshots, providing all the support and guidance you will need as you begin to generate reports with SQL Server 2012 Reporting Services.This book is for those who are new to SQL Server Reporting Services 2012 and aspiring to create and deploy cutting edge reports. This book is for report developers, report authors, ad-hoc report authors and model developers, and Report Server and SharePoint Server Integrated Report Server administrators. Minimal knowledge of SQL Server is assumed and SharePoint experience would be helpful.

  17. Multi-Layer Mobility Load Balancing in a Heterogeneous LTE Network

    DEFF Research Database (Denmark)

    Fotiadis, Panagiotis; Polignano, Michele; Laselva, Daniela

    2012-01-01

    This paper analyzes the behavior of a distributed Mobility Load Balancing (MLB) scheme in a multi-layer 3GPP (3rd Generation Partnership Project) Long Term Evolution (LTE) deployment with different User Equipment (UE) densities in certain network areas covered with pico cells. Target of the study...

  18. Tree-based server-middleman-client architecture: improving scalability and reliability for voting-based network games in ad hoc wireless networks

    Science.gov (United States)

    Guo, Y.; Fujinoki, H.

    2006-10-01

    The concept of a new tree-based architecture for networked multi-player games was proposed by Matuszek to improve scalability in network traffic and, at the same time, reliability. The architecture (we refer to it as the "Tree-Based Server-Middlemen-Client", or TB-SMC, architecture) addresses the two major problems in ad-hoc wireless networks, frequent link failures and significant battery power consumption at wireless transceivers, by using two new techniques: recursive aggregation of client messages and subscription-based propagation of game state. However, the performance of the TB-SMC architecture had never been quantitatively studied. In this paper, the TB-SMC architecture is compared with the client-server architecture using simulation experiments. We developed an event-driven simulator to evaluate the performance of the TB-SMC architecture. In the network traffic scalability experiments, the TB-SMC architecture resulted in less than 1/14 of the network traffic load for 200 end users. In the reliability experiments, the TB-SMC architecture improved the number of successfully delivered players' votes by 31.6, 19.0, and 12.4% over the client-server architecture at high (failure probability of 90%), moderate (50%) and low (10%) failure probabilities.
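
The recursive aggregation technique can be illustrated with a toy sketch (the tree shape, vote values and Counter-based tallying are invented for illustration):

```python
# Sketch of recursive aggregation of client messages in a
# server-middleman-client tree: each middleman tallies its subtree's
# votes and forwards a single aggregated message upward, so the server
# receives one message per child instead of one per player.

from collections import Counter

def aggregate(node):
    """node is either a leaf vote (a string) or a list of child nodes.
    Returns a Counter of votes for the whole subtree."""
    if isinstance(node, str):
        return Counter([node])
    total = Counter()
    for child in node:
        total += aggregate(child)   # one upward message per child
    return total

# A small tree: server with two middlemen, each serving leaf clients.
tree = [
    ["red", "blue", "red"],         # middleman 1's clients
    ["blue", ["red", "blue"]],      # middleman 2 with a nested middleman
]

tally = aggregate(tree)
```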

  19. Adaptive Load Balancing of Parallel Applications with Multi-Agent Reinforcement Learning on Heterogeneous Systems

    Directory of Open Access Journals (Sweden)

    Johan Parent

    2004-01-01

    Full Text Available We report on the improvements that can be achieved by applying machine learning techniques, in particular reinforcement learning, for the dynamic load balancing of parallel applications. The applications being considered in this paper are coarse grain data intensive applications. Such applications put high pressure on the interconnect of the hardware. Synchronization and load balancing in complex, heterogeneous networks need fast, flexible, adaptive load balancing algorithms. Viewing a parallel application as a one-state coordination game in the framework of multi-agent reinforcement learning, and by using a recently introduced multi-agent exploration technique, we are able to improve upon the classic job farming approach. The improvements are achieved with limited computation and communication overhead.
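
As a loose illustration of the idea (not the authors' multi-agent algorithm; worker names and timings are invented), a dispatcher in a job-farming setting can learn which worker is fastest by updating a running estimate from observed completion times and dispatching greedily:

```python
# Hypothetical sketch of learning-based dispatching: keep a running
# estimate of each worker's completion time, greedily send the next job
# to the worker estimated to be fastest, and update the estimate from
# the observed time (a one-state reinforcement-learning view).

def dispatch(estimates):
    """Pick the worker with the lowest estimated completion time."""
    return min(estimates, key=estimates.get)

def update(estimates, worker, observed, alpha=0.5):
    """Move the estimate toward the observed completion time."""
    estimates[worker] += alpha * (observed - estimates[worker])

# Optimistic initial estimates force every worker to be tried once.
estimates = {"fast-node": 0.0, "slow-node": 0.0}
true_times = {"fast-node": 1.0, "slow-node": 4.0}

history = []
for _ in range(6):
    w = dispatch(estimates)
    history.append(w)
    update(estimates, w, true_times[w])
```

After trying each worker once, the dispatcher settles on the faster node; the paper's contribution is doing this with multiple coordinating agents and an exploration technique rather than this single greedy learner.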

  20. Display graphical information optimization methods in a client-server information system

    Directory of Open Access Journals (Sweden)

    Юрий Викторович Мазуревич

    2015-07-01

    Full Text Available This paper presents an approach that reduces the load time and the volume of data needed to display a web page by means of server-side preprocessing. The effectiveness of this approach was measured. The conditions under which the approach is most effective were identified, along with its disadvantages and ways to mitigate them

  1. Static Load Balancing Algorithms in Cloud Computing: Challenges & Solutions

    Directory of Open Access Journals (Sweden)

    Nadeem Shah

    2015-08-01

    Full Text Available Cloud computing provides on-demand hosted computing resources and services over the Internet on a pay-per-use basis. It is currently becoming the favored method of communication and computation over scalable networks due to numerous attractive attributes such as high availability, scalability, fault tolerance, simplicity of management, and low cost of ownership. Due to the huge demand for cloud computing, efficient load balancing becomes critical to ensure that computational tasks are evenly distributed across servers to prevent bottlenecks. The aim of this review paper is to understand the current challenges in cloud computing, primarily in cloud load balancing using static algorithms, and to find gaps to bridge for more efficient static cloud load balancing in the future. We believe the ideas suggested as new solutions will allow researchers to redesign better algorithms for better functionalities and improved user experiences in simple cloud systems. This could assist small businesses that cannot afford infrastructure that supports complex & dynamic load balancing algorithms.
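
Round-robin assignment, one of the classic static algorithms covered by such reviews, can be sketched as follows (server and task names are invented):

```python
# Sketch of a classic static load-balancing scheme: round-robin
# assignment distributes incoming tasks across servers in a fixed
# cyclic order, with no knowledge of the current server state.

from itertools import cycle

def round_robin(tasks, servers):
    """Return a mapping server -> list of assigned tasks."""
    assignment = {s: [] for s in servers}
    for task, server in zip(tasks, cycle(servers)):
        assignment[server].append(task)
    return assignment

tasks = [f"req-{i}" for i in range(7)]
plan = round_robin(tasks, ["web-1", "web-2", "web-3"])
```

The scheme is trivially cheap, which is its appeal for small deployments, but it ignores task size and server load: exactly the limitation that motivates the dynamic algorithms the review contrasts it with.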

  2. a model for the determination of the critical buckling load of self

    African Journals Online (AJOL)

    HP

    Considering the widespread use of this type of structure and the critical role it ... proposed by the model for the critical buckling load of self-supporting lattice tower, whose equivalent solid beam- ... stiffness, both material and geometric, [5, 6].

  3. Beginning Microsoft SQL Server 2012 Programming

    CERN Document Server

    Atkinson, Paul

    2012-01-01

    Get up to speed on the extensive changes to the newest release of Microsoft SQL Server The 2012 release of Microsoft SQL Server changes how you develop applications for SQL Server. With this comprehensive resource, SQL Server authority Robert Vieira presents the fundamentals of database design and SQL concepts, and then shows you how to apply these concepts using the updated SQL Server. Published alongside the 2012 release, Beginning Microsoft SQL Server 2012 Programming begins with a quick overview of database design basics and the SQL query language and then quickly proceeds to sho

  4. Long-term modelling of nitrogen turnover and critical loads in a forested catchment using the INCA model

    Directory of Open Access Journals (Sweden)

    J.-J. Langusch

    2002-01-01

    Full Text Available Many forest ecosystems in Central Europe have reached the status of N saturation due to chronically high N deposition. In consequence, NO3 leaching into ground- and surface waters is often substantial. Critical loads have been defined to abate the negative consequences of NO3 leaching such as soil acidification and nutrient losses. The steady state mass balance method is normally used to calculate critical loads for N deposition in forest ecosystems. However, the steady state mass balance approach is limited because it does not take into account hydrology and the time until the steady state is reached. The aim of this study was to test the suitability of another approach: the dynamic model INCA (Integrated Nitrogen Model for European Catchments). Long-term effects of changing N deposition and critical loads for N were simulated using INCA for the Lehstenbach spruce catchment (Fichtelgebirge, NE Bavaria, Germany) under different hydrological conditions. Long-term scenarios of either increasing or decreasing N deposition indicated that, in this catchment, the response of nitrate concentrations in runoff to changing N deposition is buffered by a large groundwater reservoir. The critical load simulated by the INCA model with respect to a nitrate concentration of 0.4 mg N l-1 as threshold value in runoff was 9.7 kg N ha-1 yr-1, compared to 10 kg N ha-1 yr-1 for the steady state model. Under conditions of lower precipitation (520 mm), the resulting critical load was 7.7 kg N ha-1 yr-1, suggesting the necessity to account for different hydrological conditions when calculating critical loads. The INCA model seems suitable for calculating critical loads for N in forested catchments under varying hydrological conditions, e.g. as a consequence of climate change. Keywords: forest ecosystem, N saturation, critical load, modelling, long-term scenario, nitrate leaching, critical loads reduction, INCA

  5. SPEER-SERVER: a web server for prediction of protein specificity determining sites.

    Science.gov (United States)

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J; Panchenko, Anna R; Chakrabarti, Saikat

    2012-07-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids' Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/.
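
One common ingredient of such conservation analysis (shown here as a generic illustration, not SPEER's actual scoring) is the Shannon entropy of an alignment column: a column that is nearly invariant within each subfamily but prefers a different residue in each subfamily is a candidate specificity determining site. A toy sketch with an invented alignment:

```python
# Illustrative sketch: score alignment columns by Shannon entropy.
# Low entropy within a subfamily combined with a different residue
# preference between subfamilies hints at a specificity determining
# site. The toy alignment below is made up.

import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of the residue distribution in a column."""
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Two subfamilies, four aligned sequences each, three columns.
subfamily_a = ["ADK", "ADK", "ADR", "ADK"]
subfamily_b = ["AEK", "AEK", "AEK", "AEG"]

for i in range(3):
    col_a = [seq[i] for seq in subfamily_a]
    col_b = [seq[i] for seq in subfamily_b]
    print(f"column {i}: H(A)={column_entropy(col_a):.2f}, "
          f"H(B)={column_entropy(col_b):.2f}")
```

Column 1 has zero entropy in both subfamilies yet a different conserved residue (D vs. E), the signature a SDS predictor looks for; SPEER additionally weighs physico-chemical properties and evolutionary rates.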

  6. Experimental creep behaviour determination of cladding tube materials under multi-axial loadings

    International Nuclear Information System (INIS)

    Grosjean, Catherine; Poquillon, Dominique; Salabura, Jean-Claude; Cloue, Jean-Marc

    2009-01-01

    Cladding tubes are structural parts of nuclear plants, submitted to complex thermomechanical loadings. Thus, it is necessary to know and predict their behaviour to preserve their integrity and to enhance their lifetime. Therefore, a new experimental device has been developed to control the load path under multi-axial load conditions. The apparatus is designed to determine the thermomechanical behaviour of zirconium alloys used for cladding tubes. First results are presented. Creep tests with different biaxial loadings were performed. Results are analysed in terms of thermal expansion and of creep strain. The anisotropy of the material is revealed and iso-creep strain curves are given.

  7. Mastering Microsoft Exchange Server 2013

    CERN Document Server

    Elfassy, David

    2013-01-01

    The bestselling guide to Exchange Server, fully updated for the newest version Microsoft Exchange Server 2013 is touted as a solution for lowering the total cost of ownership, whether deployed on-premises or in the cloud. Like the earlier editions, this comprehensive guide covers every aspect of installing, configuring, and managing this multifaceted collaboration system. It offers Windows systems administrators and consultants a complete tutorial and reference, ideal for anyone installing Exchange Server for the first time or those migrating from an earlier Exchange Server version.Microsoft

  8. Multi-critical points in weakly anisotropic magnetic systems

    International Nuclear Information System (INIS)

    Basten, J.A.J.

    1979-02-01

    This report starts with a rather extensive presentation of the concepts and ideas which constitute the basis of the modern theory of static critical phenomena. It is shown how at a critical point the semi-phenomenological concepts of universality and scaling are directly related to the divergence of the correlation length and how they are extended to a calculational method for critical behaviour in Wilson's Renormalization-Group (RG) approach. Subsequently the predictions of the molecular-field and RG-theories on the phase transitions and critical behaviour in weakly anisotropic antiferromagnets are treated. In a magnetic field applied along the easy axis, these materials can display an (H,T) phase diagram which contains either a bicritical point or a tetracritical point. Especially the behaviour close to these multi-critical points, as predicted by the extended-scaling theory, is discussed. (Auth.)

  9. Remote Sensing Data Analytics for Planetary Science with PlanetServer/EarthServer

    Science.gov (United States)

    Rossi, Angelo Pio; Figuera, Ramiro Marco; Flahaut, Jessica; Martinot, Melissa; Misev, Dimitar; Baumann, Peter; Pham Huu, Bang; Besse, Sebastien

    2016-04-01

    Planetary Science datasets, beyond the change in the last two decades from physical volumes to internet-accessible archives, still face the problem of large-scale processing and analytics (e.g. Rossi et al., 2014, Gaddis and Hare, 2015). PlanetServer, the Planetary Science Data Service of the EC-funded EarthServer-2 project (#654367) tackles the planetary Big Data analytics problem with an array database approach (Baumann et al., 2014). It is developed to serve a large amount of calibrated, map-projected planetary data online, mainly through Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) (e.g. Rossi et al., 2014; Oosthoek et al., 2013; Cantini et al., 2014). The focus of the H2020 evolution of PlanetServer is still on complex multidimensional data, particularly hyperspectral imaging and topographic cubes and imagery. In addition to hyperspectral and topographic data from Mars (Rossi et al., 2014), the use of WCPS is applied to diverse datasets on the Moon, as well as Mercury. Other Solar System bodies are going to be progressively available. Derived parameters such as summary products and indices can be produced through WCPS queries, as well as derived imagery colour combination products, dynamically generated and accessed also through OGC Web Coverage Service (WCS). Scientific questions translated into queries can be posed to a large number of individual coverages (data products), locally, regionally or globally. The new PlanetServer system uses the Open Source NASA WorldWind (e.g. Hogan, 2011) virtual globe as visualisation engine, and the array database Rasdaman Community Edition as core server component. Analytical tools and client components of relevance for multiple communities and disciplines are shared across services such as the Earth Observation and Marine Data Services of EarthServer. The Planetary Science Data Service of EarthServer is accessible on http://planetserver.eu.
All its code base is going to be available on GitHub, on
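
The WCPS access pattern described in this record can be illustrated with a small client-side sketch. The coverage identifier, band names, and subset below are hypothetical placeholders (the live service defines its own coverage names), and only the general shape of an OGC WCPS ProcessCoverages request against a rasdaman-style endpoint is assumed:

```python
from urllib.parse import urlencode

# Hypothetical endpoint path and names, for illustration only; the live
# PlanetServer service exposes its own coverage identifiers and bands.
ENDPOINT = "http://planetserver.eu/rasdaman/ows"

def wcps_band_ratio(coverage, band_a, band_b, subset):
    """Build a WCPS ProcessCoverages request computing a band ratio
    over a lat/long subset, returned as a PNG."""
    query = ('for c in (' + coverage + ') return encode('
             '(float) c.' + band_a + '[' + subset + '] / '
             'c.' + band_b + '[' + subset + '], "png")')
    return ENDPOINT + "?" + urlencode({
        "service": "WCS", "version": "2.0.1",
        "request": "ProcessCoverages", "query": query,
    })

url = wcps_band_ratio("hypothetical_crism_cube", "band_100", "band_200",
                      "Lat(10:12), Long(50:52)")
```

A query of this form asks the server to combine bands and return only the derived product, so the heavy computation over large hyperspectral cubes stays server-side.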

  10. Multi-stage fuzzy load frequency control using PSO

    International Nuclear Information System (INIS)

    Shayeghi, H.; Jalili, A.; Shayanfar, H.A.

    2008-01-01

    In this paper, a particle swarm optimization (PSO) based multi-stage fuzzy (PSOMSF) controller is proposed for solution of the load frequency control (LFC) problem in a restructured power system that operates under deregulation based on the bilateral policy scheme. In this strategy the control is tuned online from the knowledge base and fuzzy inference, which requires fewer resources and has two rule base sets. In the proposed method, for achieving the desired level of robust performance, exact tuning of the membership functions is very important. Thus, to reduce the design effort and find a better fuzzy system control, the membership functions are designed automatically by the PSO algorithm, which has a strong ability to find optimal results. The motivation for using the PSO technique is to reduce the fuzzy system design effort and take large parametric uncertainties into account. This newly developed control strategy combines the advantages of PSO and fuzzy system control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed PSO based MSF (PSOMSF) controller is tested on a three-area restructured power system under different operating conditions and contract variations. The results of the proposed PSOMSF controller are compared with genetic algorithm based multi-stage fuzzy (GAMSF) control through some performance indices to illustrate its robust performance for a wide range of system parameters and load changes
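
The core of the tuning loop described above is a global-best PSO searching over membership-function parameters. A minimal sketch, in which a quadratic surrogate cost stands in for the simulation-based performance index of the paper, and all values (swarm size, inertia, acceleration coefficients, target breakpoints) are illustrative assumptions:

```python
import random

def pso(cost, dim, n_particles=20, iters=60, lo=0.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimise `cost` over the box [lo, hi]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical stand-in for the performance index: distance of three
# membership-function breakpoints from a target shape.
target = [0.2, 0.5, 0.8]
best, best_cost = pso(lambda x: sum((xi - ti) ** 2
                                    for xi, ti in zip(x, target)), dim=3)
```

In the paper the cost would be computed by simulating the multi-area power system under the candidate fuzzy controller; any reasonably smooth scalar cost can be plugged into the same loop.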

  11. Multi-stage fuzzy load frequency control using PSO

    Energy Technology Data Exchange (ETDEWEB)

    Shayeghi, H. [Technical Engineering Department, University of Mohaghegh Ardabili, Ardabil (Iran); Jalili, A. [Islamic Azad University, Ardabil Branch, Ardabil (Iran); Shayanfar, H.A. [Center of Excellence for Power Automation and Operation, Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran)

    2008-10-15

    In this paper, a particle swarm optimization (PSO) based multi-stage fuzzy (PSOMSF) controller is proposed for solution of the load frequency control (LFC) problem in a restructured power system that operates under deregulation based on the bilateral policy scheme. In this strategy the control is tuned online from the knowledge base and fuzzy inference, which requires fewer resources and has two rule base sets. In the proposed method, for achieving the desired level of robust performance, exact tuning of the membership functions is very important. Thus, to reduce the design effort and find a better fuzzy system control, the membership functions are designed automatically by the PSO algorithm, which has a strong ability to find optimal results. The motivation for using the PSO technique is to reduce the fuzzy system design effort and take large parametric uncertainties into account. This newly developed control strategy combines the advantages of PSO and fuzzy system control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed PSO based MSF (PSOMSF) controller is tested on a three-area restructured power system under different operating conditions and contract variations. The results of the proposed PSOMSF controller are compared with genetic algorithm based multi-stage fuzzy (GAMSF) control through some performance indices to illustrate its robust performance for a wide range of system parameters and load changes. (author)

  12. Multi-Class load balancing scheme for QoS and energy ...

    African Journals Online (AJOL)

    Multi-Class load balancing scheme for QoS and energy conservation in cloud computing.

  13. Water pollution abatement programme. The Czech Republic. Project 4.2. Assessing critical loads of acidity to surface waters in the Czech Republic. Critical loads of acidity to surface waters, north-eastern Bohemia and northern Moravia, The Czech Republic

    Energy Technology Data Exchange (ETDEWEB)

    Lien, L.; Raclavsky, K.; Raclavska, H.; Matysek, D.; Hovind, H.

    1996-01-01

    This report discusses estimates of critical loads of acidity to surface waters and their exceedances for north-eastern Bohemia and Moravia in the Czech Republic. The survey covers 13,400 km², or 17% of the area of the country. Varying critical loads were observed within the examined region. 19% of the examined area showed exceedance of the critical load and another 11% was close to exceedance. The survey should continue in Bohemia. 24 refs., 20 figs., 4 tabs.

  14. Mapping critical loads of nitrogen deposition for aquatic ecosystems in the Rocky Mountains, USA

    International Nuclear Information System (INIS)

    Nanus, Leora; Clow, David W.; Saros, Jasmine E.; Stephens, Verlin C.; Campbell, Donald H.

    2012-01-01

    Spatially explicit estimates of critical loads of nitrogen (N) deposition (CLNdep) for nutrient enrichment in aquatic ecosystems were developed for the Rocky Mountains, USA, using a geostatistical approach. The lowest CLNdep estimates (… kg N ha⁻¹ yr⁻¹) occurred in high-elevation basins with steep slopes, sparse vegetation, and an abundance of exposed bedrock and talus. These areas often correspond with areas of high N deposition (>3 kg N ha⁻¹ yr⁻¹), resulting in CLNdep exceedances ≥1.5 ± 1 kg N ha⁻¹ yr⁻¹. CLNdep and CLNdep exceedances exhibit substantial spatial variability related to basin characteristics and are highly sensitive to the NO₃⁻ threshold at which ecological effects are thought to occur. Based on an NO₃⁻ threshold of 0.5 μmol L⁻¹, N deposition exceeds CLNdep in 21 ± 8% of the study area; thus, broad areas of the Rocky Mountains may be impacted by excess N deposition, with the greatest impacts at high elevations. - Highlights: ► Critical loads maps for nutrient enrichment effects of nitrogen deposition. ► Critical load estimates show spatial variability related to basin characteristics. ► Critical loads are sensitive to the nitrate threshold value for ecological effects. ► Broad areas of the Rocky Mountains may be impacted by excess nitrogen deposition.
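
Per grid cell, the exceedance mapping described in this record reduces to deposition minus critical load, clipped at zero. A toy version with made-up values (units kg N ha⁻¹ yr⁻¹, not the study's data):

```python
# Each cell carries a modelled N deposition ("dep") and a critical load
# estimate ("cl"); exceedance = max(0, dep - cl). Values are illustrative.
cells = [
    {"dep": 3.2, "cl": 1.5},
    {"dep": 2.0, "cl": 3.0},
    {"dep": 4.1, "cl": 2.0},
    {"dep": 1.0, "cl": 2.5},
]

for c in cells:
    c["exceedance"] = max(0.0, c["dep"] - c["cl"])

# Fraction of the mapped area where deposition exceeds the critical load.
pct_exceeded = 100.0 * sum(c["exceedance"] > 0 for c in cells) / len(cells)
```

The study's sensitivity to the NO₃⁻ threshold enters through the "cl" values themselves: a lower ecological threshold yields lower critical loads and hence a larger exceeded fraction.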

  15. Derivation and Mapping of Critical Loads for Nitrogen and Trends in Their Exceedance in Germany

    Directory of Open Access Journals (Sweden)

    Hans-Dieter Nagel

    2001-01-01

    The term “critical load” means a quantitative estimate of an exposure to one or more pollutants below which significant harmful effects on specified sensitive elements of the environment do not occur, according to present knowledge. In the case of nitrogen, both oxidised and reduced compounds contribute to the total deposition of acidity, which exceeds critical loads in many forest ecosystems. These also cause negative effects through eutrophication. Critical loads of nitrogen were derived for forest soils (deciduous and coniferous forest), natural grassland, acid fens, heathland, and mesotrophic peat bogs. In Germany, a decrease in sulphur emissions over the past 15 years resulted in a reduced exceedance of critical loads for acid deposition. In the same period it was noted that reduction in the emissions of nitrogen oxides and ammonia remained insignificant. Therefore, emissions of nitrogen compounds have become relatively more important and will continue to threaten ecosystem function and stability. The risk of environmental damage remains at an unacceptable level. The German maps show the degree to which the critical loads are exceeded, and they present current developments and an expected future trend. Results indicate that recovery from pollutant stress occurs only gradually.

  16. CalFitter: a web server for analysis of protein thermal denaturation data.

    Science.gov (United States)

    Mazurenko, Stanislav; Stourac, Jan; Kunka, Antonin; Nedeljkovic, Sava; Bednar, David; Prokop, Zbynek; Damborsky, Jiri

    2018-05-14

    Despite significant advances in the understanding of protein structure-function relationships, revealing protein folding pathways still poses a challenge due to a limited number of relevant experimental tools. Widely-used experimental techniques, such as calorimetry or spectroscopy, critically depend on a proper data analysis. Currently, there are only separate data analysis tools available for each type of experiment with a limited model selection. To address this problem, we have developed the CalFitter web server to be a unified platform for comprehensive data fitting and analysis of protein thermal denaturation data. The server allows simultaneous global data fitting using any combination of input data types and offers 12 protein unfolding pathway models for selection, including irreversible transitions often missing from other tools. The data fitting produces optimal parameter values, their confidence intervals, and statistical information to define unfolding pathways. The server provides an interactive and easy-to-use interface that allows users to directly analyse input datasets and simulate modelled output based on the model parameters. CalFitter web server is available free at https://loschmidt.chemi.muni.cz/calfitter/.
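
As a rough illustration of the simplest model family CalFitter handles, here is a two-state (folded/unfolded) equilibrium melting curve and a crude grid-search fit of its midpoint. The parameter values are synthetic, and the grid search is only a stand-in for the server's global non-linear fitting, not its actual algorithm:

```python
import math

def unfolded_fraction(T, Tm, dH, R=8.314):
    """Two-state van 't Hoff model: fraction unfolded at temperature T (K),
    given midpoint Tm (K) and unfolding enthalpy dH (J/mol)."""
    K = math.exp(-(dH / R) * (1.0 / T - 1.0 / Tm))  # unfolding equilibrium constant
    return K / (1.0 + K)

# Synthetic "measured" melting curve from made-up parameters.
true_Tm, true_dH = 330.0, 3.0e5
temps = [300 + 2 * i for i in range(31)]
data = [unfolded_fraction(T, true_Tm, true_dH) for T in temps]

# Crude grid search over Tm (dH held fixed), minimising squared error.
def sse(Tm):
    return sum((unfolded_fraction(T, Tm, true_dH) - y) ** 2
               for T, y in zip(temps, data))

fit_Tm = min((320 + 0.5 * k for k in range(41)), key=sse)
```

CalFitter itself fits richer multi-state and irreversible models globally across several datasets at once; this only shows the shape of the simplest reversible transition.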

  17. Mac OS X Lion Server For Dummies

    CERN Document Server

    Rizzo, John

    2011-01-01

    The perfect guide to help administrators set up Apple's Mac OS X Lion Server With the overwhelming popularity of the iPhone and iPad, more Macs are appearing in corporate settings. The newest version of Mac Server is the ideal way to administer a Mac network. This friendly guide explains to both Windows and Mac administrators how to set up and configure the server, including services such as iCal Server, Podcast Producer, Wiki Server, Spotlight Server, iChat Server, File Sharing, Mail Services, and support for iPhone and iPad. It explains how to secure, administer, and troubleshoot the networ

  18. Learning Zimbra Server essentials

    CERN Document Server

    Kouka, Abdelmonam

    2013-01-01

    A standard tutorial approach which will guide the readers on all of the intricacies of the Zimbra Server.If you are any kind of Zimbra user, this book will be useful for you, from newbies to experts who would like to learn how to setup a Zimbra server. If you are an IT administrator or consultant who is exploring the idea of adopting, or have already adopted Zimbra as your mail server, then this book is for you. No prior knowledge of Zimbra is required.

  19. Insensitivity of proportional fairness in critically loaded bandwidth sharing networks

    NARCIS (Netherlands)

    Vlasiou, M.; Zhang, J.; Zwart, B.

    2014-01-01

    Proportional fairness is a popular service allocation mechanism to describe and analyze the performance of data networks at flow level. Recently, several authors have shown that the invariant distribution of such networks admits a product form distribution under critical loading. Assuming

  20. Automated load balancing in the ATLAS high-performance storage software

    CERN Document Server

    Le Goff, Fabrice; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment collects proton-proton collision events delivered by the LHC accelerator at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system selects, transports and eventually records event data from the detector at several gigabytes per second. The data are recorded on transient storage before being delivered to permanent storage. The transient storage consists of high-performance direct-attached storage servers accounting for about 500 hard drives. The transient storage operates dedicated software in the form of a distributed multi-threaded application. The workload includes both CPU-demanding and IO-oriented tasks. This paper presents the original application threading model for this particular workload, discussing the load-sharing strategy among the available CPU cores. The limitations of this strategy were reached in 2016 due to changes in the trigger configuration involving a new data distribution pattern. We then describe a novel data-driven load-sharing strategy, designed to automatical...

  1. Microsoft Windows Server Administration Essentials

    CERN Document Server

    Carpenter, Tom

    2011-01-01

    The core concepts and technologies you need to administer a Windows Server OS Administering a Windows operating system (OS) can be a difficult topic to grasp, particularly if you are new to the field of IT. This full-color resource serves as an approachable introduction to understanding how to install a server, the various roles of a server, and how server performance and maintenance impacts a network. With a special focus placed on the new Microsoft Technology Associate (MTA) certificate, the straightforward, easy-to-understand tone is ideal for anyone new to computer administration looking t

  2. GeoServer: the Open Source geospatial server, what's new in version 2.3.0

    Directory of Open Access Journals (Sweden)

    Simone Giannecchini

    2013-04-01

    GeoServer is an Open Source geospatial server developed with Java Enterprise technology for managing, sharing and editing geospatial data according to the OGC and ISO Technical Committee 211 standards. It provides the basic functionality to create spatial data infrastructures (SDI) and is designed for interoperability, publishing data from any major spatial data source using open standards: it is the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) and Web Coverage Service (WCS) standards, as well as a high-performance certified compliant Web Map Service (WMS). GeoServer forms a core component of the Geospatial Web.

  3. Eutrophic lichens respond to multiple forms of N: implications for critical levels and critical loads research

    Science.gov (United States)

    Sarah Jovan; Jennifer Riddell; Pamela E Padgett; Thomas Nash

    2012-01-01

    Epiphytic lichen communities are highly sensitive to excess nitrogen (N), which causes the replacement of native floras by N-tolerant, “weedy” eutrophic species. This shift is commonly used as the indicator of ecosystem “harm” in studies developing empirical critical levels (CLE) for ammonia (NH3) and critical loads (CLO) for N. To be most...

  4. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR.

    Science.gov (United States)

    van der Schot, Gijs; Bonvin, Alexandre M J J

    2015-08-01

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665-1674, 2005b, doi: 10.1021/ja047109h). We compare the original submissions using a previous version of the server based on Rosetta version 2.6 with recalculated targets using the new R3FP fragment picker for fragment selection and implementing a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27-35, 2013, doi: 10.1007/s10858-013-9762-6), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server has demonstrated a much better performance than in the first round since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality is giving reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore only be used with caution.

  5. Estimates of critical acid loads and exceedances for forest soils across the conterminous United States

    Science.gov (United States)

    Steven G. McNulty; Erika C. Cohen; Jennifer A. Moore Myers; Timothy J. Sullivan; Harbin Li

    2007-01-01

    Concern regarding the impacts of continued nitrogen and sulfur deposition on ecosystem health has prompted the development of critical acid load assessments for forest soils. A critical acid load is a quantitative estimate of exposure to one or more pollutants at or above which harmful acidification-related effects on sensitive elements of the environment occur. A...

  6. Critical loads as a policy tool for protecting ecosystems from the effects of air pollutants

    Science.gov (United States)

    Douglas A. Burns; Tamara Blett; Richard Haeuber; Linda H. Pardo

    2008-01-01

    Framing the effects of air pollutants on ecosystems in terms of a "critical load" provides a meaningful approach for research scientists to communicate policy-relevant science to air-quality policy makers and natural resource managers. A critical-loads approach has been widely used to shape air-pollutant control policy in Europe since the 1980s, yet has only...

  7. Effects of nitrogen deposition and empirical nitrogen critical loads for ecoregions of the United States

    Science.gov (United States)

    Pardo, L.H.; Fenn, M.E.; Goodale, C.L.; Geiser, L.H.; Driscoll, C.T.; Allen, E.B.; Baron, Jill S.; Bobbink, R.; Bowman, W.D.; Clark, C.M.; Emmett, B.; Gilliam, F.S.; Greaver, T.L.; Hall, S.J.; Lilleskov, E.A.; Liu, L.; Lynch, J.A.; Nadelhoffer, K.J.; Perakis, S.S.; Robin-Abbott, M. J.; Stoddard, J.L.; Weathers, K.C.; Dennis, R.L.

    2011-01-01

    Human activity in the last century has led to a significant increase in nitrogen (N) emissions and atmospheric deposition. This N deposition has reached a level that has caused or is likely to cause alterations to the structure and function of many ecosystems across the United States. One approach for quantifying the deposition of pollution that would be harmful to ecosystems is the determination of critical loads. A critical load is defined as the input of a pollutant below which no detrimental ecological effects occur over the long term according to present knowledge. The objectives of this project were to synthesize current research relating atmospheric N deposition to effects on terrestrial and freshwater ecosystems in the United States, and to estimate associated empirical N critical loads. The receptors considered included freshwater diatoms, mycorrhizal fungi, lichens, bryophytes, herbaceous plants, shrubs, and trees. Ecosystem impacts included: (1) biogeochemical responses and (2) individual species, population, and community responses. Biogeochemical responses included increased N mineralization and nitrification (and N availability for plant and microbial uptake), increased gaseous N losses (ammonia volatilization, nitric and nitrous oxide from nitrification and denitrification), and increased N leaching. Individual species, population, and community responses included increased tissue N, physiological and nutrient imbalances, increased growth, altered root:shoot ratios, increased susceptibility to secondary stresses, altered fire regime, shifts in competitive interactions and community composition, changes in species richness and other measures of biodiversity, and increases in invasive species. The range of critical loads for nutrient N reported for U.S. ecoregions, inland surface waters, and freshwater wetlands is 1–39 kg N ha⁻¹ yr⁻¹, spanning the range of N deposition observed over most of the country. The empirical critical loads for N tend to

  8. Multi-load Optimal Design of Burner-inner-liner Under Performance Index Constraint by Second-Order Polynomial Taylor Series Method

    Directory of Open Access Journals (Sweden)

    Tu Gaoqiao

    2016-01-01

    Using the maximum expansion pressure of n-decane, the combustion pressure load on an aeroengine burner inner liner is computed. Aerodynamic loads are obtained from the internal gas pressure load and the gas momentum. Multi-load second-order Taylor series equations are established using multi-variate polynomials and their sensitivities. Optimal designs are carried out under various performance index constraints. When rectifications of 0.25 to 0.8 are implemented for different design variants, the designs converge with a d-norm difference ratio under 5×10⁻⁴.
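
A second-order Taylor (quadratic) surrogate of the kind this abstract describes can be sketched as follows. The gradient and Hessian entries are invented for illustration; in practice they would come from the sensitivity analysis of the liner model mentioned above:

```python
# Quadratic surrogate of a response (e.g., a stress performance index)
# around a design point x0: f(x) ≈ f0 + g·dx + 0.5 dx·H·dx.
def taylor2(f0, grad, hess, x0, x):
    dx = [xi - x0i for xi, x0i in zip(x, x0)]
    lin = sum(g * d for g, d in zip(grad, dx))
    quad = 0.5 * sum(hess[i][j] * dx[i] * dx[j]
                     for i in range(len(dx)) for j in range(len(dx)))
    return f0 + lin + quad

# Two design variables; surrogate built around x0 with assumed sensitivities.
f0 = 100.0                       # response at the design point
grad = [4.0, -2.0]               # first-order sensitivities
hess = [[2.0, 0.0], [0.0, 6.0]]  # second-order sensitivities
x0 = [1.0, 1.0]
approx = taylor2(f0, grad, hess, x0, [1.5, 0.5])
```

An optimizer can then iterate on the cheap surrogate instead of the full model, re-expanding around each new design point until successive designs agree within a norm tolerance, as in the convergence criterion quoted above.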

  9. Essential Mac OS X panther server administration integrating Mac OS X server into heterogeneous networks

    CERN Document Server

    Bartosh, Michael

    2004-01-01

    If you've ever wondered how to safely manipulate Mac OS X Panther Server's many underlying configuration files or needed to explain AFP permission mapping, this book's for you. From the command line to Apple's graphical tools, the book provides insight into this powerful server software. Topics covered include installation, deployment, server management, web application services, data gathering, and more

  10. Integrated load distribution and production planning in series-parallel multi-state systems with failure rate depending on load

    International Nuclear Information System (INIS)

    Nourelfath, Mustapha; Yalaoui, Farouk

    2012-01-01

    A production system containing a set of machines (also called components) arranged according to a series-parallel configuration is addressed. A set of products must be produced in lots on this production system during a specified finite planning horizon. This paper presents a method for integrating load distribution decisions, and tactical production planning considering the costs of capacity change and the costs of unused capacity. The objective is to minimize the sum of capacity change costs, unused capacity costs, setup costs, holding costs, backorder costs, and production costs. The main constraints consist in satisfying the demand for all products over the entire horizon, and in not exceeding available repair resource. The production series-parallel system is modeled as a multi-state system with binary-state components. The proposed model takes into account the dependence of machines' failure rates on their load. Universal generating function technique can be used in the optimization algorithm for evaluating the expected system production rate in each period. We show how the formulated problem can be solved by comparing the results of several multi-product lot-sizing problems with capacity associated costs. The importance of integrating load distribution decisions and production planning is illustrated through numerical examples.
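
The universal generating function technique mentioned above composes per-machine capacity distributions with structure operators: capacities add across parallel machines and take the minimum across series stages. A minimal sketch with invented availabilities and capacities:

```python
from collections import defaultdict
from itertools import product

def machine(cap, avail):
    """u-function of a binary-state machine: {capacity: probability}."""
    return {float(cap): avail, 0.0: 1.0 - avail}

def combine(u1, u2, op):
    """Compose two u-functions with a structure operator
    (sum of capacities for parallel blocks, min for series)."""
    out = defaultdict(float)
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        out[op(g1, g2)] += p1 * p2
    return dict(out)

# Invented example: two parallel machines feeding one machine in series.
stage1 = combine(machine(5, 0.9), machine(5, 0.8), lambda a, b: a + b)
system = combine(stage1, machine(8, 0.95), min)

# Expected production rate of the whole series-parallel system.
expected_rate = sum(g * p for g, p in system.items())
```

In the paper the availabilities would additionally depend on each machine's assigned load through its failure rate, and the resulting expected rate per period feeds the lot-sizing optimization.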

  11. Evaluation of cognitive load and emotional states during multidisciplinary critical care simulation sessions.

    Science.gov (United States)

    Pawar, Swapnil; Jacques, Theresa; Deshpande, Kush; Pusapati, Raju; Meguerdichian, Michael J

    2018-04-01

    Simulation in the critical care setting involves a heterogeneous group of participants with varied backgrounds and experience. Measuring the impact of simulation on emotional state and cognitive load in this setting is not often performed. The feasibility of such measurement in the critical care setting needs further exploration. Medical and nursing staff with varying levels of experience from a tertiary intensive care unit participated in a standardised clinical simulation scenario. The emotional state of each participant was assessed before and after completion of the scenario using a validated eight-item scale containing bipolar oppositional descriptors of emotion. The cognitive load of each participant was assessed after the completion of the scenario using a validated subjective rating tool. A total of 103 medical and nursing staff participated in the study. The participants felt more relaxed (-0.28 ± 1.15 vs 0.14 ± 1, P …). The cognitive load for all participants was 6.67 ± 1.41. There was no significant difference in the cognitive loads of medical staff versus nursing staff (6.61 ± 2.3 vs 6.62 ± 1.7; P>0.05). A well-designed, complex, high-fidelity critical care simulation scenario can be evaluated to identify the relative cognitive load of the participants' experience and their emotional state. The movement of learners emotionally from a more negative state to a positive state suggests that simulation can be an effective tool for improved knowledge transfer and offers more opportunity for dynamic thinking.

  12. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    Science.gov (United States)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

    This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, to draw general conclusions additional test data are required.

  13. Server for experimental data from LHD

    International Nuclear Information System (INIS)

    Emoto, M.; Ohdachi, S.; Watanabe, K.; Sudo, S.; Nagayama, Y.

    2006-01-01

    In order to unify various types of data, the Kaiseki Server was developed. This server provides physical experimental data from Large Helical Device (LHD) experiments. Many data acquisition systems are currently in operation, and they produce files of various formats. It has therefore been difficult to analyze different types of acquired data at the same time, because the data of each system must be read in a particular manner. To facilitate the usage of this data by researchers, the authors have developed a new server system, which provides a unified data format and a unique data retrieval interface. Although the Kaiseki Server satisfied the initial demand, new requests arose from researchers, one of which was remote usage of the server. The current system cannot be used remotely because of security issues. Another request was group ownership, i.e., users belonging to the same group should have equal access to data. To satisfy these demands, the authors modified the server. However, since other requests may arise in the future, the new system must be flexible so that it can satisfy future demands. Therefore, the authors decided to develop a new server using a three-tier structure

  14. Steady-state critical loads of acidity for forest soils in the Georgia Basin, British Columbia

    Directory of Open Access Journals (Sweden)

    Shaun A. WATMOUGH

    2010-08-01

    There has been growing interest in acid rain research in western Canada, where sulphur (S) and nitrogen (N) emissions are expected to increase during the next two decades. One region of concern is southern British Columbia, specifically the Georgia Basin, where emissions are expected to increase owing to the expansion of industry and urban centres (Vancouver and Victoria). In the current study, weathering rates and critical loads of acidity (S and N) for forest soils were estimated at nineteen sites located within the Georgia Basin. A base cation to aluminium ratio of 10 was selected as the critical chemical criterion associated with ecosystem damage. The majority of the sites (58%) had low base cation weathering rates (≤50 meq m⁻² y⁻¹) based on the PROFILE model. Accordingly, the mean critical load for the study sites, estimated using the steady-state mass balance model, ranged between 129–168 meq m⁻² y⁻¹. Annual average total (wet and dry) S and N deposition during the period 2005–2006 (estimated by the Community Multiscale Air Quality model) exceeded the critical load at five to nine of the study sites (mean exceedance = 32–46 meq m⁻² y⁻¹). The high-elevation (>1000 m) study sites had shallow, acid-sensitive soils with low weathering rates; however, critical loads were predominantly exceeded at sites close to Vancouver under higher modelled deposition loads. The extent of exceedance is similar to other industrial regions in western and eastern Canada.
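
The steady-state mass balance calculation referred to above can be sketched in simplified form. This is one common textbook form of the SSMB (omitting terms such as chloride deposition), and all numbers are illustrative, not the study's site data:

```python
# Simplified steady-state mass balance for the critical load of acidity;
# all terms in meq m^-2 yr^-1. One common form is
#   CL(A) = BC_dep + BC_w - Bc_u - ANC_le,crit
# where ANC_le,crit (the critical alkalinity leaching) is typically negative.
def critical_load_acidity(bc_dep, bc_weathering, bc_uptake, anc_leach_crit):
    return bc_dep + bc_weathering - bc_uptake - anc_leach_crit

def exceedance(s_dep, n_dep, cl):
    """Exceedance of the acidity critical load by total S + N deposition."""
    return max(0.0, s_dep + n_dep - cl)

# Illustrative site: deposition and soil terms are invented values.
cl = critical_load_acidity(bc_dep=40.0, bc_weathering=50.0,
                           bc_uptake=10.0, anc_leach_crit=-60.0)
ex = exceedance(s_dep=120.0, n_dep=40.0, cl=cl)
```

The base cation weathering term (here BC_w) is the quantity the PROFILE model supplies, which is why low weathering rates translate directly into low critical loads at the acid-sensitive high-elevation sites.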

  15. Windows Server 2012 R2 administrator cookbook

    CERN Document Server

    Krause, Jordan

    2015-01-01

    This book is intended for system administrators and IT professionals with experience in Windows Server 2008 or Windows Server 2012 environments who are looking to acquire the skills and knowledge necessary to manage and maintain the core infrastructure required for a Windows Server 2012 and Windows Server 2012 R2 environment.

  16. A conceptual framework: Redefining forest soil's critical acid loads under a changing climate

    International Nuclear Information System (INIS)

    McNulty, Steven G.; Boggs, Johnny L.

    2010-01-01

    Federal agencies of several nations have developed, or are currently developing, guidelines for critical forest soil acid loads. These guidelines are used to establish regulations designed to maintain atmospheric acid inputs below levels shown to damage forests and streams. Traditionally, when atmospheric acid inputs exceed the critical soil acid load (the amount of acid the ecosystem can absorb), forest health is believed to be potentially impaired. The excess over the critical soil acid load is termed the exceedance, and the larger the exceedance, the greater the risk of ecosystem damage. This definition of critical soil acid load applies to exposure of the soil to a single, long-term pollutant (i.e., acidic deposition). However, ecosystems can be simultaneously under multiple ecosystem stresses, and a single critical soil acid load level may not accurately reflect ecosystem health risk when subjected to multiple, episodic environmental stress. For example, the Appalachian Mountains of western North Carolina receive some of the highest rates of acidic deposition in the eastern United States, but these levels are considered to be below the critical acid load (CAL) that would cause forest damage. However, the area experienced a moderate three-year drought from 1999 to 2002, and in 2001 red spruce (Picea rubens Sarg.) trees in the area began to die in large numbers. The initial survey indicated that the affected trees were killed by the southern pine beetle (Dendroctonus frontalis Zimm.). This insect is not normally successful at colonizing these tree species because the trees produce large amounts of oleoresin that exclude the boring beetles. Subsequent investigations revealed that long-term acid deposition may have altered red spruce forest structure and function. There is some evidence that elevated acid deposition (particularly nitrogen) reduced tree water uptake potential and oleoresin production, and caused the trees to become more susceptible to insect colonization during the drought period.

  17. NRSAS: Nuclear Receptor Structure Analysis Servers.

    NARCIS (Netherlands)

    Bettler, E.J.M.; Krause, R.; Horn, F.; Vriend, G.

    2003-01-01

    We present a coherent series of servers that can perform a large number of structure analyses on nuclear hormone receptors. These servers are part of the NucleaRDB project, which provides a powerful information system for nuclear hormone receptors. The computations performed by the servers include

  18. I/O load balancing for big data HPC applications

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Arnab K. [Virginia Polytechnic Institute and State University; Goyal, Arpit [Virginia Polytechnic Institute and State University; Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Butt, Ali R. [Virginia Tech, Blacksburg, VA; Brim, Michael J. [ORNL; Srinivasa, Sangeetha B. [Virginia Polytechnic Institute and State University

    2018-01-01

    High Performance Computing (HPC) big data problems require efficient distributed storage systems. However, at scale, such storage systems often experience load imbalance and resource contention due to two factors: the bursty nature of scientific application I/O; and the complex I/O path that is without centralized arbitration and control. For example, the extant Lustre parallel file system, which supports many HPC centers, comprises numerous components connected via custom network topologies, and serves varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, which creates bottlenecks and reduces overall application I/O performance. Existing solutions typically focus on per-application load balancing, and thus are not as effective given their lack of a global view of the system. In this paper, we propose a data-driven approach to load balance the I/O servers at scale, targeted at Lustre deployments. To this end, we design a global mapper on the Lustre Metadata Server, which gathers runtime statistics from key storage components on the I/O path, and applies Markov chain modeling and a minimum-cost maximum-flow algorithm to decide where data should be placed. Evaluation using a realistic system simulator and a real setup shows that our approach yields better load balancing, which in turn can improve end-to-end performance.
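The placement step described in this abstract can be illustrated with a small, self-contained sketch: data objects and storage servers form a flow network, and a minimum-cost maximum-flow computation assigns objects to targets while steering flow away from heavily loaded servers. The graph layout, capacities, and load-derived edge costs below are illustrative assumptions, not the authors' implementation.

```python
# Min-cost max-flow via successive shortest paths (Bellman-Ford on the
# residual graph, one unit of flow per augmentation).
class MinCostFlow:
    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]  # edge: [to, cap, cost, rev_index]

    def add_edge(self, u, v, cap, cost):
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])

    def flow(self, s, t):
        total_cost = 0
        while True:
            dist = [float("inf")] * self.n
            in_edge = [None] * self.n
            dist[s] = 0
            updated = True
            while updated:  # Bellman-Ford: residual edges may carry negative cost
                updated = False
                for u in range(self.n):
                    if dist[u] == float("inf"):
                        continue
                    for i, (v, cap, cost, _) in enumerate(self.graph[u]):
                        if cap > 0 and dist[u] + cost < dist[v]:
                            dist[v] = dist[u] + cost
                            in_edge[v] = (u, i)
                            updated = True
            if dist[t] == float("inf"):
                return total_cost  # no augmenting path left
            v = t  # push one unit of flow along the cheapest path
            while v != s:
                u, i = in_edge[v]
                self.graph[u][i][1] -= 1
                self.graph[v][self.graph[u][i][3]][1] += 1
                v = u
            total_cost += dist[t]

# demo: 3 data objects, 2 storage servers (capacity 2 each, current loads 5 and 1)
SRC, OBJ0, SRV0, SINK = 0, 1, 4, 6
mcf = MinCostFlow(7)
loads, caps = [5, 1], [2, 2]
for o in range(3):
    mcf.add_edge(SRC, OBJ0 + o, 1, 0)
    for s in range(2):
        mcf.add_edge(OBJ0 + o, SRV0 + s, 1, loads[s])  # cost grows with load
for s in range(2):
    mcf.add_edge(SRV0 + s, SINK, caps[s], 0)
placement_cost = mcf.flow(SRC, SINK)  # two objects land on the lightly loaded server
```

Capacity limits on the server-to-sink edges keep any single server from absorbing all placements even when it is the cheapest.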

  19. Immune networks: multi-tasking capabilities at medium load

    Science.gov (United States)

    Agliari, E.; Annibale, A.; Barra, A.; Coolen, A. C. C.; Tantari, D.

    2013-08-01

    Associative network models featuring multi-tasking properties have been introduced recently and studied in the low-load regime, where the number P of simultaneously retrievable patterns scales with the number N of nodes as P ˜ log N. In addition to their relevance in artificial intelligence, these models are increasingly important in immunology, where stored patterns represent strategies to fight pathogens and nodes represent lymphocyte clones. They allow us to understand the crucial ability of the immune system to respond simultaneously to multiple distinct antigen invasions. Here we develop further the statistical mechanical analysis of such systems, by studying the medium-load regime, P ˜ N^δ with δ ∈ (0, 1]. We derive three main results. First, we reveal the nontrivial architecture of these networks: they exhibit a high degree of modularity and clustering, which is linked to their retrieval abilities. Second, by solving the model we demonstrate for δ < 1 the existence of large regions in the phase diagram where the network can retrieve all stored patterns simultaneously. Finally, in the high-load regime δ = 1 we find that the system behaves as a spin glass, suggesting that finite-connectivity frameworks are required to achieve effective retrieval.

  20. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR

    Energy Technology Data Exchange (ETDEWEB)

    Schot, Gijs van der [Uppsala University, Laboratory of Molecular Biophysics, Department of Cell and Molecular Biology (Sweden); Bonvin, Alexandre M. J. J., E-mail: a.m.j.j.bonvin@uu.nl [Utrecht University, Faculty of Science – Chemistry, Bijvoet Center for Biomolecular Research (Netherlands)

    2015-08-15

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665–1674, 2005b, doi: 10.1021/ja047109h 10.1021/ja047109h ). We compare the original submissions using a previous version of the server based on Rosetta version 2.6 with recalculated targets using the new R3FP fragment picker for fragment selection and implementing a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27–35, 2013, doi: 10.1007/s10858-013-9762-6 10.1007/s10858-013-9762-6 ), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server has demonstrated a much better performance than in the first round since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality is giving reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore only be used with caution.

  1. Performance of the WeNMR CS-Rosetta3 web server in CASD-NMR

    International Nuclear Information System (INIS)

    Schot, Gijs van der; Bonvin, Alexandre M. J. J.

    2015-01-01

    We present here the performance of the WeNMR CS-Rosetta3 web server in CASD-NMR, the critical assessment of automated structure determination by NMR. The CS-Rosetta server uses only chemical shifts for structure prediction, in combination, when available, with a post-scoring procedure based on unassigned NOE lists (Huang et al. in J Am Chem Soc 127:1665–1674, 2005b, doi: 10.1021/ja047109h 10.1021/ja047109h ). We compare the original submissions using a previous version of the server based on Rosetta version 2.6 with recalculated targets using the new R3FP fragment picker for fragment selection and implementing a new annotation of prediction reliability (van der Schot et al. in J Biomol NMR 57:27–35, 2013, doi: 10.1007/s10858-013-9762-6 10.1007/s10858-013-9762-6 ), both implemented in the CS-Rosetta3 WeNMR server. In this second round of CASD-NMR, the WeNMR CS-Rosetta server has demonstrated a much better performance than in the first round since only converged targets were submitted. Further, recalculation of all CASD-NMR targets using the new version of the server demonstrates that our new annotation of prediction quality is giving reliable results. Predictions annotated as weak are often found to provide useful models, but only for a fraction of the sequence, and should therefore only be used with caution

  2. Determination of the critical plane and durability estimation for a multiaxial cyclic loading

    Science.gov (United States)

    Burago, N. G.; Nikitin, A. D.; Nikitin, I. S.; Yakushev, V. L.

    2018-03-01

    An analytical procedure is proposed to determine the critical plane orientation according to the Findley criterion for multiaxial cyclic loading. The cases of in-phase and anti-phase cyclic loading are considered. Calculations of the stress state are carried out for the gas turbine engine compressor disk and blade system over flight loading cycles. The formulas obtained are used to estimate the fatigue durability of this essential structural element.
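For the simplest in-phase case, a Findley-type critical-plane search can be sketched directly: for each candidate plane, compute the shear stress amplitude plus an influence factor k times the peak normal stress, and take the orientation that maximizes this parameter. The stress amplitude, the value of k, and the brute-force angular scan below are illustrative assumptions, not the paper's analytical formulas.

```python
import math

def findley_parameter(theta, sigma_a, k):
    """Findley parameter tau_a + k * sigma_n_max on a plane whose normal
    makes angle theta (radians) with the axis of uniaxial cyclic loading."""
    tau_a = sigma_a * abs(math.sin(theta) * math.cos(theta))  # shear amplitude
    sigma_n_max = sigma_a * math.cos(theta) ** 2              # peak normal stress
    return tau_a + k * sigma_n_max

def critical_plane(sigma_a=100.0, k=0.3, steps=1800):
    # brute-force scan of plane orientations over [0, 90] degrees
    best = max(
        range(steps + 1),
        key=lambda i: findley_parameter(math.radians(90.0 * i / steps), sigma_a, k),
    )
    return 90.0 * best / steps

angle = critical_plane()  # k > 0 shifts the plane below the 45-degree shear plane
```

Setting the derivative to zero gives tan 2θ = 1/k analytically, so for k = 0.3 the scan should land near θ ≈ 36.65°, below the pure-shear 45° plane.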

  3. Multi-objective Extremum Seeking Control for Enhancement of Wind Turbine Power Capture with Load Reduction

    Science.gov (United States)

    Xiao, Yan; Li, Yaoyu; Rotea, Mario A.

    2016-09-01

    The primary control objective below rated wind speed (Region 2) is to maximize the turbine's energy capture. Due to uncertainty and variability in turbine characteristics and the lack of inexpensive but precise wind measurements, model-free control strategies that do not use wind measurements, such as Extremum Seeking Control (ESC), have received significant attention. Based on a dither-demodulation scheme, ESC can maximize wind power capture in real time despite uncertainty, variability, and the lack of accurate wind measurements. The existing work on ESC-based wind turbine control focuses on power capture only. In this paper, a multi-objective extremum seeking control strategy is proposed to achieve nearly optimal wind energy capture while decreasing structural fatigue loads. The performance index of the ESC combines the rotor power with penalty terms on the standard deviations of selected fatigue load variables. Simulation studies of the proposed multi-objective ESC demonstrate that the damage-equivalent loads of tower and/or blade loads can be reduced with a slight compromise in energy capture.
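The performance index described above, rotor power minus weighted standard deviations of fatigue-load channels, can be sketched in a few lines. The signal samples, channel names, and weights are made up for illustration; they are not the paper's tuning.

```python
import statistics

def esc_performance_index(power, load_channels, weights):
    """J = mean(power) - sum_i w_i * std(load_i); larger J is better."""
    penalty = sum(w * statistics.pstdev(ch) for ch, w in zip(load_channels, weights))
    return statistics.mean(power) - penalty

power = [1000.0, 1020.0, 980.0, 1005.0]  # rotor power samples (kW), invented
tower = [5.0, 9.0, 1.0, 5.0]             # tower-base bending proxy, invented
blade = [3.0, 3.5, 2.5, 3.0]             # blade-root bending proxy, invented
J = esc_performance_index(power, [tower, blade], [10.0, 20.0])
```

An ESC loop would dither its control inputs and climb the gradient of J, so larger load penalties trade some power capture for smoother load channels.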

  4. Research on a Method of Geographical Information Service Load Balancing

    Science.gov (United States)

    Li, Heyuan; Li, Yongxing; Xue, Zhiyong; Feng, Tao

    2018-05-01

    With the development of geographical information service technologies, how to achieve intelligent scheduling and high-concurrency access to geographical information service resources based on load balancing is a focus of current study. This paper presents a dynamic load balancing algorithm. In the algorithm, types of geographical information service are matched with the corresponding server group; then the RED algorithm is combined with a double-threshold method to judge the load state of each server node; finally, the service is scheduled on a weighted probabilistic basis over a given period. An experimental system built on a server cluster demonstrates the effectiveness of the method presented in this paper.
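The dispatch idea sketched in this abstract — classify each node's load with two thresholds in RED style, exclude overloaded nodes, and pick among the rest with probability proportional to a weight — can be illustrated as follows. The threshold values, weight scaling, and node data are assumptions for the sketch, not the paper's parameters.

```python
import random

LOW, HIGH = 0.6, 0.85  # assumed double thresholds on load fraction

def eligible_weights(nodes):
    """nodes: list of (name, load_fraction, capacity_weight)."""
    weights = {}
    for name, load, weight in nodes:
        if load >= HIGH:          # overloaded: never scheduled
            continue
        if load >= LOW:           # warning zone: weight scaled down (RED-like)
            weight *= (HIGH - load) / (HIGH - LOW)
        weights[name] = weight
    return weights

def pick_node(nodes, rng=random):
    """Weighted-probabilistic selection among non-overloaded nodes."""
    weights = eligible_weights(nodes)
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names])[0]

cluster = [("gis-1", 0.30, 4.0), ("gis-2", 0.70, 4.0), ("gis-3", 0.90, 4.0)]
```

Here `gis-3` is excluded outright, while `gis-2` stays schedulable but with its weight reduced in proportion to how deep it sits in the warning zone.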

  5. Immune networks: multi-tasking capabilities at medium load

    International Nuclear Information System (INIS)

    Agliari, E; Annibale, A; Barra, A; Coolen, A C C; Tantari, D

    2013-01-01

    Associative network models featuring multi-tasking properties have been introduced recently and studied in the low-load regime, where the number P of simultaneously retrievable patterns scales with the number N of nodes as P ∼ log N. In addition to their relevance in artificial intelligence, these models are increasingly important in immunology, where stored patterns represent strategies to fight pathogens and nodes represent lymphocyte clones. They allow us to understand the crucial ability of the immune system to respond simultaneously to multiple distinct antigen invasions. Here we develop further the statistical mechanical analysis of such systems, by studying the medium-load regime, P ∼ N^δ with δ ∈ (0, 1]. We derive three main results. First, we reveal the nontrivial architecture of these networks: they exhibit a high degree of modularity and clustering, which is linked to their retrieval abilities. Second, by solving the model we demonstrate for δ < 1 the existence of large regions in the phase diagram where the network can retrieve all stored patterns simultaneously. Finally, in the high-load regime δ = 1 we find that the system behaves as a spin-glass, suggesting that finite-connectivity frameworks are required to achieve effective retrieval.

  6. Mastering Windows Server 2008 Networking Foundations

    CERN Document Server

    Minasi, Mark; Mueller, John Paul

    2011-01-01

    Find in-depth coverage of general networking concepts and basic instruction on Windows Server 2008 installation and management including active directory, DNS, Windows storage, and TCP/IP and IPv4 networking basics in Mastering Windows Server 2008 Networking Foundations. One of three new books by best-selling author Mark Minasi, this guide explains what servers do, how basic networking works (IP basics and DNS/WINS basics), and the fundamentals of the under-the-hood technologies that support staff must understand. Learn how to install Windows Server 2008 and build a simple network, security co

  7. National Medical Terminology Server in Korea

    Science.gov (United States)

    Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee

    Interoperable EHR (Electronic Health Record) systems necessitate at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local primary through tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.

  8. Test Program for the Performance Analysis of DNS64 Servers

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2015-09-01

    Full Text Available In our earlier research papers, bash shell scripts using the Linux host command were applied for testing the performance and stability of different DNS64 server implementations. Because of their inefficiency, a small multi-threaded C/C++ program (named dns64perf) was written which can directly send DNS AAAA record queries. After an introduction to the essential theoretical background about the structure of DNS messages and TCP/IP socket interface programming, the design decisions and implementation details of our DNS64 performance test program are disclosed. The efficiency of dns64perf is compared to that of the old method using bash shell scripts. The result is convincing: dns64perf can send at least 95 times more DNS AAAA record queries per second. The source code of dns64perf is published under the GNU GPLv3 license to support the work of other researchers in the field of testing the performance of DNS64 servers.
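The wire format such a tester has to emit is small: a 12-byte DNS header followed by the length-prefixed QNAME labels, QTYPE=28 (AAAA) and QCLASS=1 (IN), per RFC 1035. The standalone builder below illustrates that format; it is a sketch, not the C/C++ source of dns64perf, and the transaction ID is an arbitrary example value.

```python
import struct

def build_aaaa_query(hostname, txid=0x1234):
    """Build a DNS AAAA query message in RFC 1035 wire format."""
    # header: ID, flags (only RD set), QDCOUNT=1, ANCOUNT=NSCOUNT=ARCOUNT=0
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.rstrip(".").split(".")
    ) + b"\x00"
    # question section: QTYPE=28 (AAAA), QCLASS=1 (IN)
    return header + qname + struct.pack("!HH", 28, 1)

query = build_aaaa_query("example.com")
```

Sending the resulting bytes over a UDP socket to port 53 of the DNS64 server under test is all a minimal load generator then needs.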

  9. Critical current degradation in superconducting niobium-titanium alloys in external magnetic fields under loading

    International Nuclear Information System (INIS)

    Bojko, V.S.; Lazareva, M.B.; Starodubov, Ya.D.; Chernyj, O.V.; Gorbatenko, V.M.

    1992-01-01

    The effect of external magnetic fields on the stress at which the critical current starts to degrade (the degradation threshold σ_0^e) under mechanical loads in superconducting Nb-Ti alloys is studied, and a possible mechanism for the observed effect is proposed. It is assumed that additional stresses on the transformation dislocations from the external magnetic fields favour the growth of martensite inclusions whose superconducting parameters (critical current density j_k and critical temperature T_k) are lower than those of the initial material. The degradation threshold is studied experimentally in external magnetic fields H up to 7 T. A linear dependence σ_0^e(H) is observed. It is shown that external magnetic fields play an important role in critical current degradation at the early stages of deformation. This fact supports the assumption that the degradation of superconducting parameters under loading is due to the phenomenon of superelasticity, i.e., a reversible load-induced change in the size of the martensite inclusions, rather than to reversible mechanical twinning. The results obtained are considered important for estimating superconducting solenoid stability over a wide range of magnetic fields.

  10. The scratch test - Different critical load determination techniques. [adhesive strength of thin hard coatings

    Science.gov (United States)

    Sekler, J.; Hintermann, H. E.; Steinmann, P. A.

    1988-01-01

    Different critical load determination techniques such as microscopy, acoustic emission, normal, tangential, and lateral forces used for scratch test evaluation of complex or multilayer coatings are investigated. The applicability of the scratch test to newly developed coating techniques, systems, and applications is discussed. Among the methods based on the use of a physical measurement, acoustic emission detection is the most effective. The dynamics ratio between the signals below and above the critical load for the acoustic emission (much greater than 100) is well above that obtained with the normal, tangential, and lateral forces. The present commercial instruments are limited in load application performance. A scratch tester able to apply accurate loads as low as 0.01 N would probably overcome most of the actual limitations and would be expected to extend the scratch testing technique to different application fields such as optics and microelectronics.

  11. Approximations for Markovian multi-class queues with preemptive priorities

    NARCIS (Netherlands)

    van der Heijden, Matthijs C.; van Harten, Aart; Sleptchenko, Andrei

    2004-01-01

    We discuss the approximation of performance measures in multi-class M/M/k queues with preemptive priorities for large problem instances (many classes and servers) using class aggregation and server reduction. We compared our approximations to exact and simulation results and found that our approach
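A baseline performance measure in the M/M/k setting discussed above is the probability that an arriving job finds all servers busy, given by the classical Erlang C formula. The sketch below computes it for example arrival and service rates; the numbers are illustrative, not instances from the paper, and the single-class formula ignores the preemptive-priority structure the paper approximates.

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """Probability of waiting in an M/M/k queue (Erlang C)."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # per-server utilisation
    assert rho < 1, "queue is unstable"
    top = a ** servers / (math.factorial(servers) * (1 - rho))
    bottom = sum(a ** n / math.factorial(n) for n in range(servers)) + top
    return top / bottom

p_wait = erlang_c(arrival_rate=8.0, service_rate=1.0, servers=10)
```

At 80% utilisation with ten servers, roughly four arrivals in ten must queue, which is the kind of quantity class aggregation and server reduction aim to approximate cheaply for large k.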

  12. A Comparative Analysis of a Colocation Server versus Amazon Web Services (Cloud) for the Usability of the swa.co.id Portal at PT. Swa Media Bisnis

    Directory of Open Access Journals (Sweden)

    Lipur Sugiyanta

    2017-06-01

    Full Text Available To support the usability of its web portal, SWA Media Online used the Colocation Server web-hosting service from Wowrack, whose physical servers (data centre) are located in Surabaya, Indonesia. Over time, the Colocation Server was felt to increasingly hamper the company's growth, as evidenced by slowing access to the swa.co.id web portal. For this reason, in May and June 2015 SWA Media Online decided to move from the Colocation Server to newer cloud technology. At the end of June 2015, SWA Media Online officially migrated from colocation to Amazon Web Services, whose physical servers are located in Singapore (for ASEAN customers). The features used are much the same as under colocation, i.e., those matching the company's needs; however, Amazon Web Services provides additional services or features in the form of a load balancer, auto scaling, and buckets (storage media). The methodology applied in this research is qualitative analysis. Based on the research results, the additional features provided by Amazon Web Services were found to improve the portal's usability in terms of ease and speed of portal access. Web portal access speed improved compared with the Colocation Server.

  13. Performance enhancement of a web-based picture archiving and communication system using commercial off-the-shelf server clusters.

    Science.gov (United States)

    Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.
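The reported gains are simple relative increases of the average transmission rate (ATR) between the noncluster and cluster configurations. The helper below reproduces the percentage form used in the abstract; the rate values are placeholders, not the measured data.

```python
def atr_gain_percent(noncluster_atr, cluster_atr):
    """Relative ATR improvement, in percent, of the cluster over the noncluster server."""
    return (cluster_atr - noncluster_atr) / noncluster_atr * 100.0

# e.g. a hypothetical MR download rate roughly doubling corresponds to the ~100.9% gain
gain = atr_gain_percent(10.0, 20.09)
```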

  14. Performance Enhancement of a Web-Based Picture Archiving and Communication System Using Commercial Off-the-Shelf Server Clusters

    Directory of Open Access Journals (Sweden)

    Yan-Lin Liu

    2014-01-01

    Full Text Available The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.

  15. CheD: chemical database compilation tool, Internet server, and client for SQL servers.

    Science.gov (United States)

    Trepalin, S V; Yarkov, A V

    2001-01-01

    An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information is presented. The program can work either as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.

  16. NExT server

    CERN Document Server

    1989-01-01

    The first website at CERN - and in the world - was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer. The website described the basic features of the web; how to access other people's documents and how to set up your own server. This NeXT machine - the original web server - is still at CERN. As part of the project to restore the first website, in 2013 CERN reinstated the world's first website to its original address.

  17. Web-based access to near real-time and archived high-density time-series data: cyber infrastructure challenges & developments in the open-source Waveform Server

    Science.gov (United States)

    Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.

    2010-12-01

    The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and client-side interface have been extensively rewritten. The Python Twisted server-side code-base has been fundamentally modified to now present waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single database model. This allows interactive web-based access to high-density (broadband @ 40Hz to strong motion @ 200Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to now incorporate a variety of User Interface (UI) improvements including standardized calendars for defining time ranges, applying on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyber infrastructure challenges we have faced while developing this application, the use-cases currently in existence, and the limitations of web-based application development.

  18. MS SQL Server 7.0 as a Platform for a Star Join Schema Data Warehouse

    DEFF Research Database (Denmark)

    Sørensen, Jens Otto; Alnor, Karl

    1998-01-01

    In this paper we construct a Star Join Schema and show how this schema can be created using the basic tools delivered with SQL Server 7.0. Major objectives are to keep the operational database unchanged so that data loading can be done without disturbing the business logic of the operational...

  19. Multi-Objective Flight Control for Drag Minimization and Load Alleviation of High-Aspect Ratio Flexible Wing Aircraft

    Science.gov (United States)

    Nguyen, Nhan; Ting, Eric; Chaparro, Daniel; Drew, Michael; Swei, Sean

    2017-01-01

    As aircraft wings become much more flexible due to the use of light-weight composite materials, adverse aerodynamics at off-design performance can result from changes in wing shape due to aeroelastic deflections. Increased drag, hence increased fuel burn, is a potential consequence. Without means for aeroelastic compensation, the benefit of weight reduction from the use of light-weight material could be offset by less optimal aerodynamic performance at off-design flight conditions. Performance Adaptive Aeroelastic Wing (PAAW) technology can potentially address these technical challenges for future flexible wing transports. PAAW technology leverages multi-disciplinary solutions to maximize the aerodynamic performance payoff of future adaptive wing design, while simultaneously addressing operational constraints that can prevent the optimal aerodynamic performance from being realized. These operational constraints include reduced flutter margins, increased airframe responses to gust and maneuver loads, pilot handling qualities, and ride qualities. All of these constraints, taken together with the pursuit of optimal aerodynamic performance, present a multi-objective flight control problem. The paper presents a multi-objective flight control approach based on a drag-cognizant optimal control method. A concept of virtual control, which was previously introduced, is implemented to address the pair-wise flap motion constraints imposed by the elastomer material. This method is shown to be able to satisfy the constraints. Real-time drag minimization control is considered to be an important consideration for PAAW technology. Drag minimization control has many technical challenges such as sensing and control. An initial outline of a real-time drag minimization control has already been developed and will be further investigated in the future. A simulation study of a multi-objective flight control for a flight path angle command with aeroelastic mode suppression and drag

  20. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    International Nuclear Information System (INIS)

    Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng

    2015-01-01

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction suffers from the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
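The Pareto fronts mentioned above consist of the nondominated trade-off schemes. A minimal sketch of extracting such a front: each candidate scheme is a (cost, reliability) pair, and a scheme is kept only if no other scheme has both lower-or-equal cost and higher-or-equal reliability with at least one strict improvement. The scheme values are invented for illustration, not the Lake Dianchi results.

```python
def pareto_front(schemes):
    """schemes: list of (cost, reliability); minimise cost, maximise reliability."""
    front = []
    for c, r in schemes:
        dominated = any(
            (c2 <= c and r2 >= r) and (c2 < c or r2 > r)
            for c2, r2 in schemes
        )
        if not dominated:
            front.append((c, r))
    return sorted(front)

# hypothetical load-reduction schemes: (cost, reliability level)
schemes = [(10, 0.60), (12, 0.75), (15, 0.75), (18, 0.90), (11, 0.55)]
front = pareto_front(schemes)
```

In practice a multi-objective evolutionary algorithm generates the candidate set, and a filter like this (applied each generation) keeps only the nondominated trade-offs for the decision maker.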

  1. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feifei [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Liu, Yong, E-mail: yongliu@pku.edu.cn [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Institute of Water Sciences, Peking University, Beijing 100871 (China); Su, Han [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Zou, Rui [Tetra Tech, Inc., 10306 Eaton Place, Ste 340, Fairfax, VA 22030 (United States); Yunnan Key Laboratory of Pollution Process and Management of Plateau Lake-Watershed, Kunming 650034 (China); Guo, Huaicheng [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China)

    2015-05-15

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Optimal decision-making modeling in watershed load reduction therefore faces the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with a multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
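    The Pareto solutions mentioned above are the mutually non-dominated schemes: no other scheme is at least as good on every reliability objective and strictly better on one. A minimal sketch of that filtering step, with made-up objective pairs and maximization of both objectives assumed:

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` (maximizing every objective)."""
    def dominates(a, b):
        # a dominates b if a >= b on every objective and a > b on at least one
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost reliability, load-reduction reliability) pairs for six schemes
schemes = [(0.9, 0.2), (0.7, 0.5), (0.5, 0.7), (0.2, 0.9), (0.4, 0.4), (0.6, 0.3)]
front = pareto_front(schemes)   # the last two schemes are dominated
```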

  2. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    CERN Document Server

    Valassi, A; Kalkhof, A; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN for accessing the data stored by the LHC experiments using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier web server and cache. Two new components have recently been added to CORAL to implement a model involving a middle tier "CORAL server" deployed close to the database and a tree of "CORAL server proxy" instances, with data caching and multiplexing functionalities, deployed close to the client. The new components are meant to provide advantages for read-only and read-write data access, in both offline and online use cases, in the areas of scalability and performance (multiplexing for several incoming connections, optional data caching) and security (authentication via proxy certificates). A first implementation of the two new c...

  3. Waiting-time approximations in multi-queue systems with cyclic service

    NARCIS (Netherlands)

    Boxma, O.J.; Meister, B.W.

    1987-01-01

    This study is devoted to mean waiting-time approximations in a single-server multi-queue model with cyclic service and zero switching times of the server between consecutive queues. Two different service disciplines are considered: exhaustive service and (ordinary cyclic) nonexhaustive service. For
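    The model can be sketched as a small discrete-event simulation, with each discipline switchable (a toy illustration with made-up parameters, not the approximation method of the paper):

```python
import random

def simulate_polling(num_queues=3, lam=0.2, mean_service=1.0,
                     exhaustive=True, horizon=2000.0, seed=7):
    """Single server cycling over the queues with zero switchover times.
    exhaustive=True empties a queue before moving on; False serves at most
    one customer per visit (a simple nonexhaustive discipline)."""
    rng = random.Random(seed)
    # Pre-generate Poisson arrival times for each queue.
    arrivals = []
    for _ in range(num_queues):
        t, times = 0.0, []
        while True:
            t += rng.expovariate(lam)
            if t > horizon:
                break
            times.append(t)
        arrivals.append(times)
    heads = [0] * num_queues          # next unserved arrival per queue
    clock, waits, i = 0.0, [], 0
    while any(heads[j] < len(arrivals[j]) for j in range(num_queues)):
        served = 0
        while heads[i] < len(arrivals[i]) and arrivals[i][heads[i]] <= clock:
            a = arrivals[i][heads[i]]
            heads[i] += 1
            waits.append(clock - a)   # waiting time before service starts
            clock += rng.expovariate(1.0 / mean_service)
            served += 1
            if not exhaustive:
                break
        if served == 0 and all(
            heads[j] >= len(arrivals[j]) or arrivals[j][heads[j]] > clock
            for j in range(num_queues)
        ):
            # System momentarily empty: advance the clock to the next arrival.
            clock = min(arrivals[j][heads[j]] for j in range(num_queues)
                        if heads[j] < len(arrivals[j]))
        i = (i + 1) % num_queues
    return sum(waits) / len(waits)

mean_wait_exhaustive = simulate_polling(exhaustive=True)
mean_wait_limited = simulate_polling(exhaustive=False)
```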

  4. Personalized Pseudonyms for Servers in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiao Qiuyu

    2017-10-01

    Full Text Available A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: We design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud’s tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”), a persistent pseudonym for a tenant server that can be used by a single client to access the server, whose real identity is protected by the cloud from both passive and active network attackers. When instantiated for TLS-based access to web servers, our design works with all major browsers and requires no additional client-side software and minimal changes to the client user experience. Moreover, changes to tenant servers can be hidden in supporting software (operating systems and web-programming frameworks) without imposing on web-content development. Perhaps most notably, our system boosts privacy with minimal impact to web-browsing performance, after some initial setup during a user’s first access to each web server.
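    The paper defines its own PoPSiCl construction; absent those details here, a plausible sketch of a persistent per-(client, tenant) pseudonym is a keyed hash held by the cloud operator. The key, naming scheme, and domain suffix below are all illustrative assumptions, not the paper's design:

```python
import hmac, hashlib

def popsicl_label(operator_key: bytes, client_id: str, tenant_server: str) -> str:
    """Derive a stable pseudonym: the same (client, tenant) pair always maps to
    the same label, and labels are unlinkable without the operator's key."""
    mac = hmac.new(operator_key, f"{client_id}|{tenant_server}".encode(), hashlib.sha256)
    return mac.hexdigest()[:16] + ".cloud.example"

key = b"operator-secret"                                  # hypothetical operator key
a = popsicl_label(key, "client-1", "tenant-a.example")
b = popsicl_label(key, "client-2", "tenant-a.example")    # same tenant, different client
```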

  5. Mastering Microsoft Windows Server 2008 R2

    CERN Document Server

    Minasi, Mark; Finn, Aidan

    2010-01-01

    The one book you absolutely need to get up and running with Windows Server 2008 R2. One of the world's leading Windows authorities and top-selling author Mark Minasi explores every nook and cranny of the latest version of Microsoft's flagship network operating system, Windows Server 2008 R2, giving you the most in-depth coverage in any book on the market.: Focuses on Windows Windows Server 2008 R2, the newest version of Microsoft's Windows' server line of operating system, and the ideal server for new Windows 7 clients; Author Mark Minasi is one of the world's leading Windows authorities and h

  6. Web server for priority ordered multimedia services

    Science.gov (United States)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions of the CM services. The type of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for an improved disk access and a higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS ouput is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority ordered buffering of the retrieved Web pages and CM data streams that are fed into an auto regressive moving average (ARMA) based traffic shaping circuitry before being transmitted through the network.
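    The fixed priority encoding described above can be sketched with a binary heap; the class labels are made-up shorthand for the categories named in the text:

```python
import heapq

# Priority encoding from the text (lower number = served first); labels illustrative.
PRIORITY = {
    "admin_rw": 0,       # admin read/write
    "hot_multicast": 1,  # hot page CM and Web multicasting
    "cm_read": 2,
    "web_read": 3,
    "cm_write": 4,
    "web_write": 5,
}

def make_scheduler():
    heap, seq = [], 0
    def submit(kind, request):
        nonlocal seq
        # seq breaks ties so requests within one class stay FIFO
        heapq.heappush(heap, (PRIORITY[kind], seq, request))
        seq += 1
    def next_request():
        return heapq.heappop(heap)[2]
    return submit, next_request

submit, next_request = make_scheduler()
for kind, req in [("web_write", "w1"), ("cm_read", "r1"),
                  ("admin_rw", "a1"), ("web_read", "p1")]:
    submit(kind, req)
order = [next_request() for _ in range(4)]
```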

  7. Mastering Windows Server 2012 R2

    CERN Document Server

    Minasi, Mark; Booth, Christian; Butler, Robert; McCabe, John; Panek, Robert; Rice, Michael; Roth, Stefan

    2013-01-01

    Check out the new Hyper-V, find new and easier ways to remotely connect back into the office, or learn all about Storage Spaces-these are just a few of the features in Windows Server 2012 R2 that are explained in this updated edition from Windows authority Mark Minasi and a team of Windows Server experts led by Kevin Greene. This book gets you up to speed on all of the new features and functions of Windows Server, and includes real-world scenarios to put them in perspective. If you're a system administrator upgrading to, migrating to, or managing Windows Server 2012 R2, find what you need to

  8. An adversarial queueing model for online server routing

    NARCIS (Netherlands)

    Bonifaci, V.

    2007-01-01

    In an online server routing problem, a vehicle or server moves in a network in order to process incoming requests at the nodes. Online server routing problems have been thoroughly studied using competitive analysis. We propose a new model for online server routing, based on adversarial queueing

  9. SPENT NUCLEAR FUEL NUMBER DENSITIES FOR MULTI-PURPOSE CANISTER CRITICALITY CALCULATIONS

    International Nuclear Information System (INIS)

    D. A. Thomas

    1996-01-01

    The purpose of this analysis is to calculate the number densities for spent nuclear fuel (SNF) to be used in criticality evaluations of the Multi-Purpose Canister (MPC) waste packages. The objective of this analysis is to provide material number density information which will be referenced by future MPC criticality design analyses, such as for those supporting the Conceptual Design Report

  10. New approaches to provide ride-through for critical loads in electric power distribution systems

    Science.gov (United States)

    Montero-Hernandez, Oscar C.

    2001-07-01

    The extensive use of electronic circuits has enabled modernization, automation, miniaturization, high quality, low cost, and other achievements regarding electric loads in the last decades. However, modern electronic circuits and systems are extremely sensitive to disturbances from the electric power supply. In fact, the rate at which these disturbances happen is considerable, as has been documented in recent years. In response to these power quality concerns, this dissertation proposes new approaches to provide ride-through for critical loads during voltage disturbances, with emphasis on voltage sags. In this dissertation, a new approach based on an AC-DC-AC system is proposed to provide ride-through for critical loads connected in buildings and/or an industrial system. In this approach, a three-phase IGBT inverter with a built-in DC-link voltage regulator is suitably controlled along with static by-pass switches to provide continuous power to critical loads. During a disturbance, the input utility source is disconnected and the power from the inverter is connected to the load. The remaining voltage in the AC supply is converted to DC and compensated before being applied to the inverter and the load. After detecting normal utility conditions, power from the utility is restored to the critical load. In order to achieve an extended ride-through capability, a second approach is introduced. In this case, the DC-link voltage regulation is performed by a DC-DC Buck-Boost converter. This new approach has the capability to mitigate voltage variations below and above the nominal value. In the third approach presented in this dissertation, a three-phase AC to AC boost converter is investigated. This converter provides a boosting action for the utility input voltages, right before they are applied to the load. The proposed Pulse Width Modulation (PWM) control strategy ensures independent control of each phase and compensates for both single-phase or poly

  11. Multi-Axis Independent Electromechanical Load Control for Docking System Actuation Development and Verification Using dSPACE

    Science.gov (United States)

    Oesch, Christopher; Dick, Brandon; Rupp, Timothy

    2015-01-01

    The development of highly complex and advanced actuation systems to meet customer demands has accelerated as the use of real-time testing technology expands into multiple markets at Moog. Systems developed for the autonomous docking of human-rated spacecraft to the International Space Station (ISS) encompass multi-operational characteristics which place unique constraints on an actuation system. Real-time testing hardware has been used as a platform for incremental testing and development for the linear actuation system which controls initial capture and docking for vehicles visiting the ISS. This presentation will outline the role of dSPACE hardware as a platform for rapid control-algorithm prototyping as well as an Electromechanical Actuator (EMA) system dynamic loading simulator, both conducted at Moog to develop the safety-critical Linear Actuator System (LAS) of the NASA Docking System (NDS).

  12. A tandem queue with delayed server release

    OpenAIRE

    Nawijn, W.M.

    1997-01-01

    We consider a tandem queue with two stations. The first station is an s-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to require service at an exponential single server, while in the meantime he is blocking his server in station 1 until he completes service in station 2, whereupon the server in station 1 is released. An analysis of the generating function of the simultaneous probability di...

  13. Effects and empirical critical loads of Nitrogen for ecoregions of the United States

    Science.gov (United States)

    Pardo, Linda H.; Robin-Abbott, Molly J.; Fenn, Mark E.; Goodale, Christine L.; Geiser, Linda H.; Driscoll, Charles T.; Allen, Edith B.; Baron, Jill S.; Bobbink, Roland; Bowman, William D.; Clark, C M; Emmett, B.; Gilliam, Frank S; Greaver, Tara L.; Hall, Sharon J; Lilleskov, Erik A.; Liu, Lingli; Lynch, Jason A.; Nadelhoffer, Knute J; Perakis, Steven; Stoddard, John L; Weathers, Kathleen C.; Dennis, Robin L.

    2015-01-01

    Human activity in the last century has increased nitrogen (N) deposition to a level that has caused or is likely to cause alterations to the structure and function of many ecosystems across the United States. We synthesized current research relating atmospheric N deposition to effects on terrestrial and freshwater ecosystems in the United States, and estimated associated empirical critical loads of N for several receptors: freshwater diatoms, mycorrhizal fungi, lichens, bryophytes, herbaceous plants, shrubs, and trees. Biogeochemical responses included increased N mineralization and nitrification, increased gaseous N losses, and increased N leaching. Individual species, population, and community responses included increased tissue N, physiological and nutrient imbalances, increased growth, altered root-shoot ratios, increased susceptibility to secondary stresses, altered fire regime, shifts in competitive interactions and community composition, changes in species richness and other measures of biodiversity, and increases in invasive species. The range of critical loads of nutrient N reported for U.S. ecoregions, inland surface waters, and freshwater wetlands is 1–39 kg N ha−1 yr−1, spanning the range of N deposition observed over most of the country. The empirical critical loads of N tend to increase in the following sequence: diatoms, lichens and bryophytes, mycorrhizal fungi, herbaceous plants and shrubs, trees.

  14. A group arrival retrial G - queue with multi optional stages of service, orbital search and server breakdown

    Science.gov (United States)

    Radha, J.; Indhira, K.; Chandrasekaran, V. M.

    2017-11-01

    A group arrival feedback retrial queue with k optional stages of service and an orbital search policy is studied. If an arriving group of customers finds the server free, one customer from the group enters the first stage of service and the rest of the group join the orbit. After completing the i-th stage of service, the customer may opt for the (i+1)-th stage with probability θ_i, may rejoin the orbit as a feedback customer with probability p_i, or may leave the system with probability q_i, where q_i = 1 − p_i − θ_i for i = 1, 2, …, k−1 and q_k = 1 − p_k. The busy server may break down due to the arrival of negative customers, and the service channel then fails for a short interval of time. At the completion of a service or a repair, the server searches for a customer in the orbit (if any) with probability α or remains idle with probability 1 − α. Using the supplementary variable method, the steady-state probability generating function for the system size and some system performance measures are derived.
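    The departure probabilities in the abstract, q_i = 1 − p_i − θ_i for i &lt; k and q_k = 1 − p_k, make each stage's three outcomes (continue, feed back to orbit, leave) a probability distribution. A small sketch with illustrative values:

```python
def departure_probs(p, theta):
    """q_i = 1 - p_i - theta_i for stages 1..k-1 and q_k = 1 - p_k, following
    the routing rule in the abstract. p has length k; theta has length k-1."""
    k = len(p)
    return [1.0 - p[i] - (theta[i] if i < k - 1 else 0.0) for i in range(k)]

# Illustrative probabilities for k = 3 optional stages (made-up numbers)
p = [0.2, 0.3, 0.1]       # feedback to orbit after stage i
theta = [0.5, 0.4]        # continue to stage i+1 (defined for stages 1..k-1)
q = departure_probs(p, theta)

# At every stage the three outcomes must sum to 1
for i in range(len(p)):
    cont = theta[i] if i < len(theta) else 0.0
    assert abs(p[i] + cont + q[i] - 1.0) < 1e-12
```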

  15. WebSphere Application Server Step by Step

    CERN Document Server

    Cline, Owen; Van Sickel, Peter

    2012-01-01

    WebSphere Application Server (WAS) is complex and multifaceted middleware used by huge enterprises as well as small businesses. In this book, the authors do an excellent job of covering the many aspects of the software. While other books merely cover installation and configuration, this book goes beyond that to cover the critical verification and management process to ensure a successful installation and implementation. It also addresses all of the different packages, from Express to Network, so that no matter what size your company is, you will be able to successfully implement WAS V6. To de

  16. [Loading and strength of single- and multi-unit fixed dental prostheses 2. Strength]

    NARCIS (Netherlands)

    Baat, C. de; Witter, D.J.; Meijers, C.C.A.J.; Vergoossen, E.L.; Creugers, N.H.J.

    2014-01-01

    The ultimate strength of a dental prosthesis is defined as the strongest loading force applied to the prosthesis until a fracture failure occurs. Important key terms are strength, hardness, toughness and fatigue. Relatively prevalent complications of single- and multi-unit fixed dental prostheses are

  17. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
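    The single-server/multiple-clients and multiple-servers/single-client models both reduce to scattering work units and gathering partial results; a minimal sketch with local threads standing in for the web servers (the workload is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # The work unit one "server" would handle in the multi-server configuration
    return sum(x * x for x in chunk)

data = list(range(1, 1001))
chunks = [data[i::4] for i in range(4)]   # one interleaved chunk per simulated server

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_total = sum(pool.map(partial_sum_of_squares, chunks))

serial_total = sum(x * x for x in data)   # the scattered results must recombine exactly
```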

  18. JAFA: a protein function annotation meta-server

    DEFF Research Database (Denmark)

    Friedberg, Iddo; Harder, Tim; Godzik, Adam

    2006-01-01

    Annotations, or JAFA server. JAFA queries several function prediction servers with a protein sequence and assembles the returned predictions in a legible, non-redundant format. In this manner, JAFA combines the predictions of several servers to provide a comprehensive view of what are the predicted functions...

  19. A polling model with an autonomous server

    NARCIS (Netherlands)

    de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    Polling models are used as an analytical performance tool in several application areas. In these models, the focus often is on controlling the operation of the server as to optimize some performance measure. For several applications, controlling the server is not an issue as the server moves

  20. Passive Detection of Misbehaving Name Servers

    Science.gov (United States)

    2013-10-01

    …name servers that changed IP address five or more times in a month. Solid red line indicates those servers possibly linked to pharmaceutical scams. …malicious and stated that fast-flux hosting “is considered one of the most serious threats to online activities today” [ICANN 2008, p. 2]. …that time, apparently independent of filters on name-server flux, a large number of pharmaceutical scams were taken down. These scams apparently

  1. Integrating multi-view transmission system into MPEG-21 stereoscopic and multi-view DIA (digital item adaptation)

    Science.gov (United States)

    Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran

    2006-10-01

    As digital broadcasting technologies have rapidly progressed, users' expectations for realistic and interactive broadcasting services have also increased. As one of such services, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client. Then, the user can select a part of the views or all the views according to display capabilities. However, this kind of system requires high processing power of the server as well as the client, thus posing a difficulty in practical applications. To overcome this problem, a relatively simple method is to transmit only the two view-sequences requested by the client in order to deliver a stereoscopic video. In this system, effective communication between the server and the client is one of the important aspects. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. It is one of its merits that SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, then the server sends its associated view sequences. Finally, we present a method which can reduce the user's visual discomfort that might occur while viewing stereoscopic video. This phenomenon happens when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras. To

  2. Mastering Microsoft Windows Small Business Server 2008

    CERN Document Server

    Johnson, Steven

    2010-01-01

    A complete, winning approach to the number one small business solution. Do you have 75 or fewer users or devices on your small-business network? Find out how to integrate everything you need for your mini-enterprise with Microsoft's new Windows Server 2008 Small Business Server, a custom collection of server and management technologies designed to help small operations run smoothly without a giant IT department. This comprehensive guide shows you how to master all SBS components as well as handle integration with other Microsoft technologies.: Focuses on Windows Server 2008 Small Business Serv

  3. Microsoft Windows Server 2012 administration instant reference

    CERN Document Server

    Hester, Matthew

    2013-01-01

    Fast, accurate answers for common Windows Server questions Serving as a perfect companion to all Windows Server books, this reference provides you with quick and easily searchable solutions to day-to-day challenges of Microsoft's newest version of Windows Server. Using helpful design features such as thumb tabs, tables of contents, and special heading treatments, this resource boasts a smooth and seamless approach to finding information. Plus, quick-reference tables and lists provide additional on-the-spot answers. Covers such key topics as server roles and functionality, u

  4. Server-side Statistics Scripting in PHP

    Directory of Open Access Journals (Sweden)

    Jan de Leeuw

    1997-06-01

    Full Text Available On the UCLA Statistics WWW server there are a large number of demos and calculators that can be used in statistics teaching and research. Some of these demos require substantial amounts of computation; others mainly use graphics. These calculators and demos are implemented in various different ways, reflecting developments in WWW-based computing. As usual, one of the main choices is between doing the work on the client side (i.e. in the browser) or on the server side (i.e. on our WWW server). Obviously, client-side computation puts fewer demands on the server. On the other hand, it requires that the client download Java applets, or install plugins and/or helpers. If JavaScript is used, client-side computations will generally be slow. We also have to assume that the client is installed properly, and has the required capabilities. Requiring too much on the client side has caused browsing machines such as Netscape Communicator to grow beyond all reasonable bounds, both in size and RAM requirements. Moreover, requiring Java and JavaScript rules out such excellent browsers as Lynx or Emacs W3. For server-side computing, we can configure the server and its resources ourselves, and we need not worry about browser capabilities and configuration. Nothing needs to be downloaded, except the usual HTML pages and graphics. In the same way as on the client side, there is a scripting solution, where code is interpreted, or an object-code solution using compiled code. For server-side scripting, we use embedded languages, such as PHP/FI. The scripts in the HTML pages are interpreted by a CGI program, and the output of the CGI program is sent to the clients. Of course the CGI program is compiled, but the statistics procedures will usually be interpreted, because PHP/FI does not have the appropriate functions in its scripting language. This will tend to be slow, because embedded languages do not deal efficiently with loops and similar constructs. Thus a first
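    The server-side option described here predates today's frameworks, but the pattern is language-independent; a minimal sketch of a server-side statistics computation, in Python rather than PHP/FI, with an entirely made-up endpoint and payload:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatsHandler(BaseHTTPRequestHandler):
    """Server-side computation: the client POSTs numbers, the server returns the mean."""
    def do_POST(self):
        data = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        body = json.dumps({"mean": sum(data) / len(data)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StatsHandler)   # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side only ships data and renders the result
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/mean",    # endpoint name is illustrative
    data=json.dumps([1, 2, 3, 4]).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
```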

  5. Triple-server blind quantum computation using entanglement swapping

    Science.gov (United States)

    Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua

    2014-04-01

    Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol in which the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other, and the client is almost classical, since it requires no quantum computational power or quantum memory and no ability to prepare any quantum states, and only needs to be capable of getting access to quantum channels.
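    A quick state-vector check of the entanglement-swapping primitive the protocol relies on (a toy simulation, not the protocol itself): starting from Bell pairs on qubits (1,2) and (3,4), a Bell measurement of qubits 2 and 3 that yields |Φ+⟩ leaves qubits 1 and 4 in |Φ+⟩, with outcome probability 1/4.

```python
import numpy as np

# |Φ+> Bell state on two qubits
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Qubit order (1,2,3,4): pair (1,2) and pair (3,4) are each in |Φ+>
state = np.kron(phi_plus, phi_plus)

# One Bell-measurement outcome on qubits 2 and 3: project onto |Φ+><Φ+|
bell_proj = np.outer(phi_plus, phi_plus)
projector = np.kron(np.eye(2), np.kron(bell_proj, np.eye(2)))

post = projector @ state
prob = float(post @ post)             # probability of this outcome (1/4)
post = post / np.sqrt(prob)

# Reduced state of qubits (1,4): trace out qubits 2 and 3
psi = post.reshape(2, 2, 2, 2)        # indices: qubit 1, 2, 3, 4
rho_14 = np.einsum('abcd,ebcf->adef', psi, psi.conj()).reshape(4, 4)
fidelity = float(np.real(phi_plus @ rho_14 @ phi_plus))   # overlap with |Φ+>
```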

  6. Dscam1 web server: online prediction of Dscam1 self- and hetero-affinity.

    Science.gov (United States)

    Marini, Simone; Nazzicari, Nelson; Biscarini, Filippo; Wang, Guang-Zhong

    2017-06-15

    Formation of homodimers by identical Dscam1 protein isoforms on the cell surface is the key factor in the self-avoidance of growing neurites. Dscam1's immense diversity has a critical role in the formation of the arthropod neuronal circuit, showing unique evolutionary properties compared to other cell surface proteins. Experimental measures are available for 89 self-binding and 1722 hetero-binding protein samples, out of more than 19 thousand (self-binding) and 350 million (hetero-binding) possible isoform combinations. We developed the Dscam1 Web Server to quickly predict Dscam1 self- and hetero-binding affinity for batches of Dscam1 isoforms. The server can help the study of Dscam1 affinity and help researchers navigate through the tens of millions of possible isoform combinations to isolate the strong-binding ones. Dscam1 Web Server is freely available at: http://bioinformatics.tecnoparco.org/Dscam1-webserver . Web server code is available at https://gitlab.com/ne1s0n/Dscam1-binding . simone.marini@unipv.it or guangzhong.wang@picb.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  7. The CAD-score web server: contact area-based comparison of structures and interfaces of proteins, nucleic acids and their complexes.

    Science.gov (United States)

    Olechnovič, Kliment; Venclovas, Ceslovas

    2014-07-01

    The Contact Area Difference score (CAD-score) web server provides a universal framework to compute and analyze discrepancies between different 3D structures of the same biological macromolecule or complex. The server accepts both single-subunit and multi-subunit structures and can handle all the major types of macromolecules (proteins, RNA, DNA and their complexes). It can perform numerical comparison of both structures and interfaces. In addition to entire structures and interfaces, the server can assess user-defined subsets. The CAD-score server performs both global and local numerical evaluations of structural differences between structures or interfaces. The results can be explored interactively using sortable tables of global scores, profiles of local errors, superimposed contact maps and 3D structure visualization. The web server could be used for tasks such as comparison of models with the native (reference) structure, comparison of X-ray structures of the same macromolecule obtained in different states (e.g. with and without a bound ligand), analysis of nuclear magnetic resonance (NMR) structural ensemble or structures obtained in the course of molecular dynamics simulation. The web server is freely accessible at: http://www.ibt.lt/bioinformatics/cad-score. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
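    A simplified sketch of the contact-area-difference idea: per-contact differences between a reference (target) structure and a model are bounded by the reference area, then normalized by the total reference contact area. The formula follows the published CAD-score definition as we understand it, but the contact areas below are made-up numbers, not Voronoi-derived areas:

```python
def cad_score(target_contacts, model_contacts):
    """Global CAD-score sketch over contact areas keyed by residue pair.
    Each per-contact error is capped at the target area, so the score
    stays in [0, 1]; 1.0 means identical contact areas."""
    total = sum(target_contacts.values())
    diff = sum(
        min(abs(t - model_contacts.get(pair, 0.0)), t)
        for pair, t in target_contacts.items()
    )
    return 1.0 - diff / total

# Hypothetical residue-pair contact areas (arbitrary units)
target = {("A1", "A5"): 12.0, ("A2", "A6"): 8.0, ("A3", "A7"): 5.0}
identical = cad_score(target, dict(target))       # perfect model
worse = cad_score(target, {("A1", "A5"): 6.0})    # shifted and missing contacts
```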

  8. Effect of video server topology on contingency capacity requirements

    Science.gov (United States)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
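    The blocking model borrowed from telephone systems is classically the Erlang-B formula, computable by a stable recurrence; the comparison below mirrors the paper's point that a partitioned server blocks more than a pooled one of equal total capacity (stream counts and loads are illustrative):

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking probability via the stable recurrence
    B(A, 0) = 1,  B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# One monolithic server with 20 stream slots and 16 erlangs of demand,
# versus the same capacity split into four partitions of 5 slots / 4 erlangs each
pooled = erlang_b(16.0, 20)
partitioned = erlang_b(4.0, 5)   # each partition blocks independently
```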

  9. RANCANG BANGUN PERANGKAT LUNAK MANAJEMEN DATABASE SQL SERVER BERBASIS WEB

    Directory of Open Access Journals (Sweden)

    Muchammad Husni

    2005-01-01

Full Text Available Microsoft SQL Server is a client/server desktop database server application: it has a client component, which displays and manipulates data, and a server component, which stores, retrieves, and secures databases. Database administrators perform management operations on all database servers in the network using SQL Server's main administrative tool, Enterprise Manager. As a consequence, administrators can only carry out these operations on computers where Microsoft SQL Server has been installed. In this study, a web-based application was designed using ASP.Net to administer database servers. The application uses ADO.NET, together with Transact-SQL and stored procedures on the server, to perform database management operations on a SQL database server and display the results in a web page. Database administrators can run the web application from any computer on the network and connect to the SQL database server using a web browser, making it easier for them to carry out their tasks without having to use the server computer itself. Keywords: Transact-SQL, ASP.Net, ADO.NET, SQL Server

  10. Increased Dicarbonyl Stress as a Novel Mechanism of Multi-Organ Failure in Critical Illness

    Directory of Open Access Journals (Sweden)

    Bas C. T. van Bussel

    2017-02-01

Full Text Available Molecular pathological pathways leading to multi-organ failure in critical illness are progressively being unravelled. However, attempts to modulate these pathways have not yet improved the clinical outcome. Therefore, new targetable mechanisms should be investigated. We hypothesize that increased dicarbonyl stress is such a mechanism. Dicarbonyl stress is the accumulation of dicarbonyl metabolites (i.e., methylglyoxal, glyoxal, and 3-deoxyglucosone) that damages intracellular proteins, modifies extracellular matrix proteins, and alters plasma proteins. Increased dicarbonyl stress has been shown to impair the renal, cardiovascular, and central nervous system function, and possibly also the hepatic and respiratory function. In addition to hyperglycaemia, hypoxia and inflammation can cause increased dicarbonyl stress, and these conditions are prevalent in critical illness. Hypoxia and inflammation have been shown to drive the rapid intracellular accumulation of reactive dicarbonyls, i.e., through reduced glyoxalase-1 activity, which is the key enzyme in the dicarbonyl detoxification enzyme system. In critical illness, hypoxia and inflammation, with or without hyperglycaemia, could thus increase dicarbonyl stress in a way that might contribute to multi-organ failure. Thus, we hypothesize that increased dicarbonyl stress in critical illness, such as sepsis and major trauma, contributes to the development of multi-organ failure. This mechanism has the potential for new therapeutic intervention in critical care.

  11. Narrowing the scope of failure prediction using targeted fault load injection

    Science.gov (United States)

    Jordan, Paul L.; Peterson, Gilbert L.; Lin, Alan C.; Mendenhall, Michael J.; Sellers, Andrew J.

    2018-05-01

    As society becomes more dependent upon computer systems to perform increasingly critical tasks, ensuring that those systems do not fail becomes increasingly important. Many organizations depend heavily on desktop computers for day-to-day operations. Unfortunately, the software that runs on these computers is written by humans and, as such, is still subject to human error and consequent failure. A natural solution is to use statistical machine learning to predict failure. However, since failure is still a relatively rare event, obtaining labelled training data to train these models is not a trivial task. This work presents new simulated fault-inducing loads that extend the focus of traditional fault injection techniques to predict failure in the Microsoft enterprise authentication service and Apache web server. These new fault loads were successful in creating failure conditions that were identifiable using statistical learning methods, with fewer irrelevant faults being created.
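The approach above labels telemetry samples by whether a fault load was being injected and then trains a statistical model on them. As an illustrative sketch only (the feature names, baseline values, and the nearest-centroid classifier are all invented for this example, not taken from the paper), the pipeline can be prototyped on synthetic system metrics:

```python
import random

random.seed(0)

# Synthetic telemetry: (cpu_util, mem_util, error_rate), labelled 1 if the
# sample was collected while a fault-inducing load was injected. The feature
# set and baselines are hypothetical.
def sample(faulty: bool):
    base = (0.9, 0.95, 0.2) if faulty else (0.4, 0.5, 0.01)
    x = [min(1.0, max(0.0, b + random.gauss(0, 0.05))) for b in base]
    return x, int(faulty)

train = [sample(i % 2 == 0) for i in range(200)]

# Nearest-centroid classifier: a minimal stand-in for the statistical
# learning methods applied to fault-labelled data.
def centroid(rows):
    return [sum(r[k] for r in rows) / len(rows) for k in range(3)]

c_ok = centroid([x for x, y in train if y == 0])
c_bad = centroid([x for x, y in train if y == 1])

def predict(x):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return int(dist(c_bad) < dist(c_ok))

held_out = [sample(i % 2 == 0) for i in range(100)]
acc = sum(predict(x) == y for x, y in held_out) / len(held_out)
print(f"held-out accuracy: {acc:.2f}")
```

On well-separated synthetic data the classifier is near-perfect; the hard part in practice, as the abstract notes, is generating fault loads whose signatures resemble real pre-failure conditions.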

  12. A dynamic modelling approach for estimating critical loads of nitrogen based on plant community changes under a changing climate

    International Nuclear Information System (INIS)

    Belyazid, Salim; Kurz, Dani; Braun, Sabine; Sverdrup, Harald; Rihm, Beat; Hettelingh, Jean-Paul

    2011-01-01

    A dynamic model of forest ecosystems was used to investigate the effects of climate change, atmospheric deposition and harvest intensity on 48 forest sites in Sweden (n = 16) and Switzerland (n = 32). The model was used to investigate the feasibility of deriving critical loads for nitrogen (N) deposition based on changes in plant community composition. The simulations show that climate and atmospheric deposition have comparably important effects on N mobilization in the soil, as climate triggers the release of organically bound nitrogen stored in the soil during the elevated deposition period. Climate has the most important effect on plant community composition, underlining the fact that this cannot be ignored in future simulations of vegetation dynamics. Harvest intensity has comparatively little effect on the plant community in the long term, while it may be detrimental in the short term following cutting. This study shows: that critical loads of N deposition can be estimated using the plant community as an indicator; that future climatic changes must be taken into account; and that the definition of the reference deposition is critical for the outcome of this estimate. - Research highlights: → Plant community changes can be used to estimate critical loads of nitrogen. → Climate change is decisive for future changes of geochemistry and plant communities. → Climate change cannot be ignored in estimates of critical loads. → The model ForSAFE-Veg was successfully used to set critical loads of nitrogen. - Plant community composition can be used in dynamic modelling to estimate critical loads of nitrogen deposition, provided the appropriate reference deposition, future climate and target plant communities are defined.

  13. Mastering Citrix XenServer

    CERN Document Server

    Reed, Martez

    2014-01-01

If you are an administrator who is looking to gain a greater understanding of how to design and implement a virtualization solution based on Citrix® XenServer®, then this book is for you. The book will serve as an excellent resource for those who are already familiar with other virtualization platforms, such as Microsoft Hyper-V or VMware vSphere. The book assumes that you have a good working knowledge of servers, networking, and storage technologies.

  14. Securing SQL Server Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2012-01-01

    Written by Denny Cherry, a Microsoft MVP for the SQL Server product, a Microsoft Certified Master for SQL Server 2008, and one of the biggest names in SQL Server today, Securing SQL Server, Second Edition explores the potential attack vectors someone can use to break into your SQL Server database as well as how to protect your database from these attacks. In this book, you will learn how to properly secure your database from both internal and external threats using best practices and specific tricks the author uses in his role as an independent consultant while working on some of the largest

  15. Securing SQL server protecting your database from attackers

    CERN Document Server

    Cherry, Denny

    2015-01-01

SQL Server is the most widely used database platform in the world, and a large percentage of these databases are not properly secured, exposing sensitive customer and business data to attack. In Securing SQL Server, Third Edition, you will learn about the potential attack vectors that can be used to break into SQL Server databases as well as how to protect databases from these attacks. In this book, Denny Cherry - a Microsoft SQL MVP and one of the biggest names in SQL Server - will teach you how to properly secure an SQL Server database from internal and external threats using best practices.

  16. Server farms with setup costs

    NARCIS (Netherlands)

    Gandhi, A.; Harchol-Balter, M.; Adan, I.J.B.F.

    2010-01-01

    In this paper we consider server farms with a setup cost. This model is common in manufacturing systems and data centers, where there is a cost to turn servers on. Setup costs always take the form of a time delay, and sometimes there is additionally a power penalty, as in the case of data centers.
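The setup-cost effect can be sketched with a simplified single-server model (the queueing discipline and parameters below are assumptions for illustration; the paper analyzes multi-server farms): a server that switches off whenever it idles pays an exponential setup delay on the next arrival, lengthening mean response time relative to an always-on M/M/1 queue.

```python
import random

random.seed(1)

def mm1_mean_response(lam, mu, setup_rate=None, n=200_000):
    """Simulated mean response time of an M/M/1 queue.

    If setup_rate is given, the server switches off whenever it idles, and a
    job arriving at an off server must first wait for an Exp(setup_rate)
    setup before service begins (simplified single-server setup model).
    """
    t = 0.0        # arrival clock
    free_at = 0.0  # time at which the server next becomes free
    total = 0.0
    for _ in range(n):
        t += random.expovariate(lam)
        if setup_rate is not None and free_at <= t:
            # The server idled before this arrival: pay a setup delay.
            free_at = t + random.expovariate(setup_rate)
        start = max(t, free_at)
        free_at = start + random.expovariate(mu)
        total += free_at - t
    return total / n

always_on = mm1_mean_response(0.7, 1.0)                 # theory: 1/(mu-lam) = 3.33
with_setup = mm1_mean_response(0.7, 1.0, setup_rate=1.0)
print(f"mean response, always-on:  {always_on:.2f}")
print(f"mean response, with setup: {with_setup:.2f}")
```

The gap between the two numbers is the response-time cost of turning servers off, which must be traded against the power saved while they are off.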

  17. Determination and Distribution of Critical Loads: Application to the Forest Soils in the Autonomous Region of Madrid

    International Nuclear Information System (INIS)

    Sousa, M.; Schmid, T.; Rabago, I.

    2000-01-01

The critical loads of acidity and sulphur have been determined for forest soils in the north and northwest of the Autonomous Region of Madrid. The SMB-CCE and SMB-PROFILE steady-state models have been applied at a 1 km x 1 km resolution. The forest ecosystems have been characterised according to soil and forest type, slope and climatic data using a Geographic Information System. In order to estimate the critical loads, processes such as the weathering rate of the parent material, atmospheric deposition, the critical alkalinity leaching rate and the nutrients absorbed by the vegetation have been considered. In general, the forest soils present high critical load values for acidity and sulphur. The most sensitive zones are found in the north of the Sierra de Guadarrama. Independently of the applied method, the results are associated with the soil types: Leptosols show the lowest values, Cambisols and Regosols intermediate ones, and Luvisols the highest. (Author) 40 refs

  18. Development of a method of lifetime assessment of power plant components under complex multi-axial vibration loads

    International Nuclear Information System (INIS)

    Fesich, Thomas M.

    2012-01-01

In general, technical components are loaded and stressed by forces and moments that are both constant and variable over time. Multi-axial stress conditions can arise as a function of the load on, and/or the geometry of, a component. Assessing the impact of multi-axial stress conditions on strength is a problem for which no generally valid solution has yet been found, especially when loads and stresses vary over time. This is also due to the fact that the development of stresses over time can give rise to very complex stress states. Assessing the lifetime of power plant components subjected to complex vibration loads and stresses is often not reliable when performed by means of conventional codes and approaches, or is associated with high degrees of conservatism. The MPA AIM-Life concept developed at the Stuttgart MPA/IMWF, an advanced and verified strength hypothesis based on energy considerations, allows such assessments to be made more reliably and in a numerically efficient way, while avoiding excessive conservatism. (orig.)

  19. Implementing eco friendly highly reliable upload feature using multi 3G service

    Science.gov (United States)

    Tanutama, Lukas; Wijaya, Rico

    2017-12-01

The current trend favors eco-friendly Internet access; in this research, eco-friendly is understood as minimum power consumption. The selected devices consume little power in operation and essentially none while hibernating in the idle state. To provide reliability, a router with an internal load-balancing feature improves on previous research on multi 3G services for broadband lines. Previous studies emphasized accessing and downloading information files from Web servers residing in the public cloud. The demand is not only for speed but also for high reliability of access, since high reliability mitigates both the direct and indirect costs of repeated attempts to upload and download large files. Nomadic and mobile computer users need a viable solution. A solution for downloading information has previously been proposed and tested, with promising results. That result is now extended to providing a reliable access line, by means of redundancy and automatic reconfiguration, for uploading and downloading large information files to a Web server in the cloud. The technique takes advantage of the internal load-balancing feature to provision a redundant line acting as a backup line. A router that can balance load across several WAN lines is chosen, with the WAN lines constructed from multiple 3G lines. The router supports Internet access over more than one 3G access line, which increases the reliability and availability of the Internet access, as the second line immediately takes over if the first line is disturbed.

  20. DIANA-microT web server v5.0: service integration into miRNA functional analysis workflows.

    Science.gov (United States)

    Paraskevopoulou, Maria D; Georgakilas, Georgios; Kostoulas, Nikos; Vlachos, Ioannis S; Vergoulis, Thanasis; Reczko, Martin; Filippidis, Christos; Dalamagas, Theodore; Hatzigeorgiou, A G

    2013-07-01

    MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it is being widely used from the scientific community, since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANA-microT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA-gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned, to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANA-microT web server v5.0 also supports a complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines.

  1. Towards optimizing server performance in an educational MMORPG for teaching computer programming

    Science.gov (United States)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2013-10-01

Web-based games have become significantly popular during the last few years. This is due to the gradual increase of Internet speeds, which has led to ongoing multiplayer game development and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Game (MMORPG) field. In parallel, similar technologies called educational games have been developed for use in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant resources, such as bandwidth, RAM and CPU capacity. These requirements may be even larger in an educational MMORPG that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall load on the game's resources is essential, so that server administrators can configure them and ensure the educational game's proper operation during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provisioned without overloading the system.

  2. The HydroServer Platform for Sharing Hydrologic Data

    Science.gov (United States)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

The CUAHSI Hydrologic Information System (HIS) is an internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture is comprised of servers for publishing and sharing data, a centralized catalog to support cross-server data discovery, and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. The CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its

  3. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    Science.gov (United States)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (such as sensor position) and can jointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies and can therefore be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-to-the-user compression at ratios greater than 2:1 to 4:1, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods such as object detection.
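The patch idea can be sketched in a few lines (the grid size, synthetic cloud, and query box are illustrative assumptions; the actual PCS runs inside a database server with compressed storage): points are bucketed into grid cells, and a spatial query first filters candidate patches, then refines on the points inside them.

```python
import random
from collections import defaultdict

random.seed(42)
PATCH = 1.0  # assumed patch (grid cell) edge length

# Synthetic point cloud of (x, y, z) tuples standing in for Lidar data.
points = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 2))
          for _ in range(10_000)]

# Group points into patches keyed by grid cell, mimicking the PCS idea of
# storing and filtering groups of points rather than individual ones.
patches = defaultdict(list)
for p in points:
    key = (int(p[0] // PATCH), int(p[1] // PATCH), int(p[2] // PATCH))
    patches[key].append(p)

def points_in_box(lo, hi):
    """Axis-aligned box query: visit only candidate patches, then refine."""
    out = []
    for kx in range(int(lo[0] // PATCH), int(hi[0] // PATCH) + 1):
        for ky in range(int(lo[1] // PATCH), int(hi[1] // PATCH) + 1):
            for kz in range(int(lo[2] // PATCH), int(hi[2] // PATCH) + 1):
                for p in patches.get((kx, ky, kz), ()):
                    if all(lo[i] <= p[i] <= hi[i] for i in range(3)):
                        out.append(p)
    return out

hits = points_in_box((2, 2, 0), (3, 3, 1))
print(f"{len(patches)} patches, {len(hits)} points in query box")
```

The patch layer is what makes filtering cheap: the coarse grid prunes almost all points before any per-point test runs, which is the same reason the PCS filters patches in the 0.1 to 1 s range.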

  4. APT analyses of deuterium-loaded Fe/V multi-layered films

    KAUST Repository

    Gemma, R.

    2009-04-01

The interaction of hydrogen with metallic multi-layered thin films remains a hot topic. Detailed knowledge of such chemically modulated systems is required if they are to be applied as storage media in hydrogen energy systems. In this study, the deuterium concentration profile of an Fe/V multi-layer was investigated by atom probe tomography (APT) at 60 and 30 K. It is first shown that a deuterium-loaded sample can easily react with oxygen at the Pd capping layer on Fe/V; it is therefore highly desirable to avoid any oxygen exposure between D₂ loading and APT analysis. The analysis temperature also has an impact on the D concentration profile. The result taken at 60 K shows clear traces of surface segregation of D atoms towards the analysis surface. The observed diffusion profile of D allows us to estimate an apparent diffusion coefficient D. The calculated D at 60 K is of the order of 10⁻¹⁷ cm²/s, deviating by 6 orders of magnitude from the extrapolated value. This was interpreted in terms of alloying, D-trapping at defects, and the large extension over which the extrapolation was done. A D concentration profile taken at 30 K shows no segregation anymore and a homogeneous distribution at C_D = 0.05(2) D/Me, in good accordance with that measured in the corresponding pressure-composition isotherm. (C) 2008 Elsevier B.V. All rights reserved.

  5. APT analyses of deuterium-loaded Fe/V multi-layered films

    KAUST Repository

    Gemma, R.; Al-Kassab, Talaat; Kirchheim, R.; Pundt, A.

    2009-01-01

The interaction of hydrogen with metallic multi-layered thin films remains a hot topic. Detailed knowledge of such chemically modulated systems is required if they are to be applied as storage media in hydrogen energy systems. In this study, the deuterium concentration profile of an Fe/V multi-layer was investigated by atom probe tomography (APT) at 60 and 30 K. It is first shown that a deuterium-loaded sample can easily react with oxygen at the Pd capping layer on Fe/V; it is therefore highly desirable to avoid any oxygen exposure between D₂ loading and APT analysis. The analysis temperature also has an impact on the D concentration profile. The result taken at 60 K shows clear traces of surface segregation of D atoms towards the analysis surface. The observed diffusion profile of D allows us to estimate an apparent diffusion coefficient D. The calculated D at 60 K is of the order of 10⁻¹⁷ cm²/s, deviating by 6 orders of magnitude from the extrapolated value. This was interpreted in terms of alloying, D-trapping at defects, and the large extension over which the extrapolation was done. A D concentration profile taken at 30 K shows no segregation anymore and a homogeneous distribution at C_D = 0.05(2) D/Me, in good accordance with that measured in the corresponding pressure-composition isotherm. (C) 2008 Elsevier B.V. All rights reserved.

  6. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  7. Economic emission dispatching with variations of wind power and loads using multi-objective optimization by learning automata

    International Nuclear Information System (INIS)

    Liao, H.L.; Wu, Q.H.; Li, Y.Z.; Jiang, L.

    2014-01-01

Highlights: • Apply multi-objective optimization by learning automata to power systems. • Sequentially dimensional search and state memory are incorporated. • Track dispatch under significant variations of wind power and load demand. • Good performance in terms of accuracy, distribution and computation time. - Abstract: This paper is concerned with using multi-objective optimization by learning automata (MOLA) for economic emission dispatching in an environment where wind power and loads vary. With its capabilities of sequentially dimensional search and state memory, MOLA is able to find accurate solutions while satisfying two objectives: fuel cost coupled with environmental emission, and voltage stability. Its search quality and efficiency are measured using the hypervolume indicator to assess the quality of the Pareto front, and demonstrated by tracking the dispatch solutions under significant variations of wind power and load demand. The simulation studies are carried out on the modified midwestern American electric power system and the IEEE 118-bus test system, in which wind power penetration and load variations are present. Evaluated on these two power systems, MOLA is fully compared with the multi-objective evolutionary algorithm based on decomposition (MOEA/D) and the non-dominated sorting genetic algorithm II (NSGA-II). The simulation results show the superiority of MOLA over NSGA-II and MOEA/D, as it is able to obtain more accurate and more widely distributed Pareto fronts. In the dynamic environment where both wind speed and load demand vary, MOLA outperforms the other two algorithms with respect to tracking ability and solution accuracy.
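The hypervolume indicator used above to compare Pareto fronts can be illustrated for a two-objective minimization problem (the fronts and reference point below are invented for the example, not dispatch results from the paper): a front that dominates a larger region below the reference point scores higher.

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-D Pareto front of a minimization
    problem, measured against reference point `ref` (larger is better).

    Sorts by the first objective, drops dominated points, then sums the
    rectangular slices between consecutive non-dominated points.
    """
    nd, best_f2 = [], float("inf")
    for f1, f2 in sorted(set(front)):
        if f2 < best_f2:           # keep only non-dominated points
            nd.append((f1, f2))
            best_f2 = f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in nd:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Illustrative (cost, emission) fronts; front_a dominates more objective space.
front_a = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
front_b = [(1.5, 4.0), (3.0, 3.0), (4.0, 2.0)]
ref = (5.0, 5.0)
print(hypervolume_2d(front_a, ref))  # 11.0
print(hypervolume_2d(front_b, ref))  # 6.5
```

This is how a single scalar can rank whole fronts: both accuracy (closeness to the true front) and spread (wide distribution) increase the dominated volume, the two properties on which MOLA is reported to win.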

  8. m2-ABKS: Attribute-Based Multi-Keyword Search over Encrypted Personal Health Records in Multi-Owner Setting.

    Science.gov (United States)

    Miao, Yinbin; Ma, Jianfeng; Liu, Ximeng; Wei, Fushan; Liu, Zhiquan; Wang, Xu An

    2016-11-01

Online personal health record (PHR) systems increasingly shift data storage and search operations to the cloud server so as to enjoy its elastic resources and lessen the computational burden of storage. As data from multiple patients is stored in the cloud server simultaneously, it is a challenge to guarantee the confidentiality of PHR data while allowing data users to search the encrypted data in an efficient and privacy-preserving way. To this end, we design a secure cryptographic primitive called attribute-based multi-keyword search over encrypted personal health records in a multi-owner setting, which supports both fine-grained access control and multi-keyword search via Ciphertext-Policy Attribute-Based Encryption. Formal security analysis proves our scheme is selectively secure against chosen-keyword attack. As a further contribution, we conduct empirical experiments on a real-world dataset to show its feasibility and practicality in a broad range of actual scenarios without incurring additional computational burden.

  9. Windows Terminal Servers Orchestration

    Science.gov (United States)

    Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim

    2017-10-01

Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and the Microsoft System Center suite enables automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration not only reduces the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes, as well as catering for workload peaks.

  10. The RNAsnp web server

    DEFF Research Database (Denmark)

    Radhakrishnan, Sabarinathan; Tafer, Hakim; Seemann, Ernst Stefan

    2013-01-01

    , are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected...... to a local mirror of the UCSC genome browser database that enables the users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at: http://rth.dk/resources/rnasnp/....

  11. Optimizing queries in SQL Server 2008

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2010-05-01

Full Text Available Starting from the need to develop efficient IT systems, we intend to review the optimization methods and tools that can be used by SQL Server database administrators and developers of applications based on Microsoft technology, focusing on the latest version of the proprietary DBMS, SQL Server 2008. We’ll reflect on the objectives to be considered in improving the performance of SQL Server instances, we will tackle the mostly used techniques for analyzing and optimizing queries and we will describe the “Optimize for ad hoc workloads”, “Plan Freezing” and “Optimize for unknown” new options, accompanied by relevant code examples.

  12. Loading Analysis of Modular Multi-level Converter for Offshore High-voltage DC Application under Various Grid Faults

    DEFF Research Database (Denmark)

    Liu, Hui; Ma, Ke; Loh, Poh Chiang

    2016-01-01

    challenges but may also result in overstressed components for the modular multi-level converter. However, the thermal loading of the modular multi-level converter under various grid faults has not yet been clarified. In this article, the power loss and thermal performance of the modular multi-level converter...... low-voltage ride-through strongly depend on the types and severity values of grid voltage dips. The thermal distribution among the three phases of the modular multi-level converter may be quite uneven, and some devices are much more stressed than the normal operating condition, which may...

  13. Novel instrument for characterizing comprehensive physical properties under multi-mechanical loads and multi-physical field coupling conditions

    Science.gov (United States)

    Liu, Changyi; Zhao, Hongwei; Ma, Zhichao; Qiao, Yuansen; Hong, Kun; Ren, Zhuang; Zhang, Jianhai; Pei, Yongmao; Ren, Luquan

    2018-02-01

Functional materials, represented by ferromagnetics and ferroelectrics, are widely used in advanced sensing and precision actuation due to their special characteristics under the coupling interactions of complex loads and external physical fields. However, conventional devices for material characterization can provide only a limited set of load types and physical fields and cannot simulate the actual service conditions of materials. A multi-field coupling instrument for characterization has been designed and implemented to overcome this barrier and measure comprehensive physical properties under complex service conditions. The loading modes include tension, compression, bending, torsion, and fatigue among mechanical loads, as well as different external physical fields, including electric, magnetic, and thermal fields. In order to offer a variety of information to reveal mechanical damage or deformation forms, a series of microscale measurement methods are integrated with the instrument, including an indentation unit and an in situ microimaging module. Finally, several coupling experiments which cover all the loading and measurement functions of the instrument have been implemented. The results illustrate the functions and characteristics of the instrument and reveal the variation in mechanical and electromagnetic properties of a piezoelectric transducer ceramic, a TbDyFe alloy, and a carbon fiber reinforced polymer under coupling conditions.

  14. Experience with Server Self Service Center (S3C)

    International Nuclear Information System (INIS)

    Sucik, Juraj; Bukowiec, Sebastian

    2010-01-01

    CERN has a successful experience with running Server Self Service Center (S3C) for virtual server provisioning which is based on Microsoft® Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor based virtualization (Hyper-V) there are new possibilities for the expansion of the current service. This paper describes the architecture of the redesigned virtual Server Self Service based on Hyper-V which provides dynamically scalable virtualized resources on demand as needed and outlines the possible implications on the future use of virtual machines at CERN.

  15. Experience with Server Self Service Center (S3C)

    CERN Multimedia

    Sucik, J

    2009-01-01

    CERN has a successful experience with running Server Self Service Center (S3C) for virtual server provisioning which is based on Microsoft® Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor based virtualization (Hyper-V) there are new possibilities for the expansion of the current service. This paper describes the architecture of the redesigned virtual Server Self Service based on Hyper-V which provides dynamically scalable virtualized resources on demand as needed and outlines the possible implications on the future use of virtual machines at CERN.

  16. Energy Efficiency in Small Server Rooms: Field Surveys and Findings

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Iris [Hoi; Greenberg, Steve; Mahdavi, Roozbeh; Brown, Richard; Tschudi, William

    2014-08-11

    Fifty-seven percent of US servers are housed in server closets, server rooms, and localized data centers, in what are commonly referred to as small server rooms, which comprise 99 percent of all server spaces in the US. While many mid-tier and enterprise-class data centers are owned by large corporations that consider energy efficiency a goal to minimize business operating costs, small server rooms typically are not similarly motivated. They are characterized by decentralized ownership and management and come in many configurations, which creates a unique set of efficiency challenges. To develop energy efficiency strategies for these spaces, we surveyed 30 small server rooms across eight institutions, and selected four of them for detailed assessments. The four rooms had Power Usage Effectiveness (PUE) values ranging from 1.5 to 2.1. Energy saving opportunities ranged from no- to low-cost measures such as raising cooling set points and better airflow management, to more involved but cost-effective measures including server consolidation and virtualization, and dedicated cooling with economizers. We found that inefficiencies mainly resulted from organizational rather than technical issues. Because of the inherent space and resource limitations, the most effective measure is to operate servers through energy-efficient cloud-based services or well-managed larger data centers, rather than server rooms. Backup power requirements and IT and cooling efficiency should be evaluated to minimize energy waste in the server space. Utility programs are instrumental in raising awareness and spreading technical knowledge on server operation, and the implementation of energy efficiency measures in small server rooms.
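
The PUE values reported above (1.5 to 2.1) follow the standard definition of Power Usage Effectiveness: total facility energy divided by IT equipment energy. A minimal sketch of the calculation (the monthly energy figures below are hypothetical, not survey data):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.

    A PUE of 1.0 would mean every watt goes to IT equipment; the small
    server rooms assessed above fall between 1.5 and 2.1.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly figures for a small server room:
print(round(pue(total_facility_kwh=18_000, it_equipment_kwh=10_000), 2))  # → 1.8
```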

  17. Managing server clusters on intermittent power

    Directory of Open Access Journals (Sweden)

    Navin Sharma

    2015-12-01

    Full Text Available Reducing the energy footprint of data centers continues to receive significant attention due to both its financial and environmental impact. There are numerous methods that limit the impact of both factors, such as expanding the use of renewable energy or participating in automated demand-response programs. To take advantage of these methods, servers and applications must gracefully handle intermittent constraints in their power supply. In this paper, we propose blinking—metered transitions between a high-power active state and a low-power inactive state—as the primary abstraction for conforming to intermittent power constraints. We design Blink, an application-independent hardware–software platform for developing and evaluating blinking applications, and define multiple types of blinking policies. We then use Blink to design both a blinking version of memcached (BlinkCache) and a multimedia cache (GreenCache) to demonstrate how application characteristics affect the design of blink-aware distributed applications. Our results show that for BlinkCache, a load-proportional blinking policy combines the advantages of both activation and synchronous blinking for realistic Zipf-like popularity distributions and wind/solar power signals by achieving near optimal hit rates (within 15% of an activation policy), while also providing fairer access to the cache (within 2% of a synchronous policy) for equally popular objects. In contrast, for GreenCache, due to multimedia workload patterns, we find that a staggered load-proportional blinking policy with replication of the first chunk of each video reduces the buffering time at all power levels, as compared to activation or load-proportional blinking policies.
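
Not Blink's actual policy code, but the load-proportional idea above can be sketched as a duty-cycle computation: the fraction of each interval a server spends active is chosen so its average draw matches the available power. The power figures and interval length are assumptions for illustration:

```python
def blink_schedule(available_watts, peak_watts, idle_watts, interval_s=60):
    """Load-proportional blinking sketch: seconds per interval a server
    should spend in the active state so that its average power draw
    matches the (intermittent) available power.

    Active state draws peak_watts, inactive draws idle_watts; the result
    is clamped to [0, interval_s].
    """
    if peak_watts <= idle_watts:
        raise ValueError("peak power must exceed idle power")
    frac = (available_watts - idle_watts) / (peak_watts - idle_watts)
    frac = min(1.0, max(0.0, frac))
    return frac * interval_s

# With 60 W available, a 100 W-peak / 20 W-idle server blinks on for 30 s/min:
print(blink_schedule(available_watts=60, peak_watts=100, idle_watts=20))  # → 30.0
```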

  18. Use of Multi-Response Format Test in the Assessment of Medical Students’ Critical Thinking Ability

    Science.gov (United States)

    Mafinejad, Mahboobeh Khabaz; Monajemi, Alireza; Jalili, Mohammad; Soltani, Akbar; Rasouli, Javad

    2017-01-01

    Introduction To evaluate students' critical thinking skills effectively, a change in assessment practices is a must. The assessment of a student's ability to think critically is a constant challenge, and yet there is considerable debate on the best assessment method. There is evidence that open and closed-ended response questions, by their intrinsic nature, measure separate cognitive abilities. Aim To assess the critical thinking ability of medical students by using a multi-response format of assessment. Materials and Methods A cross-sectional study was conducted on a group of 159 undergraduate third-year medical students. All the participants completed the California Critical Thinking Skills Test (CCTST), consisting of 34 multiple-choice questions to measure general critical thinking skills, and a researcher-developed test that combines open and closed-ended questions. A researcher-developed 48-question exam, consisting of 8 short-answer and 5 essay questions, 19 Multiple-Choice Questions (MCQ), and 16 True-False (TF) questions, was used to measure critical thinking skills. Correlation analyses were performed using Pearson's coefficient to explore the association between the total scores of tests and subtests. Results One hundred and fifty-nine students participated in this study. The sample comprised 81 females (51%) and 78 males (49%) with an age range of 20±2.8 years (mean 21.2 years). The response rate was 64.1%. A significant positive correlation was found between types of questions and critical thinking scores, of which the correlations of MCQ (r=0.82) and essay questions (r=0.77) were strongest. Significant positive correlations between the multi-response format test and the CCTST's subscales were seen in analysis, evaluation, inference and inductive reasoning. Unlike the CCTST subscales, the multi-response format test has a weak correlation with the CCTST total score (r=0.45, p=0.06). Conclusion This study highlights the importance of considering multi-response format test in

  19. Use of Multi-Response Format Test in the Assessment of Medical Students' Critical Thinking Ability.

    Science.gov (United States)

    Mafinejad, Mahboobeh Khabaz; Arabshahi, Seyyed Kamran Soltani; Monajemi, Alireza; Jalili, Mohammad; Soltani, Akbar; Rasouli, Javad

    2017-09-01

    To evaluate students' critical thinking skills effectively, a change in assessment practices is a must. The assessment of a student's ability to think critically is a constant challenge, and yet there is considerable debate on the best assessment method. There is evidence that open and closed-ended response questions, by their intrinsic nature, measure separate cognitive abilities. To assess the critical thinking ability of medical students by using a multi-response format of assessment. A cross-sectional study was conducted on a group of 159 undergraduate third-year medical students. All the participants completed the California Critical Thinking Skills Test (CCTST), consisting of 34 multiple-choice questions to measure general critical thinking skills, and a researcher-developed test that combines open and closed-ended questions. A researcher-developed 48-question exam, consisting of 8 short-answer and 5 essay questions, 19 Multiple-Choice Questions (MCQ), and 16 True-False (TF) questions, was used to measure critical thinking skills. Correlation analyses were performed using Pearson's coefficient to explore the association between the total scores of tests and subtests. One hundred and fifty-nine students participated in this study. The sample comprised 81 females (51%) and 78 males (49%) with an age range of 20±2.8 years (mean 21.2 years). The response rate was 64.1%. A significant positive correlation was found between types of questions and critical thinking scores, of which the correlations of MCQ (r=0.82) and essay questions (r=0.77) were strongest. Significant positive correlations between the multi-response format test and the CCTST's subscales were seen in analysis, evaluation, inference and inductive reasoning. Unlike the CCTST subscales, the multi-response format test has a weak correlation with the CCTST total score (r=0.45, p=0.06). This study highlights the importance of considering multi-response format test in the assessment of critical thinking abilities of medical
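
Both versions of this study rely on Pearson's correlation coefficient to relate question-format scores to CCTST results. A self-contained sketch of the computation (the score lists below are fabricated for illustration, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(xs)
    if n != len(ys) or n < 2:
        raise ValueError("need two equal-length samples of size >= 2")
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (fabricated) MCQ scores vs. CCTST totals for six students:
mcq = [12, 15, 9, 18, 14, 11]
cctst = [22, 24, 15, 27, 20, 18]
print(round(pearson_r(mcq, cctst), 2))  # → 0.94
```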

  20. Server-Aided Two-Party Computation with Simultaneous Corruption

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Damgård, Ivan Bjerre; Ranellucci, Samuel

    We consider secure two-party computation in the client-server model where there are two adversaries that operate separately but simultaneously, each of them corrupting one of the parties and a restricted subset of servers that they interact with. We model security via the local universal composab...

  1. Maintenance in Single-Server Queues: A Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Najeeb Al-Matar

    2009-01-01

    We examine a single-server queue with bulk input and secondary work during the server's multiple vacations. When the buffer contents become exhausted, the server leaves the system to perform some diagnostic service on a minimum of L jobs clustered in packets of random sizes (event A). The server is not supposed to stay longer than T units of time (event B). The server returns to the system when A or B occurs, whichever comes first. On the other hand, he may not break off service of a packet in the middle even if A or B occurs. Furthermore, the server waits for batches of customers to arrive if upon his return the queue is still empty. We obtain a compact and explicit-form functional for the queueing process in equilibrium.

  2. Locating Nearby Copies of Replicated Internet Servers

    National Research Council Canada - National Science Library

    Guyton, James D; Schwartz, Michael F

    1995-01-01

    In this paper we consider the problem of choosing among a collection of replicated servers focusing on the question of how to make choices that segregate client/server traffic according to network topology...

  3. GeoServer beginner's guide

    CERN Document Server

    Youngblood, Brian

    2013-01-01

    Step-by-step instructions are included, and the needs of a beginner are fully satisfied by the book. The book consists of plenty of examples with accompanying screenshots and code for an easy learning curve. You are a web developer with knowledge of server-side scripting, and have experience with installing applications on the server. You want more than Google Maps, offering dynamically built maps on your site with your latest geospatial data stored in MySQL, PostGIS, MsSQL or Oracle. If this is the case, this book is meant for you.

  4. Server hardware trends

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk will cover the status of the current and upcoming offers on server platforms, focusing mainly on the processing and storage parts. Alternative solutions like Open Compute (OCP) will be quickly covered.

  5. Empirical and simulated critical loads for nitrogen deposition in California mixed conifer forests

    International Nuclear Information System (INIS)

    Fenn, M.E.; Jovan, S.; Yuan, F.; Geiser, L.; Meixner, T.; Gimeno, B.S.

    2008-01-01

    Empirical critical loads (CL) for N deposition were determined from changes in epiphytic lichen communities, elevated NO₃⁻ leaching in streamwater, and reduced fine root biomass in ponderosa pine (Pinus ponderosa Dougl. ex Laws.) at sites with varying N deposition. The CL for lichen community impacts of 3.1 kg ha⁻¹ year⁻¹ is expected to protect all components of the forest ecosystem from the adverse effects of N deposition. Much of the western Sierra Nevada is above the lichen-based CL, showing significant changes in lichen indicator groups. The empirical N deposition threshold and that simulated by the DayCent model for enhanced NO₃⁻ leaching were 17 kg N ha⁻¹ year⁻¹. DayCent estimated that elevated NO₃⁻ leaching in the San Bernardino Mountains began in the late 1950s. Critical values for litter C:N (34.1), ponderosa pine foliar N (1.1%), and N concentrations (1.0%) in the lichen Letharia vulpina ((L.) Hue) are indicative of CL exceedance. - Critical loads for N deposition effects on lichens, trees and nitrate leaching provide benchmarks for protecting California forests
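
Critical-load exceedance as used in this record is simply deposition in excess of the CL benchmark. A minimal sketch using the lichen-based CL of 3.1 kg N per hectare per year quoted in the abstract (the site deposition values are hypothetical):

```python
LICHEN_CL = 3.1  # kg N ha^-1 yr^-1, empirical critical load from the abstract

def exceedance(deposition: float, critical_load: float = LICHEN_CL) -> float:
    """Amount by which N deposition exceeds the critical load (0 if below)."""
    return max(0.0, deposition - critical_load)

# Hypothetical site depositions in kg N ha^-1 yr^-1:
for dep in (1.8, 3.1, 12.4):
    print(dep, "->", round(exceedance(dep), 1))
```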

  6. Empirical and simulated critical loads for nitrogen deposition in California mixed conifer forests

    Energy Technology Data Exchange (ETDEWEB)

    Fenn, M.E. [USDA Forest Service, Pacific Southwest Research Station, 4955 Canyon Crest Drive, Riverside, CA 92507 (United States)], E-mail: mfenn@fs.fed.us; Jovan, S. [USDA Forest Service, Pacific Northwest Research Station, 620 SW Main, Suite 400, Portland, OR 97205 (United States); Yuan, F. [Department of Hydrology and Water Resources, University of Arizona, Tucson, AZ 85721 (United States); Geiser, L. [USDA Forest Service, Pacific Northwest Air Resource Management Program, PO Box 1148, Corvallis, OR 97339 (United States); Meixner, T. [Department of Hydrology and Water Resources, University of Arizona, Tucson, AZ 85721 (United States); Gimeno, B.S. [Ecotoxicology of Air Pollution, CIEMAT (ed. 70), Avda. Complutense 22, 28040 Madrid (Spain)

    2008-10-15

    Empirical critical loads (CL) for N deposition were determined from changes in epiphytic lichen communities, elevated NO₃⁻ leaching in streamwater, and reduced fine root biomass in ponderosa pine (Pinus ponderosa Dougl. ex Laws.) at sites with varying N deposition. The CL for lichen community impacts of 3.1 kg ha⁻¹ year⁻¹ is expected to protect all components of the forest ecosystem from the adverse effects of N deposition. Much of the western Sierra Nevada is above the lichen-based CL, showing significant changes in lichen indicator groups. The empirical N deposition threshold and that simulated by the DayCent model for enhanced NO₃⁻ leaching were 17 kg N ha⁻¹ year⁻¹. DayCent estimated that elevated NO₃⁻ leaching in the San Bernardino Mountains began in the late 1950s. Critical values for litter C:N (34.1), ponderosa pine foliar N (1.1%), and N concentrations (1.0%) in the lichen Letharia vulpina ((L.) Hue) are indicative of CL exceedance. - Critical loads for N deposition effects on lichens, trees and nitrate leaching provide benchmarks for protecting California forests.

  7. On-line single server dial-a-ride problems

    NARCIS (Netherlands)

    Feuerstein, E.; Stougie, L.

    1998-01-01

    In this paper results on the dial-a-ride problem with a single server are presented. Requests for rides consist of two points in a metric space, a source and a destination. A ride has to be made by the server from the source to the destination. The server travels at unit speed in the metric space

  8. Personalized Pseudonyms for Servers in the Cloud

    OpenAIRE

    Xiao Qiuyu; Reiter Michael K.; Zhang Yinqian

    2017-01-01

    A considerable and growing fraction of servers, especially of web servers, is hosted in compute clouds. In this paper we opportunistically leverage this trend to improve privacy of clients from network attackers residing between the clients and the cloud: We design a system that can be deployed by the cloud operator to prevent a network adversary from determining which of the cloud’s tenant servers a client is accessing. The core innovation in our design is a PoPSiCl (pronounced “popsicle”), ...

  9. Elastic Buckling Behaviour of General Multi-Layered Graphene Sheets

    Directory of Open Access Journals (Sweden)

    Rong Ming Lin

    2015-04-01

    Full Text Available Elastic buckling behaviour of multi-layered graphene sheets is rigorously investigated. Van der Waals forces are modelled, to a first order approximation, as linear physical springs which connect the nodes between the layers. Critical buckling loads and their associated modes are established and analyzed under different boundary conditions, aspect ratios and compressive loading ratios in the case of graphene sheets compressed in two perpendicular directions. Various practically possible loading configurations are examined and their effect on buckling characteristics is assessed. To model more accurately the buckling behaviour of multi-layered graphene sheets, a physically more representative and realistic mixed boundary support concept is proposed and applied. For the fundamental buckling mode under mixed boundary support, the layers with different boundary supports deform similarly but non-identically, leading to resultant van der Waals bonding forces between the layers which in turn affect critical buckling load. Results are compared with existing known solutions to illustrate the excellent numerical accuracy of the proposed modelling approach. The buckling characteristics of graphene sheets presented in this paper form a comprehensive and wholesome study which can be used as potential structural design guideline when graphene sheets are employed for nano-scale sensing and actuation applications such as nano-electro-mechanical systems.
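
As a point of reference for the critical buckling loads discussed above, the classical single-plate result gives the critical load per unit width as N_cr = k·π²·D/b², with flexural rigidity D = E·t³/(12(1−ν²)). This is the textbook baseline, not the paper's van der Waals-coupled multi-layer model, and the graphene-like parameter values below are illustrative assumptions:

```python
import math

def flexural_rigidity(E, t, nu):
    """Bending rigidity D = E t^3 / (12 (1 - nu^2)) of a thin plate."""
    return E * t ** 3 / (12.0 * (1.0 - nu ** 2))

def critical_buckling_load(E, t, nu, b, k=4.0):
    """Classical critical load per unit width, N_cr = k pi^2 D / b^2, for a
    simply supported single plate of width b (baseline, not the vdW-coupled
    model of the paper). k depends on boundary conditions and aspect ratio."""
    D = flexural_rigidity(E, t, nu)
    return k * math.pi ** 2 * D / b ** 2

# Illustrative graphene-like numbers (assumed): E = 1 TPa, t = 0.34 nm,
# nu = 0.16, plate width b = 10 nm; result is on the order of 1 N/m.
n_cr = critical_buckling_load(E=1e12, t=0.34e-9, nu=0.16, b=10e-9)
```

Note the b⁻² scaling: halving the plate width quadruples the critical load, which is why boundary conditions and aspect ratio dominate the buckling characteristics surveyed above.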

  10. Securing SQL Server Protecting Your Database from Attackers

    CERN Document Server

    Cherry, Denny

    2011-01-01

    There is a lot at stake for administrators taking care of servers, since they house sensitive data like credit cards, social security numbers, medical records, and much more. In Securing SQL Server you will learn about the potential attack vectors that can be used to break into your SQL Server database, and how to protect yourself from these attacks. Written by a Microsoft SQL Server MVP, you will learn how to properly secure your database, from both internal and external threats. Best practices and specific tricks employed by the author will also be revealed. Learn expert techniques to protec

  11. TSKT-ORAM: A Two-Server k-ary Tree Oblivious RAM without Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Jinsheng Zhang

    2017-09-01

    Full Text Available This paper proposes TSKT-oblivious RAM (ORAM), an efficient multi-server ORAM construction, to protect a client's access pattern to outsourced data. TSKT-ORAM organizes each of the server storages as a k-ary tree and adopts XOR-based private information retrieval (PIR) and a novel delayed eviction technique to optimize both the data query and data eviction process. TSKT-ORAM is proven to protect the data access pattern privacy with a failure probability of 2⁻⁸⁰ when the system parameter k ≥ 128. Meanwhile, given a constant-size local storage, when N (i.e., the total number of outsourced data blocks) ranges from 2¹⁶–2³⁴, the communication cost of TSKT-ORAM is only 22–46 data blocks. Asymptotic analysis and practical comparisons are conducted to show that TSKT-ORAM incurs lower communication cost, storage cost and access delay in practical scenarios than the compared state-of-the-art ORAM schemes.
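
The XOR-based PIR primitive that TSKT-ORAM builds on can be illustrated in a few lines. This is a generic two-server XOR-PIR sketch with toy block sizes, not the paper's construction: each server sees only a random-looking subset of indices, yet XORing the two replies recovers exactly the target block.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_reply(db, indices):
    """Each server XORs together the blocks at the indices it was sent."""
    out = bytes(len(db[0]))
    for i in indices:
        out = xor_bytes(out, db[i])
    return out

def pir_query(db, target):
    """Two-server XOR-based PIR: the subsets differ only at `target`, so
    all other blocks cancel when the two replies are XORed; neither
    server alone learns which block was requested."""
    n = len(db)
    subset1 = {i for i in range(n) if secrets.randbits(1)}
    subset2 = subset1 ^ {target}  # symmetric difference flips target's membership
    reply1 = server_reply(db, subset1)
    reply2 = server_reply(db, subset2)
    return xor_bytes(reply1, reply2)

db = [bytes([i]) * 4 for i in range(8)]  # 8 toy blocks of 4 bytes each
assert pir_query(db, 5) == db[5]
```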

  12. I-TASSER server for protein 3D structure prediction

    Directory of Open Access Journals (Sweden)

    Zhang Yang

    2008-01-01

    Full Text Available Abstract Background Prediction of 3-dimensional protein structures from amino acid sequences represents one of the most important problems in computational structural biology. The community-wide Critical Assessment of Structure Prediction (CASP) experiments have been designed to obtain an objective assessment of the state-of-the-art of the field, where I-TASSER was ranked as the best method in the server section of the recent 7th CASP experiment. Our laboratory has since then received numerous requests about the public availability of the I-TASSER algorithm and the usage of the I-TASSER predictions. Results An on-line version of I-TASSER has been developed at the KU Center for Bioinformatics, which has generated protein structure predictions for thousands of modeling requests from more than 35 countries. A scoring function (C-score) based on the relative clustering structural density and the consensus significance score of multiple threading templates is introduced to estimate the accuracy of the I-TASSER predictions. A large-scale benchmark test demonstrates a strong correlation between the C-score and the TM-score (a structural similarity measurement with values in [0, 1]) of the first models, with a correlation coefficient of 0.91. Using a C-score cutoff > -1.5 for the models of correct topology, both false positive and false negative rates are below 0.1. Combining C-score and protein length, the accuracy of the I-TASSER models can be predicted with an average error of 0.08 for TM-score and 2 Å for RMSD. Conclusion The I-TASSER server has been developed to generate automated full-length 3D protein structural predictions where the benchmarked scoring system helps users to obtain quantitative assessments of the I-TASSER models. The output of the I-TASSER server for each query includes up to five full-length models, the confidence score, the estimated TM-score and RMSD, and the standard deviation of the estimations. The I-TASSER server is freely available
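
The benchmarked C-score cutoff reported above translates directly into a model-screening rule. A trivial sketch (the cutoff is the one quoted in the abstract; the example scores are hypothetical):

```python
def likely_correct_topology(c_score: float, cutoff: float = -1.5) -> bool:
    """Screen predicted models with the abstract's benchmarked rule:
    C-score > -1.5 keeps false positive and false negative rates below 0.1."""
    return c_score > cutoff

# Hypothetical C-scores for five predicted models:
scores = [1.2, -0.4, -1.5, -2.9, 0.3]
print([likely_correct_topology(s) for s in scores])  # → [True, True, False, False, True]
```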

  13. Web Server Configuration for an Academic Intranet

    National Research Council Canada - National Science Library

    Baltzis, Stamatios

    2000-01-01

    One of the factors that boosted this ability was the evolution of web servers. Using web server technology, one can connect and exchange information with the most remote places all over the...

  14. Preliminary modelling and mapping of critical loads for cadmium and lead in Europe

    NARCIS (Netherlands)

    Hettelingh JP; Slootweg J; Posch M; Ilyin I; MNV-CCE/WGE-IPC M&M Coordination Center for Effects; EMEP-Meteorological Synthesizing Centre-East

    2004-01-01

    At its 20th session, the "Working Group on Effects" (WGE) of the "Convention on Long-range Transboundary Air Pollution" under the "United Nations Economic Commission for Europe" (UNECE-CLRTAP) decided that the method for critical deposition values (critical loads) for cadmium and lead in

  15. Tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2007-01-01

    We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the so-called slow-down network, see [2]. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of

  16. Environment server. Digital field information archival technology

    International Nuclear Information System (INIS)

    Kita, Nobuyuki; Kita, Yasuyo; Yang, Hai-quan

    2002-01-01

    For the safe operation of nuclear power plants, it is important to store various kinds of plant information for a long period and to visualize the stored information as desired. The system called Environment Server is developed to realize this. In this paper, the general concept of Environment Server is explained, and its partial implementation for archiving the image information gathered by inspection mobile robots into a virtual world and visualizing it is described. An extension of Environment Server for supporting attention sharing is also briefly introduced. (author)

  17. Prediction calculation of HTR-10 fuel loading for the first criticality

    International Nuclear Information System (INIS)

    Jing Xingqing; Yang Yongwei; Gu Yuxiang; Shan Wenzhi

    2001-01-01

    The 10 MW high temperature gas cooled reactor (HTR-10) was built at the Institute of Nuclear Energy Technology, Tsinghua University, and first criticality was attained in Dec. 2000. The high temperature gas cooled reactor physics simulation code VSOP was used to predict the fuel loading for the HTR-10 first criticality. The numbers of fuel elements and graphite elements were predicted to provide a reference for the first criticality experiment. The prediction calculations took into account factors including the double heterogeneity of the fuel element, buckling feedback for the spectrum calculation, the effect of the mixture of graphite and fuel elements, and the correction of the diffusion coefficients near the upper cavity based on transport theory. The effects of impurities in the fuel and graphite elements in the core, and those in the reflector graphite, on the reactivity of the reactor were considered in detail. The first criticality experiment showed that the predicted values and the experimental results were in good agreement, with a relative error of less than 1%, which means the prediction was successful

  18. Influence of the heater material on the critical heat load at boiling of liquids on surfaces with different sizes

    Science.gov (United States)

    Anokhina, E. V.

    2010-05-01

    Data on critical heat loads q_cr for saturated and unsaturated pool boiling of water and ethanol under atmospheric pressure are reported. It is found experimentally that the critical heat load does not necessarily coincide with the heat load causing burnout of the heater, which should be taken into account. The absolute values of q_cr for the boiling of water and ethanol on copper surfaces 65, 80, 100, 120, and 200 μm in diameter; a tungsten surface 100 μm in diameter; and a nichrome surface 100 μm in diameter are obtained experimentally.

  19. A Novel Algorithm of Quantum Random Walk in Server Traffic Control and Task Scheduling

    Directory of Open Access Journals (Sweden)

    Dong Yumin

    2014-01-01

    Full Text Available A quantum random walk optimization model and algorithm for network cluster server traffic control and task scheduling is proposed. In order to solve the problem of server load balancing, we research and discuss the distribution theory of the energy field in quantum mechanics and apply it to data clustering. We introduce the method of random walks and explain what a quantum random walk is. Here, we mainly study the standard model of the one-dimensional quantum random walk. For the data clustering problem in a high-dimensional space, we can decompose one m-dimensional quantum random walk into m one-dimensional quantum random walks. At the end of the paper, we compare the quantum random walk optimization method with GA (genetic algorithm), ACO (ant colony optimization), and SAA (simulated annealing algorithm). At the same time, we prove its validity and rationality through analog and simulation experiments.
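
The standard one-dimensional discrete-time quantum walk that the abstract refers to can be simulated in a few lines with a Hadamard coin. This is a generic textbook sketch, not the authors' optimization algorithm; the symmetric initial coin state is a common convention:

```python
from collections import defaultdict

def hadamard_walk(steps):
    """Simulate a 1-D discrete-time quantum random walk with a Hadamard coin.

    State is a map (position, coin) -> complex amplitude; after each coin
    flip, coin 0 shifts one step left and coin 1 shifts one step right.
    """
    s = 2 ** -0.5
    # Symmetric initial coin state (|0> + i|1>)/sqrt(2) at the origin.
    amp = {(0, 0): s + 0j, (0, 1): s * 1j}
    for _ in range(steps):
        nxt = defaultdict(complex)
        for (pos, coin), a in amp.items():
            # Hadamard coin: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2),
            # then the conditional shift moves each coin component.
            nxt[(pos - 1, 0)] += s * a
            nxt[(pos + 1, 1)] += s * a if coin == 0 else -s * a
        amp = nxt
    # Position distribution: sum |amplitude|^2 over the coin degree of freedom.
    prob = defaultdict(float)
    for (pos, _), a in amp.items():
        prob[pos] += abs(a) ** 2
    return dict(prob)

dist = hadamard_walk(20)
assert abs(sum(dist.values()) - 1.0) < 1e-9  # unitary evolution conserves probability
```

Unlike the classical random walk's diffusive spread, this quantum walk spreads ballistically, which is the property such optimization heuristics aim to exploit.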

  20. Energy-Reduction Offloading Technique for Streaming Media Servers

    Directory of Open Access Journals (Sweden)

    Yeongpil Cho

    2016-01-01

    Full Text Available Recent growth in the popularity of mobile video services raises demand for one of the most popular and convenient methods of delivering multimedia data: video streaming. However, the heterogeneity of existing mobile devices creates the issue of separate video transcoding for each type of device, such as smartphones, tablet PCs, and smart TVs. As a result, an additional burden falls on media servers, which pretranscode multimedia data for a number of clients. Given the even higher growth of video data on the Internet expected in the future, the problem of media server overload is impending. To combat this problem, an offloading method is introduced in this paper. Through the use of the SorTube offloading framework, the video transcoding process is shifted from the centralized media server to a local offloading server. Thus, clients can receive a personally customized video stream, while the load on centralized servers is reduced.

  1. The Development of Mobile Server for Language Courses

    OpenAIRE

    Tokumoto, Hiroko; Yoshida, Mitsunobu

    2009-01-01

    The aim of this paper is to introduce the conceptual design of the mobile server software "MY Server" for language teaching drafted by Tokumoto, and to report how this software is designed and adopted effectively in Japanese language teaching. Most current server systems for education require large-scale facilities, including high-spec server machines and professional administrators, which naturally result in big-budget projects that individual teachers or small schools canno...

  2. Getting started with SQL Server 2014 administration

    CERN Document Server

    Ellis, Gethyn

    2014-01-01

    This is an easy-to-follow, hands-on tutorial that includes real-world examples of SQL Server 2014's new features. Each chapter is explained in a step-by-step manner which guides you to implement the new technology. If you want to create a highly efficient database server then this book is for you. This book is for database professionals and system administrators who want to use the added features of SQL Server 2014 to create a hybrid environment, which is both highly available and allows you to get the best performance from your databases.

  3. Position paper: Live load design criteria for Project W-236A Multi-Function Waste Tank Facility

    International Nuclear Information System (INIS)

    Giller, R.A.

    1995-01-01

    The purpose of this paper is to discuss the live loads applied to the underground storage tanks of the Multi-Function Waste Tank Facility, and to provide the basis for the Project W-236A live load criteria. Project W-236A encompasses building a Weather Enclosure over the two underground storage tanks in the 200-West area. According to the Material Handling Study, the Groves AT 1100 crane used within the Weather Enclosure will have a gross vehicle weight of 66.5 tons. Therefore, a 100-ton concentrated live load is being used for planning the construction of the Weather Enclosure

  4. Locating Hidden Servers

    National Research Council Canada - National Science Library

    Oeverlier, Lasse; Syverson, Paul F

    2006-01-01

    .... Announced properties include server resistance to distributed DoS. Both the EFF and Reporters Without Borders have issued guides that describe using hidden services via Tor to protect the safety of dissidents as well as to resist censorship...

  5. Multi-objective efficiency enhancement using workload spreading in an operational data center

    International Nuclear Information System (INIS)

    Habibi Khalaj, Ali; Scherer, Thomas; Siriwardana, Jayantha; Halgamuge, Saman K.

    2015-01-01

    Highlights: • Development of the heat-flow reduced order model (HFROM) for the IBM ZRL data center. • Verification of the developed HFROM with the experimentally verified CFD model. • Multi-objective efficiency enhancement of the HFROM using particle swarm optimization. • Improving the COP of the data center’s cooling system by about 17%. • Increasing the total allocated workload of the servers by about 10%. - Abstract: The cooling systems of rapidly growing Data Centers (DCs) consume a considerable amount of energy, which is one of the main concerns in designing and operating DCs. The main source of thermal inefficiency in a typical air-cooled DC is hot air recirculation from outlets of servers into their inlets, causing hot spots and leading to performance reduction of the cooling system. In this study, a thermally aware workload spreading method is proposed for reducing the hot spots while the total allocated server workload is increased. The core of this methodology lies in developing an appropriate thermal DC model for the optimization process. Given the fact that utilizing a high-fidelity thermal model of a DC is highly time consuming in the optimization process, a three dimensional reduced order model of a real DC is developed in this study. This model, whose boundary conditions are determined based on measurement data of an operational DC, is developed based on the potential flow theory updated with the Rankine vortex to account for buoyancy and air recirculation effects inside the DC. Before evaluating the proposed method, this model is verified with a computational fluid dynamic (CFD) model simulated with the same boundary conditions. The efficient load spreading method is achieved by applying a multi-objective particle swarm optimization (MOPSO) algorithm whose objectives are to minimize the hot spot occurrences and to maximize the total workload allocated to servers. In this case study, by applying the proposed method, the Coefficient of

  6. Airborne pollutants. Transports, effects and critical loads; Lufttransporterte forurensninger. Tilfoersler, virkninger og taalegrenser

    Energy Technology Data Exchange (ETDEWEB)

    Floeysand, I.; Loebersli, E. [eds.]

    1996-01-01

    The report from a conference concerns two Norwegian research programmes. The first one deals with the transports and effects of airborne pollutants, and the second one relates to the critical loads of nature. A number of 17 papers from the conference are prepared. 318 refs., 57 figs., 22 tabs.

  7. Updated assessment of critical loads of lead and cadmium for European forest soils

    NARCIS (Netherlands)

    Reinds, G.J.; Vries, de W.; Groenenberg, J.E.

    2002-01-01

    At its 20th session the Working Group on Effects (WGE) of the Convention on Long-range Transboundary Air Pollution of the United Nations Economic Commission for Europe (UNECECLRTAP), noted the need to further develop and test the methodology for mapping critical loads for cadmium and lead. To this

  8. Energy-efficient server management; Energieeffizientes Servermanagement

    Energy Technology Data Exchange (ETDEWEB)

    Sauter, B.

    2003-07-01

    This final report for the Swiss Federal Office of Energy (SFOE) presents the results of a project that aimed to develop an automatic shut-down system for the servers used in typical electronic data processing installations to be found in small and medium-sized enterprises. The purpose of shutting down these computers - the saving of energy - is discussed. The development of a shutdown unit on the basis of a web-server that automatically shuts down the servers connected to it and then interrupts their power supply is described. The functions of the unit, including pre-set times for switching on and off, remote operation via the Internet and its interaction with clients connected to it are discussed. Examples of the system's user interface are presented.

  9. Multi-stage crypto ransomware attacks: A new emerging cyber threat to critical infrastructure and industrial control systems

    OpenAIRE

    Aaron Zimba; Zhaoshun Wang; Hongsong Chen

    2018-01-01

    The inevitable integration of critical infrastructure to public networks has exposed the underlying industrial control systems to various attack vectors. In this paper, we model multi-stage crypto ransomware attacks, which are today an emerging cyber threat to critical infrastructure. We evaluate our modeling approach using multi-stage attacks by the infamous WannaCry ransomware. The static malware analysis results uncover the techniques employed by the ransomware to discover vulnerable nodes...

  10. Application of static critical load models for acidity to high mountain lakes in Europe

    Czech Academy of Sciences Publication Activity Database

    Curtis, C. J.; Barbieri, A.; Camarero, L.; Gabathuler, M.; Galas, J.; Hanselmann, K.; Kopáček, Jiří; Mosello, R.; Nickus, U.; Rose, N.; Stuchlík, E.; Thies, H.; Ventura, M.; Wright, R.

    2002-01-01

    Roč. 2, č. 2 (2002), s. 115-126 ISSN 1567-7230 Grant - others:EU(XE) MOLAR ENV4-CT95-0007; EU(XE) Environment and Climate Programme Keywords : acid deposition * critical loads Subject RIV: DJ - Water Pollution ; Quality

  11. Simatik : Aplikasi Simulasi Bank Soal Tes Potensi Akademik (TPA Berbasis Multi Platform

    Directory of Open Access Journals (Sweden)

    Made Hendra Yudha Saputra

    2017-01-01

    Full Text Available Abstract --- This study aims to: (1) produce the design and implementation of Simatik: a multi-platform simulation application for the Academic Potential Test (Tes Potensi Akademik, TPA) question bank, and (2) determine user responses to the application. In its design, the application uses a client-server architecture for data exchange. The design was carried out using a functional model in the form of UML, which was then implemented in the PhoneGap framework using the HTML5 programming language. User responses to the application were obtained using a questionnaire. The end result is a multi-platform Simatik application that can be installed on mobile devices and used to practise questions related to the Academic Potential Test (TPA). Based on the usability test, the application obtained a score of 95.6%, in the "very good" category, which means the application is easy to operate and functions as intended. Keywords: Phonegap, Multi Platform, Client Server, Mobile, Tes Potensi Akademik (TPA), Simatik

  12. Lyceum: A Multi-Protocol Digital Library Gateway

    Science.gov (United States)

    Maa, Ming-Hokng; Nelson, Michael L.; Esler, Sandra L.

    1997-01-01

    Lyceum is a prototype scalable query gateway that provides a logically central interface to multi-protocol and physically distributed, digital libraries of scientific and technical information. Lyceum processes queries to multiple syntactically distinct search engines used by various distributed information servers from a single logically central interface without modification of the remote search engines. A working prototype (http://www.larc.nasa.gov/lyceum/) demonstrates the capabilities, potentials, and advantages of this type of meta-search engine by providing access to over 50 servers covering over 20 disciplines.

  13. Towards Direct Manipulation and Remixing of Massive Data: The EarthServer Approach

    Science.gov (United States)

    Baumann, P.

    2012-04-01

    Complex analytics on "big data" is one of the core challenges of current Earth science, generating strong requirements for on-demand processing and filtering of massive data sets. Issues under discussion include flexibility, performance, scalability, and the heterogeneity of the information types involved. In other domains, high-level query languages (such as those offered by database systems) have proven successful in the quest for flexible, scalable data access interfaces to massive amounts of data. However, due to the lack of support for many of the Earth science data structures, database systems are only used for registries and catalogs, but not for the bulk of spatio-temporal data. One core information category in this field is given by coverage data. ISO 19123 defines coverages, simplifying, as a representation of a "space-time varying phenomenon". This model can express a large class of Earth science data structures, including rectified and non-rectified rasters, curvilinear grids, point clouds, TINs, general meshes, trajectories, surfaces, and solids. This abstract definition, which is too high-level to establish interoperability, is concretized by the OGC GML 3.2.1 Application Schema for Coverages Standard into an interoperable representation. The OGC Web Coverage Processing Service (WCPS) Standard defines a declarative query language on multi-dimensional raster-type coverages, such as 1D in-situ sensor timeseries, 2D EO imagery, 3D x/y/t image time series and x/y/z geophysical data, 4D x/y/z/t climate and ocean data. Hence, important ingredients for versatile coverage retrieval are given - however, this potential has not been fully unleashed by service architectures up to now. The EU FP7-INFRA project EarthServer, launched in September 2011, aims at enabling standards-based on-demand analytics over the Web for Earth science data based on an integration of W3C XQuery for alphanumeric data and OGC-WCPS for raster data.
Ultimately, EarthServer will support

  14. Installing and Testing a Server Operating System

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2003-08-01

    Full Text Available The paper is based on the author's experience administering the FreeBSD server operating system on three servers in use under the academicdirect.ro domain. The paper describes a set of installation, preparation, and administration aspects of a FreeBSD server. The first issue of the paper is the installation procedure of the FreeBSD operating system on the i386 computer architecture. Discussed problems are boot disk preparation and use, hard disk partitioning, and operating system installation using an existing network topology and an Internet connection. The second issue is the optimization procedure of the operating system, and the installation and configuration of server services. Discussed problems are kernel and services configuration, and system and services optimization. The third issue concerns client-server applications. Using operating system utility calls, we present an original application which displays system information in a friendly web interface. An original program designed for molecular structure analysis was adapted for system performance comparisons, and it serves for a discussion of the computation speeds of Pentium, Pentium II and Pentium III processors. The last issue of the paper discusses the installation and configuration aspects of a dial-in service on a UNIX-based operating system. The discussion includes the configuration of serial ports and of the ppp and pppd services, and the use of the ppp and tun devices.

  15. Pro SQL Server 2012 relational database design and implementation

    CERN Document Server

    Davidson, Louis

    2012-01-01

    Learn effective and scalable database design techniques in a SQL Server environment. Pro SQL Server 2012 Relational Database Design and Implementation covers everything from design logic that business users will understand, all the way to the physical implementation of design in a SQL Server database. Grounded in best practices and a solid understanding of the underlying theory, Louis Davidson shows how to "get it right" in SQL Server database design and lay a solid groundwork for the future use of valuable business data. Gives a solid foundation in best practices and relational theory Covers

  16. Foundations of SQL Server 2008 R2 Business Intelligence

    CERN Document Server

    Fouche, Guy

    2011-01-01

    Foundations of SQL Server 2008 R2 Business Intelligence introduces the entire exciting gamut of business intelligence tools included with SQL Server 2008. Microsoft has designed SQL Server 2008 to be more than just a database. It's a complete business intelligence (BI) platform. The database is at its core, and surrounding the core are tools for data mining, modeling, reporting, analyzing, charting, and integration with other enterprise-level software packages. SQL Server 2008 puts an incredible amount of BI functionality at your disposal. But how do you take advantage of it? That's what this

  17. The eDoc-Server Project Building an Institutional Repository for the Max Planck Society

    CERN Document Server

    Beier, Gerhard

    2004-01-01

    With the eDoc-Server the Heinz Nixdorf Center for Information Management in the Max Planck Society (ZIM) provides the research institutes of the Max Planck Society (MPS) with a platform to disseminate, store, and manage their scientific output. Moreover, eDoc serves as a tool to facilitate and promote open access to scientific information and primary sources. Since its introduction in October 2002 eDoc has gained high visibility within the MPS. It has been backed by strong institutional commitment to open access as documented in the 'Berlin Declaration on Open Access to the Data of the Sciences and Humanities', which was initiated by the MPS and found large support among major research organizations in Europe. This paper will outline the concept as well as the current status of the eDoc-Server, providing an example for the development and introduction of an institutional repository in a multi-disciplinary research organization.

  18. Impact of measurement uncertainty from experimental load distribution factors on bridge load rating

    Science.gov (United States)

    Gangone, Michael V.; Whelan, Matthew J.

    2018-03-01

    Load rating and testing of highway bridges is important in determining the capacity of the structure. Experimental load rating utilizes strain transducers placed at critical locations of the superstructure to measure normal strains. These strains are then used in computing diagnostic performance measures (neutral axis of bending, load distribution factor) and ultimately a load rating. However, it has been shown that experimentally obtained strain measurements contain uncertainties associated with the accuracy and precision of the sensor and sensing system. These uncertainties propagate through to the diagnostic indicators and, in turn, into the load rating calculation. This paper analyzes the effect that measurement uncertainties have on the experimental load rating results of a three-span multi-girder/stringer steel and concrete bridge. The focus of this paper is limited to the uncertainty associated with the experimental distribution factor estimate. For the testing discussed, strain readings were gathered at the midspan of each span of both exterior girders and the center girder. Test vehicles of known weight were positioned at specified locations on each span to generate the maximum strain response for each of the five girders. The strain uncertainties were used in conjunction with a propagation formula developed by the authors to determine the standard uncertainty in the distribution factor estimates. This distribution factor uncertainty is then introduced into the load rating computation to determine the possible range of the load rating. The results show the importance of understanding measurement uncertainty in experimental load testing.
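
    The abstract does not reproduce the authors' propagation formula; the sketch below is a generic first-order (delta-method) illustration, assuming the distribution factor is estimated as DF_i = ε_i / Σε_j from midspan strains with independent sensor uncertainties. The strain values and ±2 µε uncertainty are hypothetical.

```python
import math

def distribution_factor_uncertainty(strains, sigmas):
    """First-order propagation of independent strain uncertainties into
    the distribution factor DF_i = eps_i / sum(eps).

    Illustrative only: this is a generic delta-method sketch, not the
    paper's own propagation formula. Returns (DF, sigma_DF) lists."""
    total = sum(strains)
    dfs, sigma_dfs = [], []
    for i, eps_i in enumerate(strains):
        dfs.append(eps_i / total)
        # Partials: d(DF_i)/d(eps_i) = (total - eps_i)/total^2,
        #           d(DF_i)/d(eps_j) = -eps_i/total^2 for j != i.
        var = 0.0
        for j, s_j in enumerate(sigmas):
            deriv = (total - eps_i) / total**2 if j == i else -eps_i / total**2
            var += (deriv * s_j) ** 2
        sigma_dfs.append(math.sqrt(var))
    return dfs, sigma_dfs

# Hypothetical midspan strains (microstrain) for three girders, +/-2 ue each.
dfs, u = distribution_factor_uncertainty([120.0, 200.0, 80.0], [2.0, 2.0, 2.0])
print([round(d, 3) for d in dfs])  # -> [0.3, 0.5, 0.2]
```

    The resulting sigma_DF values can then be carried into the rating-factor computation to bound the load rating, as the paper describes.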

  19. Building mail server on distributed computing system

    International Nuclear Information System (INIS)

    Akihiro Shibata; Osamu Hamada; Tomoko Oshikubo; Takashi Sasaki

    2001-01-01

    Electronic mail has become an indispensable function in daily work, and server stability and performance are required. Using DCE and DFS we have built a distributed electronic mail server; that is, servers such as SMTP and IMAP are distributed symmetrically, providing seamless access

  20. Vegetation community change points suggest that critical loads of nutrient nitrogen may be too high

    Science.gov (United States)

    Wilkins, Kayla; Aherne, Julian; Bleasdale, Andy

    2016-12-01

    It is widely accepted that elevated nitrogen deposition can have detrimental effects on semi-natural ecosystems, including changes to plant diversity. Empirical critical loads of nutrient nitrogen have been recommended to protect many sensitive European habitats from significant harmful effects. In this study, we used Threshold Indicator Taxa Analysis (TITAN) to investigate shifts in vegetation communities along an atmospheric nitrogen deposition gradient for twenty-two semi-natural habitat types (as described under Annex I of the European Union Habitats Directive) in Ireland. Significant changes in vegetation community, i.e., change points, were determined for twelve habitats, with seven habitats showing a decrease in the number of positive indicator species. Community-level change points indicated a decrease in species abundance along a nitrogen deposition gradient ranging from 3.9 to 15.3 kg N ha-1 yr-1, which were significantly lower than recommended critical loads (Wilcoxon signed-rank test; V = 6, p < 0.05). These results suggest that lower critical loads of empirical nutrient nitrogen deposition may be required to protect many European habitats. Changes to vegetation communities may mean a loss of sensitive indicator species and potentially rare species in these habitats, highlighting how emission reductions policies set under the National Emissions Ceilings Directive may be directly linked to meeting the goal set out under the European Union's Biodiversity Strategy of "halting the loss of biodiversity" across Europe by 2020.

  1. [Loading and strength of single- and multi-unit fixed dental prostheses. 1. Retention and resistance

    NARCIS (Netherlands)

    Baat, C. de; Witter, D.J.; Meijers, C.C.A.J.; Vergoossen, E.L.; Creugers, N.H.J.

    2014-01-01

    The degree to which single- and multi-unit fixed dental prostheses are able to withstand loading forces is dependent, among other things, on the quality of their retention and resistance. The quality of the retention and resistance of the configuration of an abutment tooth prepared for a metal and

  2. Professional Team Foundation Server 2012

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2012-01-01

    A comprehensive guide to using Microsoft Team Foundation Server 2012 Team Foundation Server has become the leading Microsoft productivity tool for software management, and this book covers what developers need to know to use it effectively. Fully revised for the new features of TFS 2012, it provides developers and software project managers with step-by-step instructions and even assists those who are studying for the TFS 2012 certification exam. You'll find a broad overview of TFS, thorough coverage of core functions, a look at extensibility options, and more, written by Microsoft ins

  3. Professional Team Foundation Server 2010

    CERN Document Server

    Blankenship, Ed; Holliday, Grant; Keller, Brian

    2011-01-01

    Authoritative guide to TFS 2010 from a dream team of Microsoft insiders and MVPs! Microsoft Visual Studio Team Foundation Server (TFS) has evolved until it is now an essential tool in Microsoft's Application Lifecycle Management suite of productivity tools, enabling collaboration within and among software development teams. By 2011, TFS will replace Microsoft's leading source control system, Visual SourceSafe, resulting in an even greater demand for information about it. Professional Team Foundation Server 2010, written by an accomplished team of Microsoft insiders and Microsoft MVPs, provides

  4. IBM WebSphere Application Server 80 Administration Guide

    CERN Document Server

    Robinson, Steve

    2011-01-01

    IBM WebSphere Application Server 8.0 Administration Guide is a highly practical, example-driven tutorial. You will be introduced to WebSphere Application Server 8.0, and guided through configuration, deployment, and tuning for optimum performance. If you are an administrator who wants to get up and running with IBM WebSphere Application Server 8.0, then this book is not to be missed. Experience with WebSphere and Java would be an advantage, but is not essential.

  5. Nuclear criticality safety: general. 4. The CASTOR X/32S Method of Covering mis-loading Concerns

    International Nuclear Information System (INIS)

    Lancaster, Dale B.; Rombough, Charles T.; Diersch, Rudolf; Spilker, Harry

    2001-01-01

    In the United States, most cask licenses do not directly consider mis-loading. If the enrichment limit for a shipping cask is high and the reactivity control is inherent in the cask, the reactivity effect of a mis-load is small. However, in large-capacity casks, such as the CASTOR X/32S, the effect can be much larger. The U.S. Department of Energy Topical Report on Actinide-Only Burnup Credit takes the position that a fuel assembly mis-load does not need to be analyzed since there are multiple independent checks, and thus the double-contingency principle is met. Unfortunately, 11 assemblies were mis-loaded at Palisades. This event has caused the U.S. Nuclear Regulatory Commission (NRC) to ask for more detail on the prevention of mis-loading. In the summer of 1999, Palisades loaded 11 assemblies which did not comply with the loading requirements for their VSC-24 cask. The cask requires 5 yr of cooling, and these 11 assemblies had just a little more than 4 yr of cooling. The mis-loading did not result in an unsafe condition but in an un-reviewed condition. This mis-loading was not identified until November 2000, during a review related to an NRC information notice. The loading plan for the cask was incorrect, and the engineering review of the loading plan missed the error. The operators had loaded the cask consistent with the loading plan, and the cask loading was then confirmed by comparing to the loading plan. The loading plan was in error because the engineer assumed that the entire region of fuel was discharged at the same time; the 11 assemblies of concern had been reinserted in the reactor, and the engineer and the reviewer did not check for this. The reactor records for all the assemblies were correct but apparently were not checked by the engineer who created the loading plan. To prevent a mis-load criticality event, the following steps will be required for the CASTOR X/32S storage and transport cask: 1. A loading plan will be prepared for each cask loaded. This plan will be

  6. A polling model with an autonomous server

    NARCIS (Netherlands)

    de Haan, Roland; Boucherie, Richardus J.; van Ommeren, Jan C.W.

    2009-01-01

    This paper considers polling systems with an autonomous server that remains at a queue for an exponential amount of time before moving to the next queue, incurring a generally distributed switch-over time. The server remains at a queue until the exponential visit time expires, also when the queue

  7. Single-server queues with spatially distributed arrivals

    NARCIS (Netherlands)

    Kroese, Dirk; Schmidt, Volker

    1994-01-01

    Consider a queueing system where customers arrive at a circle according to a homogeneous Poisson process. After choosing their positions on the circle, according to a uniform distribution, they wait for a single server who travels on the circle. The server's movement is modelled by a Brownian motion

  8. Evaluation of the Intel Nehalem-EX server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2010-01-01

    In this paper we report on a set of benchmark results recently obtained by the CERN openlab by comparing the 4-socket, 32-core Intel Xeon X7560 server with the previous generation 4-socket server, based on the Xeon X7460 processor. The Xeon X7560 processor represents a major change in many respects, especially the memory sub-system, so it was important to make multiple comparisons. In most benchmarks the two 4-socket servers were compared. It should be underlined that both servers represent the “top of the line” in terms of frequency. However, in some cases, it was important to compare systems that integrated the latest processor features, such as QPI links, Symmetric multithreading and over-clocking via Turbo mode, and in such situations the X7560 server was compared to a dual socket L5520 based system with an identical frequency of 2.26 GHz. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following ...

  9. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models; previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented, including a discussion of the generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for the transfer/translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing the Cray to become a shared co-processor to the workstation application. 5 refs., 6 figs
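
    The RPC paradigm the abstract describes — a remote call that looks like a local function call, with argument marshalling (the XDR role) handled by the library — can be sketched with Python's standard-library XML-RPC as a stand-in for the UNIX RPC tooling of the paper; the service name and in-process setup below are illustrative, not from the source.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a remote procedure (a stand-in for a shared
# "co-processor" service such as the fluids simulation in the paper).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]  # ephemeral port chosen by the OS
server.register_function(lambda x, y: x + y, "add")

thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# Client side: the remote call reads like a local call; the library
# marshals arguments and results over the wire (the XDR role).
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)  # -> 5

server.shutdown()
```

    A generated RPC interface (as in the paper) plays the same role as `register_function` here: it binds a remote service name to a local procedure so clients never touch the socket layer.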

  10. Single-server blind quantum computation with quantum circuit model

    Science.gov (United States)

    Zhang, Xiaoqian; Weng, Jian; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing; Song, Tingting

    2018-06-01

    Blind quantum computation (BQC) enables a client, who has few quantum technologies, to delegate her quantum computation to a server, who has strong quantum computational abilities and learns nothing about the client's quantum inputs, outputs and algorithms. In this article, we propose a single-server BQC protocol with the quantum circuit model by replacing any quantum gate with a combination of rotation operators. Trap quantum circuits are introduced, together with the combination of rotation operators, such that the server learns nothing about the quantum algorithms. The client only needs to perform operations X and Z, while the server honestly performs rotation operators.
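
    As a concrete illustration of replacing a gate with rotation operators, the sketch below checks the standard identity H = e^{iπ/2} · R_y(π/2) · R_z(π) (a textbook decomposition, not the paper's specific construction) using plain 2×2 complex matrices.

```python
import cmath
import math

def rz(t):
    """R_z(t) = diag(e^{-it/2}, e^{it/2})."""
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def ry(t):
    """R_y(t): real rotation about the y axis of the Bloch sphere."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(z, m):
    return [[z * x for x in row] for row in m]

# Textbook identity (not from the paper): H = e^{i*pi/2} R_y(pi/2) R_z(pi).
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
reconstructed = scale(cmath.exp(1j * math.pi / 2),
                      matmul(ry(math.pi / 2), rz(math.pi)))

max_err = max(abs(reconstructed[i][j] - H[i][j])
              for i in range(2) for j in range(2))
print(max_err < 1e-12)  # -> True
```

    Since any single-qubit gate admits such a rotation decomposition (up to global phase), a server executing only rotation operators can realize an arbitrary circuit without being told which named gates it implements.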

  11. CPU Server

    CERN Multimedia

    The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960s. This tray is a 'dual-core' server, meaning it effectively has two CPUs in it (e.g. two of your home computers minimised to fit into a single box). Also note the copper cooling fins, which help dissipate the heat.

  12. A tandem queue with delayed server release

    NARCIS (Netherlands)

    Nawijn, W.M.

    1997-01-01

    We consider a tandem queue with two stations. The first station is an s-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to require service at an exponential single server, while in the meantime he

  13. Loading History Effect on Creep Deformation of Rock

    Directory of Open Access Journals (Sweden)

    Wendong Yang

    2018-06-01

    Full Text Available The creep characteristics of rocks are very important for assessing the long-term stability of rock engineering structures. Two loading methods are commonly used in creep tests: single-step loading and multi-step loading. The multi-step loading method avoids the discrete influence of rock specimens on creep deformation and is relatively time-efficient; it has been widely accepted by researchers in the area of creep testing. However, in the process of multi-step loading, later deformation is affected by earlier loading. This is a key problem in considering the effects of loading history. Therefore, we intend to analyze the deformation laws of rock under multi-step loading and propose a method to correct for the disturbance of the preceding load. Based on multi-step loading creep tests, the memory effect of creep deformation caused by loading history is discussed in this paper. A time-affected correction method for the creep strains under multi-step loading is proposed. With this correction method, the creep deformation under single-step loading can be estimated by the superposition of creeps obtained by decomposing a multi-step creep test. We compare the time-affected correction method to the coordinate translation method, which does not consider loading history. The former results are more consistent with the experimental results; the coordinate translation method produces a large error which should be avoided.

  14. PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.

    Science.gov (United States)

    Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude

    2015-07-01

    Predicting protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently computational druggability prediction models are attached to one unique pocket estimation method despite pocket estimation uncertainties. In this paper, we propose 'PockDrug-Server' to predict pocket druggability, efficient on both (i) estimated pockets guided by the ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, thus efficient using apo pockets that are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be carried out from one or a set of apo/holo proteins using different pocket estimation methods proposed by our web server or from any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Leading CFT constraints on multi-critical models in d>2

    Energy Technology Data Exchange (ETDEWEB)

    Codello, Alessandro [CP-Origins, University of Southern Denmark,Campusvej 55, 5230 Odense M (Denmark); INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Safari, Mahmoud [INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); Vacca, Gian Paolo [INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Zanusso, Omar [Theoretisch-Physikalisches Institut, Friedrich-Schiller-Universität Jena,Max-Wien-Platz 1, 07743 Jena (Germany); INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy)

    2017-04-21

We consider the family of renormalizable scalar QFTs with self-interacting potentials of highest monomial ϕ^m below their upper critical dimensions d_c = 2m/(m−2), and study them using a combination of CFT constraints, Schwinger-Dyson equation and the free theory behavior at the upper critical dimension. For even integers m ≥ 4 these theories coincide with the Landau-Ginzburg description of multi-critical phenomena and interpolate with the unitary minimal models in d = 2, while for odd m the theories are non-unitary and start at m = 3 with the Lee-Yang universality class. For all the even potentials and for the Lee-Yang universality class, we show how the assumption of conformal invariance is enough to compute the scaling dimensions of the local operators ϕ^k and of some families of structure constants in either the coupling's or the ϵ-expansion. For all other odd potentials we express some scaling dimensions and structure constants in the coupling's expansion.
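The upper critical dimensions quoted above follow directly from d_c = 2m/(m−2); a quick check in exact arithmetic (the function name is ours):

```python
from fractions import Fraction

def upper_critical_dimension(m: int) -> Fraction:
    """d_c = 2m / (m - 2) for a phi^m self-interaction."""
    return Fraction(2 * m, m - 2)

# m=3 (Lee-Yang): d_c = 6;  m=4 (Landau-Ginzburg/Ising): d_c = 4;  m=6: d_c = 3.
for m in (3, 4, 5, 6):
    print(m, upper_critical_dimension(m))
```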

  16. Leading CFT constraints on multi-critical models in d > 2

    DEFF Research Database (Denmark)

    Codello, Alessandro; Safari, Mahmoud; Vacca, Gian Paolo

    2017-01-01

We consider the family of renormalizable scalar QFTs with self-interacting potentials of highest monomial ϕm below their upper critical dimensions dc = 2m/(m−2), and study them using a combination of CFT constraints, Schwinger-Dyson equation and the free theory behavior at the upper critical dimension. … For even integers m ≥ 4 these theories coincide with the Landau-Ginzburg description of multi-critical phenomena and interpolate with the unitary minimal models in d = 2, while for odd m the theories are non-unitary and start at m = 3 with the Lee-Yang universality class. For all the even potentials … and for the Lee-Yang universality class, we show how the assumption of conformal invariance is enough to compute the scaling dimensions of the local operators ϕk and of some families of structure constants in either the coupling's or the ϵ-expansion. For all other odd potentials we express some scaling dimensions …

  17. Construction of a nuclear data server using TCP/IP

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko; Sakai, Osamu [Kyushu Univ., Fukuoka (Japan)

    1997-03-01

    We construct a nuclear data server which provides data in the evaluated nuclear data library through the network by means of TCP/IP. The client is not necessarily a user but a computer program. Two examples with a prototype server program are demonstrated, the first is data transfer from the server to a user, and the second is to a computer program. (author)
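The exchange described (a client, human or program, requesting a record from the server over TCP/IP) can be sketched with Python's standard library; the in-memory lookup table and the line-oriented protocol below are invented for illustration:

```python
import socket
import socketserver
import threading

# Hypothetical in-memory stand-in for an evaluated nuclear data library.
LIBRARY = {"U-235": "fission cross section record", "Fe-56": "capture cross section record"}

class NuclearDataHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Line-oriented protocol: the client sends a nuclide name, the server replies.
        key = self.rfile.readline().decode().strip()
        reply = LIBRARY.get(key, "NOT FOUND")
        self.wfile.write((reply + "\n").encode())

def query(host, port, nuclide):
    """Client side: the same call works for a user front end or another program."""
    with socket.create_connection((host, port)) as s:
        s.sendall((nuclide + "\n").encode())
        return s.makefile().readline().strip()

if __name__ == "__main__":
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), NuclearDataHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(query("127.0.0.1", server.server_address[1], "U-235"))
    server.shutdown()
```

Because the protocol is plain text over a socket, the "client" can equally well be an interactive user or a computer program, which is the point made in the abstract.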

  18. APLIKASI SERVER VIRTUAL IP UNTUK MIKROKONTROLER

    OpenAIRE

    Ashari, Ahmad

    2008-01-01

Until now, a microcontroller connected to a computer could only be accessed through a single IP address, even though most current operating systems can provide more than one IP address per computer in the form of virtual IPs. This research examines the use of virtual IPs from IP aliasing on the Linux operating system as a Virtual IP Server for microcontrollers. The basic principle of the Virtual IP Server is the creation of a Virtual Host on each IP to process data packets and to translate…

  19. Consideration of criticality in a nuclear waste repository

    International Nuclear Information System (INIS)

    Rechard, R.P.; Sanchez, L.C.; Stockman, C.T.; Ramsey, J.L. Jr.; Martell, M.

    1995-01-01

A preliminary criticality analysis suggests that the possibility of achieving critical conditions cannot be easily ruled out without examining the geochemical process of assembly or the dynamics of the operation of a critical assembly. The evaluation of a critical assembly requires an integrated, consistent approach that includes evaluating: (1) the alteration rates of the layers of the container and spent fuel, (2) the transport of fissile material or neutron absorbers, and (3) the assembly mechanisms that can achieve critical conditions. This is a non-trivial analysis, and preliminary work suggests that, with the loading assumed, enough fissile mass will leach from the HEU multi-purpose canisters to support a criticality. In addition, the consequences of an unpressurized Oklo-type criticality would be insignificant to the performance of an unsaturated tuff repository.

  20. Getting started with SQL Server 2012 cube development

    CERN Document Server

    Lidberg, Simon

    2013-01-01

A practical tutorial for Analysis Services that gets you started with developing cubes, "Getting Started with SQL Server 2012 Cube Development" walks you through the basics, working with SSAS to build cubes and get them up and running. Written for SQL Server developers who have not previously worked with Analysis Services, it assumes experience with relational databases, but no prior knowledge of cube development is required. You need SQL Server 2012 in order to follow along with the exercises in this book.

  1. An Electronic Healthcare Record Server Implemented in PostgreSQL

    Directory of Open Access Journals (Sweden)

    Tony Austin

    2015-01-01

    Full Text Available This paper describes the implementation of an Electronic Healthcare Record server inside a PostgreSQL relational database without dependency on any further middleware infrastructure. The five-part international standard for communicating healthcare records (ISO EN 13606 is used as the information basis for the design of the server. We describe some of the features that this standard demands that are provided by the server, and other areas where assumptions about the durability of communications or the presence of middleware lead to a poor fit. Finally, we discuss the use of the server in two real-world scenarios including a commercial application.

  2. Client Server design and implementation issues in the Accelerator Control System environment

    International Nuclear Information System (INIS)

    Sathe, S.; Hoff, L.; Clifford, T.

    1995-01-01

In distributed system communication software design, the Client Server model has been widely used. This paper addresses the design and implementation issues of such a model, particularly when used in Accelerator Control Systems. In designing the Client Server model one needs to decide how the services will be defined for a server, what types of messages the server will respond to, which data formats will be used for the network transactions, and how the server will be located by the client. Special consideration needs to be given to error handling on both the server and client sides. Since the server is usually located on a machine other than the client, easy and informative server diagnostic capability is required. The higher level abstraction provided by the Client Server model simplifies application writing; however, fine control over network parameters is essential to improve performance. The above-mentioned design issues and implementation trade-offs are discussed in this paper

  3. Multi-Trait Multi-Method Matrices for the Validation of Creativity and Critical Thinking Assessments for Secondary School Students in England and Greece

    Directory of Open Access Journals (Sweden)

    Ourania Maria Ventista

    2017-08-01

Full Text Available The aim of this paper is the validation of measurement tools which assess critical thinking and creativity as general constructs instead of subject-specific skills. Specifically, this research examined whether there is convergent and discriminant (or divergent) validity between measurement tools of creativity and critical thinking. For this purpose, the multi-trait and multi-method matrix suggested by Campbell and Fiske (1959) was used. This matrix presented the correlation of scores that students obtain in different assessments in order to reveal whether the assessments measure the same or different constructs. Specifically, the two methods used were written and oral exams, and the two traits measured were critical thinking and creativity. For the validation of the assessments, 30 secondary-school students in Greece and 21 in England completed the assessments. The sample in both countries provided similar results. The critical thinking tools demonstrated convergent validity when compared with each other and discriminant validity with the creativity assessments. Furthermore, creativity assessments which measure the same aspect of creativity demonstrated convergent validity. To conclude, this research provided indicators that critical thinking and creativity as general constructs can be measured in a valid way. However, since the sample was small, further investigation of the validation of the assessment tools with a bigger sample is recommended.

  4. Record Recommendations for the CERN Document Server

    CERN Document Server

    AUTHOR|(CDS)2096025; Marian, Ludmila

CERN Document Server (CDS) is the institutional repository of the European Organization for Nuclear Research (CERN). It hosts all the research material produced at CERN, as well as multimedia and administrative documents. It currently has more than 1.5 million records grouped in more than 1000 collections. Its underlying platform is Invenio, an open source digital library system created at CERN. As the size of CDS increases, discovering useful and interesting records becomes more challenging. Therefore, the goal of this work is to create a system that supports the user in the discovery of related interesting records. To achieve this, a set of recommended records are displayed on the record page. These recommended records are based on the analyzed behavior (page views and downloads) of other users. This work will describe the methods and algorithms used for creating and implementing the recommendations, and their integration with the underlying software platform, Invenio. A very important decision factor when designing a recomme...

  5. Load-Unload Response Ratio and Accelerating Moment/Energy Release Critical Region Scaling and Earthquake Prediction

    Science.gov (United States)

    Yin, X. C.; Mora, P.; Peng, K.; Wang, Y. C.; Weatherley, D.

    The main idea of the Load-Unload Response Ratio (LURR) is that when a system is stable, its response to loading corresponds to its response to unloading, whereas when the system is approaching an unstable state, the response to loading and unloading becomes quite different. High LURR values and observations of Accelerating Moment/Energy Release (AMR/AER) prior to large earthquakes have led different research groups to suggest intermediate-term earthquake prediction is possible and imply that the LURR and AMR/AER observations may have a similar physical origin. To study this possibility, we conducted a retrospective examination of several Australian and Chinese earthquakes with magnitudes ranging from 5.0 to 7.9, including Australia's deadly Newcastle earthquake and the devastating Tangshan earthquake. Both LURR values and best-fit power-law time-to-failure functions were computed using data within a range of distances from the epicenter. Like the best-fit power-law fits in AMR/AER, the LURR value was optimal using data within a certain epicentral distance implying a critical region for LURR. Furthermore, LURR critical region size scales with mainshock magnitude and is similar to the AMR/AER critical region size. These results suggest a common physical origin for both the AMR/AER and LURR observations. Further research may provide clues that yield an understanding of this mechanism and help lead to a solid foundation for intermediate-term earthquake prediction.
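The LURR itself is a simple ratio; a schematic computation with made-up response values and a generic definition (cumulative response during loading phases over that during unloading phases, not the authors' exact estimator):

```python
def lurr(loading_responses, unloading_responses):
    """Load-Unload Response Ratio: cumulative response during loading
    divided by cumulative response during unloading.
    Values near 1 suggest stability; high values suggest an unstable state."""
    return sum(loading_responses) / sum(unloading_responses)

# Hypothetical Benioff-strain-like responses for two regions.
stable_region = lurr([2.0, 1.9, 2.1], [2.0, 2.0, 1.9])     # close to 1
critical_region = lurr([5.0, 7.5, 9.0], [2.0, 1.8, 1.5])   # well above 1
print(stable_region, critical_region)
```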

  6. Theoretical assessment of a proposal for the simplified determination of critical loads of elastic shells

    International Nuclear Information System (INIS)

    Malmberg, T.

    1986-08-01

Within the context of the stability analysis of the cryostat of a fusion reactor, the question was raised whether the rather lengthy conventional stability analysis can be circumvented by applying a simplified strategy based on common linear Finite Element computer programs. This strategy involves the static linear deformation analysis of the structure with and without imperfections. For some simple stability problems this approach has been shown to be successful. The purpose of this study is to derive a general proof of the validity of this approach for thin shells with arbitrary geometry under hydrostatic pressure or dead loading along the boundary. This general assessment involves two types of analyses: 1) A general stability analysis for thin shells, based on a simple nonlinear shell theory and a stability criterion in the form of the neutral (indifferent) equilibrium condition; this result is taken as the reference solution. 2) A general linear deformation analysis for thin imperfect shells and the definition of a suitable scalar parameter (β-parameter) which should represent the reciprocal of the critical load factor. It is shown that the simplified strategy (the "β-parameter approach") is generally not capable of predicting the actual critical load factor, irrespective of whether there is hydrostatic pressure loading or dead loading along the edge of the shell. This general result is in contrast to the observations made for some simple stability problems. Nevertheless, the results of this study do not exclude the possibility that the simplified strategy will give reasonable approximate solutions, at least for a restricted class of stability problems. (orig./HP)

  7. Analysis of a multi-server queueing model of ABR

    NARCIS (Netherlands)

    R. Núñez Queija (Rudesindo); O.J. Boxma (Onno)

    1996-01-01

In this paper we present a queueing model for the performance analysis of ABR traffic in ATM networks. We consider a multi-channel service station with two types of customers, the first having preemptive priority over the second. The arrivals occur according to two independent Poisson
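Setting aside the two priority classes studied in the paper, the waiting behaviour of a multi-channel station with Poisson arrivals can be sketched with the classical Erlang C formula for an M/M/c queue (a deliberate simplification of the model above):

```python
from math import factorial

def erlang_c(c: int, a: float) -> float:
    """Probability that an arriving customer must wait in an M/M/c queue.
    c: number of servers (channels); a: offered load in Erlangs (lambda/mu), a < c."""
    rho = a / c
    top = a**c / (factorial(c) * (1 - rho))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

# With one server, Erlang C reduces to the utilisation rho.
print(erlang_c(1, 0.5))   # 0.5
# Pooling effect: at the same per-server load, more channels mean less waiting.
print(erlang_c(4, 2.0))   # smaller than erlang_c(1, 0.5)
```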

  8. On a Batch Arrival Queuing System Equipped with a Stand-by Server during Vacation Periods or the Repairs Times of the Main Server

    Directory of Open Access Journals (Sweden)

    Rehab F. Khalaf

    2011-01-01

    Full Text Available We study a queuing system which is equipped with a stand-by server in addition to the main server. The stand-by server provides service to customers only during the period of absence of the main server when either the main server is on a vacation or it is in the state of repairs due to a sudden failure from time to time. The service times, vacation times, and repair times are assumed to follow general arbitrary distributions while the stand-by service times follow exponential distribution. Supplementary variables technique has been used to obtain steady state results in explicit and closed form in terms of the probability generating functions for the number of customers in the queue, the average number of customers, and the average waiting time in the queue while the MathCad software has been used to illustrate the numerical results in this work.

  9. Comparison of Certification Authority Roles in Windows Server 2003 and Windows Server 2008

    Directory of Open Access Journals (Sweden)

    A. I. Luchnik

    2011-03-01

    Full Text Available An analysis of Certification Authority components of Microsoft server operating systems was conducted. Based on the results main directions of development of certification authorities and PKI were highlighted.

  10. Optimal Waste Load Allocation Using Multi-Objective Optimization and Multi-Criteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    L. Saberi

    2016-10-01

Full Text Available Introduction: Increasing demand for water, depletion of resources of acceptable quality, and excessive water pollution due to agricultural and industrial developments have caused intensive social and environmental problems all over the world. Given the environmental importance of rivers, and the complexity and extent of pollution factors and physical, chemical and biological processes in these systems, optimal waste-load allocation in river systems has been given considerable attention in the literature in the past decades. The overall objective of planning and quality management of river systems is to develop and implement a coordinated set of strategies and policies to reduce or allocate the pollution entering the rivers so that the water quality meets the proposed environmental standards with an acceptable reliability. In such matters, there are often several different decision-makers with different utilities, which leads to conflicts. Methods/Materials: In this research, a conflict resolution framework for optimal waste load allocation in river systems is proposed, considering the total treatment cost and the Biological Oxygen Demand (BOD) violation characteristics. There are two decision-makers, the waste-load dischargers' coalition and the environmentalists, who have conflicting objectives. This framework consists of an embedded river water quality simulator, which simulates the transport process including reaction kinetics. The trade-off curve between objectives is obtained using the Multi-objective Particle Swarm Optimization Algorithm; the objectives are minimization of the total cost of treatment and of the penalties that must be paid by dischargers for violation of water quality standards, considering the BOD parameter, which is controlled by the environmentalists. Thus, the basic policy of the river's water quality management is formulated in such a way that the decision-makers are ensured their benefits will be provided as far as possible. By using MOPSO
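The trade-off curve between the two objectives (treatment cost and BOD-violation penalty) is a Pareto front; a minimal sketch of filtering the non-dominated solutions, with invented numbers and both objectives minimized:

```python
def pareto_front(points):
    """Return the non-dominated points when minimizing both objectives.
    A point q dominates p if q is <= p in both objectives and differs from p."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (treatment cost, BOD violation penalty) pairs for candidate policies.
solutions = [(1, 9), (2, 7), (3, 8), (4, 4), (9, 1)]
print(pareto_front(solutions))  # (3, 8) is dominated by (2, 7)
```

A multi-objective optimizer such as MOPSO effectively searches for points that survive exactly this filter.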

  11. The SMARTCyp cytochrome P450 metabolism prediction server

    DEFF Research Database (Denmark)

    Rydberg, Patrik; Gloriam, David Erik Immanuel; Olsen, Lars

    2010-01-01

The SMARTCyp server is the first web application for site of metabolism prediction of cytochrome P450-mediated drug metabolism.

  12. Critical chain construction with multi-resource constraints based on portfolio technology in South-to-North Water Diversion Project

    Directory of Open Access Journals (Sweden)

    Jing-chun Feng

    2011-06-01

    Full Text Available Recently, the critical chain study has become a hot issue in the project management research field. The construction of the critical chain with multi-resource constraints is a new research subject. According to the system analysis theory and project portfolio theory, this paper discusses the creation of project portfolios based on the similarity principle and gives the definition of priority in multi-resource allocation based on quantitative analysis. A model with multi-resource constraints, which can be applied to the critical chain construction of the A-bid section in the South-to-North Water Diversion Project, was proposed. Contrast analysis with the comprehensive treatment construction method and aggressive treatment construction method was carried out. This paper also makes suggestions for further research directions and subjects, which will be useful in improving the theories in relevant research fields.

  13. Asynchronous data change notification between database server and accelerator control systems

    International Nuclear Information System (INIS)

    Wenge Fu; Seth Nemesure; Morris, J.

    2012-01-01

Database data change notification (DCN) is a commonly used feature: it allows a client to be informed when data has been changed on the server side by another client. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. (authors)
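The trigger-based setup can be sketched with SQLite, which also supports triggers: a trigger appends every change to a changelog table, and a small relay, standing in for the CDEV/EPICS/ADO reflection server, polls the changelog and notifies subscribed callbacks. Table and column names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE settings (name TEXT PRIMARY KEY, value REAL);
    CREATE TABLE changelog (id INTEGER PRIMARY KEY AUTOINCREMENT,
                            name TEXT, value REAL);
    -- The trigger records every update; the relay below picks it up.
    CREATE TRIGGER settings_changed AFTER UPDATE ON settings
    BEGIN
        INSERT INTO changelog (name, value) VALUES (NEW.name, NEW.value);
    END;
""")

subscribers = []   # callbacks registered by interested clients
last_seen = 0      # the relay's position in the changelog

def poll_and_notify():
    """Relay process: forward any new changelog rows to all subscribers."""
    global last_seen
    rows = conn.execute(
        "SELECT id, name, value FROM changelog WHERE id > ?", (last_seen,)
    ).fetchall()
    for rowid, name, value in rows:
        last_seen = rowid
        for callback in subscribers:
            callback(name, value)

# One client subscribes, then the data is changed on the server side.
received = []
subscribers.append(lambda name, value: received.append((name, value)))
conn.execute("INSERT INTO settings VALUES ('beam_current', 1.0)")
conn.execute("UPDATE settings SET value = 2.5 WHERE name = 'beam_current'")
poll_and_notify()
print(received)   # [('beam_current', 2.5)]
```

In a real ADCN deployment the relay would push over the control system's SET/GET API rather than invoke in-process callbacks, but the trigger-to-changelog-to-notification flow is the same.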

  14. Multi-stage crypto ransomware attacks: A new emerging cyber threat to critical infrastructure and industrial control systems

    Directory of Open Access Journals (Sweden)

    Aaron Zimba

    2018-03-01

    Full Text Available The inevitable integration of critical infrastructure to public networks has exposed the underlying industrial control systems to various attack vectors. In this paper, we model multi-stage crypto ransomware attacks, which are today an emerging cyber threat to critical infrastructure. We evaluate our modeling approach using multi-stage attacks by the infamous WannaCry ransomware. The static malware analysis results uncover the techniques employed by the ransomware to discover vulnerable nodes in different SCADA and production subnets, and for the subsequent network propagation. Based on the uncovered artifacts, we recommend a cascaded network segmentation approach, which prioritizes the security of production network devices. Keywords: Critical infrastructure, Cyber-attack, Industrial control system, Crypto ransomware, Vulnerability

  15. Inter-annual variations in water yield to lakes in northeastern Alberta: implications for estimating critical loads of acidity

    Directory of Open Access Journals (Sweden)

    Roderick HAZEWINKEL

    2010-08-01

Full Text Available Stable isotopes of water were applied to estimate water yield to fifty lakes in northeastern Alberta as part of an acid sensitivity study underway since 2002 in the Athabasca Oil Sands Region (AOSR). Herein, we apply site-specific water yields for each lake to calculate critical loads of acidity using water chemistry data and a steady-state water chemistry model. The main goal of this research was to improve site-specific critical load estimates and to understand the sensitivity to hydrologic variability across a Boreal Plains region under significant oil sands development pressure. Overall, catchment water yields were found to vary significantly over the seven-year monitoring period, with distinct variations among lakes and between different regions, overprinted by inter-annual climate-driven shifts. Analysis of critical load estimates based on site-specific water yields suggests that caution must be applied to establish hydrologic conditions and define extremes at specific sites in order to protect more sensitive ecosystems. In general, lakes with low (high) water yield tended to be more (less) acid sensitive but were typically less (more) affected by inter-annual hydrological variations. While it has been customary to use long-term water yields to define a static critical load for lakes, we find that spatial and temporal variability in water yield may limit the effectiveness of this type of assessment in areas of the Boreal Plain characterized by heterogeneous runoff and without a long-term lake-gauging network. Implications for predicting acidification risk are discussed for the AOSR.

  16. Design of multi-tiered database application based on CORBA component

    International Nuclear Information System (INIS)

    Sun Xiaoying; Dai Zhimin

    2003-01-01

As computer technology develops rapidly, middleware technology has changed the traditional two-tier database system. The multi-tiered database system, consisting of client application programs, application servers and database servers, is now widely applied, and building multi-tiered database systems using CORBA components has become the mainstream technique. In this paper, an example of the DUV-FEL database system is presented, and the realization of a multi-tiered database based on CORBA components is discussed. (authors)

  17. Asynchronous data change notification between database server and accelerator controls system

    International Nuclear Information System (INIS)

    Fu, W.; Morris, J.; Nemesure, S.

    2011-01-01

    Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMS's which support DCN (such as Oracle and MS SQL server), some server side and/or client side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.

  18. Incorporating episodicity into estimates of Critical Loads for juvenile salmonids in Scottish streams

    Directory of Open Access Journals (Sweden)

    E. E. Bridcut

    2004-01-01

Full Text Available Critical Load (CL) methodology is currently used throughout Europe to assess the risks of ecological damage due to sulphur and nitrogen emissions. Critical acid neutralising capacity (ANCCRIT) is used in CL estimates for freshwater systems as a surrogate for biological damage. Although UK CL maps presently use an ANC value of 0 μeq l-1, this value has been based largely on Norwegian lake studies, in which brown trout is chosen as a representative indicator organism. In this study, an ANC value specific for brown trout in Scottish streams was determined, and issues were addressed such as salmon and trout sensitivity in streams, episodicity, afforestation and complicating factors such as dissolved organic carbon (DOC) and labile aluminium (Al-L). Catchments with significant forest cover were selected to provide fishless sites and to provide catchment comparisons in unpolluted areas. Chemical factors were the primary determinant, with land use a secondary determinant, of the distribution of salmonid populations at the twenty-six study sites. ANC explained more variance in brown trout density than pH. The most significant index of episodicity was the percentage of time spent below an ANC of 0 μeq l-1. An ANCCRIT value of 39 μeq l-1 was obtained based on a 50% probability of brown trout occurrence. The use of this revised ANCCRIT value in the CL equation improved the relationship between trout status and exceedance of CLs. Uncertainties associated with variations in Al-L at any fixed ANCCRIT, particularly within forested catchments, and the role of DOC in modifying the toxicity of Al-L are discussed. Keywords: Critical Load, Critical acid neutralising capacity, brown trout, episodes, streams

  19. Analisis Algoritma Pergantian Cache Pada Proxy Web Server Internet Dengan Simulasi

    OpenAIRE

    Nurwarsito, Heru

    2007-01-01

The number of internet clients keeps growing over time, so internet access response becomes increasingly slow. To help speed up access, a cache on the Proxy Server is needed. This research aims to analyze the performance of a Proxy Server on an internet network with respect to the cache replacement algorithm it uses. The analysis of cache replacement algorithms on the Proxy Server was designed using a simulation model of an internet network consisting of a Web server, Proxy ...

  20. A Comparison Between Publish-and-Subscribe and Client-Server Models in Distributed Control System Networks

    Science.gov (United States)

    Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)

    1998-01-01

The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery, but it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
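The architectural contrast being evaluated can be shown in miniature: with publish-and-subscribe, data is pushed to every subscriber the moment it is published, while with client-server each client must explicitly ask the server. A toy in-process sketch (no NDDS or BridgeVIEW specifics; names are ours):

```python
class Broker:
    """Minimal publish-and-subscribe: producers and consumers meet at topics."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.topics.get(topic, []):
            callback(message)   # data pushed to every subscriber at once

class Server:
    """Minimal client-server: state lives on the server; clients must request it."""
    def __init__(self):
        self.state = {}

    def set(self, key, value):
        self.state[key] = value

    def get(self, key):         # each client asks explicitly
        return self.state.get(key)

broker, server = Broker(), Server()
readings = []
broker.subscribe("incinerator/temperature", readings.append)
broker.publish("incinerator/temperature", 450.0)   # subscriber notified immediately
server.set("incinerator/temperature", 450.0)       # client must later call get()
print(readings, server.get("incinerator/temperature"))
```

The push model is why publish-and-subscribe suits time-critical delivery: no node has to poll to learn that a value changed.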

  1. SciServer Compute brings Analysis to Big Data in the Cloud

    Science.gov (United States)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally - but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts

  2. Solution for an Improved WEB Server

    Directory of Open Access Journals (Sweden)

    George PECHERLE

    2009-12-01

    Full Text Available We want to present a solution with maximum performance from a web server, in terms of services that the server provides. We do not always know what tools to use or how to configure what we have in order to get what we need. Keeping the Internet-related services you provide in working condition can sometimes be a real challenge. And with the increasing demand for Internet services, we need to come up with solutions to problems that occur every day.

  3. On the single-server retrial queue

    Directory of Open Access Journals (Sweden)

    Djellab Natalia V.

    2006-01-01

    Full Text Available In this work, we review the stochastic decomposition for the number of customers in M/G/1 retrial queues, both with a reliable server and with a server subject to breakdowns, which has been the subject of investigation in the literature. Using the decomposition property of M/G/1 retrial queues with breakdowns, which holds under the exponential assumption for retrial times, as an approximation in the non-exponential case, we consider an approximate solution for the steady-state queue size distribution.
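The queue-size quantities this decomposition splits apart can be illustrated with the classical Pollaczek-Khinchine mean for the ordinary M/G/1 queue, which forms the first term of such decompositions. This is a generic textbook sketch, not the paper's own derivation; the function name and parameters are ours.

```python
# Mean number in system for an M/G/1 queue via the Pollaczek-Khinchine
# formula. A stochastic decomposition for the retrial queue expresses its
# queue size as this M/G/1 part plus an extra (orbit-related) term.

def mg1_mean_in_system(lam, mean_service, scv_service):
    """lam: arrival rate; mean_service: E[S]; scv_service: Var[S]/E[S]^2."""
    rho = lam * mean_service          # server utilization
    if rho >= 1.0:
        raise ValueError("queue is unstable (rho >= 1)")
    return rho + rho * rho * (1.0 + scv_service) / (2.0 * (1.0 - rho))

# With exponential service (SCV = 1) this reduces to the M/M/1 result
# rho / (1 - rho):
print(mg1_mean_in_system(0.5, 1.0, 1.0))   # 1.0
```

For rho = 0.5 the formula gives 0.5 + 0.25·2/1 = 1.0, matching rho/(1-rho) for M/M/1, a quick sanity check on the implementation.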

  4. Server Interface Descriptions for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Møller, Anders; Su, Zhendong

    2013-01-01

    Automated testing of JavaScript web applications is complicated by the communication with servers. Specifically, it is difficult to test the JavaScript code in isolation from the server code and database contents. We present a practical solution to this problem. First, we demonstrate that formal...... server interface descriptions are useful in automated testing of JavaScript web applications for separating the concerns of the client and the server. Second, to support the construction of server interface descriptions for existing applications, we introduce an effective inference technique that learns...... communication patterns from sample data. By incorporating interface descriptions into the testing tool Artemis, our experimental results show that we increase the level of automation for high-coverage testing on a collection of JavaScript web applications that exchange JSON data between the clients and servers...

  5. An Adaptive Model Predictive Load Frequency Control Method for Multi-Area Interconnected Power Systems with Photovoltaic Generations

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2017-11-01

    Full Text Available As the penetration level of renewable distributed generations such as wind turbine generators and photovoltaic stations increases, the load frequency control issue of a multi-area interconnected power system becomes more challenging. This paper presents an adaptive model predictive load frequency control method for a multi-area interconnected power system with photovoltaic generation, considering nonlinear features such as a dead band for the governor and a generation rate constraint for the steam turbine. The dynamic characteristic of this system is first formulated as a discrete-time state space model. Then, the predictive dynamic model is obtained by introducing an expanded state vector, and rolling optimization of the control signal is implemented based on a cost function that minimizes the weighted sum of squared predicted errors and squared future control values. The simulation results on a typical two-area power system consisting of photovoltaic and thermal generators demonstrate the superiority of the proposed model predictive control method over state-of-the-art control techniques such as firefly algorithm, genetic algorithm, and population extremal optimization-based proportional-integral control methods under normal conditions, load disturbance, and parameter uncertainty.
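The rolling-optimization idea (re-solve a short-horizon quadratic cost at every step, apply only the first control move) can be sketched on a scalar toy model. The model x[k+1] = a·x[k] + b·u[k] and all numbers below are hypothetical stand-ins, not the paper's expanded state-space system.

```python
# One-step model predictive control for a scalar discrete-time plant.
# Cost per step: J(u) = q*(a*x + b*u)**2 + r*u**2, i.e. a weighted sum of
# the squared predicted error and the squared control value.

def mpc_one_step(a, b, q, r, x0):
    # Setting dJ/du = 0 gives the closed-form minimizer for one step:
    return -q * a * b * x0 / (q * b * b + r)

a, b, q, r = 0.9, 0.5, 1.0, 0.1   # hypothetical plant and weights
x = 1.0
for _ in range(5):                # receding horizon: re-solve each step
    u = mpc_one_step(a, b, q, r, x)
    x = a * x + b * u
print(abs(x) < 0.01)              # True: state is driven toward zero
```

Each applied control contracts the state by the factor a - q·a·b²/(q·b² + r) ≈ 0.257 here, so five steps are enough to bring |x| well under 0.01.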

  6. TRAP: A Three-Way Handshake Server for TCP Connection Establishment

    Directory of Open Access Journals (Sweden)

    Fu-Hau Hsu

    2016-11-01

    Full Text Available Distributed denial of service attacks have become more and more frequent nowadays. In 2013, a massive distributed denial of service (DDoS) attack was launched against Spamhaus, causing the service to shut down. In this paper, we present a three-way handshake server for Transmission Control Protocol (TCP) connection redirection utilizing TCP header options. When a legitimate client attempts to connect to a server undergoing a SYN-flood DDoS attack, it will try to initiate a three-way handshake. After it has successfully established a connection, the server replies with a reset (RST) packet, in which a new server address and a secret are embedded. The client can thus connect to the new server, which only accepts SYN packets carrying the correct secret.
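One way such a per-client secret could be derived and checked is with a keyed MAC over the client's address, similar in spirit to SYN cookies. This is our own illustration; the paper's actual secret construction and TCP-option layout are not specified here.

```python
# Illustrative derivation of a redirection secret: an HMAC over the
# client endpoint, truncated to fit a TCP header option. SERVER_KEY and
# the 4-byte truncation are hypothetical choices.
import hmac
import hashlib

SERVER_KEY = b"per-boot random key"      # hypothetical server state

def make_secret(client_ip, client_port):
    msg = f"{client_ip}:{client_port}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()[:4]

def accept_syn(client_ip, client_port, presented_secret):
    # Constant-time comparison, as any real check should use.
    return hmac.compare_digest(make_secret(client_ip, client_port),
                               presented_secret)

s = make_secret("203.0.113.7", 51514)
print(accept_syn("203.0.113.7", 51514, s))   # True
print(accept_syn("203.0.113.8", 51514, s))   # False: wrong client
```

Because the secret is recomputable from the key and the client endpoint, the redirected server needs no per-connection state to validate incoming SYNs.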

  7. A hybrid load flow and event driven simulation approach to multi-state system reliability evaluation

    International Nuclear Information System (INIS)

    George-Williams, Hindolo; Patelli, Edoardo

    2016-01-01

    Structural complexity of systems, coupled with their multi-state characteristics, renders their reliability and availability evaluation difficult. Notwithstanding the emergence of various techniques dedicated to complex multi-state system analysis, simulation remains the only approach applicable to realistic systems. However, most simulation algorithms are either system specific or limited to simple systems since they require enumerating all possible system states, defining the cut-sets associated with each state and monitoring their occurrence. In addition to being extremely tedious for large complex systems, state enumeration and cut-set definition require a detailed understanding of the system's failure mechanism. In this paper, a simple and generally applicable simulation approach, enhanced for multi-state systems of any topology is presented. Here, each component is defined as a Semi-Markov stochastic process and via discrete-event simulation, the operation of the system is mimicked. The principles of flow conservation are invoked to determine flow across the system for every performance level change of its components using the interior-point algorithm. This eliminates the need for cut-set definition and overcomes the limitations of existing techniques. The methodology can also be exploited to account for effects of transmission efficiency and loading restrictions of components on system reliability and performance. The principles and algorithms developed are applied to two numerical examples to demonstrate their applicability. - Highlights: • A discrete event simulation model based on load flow principles. • Model does not require system path or cut sets. • Applicable to binary and multi-state systems of any topology. • Supports multiple output systems with competing demand. • Model is intuitive and generally applicable.
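The flow-conservation step (determine deliverable flow across the system whenever a component changes performance level) can be shown on a trivially small network. The network shape and numbers are hypothetical, and a simple min over capacities stands in for the interior-point solve used in the paper.

```python
# Deliverable flow for a single source feeding one demand node through
# parallel lines: bounded by source output, total line capacity, and
# demand, by flow conservation.

def deliverable_flow(source_capacity, line_capacities, demand):
    # Flow out of the source must equal flow into the demand node, and
    # no element may carry more than its current performance level.
    return min(source_capacity, sum(line_capacities), demand)

# One line degraded from 40 to 25 units: the system can now deliver
# only 65 of the 80 units demanded.
print(deliverable_flow(100, [40, 25], 80))   # 65
```

Re-evaluating this bound at every simulated component transition is what lets the event-driven approach skip cut-set enumeration entirely.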

  8. Expert T-SQL window functions in SQL Server

    CERN Document Server

    Kellenberger, Kathi

    2015-01-01

    Expert T-SQL Window Functions in SQL Server takes you from any level of knowledge of windowing functions and turns you into an expert who can use these powerful functions to solve many T-SQL queries. Replace slow cursors and self-joins with queries that are easy to write and fantastically better performing, all through the magic of window functions. First introduced in SQL Server 2005, window functions came into full blossom with SQL Server 2012. They truly are one of the most notable developments in SQL in a decade, and every developer and DBA can benefit from their expressive power in sol

  9. A Fuzzy Control Course on the TED Server

    DEFF Research Database (Denmark)

    Dotoli, Mariagrazia; Jantzen, Jan

    1999-01-01

    The Training and Education Committee (TED) is a committee under ERUDIT, a Network of Excellence for fuzzy technology and uncertainty in Europe. The main objective of TED is to improve the training and educational possibilities for the nodes of ERUDIT. Since early 1999, TED has set up the TED server, an educational server that serves as a learning central for students and professionals working with fuzzy logic. Through the server, TED offers an online course on fuzzy control. The course concerns automatic control of an inverted pendulum, with a focus on rule based control by means of fuzzy logic. A ball...

  10. Improving Middle School Students’ Critical Thinking Skills Through Reading Infusion-Loaded Discovery Learning Model in the Science Instruction

    Science.gov (United States)

    Nuryakin; Riandi

    2017-02-01

    A study has been conducted to obtain a depiction of middle school students' critical thinking skills improvement through the implementation of a reading infusion-loaded discovery learning model in science instruction. A quasi-experimental study with the pretest-posttest control group design was used to engage 55 eighth-year middle school students in Tasikmalaya, divided into an experimental group of 28 students and a control group of 27 students. Critical thinking skills were measured using a critical thinking skills test in multiple-choice with reason format, administered before and after the instruction. The test comprised 28 items encompassing three essential concepts: vibration, waves, and the auditory sense. The critical thinking skills improvement was determined using the normalized gain score and statistically analyzed using the Mann-Whitney U test. The findings showed that the average normalized gain scores were 59 and 43 for the experimental and control groups respectively, both in the medium category. There were significant differences between the two groups' improvement. Thus, the implementation of the reading infusion-loaded discovery learning model could improve middle school students' critical thinking skills further than conventional learning.
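The normalized gain statistic used here is Hake's gain, which scales the raw pretest-to-posttest improvement by the maximum possible improvement. The function below is a generic implementation of that standard formula (scores in percent), not code from the study.

```python
# Hake's normalized gain: g = (post - pre) / (100 - pre), often reported
# on a 0-100 scale as below. Gains of roughly 30-70 fall in the
# "medium" category the study refers to.

def normalized_gain(pre, post):
    return 100.0 * (post - pre) / (100.0 - pre)

# A class moving from 40% to 75% on average lands in the medium band:
print(round(normalized_gain(40.0, 75.0)))   # 58
```

Scaling by (100 - pre) is what lets classes with very different pretest scores be compared on one improvement scale.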

  11. Aplikasi Billing Client/Server Dengan Mengunakan Microsoft Visual Basic 6.0

    OpenAIRE

    Sinukaban, Eva Solida

    2010-01-01

    This study aims to build a free billing server on a local network with UTP cable or Wi-Fi as the transmission medium. The LAN built here is a client-server network whose server runs the Windows XP Service Pack 2 operating system. The purpose of building this billing server application is to enable data sharing and communication between computers, so that those computers can be used as optimally as possible, both from the Se...

  12. AUTHENTICATION ALGORITHM FOR PARTICIPANTS OF INFORMATION INTEROPERABILITY IN PROCESS OF OPERATING SYSTEM REMOTE LOADING ON THIN CLIENT

    Directory of Open Access Journals (Sweden)

    Y. A. Gatchin

    2016-05-01

    Full Text Available Subject of Research. This paper presents a solution to the authentication problem for all components of information interoperability in the process of operating system network loading on a thin client from a terminal server. System Definition. In the proposed solution, the operating system integrity check is performed by a hardware-software module, including a USB token with protected memory for secure storage of cryptographic keys and a loader. The key requirement for the solution is mutual authentication of four participants: terminal server, thin client, token and user. We have created two algorithms for the problem solution. The first of the designed algorithms compares the encrypted one-time password (a random number) with the reference value stored in the memory of the token and updates this number in case of successful authentication. The second algorithm uses the public and private keys of the token and the server. As a result of the cryptographic transformation, the participants are authenticated and a secure channel is formed between the token, thin client and terminal server. Main Results. Additional research was carried out to find out if the designed algorithms meet the necessary requirements. Criteria used included applicability in a multi-access terminal system architecture, potential threats evaluation and overall system security. According to the analysis results, it is recommended to use the algorithm based on PKI due to its high scalability and usability. A high level of data security is ensured by the use of asymmetric cryptography, with the guarantee that participants' private keys are never sent during the authentication process. Practical Relevance. The designed PKI-based algorithm allows solving the problem with the use of cryptographic algorithms according to the state standard, even in the absence of a state standard for asymmetric cryptography. Thus, it can be applied in State Information Systems with increased information security requirements.
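The first algorithm (compare a stored one-time value, then rotate it on success) can be sketched as follows. The class name, the hash-based rotation rule, and the 16-byte value length are our own simplifications of the scheme described above.

```python
# One-time-value check with rotation: the token holds a reference value
# in protected memory; a successful match immediately replaces it, so a
# replayed value is rejected.
import hashlib
import secrets

class Token:
    def __init__(self):
        self.reference = secrets.token_bytes(16)   # protected memory

    def authenticate(self, presented):
        if not secrets.compare_digest(self.reference, presented):
            return False
        # Update the one-time value only after a successful check.
        self.reference = hashlib.sha256(self.reference).digest()[:16]
        return True

t = Token()
otp = t.reference                  # provisioned to the legitimate client
print(t.authenticate(otp))         # True
print(t.authenticate(otp))         # False: the value has been rotated
```

The second (PKI-based) algorithm replaces this shared secret with signatures under the token's and server's key pairs, which is why it scales better across many terminals.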

  13. Openlobby: an open game server for lobby and matchmaking

    Science.gov (United States)

    Zamzami, E. M.; Tarigan, J. T.; Jaya, I.; Hardi, S. M.

    2018-03-01

    Online multiplayer is one of the most essential features in modern games. However, while a basic multiplayer feature can be developed with simple computer network programming, creating a balanced multiplayer session requires additional player management components such as a game lobby and a matchmaking system. Our objective is to develop OpenLobby, a server available for other developers to use to support their multiplayer applications. The proposed system acts as a lobby and matchmaker where queueing players are matched to other players according to criteria defined by the developer. The solution provides an application programming interface that developers can use to interact with the server. For testing purposes, we developed a game that uses the server as its multiplayer server.
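A developer-defined matchmaking criterion of the kind described can be as simple as a maximum rating gap between queued players. The data model and threshold below are our own toy illustration, not OpenLobby's API.

```python
# Toy matchmaker: pair queued players whose ratings differ by at most
# max_gap; unmatched players keep waiting.
from collections import deque

def matchmake(queue, max_gap=100):
    """queue: iterable of (name, rating); returns list of matched pairs."""
    waiting, pairs = [], []
    for player in queue:
        for i, other in enumerate(waiting):
            if abs(player[1] - other[1]) <= max_gap:
                pairs.append((other[0], player[0]))
                del waiting[i]
                break
        else:
            waiting.append(player)          # no compatible opponent yet
    return pairs

q = deque([("ana", 1500), ("bob", 1900), ("cam", 1550), ("dee", 1850)])
print(matchmake(q))   # [('ana', 'cam'), ('bob', 'dee')]
```

A real lobby server would additionally time out long waits or widen the gap gradually, but the pairing rule itself stays this simple.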

  14. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    International Nuclear Information System (INIS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-01-01

    We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it 'multi-tier'. The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, are discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing will be outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described. And application characteristics of GUMS and VOMS which enable effective clustering will be explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
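The routing behavior governed by the health monitors (send traffic round-robin across pool members, skipping any that fail their checks) can be sketched in a few lines. The member names and the health predicate below are hypothetical, and this is a conceptual stand-in for the BIG-IP logic, not its configuration.

```python
# Health-monitored round-robin pool: pick() cycles through members and
# skips any the health-check callback reports as down.
import itertools

class Pool:
    def __init__(self, members, is_healthy):
        self._cycle = itertools.cycle(members)
        self._size = len(members)
        self._is_healthy = is_healthy      # health monitor callback

    def pick(self):
        for _ in range(self._size):        # at most one full lap
            m = next(self._cycle)
            if self._is_healthy(m):
                return m
        raise RuntimeError("no healthy members in pool")

down = {"gums2"}
pool = Pool(["gums1", "gums2", "gums3"], lambda m: m not in down)
print([pool.pick() for _ in range(4)])   # ['gums1', 'gums3', 'gums1', 'gums3']
```

Putting the same pattern in front of the back-end databases as well as the application servers is what makes the resiliency "multi-tier".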

  15. Using Pattern Recognition Techniques for Server Overload Detection

    NARCIS (Netherlands)

    Bezemer, C.P.; Cheplygina, V.; Zaidman, A.

    2011-01-01

    One of the key factors in customer satisfaction is application performance. To be able to guarantee good performance, it is necessary to take appropriate measures before a server overload occurs. While in small systems it is usually possible to predict server overload using a subjective human

  16. Server virtualization management of corporate network with hyper-v

    OpenAIRE

    Kovalenko, Taras

    2012-01-01

    This paper considers the main tasks and problems of server virtualization. The practical value of virtualization in a corporate network, as well as the advantages and disadvantages of applying server virtualization, are also considered.

  17. Empirical Analysis of Server Consolidation and Desktop Virtualization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2013-01-01

    Full Text Available The transition from physical servers to a virtual server infrastructure (VSI) and from desktop devices to a virtual desktop infrastructure (VDI) raises the crucial problems of server consolidation, virtualization performance, virtual machine density, total cost of ownership (TCO), and return on investment (ROI). Besides, how to appropriately choose a hypervisor for the desired server/desktop virtualization is really challenging, because a trade-off between virtualization performance and cost is a hard decision to make in the cloud. This paper introduces five hypervisors to establish the virtual environment and then gives a careful assessment based on the C/P ratio, which is derived from a composite index of consolidation ratio, virtual machine density, TCO, and ROI. As a result, even though ESX Server obtains the highest ROI and lowest TCO in server virtualization and Hyper-V R2 gains the best performance of virtual machine management, both of them cost too much. Instead, the best choice is Proxmox Virtual Environment (Proxmox VE), because it not only greatly reduces the initial investment needed to own a virtual server/desktop infrastructure, but also obtains the lowest C/P ratio.

  18. Windows Server® 2008 Inside Out

    CERN Document Server

    Stanek, William R

    2009-01-01

    Learn how to conquer Windows Server 2008-from the inside out! Designed for system administrators, this definitive resource features hundreds of timesaving solutions, expert insights, troubleshooting tips, and workarounds for administering Windows Server 2008-all in concise, fast-answer format. You will learn how to perform upgrades and migrations, automate deployments, implement security features, manage software updates and patches, administer users and accounts, manage Active Directory® directory services, and more. With INSIDE OUT, you'll discover the best and fastest ways to perform core a

  19. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    Science.gov (United States)

    Wen, Qiaoyan; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation task to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in two-cloud-servers scenario. Our main idea is to transform the outsourced data respectively encrypted by different users' public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function. PMID:24982949

  20. Improving data retrieval rates using remote data servers

    International Nuclear Information System (INIS)

    D'Ottavio, T.; Frak, B.; Nemesure, S.; Morris, J.

    2012-01-01

    The power and scope of modern control systems has led to an increased amount of data being collected and stored, including data collected at high (kHz) frequencies. One consequence is that users now routinely make data requests that can cause gigabytes of data to be read and displayed. Given that a user's patience can be measured in seconds, this can be quite a technical challenge. This paper explores one possible solution to this problem: the creation of remote data servers whose performance is optimized to handle context-sensitive data requests. Methods for increasing data delivery performance include the use of high speed network connections between the stored data and the data servers, smart caching of frequently used data, and the culling of data delivered as determined by the context of the data request. This paper describes decisions made when constructing these servers and compares data retrieval performance by clients that use or do not use an intermediate data server. (authors)
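The caching and culling ideas combine naturally: the server caches frequent requests and thins a large stored series down to what the requesting display can actually use. The archive name, decimation rule, and cache size below are our own illustration of the concept, not the paper's implementation.

```python
# Context-sensitive culling with caching: the client states how many
# points it can display, and the server decimates the stored series to
# at most that many samples, caching repeated requests.
from functools import lru_cache

SERIES = {"beam:current": list(range(1_000_000))}   # stand-in archive

@lru_cache(maxsize=32)              # smart caching of frequent requests
def fetch(name, max_points):
    data = SERIES[name]
    if len(data) <= max_points:
        return tuple(data)
    step = -(-len(data) // max_points)   # ceiling division
    return tuple(data[::step])           # keep every step-th sample

reply = fetch("beam:current", 2000)
print(len(reply) <= 2000)   # True: never more data than the client asked for
```

A real server would decimate with min/max or averaging per bucket rather than simple striding, so that short spikes survive the reduction, but the bandwidth saving is the same.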

  1. Two-cloud-servers-assisted secure outsourcing multiparty computation.

    Science.gov (United States)

    Sun, Yi; Wen, Qiaoyan; Zhang, Yudong; Zhang, Hua; Jin, Zhengping; Li, Wenmin

    2014-01-01

    We focus on how to securely outsource computation task to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in two-cloud-servers scenario. Our main idea is to transform the outsourced data respectively encrypted by different users' public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function.

  2. Getting started with Oracle WebLogic Server 12c developer's guide

    CERN Document Server

    Nunes, Fabio Mazanatti

    2013-01-01

    Getting Started with Oracle WebLogic Server 12c is a fast-paced and feature-packed book, designed to get you working with Java EE 6, JDK 7 and Oracle WebLogic Server 12c straight away, so start developing your own applications.Getting Started with Oracle WebLogic Server 12c: Developer's Guide is written for developers who are just getting started, or who have some experience, with Java EE who want to learn how to develop for and use Oracle WebLogic Server. Getting Started with Oracle WebLogic Server 12c: Developer's Guide also provides a great overview of the updated features of the 12c releas

  3. The Impact of Load Carriage on Measures of Power and Agility in Tactical Occupations: A Critical Review.

    Science.gov (United States)

    Joseph, Aaron; Wiley, Amy; Orr, Robin; Schram, Benjamin; Dawes, J Jay

    2018-01-07

    The current literature suggests that load carriage can impact on a tactical officer's mobility, and that survival in the field may rely on the officer's mobility. The ability for humans to generate power and agility is critical for performance of the high-intensity movements required in the field of duty. The aims of this review were to critically examine the literature investigating the impacts of load carriage on measures of power and agility and to synthesize the findings. The authors completed a search of the literature using key search terms in four databases. After relevant studies were located using strict inclusion and exclusion criteria, the studies were critically appraised using the Downs and Black Checklist and relevant data were extracted and tabled. Fourteen studies were deemed relevant for this review, ranging in percentage quality scores from 42.85% to 71.43%. Outcome measures used in these studies to indicate levels of power and agility included short-distance sprints, vertical jumps, and agility runs, among others. Performance of both power and agility was shown to decrease when tactical load was added to the participants. This suggests that the increase in weight carried by tactical officers may put this population at risk of injury or fatality in the line of duty.

  4. The Impact of Load Carriage on Measures of Power and Agility in Tactical Occupations: A Critical Review

    Directory of Open Access Journals (Sweden)

    Aaron Joseph

    2018-01-01

    Full Text Available The current literature suggests that load carriage can impact on a tactical officer’s mobility, and that survival in the field may rely on the officer’s mobility. The ability for humans to generate power and agility is critical for performance of the high-intensity movements required in the field of duty. The aims of this review were to critically examine the literature investigating the impacts of load carriage on measures of power and agility and to synthesize the findings. The authors completed a search of the literature using key search terms in four databases. After relevant studies were located using strict inclusion and exclusion criteria, the studies were critically appraised using the Downs and Black Checklist and relevant data were extracted and tabled. Fourteen studies were deemed relevant for this review, ranging in percentage quality scores from 42.85% to 71.43%. Outcome measures used in these studies to indicate levels of power and agility included short-distance sprints, vertical jumps, and agility runs, among others. Performance of both power and agility was shown to decrease when tactical load was added to the participants. This suggests that the increase in weight carried by tactical officers may put this population at risk of injury or fatality in the line of duty.

  5. Instant Microsoft SQL Server Analysis Services 2012 dimensions and cube

    CERN Document Server

    Acharya, Anurag

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. Written in a practical, friendly manner, this book will take you through the journey from installing SQL Server to developing your first cubes. "Microsoft SQL Server Analysis Services 2012 Dimensions and Cube Starter" is targeted at anyone who wants to get started with cube development in Microsoft SQL Server Analysis Services. Regardless of whether you are a SQL Server developer who knows nothing about cube development or SSAS or even OLAP, you

  6. LigParGen web server: an automatic OPLS-AA parameter generator for organic ligands

    Science.gov (United States)

    Dodda, Leela S.

    2017-01-01

    Abstract The accurate calculation of protein/nucleic acid–ligand interactions or condensed phase properties by force field-based methods require a precise description of the energetics of intermolecular interactions. Despite the progress made in force fields, small molecule parameterization remains an open problem due to the magnitude of the chemical space; the most critical issue is the estimation of a balanced set of atomic charges with the ability to reproduce experimental properties. The LigParGen web server provides an intuitive interface for generating OPLS-AA/1.14*CM1A(-LBCC) force field parameters for organic ligands, in the formats of commonly used molecular dynamics and Monte Carlo simulation packages. This server has high value for researchers interested in studying any phenomena based on intermolecular interactions with ligands via molecular mechanics simulations. It is free and open to all at jorgensenresearch.com/ligpargen, and has no login requirements. PMID:28444340

  7. Client/server approach to image capturing

    Science.gov (United States)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven

  8. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)

  9. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    International Nuclear Information System (INIS)

    Valassi, A; Kalkhof, A; Bartoldus, R; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in the summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status and its usage in ATLAS.

  10. Two-stage discrete-continuous multi-objective load optimization: An industrial consumer utility approach to demand response

    International Nuclear Information System (INIS)

    Abdulaal, Ahmed; Moghaddass, Ramin; Asfour, Shihab

    2017-01-01

    Highlights: •Two-stage model links discrete-optimization to real-time system dynamics operation. •The solutions obtained are non-dominated Pareto optimal solutions. •Computationally efficient GA solver through customized chromosome coding. •Modest to considerable savings are achieved depending on the consumer’s preference. -- Abstract: In the wake of today’s highly dynamic and competitive energy markets, optimal dispatching of energy sources requires effective demand responsiveness. Suppliers have adopted a dynamic pricing strategy in efforts to control the downstream demand. This method however requires consumer awareness, flexibility, and timely responsiveness. While residential activities are more flexible and schedulable, larger commercial consumers remain an obstacle due to the impacts on industrial performance. This paper combines methods from quadratic, stochastic, and evolutionary programming with multi-objective optimization and continuous simulation, to propose a two-stage discrete-continuous multi-objective load optimization (DiCoMoLoOp) autonomous approach for industrial consumer demand response (DR). Stage 1 defines discrete-event load shifting targets. Accordingly, controllable loads are continuously optimized in stage 2 while considering the consumer’s utility. Utility functions, which measure the loads’ time value to the consumer, are derived and weights are assigned through an analytical hierarchy process (AHP). The method is demonstrated for an industrial building model using real data. The proposed method integrates with building energy management system and solves in real-time with autonomous and instantaneous load shifting in the hour-ahead energy price (HAP) market. The simulation shows the occasional existence of multiple load management options on the Pareto frontier. Finally, the computed savings, based on the simulation analysis with real consumption, climate, and price data, ranged from modest to considerable amounts

  11. Mac OS X Snow Leopard Server For Dummies

    CERN Document Server

    Rizzo, John

    2009-01-01

Making Everything Easier! Mac OS® X Snow Leopard Server for Dummies. Learn to: set up and configure a Mac network with Snow Leopard Server; administer, secure, and troubleshoot the network; incorporate a Mac subnet into a Windows Active Directory® domain; take advantage of Unix® power and security. John Rizzo. Want to set up and administer a network even if you don't have an IT department? Read on! Like everything Mac, Snow Leopard Server was designed to be easy to set up and use. Still, there are so many options and features that this book will save you heaps of time and effort. It wa

  12. Susceptibility of forests in the northeastern USA to nitrogen and sulfur deposition: critical load exceedance and forest health

    Science.gov (United States)

    N. Duarte; L.H. Pardo; M.J. Robin-Abbott

    2013-01-01

    The objectives of this study were to assess susceptibility to acidification and nitrogen (N) saturation caused by atmospheric deposition to northeastern US forests, evaluate the benefits and shortcomings of making critical load assessments using regional data, and assess the relationship between expected risk (exceedance) and forest health. We calculated the critical...

  13. Measurement study of multi-party video conferencing

    NARCIS (Netherlands)

    Lu, Y.; Zhao, Y.; Kuipers, F.A.; Van Mieghem, P.

    2010-01-01

    More and more free multi-party video conferencing applications are readily available over the Internet and both Server-to-Client (S/C) or Peer-to-Peer (P2P) technologies are used. Investigating their mechanisms, analyzing their system performance, and measuring their quality are important objectives

  14. Evolving Relationship Structures in Multi-sourcing Arrangements: The Case of Mission Critical Outsourcing

    Science.gov (United States)

    Heitlager, Ilja; Helms, Remko; Brinkkemper, Sjaak

Information Technology Outsourcing practice and research mainly consider the outsourcing phenomenon as a generic fulfilment of the IT function by external parties. Driven by the logic of commoditization, core competencies, and economies of scale, assets, existing departments, and IT functions are transferred to external parties. Although the generic approach might work for desktop outsourcing, where standardisation is the dominant factor, it does not work for the management of mission critical applications. Managing mission critical applications requires a different approach, in which building relationships is critical. The relationships involve inter- and intra-organisational parties in a multi-sourcing arrangement, called an IT service chain, consisting of multiple (specialist) parties that have to collaborate closely to deliver high quality services.

  15. Lichen-based critical loads for atmospheric nitrogen deposition in Western Oregon and Washington forests, USA

    Science.gov (United States)

    Linda H. Geiser; Sarah E. Jovan; Doug A. Glavich; Matthew K. Porter

    2010-01-01

Critical loads (CLs) define maximum atmospheric deposition levels apparently preventative of ecosystem harm. We present the first nitrogen CLs for northwestern North America's maritime forests. Using multiple linear regression, we related epiphytic-macrolichen community composition to: 1) wet deposition from the National Atmospheric Deposition Program, 2) wet, dry,...

  16. Effect of Eccentricity of Load on Critical Force of Thin-Walled Columns CFRP

    Directory of Open Access Journals (Sweden)

    Pawel Wysmulski

    2017-09-01

Full Text Available The subject of this study was a thin-walled C-section made of carbon fiber reinforced polymer (CFRP). The column was subjected to eccentric compression in the established direction. In the computer simulation, the boundary conditions were assumed in the form of articulated support of the sections of the column. The study focused on analyzing the effect of eccentricity on the critical force value. The research was conducted using two independent methods: numerical and experimental. Numerical simulations were performed with the finite element method in the Abaqus® system. The results demonstrated the high sensitivity of the critical force corresponding to local buckling of the channel section to the load eccentricity.
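
The sensitivity of the allowable load to eccentricity can be illustrated with the classical elastic secant formula for a pin-ended column. This is a textbook isotropic approximation, not the paper's FEM/CFRP analysis, and all section properties below are made up:

```python
import math

def max_stress(P, e, E, A, I, c, L):
    """Classical secant formula: peak compressive stress in a pin-ended
    column carrying axial load P at eccentricity e (elastic theory)."""
    r = math.sqrt(I / A)                                # radius of gyration
    arg = (L / (2.0 * r)) * math.sqrt(P / (E * A))
    return (P / A) * (1.0 + (e * c / r**2) / math.cos(arg))

def allowable_load(e, E, A, I, c, L, sigma_lim):
    """Largest P keeping max_stress <= sigma_lim, searched below the Euler load."""
    p_euler = math.pi**2 * E * I / L**2
    lo, hi = 0.0, 0.999 * p_euler
    for _ in range(80):                                 # bisection
        mid = 0.5 * (lo + hi)
        if max_stress(mid, e, E, A, I, c, L) <= sigma_lim:
            lo = mid
        else:
            hi = mid
    return lo

# Made-up section properties (SI units), not the paper's CFRP channel data.
E, A, I, c, L, sigma_lim = 70e9, 4e-4, 8e-8, 0.02, 1.0, 250e6
eccs = (0.001, 0.005, 0.02)                             # eccentricities, metres
loads = [allowable_load(e, E, A, I, c, L, sigma_lim) for e in eccs]
print([round(p / 1e3, 1) for p in loads])               # kN, decreasing with e
```

Even this simple model reproduces the qualitative trend reported in the paper: growing eccentricity sharply reduces the load the column can carry.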

  17. Multi Canister Overpack (MCO) Handling Machine Trolley Seismic Uplift Constraint Design Loads

    International Nuclear Information System (INIS)

    SWENSON, C.E.

    2000-01-01

The MCO Handling Machine (MHM) trolley moves along the top of the MHM bridge girders on east-west oriented rails. To prevent trolley wheel uplift during a seismic event, passive uplift constraints are provided as shown in Figure 1-1. North-south trolley wheel movement is prevented by flanges on the trolley wheels. When the MHM is positioned over a Multi-Canister Overpack (MCO) storage tube, east-west seismic restraints are activated to prevent trolley movement during MCO handling. The active seismic constraints consist of a plunger, which is inserted into slots positioned along the tracks as shown in Figure 1-1. When the MHM trolley is moving between storage tube positions, the active seismic restraints are not engaged. The MHM has been designed and analyzed in accordance with ASME NOG-1-1995. The ALSTHOM seismic analysis (Reference 3) reported seismic uplift restraint loading and EDERER performed corresponding structural calculations. The ALSTHOM and EDERER calculations were performed with the east-west seismic restraints activated and the uplift restraints experiencing only vertical loading. In support of development of the CSB Safety Analysis Report (SAR), an evaluation of the MHM seismic response was requested for the case where the east-west trolley restraints are not engaged. For this case, the associated trolley movements would result in east-west lateral loads on the uplift constraints due to friction, as shown in Figure 1-2. During preliminary evaluations, questions were raised as to whether the EDERER calculations considered the latest ALSTHOM seismic analysis loads (See NCR No. 00-SNFP-0008, Reference 5). Further evaluation led to the conclusion that the EDERER calculations used appropriate vertical loading, but the uplift restraints would need to be re-analyzed and modified to account for lateral loading. The disposition of NCR 00-SNFP-0008 will track the redesign and modification effort. The purpose of this calculation is to establish bounding seismic

  18. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  19. From Server to Desktop: Capital and Institutional Planning for Client/Server Technology.

    Science.gov (United States)

    Mullig, Richard M.; Frey, Keith W.

    1994-01-01

    Beginning with a request for an enhanced system for decision/strategic planning support, the University of Chicago's biological sciences division has developed a range of administrative client/server tools, instituted a capital replacement plan for desktop technology, and created a planning and staffing approach enabling rapid introduction of new…

  20. iPhone with Microsoft Exchange Server 2010 Business Integration and Deployment

    CERN Document Server

    Goodman, Steve

    2012-01-01

    iPhone with Microsoft Exchange Server 2010 - Business Integration and Deployment is a practical, step-by-step tutorial on planning, installing and configuring Exchange Server to deploy iPhones into your business. This book is aimed at system administrators who don't necessarily know about Exchange Server 2010 or ActiveSync-based mobile devices. A basic level of knowledge around Windows Servers is expected, and knowledge of smartphones and email systems in general will make some topics a little easier.

  1. Fracture toughness of epoxy/multi-walled carbon nanotube nano-composites under bending and shear loading conditions

    International Nuclear Information System (INIS)

    Ayatollahi, M.R.; Shadlou, S.; Shokrieh, M.M.

    2011-01-01

Research highlights: → Mode I and mode II fracture tests were conducted on epoxy/MWCNT nano-composites. → Addition of MWCNT to epoxy increased both K_Ic and K_IIc of nano-composites. → The improvement in K_IIc was more pronounced than in K_Ic. → Mode I and mode II fracture surfaces were studied by scanning electron microscopy. -- Abstract: The effects of multi-walled carbon nanotubes (MWCNTs) on the mechanical properties of epoxy/MWCNT nano-composites were studied with emphasis on fracture toughness under bending and shear loading conditions. Several finite element (FE) analyses were performed to determine appropriate shear loading boundary conditions for a single-edge notch bend specimen (SENB) and an equation was derived for calculating the shear loading fracture toughness from the fracture load. It was seen that the increase in fracture toughness of nano-composite depends on the type of loading. That is to say, the presence of MWCNTs had a greater effect on fracture toughness of nano-composites under shear loading compared with normal loading. To study the fracture mechanisms, several scanning electron microscopy (SEM) pictures were taken from the fracture surfaces. A correlation was found between the characteristics of fracture surface and the mechanical behaviors observed in the fracture tests.

  2. Multi-thread Parallel Speech Recognition for Mobile Applications

    Directory of Open Access Journals (Sweden)

    LOJKA Martin

    2014-05-01

Full Text Available In this paper, a server-based solution for a multi-thread large-vocabulary automatic speech recognition engine is described, along with practical application examples for Android OS and HTML5. The basic idea was to make speech recognition available to a full variety of applications for computers and especially for mobile devices. The speech recognition engine should be independent of commercial products and services (where the dictionary cannot be modified). Use of third-party services can also pose a security and privacy problem in specific applications, when unsecured audio data must not be sent to uncontrolled environments (voice data transferred to servers around the globe). Using our experience with speech recognition applications, we have constructed a multi-thread server-based speech recognition solution with a simple application programming interface (API) to the speech recognition engine, adapted to the specific needs of a particular application.
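
The multi-thread dispatch idea can be sketched as a worker pool that decodes independent requests concurrently. The `recognize` function below is a hypothetical stand-in for the actual engine, which is not described at code level in the abstract:

```python
from concurrent.futures import ThreadPoolExecutor

def recognize(audio_chunk):
    """Hypothetical stand-in for the ASR decoder: a real engine would run
    acoustic and language models here; this stub just labels the chunk."""
    return f"utt-{len(audio_chunk)}"

def serve(requests, workers=4):
    """Dispatch independent recognition requests to a pool of decoder threads,
    returning transcripts in request order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(recognize, requests))

transcripts = serve([b"\x00" * n for n in (160, 320, 480)])
print(transcripts)  # → ['utt-160', 'utt-320', 'utt-480']
```

Keeping the decoder on the server side is what lets the dictionary and models be modified freely, and keeps audio data inside a controlled environment.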

  3. PONDEROSA-C/S: client-server based software package for automated protein 3D structure determination.

    Science.gov (United States)

    Lee, Woonghee; Stark, Jaime L; Markley, John L

    2014-11-01

Peak-picking Of Noe Data Enabled by Restriction Of Shift Assignments-Client Server (PONDEROSA-C/S) builds on the original PONDEROSA software (Lee et al. in Bioinformatics 27:1727-1728. doi: 10.1093/bioinformatics/btr200, 2011) and includes improved features for structure calculation and refinement. PONDEROSA-C/S consists of three programs: Ponderosa Server, Ponderosa Client, and Ponderosa Analyzer. PONDEROSA-C/S takes as input the protein sequence, a list of assigned chemical shifts, and nuclear Overhauser data sets ((13)C- and/or (15)N-NOESY). The output is a set of assigned NOEs and 3D structural models for the protein. Ponderosa Analyzer supports the visualization, validation, and refinement of the results from Ponderosa Server. These tools enable semi-automated NMR-based structure determination of proteins in a rapid and robust fashion. We present examples showing the use of PONDEROSA-C/S in solving structures of four proteins: two that enable comparison with the original PONDEROSA package, and two from the Critical Assessment of automated Structure Determination by NMR (Rosato et al. in Nat Methods 6:625-626. doi: 10.1038/nmeth0909-625, 2009) competition. The software package can be downloaded freely in binary format from http://pine.nmrfam.wisc.edu/download_packages.html. Registered users of the National Magnetic Resonance Facility at Madison can submit jobs to the PONDEROSA-C/S server at http://ponderosa.nmrfam.wisc.edu, where instructions and tutorials can be found. Structures are normally returned within 1-2 days.

  4. Microsoft SQL Server OLAP Solution - A Survey

    OpenAIRE

    Badiozamany, Sobhan

    2010-01-01

    Microsoft SQL Server 2008 offers technologies for performing On-Line Analytical Processing (OLAP), directly on data stored in data warehouses, instead of moving the data into some offline OLAP tool. This brings certain benefits, such as elimination of data copying and better integration with the DBMS compared with off-line OLAP tools. This report reviews SQL Server support for OLAP, solution architectures, tools and components involved. Standard storage options are discussed but the focus of ...

  5. Critical experiments on enriched uranium graphite moderated cores

    International Nuclear Information System (INIS)

    Kaneko, Yoshihiko; Akino, Fujiyoshi; Kitadate, Kenji; Kurokawa, Ryosuke

    1978-07-01

    A variety of 20 % enriched uranium loaded and graphite-moderated cores consisting of the different lattice cells in a wide range of the carbon to uranium atomic ratio have been built at Semi-Homogeneous Critical Experimental Assembly (SHE) to perform the critical experiments systematically. In the present report, the experimental results for homogeneously or heterogeneously fuel loaded cores and for simulation core of the experimental reactor for a multi-purpose high temperature reactor are filed so as to be utilized for evaluating the accuracy of core design calculation for the experimental reactor. The filed experimental data are composed of critical masses of uranium, kinetic parameters, reactivity worths of the experimental control rods and power distributions in the cores with those rods. Theoretical analyses are made for the experimental data by adopting a simple ''homogenized cylindrical core model'' using the nuclear data of ENDF/B-III, which treats the neutron behaviour after smearing the lattice cell structure. It is made clear from a comparison between the measurement and the calculation that the group constants and fundamental methods of calculations, based on this theoretical model, are valid for the homogeneously fuel loaded cores, but not for both of the heterogeneously fuel loaded cores and the core for simulation of the experimental reactor. Then, it is pointed out that consideration to semi-homogeneous property of the lattice cells for reactor neutrons is essential for high temperature graphite-moderated reactors using dispersion fuel elements of graphite and uranium. (author)

  6. The impact of occupational load carriage on carrier mobility: a critical review of the literature.

    Science.gov (United States)

    Carlton, Simon D; Orr, Robin M

    2014-01-01

    Military personnel and firefighters are required to carry occupational loads and complete tasks in hostile and unpredictable environments where a lack of mobility may risk lives. This review critically examines the literature investigating the impacts of load carriage on the mobility of these specialist personnel. Several literature databases, reference lists, and subject matter experts were employed to identify relevant studies. Studies meeting the inclusion criteria were critiqued using the Downs and Black protocol. Inter-rater agreement was determined by Cohen's κ. Twelve original research studies, which included male and female participants from military and firefighting occupations, were critiqued (κ = .81). A review of these papers found that as the carried load weight increased, carrier mobility during aerobic tasks (like road marching) and anaerobic tasks (like obstacle course negotiation) decreased. As such, it can be concluded that the load carried by some specialist personnel may increase their occupational risk by reducing their mobility.

  7. Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2014-01-01

    Full Text Available We focus on how to securely outsource computation task to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in two-cloud-servers scenario. Our main idea is to transform the outsourced data respectively encrypted by different users’ public keys to the ones that are encrypted by the same two private keys of the two assisted servers so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result so that all authorized users can recover the desired result while other unauthorized ones including the two servers cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both of the computation and the communication complexities of each user in our solution are independent of the computing function.
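
The two-server outsourcing idea can be illustrated with plain additive secret sharing. This is a deliberate simplification: the paper operates on lattice-based encrypted data, whereas the toy modulus below only shows why neither server alone learns the users' inputs:

```python
import secrets

MOD = 2**61 - 1  # toy modulus; the paper works over lattice-based ciphertexts

def share(x):
    """Split x into two additive shares; either share alone is uniformly random."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(a, b):
    return (a + b) % MOD

# Each user splits its private input between the two assisting servers.
inputs = [17, 25, 8]
pairs = [share(x) for x in inputs]
server1 = [a for a, _ in pairs]   # S1 never sees S2's shares, and vice versa
server2 = [b for _, b in pairs]

# Each server sums locally; only combining both partial sums reveals the result.
total = reconstruct(sum(server1) % MOD, sum(server2) % MOD)
print(total)  # → 50
```

As in the protocol, the computation is noninteractive for the users: they upload shares once, and only an authorized party holding both servers' outputs can recover the result.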

  8. Evaluation of the Intel Westmere-EP server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2010-01-01

    In this paper we report on a set of benchmark results recently obtained by CERN openlab when comparing the 6-core “Westmere-EP” processor with Intel’s previous generation of the same microarchitecture, the “Nehalem-EP”. The former is produced in a new 32nm process, the latter in 45nm. Both platforms are dual-socket servers. Multiple benchmarks were used to get a good understanding of the performance of the new processor. We used both industry-standard benchmarks, such as SPEC2006, and specific High Energy Physics benchmarks, representing both simulation of physics detectors and data analysis of physics events. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores via Simultaneous Multi-Threading (SMT), the cache sizes available, the memory configuration installed, as well...

  9. Mfold web server for nucleic acid folding and hybridization prediction.

    Science.gov (United States)

    Zuker, Michael

    2003-07-01

    The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and 'energy dot plots', are available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.

  10. A Collaborative Digital Pathology System for Multi-Touch Mobile and Desktop Computing Platforms

    KAUST Repository

    Jeong, W.

    2013-06-13

Collaborative slide image viewing systems are becoming increasingly important in pathology applications such as telepathology and E-learning. Despite rapid advances in computing and imaging technology, current digital pathology systems have limited performance with respect to remote viewing of whole slide images on desktop or mobile computing devices. In this paper we present a novel digital pathology client-server system that supports collaborative viewing of multi-plane whole slide images over standard networks using multi-touch-enabled clients. Our system is built upon a standard HTTP web server and a MySQL database to allow multiple clients to exchange image and metadata concurrently. We introduce a domain-specific image-stack compression method that leverages real-time hardware decoding on mobile devices. It adaptively encodes image stacks in a decorrelated colour space to achieve extremely low bitrates (0.8 bpp) with very low loss of image quality. We evaluate the image quality of our compression method and the performance of our system for diagnosis with an in-depth user study. © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  11. A Collaborative Digital Pathology System for Multi-Touch Mobile and Desktop Computing Platforms

    KAUST Repository

    Jeong, W.; Schneider, J.; Hansen, A.; Lee, M.; Turney, S. G.; Faulkner-Jones, B. E.; Hecht, J. L.; Najarian, R.; Yee, E.; Lichtman, J. W.; Pfister, H.

    2013-01-01

Collaborative slide image viewing systems are becoming increasingly important in pathology applications such as telepathology and E-learning. Despite rapid advances in computing and imaging technology, current digital pathology systems have limited performance with respect to remote viewing of whole slide images on desktop or mobile computing devices. In this paper we present a novel digital pathology client-server system that supports collaborative viewing of multi-plane whole slide images over standard networks using multi-touch-enabled clients. Our system is built upon a standard HTTP web server and a MySQL database to allow multiple clients to exchange image and metadata concurrently. We introduce a domain-specific image-stack compression method that leverages real-time hardware decoding on mobile devices. It adaptively encodes image stacks in a decorrelated colour space to achieve extremely low bitrates (0.8 bpp) with very low loss of image quality. We evaluate the image quality of our compression method and the performance of our system for diagnosis with an in-depth user study. © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  12. CERN Document Server (CDS): Introduction

    CERN Multimedia

    CERN. Geneva; Costa, Flavio

    2017-01-01

    A short online tutorial introducing the CERN Document Server (CDS). Basic functionality description, the notion of Revisions and the CDS test environment. Links: CDS Production environment CDS Test environment  

  13. Identification of critical equipment and determination of operational limits in helium refrigerators under pulsed heat load

    Science.gov (United States)

    Dutta, Rohan; Ghosh, Parthasarathi; Chowdhury, Kanchan

    2014-01-01

    Large-scale helium refrigerators are subjected to pulsed heat load from tokamaks. As these plants are designed for constant heat loads, operation under such varying load may lead to instability in plants thereby tripping the operation of different equipment. To understand the behavior of the plant subjected to pulsed heat load, an existing plant of 120 W at 4.2 K and another large-scale plant of 18 kW at 4.2 K have been analyzed using a commercial process simulator Aspen Hysys®. A similar heat load characteristic has been applied in both quasi steady state and dynamic analysis to determine critical stages and equipment of these plants from operational point of view. It has been found that the coldest part of both the cycles consisting JT-stage and its preceding reverse Brayton stage are the most affected stages of the cycles. Further analysis of the above stages and constituting equipment revealed limits of operation with respect to variation of return stream flow rate resulted from such heat load variations. The observations on the outcome of the analysis can be used for devising techniques for steady operation of the plants subjected to pulsed heat load.

  14. Instant Hyper-v Server Virtualization starter

    CERN Document Server

    Eguibar, Vicente Rodriguez

    2013-01-01

Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. The book takes a tutorial approach that guides the user step by step toward virtualization. It is conceived for system administrators and advanced PC enthusiasts who want to venture into the virtualization world. Although the book starts from scratch, knowledge of server operating systems, LANs, and networking is expected. A good background in server administration is desirable, including networking service

  15. Look-ahead policies for admission to a single server loss system

    NARCIS (Netherlands)

    Nawijn, W.M.

    1990-01-01

    Consider a single server loss system in which the server, being idle, may reject or accept an arriving customer for service depending on the state at the arrival epoch. It is assumed that at every arrival epoch the server knows the service time of the arriving customer, the arrival time of the next
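
A minimal simulation of such a loss system is sketched below, with a hypothetical "finish before the next arrival" acceptance rule standing in for the paper's look-ahead policies; the arrival and service rates are illustrative only:

```python
import random

def simulate(policy, n=20000, seed=7):
    """Single-server loss system: at each arrival epoch the idle server sees
    the customer's service time and the time to the next arrival, then the
    policy accepts or rejects. Customers finding a busy server are lost."""
    rng = random.Random(seed)
    clock, busy_until = 0.0, 0.0
    served = found_busy = 0
    for _ in range(n):
        service = rng.expovariate(0.8)   # this customer's service requirement
        gap = rng.expovariate(1.0)       # time until the next arrival
        if clock < busy_until:
            found_busy += 1              # lost: server occupied
        elif policy(service, gap):
            busy_until = clock + service
            served += 1
        clock += gap
    return served / n, found_busy

accept_all = simulate(lambda s, gap: True)
look_ahead = simulate(lambda s, gap: s <= gap)  # accept only jobs finishing first
print(accept_all[0], look_ahead[0])             # fractions of customers served
```

Under the look-ahead rule, every accepted job completes before the next arrival, so no customer ever finds the server busy; the trade-off is that long jobs are turned away even when the server is idle.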

  16. Dynamic stability under sudden loads

    International Nuclear Information System (INIS)

    Simitses, G.J.

    1998-01-01

The concept of dynamic stability of elastic structures subjected to sudden (step) loads is discussed. The various criteria and related methodologies for estimating critical conditions are presented, with the emphasis on their similarities and differences. These are demonstrated by employing a simple mechanical model. Several structural configurations are analyzed, for demonstration purposes, with the intention of comparing critical dynamic loads to critical static loads. These configurations include shallow arches and shallow spherical caps, two-bar frames, and imperfect cylindrical shells of metallic as well as laminated composite construction. In the demonstration examples, the effect of static preloading on the dynamic critical load is presented

  17. HDF-EOS Web Server

    Science.gov (United States)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeN-DAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.

  18. Sending servers to Morocco

    CERN Multimedia

    Joannah Caborn Wengler

    2012-01-01

    Did you know that computer centres are like people? They breathe air in and out like a person, they have to be kept at the right temperature, and they can even be organ donors. As part of a regular cycle of equipment renewal, the CERN Computer Centre has just donated 161 retired servers to universities in Morocco.   Prof. Abdeslam Hoummada and CERN DG Rolf Heuer seeing off the servers on the beginning of their journey to Morocco. “Many people don’t realise, but the Computer Centre is like a living thing. You don’t just install equipment and it runs forever. We’re continually replacing machines, broken parts and improving things like the cooling.” Wayne Salter, Leader of the IT Computing Facilities Group, watches over the Computer Centre a bit like a nurse monitoring a patient’s temperature, especially since new international recommendations for computer centre environmental conditions were released. “A new international s...

  19. Supervisory control system implemented in programmable logical controller web server

    OpenAIRE

    Milavec, Simon

    2012-01-01

    In this thesis, we study the feasibility of supervisory control and data acquisition (SCADA) system realisation in a web server of a programmable logic controller. With the introduction of Ethernet protocol to the area of process control, the more powerful programmable logic controllers obtained integrated web servers. The web server of a programmable logic controller, produced by Siemens, will also be described in this thesis. Firstly, the software and the hardware equipment used for real...

  20. KFC Server: interactive forecasting of protein interaction hot spots.

    Science.gov (United States)

    Darnell, Steven J; LeGault, Laura; Mitchell, Julie C

    2008-07-01

The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model, a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user-submitted protein-protein or protein-DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots, and predicts if the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer able to quickly highlight predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org.

  1. The design and implementation about the project of optimizing proxy servers

    International Nuclear Information System (INIS)

    Wu Ling; Liu Baoxu

    2006-01-01

    A proxy server is an important facility in the network of an organization, playing an important role in security, access control and accelerating Internet access. This article introduces the role of proxy servers, and expounds the solutions adopted to optimize the proxy servers at IHEP: integration, dynamic domain name resolution and data synchronization. (authors)

  2. Prototype Sistem Multi-Telemetri Wireless untuk Mengukur Suhu Udara Berbasis Mikrokontroler ESP8266 pada Greenhouse

    Directory of Open Access Journals (Sweden)

    Hanum Shirotu Nida

    2017-07-01

    Full Text Available Wireless telemetry is the process of measuring parameters of an object and sending the measurement results to another location without using cables (wireless); multi-telemetry is a combination of several such telemetry units. This study designs a prototype wireless multi-telemetry system for measuring air temperature and humidity in a greenhouse using DHT11 sensors, with the sensor readings sent to a server over the ESP8266 WiFi module using the HTTP protocol. The study evaluates the DHT11 sensor values, the ESP8266 heap memory, the ESP8266 range, missing-data handling and network stability. The test results show that the DHT11 sensor has an average measurement error of 0.92 °C for temperature and 3.1% for humidity. The ESP8266 WiFi module can store and send a buffer of up to 100 data points and can transmit within a range of 50 meters. Missing-data handling uses the buffer to store readings while the server is unreachable from the sensor node, so that no data is lost. The stability of data transmission, i.e. the connection between the sensor node and the server, is affected by the number of access points communicating on the same channel in the vicinity of the server's access point.
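The missing-data handling this record describes (readings are buffered while the server is unreachable and flushed once it responds) can be sketched as follows; names such as `BufferedSender` and the `post` callback are illustrative, not from the study, and the actual firmware runs on an ESP8266 rather than in Python:

```python
from collections import deque

class BufferedSender:
    """Buffer readings while the server is down; flush oldest-first on recovery."""

    def __init__(self, post, capacity=100):
        self.post = post                      # callable: reading -> bool (True = accepted)
        self.buffer = deque(maxlen=capacity)  # oldest readings drop once full, as on-device

    def submit(self, reading):
        self.buffer.append(reading)
        # Try to flush everything buffered so far, oldest first.
        while self.buffer:
            if not self.post(self.buffer[0]):
                return False                  # server unreachable: keep data for later
            self.buffer.popleft()
        return True

# Simulate an outage followed by recovery:
delivered, online = [], [False]
sender = BufferedSender(lambda r: online[0] and (delivered.append(r) or True))
sender.submit(21.5)        # server down: reading stays in the buffer
online[0] = True
sender.submit(22.0)        # server back: both readings are flushed in order
```

With a capacity of 100 this mirrors the buffer size reported in the abstract; once the buffer is full, the `deque(maxlen=...)` silently drops the oldest reading, which is one possible policy among several.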

  3. Rclick: a web server for comparison of RNA 3D structures.

    Science.gov (United States)

    Nguyen, Minh N; Verma, Chandra

    2015-03-15

    RNA molecules play important roles in key biological processes in the cell and are becoming attractive for developing therapeutic applications. Since the function of RNA depends on its structure and dynamics, comparing and classifying the RNA 3D structures is of crucial importance to molecular biology. In this study, we have developed Rclick, a web server that is capable of superimposing RNA 3D structures by using clique matching and 3D least-squares fitting. Our server Rclick has been benchmarked and compared with other popular servers and methods for RNA structural alignments. In most cases, Rclick alignments were better in terms of structure overlap. Our server also recognizes conformational changes between structures. For this purpose, the server produces complementary alignments to maximize the extent of detectable similarity. Various examples showcase the utility of our web server for comparison of RNA, RNA-protein complexes and RNA-ligand structures. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Adventures in the evolution of a high-bandwidth network for central servers

    International Nuclear Information System (INIS)

    Swartz, K.L.; Cottrell, L.; Dart, M.

    1994-08-01

    In a small network, clients and servers may all be connected to a single Ethernet without significant performance concerns. As the number of clients on a network grows, the necessity of splitting the network into multiple sub-networks, each with a manageable number of clients, becomes clear. Less obvious is what to do with the servers. Group file servers on subnets and multihomed servers offer only partial solutions -- many other types of servers do not lend themselves to a decentralized model, and tend to collect on another, well-connected but overloaded Ethernet. The higher speed of FDDI seems to offer an easy solution, but in practice both expense and interoperability problems render FDDI a poor choice. Ethernet switches appear to permit cheaper and more reliable networking to the servers while providing an aggregate network bandwidth greater than a simple Ethernet. This paper studies the evolution of the server networks at SLAC. Difficulties encountered in the deployment of FDDI are described, as are the tools and techniques used to characterize the traffic patterns on the server network. Performance of Ethernet, FDDI, and switched Ethernet networks is analyzed, as are reliability and maintainability issues for these alternatives. The motivations for re-designing the SLAC general server network to use a switched Ethernet instead of FDDI are described, as are the reasons for choosing FDDI for the farm and firewall networks at SLAC. Guidelines are developed which may help in making this choice for other networks

  5. Efficient approach for simulating response of multi-body structure in reactor core subjected to seismic loading

    International Nuclear Information System (INIS)

    Zhang Hongkun; Cen Song; Wang Haitao; Cheng Huanyu

    2012-01-01

    An efficient 3D approach is proposed for simulating the complicated responses of the multi-body structure in a reactor core under seismic loading. By utilizing the rigid-body and connector functions of the software Abaqus, the multi-body structure of the reactor core is simplified as a mass-point system interlinked by spring-dashpot connectors, and reasonable schemes are used for determining the various connector coefficients. Furthermore, a scripting program is also compiled for the 3D parametric modeling. Numerical examples show that the proposed method not only produces results which satisfy the engineering requirements, but also improves the computational efficiency by more than a factor of 100. (authors)
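The simplification described above, a mass-point system interlinked by spring-dashpot connectors, can be illustrated for a single degree of freedom; the parameter values and the semi-implicit Euler integrator below are our own assumptions for illustration, not the Abaqus implementation used in the paper:

```python
def spring_dashpot_response(m, k, c, x0, v0, dt=1e-3, steps=5000):
    """One mass-point/connector pair: mass m linked by a spring (stiffness k)
    and a dashpot (damping c), integrated with semi-implicit Euler."""
    x, v = x0, v0
    history = []
    for _ in range(steps):
        a = (-k * x - c * v) / m   # connector force -> acceleration
        v += a * dt                # update velocity first (semi-implicit)
        x += v * dt
        history.append(x)
    return history

# A lightly damped connector rings down toward equilibrium:
xs = spring_dashpot_response(m=1.0, k=100.0, c=1.0, x0=0.01, v0=0.0)
```

A full model of the kind the paper describes couples many such mass points in 3D; the per-connector update above is the building block that makes the simplified system so much cheaper than a full FEM solve.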

  6. A novel reformulation of the Theory of Critical Distances to design notched metals against dynamic loading

    International Nuclear Information System (INIS)

    Yin, T.; Tyas, A.; Plekhov, O.; Terekhina, A.; Susmel, L.

    2015-01-01

    Highlights: • The proposed method is successful in estimating dynamic strength of metals. • The critical distance varies as the loading/strain/displacement rate increases. • The reference strength varies as the loading/strain/displacement rate increases. • This method is recommended to be used with safety factors larger than 1.25. - Abstract: In the present study the linear-elastic Theory of Critical Distances (TCD) is reformulated to make it suitable for predicting the strength of notched metallic materials subjected to dynamic loading. The accuracy and reliability of the proposed reformulation of the TCD was checked against a number of experimental results generated by testing, under different loading/strain rates, notched cylindrical samples of aluminium alloy 6063-T5, titanium alloy Ti–6Al–4V, aluminium alloy AlMg6, and an AlMn alloy. To further validate the proposed design method also different data sets taken from the literature were considered. Such an extensive validation exercise allowed us to prove that the proposed reformulation of the TCD is successful in predicting the dynamic strength of notched metallic materials, this approach proving to be capable of estimates falling within an error interval of ±20%. Such a high level of accuracy is certainly remarkable, especially in light of the fact that it was reached without the need for explicitly modelling the stress vs. strain dynamic behaviour of the investigated ductile metals

  7. Grids heat loading of an ion source in two-stage acceleration system

    International Nuclear Information System (INIS)

    Okumura, Yoshikazu; Ohara, Yoshihiro; Ohga, Tokumichi

    1978-05-01

    Heat loading of the extraction grids, which is one of the critical problems limiting the beam pulse duration at high power level, has been investigated experimentally with an ion source in a two-stage acceleration system of four multi-aperture grids. The loading of each grid depends largely on the extraction current and grid gap pressures; it decreases with improvement of the beam optics and with decrease of the pressures. In optimum operating modes, its level is typically less than ~2% of the total beam power, or ~200 W/cm², at beam energies of 50-70 kV. (auth.)

  8. Client-server password recovery

    NARCIS (Netherlands)

    Chmielewski, Ł.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect - people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the

  9. Comparing in-service multi-input loads applied on non-stiff components submitted to vibration fatigue to provide specifications for robust design

    Directory of Open Access Journals (Sweden)

    Le Corre Gwenaëlle

    2018-01-01

    Full Text Available This study focuses on applications from the automotive industry, on mechanical components submitted to vibration loads. On the one hand, the characterization of loading for dimensioning new structures in fatigue is enriched and updated by customer data analysis. On the other hand, the load characterization also aims to provide robust specifications for simulation or physical tests. These specifications are needed early in the project, in order to perform the first durability verification activities, at a time when detailed information about the geometry and the material is rare. Vibration specifications need to be adapted to a calculation time or physical test duration in accordance with the pace imposed by the project's timeframe. In the truck industry, the dynamic behaviour can vary significantly from one configuration of truck to another, as the truck's architecture impacts the load environment of the components. The vibration specifications need to be robust by accounting for the diversity of vehicles and markets considered in the scope of the projects. For non-stiff structures, the lifetime depends, among other things, on the frequency content of the loads, as well as on the interactions between the components of the multi-input loads. In this context, this paper proposes an approach to compare sets of variable-amplitude multi-input loads applied on non-stiff structures. The comparison is done in terms of damage, with limited information on the structure the load sets are applied to. The methodology is presented, as well as an application. Activities planned to validate the methodology are also presented.
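A common way to compare load sets "in terms of damage, with limited information on the structure" is pseudo-damage: a Basquin S-N curve combined with Miner's linear accumulation. The sketch below uses that generic approach with placeholder constants; it is not the paper's exact methodology:

```python
def miner_damage(cycles, C=1e12, m=5.0):
    """Pseudo-damage of a load set: Basquin life N(S) = C * S**(-m) and
    Miner accumulation D = sum(n_i / N_i). C and m are generic placeholders,
    not material values from the paper."""
    return sum(n / (C * s ** (-m)) for s, n in cycles)

# Two load sets as (stress amplitude, cycle count) pairs. Ten cycles at twice
# the amplitude out-damage a hundred cycles at the base amplitude, because
# damage grows with the m-th power of amplitude:
assert miner_damage([(200.0, 10)]) > miner_damage([(100.0, 100)])
```

Because the damage ratio between two load sets depends on the exponent m but not on the constant C, comparisons of this kind can be made even when the absolute fatigue strength of the structure is unknown, which matches the paper's premise of limited structural information.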

  10. Demonstration of Advanced Technologies for Multi-Load Washers in Hospitality and Healthcare -- Wastewater Recycling Technology

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, Brian K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Parker, Graham B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Petersen, Joseph M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg [Efficiency Solutions, LLC (United States); Goetzler, W. [Navigant Consulting, Inc. (United States); Foley, K. J. [Navigant Consulting, Inc. (United States); Sutherland, T. A. [Navigant Consulting, Inc. (United States)

    2014-08-14

    The objective of this demonstration project was to evaluate market-ready retrofit technologies for reducing the energy and water use of multi-load washers in healthcare and hospitality facilities. Specifically, this project evaluated laundry wastewater recycling technology in the hospitality sector and ozone laundry technology in both the healthcare and hospitality sectors. This report documents the demonstration of a wastewater recycling system installed in the Grand Hyatt Seattle.

  11. Conversation Threads Hidden within Email Server Logs

    Science.gov (United States)

    Palus, Sebastian; Kazienko, Przemysław

    Email server logs contain records of all email exchanged through the server. Often we would like to analyze those emails not separately but in conversation threads, especially when we need to analyze a social network extracted from those email logs. Unfortunately, each mail is a separate record, and those records are not tied to each other in any obvious way. In this paper, a method for discussion thread extraction is proposed, together with experiments on two different data sets - Enron and WrUT.

  12. CCTOP: a Consensus Constrained TOPology prediction web server.

    Science.gov (United States)

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of a hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access of the CCTOP server is also available, and an example of client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. 2MASS Catalog Server Kit Version 2.1

    Science.gov (United States)

    Yamauchi, C.

    2013-10-01

    The 2MASS Catalog Server Kit is open source software for use in easily constructing a high performance search server for important astronomical catalogs. This software utilizes the open source RDBMS PostgreSQL; therefore, any user can set up the database on a local computer by following the step-by-step installation guide. The kit provides highly optimized stored functions for positional searches, similar to SDSS SkyServer. Together with these, the powerful SQL environment of PostgreSQL will meet various users' demands. We released 2MASS Catalog Server Kit version 2.1 in 2012 May, which supports the latest WISE All-Sky catalog (563,921,584 rows) and 9 major all-sky catalogs. Local databases are often indispensable for observatories with unstable or narrow-band networks or severe use, such as retrieving large numbers of records within a small period of time. This software is well suited for such purposes, and the increased catalog support and other improvements in version 2.1 cover a wider range of applications, including advanced calibration systems, scientific studies using complicated SQL queries, etc. Official page: http://www.ir.isas.jaxa.jp/~cyamauch/2masskit/
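The positional searches the kit optimizes reduce to an angular-separation predicate over sky coordinates. A naive pure-Python version (our own function names, not the kit's SQL API, which pushes this into indexed PostgreSQL stored functions) looks like this:

```python
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions
    (RA/Dec in degrees), via the haversine formula."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h)))

def cone_search(rows, ra, dec, radius_deg):
    """Linear-scan cone search over (id, ra, dec) rows; a catalog server
    replaces this scan with an indexed stored function."""
    return [r for r in rows if angular_sep_deg(ra, dec, r[1], r[2]) <= radius_deg]

catalog = [("a", 10.0, 20.0), ("b", 10.2, 20.1), ("c", 50.0, -5.0)]
assert [r[0] for r in cone_search(catalog, 10.0, 20.0, 0.5)] == ["a", "b"]
```

The point of a server-side kit is precisely that this O(n) scan does not survive contact with a half-billion-row catalog such as WISE All-Sky; spatial indexing inside the database is what makes the query fast.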

  14. Power Generation by Zinc Antimonide Thin Film under Various Load Resistances at its Critical Operating Temperature

    DEFF Research Database (Denmark)

    Mir Hosseini, Seyed Mojtaba; Rezaniakolaei, Alireza; Rosendahl, Lasse Aistrup

    slightly reduces during unload conditions, although it is expected that by eliminating load in each step, the initial amount of voltage exactly repeats. Similar behavior is observed for Seebeck coefficient distribution versus time of working particularly in lower load resistances. Based on variation...... thin films operating under different load resistances at around its critical operating temperature, 400 °C. The thermoelement is subjected to constant hot side temperature and to room temperature at the cold junction in order to measure the thin film TEG’s sample performance. The nominal loads equal...... to 10, 15, 20, 25, 30, 35, 40, 45… 175, and also 200 Ohms were applied. The results show that the value of the Seebeck coefficient is 0.0002 [V/K] for the specimen, which is in agreement with quantities of other zinc antimonide bulk materials in literature. The results also show that the voltage
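The dependence of output power on load resistance follows from the standard thermoelectric generator model P = (alpha*dT)^2 * R_L / (R_int + R_L)^2. In the sketch below, alpha = 0.0002 V/K is taken from the abstract, while the temperature difference and internal resistance are illustrative assumptions, not values reported by the study:

```python
def teg_power(alpha, d_temp, r_internal, r_load):
    """Power delivered to the load: P = V^2 * R_L / (R_int + R_L)^2,
    with open-circuit voltage V = alpha * dT (Seebeck effect)."""
    v = alpha * d_temp
    return v * v * r_load / (r_internal + r_load) ** 2

# Load sweep as in the study (10 to 200 Ohm); dT and R_int are assumed.
loads = [10, 15, 20, 25, 30, 35, 40, 45, 175, 200]
powers = {r: teg_power(0.0002, 380.0, 25.0, r) for r in loads}
best = max(powers, key=powers.get)   # power peaks at the matched load R_L == R_int
```

The maximum-power-transfer condition R_L = R_int is why sweeping nominal loads, as the study does, locates the internal resistance of the thin-film sample.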

  15. Optimal Selection of Clustering Algorithm via Multi-Criteria Decision Analysis (MCDA for Load Profiling Applications

    Directory of Open Access Journals (Sweden)

    Ioannis P. Panapakidis

    2018-02-01

    Full Text Available Due to the high implementation rates of smart meter systems, a considerable amount of research is devoted to machine learning tools for data handling and information retrieval. A key tool in load data processing is clustering. In recent years, a number of studies have proposed different clustering algorithms in the load profiling field. The present paper provides a methodology for addressing the aforementioned problem through Multi-Criteria Decision Analysis (MCDA, namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS. A comparison of the algorithms is carried out. Next, a single test case on the selection of an algorithm is examined. User-specific weights are applied, and based on these weight values the optimal algorithm is drawn.
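The TOPSIS technique named above ranks alternatives by their relative closeness to an ideal solution: normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and score each alternative by its distances to both. A minimal sketch, with illustrative scores rather than the paper's criteria, is:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: alternatives x criteria; benefit[j] is True if criterion j
    should be maximized, False if it is a cost to be minimized."""
    n = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Relative closeness: 1 = coincides with the ideal, 0 = with the anti-ideal.
    return [math.dist(r, worst) / (math.dist(r, ideal) + math.dist(r, worst))
            for r in v]

# Three hypothetical clustering algorithms scored on two benefit criteria
# (cluster validity, stability) and one cost criterion (runtime, seconds):
scores = topsis([[0.9, 0.7, 120.0],   # A: high quality but slow
                 [0.8, 0.8, 30.0],    # B: balanced
                 [0.5, 0.4, 10.0]],   # C: fast but poor quality
                weights=[0.4, 0.4, 0.2],
                benefit=[True, True, False])
best = max(range(len(scores)), key=scores.__getitem__)
```

With these illustrative numbers the balanced algorithm B wins; changing the user-specific weights, as the paper's single test case does, can change the ranking.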

  16. Client-Server Password Recovery

    NARCIS (Netherlands)

    Chmielewski, L.; Hoepman, J.H.; Rossum, P. van

    2009-01-01

    Human memory is not perfect – people constantly memorize new facts and forget old ones. One example is forgetting a password, a common problem raised at IT help desks. We present several protocols that allow a user to automatically recover a password from a server using partial knowledge of the

  17. Scoping Report: Advanced Technologies for Multi-Load Washers in Hospitality and Healthcare

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Graham B.; Boyd, Brian K.; Petersen, Joseph M.; Goetzler, W.; Foley, K. J.; Sutherland, T. A.

    2013-03-27

    The purpose of this demonstration project is to quantify the energy savings and water efficiency potential of commercial laundry wastewater recycling systems and low-temperature detergent supply systems to help promote the adoption of these technologies in the commercial sector. This project will create a set of technical specifications for efficient multi-load laundry systems (both new and retrofit) tailored for specific applications and/or sectors (e.g., hospitality, health care). The specifications will be vetted with the appropriate Better Buildings Alliance (BBA) members (e.g., Commercial Real Estate Energy Alliance, Hospital Energy Alliance), finalized, published, and disseminated to enable widespread technology transfer in the industry and specifically among BBA partners.

  18. The Difference Between Using Proxy Server and VPN

    Directory of Open Access Journals (Sweden)

    David Dwiputra Kurniadi

    2015-11-01

    For example, searching for software or games on the internet. But sometimes there are websites that cannot be opened because they display an Internet Positive notification. To solve that problem, hackers found a solution by creating proxy servers and VPNs. Nowadays the internet is very modern and easy to access, and there are many proxy servers and VPNs that can be easily used.

  19. DoS attacks targeting SIP server and improvements of robustness

    OpenAIRE

    Vozňák, Miroslav; Šafařík, Jakub

    2012-01-01

    The paper describes the vulnerability of SIP servers to DoS attacks and methods for server protection. For each attack, this paper describes their impact on a SIP server, evaluation of the threat and the way in which they are executed. Attacks are described in detail, and a security precaution is made to prevent each of them. The proposed solution of the protection is based on a specific topology of an intrusion protection systems components consisting of a combination of...

  20. Optimal Configuration of Fault-Tolerance Parameters for Distributed Server Access

    DEFF Research Database (Denmark)

    Daidone, Alessandro; Renier, Thibault; Bondavalli, Andrea

    2013-01-01

    Server replication is a common fault-tolerance strategy to improve transaction dependability for services in communications networks. In distributed architectures, fault-diagnosis and recovery are implemented via the interaction of the server replicas with the clients and other entities...... model using stochastic activity networks (SAN) for the evaluation of performance and dependability metrics of a generic transaction-based service implemented on a distributed replication architecture. The composite SAN model can be easily adapted to a wide range of client-server applications deployed...

  1. An Array Library for Microsoft SQL Server with Astrophysical Applications

    Science.gov (United States)

    Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.

    2012-09-01

    Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. Also, the library is designed to be able to be seamlessly integrated with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on-the-fly, from SQL code, inside the database server process.
We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory

  2. Investigation on pitch system loads by means of an integral multi body simulation approach

    Science.gov (United States)

    Berroth, J.; Jacobs, G.; Kroll, T.; Schelenz, R.

    2016-09-01

    In modern horizontal axis wind turbines the rotor blades are adjusted by three individual pitch systems to control power output. The pitch system consists of either a hydraulic or an electrical actuator, the blade bearing, the rotor blade itself and the control. In the case of an electrical drive, a gearbox is used to transmit the high torques that are required for blade pitch angle adjustment. In this contribution a new integral multi-body simulation approach is presented that enables detailed assessment of dynamic pitch system loads. The simulation results presented are compared and evaluated against measurement data of a 2 MW-class reference wind turbine. The major focus of this contribution is on the assessment of non-linear tooth contact behaviour, incorporating tooth backlash for the single gear stages, and its impact on dynamic pitch system loads.

  3. Fast assessment of the critical principal stress direction for multiple separated multiaxial loadings

    Directory of Open Access Journals (Sweden)

    M. Cova

    2015-07-01

    Full Text Available The critical plane calculation for multiaxial damage assessment is often a demanding task, particularly for large FEM models of real components. However, in actual engineering practice it is sometimes possible to take advantage of the specific properties of the investigated case. This paper deals with the problem of a mechanical component loaded by multiple, but "time-separated", multiaxial external loads. The specific material damage depends on the maximum principal stress variation, with a significant mean stress sensitivity too. A specifically fitted procedure was developed for fast computation, at each node of a large FEM model, of the direction undergoing the maximum fatigue damage; the procedure is defined according to an effective stress definition based on the maximum principal stress amplitude and mean value. The procedure is presented in a general form, applicable to similar cases.
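The quantities the procedure is built on, the maximum principal stress and its direction, are easiest to see in plane stress. The 2D helper below is our own illustration of that building block (the paper's procedure itself works per node on full 3D FEM results):

```python
import math

def principal_stress_2d(sx, sy, txy):
    """Plane-stress principal values and the direction of the maximum
    principal stress, from normal stresses sx, sy and shear txy."""
    center = (sx + sy) / 2
    radius = math.hypot((sx - sy) / 2, txy)     # Mohr's circle radius
    theta = 0.5 * math.atan2(2 * txy, sx - sy)  # angle of sigma_1, radians
    return center + radius, center - radius, theta

# Pure shear: principal stresses +/- tau, oriented at 45 degrees.
s1, s2, theta = principal_stress_2d(0.0, 0.0, 100.0)
```

Evaluating this closed-form expression at every node is cheap, which is the essence of the "fast assessment" the paper aims at, compared with searching candidate planes exhaustively as generic critical-plane methods do.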

  4. Critical levels and loads of atmospheric pollutants for terrestrial and aquatic ecosystems. The emergence of a scientific concept. Application potentials and their limits

    International Nuclear Information System (INIS)

    Landmann, G.

    1993-01-01

    The 'critical loads and levels' are defined as the highest atmospheric deposition rate or concentration of a gaseous pollutant, respectively, that will not cause harmful effects on sensitive elements of an ecosystem. The recent emergence of the concept of critical loads and levels is described, from the first explicit mention in 1986 to the production of the first European maps in 1991. The difficulties linked to the definition of the concept and to its English-derived terminology are discussed. The main approaches used for assessing critical loads and levels are briefly described. Important research is developed under the auspices of the Convention of Geneva (Long Range Transboundary Air Pollution Transport, UN-ECE), arising from intensive studies which have been carried out on the effects of air pollution on terrestrial and aquatic ecosystems for the past ten or fifteen years. Current knowledge is summarized, as well as the remaining gaps (and questions) which hinder the calculation of the critical thresholds. Finally, beyond the fundamental relevance of this scientifically sound and easily understood concept, its limits are pointed out. In brief, the 'critical loads and levels' concept is attractive and motivating to many scientists: it calls for an integrated and goal-oriented approach, favors the prospecting of poorly known ecosystems and regions, and represents an interesting interface with decision makers

  5. The SAMGrid database server component: its upgraded infrastructure and future development path

    International Nuclear Information System (INIS)

    Loebel-Carpenter, L.; White, S.; Baranovski, A.; Garzoglio, G.; Herber, R.; Illingworth, R.; Kennedy, R.; Kreymer, A.; Kumar, A.; Lueking, L.; Lyon, A.; Merritt, W.; Terekhov, I.; Trumbo, J.; Veseli, S.; Burgon-Lyon, M.; St Denis, R.; Belforte, S.; Kerzel, U.; Bartsch, V.; Leslie, M.

    2004-01-01

    The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes required for the unified metadata catalog has warranted a complete redesign of the DB Server. We describe here the architecture and features of the new server. In particular, we discuss the new CORBA infrastructure that utilizes python wrapper classes around IDL structs and exceptions. Such infrastructure allows us to use the same code on both server and client sides, which in turn results in significantly improved code maintainability and easier development. We also discuss future integration of the new server with an SBIR II project which is directed toward allowing the DB Server to access distributed databases, implemented in different DB systems and possibly using different schema

  6. Note on a tandem queue with delayed server release

    NARCIS (Netherlands)

    Nawijn, W.M.

    2000-01-01

    We consider a tandem queue with two stations. The first station is an $s$-server queue with Poisson arrivals and exponential service times. After terminating his service in the first station, a customer enters the second station to require service at a single server, while in the meantime he is

  7. A tandem queue with server slow-down and blocking

    NARCIS (Netherlands)

    van Foreest, N.D.; van Ommeren, Jan C.W.; Mandjes, M.R.H.; Scheinhardt, Willem R.W.

    2005-01-01

    We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a 'blocking threshold.' In addition, in variant 2 the first server decreases its service rate when the second queue exceeds a

  8. A tandem queue with server slow-down and blocking.

    NARCIS (Netherlands)

    van Foreest, N.; van Ommeren, J.C.; Mandjes, M.R.H.; Scheinhardt, W.

    2005-01-01

    We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a 'blocking threshold.' In addition, in variant 2 the first server decreases its service rate when the second queue exceeds a
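Variant 1 described above (the first server ceases to work once the second queue hits the blocking threshold) can be sketched with a slotted-time simulation; the per-slot probabilities are illustrative stand-ins for the Poisson arrivals and exponential service rates of the actual model:

```python
import random

def simulate_tandem(p_arrive, p_serve1, p_serve2, threshold, steps, seed=1):
    """Slotted-time sketch of a two-station tandem queue with blocking:
    station 1 serves only while queue 2 is below the blocking threshold."""
    rng = random.Random(seed)
    q1 = q2 = max_q2 = 0
    for _ in range(steps):
        if rng.random() < p_arrive:          # external arrival to station 1
            q1 += 1
        # Station 1 is blocked once queue 2 reaches the threshold.
        if q1 > 0 and q2 < threshold and rng.random() < p_serve1:
            q1 -= 1                          # station 1 completion feeds queue 2
            q2 += 1
        if q2 > 0 and rng.random() < p_serve2:
            q2 -= 1                          # departure from station 2
        max_q2 = max(max_q2, q2)
    return q1, q2, max_q2

q1, q2, max_q2 = simulate_tandem(0.3, 0.5, 0.4, threshold=5, steps=10_000)
```

The blocking rule structurally caps the second queue at the threshold; variant 2 of the paper would instead reduce the first station's service rate (here, `p_serve1`) when the second queue exceeds the threshold, rather than halting it.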

  9. Secure Server Login by Using Third Party and Chaotic System

    Science.gov (United States)

    Abdulatif, Firas A.; zuhiar, Maan

    2018-05-01

    Servers are popular among companies and used by most of them, but security threats make companies concerned about using them. In this paper we therefore design a secure system based on a one-time password and third-party authentication (smart phone). The proposed system secures the server login process by using a one-time password to authenticate persons who have permission to log in, and a third-party device (smart phone) as an additional level of security.
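A standard way to realize the one-time-password part of such a scheme is HOTP (RFC 4226), which the server and a smartphone authenticator can compute independently from a shared secret; the abstract does not specify which OTP algorithm the paper uses, so this is a generic sketch:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over an 8-byte counter, then dynamic
    truncation to a short numeric code. A TOTP variant would pass
    counter = int(time.time() // 30)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 reference secret and its first published test vector:
assert hotp(b"12345678901234567890", 0) == "755224"
```

The server verifies the code submitted at login against its own computation for the current counter (or time step), so the password is valid only once, which is exactly the property the proposed login scheme relies on.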

  10. Advancing the Power and Utility of Server-Side Aggregation

    Science.gov (United States)

    Fulker, Dave; Gallagher, James

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and, notably, due to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate the topics listed above and embrace additional ones.

  11. Design of SIP transformation server for efficient media negotiation

    Science.gov (United States)

    Pack, Sangheon; Paik, Eun Kyoung; Choi, Yanghee

    2001-07-01

Voice over IP (VoIP) is one of the advanced services supported by the next generation mobile communication. VoIP should support various media formats and terminals existing together. This heterogeneous environment may prevent diverse users from establishing VoIP sessions among them. To solve the problem, an efficient media negotiation mechanism is required. In this paper, we propose an efficient media negotiation architecture using the transformation server and the Intelligent Location Server (ILS). The transformation server is an extended Session Initiation Protocol (SIP) proxy server. It can modify an unacceptable session INVITE message into an acceptable one using the ILS. The ILS is a directory server based on the Lightweight Directory Access Protocol (LDAP) that keeps the user's location information and available media information. The proposed architecture can eliminate unnecessary response and re-INVITE messages of the standard SIP architecture. It takes only 1.5 round trip times to negotiate two different media types, while the standard media negotiation mechanism takes 2.5 round trip times. The extra processing time in message handling is negligible in comparison to the reduced round trip time. The experimental results show that the session setup time in the proposed architecture is less than the setup time in the standard SIP. These results verify that the proposed media negotiation mechanism is more efficient in solving diversity problems.

  12. The impact of acid deposition and forest harvesting on lakes and their forested catchments in south central Ontario: a critical loads approach

    Directory of Open Access Journals (Sweden)

    S. A. Watmough

    2002-01-01

Full Text Available The impact of acid deposition and tree harvesting on three lakes and their representative sub-catchments in the Muskoka-Haliburton region of south-central Ontario was assessed using a critical loads approach. As nitrogen dynamics in forest soils are complex and poorly understood, for simplicity and to allow comparison among lakes and their catchments, CL(A) values for both lakes and forest soils were calculated assuming that nitrate leaching from catchments will not change over time (i.e. a best-case scenario). In addition, because soils in the region are shallow, base cation weathering rates for the representative sub-catchments were calculated for the entire soil profile and these estimates were also used to calculate critical loads for the lakes. These results were compared with critical loads obtained by the Steady State Water Chemistry (SSWC) model. Using the SSWC model, critical loads for lakes were between 7 and 19 meq m-2 yr-1 higher than those obtained from soil measurements. Lakes and forests are much more sensitive to acid deposition if forests are harvested, but two acid-sensitive lakes had much lower critical loads than their respective forested sub-catchments, implying that acceptable acid deposition levels should be dictated by the most acid-sensitive lakes in the region. Under conditions that assume harvesting, the CL(A) is exceeded at two of the three lakes and five of the six sub-catchments assessed in this study. However, sulphate export from catchments greatly exceeds input in bulk deposition and, to prevent lakes from falling below the critical chemical limit, sulphate inputs to lakes must be reduced by between 37% and 92% if forests are harvested. Similarly, sulphate leaching from forested catchments that are harvested must be reduced by between 16 and 79% to prevent the ANC of water draining the rooting zone from falling below 0 μeq l-1. These calculations assume that extremely low calcium leaching losses (9–27 μeq l-1 from

  13. Saving Money and Time with Virtual Server

    CERN Document Server

    Sanders, Chris

    2006-01-01

    Microsoft Virtual Server 2005 consistently proves to be worth its weight in gold, with new implementations thought up every day. With this product now a free download from Microsoft, scores of new users are able to experience what the power of virtualization can do for their networks. This guide is aimed at network administrators who are interested in ways that Virtual Server 2005 can be implemented in their organizations in order to save money and increase network productivity. It contains information on setting up a virtual network, virtual consolidation, virtual security, virtual honeypo

  14. Development of a Personal Digital Assistant (PDA) based client/server NICU patient data and charting system.

    Science.gov (United States)

    Carroll, A E; Saluja, S; Tarczy-Hornoch, P

    2001-01-01

    Personal Digital Assistants (PDAs) offer clinicians the ability to enter and manage critical information at the point of care. Although PDAs have always been designed to be intuitive and easy to use, recent advances in technology have made them even more accessible. The ability to link data on a PDA (client) to a central database (server) allows for near-unlimited potential in developing point of care applications and systems for patient data management. Although many stand-alone systems exist for PDAs, none are designed to work in an integrated client/server environment. This paper describes the design, software and hardware selection, and preliminary testing of a PDA based patient data and charting system for use in the University of Washington Neonatal Intensive Care Unit (NICU). This system will be the subject of a subsequent study to determine its impact on patient outcomes and clinician efficiency.

  15. LHCb: Fabric Management with Diskless Servers and Quattor on LHCb

    CERN Multimedia

    Schweitzer, P; Brarda, L; Neufeld, N

    2011-01-01

Large scientific experiments nowadays very often use large computer farms to process the events acquired from the detectors. In LHCb a small sysadmin team manages 1400 servers of the LHCb Event Filter Farm, but also a wide variety of control servers for the detector electronics and infrastructure computers: file servers, gateways, DNS, DHCP and others. This variety of servers could not be handled without a solid fabric management system. We chose the Quattor toolkit for this task. We will present our use of this toolkit, with an emphasis on how we handle our diskless nodes (Event Filter Farm nodes and computers embedded in the acquisition electronics cards). We will show our current tests to replace the standard (RedHat/Scientific Linux) way of handling diskless nodes with fusion filesystems and how this improves fabric management.

  16. Microsoft SQL Server 2012 Business Intelligence ja sen tuomat uudistukset

    OpenAIRE

    Luoma, Lauri

    2013-01-01

This engineering thesis covers migrating the exercises of the MCTS Self-Paced Training Kit (Exam 70-448) laboratory manual, used on the Microsoft Business Intelligence (BI) solutions course at Metropolia University of Applied Sciences, to SQL Server 2012. The work aims to show that the SQL Server 2012 BI tools are suitable for the exercises. The work switches to a newer tool, SQL Server Data Tools, which replaces SQL Server 2008 R2 Business Intelligence Development Studio. At the beginning of the thesis ...

  17. Ramifications of structural deformations on collapse loads of critically cracked pipe bends under in-plane bending and internal pressure

    Energy Technology Data Exchange (ETDEWEB)

    Sasidharan, Sumesh; Arunachalam, Veerappan; Subramaniam, Shanmugam [Dept. of Mechanical Engineering, National Institute of Technology, Tiruchirappalli (India)

    2017-02-15

Finite-element analysis based on elastic-perfectly plastic material was conducted to examine the influence of structural deformations on collapse loads of circumferential through-wall critically cracked 90° pipe bends undergoing in-plane closing bending and internal pressure. The critical crack is defined for a through-wall circumferential crack at the extrados with a subtended angle below which there is no weakening effect on the collapse moment of elbows subjected to in-plane closing bending. Elliptical and semioval cross sections were postulated at the bend regions and compared. The twice-elastic-slope method was utilized to obtain the collapse loads. Structural deformations, namely ovality and thinning, were each varied from 0% to 20% in steps of 5%, and the normalized internal pressure was varied from 0.2 to 0.6. Results indicate that elliptic cross sections were suitable for pipe ratios 5 and 10, whereas for pipe ratio 20, semioval cross sections gave satisfactory solutions. The effect of ovality on collapse loads is significant, although it cancelled out at a certain value of applied internal pressure. Thinning had a negligible effect on collapse loads of bends with the crack geometries considered.
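The twice-elastic-slope method mentioned above extracts a collapse load from a computed load-deflection curve: in load-versus-deflection coordinates a collapse-limit line is drawn through the origin with half the elastic slope (its angle from the load axis has twice the elastic tangent), and the collapse load is read where the curve first meets that line. A rough numerical sketch under those assumptions, exercised on a synthetic elastic-perfectly-plastic response rather than any curve from the paper:

```python
def collapse_load_tes(deflection, load):
    """Twice-elastic-slope collapse load (sketch): estimate the elastic
    slope k from the first nonzero point, then return the load where the
    curve first meets the collapse-limit line load = (k/2)*deflection."""
    k = load[1] / deflection[1]               # initial elastic slope
    for d, p in zip(deflection, load):
        if d > 0 and 0.5 * k * d >= p:
            return p
    return None                               # curve never crossed the line

# synthetic elastic-perfectly-plastic response: load = min(2*d, 4)
ds = [i * 0.01 for i in range(601)]
ps = [min(2.0 * d, 4.0) for d in ds]
print(collapse_load_tes(ds, ps))              # the plateau load, 4.0
```

For the synthetic curve the method recovers the plastic plateau exactly; on real FE output the crossing would normally be interpolated between sampled points.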

  18. Energy Servers Deliver Clean, Affordable Power

    Science.gov (United States)

    2010-01-01

K.R. Sridhar developed a fuel cell device for Ames Research Center that could use solar power to split water into oxygen for breathing and hydrogen for fuel on Mars. Sridhar saw the potential of the technology, when reversed, to create clean energy on Earth. He founded Bloom Energy, of Sunnyvale, California, to advance the technology. Today, the Bloom Energy Server is providing cost-effective, environmentally friendly energy to a host of companies such as eBay, Google, and The Coca-Cola Company. Bloom's NASA-derived Energy Servers generate energy that is about 67-percent cleaner than a typical coal-fired power plant when using fossil fuels and 100-percent cleaner with renewable fuels.

  19. Emergency Load Shedding Strategy Based on Sensitivity Analysis of Relay Operation Margin against Cascading Events

    DEFF Research Database (Denmark)

    Liu, Zhou; Chen, Zhe; Sun, Haishun Sun

    2012-01-01

In order to prevent long term voltage instability and induced cascading events, a load shedding strategy based on the sensitivity of relay operation margin to load powers is discussed and proposed in this paper. The operation margin of the critical impedance backup relay is defined to identify the runtime emergent states of related system components. Based on sensitivity analysis between the relay operation margin and power system state variables, an optimal load shedding strategy is applied to adjust the emergent states in time before the unwanted relay operation. Load dynamics is also taken into account to compensate the load shedding amount calculation, and multi-agent technology is applied for the whole strategy implementation. A test system is built in a real time digital simulator (RTDS) and has demonstrated the effectiveness of the proposed strategy.

  20. RNAiFold: a web server for RNA inverse folding and molecular design.

    Science.gov (United States)

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-07-01

    Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring GC-content to lie within a certain range, requiring the number of strong (GC), weak (AU) and wobble (GU) base pairs to lie in a certain range, the RNAiFold web server determines one or more RNA sequences, whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic; hence, it is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem, i.e. given a representation of the desired hybridization structure, RNAiFold returns two sequences, whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold, which provides access to two specialized servers: RNA-CPdesign and RNA-LNSdesign. Source code for the underlying algorithms, implemented in COMET and supported on linux, can be downloaded at the server website.
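The base-pair classes that RNAiFold lets users constrain can be counted directly from a candidate sequence and its dot-bracket secondary structure. A small self-contained sketch (the sequence and structure below are invented for illustration):

```python
def pair_classes(seq, dot_bracket):
    """Count strong (GC), weak (AU) and wobble (GU) base pairs of a
    pseudoknot-free dot-bracket secondary structure."""
    stack = []
    counts = {"GC": 0, "AU": 0, "GU": 0}
    for i, c in enumerate(dot_bracket):
        if c == "(":
            stack.append(i)
        elif c == ")":
            j = stack.pop()                       # j pairs with i
            pair = "".join(sorted(seq[j] + seq[i]))
            if pair == "CG":
                counts["GC"] += 1
            elif pair == "AU":
                counts["AU"] += 1
            elif pair == "GU":
                counts["GU"] += 1
    return counts

print(pair_classes("GCAUGC", "((..))"))   # {'GC': 2, 'AU': 0, 'GU': 0}
```

A constraint such as "at most one wobble pair" is then just a predicate over these counts, applied to each candidate sequence the inverse-folding search proposes.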

  1. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    Science.gov (United States)

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  2. Server consolidation for heterogeneous computer clusters using Colored Petri Nets and CPN Tools

    Directory of Open Access Journals (Sweden)

    Issam Al-Azzoni

    2015-10-01

    Full Text Available In this paper, we present a new approach to server consolidation in heterogeneous computer clusters using Colored Petri Nets (CPNs. Server consolidation aims to reduce energy costs and improve resource utilization by reducing the number of servers necessary to run the existing virtual machines in the cluster. It exploits the emerging technology of live migration which allows migrating virtual machines between servers without stopping their provided services. Server consolidation approaches attempt to find migration plans that aim to minimize the necessary size of the cluster. Our approach finds plans which not only minimize the overall number of used servers, but also minimize the total data migration overhead. The latter objective is not taken into consideration by other approaches and heuristics. We explore the use of CPN Tools in analyzing the state spaces of the CPNs. Since the state space of the CPN model can grow exponentially with the size of the cluster, we examine different techniques to generate and analyze the state space in order to find good plans to server consolidation within acceptable time and computing power.
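The size-minimization objective behind server consolidation can be illustrated with a simple first-fit-decreasing heuristic: sort virtual machines by load and place each on the first server with room. This is only a sketch of the packing goal; the paper's CPN state-space search additionally minimizes data-migration overhead, which this heuristic ignores, and the VM names and capacity below are illustrative:

```python
def consolidate(vm_loads, capacity):
    """First-fit-decreasing packing of VM loads onto as few servers as
    possible; each server is returned as a list of (vm, load) pairs."""
    servers = []
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for s in servers:                     # first server with spare room
            if sum(l for _, l in s) + load <= capacity:
                s.append((vm, load))
                break
        else:                                 # no server fits: open a new one
            servers.append([(vm, load)])
    return servers

plan = consolidate({"a": 0.7, "b": 0.5, "c": 0.4, "d": 0.3, "e": 0.1}, 1.0)
print(len(plan))   # 2 servers suffice for these loads
```

Comparing such a heuristic plan against the exhaustive CPN state-space result shows what the extra migration-overhead objective buys.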

  3. Instant migration from Windows Server 2008 and 2008 R2 to 2012 how-to

    CERN Document Server

    Sivarajan, Santhosh

    2013-01-01

    Presented in a hands-on reference manual style, with real-world scenarios to lead you through each process. This book is intended for Windows server administrators who are performing migrations from their existing Windows Server 2008 / 2008 R2 environment to Windows Server 2012. The reader must be familiar with Windows Server 2008.

  4. On the optimal use of a slow server in two-stage queueing systems

    Science.gov (United States)

    Papachristos, Ioannis; Pandelis, Dimitrios G.

    2017-07-01

    We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs assuming that two servers cannot collaborate to work on the same job and preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the one downstream should not idle, and the same is true for the one upstream when holding costs are larger there. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.

  5. Using Web Server Logs in Evaluating Instructional Web Sites.

    Science.gov (United States)

    Ingram, Albert L.

    2000-01-01

    Web server logs contain a great deal of information about who uses a Web site and how they use it. This article discusses the analysis of Web logs for instructional Web sites; reviews the data stored in most Web server logs; demonstrates what further information can be gleaned from the logs; and discusses analyzing that information for the…
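The per-request records in most web server logs follow the Common Log Format, which a regular expression can pull apart into the fields an instructional-site analysis needs (who requested which page, when, and with what result). A minimal sketch; the sample line is invented:

```python
import re

# Common Log Format: host ident authuser [date] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<when>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_clf(line):
    """Return the fields of one Common Log Format line, or None."""
    m = CLF.match(line)
    return m.groupdict() if m else None

rec = parse_clf('203.0.113.7 - alice [10/Oct/2000:13:55:36 -0700] '
                '"GET /lesson1.html HTTP/1.0" 200 2326')
print(rec["path"], rec["status"])   # /lesson1.html 200
```

Aggregating these records by path or by host is the basis for the usage questions the article raises, such as which lessons are visited and in what order.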

  6. Chondrocyte deformations as a function of tibiofemoral joint loading predicted by a generalized high-throughput pipeline of multi-scale simulations.

    Directory of Open Access Journals (Sweden)

    Scott C Sibole

Full Text Available Cells of the musculoskeletal system are known to respond to mechanical loading and chondrocytes within the cartilage are not an exception. However, understanding how joint level loads relate to cell level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint mechanics based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based (11 cell model of the same volume) representation of the middle zone of tibiofemoral cartilage. In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11 cell case, all cells experienced less deformation than the single cell case, and also exhibited a larger variance in deformation compared to other cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method's generalized nature also allows for substitution of any macro

  7. Chondrocyte Deformations as a Function of Tibiofemoral Joint Loading Predicted by a Generalized High-Throughput Pipeline of Multi-Scale Simulations

    Science.gov (United States)

    Sibole, Scott C.; Erdemir, Ahmet

    2012-01-01

    Cells of the musculoskeletal system are known to respond to mechanical loading and chondrocytes within the cartilage are not an exception. However, understanding how joint level loads relate to cell level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint mechanics based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based (11 cell model of the same volume) representation of the middle zone of tibiofemoral cartilage. In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11 cell case, all cells experienced less deformation than the single cell case, and also exhibited a larger variance in deformation compared to other cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method’s generalized nature also allows for substitution of any macro-scale and/or micro

  8. A performance analysis of advanced I/O architectures for PC-based network file servers

    Science.gov (United States)

    Huynh, K. D.; Khoshgoftaar, T. M.

    1994-12-01

    In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we would like to discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of same type, same data capacity, and same cost) operating independently, not in parallel as in a disk array.

  9. Comparison of approaches for mobile document image analysis using server supported smartphones

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcome these limitations is performing resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is the Optical Character Recognition (OCR) process, which is used to extract text in mobile phone captured images. In this study, our goal is to compare the in-phone and the remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. For the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. On the other hand, in the remote-server approach, the core OCR process runs on the remote server and other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote-server approach overall outperforms the in-phone approach in terms of selected speed and correct recognition metrics, if the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of speed and acceptable correct recognition metrics.
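The trade-off the study measures can be framed as a simple additive latency model: offloading wins when the OCR speedup exceeds the extra compression and network delays it introduces. A sketch with purely illustrative timings (the numbers are not from the paper):

```python
def total_time(ocr, network=0.0, compress=0.0):
    """End-to-end document-analysis time (seconds) as a simple sum of
    the OCR time and any offloading overheads."""
    return ocr + network + compress

in_phone = total_time(ocr=12.0)                          # everything on the handset
remote = total_time(ocr=2.5, network=1.2, compress=0.4)  # OCR offloaded to a server
print(remote < in_phone)   # True: the OCR gain outweighs the extra delays here
```

The break-even point shifts with image size and link quality, which is why the paper's conclusion is conditional on compression and downscaling keeping the extra delays small.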

  10. CERN servers go to Mexico

    CERN Multimedia

    Stefania Pandolfi

    2015-01-01

    On Wednesday, 26 August, 384 servers from the CERN Computing Centre were donated to the Faculty of Science in Physics and Mathematics (FCFM) and the Mesoamerican Centre for Theoretical Physics (MCTP) at the University of Chiapas, Mexico.   CERN’s Director-General, Rolf Heuer, met the Mexican representatives in an official ceremony in Building 133, where the servers were prepared for shipment. From left to right: Frédéric Hemmer, CERN IT Department Head; Raúl Heredia Acosta, Deputy Permanent Representative of Mexico to the United Nations and International Organizations in Geneva; Jorge Castro-Valle Kuehne, Ambassador of Mexico to the Swiss Confederation and the Principality of Liechtenstein; Rolf Heuer, CERN Director-General; Luis Roberto Flores Castillo, President of the Swiss Chapter of the Global Network of Qualified Mexicans Abroad; Virginia Romero Tellez, Coordinator of Institutional Relations of the Swiss Chapter of the Global Network of Qualified Me...

  11. The Medicago truncatula gene expression atlas web server

    Directory of Open Access Journals (Sweden)

    Tang Yuhong

    2009-12-01

Full Text Available Abstract Background Legumes (Leguminosae or Fabaceae) play a major role in agriculture. Transcriptomics studies in the model legume species, Medicago truncatula, are instrumental in helping to formulate hypotheses about the role of legume genes. With the rapid growth of publicly available Affymetrix GeneChip Medicago Genome Array data from a great range of tissues, cell types, growth conditions, and stress treatments, the legume research community desires an effective bioinformatics system to aid efforts to interpret the Medicago genome through functional genomics. We developed the Medicago truncatula Gene Expression Atlas (MtGEA) web server for this purpose. Description The MtGEA web server is a centralized platform for analyzing the Medicago transcriptome. Currently, the web server hosts gene expression data from 156 Affymetrix GeneChip® Medicago genome arrays in 64 different experiments, covering a broad range of developmental and environmental conditions. The server enables flexible, multifaceted analyses of transcript data and provides a range of additional information about genes, including different types of annotation and links to the genome sequence, which help users formulate hypotheses about gene function. Transcript data can be accessed using Affymetrix probe identification number, DNA sequence, gene name, functional description in natural language, GO and KEGG annotation terms, and InterPro domain number. Transcripts can also be discovered through co-expression or differential expression analysis. Flexible tools to select a subset of experiments and to visualize and compare expression profiles of multiple genes have been implemented. Data can be downloaded, in part or full, in a tabular form compatible with common analytical and visualization software.
The web server will be updated on a regular basis to incorporate new gene expression data and genome annotation, and is accessible

  12. Self-assembling process of flash nanoprecipitation in a multi-inlet vortex mixer to produce drug-loaded polymeric nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

Shen Hao [University of Illinois at Chicago, Department of Chemical Engineering (United States); Hong, Seungpyo [University of Illinois at Chicago, Department of Biopharmaceutical Sciences (United States); Prud'homme, Robert K. [Princeton University, Department of Chemical Engineering (United States); Liu Ying, E-mail: liuying@uic.edu [University of Illinois at Chicago, Department of Chemical Engineering (United States)

    2011-09-15

We present an experimental study of self-assembled polymeric nanoparticles in the process of flash nanoprecipitation using a multi-inlet vortex mixer (MIVM). β-Carotene and polyethyleneimine (PEI) are used as a model drug and a macromolecule, respectively, and encapsulated in diblock copolymers. Flow patterns in the MIVM are microscopically visualized by mixing iron nitrate (Fe(NO₃)₃) and potassium thiocyanate (KSCN) to precipitate Fe(SCN)ₓ^(3-x)+. Effects of physical parameters, including Reynolds number, supersaturation rate, interaction force, and drug-loading rate, on the size distribution of the nanoparticle suspensions are investigated. It is critical for the nanoprecipitation process to have a short mixing time, so that the solvent replacement starts homogeneously in the reactor. The properties of the nanoparticles depend on the competitive kinetics of polymer aggregation and organic solute nucleation and growth. We report the existence of a threshold Reynolds number over which nanoparticle sizes become independent of mixing. A similar value of the threshold Reynolds number is confirmed by independent measurements of particle size, flow-pattern visualization, and our previous numerical simulation along with experimental study of competitive reactions in the MIVM.

  13. Remote Laboratory Java Server Based on JACOB Project

    Directory of Open Access Journals (Sweden)

    Pavol Bisták

    2011-02-01

    Full Text Available Remote laboratories play an important role in the educational process of engineers. This paper deals with the structure of remote laboratories. The principle of the proposed remote laboratory structure is based on the Java server application that communicates with Matlab through the COM technology for the data exchange under the Windows operating system. Java does not support COM directly so the results of the JACOB project are used and modified to cope with this problem. In laboratories for control engineering education a control algorithm usually runs on a PC with Matlab that really controls the real plant. This is the server side described in the paper in details. To demonstrate the possibilities of a remote control a Java client server application is also introduced. It covers communication and offers a user friendly interface for the control of a remote plant and visualization of measured data.

  14. Improving consensus contact prediction via server correlation reduction.

    Science.gov (United States)

    Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming

    2009-05-06

    Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find out that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.
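The contrast the authors draw between simple majority voting and optimally weighted latent servers can be sketched as a scoring function: each candidate contact accumulates the weights of the servers predicting it, and majority voting is the special case of equal weights. The servers, contacts and weights below are illustrative, not the paper's:

```python
from collections import defaultdict

def weighted_consensus(predictions, weights, top_k):
    """Rank residue-residue contacts by the weighted number of servers
    predicting them; `predictions` maps server -> set of (i, j) pairs."""
    score = defaultdict(float)
    for server, contacts in predictions.items():
        for c in contacts:
            score[c] += weights.get(server, 0.0)
    ranked = sorted(score.items(), key=lambda kv: -kv[1])
    return [c for c, _ in ranked[:top_k]]

preds = {"s1": {(1, 5), (2, 6)}, "s2": {(1, 5)}, "s3": {(3, 7)}}
equal = {s: 1.0 for s in preds}               # equal weights = majority voting
print(weighted_consensus(preds, equal, 1))    # [(1, 5)], two servers agree
```

The paper's contribution lies in how the weights are chosen: correlated servers are collapsed into independent latent servers via PCA, and an integer linear program then assigns the weights so as to separate true contacts from false ones.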

  15. Improving consensus contact prediction via server correlation reduction

    Directory of Open Access Journals (Sweden)

    Xu Jinbo

    2009-05-01

    Full Text Available Abstract Background Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. Results In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method, which assumes that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Conclusion Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.

  16. QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks.

    Directory of Open Access Journals (Sweden)

    Asa Thibodeau

    2016-06-01

    Full Text Available Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. QuIN's web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database, and the source code is available under the GPLV3 license on GitHub: https://github.com/UcarLab/QuIN/.
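The network-based querying and prioritization that QuIN performs can be illustrated with a small plain-Python sketch. This is not QuIN's code (QuIN is a Java/JavaScript web application); the interaction pairs, node names, and the use of node degree as a hub measure are invented for illustration only.

```python
from collections import defaultdict, deque

# Toy chromatin interaction network: each pair is one mapped
# interaction (e.g. from ChIA-PET). Names are invented.
interactions = [
    ("enhancer_A", "geneX_promoter"),
    ("enhancer_A", "geneY_promoter"),
    ("enhancer_B", "geneY_promoter"),
    ("geneY_promoter", "geneZ_promoter"),
]

# Build an undirected adjacency map
adj = defaultdict(set)
for a, b in interactions:
    adj[a].add(b)
    adj[b].add(a)

# Degree as a crude prioritization measure (QuIN offers richer ones)
hub = max(adj, key=lambda node: len(adj[node]))
print("highest-degree node:", hub)

def partners(node, max_hops=2):
    """BFS out to max_hops; returns {reachable node: distance}."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        cur = queue.popleft()
        if dist[cur] == max_hops:
            continue
        for nxt in adj[cur]:
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return dist

# Query a regulatory element for its direct and indirect interactions
reach = partners("enhancer_A")
direct = {n for n, d in reach.items() if d == 1}
indirect = {n for n, d in reach.items() if d == 2}
print("direct:", sorted(direct))
print("indirect:", sorted(indirect))
```

In this toy graph `geneY_promoter` emerges as the hub, and querying `enhancer_A` returns its two direct promoter targets plus the two elements reachable through them, mirroring the "direct and indirect interactions" query described in the abstract.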

  17. QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks.

    Science.gov (United States)

    Thibodeau, Asa; Márquez, Eladio J; Luo, Oscar; Ruan, Yijun; Menghi, Francesca; Shin, Dong-Guk; Stitzel, Michael L; Vera-Licona, Paola; Ucar, Duygu

    2016-06-01

    Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. QuIN's web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database, and the source code is available under the GPLV3 license on GitHub: https://github.com/UcarLab/QuIN/.

  18. MINIMUM BRACING STIFFNESS FOR MULTI-COLUMN SYSTEMS: THEORY

    OpenAIRE

    ARISTIZÁBAL-OCHOA, J. DARÍO

    2011-01-01

    A method that determines the minimum bracing stiffness required by a multi-column elastic system to achieve non-sway buckling conditions is proposed. Equations that evaluate the required minimum stiffness of the lateral and torsional bracings and the corresponding "braced" critical buckling load for each column of the story level are derived using the modified stability functions. The following effects are included: 1) the types of end connections (rigid, semirigid, and simple); 2) the bluepr...
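The effect that adequate bracing has on a column's critical load can be illustrated with the classical Euler formula. This is only a back-of-envelope sketch, not the paper's method (which uses modified stability functions and semi-rigid connections); the material and section values below are invented, and the effective length factors correspond to idealized end conditions.

```python
import math

# Euler critical load P_cr = pi^2 * E * I / (K * L)^2, where K is the
# effective length factor set by the end/bracing conditions.
# Hypothetical values:
E = 200e9        # Pa, steel elastic modulus
I = 8.0e-6       # m^4, section second moment of area
L = 4.0          # m, story height

def euler_critical_load(k):
    """Euler buckling load for effective length factor k."""
    return math.pi**2 * E * I / (k * L)**2

P_sway = euler_critical_load(2.0)    # fixed base, laterally free top
P_braced = euler_critical_load(1.0)  # both ends held against sway
print(f"unbraced: {P_sway/1e3:.0f} kN, braced: {P_braced/1e3:.0f} kN")
```

Halving the effective length factor from 2.0 to 1.0 quadruples the critical load in this idealized case, which is why quantifying the minimum bracing stiffness needed to actually reach the braced mode matters.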

  19. MCSA Windows Server 2012 R2 installation and configuration study guide exam 70-410

    CERN Document Server

    Panek, William

    2015-01-01

    Master Windows Server installation and configuration with hands-on practice and interactive study aids for the MCSA: Windows Server 2012 R2 exam 70-410. MCSA: Windows Server 2012 R2 Installation and Configuration Study Guide: Exam 70-410 provides complete preparation for exam 70-410: Installing and Configuring Windows Server 2012 R2. With comprehensive coverage of all exam topics and plenty of hands-on practice, this self-paced guide is the ideal resource for those preparing for the MCSA on Windows Server 2012 R2. Real-world scenarios demonstrate how the lessons are applied in everyday settings. Reader

  20. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    Science.gov (United States)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture including networks, data storage, and highly available resources. This paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, the need for a smooth transition while maintaining operations, and the importance of beginning with POSIX-compliant code. It will focus on the development approach, explaining the software lifecycle. Other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. This paper will also address the testing approach, covering all levels of testing including development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with Unix servers, will be included.