WorldWideScience

Sample records for replicated resources computing

  1. Replicated Data Management for Mobile Computing

    CERN Document Server

    Terry, Douglas

    2008-01-01

    Managing data in a mobile computing environment invariably involves caching or replication. In many cases, a mobile device has access only to data that is stored locally, and much of that data arrives via replication from other devices, PCs, and services. Given portable devices with limited resources, weak or intermittent connectivity, and security vulnerabilities, data replication serves to increase availability, reduce communication costs, foster sharing, and enhance survivability of critical information. Mobile systems have employed a variety of distributed architectures from client-server

  2. Multiprocessor Real-Time Locking Protocols for Replicated Resources

    Science.gov (United States)

    2016-07-01

    …assignment problem, the actual identities of the allocated replicas must be known. When locking protocols are used, tasks may experience delays due to both… Multiprocessor Real-Time Locking Protocols for Replicated Resources, Catherine E. Jarrett, Kecheng Yang, Ming Yang, Pontus Ekberg, and James H… …replicas to execute. In prior work on replicated resources, k-exclusion locks have been used, but this restricts tasks to lock only one replica at a time. To…
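
    The snippet's central idea can be sketched in Python: a k-exclusion lock grants at most k concurrent holders and hands each task the identity of a specific replica. This is an illustrative sketch, not the paper's protocol; the replica names and the queue-based implementation are invented for the example.

        # k-exclusion over k replicated resources: at most k holders at
        # once, each handed a concrete replica identity (names illustrative).
        import threading
        import queue

        class KExclusionLock:
            def __init__(self, replica_ids):
                self._free = queue.Queue()
                for r in replica_ids:
                    self._free.put(r)

            def acquire(self):
                return self._free.get()   # blocks until some replica is free

            def release(self, replica_id):
                self._free.put(replica_id)

        lock = KExclusionLock(["replica0", "replica1", "replica2"])  # k = 3

        def task(name):
            replica = lock.acquire()      # identity of the allocated replica
            try:
                print(f"{name} runs on {replica}")
            finally:
                lock.release(replica)

        threads = [threading.Thread(target=task, args=(f"T{i}",)) for i in range(5)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()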

  3. Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xiaolong Cui

    2014-08-01

    As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting negotiated service-level agreements (SLAs), in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is compounded by the growing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emissions on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing that seamlessly addresses failure at scale while minimizing energy consumption and reducing its environmental impact. The basic tenet of the model is to associate with the main process a suite of shadow processes that execute concurrently, but initially at a much reduced speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault-tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction and is better suited to compute-intensive execution models, where profit can increase by up to 30% due to reduced energy consumption.
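
    The shadow-process tenet lends itself to a back-of-the-envelope model. Below is a toy Python calculation, not the paper's schemes: one main process at full speed, one shadow at reduced speed that accelerates only after a failure, with power assumed proportional to speed cubed (a common dynamic voltage/frequency scaling approximation; all numbers are invented).

        W = 100.0                 # total work units
        s_main, s_shadow = 1.0, 0.3

        def energy(speed, time):
            return (speed ** 3) * time   # assumed power model: speed**3

        def run(fail_at=None):
            if fail_at is None:          # failure-free execution
                t = W / s_main
                return t, energy(s_main, t) + energy(s_shadow, t)
            # Main fails at fail_at; the shadow, having completed
            # s_shadow * fail_at work, speeds up to full speed.
            remaining = W - s_shadow * fail_at
            t = fail_at + remaining / s_main
            e = (energy(s_main, fail_at) + energy(s_shadow, fail_at)
                 + energy(s_main, remaining / s_main))
            return t, e

        for label, f in [("no failure", None), ("failure at t=50", 50.0)]:
            t, e = run(f)
            print(f"{label}: finish time {t:.1f}, energy {e:.1f}")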

  4. Resource Management in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Andrei IONESCU

    2015-01-01

    Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications for how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task: it involves vastly different architectures and is beyond what human operators can handle by hand. The use of these resources by applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  5. Statistics Online Computational Resource for Education

    Science.gov (United States)

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  6. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. However, the cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost-effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage occur.
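
    As a present-day illustration of the bursting pattern the authors describe (the paper predates this tooling), requesting extra worker nodes from EC2 might look like the boto3 sketch below; the AMI ID, instance type and region are placeholders, and valid AWS credentials are assumed.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        def burst_workers(n):
            # Request up to n worker-node instances. Account limits and
            # caps may reject large n -- the "not truly on-demand"
            # behaviour noted in the abstract.
            resp = ec2.run_instances(
                ImageId="ami-0123456789abcdef0",  # placeholder worker image
                InstanceType="c5.xlarge",
                MinCount=1,
                MaxCount=n,
            )
            return [i["InstanceId"] for i in resp["Instances"]]

        print(burst_workers(4))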

  7. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for, CMS — to meet peak demands. In addition to our dedicated resources, we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
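
    For flavour, submitting pilot-style jobs through the HTCondor Python bindings might look like the sketch below. It is a minimal illustration, not CMS's actual glideinWMS configuration; the wrapper script and job attributes are hypothetical, and a reachable schedd is assumed.

        import htcondor

        # Hypothetical pilot job that would bootstrap the CMS environment
        # (e.g. via Bosco/Parrot wrappers) on a non-CMS resource.
        pilot = htcondor.Submit({
            "executable": "bosco_wrapper.sh",     # hypothetical wrapper
            "output": "pilot.$(ProcId).out",
            "error": "pilot.$(ProcId).err",
            "log": "pilot.log",
            "request_cpus": "1",
        })

        schedd = htcondor.Schedd()
        result = schedd.submit(pilot, count=10)   # one pilot per target slot
        print("submitted cluster", result.cluster())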

  8. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is a widely used technology for providing cloud services to users, who are charged for the services they receive. Given the large number of resources involved, the performance of Cloud resource management policies is difficult to evaluate and optimize efficiently. Different simulation toolkits are available for simulating and modelling the Cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In the proposed Efficient Resource Manage...

  9. Discovery of replicating circular RNAs by RNA-seq and computational algorithms.

    Directory of Open Access Journals (Sweden)

    Zhixiang Zhang

    2014-12-01

    Replicating circular RNAs are independent plant pathogens known as viroids, or act to modulate the pathogenesis of plant and animal viruses as their satellite RNAs. The rate of discovery of these subviral pathogens was low over the past 40 years because the classical approaches are technically demanding and time-consuming. We previously described an approach for homology-independent discovery of replicating circular RNAs by analysing the total small RNA populations from samples of diseased tissues with a computational program known as progressive filtering of overlapping small RNAs (PFOR). However, PFOR, written in Perl, is extremely slow and is unable to discover those subviral pathogens that do not trigger in vivo accumulation of extensively overlapping small RNAs. Moreover, PFOR has yet to identify a new viroid capable of initiating independent infection. Here we report the development of PFOR2, which adopts parallel programming in C++ and is 3 to 8 times faster than PFOR. A new computational program was further developed and incorporated into PFOR2 to allow the identification of circular RNAs by deep sequencing of long RNAs instead of small RNAs. PFOR2 analysis of the small RNA libraries from grapevine and apple plants led to the discovery of Grapevine latent viroid (GLVd) and Apple hammerhead viroid-like RNA (AHVd-like RNA), respectively. GLVd was proposed as a new species in the genus Apscaviroid, because it contained the typical structural elements found in this group of viroids and initiated independent infection in grapevine seedlings. AHVd-like RNA encoded a biologically active hammerhead ribozyme in both polarities, and was not specifically associated with any of the viruses found in apple plants. We propose that these computational algorithms have the potential to discover novel circular RNAs in plants, invertebrates and vertebrates regardless of whether they replicate and/or induce the in vivo accumulation of small
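
    The core of the PFOR idea -- growing a contig from overlapping reads and testing for circularity -- can be caricatured in a few lines of Python. This is a toy with an invented minimum-overlap threshold; the real PFOR/PFOR2 algorithms are far more elaborate.

        MIN_OVERLAP = 4   # illustrative threshold; real data needs more care

        def extend(contig, reads):
            grown = True
            while grown:
                grown = False
                for r in list(reads):
                    # Longest suffix of the contig matching a prefix of r.
                    for k in range(len(r) - 1, MIN_OVERLAP - 1, -1):
                        if contig.endswith(r[:k]):
                            contig += r[k:]
                            reads.remove(r)
                            grown = True
                            break
                    if grown:
                        break
            return contig

        def is_circular(contig):
            # Circular if the 3' end overlaps the 5' start.
            return any(contig.endswith(contig[:k])
                       for k in range(MIN_OVERLAP, len(contig) // 2 + 1))

        reads = {"CGTTAGCA", "AGCAATGC"}        # reads from a toy circle
        contig = extend("ATGCCGTT", reads)
        print(contig, is_circular(contig))      # ATGCCGTTAGCAATGC True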

  10. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    The wide application of Internet of Things (IoT) systems increasingly demands more hardware facilities for processing various resources, including data, information, and knowledge. With the rapid growth in the quantity of generated resources, it is difficult to adapt to this situation using traditional cloud computing models. Fog computing enables storage and computing services to be performed at the edge of the network to extend cloud computing. However, Fog computing applications face problems such as restricted computation, limited storage, and expensive network bandwidth, and balancing the distribution of network resources is a challenge. We propose a processing optimization mechanism for typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of a Data Graph, an Information Graph, and a Knowledge Graph. The proposed mechanism aims to minimize processing cost over the network, computation, and storage while maximizing processing performance in a business-value-driven manner. Simulation results show that the proposed approach improves the ratio of performance to user investment. Meanwhile, conversions between resource types provide support for dynamically allocating network resources.

  11. Computer Resources | College of Engineering & Applied Science

    Science.gov (United States)

  12. Exploitation of heterogeneous resources for ATLAS Computing

    CERN Document Server

    Chudoba, Jiri; The ATLAS collaboration

    2018-01-01

    LHC experiments require significant computational resources for Monte Carlo simulations and real data processing, and the ATLAS experiment is no exception. In 2017, ATLAS steadily used almost 3M HS06 units, which corresponds to about 300 000 standard CPU cores. The total disk and tape capacity managed by the Rucio data management system exceeded 350 PB. Resources are provided mostly by Grid computing centers distributed in geographically separated locations and connected by the Grid middleware. The ATLAS collaboration developed several systems to manage computational jobs, data files and network transfers. ATLAS solutions for job and data management (PanDA and Rucio) were generalized and are now used also by other collaborations. More components are needed to include new resources such as private and public clouds, volunteers' desktop computers and, primarily, supercomputers in major HPC centers. Workflows and data flows differ significantly for these less traditional resources and extensive software re...

  13. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing, which so far seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  14. Framework of Resource Management for Intercloud Computing

    Directory of Open Access Journals (Sweden)

    Mohammad Aazam

    2014-01-01

    There has been a very rapid increase in digital media content, due to which the media cloud is gaining importance. The cloud computing paradigm provides management of resources and helps create an extended portfolio of services. Through cloud computing, not only are services managed more efficiently, but service discovery is also made possible. To handle the rapid increase in content, the media cloud plays a vital role, but it is not possible for standalone clouds to handle everything as user demands increase. For scalability and better service provisioning, clouds at times have to communicate with other clouds and share their resources. This scenario is called Intercloud computing, or cloud federation. Research on Intercloud computing is still in its infancy, and existing studies address resource management, one of its key concerns, only in a simplistic way. In this study, we present a resource management model that takes into account different types of services, different customer types, customer characteristics, pricing, and refunding. The presented framework was implemented using Java and NetBeans 8.0 and evaluated using the CloudSim 3.0.3 toolkit. The presented results and their discussion validate our model and its efficiency.

  15. Improving ATLAS computing resource utilization with HammerCloud

    CERN Document Server

    Schovancova, Jaroslava; The ATLAS collaboration

    2018-01-01

    HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) Operations and automation efforts, providing automated resource exclusion and recovery tools that help re-focus operational manpower on areas that have yet to be automated, and improving utilization of available computing resources. We present the recent evolution of the auto-exclusion/recovery tools: faster inclusion of new resources in the testing machinery, machine learning algorithms for anomaly detection, resources categorized as master vs. slave for the purpose of blacklisting, and a tool for auto-exclusion/recovery of resources triggered by Event Service job failures that is being extended to other workflows besides the Event Service. We describe how HammerCloud helped commission various concepts and components of distributed systems: simplified configuration of qu...

  16. ResourceGate: A New Solution for Cloud Computing Resource Allocation

    OpenAIRE

    Abdullah A. Sheikh

    2012-01-01

    Cloud computing has become a focus of educational and business communities, whose concerns include improving the Quality of Service (QoS) provided, along with reliability, performance and cost reduction. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring these benefits is considered to be the major factor in the cloud computing environment. This paper surveys recent research related to cloud computing resource al...

  17. LHCb experience with LFC replication

    CERN Document Server

    Bonifazi, F; Perez, E D; D'Apice, A; dell'Agnello, L; Düllmann, D; Girone, M; Re, G L; Martelli, B; Peco, G; Ricci, P P; Sapunenko, V; Vagnoni, V; Vitlacil, D

    2008-01-01

    Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.

  18. LHCb experience with LFC replication

    International Nuclear Information System (INIS)

    Bonifazi, F; Carbone, A; D'Apice, A; Dell'Agnello, L; Re, G L; Martelli, B; Ricci, P P; Sapunenko, V; Vitlacil, D; Perez, E D; Duellmann, D; Girone, M; Peco, G; Vagnoni, V

    2008-01-01

    Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements

  19. Aggregated Computational Toxicology Resource (ACTOR)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aggregated Computational Toxicology Resource (ACTOR) is a database on environmental chemicals that is searchable by chemical name and other identifiers, and by...

  20. Aggregated Computational Toxicology Online Resource

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...

  1. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    International Nuclear Information System (INIS)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources become available to the HEP community. The new cloud technologies also come with new challenges, one of which is the contextualization of computing resources with regard to the requirements of the user and their experiment. In particular, Google's new cloud platform, Google Compute Engine (GCE), does not allow upload of users' virtual machine images. This precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  2. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing, which so far seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied and contrasted with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of the Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Software as a Service (SaaS) model.

  3. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing, which so far seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied and contrasted with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of the Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Software as a Service (SaaS) model.

  4. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  5. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important for education, security monitoring, and so on. However, their huge volume, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS that provides a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for moving video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  6. Computing Bounds on Resource Levels for Flexible Plans

    Science.gov (United States)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan, because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, it could replace the looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm whose measure of complexity is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x, and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow
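
    The max-flow ingredient can be illustrated with a toy networkx computation: consumption events are offset by production events that the temporal network allows to precede them, and whatever cannot be offset bounds the worst-case resource draw. The events, amounts and precedence edges below are invented, and this shows only the flavour of the construction, not the paper's full algorithm.

        import networkx as nx

        G = nx.DiGraph()
        producers = {"p1": 2, "p2": 1}   # event -> units produced
        consumers = {"c1": 4, "c2": 1}   # event -> units consumed
        for p, amount in producers.items():
            G.add_edge("src", p, capacity=amount)
        for c, amount in consumers.items():
            G.add_edge(c, "sink", capacity=amount)
        # Edge p -> c: the temporal constraints allow p to occur before c
        # (chosen arbitrarily for this example).
        G.add_edge("p1", "c1", capacity=float("inf"))
        G.add_edge("p2", "c1", capacity=float("inf"))
        G.add_edge("p2", "c2", capacity=float("inf"))

        offset, _ = nx.maximum_flow(G, "src", "sink")
        print("unoffset consumption:", sum(consumers.values()) - offset)  # 2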

  7. Some issues of creation of belarusian language computer resources

    OpenAIRE

    Rubashko, N.; Nevmerjitskaia, G.

    2003-01-01

    The main reason for creating computer resources for a natural language is the need to bring the means of language normalization into accord with the form of the language's existence: the computer form of language usage should correspond to a computer form of fixing language standards. This paper discusses various aspects of the creation of Belarusian-language computer resources. It also gives a brief overview of the objectives of the project involved.

  8. Physical-resource requirements and the power of quantum computation

    International Nuclear Information System (INIS)

    Caves, Carlton M; Deutsch, Ivan H; Blume-Kohout, Robin

    2004-01-01

    The primary resource for quantum computation is the Hilbert-space dimension. Whereas Hilbert space itself is an abstract construction, the number of dimensions available to a system is a physical quantity that requires physical resources. Avoiding a demand for an exponential amount of these resources places a fundamental constraint on the systems that are suitable for scalable quantum computation. To be scalable, the number of degrees of freedom in the computer must grow nearly linearly with the number of qubits in an equivalent qubit-based quantum computer. These considerations rule out quantum computers based on a single particle, a single atom, or a single molecule consisting of a fixed number of atoms or on classical waves manipulated using the transformations of linear optics

  9. Research on elastic resource management for multi-queue under cloud computing environment

    Science.gov (United States)

    Cheng, Zhenjing; Li, Haibo; Huang, Qiulan; Cheng, Yaodong; Chen, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the job queues in HTCondor, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practice, virtual computing resources dynamically expand or shrink as computing requirements change. Additionally, the CPU utilization ratio of computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
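
    A dual-threshold elasticity rule of the kind described can be sketched in a few lines of Python; the thresholds and the boot/drain callbacks are placeholders, not IHEP's actual implementation.

        QUEUED_HIGH = 50    # expand the pool above this many queued jobs
        IDLE_HIGH = 10      # shrink the pool above this many idle nodes

        def rebalance(queued_jobs, idle_nodes, quota_left, boot_vm, drain_vm):
            if queued_jobs > QUEUED_HIGH and quota_left > 0:
                for _ in range(min(queued_jobs - QUEUED_HIGH, quota_left)):
                    boot_vm()     # e.g. ask OpenStack for a worker VM
            elif idle_nodes > IDLE_HIGH:
                for _ in range(idle_nodes - IDLE_HIGH):
                    drain_vm()    # retire an idle HTCondor worker

        rebalance(queued_jobs=60, idle_nodes=2, quota_left=5,
                  boot_vm=lambda: print("boot VM"),
                  drain_vm=lambda: print("drain VM"))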

  10. A Semi-Preemptive Computational Service System with Limited Resources and Dynamic Resource Ranking

    Directory of Open Access Journals (Sweden)

    Fang-Yie Leu

    2012-03-01

    In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current waiting-queue length, CPU utilization and bandwidth. Based on these scores, resource nodes are classified into three levels, and user requests are likewise classified into three types according to their time constraints. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for a request of a higher type, it preempts a resource that is executing a lower-type request so that the higher-type request can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), whose users can then exploit convenient mobile network services and wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
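
    The scoring-and-levels scheme can be pictured with a small Python sketch; the weights and cut-offs below are invented stand-ins for whatever the SePCS actually uses.

        def node_score(cpu_ghz, free_mem_gb, queue_len, cpu_util, bw_mbps):
            # Faster CPU, more memory and bandwidth raise the score;
            # a long waiting queue and high utilization lower it.
            return (2.0 * cpu_ghz + 1.0 * free_mem_gb + 0.5 * bw_mbps / 100
                    - 1.5 * queue_len - 3.0 * cpu_util)

        def resource_level(score):
            if score >= 10.0:
                return 1          # reserved for tightly constrained requests
            return 2 if score >= 5.0 else 3

        s = node_score(cpu_ghz=3.2, free_mem_gb=8.0, queue_len=2,
                       cpu_util=0.4, bw_mbps=500)
        print(f"score {s:.1f} -> level {resource_level(s)}")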

  11. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools for evaluating different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.

  12. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  13. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has come into wide use. In particular, the success of object-oriented and multimedia technology has promoted the development of graphics technology in computer software systems, making computer graphics theory and application an important topic in the computer field, with applications in ever more domains. In recent years, with the development of the social economy and especially the rapid development of information technology, traditional communication resource management can no longer effectively meet resource management needs: it still relies on the original management tools and methods for equipment management and maintenance, which causes many problems. It is very difficult for non-professionals to understand the equipment and the overall situation in communication resource management, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. Aiming at these problems, this paper proposes to introduce computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  14. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Today cloud computing has become a key technology for online allotment of computing resources and online storage of user data at lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there is a growing need for resource management strategies in a cloud computing environment that encompass both end-user satisfaction and high job-submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines (matchmaking). The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment, to satisfy user requirements and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a dynamic weight active monitor (DWAM) load-balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which demonstrates the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results show that the proposed algorithm dramatically improves response time, data processing time and resource utilization compared with the Active Monitor and VM-assign algorithms.
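
    The weight-based dispatch at the heart of such an algorithm can be sketched as follows; the weight formula is an invented stand-in, not the DWAM definition from the paper.

        class VM:
            def __init__(self, name, mips):
                self.name, self.mips, self.active = name, mips, 0

            def weight(self):
                # Higher capacity, fewer active requests -> higher weight.
                return self.mips / (1 + self.active)

        def dispatch(vms):
            vm = max(vms, key=VM.weight)   # pick the best-weighted VM
            vm.active += 1
            return vm.name

        vms = [VM("vm1", 1000), VM("vm2", 2000), VM("vm3", 1500)]
        for req in range(5):
            print("request", req, "->", dispatch(vms))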

  15. Optimal Computing Resource Management Based on Utility Maximization in Mobile Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Haoyu Meng

    2017-01-01

    Mobile crowdsourcing, as an emerging service paradigm, enables the computing resource requestor (CRR) to outsource computation tasks to each computing resource provider (CRP). Considering the importance of pricing as an essential incentive to coordinate the real-time interaction between the CRR and CRPs, in this paper we propose an optimal real-time pricing strategy for computing resource management in mobile crowdsourcing. Firstly, we analytically model the CRR and CRP behaviors in the form of carefully selected utility and cost functions, based on concepts from microeconomics. Secondly, we propose a distributed algorithm based on the exchange of control messages, which contain information on computing resource demand/supply and real-time prices. We show that there exist real-time prices that can align individual optimality with systematic optimality. Finally, we also take account of the interaction among CRPs and formulate computing resource management as a game whose Nash equilibrium is achievable via best response. Simulation results demonstrate that the proposed distributed algorithm can potentially benefit both the CRR and the CRPs. The coordinator in mobile crowdsourcing can thus use the optimal real-time pricing strategy to manage computing resources for the benefit of the overall system.
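
    The price-coordination loop can be caricatured with a toy tatonnement in Python: the coordinator raises the price when demand exceeds supply and lowers it otherwise. The utility and cost functions are invented stand-ins, not the paper's models.

        def crr_demand(price):
            # A requestor maximizing log(1 + d) - price*d buys d = 1/price - 1.
            return max(0.0, 1.0 / price - 1.0)

        def crp_supply(price, n_providers=5):
            # Each provider maximizing price*s - s**2 offers s = price / 2.
            return n_providers * price / 2.0

        price, step = 0.5, 0.05
        for _ in range(200):
            excess = crr_demand(price) - crp_supply(price)
            price = max(0.01, price + step * excess)

        print(f"price ~ {price:.3f}, demand {crr_demand(price):.3f}, "
              f"supply {crp_supply(price):.3f}")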

  16. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  17. Load/resource matching for period-of-record computer simulation

    International Nuclear Information System (INIS)

    Lindsey, E.D. Jr.; Robbins, G.E. III

    1991-01-01

    The Southwestern Power Administration (Southwestern), an agency of the Department of Energy, is responsible for marketing the power and energy produced at Federal hydroelectric power projects developed by the U.S. Army Corps of Engineers in the southwestern United States. In order to maximize benefits from limited resources, to evaluate proposed changes in the operation of existing projects, and to determine the feasibility and marketability of proposed new projects, Southwestern utilizes a period-of-record computer simulation model created in the 1960s. Southwestern is constructing a new computer simulation model to take advantage of changes in computers, policy, and procedures. Within any hydroelectric power reservoir system, the ability of the resources to match the load demand is critical and presents complex problems, so the method used to compare available energy resources to energy load demands is a very important aspect of the new model. Southwestern has developed an innovative method which compares a resource duration curve with a load duration curve, adjusting the resource duration curve to make the most efficient use of the available resources.

  18. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

    Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh… Interim Report: Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource: Intelligent Executive Computer Communication, John Lyman and Carla J. Conaway, University of California at Los Angeles.

  19. LHCb Data Replication During SC3

    CERN Multimedia

    Smith, A

    2006-01-01

    LHCb's participation in LCG's Service Challenge 3 involves testing the bulk data transfer infrastructure developed to allow high-bandwidth distribution of data across the grid in accordance with the computing model. To enable reliable bulk replication of data, LHCb's DIRAC system has been integrated with gLite's File Transfer Service middleware component to make use of dedicated network links between LHCb computing centres. DIRAC's Data Management tools previously allowed the replication, registration and deletion of files on the grid. For SC3, supplementary functionality has been added to allow bulk replication of data (using FTS) and efficient mass registration to the LFC replica catalog. Provisional performance results have shown that the system developed can meet the expected data replication rate required by the computing model in 2007. This paper details the experience and results of integration and utilisation of DIRAC with the SC3 transfer machinery.

  20. Shared-resource computing for small research labs.

    Science.gov (United States)

    Ackerman, M J

    1982-04-01

    A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  1. How many bootstrap replicates are necessary?

    Science.gov (United States)

    Pattengale, Nicholas D; Alipour, Masoud; Bininda-Emonds, Olaf R P; Moret, Bernard M E; Stamatakis, Alexandros

    2010-03-01

    Phylogenetic bootstrapping (BS) is a standard technique for inferring confidence values on phylogenetic trees that is based on reconstructing many trees from minor variations of the input data, trees called replicates. BS is used with all phylogenetic reconstruction approaches, but we focus here on one of the most popular, maximum likelihood (ML). Because ML inference is so computationally demanding, it has proved too expensive to date to assess the impact of the number of replicates used in BS on the relative accuracy of the support values. For the same reason, a rather small number (typically 100) of BS replicates are computed in real-world studies. Stamatakis et al. recently introduced a BS algorithm that is 1 to 2 orders of magnitude faster than previous techniques, while yielding qualitatively comparable support values, making an experimental study possible. In this article, we propose stopping criteria--that is, thresholds computed at runtime to determine when enough replicates have been generated--and we report on the first large-scale experimental study to assess the effect of the number of replicates on the quality of support values, including the performance of our proposed criteria. We run our tests on 17 diverse real-world DNA--single-gene as well as multi-gene--datasets, which include 125-2,554 taxa. We find that our stopping criteria typically stop computations after 100-500 replicates (although the most conservative criterion may continue for several thousand replicates) while producing support values that correlate at better than 99.5% with the reference values on the best ML trees. Significantly, we also find that the stopping criteria can recommend very different numbers of replicates for different datasets of comparable sizes. Our results are thus twofold: (i) they give the first experimental assessment of the effect of the number of BS replicates on the quality of support values returned through BS, and (ii) they validate our proposals for
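
    The flavour of a runtime stopping criterion can be shown with an ordinary statistical bootstrap in Python (a generic illustration, not the phylogenetic pipeline): replicates are added until the support value of interest stabilizes. The stabilization tolerance and check interval are invented for the example.

        import random

        random.seed(1)
        data = [random.gauss(0.3, 1.0) for _ in range(200)]

        def bootstrap_mean(sample):
            return sum(random.choice(sample) for _ in sample) / len(sample)

        support, n, prev = 0, 0, None
        while True:
            n += 1
            if bootstrap_mean(data) > 0:   # "support" for a positive mean
                support += 1
            if n % 100 == 0:               # check every 100 replicates
                cur = support / n
                if prev is not None and abs(cur - prev) < 0.01:
                    break                  # support value has stabilized
                prev = cur

        print(f"stopped after {n} replicates; support = {support / n:.3f}")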

  2. Towards minimal resources of measurement-based quantum computation

    International Nuclear Information System (INIS)

    Perdrix, Simon

    2007-01-01

    We improve the upper bound on the minimal resources required for measurement-only quantum computation (M A Nielsen 2003 Phys. Lett. A 308 96-100; D W Leung 2004 Int. J. Quantum Inform. 2 33; S Perdrix 2005 Int. J. Quantum Inform. 3 219-23). Minimizing the resources required for this model is a key issue for experimental realization of a quantum computer based on projective measurements. This new upper bound also allows one to reply in the negative to the open question presented by Perdrix (2004 Proc. Quantum Communication Measurement and Computing) about the existence of a trade-off between observable and ancillary qubits in measurement-only QC

  3. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The Nasa Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  4. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers in implementing Computer Science curricula in classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children with the intention of engaging children and increasing interest, rather than formally teaching concepts and skills. What is the educational quality of existing Computer Science resources, and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  5. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). An OSG stack is installed for the NOvA experiment. Other user groups use the local batch system directly. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and higher priorities on the rest (1500 cores in total), which can also be used by any user of the MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12000 cores in total.

  6. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  7. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    Full Text Available The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  8. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research workhorse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, Jazz has enabled researchers to meet project milestones and achieve breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  9. Using OSG Computing Resources with (iLC)Dirac

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Petric, Marko

    2017-01-01

    CPU cycles for small experiments and projects can be scarce, thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which directly submit to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them from within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC the required wrapper classes were develo...

  10. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    Science.gov (United States)

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  11. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method by which KM3NeT users can utilize the EGI computing resources in a simulation-driven use-case.

  12. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10¹² floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  13. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Tupputi, S A; Girolamo, A Di; Kouba, T; Schovancová, J

    2014-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring the status of storage resources with fine time-granularity, and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
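
    The abstract does not spell out SAAB's inference algorithm, so the following is only a toy illustration of the general idea, deciding a storage area's state from its recent test history; the thresholds and state names are invented.

      def classify(history, window=12, blacklist_frac=0.5, recover_streak=4):
          """Toy SAAB-like rule: blacklist a storage area when most of the
          recent tests fail; mark it active again after a streak of passes."""
          recent = list(history)[-window:]
          if not recent:
              return "active"
          if recent.count("fail") / len(recent) >= blacklist_frac:
              return "blacklisted"
          if len(recent) >= recover_streak and all(r == "ok" for r in recent[-recover_streak:]):
              return "active"
          return "degraded"

      print(classify(["ok", "ok", "fail", "fail", "fail", "fail", "fail", "ok"]))
      # -> 'blacklisted' under these toy thresholds (5 of the last 8 tests failed)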

  14. Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data

    Science.gov (United States)

    Koranda, Scott

    2004-03-01

    The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making available to LSC scientists compute resources at sites across the United States and Europe. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together these Grid Computing technologies and infrastructure have formed the LSC DataGrid--a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work, however, remains in order to scale current analyses, and recent lessons learned need to be integrated into the next generation of Grid middleware.

  15. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
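
    The module's actual policies are not given in the abstract; a toy version of the on-the-fly scaling decision it describes, with an invented threshold and price model, might look like this.

      def scaling_action(avg_response_ms, sla_ms, mean_cpu_util, n_vms, vm_price):
          """Grow when the SLA response time is violated, shrink when the
          pool is underused, and report the resulting change in price."""
          if avg_response_ms > sla_ms:
              n_vms += 1                      # SLA violated: scale out
          elif mean_cpu_util < 0.3 and n_vms > 1:
              n_vms -= 1                      # idle capacity: scale in
          return n_vms, n_vms * vm_price      # new allocation and its price

      print(scaling_action(avg_response_ms=450, sla_ms=300,
                           mean_cpu_util=0.8, n_vms=4, vm_price=0.12))
      # -> (5, 0.6): one more VM, and the client pays accordingly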

  16. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  17. Development of Computer-Based Resources for Textile Education.

    Science.gov (United States)

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  18. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H [Sandia National Laboratories, California, Livermore, CA 94551 (United States)]

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.
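
    As a minimal illustration of the interprocess-communication techniques mentioned above, the mpi4py sketch below composites per-rank partial images with a parallel reduction; the array size and the max-compositing rule are placeholders for a real renderer.

      # Run with e.g.: mpiexec -n 4 python composite.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Each rank "renders" its own partial image; synthetic data stands in here.
      local = np.random.rand(256, 256)
      result = np.empty_like(local) if rank == 0 else None

      # Composite the partial images on rank 0 with a max reduction.
      comm.Reduce(local, result, op=MPI.MAX, root=0)
      if rank == 0:
          print("composited image range:", result.min(), result.max())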

  19. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  20. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10¹² floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  1. CMS computing upgrade and evolution

    CERN Document Server

    Hernandez Calama, Jose

    2013-01-01

    The distributed Grid computing infrastructure has been instrumental in the successful exploitation of the LHC data leading to the discovery of the Higgs boson. The computing system will need to face new challenges from 2015 on when LHC restarts with an anticipated higher detector output rate and event complexity, but with only a limited increase in the computing resources. A more efficient use of the available resources will be mandatory. CMS is improving the data storage, distribution and access as well as the processing efficiency. Remote access to the data through the WAN, dynamic data replication and deletion based on the data access patterns, and separation of disk and tape storage are some of the areas being actively developed. Multi-core processing and scheduling are being pursued in order to make better use of the multi-core nodes available at the sites. In addition, CMS is exploring new computing techniques, such as Cloud Computing, to get access to opportunistic resources or as a means of using wit...
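
    The dynamic replication and deletion mentioned above can be illustrated with a toy popularity-based planner; the dataset names, thresholds and replica bookkeeping are all invented for the sketch.

      def plan_replication(access_counts, replicas, hot=1000, cold=10, max_copies=3):
          """Add replicas for heavily accessed datasets, drop spare copies
          of cold ones; returns a list of (action, dataset) pairs."""
          actions = []
          for ds, hits in access_counts.items():
              copies = replicas.get(ds, 1)
              if hits >= hot and copies < max_copies:
                  actions.append(("replicate", ds))
              elif hits <= cold and copies > 1:
                  actions.append(("delete_replica", ds))
          return actions

      print(plan_replication({"/ds/A": 5000, "/ds/B": 2},
                             {"/ds/A": 1, "/ds/B": 3}))
      # -> [('replicate', '/ds/A'), ('delete_replica', '/ds/B')]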

  2. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    The modern-day continued demand for resource-hungry services and applications in the IT sector has led to the development of Cloud computing. A Cloud computing environment involves high-cost infrastructure on one hand and needs high-scale computational resources on the other. These resources need to be provisioned (allocation and scheduling) to the end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  3. Parametrised Constants and Replication for Spatial Mobility

    DEFF Research Database (Denmark)

    Hüttel, Hans; Haagensen, Bjørn

    2009-01-01

    Parametrised constants and replication are common ways of expressing infinite computation in process calculi. While parametrised constants can be encoded using replication in the π-calculus, this changes in the presence of spatial mobility as found in e.g. the distributed π-calculus...... of the distributed π-calculus with parametrised constants and replication are incomparable. On the other hand, we shall see that there exists a simple encoding of recursion in mobile ambients....

  4. Computing Resource And Work Allocations Using Social Profiles

    Directory of Open Access Journals (Sweden)

    Peter Lavin

    2013-01-01

    If several distributed and disparate computer resources exist, many of which have been created for different and diverse reasons, and several large scale computing challenges also exist with similar diversity in their backgrounds, then one problem which arises in trying to assemble enough of these resources to address such challenges is the need to align and accommodate the different motivations and objectives which may lie behind the existence of both the resources and the challenges. Software agents are offered as a mainstream technology for modelling the types of collaborations and relationships needed to do this. As an initial step towards forming such relationships, agents need a mechanism to consider social and economic backgrounds. This paper explores addressing social and economic differences using a combination of textual descriptions known as social profiles and search engine technology, both of which are integrated into an agent technology.

  5. GridFactory - Distributed computing on ephemeral resources

    DEFF Research Database (Denmark)

    Orellana, Frederik; Niinimaki, Marko

    2011-01-01

    A novel batch system for high throughput computing is presented. The system is specifically designed to leverage virtualization and web technology to facilitate deployment on cloud and other ephemeral resources. In particular, it implements a security model suited for forming collaborations...

  6. LHCb Computing Resources: 2012 re-assessment, 2013 request and 2014 forecast

    CERN Document Server

    Graciani Diaz, Ricardo

    2012-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for 2012 data-taking period, request of computing resource needs for 2013, and a first forecast of the 2014 needs, when restart of data-taking is foreseen. Estimates are based on 2011 experience, as well as on the results of a simulation of the computing model described in the document. Differences in the model and deviations in the estimates from previous presented results are stressed.

  7. A Distributed OpenCL Framework using Redundant Computation and Data Replication

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Junghyun [Seoul National University, Korea; Gangwon, Jo [Seoul National University, Korea; Jaehoon, Jung [Seoul National University, Korea; Lee, Jaejin [Seoul National University, Korea

    2016-01-01

    Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with an illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.

  8. LHCb Computing Resources: 2011 re-assessment, 2012 request and 2013 forecast

    CERN Document Server

    Graciani, R

    2011-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2011 data taking period, request of computing resource needs for the 2012 data taking period and a first forecast of the 2013 needs, when no data taking is foreseen. Estimates are based on 2010 experience and the latest updates to the LHC schedule, as well as on a new implementation of the computing model simulation tool. Differences in the model and deviations in the estimates from previously presented results are stressed.

  9. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments present a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real world market, there exist various economic models for setting the price for goods based on supply-and-demand and their value to the user. They include commodity market, posted price, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
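
    Nimrod/G's scheduler itself is more elaborate; purely as a sketch of deadline-and-cost-based brokering in its spirit, the following prefers cheap resources and falls back on faster, pricier ones only when the deadline would otherwise be missed (resource fields and numbers are invented).

      def schedule(jobs, resources, deadline_s):
          """Greedy cost-optimized allocation under a deadline constraint."""
          plan, remaining = [], jobs
          for r in sorted(resources, key=lambda r: r["cost_per_job"]):
              capacity = int(r["jobs_per_s"] * deadline_s)  # jobs finishable in time
              take = min(remaining, capacity)
              if take:
                  plan.append((r["name"], take))
                  remaining -= take
          if remaining:
              raise RuntimeError("deadline cannot be met with the given resources")
          return plan

      print(schedule(1000, [
          {"name": "cheap-cluster", "jobs_per_s": 0.5, "cost_per_job": 1.0},
          {"name": "fast-cloud", "jobs_per_s": 5.0, "cost_per_job": 4.0},
      ], deadline_s=600))
      # -> [('cheap-cluster', 300), ('fast-cloud', 700)]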

  10. Next Generation Computer Resources: Reference Model for Project Support Environments (Version 2.0)

    National Research Council Canada - National Science Library

    Brown, Alan

    1993-01-01

    The objective of the Next Generation Computer Resources (NGCR) program is to restructure the Navy's approach to acquisition of standard computing resources to take better advantage of commercial advances and investments...

  11. Active resources concept of computation for enterprise software

    Directory of Open Access Journals (Sweden)

    Koryl Maciej

    2017-06-01

    Traditional computational models for enterprise software are still to a great extent centralized. However, the rapid growth of modern computation techniques and frameworks means that contemporary software is becoming more and more distributed. Towards the development of a new complete and coherent solution for distributed enterprise software construction, a synthesis of three well-grounded concepts is proposed: the Domain-Driven Design technique of software engineering, the REST architectural style and the actor model of computation. As a result a new resources-based framework arises, which after the first cases of use seems to be useful and worthy of further research.

  12. LHCb Computing Resource usage in 2017

    CERN Document Server

    Bozzi, Concezio

    2018-01-01

    This document reports the usage of computing resources by the LHCb collaboration during the period January 1st – December 31st 2017. The data in the following sections have been compiled from the EGI Accounting portal: https://accounting.egi.eu. For LHCb specific information, the data is taken from the DIRAC Accounting at the LHCb DIRAC Web portal: http://lhcb-portal-dirac.cern.ch.

  13. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Science.gov (United States)

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
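
    The revenue-sharing details are left to the paper, but the Shapley value itself has a standard definition: each cooperator receives its marginal contribution averaged over all orders in which the coalition could have formed. A brute-force computation, with made-up coalition revenues for three resource-sharing devices, looks like this.

      from itertools import permutations
      from math import factorial

      def shapley(players, value):
          """Exact Shapley values; 'value' maps a frozenset coalition to revenue."""
          phi = {p: 0.0 for p in players}
          for order in permutations(players):
              coalition = frozenset()
              for p in order:
                  phi[p] += value(coalition | {p}) - value(coalition)
                  coalition |= {p}
          n_orders = factorial(len(players))
          return {p: total / n_orders for p, total in phi.items()}

      # Hypothetical revenues: cooperation is worth more than the sum of parts.
      v = {frozenset(): 0, frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 2,
           frozenset("AB"): 4, frozenset("AC"): 5, frozenset("BC"): 5,
           frozenset("ABC"): 9}
      print(shapley("ABC", lambda c: v[c]))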

  14. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Nan Zhang

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.

  15. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

    By leveraging cloud computing such as Infrastructure as a Service (IaaS), the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet applications. With this increasing trend, resource allocation that both assures QoS via Service Level Agreements (SLA) and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming services. On the other hand, providers of IaaS are also concerned with their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource allocation problem as a Multicriteria Decision Making problem and propose an effective trade-off approach based on a goal programming model. To validate its effectiveness, a cloud architecture for streaming applications is addressed and extensive analysis is performed for the related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.
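
    The paper's goal programming model is not reproduced in the abstract; as a rough sketch of the idea, candidate allocations can be scored by the weighted deviation of each criterion from its goal, and the smallest total deviation wins (the criteria, goals and weights below are invented).

      def goal_score(alloc, goals, weights):
          """Weighted sum of absolute deviations from the per-criterion goals."""
          return sum(weights[c] * abs(alloc[c] - goals[c]) for c in goals)

      candidates = [
          {"name": "plan1", "qos": 0.95, "cost": 120, "energy": 80},
          {"name": "plan2", "qos": 0.90, "cost": 90, "energy": 60},
      ]
      goals = {"qos": 0.99, "cost": 100, "energy": 50}
      weights = {"qos": 1000, "cost": 1, "energy": 1}  # QoS deviations weigh heavily

      best = min(candidates, key=lambda a: goal_score(a, goals, weights))
      print(best["name"])  # -> 'plan1': its small QoS shortfall outweighs its cost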

  16. Dynamic provisioning of local and remote compute resources with OpenStack

    Science.gov (United States)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
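
    As an illustration of such demand-driven instantiation, a sketch using the openstacksdk library is shown below; the cloud name, image, flavor and the naive scaling policy are all invented, and a production manager like the one described would add monitoring, draining and teardown.

      import openstack

      # Credentials come from clouds.yaml or the environment; the cloud name,
      # image and flavor below are placeholders.
      conn = openstack.connect(cloud="institute-cloud")

      def scale_workers(pending_jobs, jobs_per_worker=8, prefix="worker"):
          """Boot enough worker VMs for the queued jobs (toy policy)."""
          needed = -(-pending_jobs // jobs_per_worker)   # ceiling division
          image = conn.compute.find_image("hep-worker-image")
          flavor = conn.compute.find_flavor("m1.large")
          for i in range(needed):
              conn.compute.create_server(name="%s-%d" % (prefix, i),
                                         image_id=image.id, flavor_id=flavor.id)

      scale_workers(pending_jobs=42)   # boots 6 workers under this toy policy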

  17. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has significantly lowered the entry barrier for data centre participation, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from file-level to event-level and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  18. LHCb Computing Resources: 2019 requests and reassessment of 2018 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents the computing resources needed by LHCb in 2019 and a reassessment of the 2018 requests, as resulting from the current experience of Run2 data taking and minor changes in the LHCb computing model parameters.

  19. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  20. Resource-Aware Load Balancing Scheme using Multi-objective Optimization in Cloud Computing

    OpenAIRE

    Kavita Rana; Vikas Zandu

    2016-01-01

    Cloud computing is a service-based, on-demand, pay-per-use model consisting of interconnected and virtualized resources delivered over the internet. In cloud computing, there are usually a number of jobs that need to be executed with the available resources to achieve optimal performance, the least possible total time for completion, the shortest response time, efficient utilization of resources, etc. Hence, job scheduling is the most important concern, which aims to ensure that the user's requirements are ...

  1. VECTR: Virtual Environment Computational Training Resource

    Science.gov (United States)

    Little, William L.

    2018-01-01

    The Westridge Middle School Curriculum and Community Night is an annual event designed to introduce students and parents to potential employers in the Central Florida area. NASA participated in the event in 2017, and has been asked to come back for the 2018 event on January 25. We will be demonstrating our Microsoft Hololens Virtual Rovers project, and the Virtual Environment Computational Training Resource (VECTR) virtual reality tool.

  2. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    Science.gov (United States)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. In recent HEP experiments, grid middleware has been used to organize the services and the resources; however, it relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer computing resources. One big challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge to this paradox, as its pilot is more closely coupled with operations requiring X.509 authentication than the pilot implementations in its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach which detaches the payload running from the Belle II DIRAC pilot, a customized pilot pulling and processing jobs from the Belle II distributed computing platform, so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server which handles all the operations requiring X.509 authentication. So far, we have developed and deployed the prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. This approach can also be applied on HPC systems whose worker nodes do not have outbound connectivity to interact with the DIRAC system in general.

  3. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    OpenAIRE

    Cirasella, Jill

    2009-01-01

    This article is an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news.

  4. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

    The mobile cloud computing (MCC) paradigm, which combines mobile computing and the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When offloading a complicated multitask application to the MCC environment, each task executes individually in terms of its own computation, storage, and bandwidth requirements. Due to user mobility, the provided resources exhibit different performance metrics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, how to choose the appropriate offload destination and orchestrate the resources for multitask applications is a challenging problem. This paper realizes programmable resource provision for heterogeneous energy-constrained computing environments, where a software defined controller is responsible for resource orchestration, offload, and migration. The resource orchestration is formulated as a multiobjective optimization problem covering the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain approximate optimal solutions. Simulation results show that the solutions for all of our studied cases can almost hit the Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.

  5. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  6. Understanding how replication processes can maintain systems away from equilibrium using Algorithmic Information Theory.

    Science.gov (United States)

    Devine, Sean D

    2016-02-01

    Replication can be envisaged as a computational process that is able to generate and maintain order far from equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact on a system. The capability of replicated structures to access high-quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements of maintaining a system far from equilibrium. Using Landauer's principle, where destabilising processes, operating under the second law of thermodynamics, change the information content or the algorithmic entropy of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside intervention. Both diversity in replicated structures and the coupling of different replicated systems increase the ability of the system (or systems) to self-regulate in a changing environment, as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
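
    The energy bookkeeping behind this argument is Landauer's principle; in the abstract's notation, countering a disturbance that changes the algorithmic entropy by ΔH bits at temperature T costs at least

      % Landauer bound: resetting one bit dissipates at least k_B T ln 2, so
      E_{\min} = \Delta H \, k_B T \ln 2 .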

  7. Replication of clinical innovations in multiple medical practices.

    Science.gov (United States)

    Henley, N S; Pearce, J; Phillips, L A; Weir, S

    1998-11-01

    Many clinical innovations had been successfully developed and piloted in individual medical practice units of Kaiser Permanente in North Carolina during 1995 and 1996. Difficulty in replicating these clinical innovations consistently throughout all 21 medical practice units led to development of the interdisciplinary Clinical Innovation Implementation Team, which was formed by using existing resources from various departments across the region. REPLICATION MODEL: Based on a model of transfer of best practices, the implementation team developed a process and tools (master schedule and activity matrix) to quickly replicate successful pilot projects throughout all medical practice units. The process involved the following steps: identifying a practice and delineating its characteristics and measures (source identification); identifying a team to receive the (new) practice; piloting the practice; and standardizing, including the incorporation of learnings. The model includes the following components for each innovation: sending and receiving teams, an innovation coordinator role, an innovation expert role, a location expert role, a master schedule, and a project activity matrix. Communication depended on a partnership among the location experts (local knowledge and credibility), the innovation coordinator (process expertise), and the innovation experts (content expertise). Results after 12 months of working with the 21 medical practice units include integration of diabetes care team services into the practices, training of more than 120 providers in the use of personal computers and an icon-based clinical information system, and integration of a planwide self-care program into the medical practices--all with measurable improved outcomes. The model for sequential replication and the implementation team structure and function should be successful in other organizational settings.

  8. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).

  9. Discovery of resources using MADM approaches for parallel and distributed computing

    Directory of Open Access Journals (Sweden)

    Mandeep Kaur

    2017-06-01

    Grid, a form of parallel and distributed computing, allows the sharing of data and computational resources among its users from various geographical locations. Grid resources are diverse in terms of their underlying attributes. The majority of state-of-the-art resource discovery techniques rely on static resource attributes during resource selection. However, resources matched on static attributes may not be the most appropriate for the execution of user applications, because they may have heavy job loads, less storage space or less working memory (RAM). Hence, there is a need to consider the current state of the resources in order to find the most suitable ones. In this paper, we propose a two-phased multi-attribute decision making (MADM) approach for discovery of grid resources using P2P formalism. The proposed approach considers multiple resource attributes in the decision making for resource selection and provides the most suitable resource(s) to grid users. The first phase describes a mechanism to discover all matching resources and applies the SAW method to shortlist the top-ranked resources, which are communicated to the requesting super-peer. The second phase applies an integrated MADM approach (AHP-enriched PROMETHEE-II) to the list of selected resources received from different super-peers. A pairwise comparison of the resources with respect to their attributes is made and the rank of each resource is determined. The top-ranked resource is then communicated to the grid user by the grid scheduler. Our proposed methodology enables the grid scheduler to allocate the most suitable resource to the user application and also reduces the search complexity by filtering out less suitable resources during resource discovery.
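
    The abstract names SAW (Simple Additive Weighting) for the first phase; a minimal SAW ranking, with invented attributes where free memory is a benefit criterion and load a cost criterion, can be sketched as follows.

      def saw_rank(resources, weights, benefit):
          """Normalize each attribute, then score resources by weighted sum."""
          cols = {a: [r[a] for r in resources] for a in weights}
          def norm(a, x):
              return x / max(cols[a]) if benefit[a] else min(cols[a]) / x
          scored = [(sum(w * norm(a, r[a]) for a, w in weights.items()), r["name"])
                    for r in resources]
          return sorted(scored, reverse=True)

      resources = [
          {"name": "grid-node-1", "free_ram_gb": 16, "load": 0.7},
          {"name": "grid-node-2", "free_ram_gb": 8,  "load": 0.2},
      ]
      weights = {"free_ram_gb": 0.5, "load": 0.5}
      benefit = {"free_ram_gb": True, "load": False}   # load: lower is better
      print(saw_rank(resources, weights, benefit))
      # grid-node-2 ranks first: its light load outweighs the smaller memory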

  10. Optimised resource construction for verifiable quantum computation

    International Nuclear Information System (INIS)

    Kashefi, Elham; Wallden, Petros

    2017-01-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. (paper)

  11. Impact of changing computer technology on hydrologic and water resource modeling

    OpenAIRE

    Loucks, D.P.; Fedra, K.

    1987-01-01

    The increasing availability of substantial computer power at relatively low costs and the increasing ease of using computer graphics, of communicating with other computers and data bases, and of programming using high-level problem-oriented computer languages, is providing new opportunities and challenges for those developing and using hydrologic and water resources models. This paper reviews some of the progress made towards the development and application of computer support systems designe...

  12. ACToR - Aggregated Computational Toxicology Resource

    International Nuclear Information System (INIS)

    Judson, Richard; Richard, Ann; Dix, David; Houck, Keith; Elloumi, Fathi; Martin, Matthew; Cathey, Tommy; Transue, Thomas R.; Spencer, Richard; Wolf, Maritja

    2008-01-01

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast™

  13. Optorsim: A Grid Simulator for Studying Dynamic Data Replication Strategies

    CERN Document Server

    Bell, William H; Millar, A Paul; Capozza, Luigi; Stockinger, Kurt; Zini, Floriano

    2003-01-01

    Computational grids process large, computationally intensive problems on small data sets. In contrast, data grids process large computational problems that in turn require evaluating, mining and producing large amounts of data. Replication, creating geographically disparate identical copies of data, is regarded as one of the major optimization techniques for reducing data access costs. In this paper, several replication algorithms are discussed. These algorithms were studied using the Grid simulator: OptorSim. OptorSim provides a modular framework within which optimization strategies can be studied under different Grid configurations. The goal is to explore the stability and transient behaviour of selected optimization techniques. We detail the design and implementation of OptorSim and analyze various replication algorithms based on different Grid workloads.

  14. Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony Optimization Algorithm

    OpenAIRE

    Lingna He; Qingshui Li; Linan Zhu

    2012-01-01

    Cloud computing is a new computing paradigm intended to replace traditional Internet software usage patterns and enterprise management modes, and resource scheduling strategy is a key technology in cloud computing. Based on a study of the cloud computing system structure and mode of operation, this paper focuses on the job scheduling and resource allocation problem in cloud computing, applying the ant colony optimization algorithm, with a detailed analysis and design of the...

  15. Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research.

    Science.gov (United States)

    Erdemir, Ahmet; Hunter, Peter J; Holzapfel, Gerhard A; Loew, Leslie M; Middleton, John; Jacobs, Christopher R; Nithiarasu, Perumal; Löhner, Rainlad; Wei, Guowei; Winkelstein, Beth A; Barocas, Victor H; Guilak, Farshid; Ku, Joy P; Hicks, Jennifer L; Delp, Scott L; Sacks, Michael; Weiss, Jeffrey A; Ateshian, Gerard A; Maas, Steve A; McCulloch, Andrew D; Peng, Grace C Y

    2018-02-01

    The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models, related data, and simulation software was first a utilitarian interest; now it is a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains also motivates sharing of modeling resources, as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as for enhancing their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate...

  16. Replication of urban innovations - prioritization of strategies for the replication of Dhaka's community-based decentralized composting model.

    Science.gov (United States)

    Yedla, Sudhakar

    2012-01-01

    Dhaka's community-based decentralized composting (DCDC) is a successful demonstration of solid waste management that adopts low-cost technology, local resources, community participation and partnerships among the various actors involved. This paper attempts to understand the model and the necessary conditions, strategies and priorities for replicating DCDC in other developing cities of Asia. Thirteen strategies required for its replication are identified and assessed based on various criteria, namely transferability, longevity, economic viability, adaptation and overall replication. Priority setting by multi-criteria analysis using the analytic hierarchy process revealed that immediate transfer without considering longevity and economic viability is not advisable, as this would result in unsustainable replication of DCDC. Based on the analysis, measures to ensure product quality control; partnership among stakeholders (public-private-community); strategies to achieve better involvement of the private sector in solid waste management (an entrepreneurial approach); simple and low-cost technology; and strategies to provide an effective interface among the complementing sectors are identified as important for its replication.
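
    The analytic hierarchy process step mentioned above reduces, at its core, to extracting priority weights from a pairwise comparison matrix as its principal eigenvector. A minimal sketch, using a made-up 3x3 matrix rather than the paper's thirteen strategies:

```python
# AHP priorities: principal eigenvector of a pairwise comparison matrix.
# The matrix below is an invented example, not data from the paper.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # e.g. transferability vs. longevity vs. viability
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                       # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                      # normalized priority weights

CI = (eigvals.real[k] - len(A)) / (len(A) - 1)    # consistency index
print("weights:", w.round(3), "CI:", round(CI, 3))
```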

  17. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication of an ABM representing fraudulent behavior in a public service delivery system, originally developed in the Java-based MASON toolkit and replicated in NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  18. Computer-aided resource planning and scheduling for radiological services

    Science.gov (United States)

    Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.

    1996-05-01

    There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study is conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information system efficiency and human intelligence to improve radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.

  19. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    Science.gov (United States)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is twofold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16-week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise, students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent t-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in...

  20. Can the Teachers' Creativity Overcome Limited Computer Resources?

    Science.gov (United States)

    Nikolov, Rumen; Sendova, Evgenia

    1988-01-01

    Describes experiences of the Research Group on Education (RGE) at the Bulgarian Academy of Sciences and the Ministry of Education in using limited computer resources when teaching informatics. Topics discussed include group projects; the use of Logo; ability grouping; and out-of-class activities, including publishing a pupils' magazine. (13…

  1. Integration of Openstack cloud resources in BES III computing cluster

    Science.gov (United States)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for the data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and resource usage is static. In order to make the system simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  2. On ad valorem taxation of nonrenewable resource production

    International Nuclear Information System (INIS)

    Rowse, John

    1997-01-01

    Taxing a nonrenewable resource typically shifts production through time, compresses the economically recoverable resource base and shrinks social welfare. But by how much? In this paper a computational model of natural gas use, representing numerous demand and supply features believed important for shaping efficient intertemporal allocations, is utilized to answer this question under different ad valorem royalty taxes on wellhead production. Proportionate social welfare losses from fixed royalties up to 30% are found to be small, and the excess burden stands at less than 6.5% for a 30% royalty. This result replicates the findings of several earlier studies and points to a general conclusion.

  3. Recent development of computational resources for new antibiotics discovery

    DEFF Research Database (Denmark)

    Kim, Hyun Uk; Blin, Kai; Lee, Sang Yup

    2017-01-01

    Understanding a complex working mechanism of biosynthetic gene clusters (BGCs) encoding secondary metabolites is a key to discovery of new antibiotics. Computational resources continue to be developed in order to better process increasing volumes of genome and chemistry data, and thereby better...

  4. Mobile Cloud Computing: Resource Discovery, Session Connectivity and Other Open Issues

    NARCIS (Netherlands)

    Schüring, Markus; Karagiannis, Georgios

    2011-01-01

    Cloud computing can be considered as a model that provides network access to a shared pool of resources, such as storage and computing power, which can be rapidly provisioned and released with minimal management effort. This paper describes a research activity in the area of mobile cloud...

  5. Computational resources for ribosome profiling: from database to Web server and software.

    Science.gov (United States)

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits not only from the power of ribosome profiling but also from an extensive range of computational resources available for ribosome profiling. At present, however, a comprehensive review of these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand.

  6. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center are presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computing and personnel resources.

  7. Wide area data replication in an ITER-relevant data environment

    International Nuclear Information System (INIS)

    Centioli, C.; Iannone, F.; Panella, M.; Vitale, V.; Bracco, G.; Guadagni, R.; Migliori, S.; Steffe, M.; Eccher, S.; Maslennikov, A.; Mililotti, M.; Molowny, M.; Palumbo, G.; Carboni, M.

    2005-01-01

    The next generation of tokamak experiments will require a new way of approaching data sharing issues among fusion organizations. In the fusion community, many researchers at different worldwide sites will analyse data produced by the International Thermonuclear Experimental Reactor (ITER), wherever it is built. In this context, efficient availability of the data at the sites where the computational resources are located becomes a major architectural issue for the deployment of the ITER computational infrastructure. The approach described in this paper goes beyond the usual site-centric model, mainly devoted to granting access exclusively to experimental data stored at the device sites. To this aim, we propose a new data replication architecture relying on a wide area network, based on a master/slave model and on synchronization techniques producing mirrored data sites. In this architecture, data replication covers large databases (TB) as well as large UNIX-like file systems, using open-source software components, namely MySQL as the database management system, and RSYNC and BBFTP for data transfer. A test-bed has been set up to evaluate the performance of the software components underlying the proposed architecture. The test-bed hardware layout deploys a cluster of four Dual-Xeon Supermicro servers, each with a 1 TB RAID array. A high-performance network line (1 Gbit over 400 km) provides the infrastructure to test the components on a wide area network. The results obtained will be thoroughly discussed.
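
    For concreteness, file-system mirroring of the kind described can be driven from a small script that shells out to rsync; the hosts and paths below are placeholders, and the flags shown (-a archive mode, -z wire compression, --delete to mirror removals) are standard rsync options rather than the project's actual configuration.

```python
# Minimal sketch of master-to-mirrors file replication via rsync.
import subprocess

MIRRORS = [
    ("/data/shots/", "replica1.example.org:/data/shots/"),
    ("/data/shots/", "replica2.example.org:/data/shots/"),
]

def sync(src, dst):
    # -a preserve attributes, -z compress on the wire, --delete mirror removals
    cmd = ["rsync", "-az", "--delete", src, dst]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"sync to {dst} failed: {result.stderr.strip()}")

for src, dst in MIRRORS:
    sync(src, dst)
```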

  8. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute’s computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  9. Function Package for Computing Quantum Resource Measures

    Science.gov (United States)

    Huang, Zhiming

    2018-05-01

    In this paper, we present a function package for calculating quantum resource measures and the dynamics of open systems. Our package includes common operators and operator lists, and frequently-used functions for computing quantum entanglement, quantum correlation, quantum coherence, quantum Fisher information and dynamics in noisy environments. We briefly explain the functions of the package and illustrate how to use it with several typical examples. We expect this package to be a useful tool for future research and education.
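
    As an example of such a measure, the l1-norm of coherence is simply the sum of the absolute values of the off-diagonal elements of a density matrix; a minimal sketch follows (the function name is ours, not the package's).

```python
# l1-norm of coherence: sum of |off-diagonal| entries of a density matrix.
import numpy as np

def l1_coherence(rho):
    rho = np.asarray(rho, dtype=complex)
    return np.abs(rho).sum() - np.trace(np.abs(rho)).real

# Maximally coherent qubit state |+><+| has coherence 1; a classical mixture has 0.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])
print(l1_coherence(plus), l1_coherence(mixed))   # -> 1.0 0.0
```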

  10. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  11. Universal resources for approximate and stochastic measurement-based quantum computation

    International Nuclear Information System (INIS)

    Mora, Caterina E.; Piani, Marco; Miyake, Akimasa; Van den Nest, Maarten; Duer, Wolfgang; Briegel, Hans J.

    2010-01-01

    We investigate which quantum states can serve as universal resources for approximate and stochastic measurement-based quantum computation in the sense that any quantum state can be generated from a given resource by means of single-qubit (local) operations assisted by classical communication. More precisely, we consider the approximate and stochastic generation of states, resulting, for example, from a restriction to finite measurement settings or from possible imperfections in the resources or local operations. We show that entanglement-based criteria for universality obtained in M. Van den Nest et al. [New J. Phys. 9, 204 (2007)] for the exact, deterministic case can be lifted to the much more general approximate, stochastic case. This allows us to move from the idealized situation (exact, deterministic universality) considered in previous works to the practically relevant context of nonperfect state preparation. We find that any entanglement measure fulfilling some basic requirements needs to reach its maximum value on some element of an approximate, stochastic universal family of resource states, as the resource size grows. This allows us to rule out various families of states as being approximate, stochastic universal. We prove that approximate, stochastic universality is in general a weaker requirement than deterministic, exact universality and provide resources that are efficient approximate universal, but not exact deterministic universal. We also study the robustness of universal resources for measurement-based quantum computation under realistic assumptions about the (imperfect) generation and manipulation of entangled states, giving an explicit expression for the impact that errors made in the preparation of the resource have on the possibility to use it for universal approximate and stochastic state preparation. Finally, we discuss the relation between our entanglement-based criteria and recent results regarding the uselessness of states with a high...

  12. Spacetime Replication of Quantum Information Using (2 , 3) Quantum Secret Sharing and Teleportation

    Science.gov (United States)

    Wu, Yadong; Khalid, Abdullah; Davijani, Masoud; Sanders, Barry

    The aim of this work is to construct a protocol to replicate quantum information in any valid configuration of causal diamonds and to assess the resources required to physically realize spacetime replication. We present a set of codes to replicate quantum information along with a scheme to realize these codes using continuous-variable quantum optics. We use our proposed experimental realizations to determine upper bounds on the quantum and classical resources required to simulate spacetime replication. For four causal diamonds, our implementation scheme is more efficient than the one proposed previously. Our codes are designed using a decomposition algorithm for complete directed graphs, (2, 3) quantum secret sharing, quantum teleportation and entanglement swapping. These results show that the simulation of spacetime replication of quantum information is feasible with existing experimental methods.

  13. Data Service: Distributed Data Capture and Replication

    Science.gov (United States)

    Warner, P. B.; Pietrowicz, S. R.

    2007-10-01

    Data Service is a critical component of the NOAO Data Management and Science Support (DMaSS) Solutions Platform, which is based on a service-oriented architecture, and is to replace the current NOAO Data Transport System. Its responsibilities include capturing data from NOAO and partner telescopes and instruments and replicating the data across multiple (currently six) storage sites. Java 5 was chosen as the implementation language, and Java EE as the underlying enterprise framework. Application metadata persistence is performed using EJB and Hibernate on the JBoss Application Server, with PostgreSQL as the persistence back-end. Although potentially any underlying mass storage system may be used as the Data Service file persistence technology, DTS deployments and Data Service test deployments currently use the Storage Resource Broker from SDSC. This paper presents an overview and high-level design of the Data Service, including aspects of deployment, e.g., for the LSST Data Challenge at the NCSA computing facilities.

  14. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  15. Intrinsically bent DNA in replication origins and gene promoters.

    Science.gov (United States)

    Gimenes, F; Takeda, K I; Fiorini, A; Gouveia, F S; Fernandez, M A

    2008-06-24

    Intrinsically bent DNA is an alternative conformation of the DNA molecule caused by the presence of dA/dT tracts, 2 to 6 bp long, in phase with the helical turn of the DNA, i.e., repeated at intervals of 10 to 11 bp. Besides conferring flexibility, intrinsic bending sites induce DNA curvature in particular chromosome regions such as replication origins and promoters. Intrinsically bent DNA sites are important in initiating DNA replication and are sometimes found near regions associated with the nuclear matrix. Many methods have been developed to localize bent sites, for example, circular permutation, computational analysis, and atomic force microscopy. This review discusses intrinsically bent DNA sites associated with replication origins and gene promoter regions in prokaryote and eukaryote cells. We also describe methods for identifying bent DNA sites by circular permutation and computational analysis.
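
    A toy version of the computational analysis is easy to sketch: scan a sequence for A/T tracts of 2 to 6 bp and report runs of tracts whose starts recur in phase with the 10-11 bp helical repeat. This illustrates the idea only; it is not one of the published algorithms.

```python
# Toy scanner for candidate intrinsic bend sites (phased A/T tracts).
import re

def bend_candidates(seq, min_tracts=3):
    tracts = [m.start() for m in re.finditer(r"(?:A{2,6}|T{2,6})", seq.upper())]
    hits = []
    for i, start in enumerate(tracts):
        run, pos = [start], start
        for nxt in tracts[i + 1:]:
            if 10 <= nxt - pos <= 11:      # next tract one helical turn away
                run.append(nxt)
                pos = nxt
        if len(run) >= min_tracts:         # enough in-phase tracts to bend
            hits.append(run)
    return hits

print(bend_candidates("GCAAAAGCGCGCAAAAGCGCGCAAAAGC"))   # -> [[2, 12, 22]]
```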

  16. Extremal dynamics in random replicator ecosystems

    Energy Technology Data Exchange (ETDEWEB)

    Kärenlampi, Petri P., E-mail: petri.karenlampi@uef.fi

    2015-10-02

    The seminal numerical experiment by Bak and Sneppen (BS) is repeated, along with computations with replicator models that include a greater number of features. Both types of models self-organize and obey power-law scaling for the size distribution of activity cycles. However, species extinction within the replicator models interferes with the BS self-organized critical (SOC) activity. Speciation–extinction dynamics ruins any stationary state which might contain a steady size distribution of activity cycles. The BS-type activity appears as a dissimilar phenomenon in comparison to speciation–extinction dynamics in the replicator system. No criticality is found in the speciation–extinction dynamics. Neither are speciations and extinctions in real biological macroevolution known to contain any diverging distributions, or self-organization towards any critical state. Consequently, biological macroevolution probably is not a self-organized critical phenomenon. - Highlights: • Extremal dynamics organizes random replicator ecosystems into two phases in fitness space. • Replicator systems show power-law scaling of activity. • Species extinction interferes with Bak–Sneppen type mutation activity. • Speciation–extinction dynamics does not show any critical phase transition. • Biological macroevolution probably is not a self-organized critical phenomenon.
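
    The BS experiment itself is compact enough to restate: N species on a ring carry random fitness values, and at each step the least-fit species and its two neighbours are assigned fresh random fitnesses. A minimal reproduction (parameters are illustrative):

```python
# Bak-Sneppen extremal dynamics on a ring of n species.
import random

def bak_sneppen(n=100, steps=100_000, seed=1):
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    thresholds = []
    for _ in range(steps):
        i = min(range(n), key=fitness.__getitem__)   # extremal (least fit) site
        thresholds.append(fitness[i])
        for j in (i - 1, i, (i + 1) % n):            # mutate site and neighbours
            fitness[j] = rng.random()
    return fitness, thresholds

fitness, thresholds = bak_sneppen()
# After self-organization, fitness values accumulate above a critical
# threshold (about 0.667 for the one-dimensional model).
print(sum(f > 0.667 for f in fitness) / len(fitness))
```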

  17. The Trope Tank: A Laboratory with Material Resources for Creative Computing

    Directory of Open Access Journals (Sweden)

    Nick Montfort

    2014-12-01

    Full Text Available http://dx.doi.org/10.5007/1807-9288.2014v10n2p53 Principles for organizing and making use of a laboratory with material computing resources are articulated. This laboratory, the Trope Tank, is a facility for teaching, research, and creative collaboration and offers hardware (in working condition and set up for use) from the 1970s, 1980s, and 1990s, including videogame systems, home computers, and an arcade cabinet. To aid in investigating the material history of texts, the lab has a small 19th century letterpress, a typewriter, a print terminal, and dot-matrix printers. Other resources include controllers, peripherals, manuals, books, and software on physical media. These resources are used for teaching, loaned for local exhibitions and presentations, and accessed by researchers and artists. The space is primarily a laboratory (rather than a library, studio, or museum), so materials are organized by platform and intended use. Textual information about the historical contexts of the available systems is provided, and resources are set up to allow easy operation, and even casual use, by researchers, teachers, students, and artists.

  18. NMRbox: A Resource for Biomolecular NMR Computation.

    Science.gov (United States)

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds, of software packages. Discovery, acquisition, installation, and maintenance of all these packages are a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users.

  19. A novel resource management method of providing operating system as a service for mobile transparent computing.

    Science.gov (United States)

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing that extends PC transparent computing to mobile terminals. Since the resources involved contain different kinds of operating systems and user data stored on a remote server, how to manage these network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  20. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    Directory of Open Access Journals (Sweden)

    Yonghua Xiong

    2014-01-01

    Full Text Available This paper presents a framework for mobile transparent computing that extends PC transparent computing to mobile terminals. Since the resources involved contain different kinds of operating systems and user data stored on a remote server, how to manage these network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user’s requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  1. Measuring the impact of computer resource quality on the software development process and product

    Science.gov (United States)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process were speculated to have a measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data were extracted and examined from nearly 50 software development projects, all related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product, as exemplified by the subject NASA data, was examined. Based upon the results, a number of computer resource-related implications are provided.

  2. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  3. Regional research exploitation of the LHC a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

    A simulation study to evaluate the required computing resources for research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yardstick. Other input parameters were: assumptions for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

  4. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Full Text Available Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in their capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in their network resources, and (3) the high network latency to a centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work, we propose an architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm, based on the decision rules of a linearized decision tree, that considers three conditions (service size, completion time, and VM capacity) for managing and delegating user requests in order to balance the workload. Moreover, we propose an algorithm to allocate resources to meet service level agreement (SLA) and quality of service (QoS) requirements, as well as optimizing big data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance the workload, improve resource allocation, optimize big data distribution, and outperform existing methods.
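
    A hedged sketch of the three-condition delegation rule described above (service size, completion time, VM capacity) is given below; the thresholds and field names are invented for illustration, not taken from the paper.

```python
# Toy fog-vs-cloud delegation based on three conditions.
from dataclasses import dataclass

@dataclass
class Request:
    size_mb: float        # service size
    deadline_ms: float    # required completion time

def delegate(req, fog_free_vms, size_limit_mb=50.0, latency_cutoff_ms=100.0):
    """Return 'fog' or 'cloud' for a user request."""
    if fog_free_vms == 0:
        return "cloud"                      # no fog VM capacity left
    if req.deadline_ms < latency_cutoff_ms:
        return "fog"                        # delay-sensitive: keep at the edge
    if req.size_mb > size_limit_mb:
        return "cloud"                      # big jobs go to the data centre
    return "fog"

print(delegate(Request(size_mb=5, deadline_ms=40), fog_free_vms=3))   # -> fog
```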

  5. The Interstellar Ethics of Self-Replicating Probes

    Science.gov (United States)

    Cooper, K.

    Robotic spacecraft have been our primary means of exploring the Universe for over 50 years. Should interstellar travel become a reality, it seems unlikely that humankind will stop using robotic probes. These probes will be able to replicate themselves ad infinitum by extracting raw materials from the space resources around them and reconfiguring them into replicas of themselves, using technology such as 3D printing. This will create a colonising wave of probes across the Galaxy. However, such probes could have negative as well as positive consequences, and it is incumbent upon us to factor self-replicating probes into our interstellar philosophies and to take responsibility for their actions.

  6. A Gossip-Based Optimistic Replication for Efficient Delay-Sensitive Streaming Using an Interactive Middleware Support System

    Science.gov (United States)

    Mavromoustakis, Constandinos X.; Karatza, Helen D.

    2010-06-01

    While sharing resources among multiple clients, efficiency is substantially degraded by the scarce availability of the requested resources. This scarcity is often aggravated by factors such as temporal constraints on availability or node flooding by requested replicated file chunks. Replicated file chunks should therefore be efficiently disseminated in order to make resources available on demand to mobile users. This work considers a cross-layer middleware support system for efficient delay-sensitive streaming that exploits each device's connectivity and social interactions. Collaborative streaming is achieved through an epidemic file-chunk replication policy that follows the state transitions of an infectious-disease model with susceptible, infected, recovered and dead states. The gossip-based stateful model determines whether a mobile node should host a file chunk and, when a chunk is no longer needed, purges it. The proposed model is thoroughly evaluated through experimental simulation, measuring the effective throughput Eff as a function of the packet loss parameter and contrasting it with the effectiveness of the gossip-based replication policy.
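
    The epidemic dissemination of a single chunk can be sketched as a toy simulation over random gossip contacts using the susceptible/infected/recovered/dead states named above; the probabilities and contact model are ours, not the paper's.

```python
# Toy SIRD-style gossip dissemination of one replicated file chunk.
import random

def gossip_chunk(n=200, steps=400, p_infect=0.6, p_recover=0.02, p_die=0.005, seed=7):
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"                       # the seeding node holds the chunk
    for _ in range(steps):
        a, b = rng.randrange(n), rng.randrange(n)
        if state[a] == "I" and state[b] == "S" and rng.random() < p_infect:
            state[b] = "I"               # chunk replicated to a susceptible peer
        for i in (a, b):
            if state[i] == "I":
                r = rng.random()
                if r < p_die:
                    state[i] = "D"       # node leaves; replica lost
                elif r < p_die + p_recover:
                    state[i] = "R"       # chunk purged; node will not re-host
    return {s: state.count(s) for s in "SIRD"}

print(gossip_chunk())
```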

  7. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  8. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  9. Reliable self-replicating machines in asynchronous cellular automata.

    Science.gov (United States)

    Lee, Jia; Adachi, Susumu; Peper, Ferdinand

    2007-01-01

    We propose a self-replicating machine that is embedded in a two-dimensional asynchronous cellular automaton with von Neumann neighborhood. The machine dynamically encodes its shape into description signals, and despite the randomness of cell updating, it is able to successfully construct copies of itself according to the description signals. Self-replication on asynchronously updated cellular automata may find application in nanocomputers, where reconfigurability is an essential property, since it allows avoidance of defective parts and simplifies programming of such computers.
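
    The asynchronous-update setting is easy to sketch generically: one randomly chosen cell fires at a time, reading its von Neumann neighbourhood. The transition rule below is a placeholder (simple majority vote), not the paper's self-replication rule set.

```python
# Generic asynchronous cellular automaton with a von Neumann neighbourhood.
import random

def step(grid, rule, rng):
    h, w = len(grid), len(grid[0])
    y, x = rng.randrange(h), rng.randrange(w)          # random update order
    nbrs = (grid[(y - 1) % h][x], grid[y][(x + 1) % w],
            grid[(y + 1) % h][x], grid[y][(x - 1) % w])
    grid[y][x] = rule(grid[y][x], nbrs)

def majority(center, nbrs):
    # Placeholder rule: copy the most common neighbour state.
    return max(set(nbrs), key=nbrs.count)

rng = random.Random(0)
grid = [[rng.randint(0, 1) for _ in range(16)] for _ in range(16)]
for _ in range(10_000):
    step(grid, majority, rng)
```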

  10. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    Science.gov (United States)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States; based on Internet data centers, it provides a standard and open approach to network resource sharing. With the rapid development of higher education in China, the educational resources provided by colleges and universities fell far short of actual teaching needs. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education resources in higher education. Based on a cloud computing environment, this paper analyses the existing problems in the sharing of digital educational resources among independent colleges in Jiangxi Province. Drawing on the mass storage, efficient operation and low cost that characterize cloud computing, the author explores the design of a sharing model for the digital educational resources of independent colleges. Finally, the model design was put into practical application.

  11. An integrated system for land resources supervision based on the IoT and cloud computing

    Science.gov (United States)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  12. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

    The goal of this project was to build an application to calculate the computing resources needed by the LHCb experiment for data processing and analysis, and to predict their evolution in future years. The source code was developed in the Python programming language, and the application was built and is hosted in CERN GitLab. This application will facilitate the calculation of the resources required by LHCb in both qualitative and quantitative terms. The granularity of the computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.

  13. An interactive computer approach to performing resource analysis for a multi-resource/multi-project problem. [Spacelab inventory procurement planning

    Science.gov (United States)

    Schlagheck, R. A.

    1977-01-01

    New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost-effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.

  14. Testing a computer-based ostomy care training resource for staff nurses.

    Science.gov (United States)

    Bales, Isabel

    2010-05-01

    Fragmented teaching and ostomy care provided by nonspecialized clinicians unfamiliar with state-of-the-art care and products have been identified as problems in teaching ostomy care to the new ostomate. After conducting a literature review of theories and concepts related to the impact of nurse behaviors and confidence on ostomy care, the author developed a computer-based learning resource and assessed its effect on staff nurse confidence. Of 189 staff nurses with a minimum of 1 year of acute-care experience employed in the acute care, emergency, and rehabilitation departments of an acute care facility in the Midwestern US, 103 agreed to participate and returned completed pre- and post-tests, each comprising the same eight statements about providing ostomy care. F and P values were computed for differences between pre- and post-test scores. Based on a scale where 1 = totally disagree and 5 = totally agree with the statement, baseline confidence and perceived knowledge scores averaged 3.8, and after viewing the resource program post-test mean scores averaged 4.51, a statistically significant improvement (P < 0.001). The largest difference between pre- and post-test scores involved feeling confident in having the resources to learn ostomy skills independently. The availability of an electronic ostomy care resource was rated highly in both pre- and post-testing. Studies to assess the effects of increased confidence and knowledge on the quality and provision of care are warranted.
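
    The pre/post comparison reported here is a dependent (paired) t-test; a minimal reproduction on made-up confidence scores:

```python
# Paired t-test on pre/post scores (sample data are invented).
from scipy import stats

pre  = [3.9, 3.7, 3.8, 4.0, 3.6, 3.9, 3.7, 3.8]
post = [4.6, 4.4, 4.5, 4.7, 4.3, 4.6, 4.5, 4.4]

t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")   # significant improvement if p < 0.05
```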

  15. Distributional Replication

    OpenAIRE

    Beare, Brendan K.

    2009-01-01

    Suppose that X and Y are random variables. We define a replicating function to be a function f such that f(X) and Y have the same distribution. In general, the set of replicating functions for a given pair of random variables may be infinite. Suppose we have some objective function, or cost function, defined over the set of replicating functions, and we seek to estimate the replicating function with the lowest cost. We develop an approach to estimating the cheapest replicating function that i...
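
    One classical construction of a replicating function, when the distribution function F_X is continuous, is f = F_Y^{-1} ∘ F_X: F_X(X) is uniform on (0, 1), and F_Y^{-1} applied to a uniform variable is distributed as Y. An empirical sketch via quantile mapping (the sample data are made up):

```python
# Empirical replicating function: map through F_X, then back through F_Y^{-1}.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 10_000)        # draws of X
y = rng.exponential(2, 10_000)      # draws of Y

def replicate(v, x_sample, y_sample):
    # Empirical F_X at v, then the empirical quantile of Y at that level.
    u = (np.searchsorted(np.sort(x_sample), v) + 0.5) / (len(x_sample) + 1)
    return np.quantile(y_sample, u)

fx = replicate(x, x, y)             # f(X) should be distributed like Y
print(np.mean(fx), np.mean(y))      # both close to 2
```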

  16. Chromatin Immunoprecipitation of Replication Factors Moving with the Replication Fork

    OpenAIRE

    Rapp, Jordan B.; Ansbach, Alison B.; Noguchi, Chiaki; Noguchi, Eishi

    2009-01-01

    Replication of chromosomes involves a variety of replication proteins including DNA polymerases, DNA helicases, and other accessory factors. Many of these proteins are known to localize at replication forks and travel with them as components of the replisome complex. Other proteins do not move with replication forks but still play an essential role in DNA replication. Therefore, in order to understand the mechanisms of DNA replication and its controls, it is important to examine localization ...

  17. Blockchain-Empowered Fair Computational Resource Sharing System in the D2D Network

    Directory of Open Access Journals (Sweden)

    Zhen Hong

    2017-11-01

    Full Text Available Device-to-device (D2D) communication is becoming an increasingly important technology in future networks with the climbing demand for local services. For instance, resource sharing in the D2D network features ubiquitous availability, flexibility, low latency and low cost. However, these features also bring along challenges when building a satisfactory resource sharing system in the D2D network. Specifically, user mobility is one of the top concerns when designing a cooperative D2D computational resource sharing system, since mutual communication may not be stably available due to user mobility. A previous endeavour has demonstrated how connectivity can be incorporated into cooperative task scheduling among users in the D2D network to effectively lower the average task execution time. There are doubts, however, about whether this type of task scheduling scheme, though effective, is fair to all users. In other words, it can be unfair to users who contribute many computational resources while receiving little when in need. In this paper, we propose a novel blockchain-based credit system that can be incorporated into the connectivity-aware task scheduling scheme to enforce fairness among users in the D2D network. Users’ computational task cooperation is recorded on the system’s public blockchain ledger as transactions, and each user’s credit balance is easily accessible from the ledger. A supernode at the base station is responsible for scheduling cooperative computational tasks based on user mobility and user credit balance. We investigated the performance of the credit system, and simulation results showed that, with a minor sacrifice in average task execution time, fairness can be greatly enhanced.
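
    A minimal sketch of the credit bookkeeping: cooperation records are chained by hashes (a toy stand-in for the public ledger), and balances are recovered by replaying the chain. The field names and single-chain simplification are ours, not the paper's design.

```python
# Toy hash-chained credit ledger for D2D task cooperation.
import hashlib, json

chain = []

def add_record(helper, requester, credits):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"helper": helper, "requester": requester,
            "credits": credits, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def balances():
    bal = {}
    for rec in chain:   # helper earns credits, requester spends them
        bal[rec["helper"]] = bal.get(rec["helper"], 0) + rec["credits"]
        bal[rec["requester"]] = bal.get(rec["requester"], 0) - rec["credits"]
    return bal

add_record("alice", "bob", 5)       # alice executed a task for bob
add_record("bob", "carol", 2)
print(balances())                    # {'alice': 5, 'bob': -3, 'carol': -2}
```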

  18. Replication Catastrophe

    DEFF Research Database (Denmark)

    Toledo, Luis; Neelsen, Kai John; Lukas, Jiri

    2017-01-01

    Proliferating cells rely on the so-called DNA replication checkpoint to ensure orderly completion of genome duplication, and its malfunction may lead to catastrophic genome disruption, including unscheduled firing of replication origins, stalling and collapse of replication forks, massive DNA breakage, and, ultimately, cell death. Despite many years of intensive research into the molecular underpinnings of the eukaryotic replication checkpoint, the mechanisms underlying the dismal consequences of its failure remain enigmatic. A recent development offers a unifying model in which the replication checkpoint guards against global exhaustion of rate-limiting replication regulators. Here we discuss how such a mechanism can prevent catastrophic genome disruption and suggest how to harness this knowledge to advance therapeutic strategies to eliminate cancer cells that inherently proliferate under...

  19. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

The general potential of computer games for teaching and learning is becoming widely recognized. In particular, within the application contexts of primary and lower secondary education, the relevance and value of computer games seem more accepted, and the possibility and willingness to incorporate computer games as a possible resource at the level of other educational resources seem more frequent. For some reason, however, applying computer games in processes of teaching and learning at the high school level seems an almost non-existent event. This paper reports on a study incorporating the learning game "Global Conflicts: Latin America" as a resource into the teaching and learning of a course involving the two subjects "English language learning" and "Social studies" in the final year of a Danish high school. The study adapts an explorative research design approach and investigates...

  20. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented, as well as some of the specific material developed for the projects.

  1. Database Replication Prototype

    OpenAIRE

    Vandewall, R.

    2000-01-01

This report describes the design of a Replication Framework that facilitates the implementation and comparison of database replication techniques. Furthermore, it discusses the implementation of a Database Replication Prototype and compares the performance measurements of two replication techniques based on the Atomic Broadcast communication primitive: pessimistic active replication and optimistic active replication. The main contributions of this report can be split into four parts...

  2. Negative quasi-probability as a resource for quantum computation

    International Nuclear Information System (INIS)

    Veitch, Victor; Ferrie, Christopher; Emerson, Joseph; Gross, David

    2012-01-01

    A central problem in quantum information is to determine the minimal physical resources that are required for quantum computational speed-up and, in particular, for fault-tolerant quantum computation. We establish a remarkable connection between the potential for quantum speed-up and the onset of negative values in a distinguished quasi-probability representation, a discrete analogue of the Wigner function for quantum systems of odd dimension. This connection allows us to resolve an open question on the existence of bound states for magic state distillation: we prove that there exist mixed states outside the convex hull of stabilizer states that cannot be distilled to non-stabilizer target states using stabilizer operations. We also provide an efficient simulation protocol for Clifford circuits that extends to a large class of mixed states, including bound universal states. (paper)
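For orientation, the discrete Wigner function referred to above is commonly defined, for odd prime dimension d, by the following standard construction (a sketch of the usual conventions, not necessarily the authors' exact notation):

$$
W_\rho(q,p) \;=\; \frac{1}{d}\,\operatorname{Tr}\!\big[\rho\,A_{(q,p)}\big],
\qquad
A_{(q,p)} \;=\; D_{(q,p)}\,A_0\,D_{(q,p)}^{\dagger},
\qquad
A_0 \;=\; \frac{1}{d}\sum_{(q',p')} D_{(q',p')},
$$

where the $D_{(q,p)}$ are the Heisenberg-Weyl displacement operators on $\mathbb{C}^d$. Negativity of $W_\rho$ at some phase-space point is the resource tied to quantum speed-up; states with $W_\rho \ge 0$ everywhere admit the efficient classical simulation of stabilizer circuits mentioned in the abstract.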

  3. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    Science.gov (United States)

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience (approaching subjective behavior as the result of mental computations instantiated in the brain) to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.
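The statistical reading of confidence can be made concrete with a toy computation under standard signal-detection assumptions (an illustration, not the authors' model):

```python
import math

def confidence(x, mu=1.0, sigma=1.0):
    """Posterior probability that hypothesis H+ (evidence mean +mu)
    rather than H- (mean -mu) generated observation x, under
    equal-variance Gaussian noise and a flat prior: 'confidence'
    read as P(hypothesis correct | evidence)."""
    log_lr = 2.0 * mu * x / sigma**2  # log likelihood ratio
    return 1.0 / (1.0 + math.exp(-log_lr))

# Strong evidence yields confidence near 1; ambiguous evidence near 0.5.
print(confidence(2.0))  # ~0.98
print(confidence(0.0))  # 0.5
```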

  4. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The obtained analytical results are related to a practical experiment, showing interesting and valuable results.
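As a rough illustration of the modelling ingredient named in the title, the sketch below fits a stretched-exponential (Kohlrausch) curve to synthetic data; the paper's modified model, real cache measurements and the Tsallis-entropy link are beyond this fragment:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, tau, beta):
    """Stretched exponential relaxation exp(-(t/tau)^beta), the kind of
    heavy-tailed decay associated with long-range dependence."""
    return np.exp(-(t / tau) ** beta)

# Hypothetical cache-behaviour decay measurements (illustrative only).
t = np.linspace(0.1, 50, 200)
rng = np.random.default_rng(0)
y = stretched_exp(t, tau=10.0, beta=0.6) + rng.normal(0, 0.01, t.size)

(tau_hat, beta_hat), _ = curve_fit(stretched_exp, t, y, p0=(1.0, 1.0))
print(f"tau ~ {tau_hat:.2f}, beta ~ {beta_hat:.2f}")  # beta < 1 => stretched
```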

  5. Modeling HIV-1 intracellular replication: two simulation approaches

    NARCIS (Netherlands)

    Zarrabi, N.; Mancini, E.; Tay, J.; Shahand, S.; Sloot, P.M.A.

    2010-01-01

    Many mathematical and computational models have been developed to investigate the complexity of HIV dynamics, immune response and drug therapy. However, there are not many models which consider the dynamics of virus intracellular replication at a single level. We propose a model of HIV intracellular

  6. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    Science.gov (United States)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  7. The Replication Recipe: What makes for a convincing replication?

    NARCIS (Netherlands)

    Brandt, M.J.; IJzerman, H.; Dijksterhuis, A.J.; Farach, F.J.; Geller, J.; Giner-Sorolla, R.; Grange, J.A.; Perugini, M.; Spies, J.R.; Veer, A. van 't

    2014-01-01

    Psychological scientists have recently started to reconsider the importance of close replications in building a cumulative knowledge base; however, there is no consensus about what constitutes a convincing close replication study. To facilitate convincing close replication attempts we have developed

  8. The replication recipe : What makes for a convincing replication?

    NARCIS (Netherlands)

    Brandt, M.J.; IJzerman, H.; Dijksterhuis, Ap; Farach, Frank J.; Geller, Jason; Giner-Sorolla, Roger; Grange, James A.; Perugini, Marco; Spies, Jeffrey R.; van 't Veer, Anna

    Psychological scientists have recently started to reconsider the importance of close replications in building a cumulative knowledge base; however, there is no consensus about what constitutes a convincing close replication study. To facilitate convincing close replication attempts we have developed

  9. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for the Japanese language, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing how important it is to learn collocational relations in a foreign language, we examine their coverage in various learners' resources for the Japanese language. We particularly concentrate on a few collocations at the beginner's level, where we demonstrate their treatment across various resources. Special attention is paid to what are referred to as unpredictable collocations, which carry a greater foreign-language learning burden than predictable ones.

  10. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
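The cost-benefit trade-off described above can be illustrated with a toy experiment: a simplified Parzen-window PNN (omitting the report's nonlinear calibration optimizer) whose goodness-of-fit and compute time are measured as the calibration sample grows, showing diminishing returns. All data and parameters below are illustrative.

```python
import time
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Minimal probabilistic neural network (Parzen-window classifier):
    each class votes with a sum of Gaussian kernels centred on its
    calibration points."""
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma ** 2))
        scores = {c: k[y_train == c].sum() for c in np.unique(y_train)}
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

# Benefit (accuracy) vs. cost (compute time) as the sample grows.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_test, y_test = X[1500:], y[1500:]
for n in (50, 200, 800, 1500):
    t0 = time.perf_counter()
    acc = np.mean(pnn_predict(X[:n], y[:n], X_test) == y_test)
    print(f"n={n:5d}  accuracy={acc:.3f}  time={time.perf_counter()-t0:.2f}s")
```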

  11. Photonic entanglement as a resource in quantum computation and quantum communication

    OpenAIRE

    Prevedel, Robert; Aspelmeyer, Markus; Brukner, Caslav; Jennewein, Thomas; Zeilinger, Anton

    2008-01-01

    Entanglement is an essential resource in current experimental implementations for quantum information processing. We review a class of experiments exploiting photonic entanglement, ranging from one-way quantum computing over quantum communication complexity to long-distance quantum communication. We then propose a set of feasible experiments that will underline the advantages of photonic entanglement for quantum information processing.

  12. Genome-wide alterations of the DNA replication program during tumor progression

    Science.gov (United States)

    Arneodo, A.; Goldar, A.; Argoul, F.; Hyrien, O.; Audit, B.

    2016-08-01

Oncogenic stress is a major driving force in the early stages of cancer development. Recent experimental findings reveal that, in precancerous lesions and cancers, activated oncogenes may induce stalling and dissociation of DNA replication forks, resulting in DNA damage. Replication timing is emerging as an important epigenetic feature that recapitulates several genomic, epigenetic and functional specificities of even closely related cell types. There is increasing evidence that chromosome rearrangements, the hallmark of many cancer genomes, are intimately associated with the DNA replication program, and that epigenetic replication timing changes often precede chromosomal rearrangements. The recent development of a novel methodology to map replication fork polarity using deep sequencing of Okazaki fragments has provided new and complementary genome-wide replication profiling data. We review the results of a wavelet-based multi-scale analysis of genomic and epigenetic data, including replication profiles along human chromosomes. These results provide new insight into the spatio-temporal replication program and its dynamics during differentiation. Here our goal is to bring to cancer research the experimental protocols and computational methodologies for replication program profiling, as well as the modeling of the spatio-temporal replication program. To illustrate our purpose, we report very preliminary results obtained for chronic myelogenous leukemia, the archetypal model of cancer. Finally, we discuss promising perspectives on using genome-wide DNA replication profiling as a novel, efficient tool for cancer diagnosis, prognosis and personalized treatment.

  13. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo procedure, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from a peaked to a spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease in performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
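A bare-bones version of the Metropolis Monte Carlo exploration described above (hypothetical interfaces; the paper's cost function and network model are richer):

```python
import math
import random

def metropolis_assign(tasks, nodes, latency, T, steps=10_000, seed=0):
    """Metropolis Monte Carlo over task->node assignments: at high
    temperature T the assignment stays suboptimal; lowering T drives it
    toward the minimum-latency allocation. `latency(assignment)` is a
    user-supplied global cost function."""
    rng = random.Random(seed)
    assign = {t: rng.choice(nodes) for t in tasks}
    cost = latency(assign)
    for _ in range(steps):
        t = rng.choice(tasks)
        old = assign[t]
        assign[t] = rng.choice(nodes)
        new_cost = latency(assign)
        delta = new_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            cost = new_cost          # accept the move
        else:
            assign[t] = old          # reject, revert
    return assign, cost
```

Sweeping T and recording the resulting global latency is the kind of experiment that would expose the sharp performance drop above a critical temperature reported in the abstract.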

  14. Resource allocation for maximizing prediction accuracy and genetic gain of genomic selection in plant breeding: a simulation experiment.

    Science.gov (United States)

    Lorenz, Aaron J

    2013-03-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS using all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. Response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than of those excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation
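For a flavour of the calculation, the sketch below combines two standard formulas: entry-mean heritability under replication, and a Daetwyler-style closed-form accuracy approximation. Both are assumptions standing in for the paper's "previously derived formulas", which may differ.

```python
import math

def entry_mean_h2(h2_plot, reps):
    """Heritability on an entry-mean basis when each line is
    phenotyped in `reps` replications (plot-level heritability h2_plot)."""
    return h2_plot / (h2_plot + (1 - h2_plot) / reps)

def pred_accuracy(n_lines, h2, m_e):
    """Expected genomic prediction accuracy via the approximation
    r = sqrt(N*h2 / (N*h2 + Me)), where m_e is the effective number
    of independent chromosome segments (an assumed formula here)."""
    return math.sqrt(n_lines * h2 / (n_lines * h2 + m_e))

# Fixed budget of 400 plots: trade population size against replication.
budget, m_e, h2_plot = 400, 30, 0.3
for reps in (1, 2, 4):
    n = budget // reps
    h2 = entry_mean_h2(h2_plot, reps)
    print(f"reps={reps}  N={n:3d}  h2_entry={h2:.2f}  "
          f"accuracy={pred_accuracy(n, h2, m_e):.3f}")
```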

  15. Surgical positioning of orthodontic mini-implants with guides fabricated on models replicated with cone-beam computed tomography.

    Science.gov (United States)

    Kim, Seong-Hun; Choi, Yong-Suk; Hwang, Eui-Hwan; Chung, Kyu-Rhim; Kook, Yoon-Ah; Nelson, Gerald

    2007-04-01

    This article illustrates a new surgical guide system that uses cone-beam computed tomography (CBCT) images to replicate dental models; surgical guides for the proper positioning of orthodontic mini-implants were fabricated on the replicas, and the guides were used for precise placement. The indications, efficacy, and possible complications of this method are discussed. Patients who were planning to have orthodontic mini-implant treatment were recruited for this study. A CBCT system (PSR 9000N, Asahi Roentgen, Kyoto, Japan) was used to acquire virtual slices of the posterior maxilla that were 0.1 to 0.15 mm thick. Color 3-dimensional rapid prototyping was used to differentiate teeth, alveolus, and maxillary sinus wall. A surgical guide for the mini-implant was fabricated on the replica model. Proper positioning for mini-implants on the posterior maxilla was determined by viewing the CBCT images. The surgical guide was placed on the clinical site, and it allowed precise pilot drilling and accurate placement of the mini-implant. CBCT imaging allows remarkably lower radiation doses and thinner acquisition slices compared with medical computed tomography. Virtually reproduced replica models enable precise planning for mini-implant positions in anatomically complex sites.

  16. Resource-constrained project scheduling: computing lower bounds by solving minimum cut problems

    NARCIS (Netherlands)

    Möhring, R.H.; Nesetril, J.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen

    1999-01-01

    We present a novel approach to compute Lagrangian lower bounds on the objective function value of a wide class of resource-constrained project scheduling problems. The basis is a polynomial-time algorithm to solve the following scheduling problem: Given a set of activities with start-time dependent

  17. A Safety Resource Allocation Mechanism against Connection Fault for Vehicular Cloud Computing

    Directory of Open Access Journals (Sweden)

    Tianpeng Ye

    2016-01-01

Full Text Available The Intelligent Transportation System (ITS) becomes an important component of the smart city toward safer roads, better traffic control, and on-demand service by utilizing and processing the information collected from sensors of vehicles and road side infrastructure. In ITS, Vehicular Cloud Computing (VCC) is a novel technology balancing the requirement of complex services and the limited capability of on-board computers. However, the behaviors of the vehicles in VCC are dynamic, random, and complex. Thus, one of the key safety issues is the frequent disconnection between the vehicle and the Vehicular Cloud (VC) while the vehicle is computing for a service. More importantly, connection faults seriously disturb the normal services of VCC and impair the safe operation of the transportation system. In this paper, a safety resource allocation mechanism is proposed against connection faults in VCC by using a modified workflow with prediction capability. We first propose a probability model for vehicle movement which satisfies the high dynamics and real-time requirements of VCC. We then propose a Prediction-based Reliability Maximization Algorithm (PRMA) to realize safety resource allocation for VCC. The evaluation shows that our mechanism can improve the reliability and guarantee the real-time performance of the VCC.

  18. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the "Cloud Bursting" of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of allocated resources can thus be elastically adjusted to cope with the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.

  19. Mathematical Analysis of Replication by Cash Flow Matching

    Directory of Open Access Journals (Sweden)

    Jan Natolski

    2017-02-01

Full Text Available The replicating portfolio approach is a well-established approach carried out by many life insurance companies within their Solvency II framework for the computation of risk capital. In this note, we elaborate on one specific formulation of a replicating portfolio problem. In contrast to the two most popular replication approaches, it does not yield an analytic solution (if, at all, a solution exists and is unique). Further, although convex, the objective function seems to be non-smooth, and hence a numerical solution might be much more demanding than for the two most popular formulations. Especially for the second reason, this formulation did not (yet) receive much attention in practical applications, in contrast to the other two formulations. In the following, we will demonstrate that the (potential) non-smoothness can be avoided due to an equivalent reformulation as a linear second order cone program (SOCP). This allows for a numerical solution by efficient second order methods like interior point methods or similar. We also show that, under weak assumptions, existence and uniqueness of the optimal solution can be guaranteed. We additionally prove that, under a further similarly weak condition, the fair value of the replicating portfolio equals the fair value of liabilities. Based on these insights, we argue that this unloved stepmother child within the replication problem family indeed represents an equally good formulation for practical purposes.
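The reformulation trick involved is the standard epigraph construction (notation mine, not necessarily the paper's): a non-smooth cash-flow-matching objective becomes a smooth problem with a conic constraint,

$$
\min_{x \in \mathbb{R}^n} \; \big\| A x - c \big\|_2
\quad\Longleftrightarrow\quad
\min_{x,\,t} \; t \quad \text{s.t.} \quad \big\| A x - c \big\|_2 \le t,
$$

where the columns of $A$ hold the cash flows of the candidate instruments across scenarios and time buckets, and $c$ holds the liability cash flows. The right-hand problem is a linear SOCP, directly solvable by interior point methods.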

  20. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  1. Open Educational Resources: The Role of OCW, Blogs and Videos in Computer Networks Classroom

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2012-09-01

Full Text Available This paper analyzes the learning experiences and opinions obtained from a group of undergraduate students in their interaction with several on-line multimedia resources included in a free on-line course about Computer Networks. The educational resources employed are based on the Web 2.0 approach, such as blogs, videos and virtual labs, which have been added to a website for distance self-learning.

  2. Recent advances in the genome-wide study of DNA replication origins in yeast

    Directory of Open Access Journals (Sweden)

    Chong ePeng

    2015-02-01

Full Text Available DNA replication, one of the central events in the cell cycle, is the basis of biological inheritance. In order to be duplicated, a DNA double helix must be opened at defined sites, which are called DNA replication origins (ORIs). Unlike in bacteria, where replication initiates from a single replication origin, multiple origins are utilized in the eukaryotic genome. Among them, the ORIs in the budding yeast Saccharomyces cerevisiae and the fission yeast Schizosaccharomyces pombe have been best characterized. In recent years, advances in DNA microarray and next-generation sequencing technologies have dramatically increased the number of yeast species involved in ORI research. The ORIs in some nonconventional yeast species such as Kluyveromyces lactis and Pichia pastoris have also been identified genome-wide. Relevant databases of replication origins in yeast have been constructed, enabling comparative genomic analysis. Here, we review several experimental approaches that have been used to map replication origins in yeast and some of the available web resources related to yeast ORIs. We also discuss the sequence characteristics and chromosome structures of ORIs in the four yeast species, which can be utilized to improve replication origin prediction.

  3. Recent advances in the genome-wide study of DNA replication origins in yeast

    Science.gov (United States)

    Peng, Chong; Luo, Hao; Zhang, Xi; Gao, Feng

    2015-01-01

DNA replication, one of the central events in the cell cycle, is the basis of biological inheritance. In order to be duplicated, a DNA double helix must be opened at defined sites, which are called DNA replication origins (ORIs). Unlike in bacteria, where replication initiates from a single replication origin, multiple origins are utilized in the eukaryotic genomes. Among them, the ORIs in the budding yeast Saccharomyces cerevisiae and the fission yeast Schizosaccharomyces pombe have been best characterized. In recent years, advances in DNA microarray and next-generation sequencing technologies have dramatically increased the number of yeast species involved in ORI research. The ORIs in some non-conventional yeast species such as Kluyveromyces lactis and Pichia pastoris have also been identified genome-wide. Relevant databases of replication origins in yeast have been constructed, enabling comparative genomic analysis. Here, we review several experimental approaches that have been used to map replication origins in yeast and some of the available web resources related to yeast ORIs. We also discuss the sequence characteristics and chromosome structures of ORIs in the four yeast species, which can be utilized to improve yeast replication origin prediction. PMID:25745419

  4. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  5. Replicative Intermediates of Human Papillomavirus Type 11 in Laryngeal Papillomas: Site of Replication Initiation and Direction of Replication

    Science.gov (United States)

    Auborn, K. J.; Little, R. D.; Platt, T. H. K.; Vaccariello, M. A.; Schildkraut, C. L.

    1994-07-01

We have examined the structures of replication intermediates from the human papillomavirus type 11 genome in DNA extracted from papilloma lesions (laryngeal papillomas). The sites of replication initiation and termination utilized in vivo were mapped by using neutral/neutral and neutral/alkaline two-dimensional agarose gel electrophoresis methods. Initiation of replication was detected in or very close to the upstream regulatory region (URR; the noncoding, regulatory sequences upstream of the open reading frames in the papillomavirus genome). We also show that replication forks proceed bidirectionally from the origin and converge 180° opposite the URR. These results demonstrate the feasibility of analysis of replication of viral genomes directly from infected tissue.

  6. DNA replication and cancer: From dysfunctional replication origin activities to therapeutic opportunities.

    Science.gov (United States)

    Boyer, Anne-Sophie; Walter, David; Sørensen, Claus Storgaard

    2016-06-01

    A dividing cell has to duplicate its DNA precisely once during the cell cycle to preserve genome integrity avoiding the accumulation of genetic aberrations that promote diseases such as cancer. A large number of endogenous impacts can challenge DNA replication and cells harbor a battery of pathways to promote genome integrity during DNA replication. This includes suppressing new replication origin firing, stabilization of replicating forks, and the safe restart of forks to prevent any loss of genetic information. Here, we describe mechanisms by which oncogenes can interfere with DNA replication thereby causing DNA replication stress and genome instability. Further, we describe cellular and systemic responses to these insults with a focus on DNA replication restart pathways. Finally, we discuss the therapeutic potential of exploiting intrinsic replicative stress in cancer cells for targeted therapy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  8. Nonequilibrium Phase Transitions Associated with DNA Replication

    Science.gov (United States)

    2011-02-11

DNA polymerases catalyze the growth of a DNA primer strand (the nascent chain of nucleotides complementary to the template strand) based on Watson-Crick base pairing. Within the model, the error rate, i.e., the fraction of incorporated monomers that are not the correct Watson-Crick complement of the corresponding template base, can be obtained analytically. Hyung-June Woo and Anders Wallqvist, Biotechnology High Performance Computing ...

  9. Prediction Interval: What to Expect When You're Expecting … A Replication.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Spence

    Full Text Available A challenge when interpreting replications is determining whether the results of a replication "successfully" replicate the original study. Looking for consistency between two studies is challenging because individual studies are susceptible to many sources of error that can cause study results to deviate from each other and the population effect in unpredictable directions and magnitudes. In the current paper, we derive methods to compute a prediction interval, a range of results that can be expected in a replication due to chance (i.e., sampling error, for means and commonly used indexes of effect size: correlations and d-values. The prediction interval is calculable based on objective study characteristics (i.e., effect size of the original study and sample sizes of the original study and planned replication even when sample sizes across studies are unequal. The prediction interval provides an a priori method for assessing if the difference between an original and replication result is consistent with what can be expected due to sample error alone. We provide open-source software tools that allow researchers, reviewers, replicators, and editors to easily calculate prediction intervals.

  10. Applications of the pipeline environment for visual informatics and genomics computations

    Directory of Open Access Journals (Sweden)

    Genco Alex

    2011-07-01

The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators: experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community.

  11. Repair replication in replicating and nonreplicating DNA after irradiation with uv light

    Energy Technology Data Exchange (ETDEWEB)

    Slor, H.; Cleaver, J.E.

    1978-06-01

    Ultraviolet light induces more pyrimidine dimers and more repair replication in DNA that replicates within 2 to 3 h of irradiation than in DNA that does not replicate during this period. This difference may be due to special conformational changes in DNA and chromatin that might be associated with semiconservative DNA replication.

  12. Resource allocation in grid computing

    NARCIS (Netherlands)

    Koole, Ger; Righter, Rhonda

    2007-01-01

    Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of

  13. A study of an adaptive replication framework for orchestrated composite web services.

    Science.gov (United States)

    Mohamed, Marwa F; Elyamany, Hany F; Nassar, Hamed M

    2013-01-01

Replication is considered one of the most important techniques for improving the Quality of Service (QoS) of published Web Services. It has achieved impressive success in managing resource sharing and usage in order to moderate the energy consumed in IT environments. For a robust and successful replication process, attention should be paid to the timing of replication as well as to the constraints and capabilities of the environment in which the process runs. The replication process is time-consuming, since outsourcing new replicas to other hosts is lengthy. Furthermore, nowadays, most of the business processes that might be implemented over the Web are composed of multiple Web services working together in two main styles: Orchestration and Choreography. Accomplishing replication over such business processes is another challenge due to the complexity and flexibility involved. In this paper, we present an adaptive replication framework for regular and orchestrated composite Web services. The suggested framework includes a number of components for detecting unexpected and undesirable events that might occur when consuming the original published Web services, including failure or overloading. It also includes a specific replication controller to manage the replication process and select the best host for a new replica. In addition, it includes a component for predicting the incoming load in order to decrease the time needed for outsourcing new replicas, greatly enhancing performance. A simulation environment has been created to measure the performance of the suggested framework. The results indicate that adaptive replication with the prediction scenario is the best option for enhancing the performance of the replication process in an online business environment.
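A minimal sketch of the prediction-driven part of such a framework (the names and the exponential-smoothing predictor are illustrative assumptions, not the paper's design):

```python
class ReplicationController:
    """Predict the incoming load and start provisioning a replica
    *before* the service overloads, since outsourcing a replica to
    another host is lengthy. Exponential smoothing stands in for
    whatever predictor the framework actually uses."""

    def __init__(self, capacity_per_replica, alpha=0.3, headroom=0.8):
        self.capacity = capacity_per_replica
        self.alpha = alpha          # smoothing factor
        self.headroom = headroom    # provision above this utilisation
        self.replicas = 1
        self.forecast = 0.0

    def observe(self, load):
        # Exponentially smoothed one-step-ahead load forecast.
        self.forecast = self.alpha * load + (1 - self.alpha) * self.forecast
        if self.forecast > self.headroom * self.capacity * self.replicas:
            self.replicas += 1      # trigger (slow) provisioning early
        return self.replicas

ctrl = ReplicationController(capacity_per_replica=100)
for load in (40, 60, 90, 130, 170, 160):
    print(load, "->", ctrl.observe(load), "replica(s)")
```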

  14. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.
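A toy instance of the recipe: choose the amount of computation (here, Monte Carlo samples) that optimally trades decision error against the cost of thinking. All quantities are illustrative, not drawn from the paper.

```python
import math

def resource_rational_samples(err_scale, cost_per_sample):
    """Pick the number of samples k minimising
    expected loss = err_scale / sqrt(k)   (error shrinks with samples)
                  + cost_per_sample * k   (computation has a price)."""
    best_k, best_loss = 1, float("inf")
    for k in range(1, 1000):
        loss = err_scale / math.sqrt(k) + cost_per_sample * k
        if loss < best_loss:
            best_k, best_loss = k, loss
    return best_k

# Cheap computation favours many samples; expensive computation very few.
print(resource_rational_samples(1.0, 0.001))  # ~63 samples
print(resource_rational_samples(1.0, 0.1))    # ~3 samples
```

The rational strategy at the intermediate level is thus not the error-free ideal but the best achievable given the architecture's costs, which is the bridge the authors propose between Marr's levels.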

  15. Human Parvovirus B19 Utilizes Cellular DNA Replication Machinery for Viral DNA Replication.

    Science.gov (United States)

    Zou, Wei; Wang, Zekun; Xiong, Min; Chen, Aaron Yun; Xu, Peng; Ganaie, Safder S; Badawi, Yomna; Kleiboeker, Steve; Nishimune, Hiroshi; Ye, Shui Qing; Qiu, Jianming

    2018-03-01

Human parvovirus B19 (B19V) infection of human erythroid progenitor cells (EPCs) induces a DNA damage response and cell cycle arrest at late S phase, which facilitates viral DNA replication. However, it is not clear exactly which cellular factors are employed by this single-stranded DNA virus. Here, we used microarrays to systematically analyze the dynamic transcriptome of EPCs infected with B19V. We found that DNA metabolism, DNA replication, DNA repair, DNA damage response, cell cycle, and cell cycle arrest pathways were significantly regulated after B19V infection. Confocal microscopy analyses revealed that most cellular DNA replication proteins were recruited to the centers of viral DNA replication, but not the DNA repair DNA polymerases. By knocking down their expression in EPCs, we show that DNA polymerase δ and polymerase α are responsible for B19V DNA replication. We further showed that although RPA32 is essential for B19V DNA replication and the phosphorylated forms of RPA32 colocalized with the replicating viral genomes, RPA32 phosphorylation was not necessary for B19V DNA replication. Thus, this report provides evidence that B19V uses the cellular DNA replication machinery for viral DNA replication. IMPORTANCE Human parvovirus B19 (B19V) infection can cause transient aplastic crisis, persistent viremia, and pure red cell aplasia. In fetuses, B19V infection can result in nonimmune hydrops fetalis and fetal death. These clinical manifestations of B19V infection are a direct outcome of the death of human erythroid progenitors that host B19V replication. B19V infection induces a DNA damage response that is important for cell cycle arrest at late S phase. Here, we analyzed dynamic changes in cellular gene expression and found that DNA metabolic processes are tightly regulated during B19V infection. Although genes involved in cellular DNA replication were downregulated overall, the cellular DNA replication machinery was tightly

  16. MOF Suppresses Replication Stress and Contributes to Resolution of Stalled Replication Forks.

    Science.gov (United States)

    Singh, Dharmendra Kumar; Pandita, Raj K; Singh, Mayank; Chakraborty, Sharmistha; Hambarde, Shashank; Ramnarain, Deepti; Charaka, Vijaya; Ahmed, Kazi Mokim; Hunt, Clayton R; Pandita, Tej K

    2018-03-15

    The human MOF (hMOF) protein belongs to the MYST family of histone acetyltransferases and plays a critical role in transcription and the DNA damage response. MOF is essential for cell proliferation; however, its role during replication and replicative stress is unknown. Here we demonstrate that cells depleted of MOF and under replicative stress induced by cisplatin, hydroxyurea, or camptothecin have reduced survival, a higher frequency of S-phase-specific chromosome damage, and increased R-loop formation. MOF depletion decreased replication fork speed and, when combined with replicative stress, also increased stalled replication forks as well as new origin firing. MOF interacted with PCNA, a key coordinator of replication and repair machinery at replication forks, and affected its ubiquitination and recruitment to the DNA damage site. Depletion of MOF, therefore, compromised the DNA damage repair response as evidenced by decreased Mre11, RPA70, Rad51, and PCNA focus formation, reduced DNA end resection, and decreased CHK1 phosphorylation in cells after exposure to hydroxyurea or cisplatin. These results support the argument that MOF plays an important role in suppressing replication stress induced by genotoxic agents at several stages during the DNA damage response. Copyright © 2018 American Society for Microbiology.

  17. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

"A History of the Virtual Synchrony Replication Model," in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.).

  18. Replication of Holograms with Corn Syrup by Rubbing

    Directory of Open Access Journals (Sweden)

    Arturo Olivares-Pérez

    2012-08-01

    Full Text Available Corn syrup films are used to replicate holograms in order to fabricate micro-structural patterns without the toxins commonly found in photosensitive salts and dyes. We use amplitude and relief masks with lithographic techniques and rubbing techniques in order to transfer holographic information to corn syrup material. Holographic diffraction patterns from holographic gratings and computer Fourier holograms fabricated with corn syrup are shown. We measured the diffraction efficiency parameter in order to characterize the film. The versatility of this material for storage information is promising. Holographic gratings achieved a diffraction efficiency of around 8.4% with an amplitude mask and 36% for a relief mask technique. Preliminary results using corn syrup as an emulsion for replicating holograms are also shown in this work.

  19. Replication of Holograms with Corn Syrup by Rubbing

    Science.gov (United States)

    Mejias-Brizuela, Nildia Y.; Olivares-Pérez, Arturo; Ortiz-Gutiérrez, Mauricio

    2012-01-01

    Corn syrup films are used to replicate holograms in order to fabricate micro-structural patterns without the toxins commonly found in photosensitive salts and dyes. We use amplitude and relief masks with lithographic techniques and rubbing techniques in order to transfer holographic information to corn syrup material. Holographic diffraction patterns from holographic gratings and computer Fourier holograms fabricated with corn syrup are shown. We measured the diffraction efficiency parameter in order to characterize the film. The versatility of this material for storage information is promising. Holographic gratings achieved a diffraction efficiency of around 8.4% with an amplitude mask and 36% for a relief mask technique. Preliminary results using corn syrup as an emulsion for replicating holograms are also shown in this work.

  20. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  1. Contract on using computer resources of another

    Directory of Open Access Journals (Sweden)

    Cvetković Mihajlo

    2016-01-01

Full Text Available Contractual relations involving the use of another's property are quite common. Yet, the use of computer resources of others over the Internet and the legal transactions arising thereof certainly diverge from the traditional framework embodied in the special part of contract law dealing with this issue. Modern performance concepts (such as infrastructure, software or platform delivered as high-tech services) are highly unlikely to be described by terminology derived from Roman law. The overwhelming novelty of high-tech services obscures the disadvantageous position of the contracting parties. In most cases, service providers are global multinational companies which tend to secure their own unjustified privileges and gains by providing lengthy and intricate contracts, often comprising a number of legal documents. General terms and conditions in these service provision contracts are further complicated by the 'service level agreement', rules of conduct and (non)confidentiality guarantees. Without giving the issue a second thought, users easily accept the pre-fabricated offer without reservations, unaware that such a pseudo-gratuitous contract actually conceals a highly lucrative and mutually binding agreement. The author examines the extent to which the legal provisions governing the sale of goods and services, lease, loan and commodatum may apply to 'cloud computing' contracts, and analyses the scope and advantages of contractual consumer protection, as a relatively new area in contract law. The termination of a service contract between the provider and the user features specific post-contractual obligations which are inherent to an online environment.

  2. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...

  3. Big Data in Cloud Computing: A Resource Management Perspective

    Directory of Open Access Journals (Sweden)

    Saeed Ullah

    2018-01-01

Full Text Available The modern-day advancement of technology is increasingly digitizing our lives, which has led to a rapid growth of data. Such multidimensional datasets are precious due to the potential of unearthing new knowledge and developing decision-making insights from them. Analyzing this huge amount of data from multiple sources can help organizations to plan for the future and anticipate changing market trends and customer requirements. While the Hadoop framework is a popular platform for processing large datasets, there are a number of other computing infrastructures available for use in various application domains. The primary focus of the study is how to classify major big data resource management systems in the context of the cloud computing environment. We identify some key features which characterize big data frameworks, as well as their associated challenges and issues. We use various evaluation metrics from different aspects to identify usage scenarios of these platforms. The study came up with some interesting findings which contradict the available literature on the Internet.

  4. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    Science.gov (United States)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm which converges all base station computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRHs). A precondition for centralized processing in the BBU pool is an interconnection fronthaul network with high capacity and low delay. However, the interaction between RRH and BBU, and resource scheduling among BBUs in the cloud, have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  5. Mobile devices and computing cloud resources allocation for interactive applications

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2017-06-01

Full Text Available Using mobile devices such as smartphones or iPads for various interactive applications is currently very common. In the case of complex applications, e.g. chess games, the capabilities of these devices are insufficient to run the application in real time. One of the solutions is to use cloud computing. However, this raises an optimization problem of allocating mobile device and cloud resources. An iterative heuristic algorithm for application distribution is proposed. The algorithm minimizes the energy cost of application execution under a constrained execution time.
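The flavour of such a heuristic can be sketched as follows (the task model, numbers and greedy repair rule are illustrative assumptions, not the paper's algorithm): start from the cheapest-energy placement of each task, then iteratively offload tasks to the cloud until the deadline is met.

```python
def allocate(tasks, deadline):
    """Place each task on the mobile device or in the cloud so total
    energy is minimised while total execution time stays within
    `deadline`. Each task is (local_time, local_energy, cloud_time,
    transmit_energy)."""
    # Start with the cheapest-energy placement for every task.
    placement = ["local" if e_loc <= e_tx else "cloud"
                 for (_, e_loc, _, e_tx) in tasks]

    def totals():
        time = energy = 0.0
        for (t_loc, e_loc, t_cld, e_tx), p in zip(tasks, placement):
            time += t_loc if p == "local" else t_cld
            energy += e_loc if p == "local" else e_tx
        return time, energy

    # Repair deadline violations greedily: offload the task saving the
    # most time per unit of extra energy.
    time, _ = totals()
    while time > deadline:
        best, best_ratio = None, 0.0
        for i, ((t_loc, e_loc, t_cld, e_tx), p) in enumerate(zip(tasks, placement)):
            if p == "local" and t_cld < t_loc:
                ratio = (t_loc - t_cld) / max(e_tx - e_loc, 1e-9)
                if ratio > best_ratio:
                    best, best_ratio = i, ratio
        if best is None:
            break  # deadline infeasible with these tasks
        placement[best] = "cloud"
        time, _ = totals()
    return placement, totals()
```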

  6. X-irradiation affects all DNA replication intermediates when inhibiting replication initiation

    International Nuclear Information System (INIS)

    Loenn, U.; Karolinska Hospital, Stockholm

    1982-01-01

When a human melanoma line was irradiated with 10 Gy, there was, after 30 to 60 min, a gradual reduction in the DNA replication rate. Ten to twelve hours after the irradiation, the DNA replication had returned to a near-normal rate. The results showed that low dose-rate X-irradiation preferentially inhibits the formation of small DNA replication intermediates. There is no difference between the inhibition of those replication intermediates formed only in the irradiated cells and those formed also in untreated cells. (U.K.)

  7. Chromatin Structure and Replication Origins: Determinants Of Chromosome Replication And Nuclear Organization

    Science.gov (United States)

    Smith, Owen K.; Aladjem, Mirit I.

    2014-01-01

The DNA replication program is, in part, determined by the epigenetic landscape that governs local chromosome architecture and directs chromosome duplication. Replication must coordinate with other biochemical processes occurring concomitantly on chromatin, such as transcription and remodeling, to ensure accurate duplication of both genetic and epigenetic features and to preserve genomic stability. The importance of genome architecture and chromatin looping in coordinating cellular processes on chromatin is illustrated by two recent sets of discoveries. First, chromatin-associated proteins that are not part of the core replication machinery were shown to affect the timing of DNA replication. These chromatin-associated proteins could be working in concert, or perhaps in competition, with the transcriptional machinery and with chromatin modifiers to determine the spatial and temporal organization of replication initiation events. Second, epigenetic interactions are mediated by DNA sequences that determine chromosomal replication. In this review we summarize recent findings and current models linking spatial and temporal regulation of the replication program with epigenetic signaling. We discuss these issues in the context of the genome's three-dimensional structure with an emphasis on events occurring during the initiation of DNA replication. PMID:24905010

  8. Prelife catalysts and replicators

    OpenAIRE

    Ohtsuki, Hisashi; Nowak, Martin A.

    2009-01-01

    Life is based on replication and evolution. But replication cannot be taken for granted. We must ask what there was prior to replication and evolution. How does evolution begin? We have proposed prelife as a generative system that produces information and diversity in the absence of replication. We model prelife as a binary soup of active monomers that form random polymers. ‘Prevolutionary’ dynamics can have mutation and selection prior to replication. Some sequences might have catalytic acti...

  9. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    Science.gov (United States)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.
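
    As an illustration of the workflow described above, the sketch below uses the hs_restclient Python package (a client for the HydroShare REST API) to publish a hypothetical archive of SUMMA inputs as a HydroShare resource; the call names follow that package's documented interface, but verify against the current API before relying on this.

      # Sketch: publishing model inputs to HydroShare for reproducible modeling.
      # Assumes the hs_restclient package; filenames and credentials are made up.
      from hs_restclient import HydroShare, HydroShareAuthBasic

      auth = HydroShareAuthBasic(username="user", password="secret")
      hs = HydroShare(auth=auth)

      # Create a resource holding a (hypothetical) SUMMA model input archive.
      resource_id = hs.createResource(
          "CompositeResource",
          "SUMMA test case: example watershed",
          resource_file="summa_inputs.zip",
          keywords=("SUMMA", "reproducibility"),
          abstract="Inputs, forcing data and configuration for a SUMMA run.",
      )
      print("created resource", resource_id)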

  10. Data from Investigating Variation in Replicability: A “Many Labs” Replication Project

    Directory of Open Access Journals (Sweden)

    Richard A. Klein

    2014-04-01

    Full Text Available This dataset is from the Many Labs Replication Project in which 13 effects were replicated across 36 samples and over 6,000 participants. Data from the replications are included, along with demographic variables about the participants and contextual information about the environment in which the replication was conducted. Data were collected in-lab and online through a standardized procedure administered via an online link. The dataset is stored on the Open Science Framework website. These data could be used to further investigate the results of the included 13 effects or to study replication and generalizability more broadly.

  11. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present results on the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  12. Grid computing the European Data Grid Project

    CERN Document Server

    Segal, B; Gagliardi, F; Carminati, F

    2000-01-01

    The goal of this project is the development of a novel environment to support globally distributed scientific exploration involving multi- PetaByte datasets. The project will devise and develop middleware solutions and testbeds capable of scaling to handle many PetaBytes of distributed data, tens of thousands of resources (processors, disks, etc.), and thousands of simultaneous users. The scale of the problem and the distribution of the resources and user community preclude straightforward replication of the data at different sites, while the aim of providing a general purpose application environment precludes distributing the data using static policies. We will construct this environment by combining and extending newly emerging "Grid" technologies to manage large distributed datasets in addition to computational elements. A consequence of this project will be the emergence of fundamental new modes of scientific exploration, as access to fundamental scientific data is no longer constrained to the producer of...

  13. Computer Processing 10-20-30. Teacher's Manual. Senior High School Teacher Resource Manual.

    Science.gov (United States)

    Fisher, Mel; Lautt, Ray

    Designed to help teachers meet the program objectives for the computer processing curriculum for senior high schools in the province of Alberta, Canada, this resource manual includes the following sections: (1) program objectives; (2) a flowchart of curriculum modules; (3) suggestions for short- and long-range planning; (4) sample lesson plans;…

  14. Hierarchical Data Replication and Service Monitoring Methods in a Scientific Data Grid

    Directory of Open Access Journals (Sweden)

    Weizhong Lu

    2009-04-01

    Full Text Available In a grid and distributed computing environment, data replication is an effective way to improve data accessibility and data accessing efficiency. It is also significant in developing a real-time service monitoring system for the Chinese Scientific Data Grid to guarantee system stability and data availability. Hierarchical data replication and service monitoring methods are proposed in this paper. The hierarchical data replication method divides the network into different domains and replicates data in local domains. The nodes in a local domain are classified into hierarchies to improve data accessibility according to bandwidth and storage memory space. An extensible agent-based prototype of a hierarchical service monitoring system is presented. The status information of services in the Chinese Scientific Data Grid is collected from the grid nodes based on agent technology and then is transformed into real-time operational pictures for management needs. This paper presents frameworks of the hierarchical data replication and service monitoring methods and gives detailed solutions. Simulation analyses demonstrate improved data accessing efficiency and verify the effectiveness of the methods.
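
    A minimal sketch of the placement rule implied by the method: prefer hosts in the requester's own domain, then rank candidates by bandwidth and free storage. The node structures and weights are illustrative assumptions, not the paper's algorithm.

      # Sketch of hierarchical replica placement: replicate within the local
      # domain, ranking nodes by bandwidth and free storage (illustrative only).
      from dataclasses import dataclass

      @dataclass
      class Node:
          name: str
          domain: str
          bandwidth_mbps: float
          free_storage_gb: float

      def place_replica(nodes, requester_domain, file_size_gb,
                        w_bw=0.7, w_store=0.3):
          local = [n for n in nodes
                   if n.domain == requester_domain
                   and n.free_storage_gb >= file_size_gb]
          # Fall back to any domain only if no local node can host the file.
          candidates = local or [n for n in nodes
                                 if n.free_storage_gb >= file_size_gb]
          # Higher hierarchy score = better replica host.
          return max(candidates,
                     key=lambda n: w_bw * n.bandwidth_mbps
                                 + w_store * n.free_storage_gb)

      nodes = [Node("cn-bj-1", "beijing", 1000, 50),
               Node("cn-bj-2", "beijing", 100, 500),
               Node("cn-sh-1", "shanghai", 1000, 500)]
      print(place_replica(nodes, "beijing", 20).name)  # -> cn-bj-1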

  15. Functions of Ubiquitin and SUMO in DNA Replication and Replication Stress

    Science.gov (United States)

    García-Rodríguez, Néstor; Wong, Ronald P.; Ulrich, Helle D.

    2016-01-01

    Complete and faithful duplication of its entire genetic material is one of the essential prerequisites for a proliferating cell to maintain genome stability. Yet, during replication DNA is particularly vulnerable to insults. On the one hand, lesions in replicating DNA frequently cause a stalling of the replication machinery, as most DNA polymerases cannot cope with defective templates. This situation is aggravated by the fact that strand separation in preparation for DNA synthesis prevents common repair mechanisms relying on strand complementarity, such as base and nucleotide excision repair, from working properly. On the other hand, the replication process itself subjects the DNA to a series of hazardous transformations, ranging from the exposure of single-stranded DNA to topological contortions and the generation of nicks and fragments, which all bear the risk of inducing genomic instability. Dealing with these problems requires rapid and flexible responses, for which posttranslational protein modifications that act independently of protein synthesis are particularly well suited. Hence, it is not surprising that members of the ubiquitin family, particularly ubiquitin itself and SUMO, feature prominently in controlling many of the defensive and restorative measures involved in the protection of DNA during replication. In this review we will discuss the contributions of ubiquitin and SUMO to genome maintenance specifically as they relate to DNA replication. We will consider cases where the modifiers act during regular, i.e., unperturbed stages of replication, such as initiation, fork progression, and termination, but also give an account of their functions in dealing with lesions, replication stalling and fork collapse. PMID:27242895

  16. Sustainable computational science

    DEFF Research Database (Denmark)

    Rougier, Nicolas; Hinsen, Konrad; Alexandre, Frédéric

    2017-01-01

    Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research...... workflows, in particular in peer-reviews. Existing journals have been slow to adapt: source codes are rarely requested, hardly ever actually executed to check that they produce the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages...... the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from other traditional scientific journals. ReScience...

  17. NACSA Charter School Replication Guide: The Spectrum of Replication Options. Authorizing Matters. Replication Brief 1

    Science.gov (United States)

    O'Neill, Paul

    2010-01-01

    One of the most important and high-profile issues in public education reform today is the replication of successful public charter school programs. With more than 5,000 failing public schools in the United States, there is a tremendous need for strong alternatives for parents and students. Replicating successful charter school models is an…

  18. SYSTEMATIC LITERATURE REVIEW ON RESOURCE ALLOCATION AND RESOURCE SCHEDULING IN CLOUD COMPUTING

    OpenAIRE

    B. Muni Lavanya; C. Shoba Bindu

    2016-01-01

    The objective of this work is to highlight the key features of, and to suggest promising future directions for, research on Resource Allocation, Resource Scheduling and Resource Management from 2009 to 2016, exemplifying how research on these topics has progressively increased over the past decade by inspecting articles and papers from scientific and standard publications. The survey materialized in a three-fold process. Firstly, investigate on t...

  19. Social learning and the replication process: an experimental investigation.

    Science.gov (United States)

    Derex, Maxime; Feron, Romain; Godelle, Bernard; Raymond, Michel

    2015-06-07

    Human cultural traits typically result from a gradual process that has been described as analogous to biological evolution. This observation has led pioneering scholars to draw inspiration from population genetics to develop a rigorous and successful theoretical framework of cultural evolution. Social learning, the mechanism allowing information to be transmitted between individuals, has thus been described as a simple replication mechanism. Although useful, the extent to which this idealization appropriately describes the actual social learning events has not been carefully assessed. Here, we used a specifically developed computer task to evaluate (i) the extent to which social learning leads to the replication of an observed behaviour and (ii) the consequences it has for fitness landscape exploration. Our results show that social learning does not lead to a dichotomous choice between disregarding and replicating social information. Rather, it appeared that individuals combine and transform information coming from multiple sources to produce new solutions. As a consequence, landscape exploration was promoted by the use of social information. These results invite us to rethink the way social learning is commonly modelled and could question the validity of predictions coming from models considering this process as replicative. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  20. DNA Replication in Engineered Escherichia coli Genomes with Extra Replication Origins.

    Science.gov (United States)

    Milbredt, Sarah; Farmani, Neda; Sobetzko, Patrick; Waldminghaus, Torsten

    2016-10-21

    The standard outline of bacterial genomes is a single circular chromosome with a single replication origin. From the bioengineering perspective, it appears attractive to extend this basic setup. Bacteria with split chromosomes or multiple replication origins have been successfully constructed in the last few years. The characteristics of these engineered strains will largely depend on the respective DNA replication patterns. However, the DNA replication has not been investigated systematically in engineered bacteria with multiple origins or split replicons. Here we fill this gap by studying a set of strains consisting of (i) E. coli strains with an extra copy of the native replication origin (oriC), (ii) E. coli strains with an extra copy of the replication origin from the secondary chromosome of Vibrio cholerae (oriII), and (iii) a strain in which the E. coli chromosome is split into two linear replicons. A combination of flow cytometry, microarray-based comparative genomic hybridization (CGH), and modeling revealed silencing of extra oriC copies and differential timing of ectopic oriII copies compared to the native oriC. The results were used to derive construction rules for future multiorigin and multireplicon projects.

  1. Hydroxyurea-Induced Replication Stress

    Directory of Open Access Journals (Sweden)

    Kenza Lahkim Bennani-Belhaj

    2010-01-01

    Full Text Available Bloom's syndrome (BS) displays one of the strongest known correlations between chromosomal instability and a high risk of cancer at an early age. BS cells combine a reduced average fork velocity with constitutive endogenous replication stress. However, the response of BS cells to replication stress induced by hydroxyurea (HU), which strongly slows the progression of replication forks, remains unclear due to the publication of conflicting results. Using two different cellular models of BS, we showed that BLM deficiency is not associated with sensitivity to HU, in terms of clonogenic survival, DSB generation, and SCE induction. We suggest that surviving BLM-deficient cells are selected on the basis of their ability to deal with an endogenous replication stress induced by replication fork slowing, resulting in insensitivity to HU-induced replication stress.

  2. Modeling of Groundwater Resources Heavy Metals Concentration Using Soft Computing Methods: Application of Different Types of Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Meysam Alizamir

    2017-09-01

    Full Text Available Nowadays, groundwater resources play a vital role as a source of drinking water in arid and semiarid regions, and forecasting of pollutant content in these resources is very important. Therefore, this study aimed to compare two soft computing methods for modeling Cd, Pb and Zn concentration in groundwater resources of Asadabad Plain, Western Iran. The relative accuracy of several soft computing models, namely multi-layer perceptron (MLP) and radial basis function (RBF), for forecasting of heavy metals concentration was investigated. In addition, the Levenberg-Marquardt, gradient descent and conjugate gradient training algorithms were utilized for the MLP models. The ANN models for this study were developed using the MATLAB R2014 software. The simulation results revealed that the MLP model performed better than the other models and was able to model heavy metals concentration in groundwater resources favorably; it can be effectively utilized in environmental applications and in water quality estimation. In addition, out of the three training algorithms, Levenberg-Marquardt performed best. This study proposed soft computing modeling techniques for the prediction and estimation of heavy metals concentration in groundwater resources of Asadabad Plain. Based on data collected from the plain, MLP and RBF models were developed for each heavy metal.
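
    For readers outside MATLAB, a hedged scikit-learn analogue of the study's MLP setup is sketched below on synthetic data; scikit-learn offers no Levenberg-Marquardt solver, so 'lbfgs' stands in, and the features and target are invented, not the Asadabad Plain measurements.

      # Sketch of an MLP regressor for heavy-metal concentration (synthetic data).
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))            # stand-ins for pH, EC, depth, ...
      y = 0.3 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(0, 0.05, 200)  # e.g. Cd

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                       max_iter=2000, random_state=0),
      )
      model.fit(X_tr, y_tr)
      print("R^2 on held-out data:", model.score(X_te, y_te))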

  3. Replicating animal mitochondrial DNA

    Directory of Open Access Journals (Sweden)

    Emily A. McKinney

    2013-01-01

    Full Text Available The field of mitochondrial DNA (mtDNA) replication has been experiencing incredible progress in recent years, and yet little is certain about the mechanism(s) used by animal cells to replicate this plasmid-like genome. The long-standing strand-displacement model of mammalian mtDNA replication (for which single-stranded DNA intermediates are a hallmark) has been intensively challenged by a new set of data, which suggests that replication proceeds via coupled leading- and lagging-strand synthesis (resembling bacterial genome replication) and/or via long stretches of RNA intermediates laid on the mtDNA lagging strand (the so-called RITOLS). The set of proteins required for mtDNA replication is small and includes the catalytic and accessory subunits of DNA polymerase γ, the mtDNA helicase Twinkle, the mitochondrial single-stranded DNA-binding protein, and the mitochondrial RNA polymerase (which most likely functions as the mtDNA primase). Mutations in the genes coding for the first three proteins are associated with human diseases and premature aging, justifying the research interest in the genetic, biochemical and structural properties of the mtDNA replication machinery. Here we summarize these properties and discuss the current models of mtDNA replication in animal cells.

  4. Integrating GRID tools to build a computing resource broker: activities of DataGrid WP1

    International Nuclear Information System (INIS)

    Anglano, C.; Barale, S.; Gaido, L.; Guarise, A.; Lusso, S.; Werbrouck, A.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on a knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). The authors describe the DataGrid approach in integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy
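
    A toy sketch of the broker's core decision, matchmaking in the spirit of Condor on necessarily stale state: filter resources that satisfy the job's requirements, then rank them, discounting sites whose status reports are old. The fields and staleness penalty are illustrative assumptions, not DataGrid WP1's actual schema.

      # Sketch of broker-style matchmaking on possibly stale resource state.
      import time

      resources = [
          {"site": "A", "free_cpus": 40, "queue": 12, "reported_at": time.time() - 30},
          {"site": "B", "free_cpus": 5,  "queue": 0,  "reported_at": time.time() - 600},
      ]
      job = {"min_cpus": 4}

      def rank(r, now):
          staleness = now - r["reported_at"]   # seconds since the last report
          # Prefer short queues; discount sites whose information is old.
          return -r["queue"] - 0.01 * staleness

      now = time.time()
      eligible = [r for r in resources if r["free_cpus"] >= job["min_cpus"]]
      best = max(eligible, key=lambda r: rank(r, now))
      print("submit to site", best["site"])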

  5. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
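
    The 0–1 multichoice multidimensional knapsack variant referenced above has a standard statement; in our own notation (not necessarily the paper's), with x_{ij} = 1 when task i is allocated under option j:

      \max \sum_{i=1}^{n} \sum_{j \in O_i} u_{ij}\, x_{ij}
      \qquad \text{subject to} \qquad
      \sum_{j \in O_i} x_{ij} \le 1 \quad \forall i, \qquad
      \sum_{i=1}^{n} \sum_{j \in O_i} r_{ij}^{k}\, x_{ij} \le c_k \quad \forall k, \qquad
      x_{ij} \in \{0, 1\},

    where O_i is the set of candidate allocations (options) for task i, u_{ij} is the task-option utility, r_{ij}^{k} is the option's demand on resource dimension k, and c_k is that dimension's capacity. Each task thus takes at most one option, and aggregate demand must fit every resource dimension.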

  6. Chromatin Controls DNA Replication Origin Selection, Lagging-Strand Synthesis, and Replication Fork Rates.

    Science.gov (United States)

    Kurat, Christoph F; Yeeles, Joseph T P; Patel, Harshil; Early, Anne; Diffley, John F X

    2017-01-05

    The integrity of eukaryotic genomes requires rapid and regulated chromatin replication. How this is accomplished is still poorly understood. Using purified yeast replication proteins and fully chromatinized templates, we have reconstituted this process in vitro. We show that chromatin enforces DNA replication origin specificity by preventing non-specific MCM helicase loading. Helicase activation occurs efficiently in the context of chromatin, but subsequent replisome progression requires the histone chaperone FACT (facilitates chromatin transcription). The FACT-associated Nhp6 protein, the nucleosome remodelers INO80 or ISW1A, and the lysine acetyltransferases Gcn5 and Esa1 each contribute separately to maximum DNA synthesis rates. Chromatin promotes the regular priming of lagging-strand DNA synthesis by facilitating DNA polymerase α function at replication forks. Finally, nucleosomes disrupted during replication are efficiently re-assembled into regular arrays on nascent DNA. Our work defines the minimum requirements for chromatin replication in vitro and shows how multiple chromatin factors might modulate replication fork rates in vivo. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Modeling inhomogeneous DNA replication kinetics.

    Directory of Open Access Journals (Sweden)

    Michel G Gauthier

    Full Text Available In eukaryotic organisms, DNA replication is initiated at a series of chromosomal locations called origins, where replication forks are assembled and proceed bidirectionally to replicate the genome. The distribution and firing rate of these origins, in conjunction with the velocity at which forks progress, dictate the program of the replication process. Previous attempts at modeling DNA replication in eukaryotes have focused on cases where the firing rate and the velocity of replication forks are homogeneous, or uniform, across the genome. However, it is now known that there are large variations in origin activity along the genome, and variations in fork velocities can also take place. Here, we generalize previous approaches to modeling replication to allow for arbitrary spatial variation of initiation rates and fork velocities. We derive rate equations for left- and right-moving forks and for replication probability over time that can be solved numerically to obtain the mean-field replication program. This method accurately reproduces the results of DNA replication simulation. We also successfully adapted our approach to the inverse problem of fitting measurements of DNA replication performed on single DNA molecules. Since such measurements are performed on a specified portion of the genome, the examined DNA molecules may be replicated by forks that originate either within the studied molecule or outside of it. This problem was solved by using an effective flux of incoming replication forks at the model boundaries to represent the origin activity outside the studied region. Using this approach, we show that reliable inferences can be made about the replication of specific portions of the genome even if the amount of data that can be obtained from single-molecule experiments is generally limited.
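
    A hedged Monte Carlo sketch of the modeled process: origins fire stochastically with a position-dependent rate I(x) and forks spread outward at a fixed speed, from which the mean replication fraction f(x, t) can be estimated. Grid sizes and rates are invented for illustration and are not fitted to any data.

      # Monte Carlo sketch of inhomogeneous replication kinetics (illustrative).
      import numpy as np

      L, T = 200, 150                 # genome sites, time steps (1 kb, 1 min units)
      x = np.arange(L, dtype=float)
      I = 1e-4 * (1 + 4 * np.exp(-((x - 100) ** 2) / 200))  # origin-rich zone

      rng = np.random.default_rng(1)
      reps = 200
      f = np.zeros(L)                 # mean replicated fraction at time T
      for _ in range(reps):
          replicated = np.zeros(L, dtype=bool)
          for _step in range(T):
              # stochastic initiation on still-unreplicated sites
              replicated |= (~replicated) & (rng.random(L) < I)
              # fork progression: spread one site per step in both directions
              grown = replicated.copy()
              grown[1:] |= replicated[:-1]
              grown[:-1] |= replicated[1:]
              replicated = grown
          f += replicated
      f /= reps
      print("mean replication fraction in the origin-rich zone:", f[90:110].mean())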

  8. Mechanisms of DNA replication termination.

    Science.gov (United States)

    Dewar, James M; Walter, Johannes C

    2017-08-01

    Genome duplication is carried out by pairs of replication forks that assemble at origins of replication and then move in opposite directions. DNA replication ends when converging replication forks meet. During this process, which is known as replication termination, DNA synthesis is completed, the replication machinery is disassembled and daughter molecules are resolved. In this Review, we outline the steps that are likely to be common to replication termination in most organisms, namely, fork convergence, synthesis completion, replisome disassembly and decatenation. We briefly review the mechanism of termination in the bacterium Escherichia coli and in simian virus 40 (SV40) and also focus on recent advances in eukaryotic replication termination. In particular, we discuss the recently discovered E3 ubiquitin ligases that control replisome disassembly in yeast and higher eukaryotes, and how their activity is regulated to avoid genome instability.

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  10. Phosphatidic acid produced by phospholipase D promotes RNA replication of a plant RNA virus.

    Directory of Open Access Journals (Sweden)

    Kiwamu Hyodo

    2015-05-01

    Full Text Available Eukaryotic positive-strand RNA [(+)RNA] viruses are obligate intracellular parasites that replicate using membrane-bound replicase complexes containing multiple viral and host components. To replicate, (+)RNA viruses exploit host resources and modify host metabolism and membrane organization. Phospholipase D (PLD) is a phosphatidylcholine- and phosphatidylethanolamine-hydrolyzing enzyme that catalyzes the production of phosphatidic acid (PA), a lipid second messenger that modulates diverse intracellular signaling in various organisms. PA is normally present in small amounts (less than 1% of total phospholipids), but rapidly and transiently accumulates in lipid bilayers in response to different environmental cues such as biotic and abiotic stresses in plants. However, the precise functions of PLD and PA remain unknown. Here, we report the roles of PLD and PA in genomic RNA replication of a plant (+)RNA virus, Red clover necrotic mosaic virus (RCNMV). We found that RCNMV RNA replication complexes formed in Nicotiana benthamiana contained PLDα and PLDβ. Gene-silencing and pharmacological inhibition approaches showed that PLDs and PLD-derived PA are required for viral RNA replication. Consistent with this, exogenous application of PA enhanced viral RNA replication in plant cells and plant-derived cell-free extracts. We also found that a viral auxiliary replication protein bound to PA in vitro, and that the amount of PA increased in RCNMV-infected plant leaves. Together, our findings suggest that RCNMV hijacks host PA-producing enzymes to replicate.

  11. Rolling replication of UV-irradiated duplex DNA in the phi X174 replicative-form -> single-strand replication system in vitro

    International Nuclear Information System (INIS)

    Shavitt, O.; Livneh, Z.

    1989-01-01

    Cloning of the phi X174 viral origin of replication into phage M13mp8 produced an M13-phi X174 chimera, the DNA of which directed efficient replicative-form -> single-strand rolling replication in vitro. This replication assay was performed with purified phi X174-encoded gene A protein, Escherichia coli rep helicase, single-stranded DNA-binding protein, and DNA polymerase III holoenzyme. The nicking of replicative-form I (RFI) DNA by gene A protein was essentially unaffected by the presence of UV lesions in the DNA. However, unwinding of UV-irradiated DNA by the rep helicase was inhibited twofold as compared with unwinding of the unirradiated substrate. UV irradiation of the substrate DNA caused a strong inhibition in its ability to direct DNA synthesis. However, even DNA preparations that contained as many as 10 photodimers per molecule still supported the synthesis of progeny full-length single-stranded DNA. The appearance of full-length radiolabeled products implied at least two full rounds of replication, since the first round released the unlabeled plus viral strand of the duplex DNA. Pretreatment of the UV-irradiated DNA substrate with purified pyrimidine dimer endonuclease from Micrococcus luteus, which converted photodimer-containing supercoiled RFI DNA into relaxed, nicked RFII DNA and thus prevented its replication, reduced DNA synthesis by 70%. Analysis of radiolabeled replication products by agarose gel electrophoresis followed by autoradiography revealed that this decrease was due to a reduction in the synthesis of progeny full-length single-stranded DNA. This implies that 70 to 80% of the full-length DNA products produced in this system were synthesized on molecules that carried photodimers.

  12. Mammalian RAD52 Functions in Break-Induced Replication Repair of Collapsed DNA Replication Forks

    DEFF Research Database (Denmark)

    Sotiriou, Sotirios K; Kamileri, Irene; Lugli, Natalia

    2016-01-01

    Human cancers are characterized by the presence of oncogene-induced DNA replication stress (DRS), making them dependent on repair pathways such as break-induced replication (BIR) for damaged DNA replication forks. To better understand BIR, we performed a targeted siRNA screen for genes whose...... RAD52 facilitates repair of collapsed DNA replication forks in cancer cells....

  13. REPLICATION TOOL AND METHOD OF PROVIDING A REPLICATION TOOL

    DEFF Research Database (Denmark)

    2016-01-01

    The invention relates to a replication tool (1, 1a, 1b) for producing a part (4) with a microscale textured replica surface (5a, 5b, 5c, 5d). The replication tool (1, 1a, 1b) comprises a tool surface (2a, 2b) defining a general shape of the item. The tool surface (2a, 2b) comprises a microscale...... energy directors on flange portions thereof uses the replication tool (1, 1a, 1b) to form an item (4) with a general shape as defined by the tool surface (2a, 2b). The formed item (4) comprises a microscale textured replica surface (5a, 5b, 5c, 5d) with a lateral arrangement of polydisperse microscale...

  14. The Impact of the Implementation Cost of Replication in Data Grid Job Scheduling

    Directory of Open Access Journals (Sweden)

    Babar Nazir

    2018-05-01

    Full Text Available Data Grids deal with geographically-distributed large-scale data-intensive applications. Scheduling schemes for data grids attempt not only to improve data access time, but also to improve the ratio of data availability at the node where the data requests are generated. Data replication techniques manage large data by storing a number of data files efficiently. In this paper, we propose centralized dynamic scheduling strategy-replica placement strategies (CDSS-RPS). CDSS-RPS schedules data and tasks so as to minimize the implementation cost and data transfer time. CDSS-RPS consists of two algorithms, namely (a) centralized dynamic scheduling (CDS) and (b) replica placement strategy (RPS). CDS considers the computing capacity of a node and finds an appropriate location for the job. RPS attempts to improve file access time by using replication on the basis of the number of accesses, the storage capacity of a computing node, and the response time of a requested file. Extensive simulations are carried out to demonstrate the effectiveness of the proposed strategy. Simulation results demonstrate that the replication and scheduling strategies improve the implementation cost and average access time significantly.
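
    A hedged sketch of an RPS-style decision as named above: replicate a file to the requesting node when its access count and response time outweigh the storage pressure it would add. The weights and threshold are invented for illustration, not taken from the paper.

      # Sketch of a replica placement decision (illustrative assumptions only).
      def should_replicate(accesses, avg_response_ms, file_gb,
                           free_gb, threshold=1.0):
          if file_gb > free_gb:                 # the node cannot host the replica
              return False
          benefit = accesses * avg_response_ms / 1000.0   # saved seconds/period
          cost = file_gb / free_gb                        # storage pressure
          return benefit / (cost + 1e-9) > threshold

      print(should_replicate(accesses=120, avg_response_ms=250,
                             file_gb=10, free_gb=200))   # True: hot file
      print(should_replicate(accesses=2, avg_response_ms=40,
                             file_gb=10, free_gb=12))    # False: cold file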

  15. Recommendations for Replication Research in Special Education: A Framework of Systematic, Conceptual Replications

    Science.gov (United States)

    Coyne, Michael D.; Cook, Bryan G.; Therrien, William J.

    2016-01-01

    Special education researchers conduct studies that can be considered replications. However, they do not often refer to them as replication studies. The purpose of this article is to consider the potential benefits of conceptualizing special education intervention research within a framework of systematic, conceptual replication. Specifically, we…

  16. DNA replication and cancer

    DEFF Research Database (Denmark)

    Boyer, Anne-Sophie; Walter, David; Sørensen, Claus Storgaard

    2016-01-01

    A dividing cell has to duplicate its DNA precisely once during the cell cycle to preserve genome integrity avoiding the accumulation of genetic aberrations that promote diseases such as cancer. A large number of endogenous impacts can challenge DNA replication and cells harbor a battery of pathways...... causing DNA replication stress and genome instability. Further, we describe cellular and systemic responses to these insults with a focus on DNA replication restart pathways. Finally, we discuss the therapeutic potential of exploiting intrinsic replicative stress in cancer cells for targeted therapy....

  17. Late-replicating X-chromosome: replication patterns in mammalian females

    Directory of Open Access Journals (Sweden)

    Tunin Karen

    2002-01-01

    Full Text Available The GTG-banding and 5-BrdU incorporation patterns of the late-replicating X-chromosome were studied in female dogs and cattle, and compared to human female patterns. The replication patterns of the short arm of the X-chromosomes did not show any difference between human, dog and cattle females. As to the long arm, some bands showed differences among the three studied species regarding the replication kinetics pattern. These differences were observed in a restricted region of the X-chromosome, delimited by Xq11 -> q25 in humans, by Xq1 -> q8 in dogs, and by Xq12 -> q32 in cattle. In an attempt to find out if these differences in the replication kinetics could be a reflection of differences in the localization of genes in that region of the X-chromosome, we used the probe for the human androgen receptor gene (AR) localized at Xq12, which is in the region where we observed differences among the three studied species. We did not, however, observe hybridization signals. Our study goes on, using other human probes for genes located in the region Xq11 -> Xq25.

  18. Exploiting short-term memory in soft body dynamics as a computational resource.

    Science.gov (United States)

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
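
    The finding is the core idea of physical reservoir computing, which can be mimicked numerically: drive a damped mass-spring chain (a crude stand-in for the silicone arm) with an input stream and train only a linear readout to reproduce a delayed copy of the input, a standard short-term memory task. The mechanical parameters are illustrative assumptions.

      # Numerical analogue: a damped mass-spring chain as a reservoir with a
      # trained linear readout (short-term memory task). Illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      N, steps, delay = 20, 2000, 5
      u = rng.uniform(-1, 1, steps)            # input stream

      pos = np.zeros(N); vel = np.zeros(N)
      states = np.zeros((steps, N))
      k, c, dt = 1.0, 0.1, 0.1                 # stiffness, damping, time step
      for t in range(steps):
          force = np.zeros(N)
          force[0] += u[t]                                  # drive one end
          force[1:] += k * (pos[:-1] - pos[1:])             # spring coupling
          force[:-1] += k * (pos[1:] - pos[:-1])
          force -= c * vel
          vel += dt * force                                 # semi-implicit Euler
          pos += dt * vel
          states[t] = pos                                   # "body" readout

      X, y = states[delay:], u[:-delay]        # target: input delayed by 5 steps
      w, *_ = np.linalg.lstsq(X, y, rcond=None)
      pred = X @ w
      print("memory-task correlation:", np.corrcoef(pred, y)[0, 1])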

  19. LHCb: Self managing experiment resources

    CERN Multimedia

    Stagni, F

    2013-01-01

    Within this paper we present an autonomic Computing resources management system used by LHCb for assessing the status of their Grid resources. Virtual Organizations Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites generated the appearance of multiple information systems, monitoring tools, ticket portals, etc... which nowadays coexist and represent a very precious source of information for running HEP experiments Computing systems as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real time informatio...

  20. A Replication by Any Other Name: A Systematic Review of Replicative Intervention Studies

    Science.gov (United States)

    Cook, Bryan G.; Collins, Lauren W.; Cook, Sara C.; Cook, Lysandra

    2016-01-01

    Replication research is essential to scientific knowledge. Reviews of replication studies often electronically search for "replicat*" as a textword, which does not identify studies that replicate previous research but do not self-identify as such. We examined whether the 83 intervention studies published in six non-categorical research…

  1. The scenario on the origin of translation in the RNA world: in principle of replication parsimony

    Directory of Open Access Journals (Sweden)

    Ma Wentao

    2010-11-01

    to aid the binding of proto-tRNAs and proto-mRNAs, allowing the reduction of base pairs between them (ultimately resulting in the triplet anticodon/codon pair), thus further saving the replication cost. In this context, the replication cost saved would allow the appearance of more and longer functional peptides and, finally, proteins. The hypothesis could be called "DRT-RP" ("RP" for "replication parsimony"). Testing the hypothesis: The scenario described here is open for experimental work at some key scenes, including the compact DRT mechanism, the development of adaptors from aa-aptamers, the synthesis of peptides by proto-tRNAs and proto-mRNAs without the participation of proto-rRNAs, etc. Interestingly, a recent computer simulation study has demonstrated the plausibility of one of the evolving processes driven by replication parsimony in the scenario. Implication of the hypothesis: An RNA-based proto-translation system could arise gradually from the DRT mechanism according to the principle of "replication parsimony", that is, to save the replication cost of RNA templates for functional peptides. A surprising side deduction along the logic of the hypothesis is that complex, biosynthetic amino acids might have entered the genetic code earlier than simple, prebiotic amino acids, which is opposite to common sense. Overall, the present discussion clarifies the blurry scenario concerning the origin of translation with a major clue, which shows vividly how life could "manage" to exploit potential chemical resources in nature, eventually in an efficient way over evolution. Reviewers: This article was reviewed by Eugene V. Koonin, Juergen Brosius, and Arcady Mushegian.

  2. Registered Replication Report

    DEFF Research Database (Denmark)

    Bouwmeester, S.; Verkoeijen, P. P.J.L.; Aczel, B.

    2017-01-01

    and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed...... the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned...

  3. The NILE system architecture: fault-tolerant, wide-area access to computing and data resources

    International Nuclear Information System (INIS)

    Ricciardi, Aleta; Ogg, Michael; Rothfus, Eric

    1996-01-01

    NILE is a multi-disciplinary project building a distributed computing environment for HEP. It provides wide-area, fault-tolerant, integrated access to processing and data resources for collaborators of the CLEO experiment, though the goals and principles are applicable to many domains. NILE has three main objectives: a realistic distributed system architecture design, the design of a robust data model, and a Fast-Track implementation providing a prototype design environment which will also be used by CLEO physicists. This paper focuses on the software and wide-area system architecture design and the computing issues involved in making NILE services highly-available. (author)

  4. International Expansion through Flexible Replication

    DEFF Research Database (Denmark)

    Jonsson, Anna; Foss, Nicolai Juul

    2011-01-01

    Business organizations may expand internationally by replicating a part of their value chain, such as a sales and marketing format, in other countries. However, little is known regarding how such “international replicators” build a format for replication, or how they can adjust it in order to ada......, etc.) are replicated in a uniform manner across stores, and change only very slowly (if at all) in response to learning (“flexible replication”). We conclude by discussing the factors that influence the approach to replication adopted by an international replicator.

  5. Enzyme-like replication de novo in a microcontroller environment.

    Science.gov (United States)

    Tangen, Uwe

    2010-01-01

    The desire to start evolution from scratch inside a computer memory is as old as computing. Here we demonstrate how viable computer programs can be established de novo in a Precambrian environment without supplying any specific instantiation, just starting with random bit sequences. These programs are not self-replicators, but act much more like catalysts. The microcontrollers used in the end are the result of a long series of simplifications. The objective of this simplification process was to produce universal machines with a human-readable interface, allowing software and/or hardware evolution to be studied. The power of the instruction set can be modified by introducing a secondary structure-folding mechanism, which is a state machine, allowing nontrivial replication to emerge with an instruction width of only a few bits. This state-machine approach not only attenuates the problems of brittleness and encoding functionality (too few bits available for coding, and too many instructions needed); it also enables the study of hardware evolution as such. Furthermore, the instruction set is sufficiently powerful to permit external signals to be processed. This information-theoretic approach forms one vertex of a triangle alongside artificial cell research and experimental research on the creation of life. Hopefully this work helps develop an understanding of how information—in a similar sense to the account of functional information described by Hazen et al.—is created by evolution and how this information interacts with or is embedded in its physico-chemical environment.

  6. Sterol Binding by the Tombusviral Replication Proteins Is Essential for Replication in Yeast and Plants.

    Science.gov (United States)

    Xu, Kai; Nagy, Peter D

    2017-04-01

    Membranous structures derived from various organelles are important for replication of plus-stranded RNA viruses. Although the important roles of co-opted host proteins in RNA virus replication have been appreciated for a decade, the equally important functions of cellular lipids in virus replication have been gaining full attention only recently. Previous work with Tomato bushy stunt tombusvirus (TBSV) in model host yeast has revealed essential roles for phosphatidylethanolamine and sterols in viral replication. To further our understanding of the role of sterols in tombusvirus replication, in this work we showed that the TBSV p33 and p92 replication proteins could bind to sterols in vitro. The sterol binding by p33 is supported by cholesterol recognition/interaction amino acid consensus (CRAC) and CARC-like sequences within the two transmembrane domains of p33. Mutagenesis of the critical Y amino acids within the CRAC and CARC sequences blocked TBSV replication in yeast and plant cells. We also showed the enrichment of sterols in the detergent-resistant membrane (DRM) fractions obtained from yeast and plant cells replicating TBSV. The DRMs could support viral RNA synthesis on both the endogenous and exogenous templates. A lipidomic approach showed the lack of enhancement of sterol levels in yeast and plant cells replicating TBSV. The data support the notion that the TBSV replication proteins are associated with sterol-rich detergent-resistant membranes in yeast and plant cells. Together, the results obtained in this study and the previously published results support the local enrichment of sterols around the viral replication proteins that is critical for TBSV replication. IMPORTANCE One intriguing aspect of viral infections is their dependence on efficient subcellular assembly platforms serving replication, virion assembly, or virus egress via budding out of infected cells. These assembly platforms might involve sterol-rich membrane microdomains, which are

  7. Selecting, Evaluating and Creating Policies for Computer-Based Resources in the Behavioral Sciences and Education.

    Science.gov (United States)

    Richardson, Linda B., Comp.; And Others

    This collection includes four handouts: (1) "Selection Critria Considerations for Computer-Based Resources" (Linda B. Richardson); (2) "Software Collection Policies in Academic Libraries" (a 24-item bibliography, Jane W. Johnson); (3) "Circulation and Security of Software" (a 19-item bibliography, Sara Elizabeth Williams); and (4) "Bibliography of…

  8. Logical and physical resource management in the common node of a distributed function laboratory computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-01-01

    A scheme for managing resources required for transaction processing in the common node of a distributed function computer system is presented. The scheme has proven satisfactory for all common node services provided so far.

  9. Mcm10 regulates DNA replication elongation by stimulating the CMG replicative helicase.

    Science.gov (United States)

    Lõoke, Marko; Maloney, Michael F; Bell, Stephen P

    2017-02-01

    Activation of the Mcm2-7 replicative DNA helicase is the committed step in eukaryotic DNA replication initiation. Although Mcm2-7 activation requires binding of the helicase-activating proteins Cdc45 and GINS (forming the CMG complex), an additional protein, Mcm10, drives initial origin DNA unwinding by an unknown mechanism. We show that Mcm10 binds a conserved motif located between the oligonucleotide/oligosaccharide fold (OB-fold) and A subdomain of Mcm2. Although buried in the interface between these domains in Mcm2-7 structures, mutations predicted to separate the domains and expose this motif restore growth to conditional-lethal MCM10 mutant cells. We found that, in addition to stimulating initial DNA unwinding, Mcm10 stabilizes Cdc45 and GINS association with Mcm2-7 and stimulates replication elongation in vivo and in vitro. Furthermore, we identified a lethal allele of MCM10 that stimulates initial DNA unwinding but is defective in replication elongation and CMG binding. Our findings expand the roles of Mcm10 during DNA replication and suggest a new model for Mcm10 function as an activator of the CMG complex throughout DNA replication. © 2017 Lõoke et al.; Published by Cold Spring Harbor Laboratory Press.

  10. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started a collaboration in March 2015 to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end-users with seamless access to an integrated infrastructure offering both EGI and EUDAT services and, thus, to pair data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help to assign the right priorities to each of them. In this way, from the beginning, this activity has been driven by the end users. The identified user communities are

  11. A Replication Study on the Multi-Dimensionality of Online Social Presence

    Science.gov (United States)

    Mykota, David B.

    2015-01-01

    The purpose of the present study is to conduct an external replication into the multi-dimensionality of social presence as measured by the Computer-Mediated Communication Questionnaire (Tu, 2005). Online social presence is one of the more important constructs for determining the level of interaction and effectiveness of learning in an online…

  12. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multi channels in learning activities promise extended benefits from traditional based learning-centred to a collaborative based learning-centred that emphasises pervasive learning anywhere and anytime. While compiling big data, cloud computing, and semantic web into OLR offer a broader spectrum of…

  13. Asynchronous broadcast for ordered delivery between compute nodes in a parallel computing system where packet header space is limited

    Science.gov (United States)

    Kumar, Sameer

    2010-06-15

    Disclosed is a mechanism on receiving processors in a parallel computing system for providing order to data packets received from a broadcast call and for distinguishing data packets received at nodes from several incoming asynchronous broadcast messages where header space is limited. In the present invention, processors at lower leaves of a tree do not need to obtain a broadcast message by directly accessing the data in a root processor's buffer. Instead, each subsequent intermediate node's rank id information is squeezed into the software portion of the packet header. In turn, the entire broadcast message is not transferred from the root processor to each processor in a communicator, but instead is replicated on several intermediate nodes, which then replicate the message to nodes in lower leaves. Hence, the intermediate compute nodes become "virtual root compute nodes" for the purpose of replicating the broadcast message to lower levels of the tree.
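
    A small sketch of the pattern described above, assuming a k-ary broadcast tree: each intermediate rank acts as a virtual root and re-replicates the payload to its children, so only a small rank id needs to travel in the constrained packet header.

      # Sketch of the "virtual root" broadcast pattern over a k-ary tree.
      K = 3  # tree fan-out (illustrative)

      def children(rank: int, nprocs: int) -> list:
          # Children of `rank` in an implicit K-ary tree rooted at rank 0.
          return [c for c in range(K * rank + 1, K * rank + K + 1) if c < nprocs]

      def broadcast(rank: int, nprocs: int, depth: int = 0) -> None:
          for child in children(rank, nprocs):
              print("  " * depth + f"rank {rank} replicates payload to {child}")
              broadcast(child, nprocs, depth + 1)   # child acts as a virtual root

      broadcast(0, nprocs=10)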

  14. Controlling user access to electronic resources without password

    Science.gov (United States)

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource proximal environmental information. In at least some embodiments, the process further includes comparing user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
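
    A hedged sketch of the comparison step: access is granted when the user's observed environment (here, a set of visible Wi-Fi networks) is sufficiently similar to the environment pre-associated with the resource. All identifiers and the threshold are made up for illustration.

      # Sketch of environment-based access control via set similarity.
      def jaccard(a: set, b: set) -> float:
          return len(a & b) / len(a | b) if a | b else 0.0

      resource_env = {"lab-wifi", "printer-3f", "sensor-ap"}   # pre-associated
      user_env = {"lab-wifi", "printer-3f", "guest-net"}       # observed by user

      GRANT_THRESHOLD = 0.5
      if jaccard(user_env, resource_env) >= GRANT_THRESHOLD:
          print("access granted")      # 2/4 = 0.5 here: granted
      else:
          print("access denied")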

  15. Who Needs Replication?

    Science.gov (United States)

    Porte, Graeme

    2013-01-01

    In this paper, the editor of a recent Cambridge University Press book on research methods discusses replicating previous key studies to throw more light on their reliability and generalizability. Replication research is presented as an accepted method of validating previous research by providing comparability between the original and replicated…

  16. Mechanisms of bacterial DNA replication restart

    Science.gov (United States)

    Windgassen, Tricia A; Wessel, Sarah R; Bhattacharyya, Basudeb

    2018-01-01

    Multi-protein DNA replication complexes called replisomes perform the essential process of copying cellular genetic information prior to cell division. Under ideal conditions, replisomes dissociate only after the entire genome has been duplicated. However, DNA replication rarely occurs without interruptions that can dislodge replisomes from DNA. Such events produce incompletely replicated chromosomes that, if left unrepaired, prevent the segregation of full genomes to daughter cells. To mitigate this threat, cells have evolved ‘DNA replication restart’ pathways that have been best defined in bacteria. Replication restart requires recognition and remodeling of abandoned replication forks by DNA replication restart proteins followed by reloading of the replicative DNA helicase, which subsequently directs assembly of the remaining replisome subunits. This review summarizes our current understanding of the mechanisms underlying replication restart and the proteins that drive the process in Escherichia coli (PriA, PriB, PriC and DnaT). PMID:29202195

  17. Self managing experiment resources

    International Nuclear Information System (INIS)

    Stagni, F; Ubeda, M; Charpentier, P; Tsaregorodtsev, A; Romanovskiy, V; Roiser, S; Graciani, R

    2014-01-01

    Within this paper we present an autonomic computing-resource management system used by LHCb for assessing the status of their Grid resources. Virtual Organization Grids include heterogeneous resources: for example, LHC experiments very often use resources not provided by WLCG, and Cloud computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has led to a proliferation of information systems, monitoring tools, ticket portals and so on, which nowadays coexist and represent a precious source of information for the computing systems of running HEP experiments as well as for sites. These two facts lead to many particular solutions to a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed these issues. With a renewed Central Information Schema hosting all resource metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resource topology independently of the resource types. The Resource Status System applies data-mining techniques to all available information sources and assesses status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free; therefore, in order to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  18. What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science.

    Science.gov (United States)

    Patil, Prasad; Peng, Roger D; Leek, Jeffrey T

    2016-07-01

    A recent study of the replicability of key psychological findings is a major contribution toward understanding the human side of the scientific process. Despite the careful and nuanced analysis reported, the simple narrative disseminated by the mass, social, and scientific media was that the original results were replicated in only 36% of the studies. In the current study, however, we showed that 77% of the replication effect sizes reported were within a 95% prediction interval calculated using the original effect size. Our analysis suggests two critical issues in understanding replication of psychological studies. First, researchers' intuitive expectations for what a replication should show do not always match statistical estimates of replication. Second, when the results of original studies are very imprecise, they create wide prediction intervals, and a broad range of replication effects that are consistent with the original estimates. This may lead to effects that replicate successfully, in that replication results are consistent with statistical expectations, but do not provide much information about the size (or existence) of the true effect. In this light, the results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment.
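
    A sketch of the check described above: whether a replication effect falls inside a 95% prediction interval built from the original estimate. This uses the standard normal-theory formula; the paper's exact computation may differ in detail, and the numbers below are invented for illustration.

```python
import math

def prediction_interval_95(theta_orig, se_orig, se_rep):
    # The replication estimate varies around the original with the
    # combined uncertainty of both studies.
    half_width = 1.96 * math.sqrt(se_orig**2 + se_rep**2)
    return theta_orig - half_width, theta_orig + half_width

# An imprecise original (large standard error) yields a wide interval,
# so even a much smaller replication effect counts as "consistent".
lo, hi = prediction_interval_95(theta_orig=0.40, se_orig=0.15, se_rep=0.12)
print(lo <= 0.10 <= hi)  # True: interval is roughly (0.02, 0.78)
```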

  19. Selective recruitment of nuclear factors to productively replicating herpes simplex virus genomes.

    Science.gov (United States)

    Dembowski, Jill A; DeLuca, Neal A

    2015-05-01

    Much of the HSV-1 life cycle is carried out in the cell nucleus, including the expression, replication, repair, and packaging of viral genomes. Viral proteins, as well as cellular factors, play essential roles in these processes. Isolation of proteins on nascent DNA (iPOND) was developed to label and purify cellular replication forks. We adapted aspects of this method to label viral genomes in order to both image and purify replicating HSV-1 genomes for the identification of associated proteins. Many viral and cellular factors were enriched on viral genomes, including factors that mediate DNA replication, repair, chromatin remodeling, transcription, and RNA processing. As infection proceeded, packaging and structural components were enriched to a greater extent. Among the more abundant proteins that copurified with genomes were the viral transcription factor ICP4 and the replication protein ICP8. Furthermore, all seven viral replication proteins were enriched on viral genomes, along with cellular PCNA and topoisomerases, while other cellular replication proteins were not detected. The chromatin-remodeling complexes present on viral genomes included the INO80, SWI/SNF, NURD, and FACT complexes, which may prevent chromatinization of the genome. Consistent with this conclusion, histones were not readily recovered with purified viral genomes, and imaging studies revealed an underrepresentation of histones on viral genomes. RNA polymerase II, the mediator complex, TFIID, TFIIH, and several other transcriptional activators and repressors were also affinity purified with viral DNA. The presence of INO80, NURD, SWI/SNF, mediator, TFIID, and TFIIH components is consistent with previous studies in which these complexes copurified with ICP4. Therefore, ICP4 is likely involved in the recruitment of these key cellular chromatin remodeling and transcription factors to viral genomes. Taken together, iPOND is a valuable method for the study of viral genome dynamics during infection and

  20. Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.

    Science.gov (United States)

    Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J

    2008-06-18

    correlation coefficient and the SD-weighted correlation coefficient, and is particularly useful for clustering replicated microarray data. This computational approach should be generally useful for proteomic data or other high-throughput analysis methodology.

  1. Education: DNA replication using microscale natural convection.

    Science.gov (United States)

    Priye, Aashish; Hassan, Yassin A; Ugaz, Victor M

    2012-12-07

    There is a need for innovative educational experiences that unify and reinforce fundamental principles at the interface between the physical, chemical, and life sciences. These experiences empower and excite students by helping them recognize how interdisciplinary knowledge can be applied to develop new products and technologies that benefit society. Microfluidics offers an incredibly versatile tool to address this need. Here we describe our efforts to create innovative hands-on activities that introduce chemical engineering students to molecular biology by challenging them to harness microscale natural convection phenomena to perform DNA replication via the polymerase chain reaction (PCR). Experimentally, we have constructed convective PCR stations incorporating a simple design for loading and mounting cylindrical microfluidic reactors between independently controlled thermal plates. A portable motion analysis microscope enables flow patterns inside the convective reactors to be directly visualized using fluorescent bead tracers. We have also developed a hands-on computational fluid dynamics (CFD) exercise based on modeling microscale thermal convection to identify optimal geometries for DNA replication. A cognitive assessment reveals that these activities strongly impact student learning in a positive way.

  2. The Impact of Message Replication on the Performance of Opportunistic Networks for Sensed Data Collection

    Directory of Open Access Journals (Sweden)

    Tekenate E. Amah

    2017-11-01

    Opportunistic networks (OppNets) provide a scalable solution for collecting delay-tolerant data from sensors to their respective gateways. Portable handheld user devices contribute significantly to the scalability of OppNets, since their number increases with the user population and they closely follow human movement patterns. Hence, OppNets for sensed data collection are characterised by high node population and degrees of spatial locality inherent to user movement. We study the impact of these characteristics on the performance of existing OppNet message replication techniques. Our findings reveal that existing replication techniques are not specifically designed to cope with these characteristics. This raises concerns regarding excessive message transmission overhead and throughput degradation due to resource constraints and technological limitations associated with portable handheld user devices. Based on concepts derived from the study, we suggest design guidelines to augment existing message replication techniques. We also follow our design guidelines to propose a message replication technique, namely Locality Aware Replication (LARep). Simulation results show that LARep achieves better network performance under high node population and degrees of spatial locality as compared with existing techniques.

  3. DNA Replication Profiling Using Deep Sequencing.

    Science.gov (United States)

    Saayman, Xanita; Ramos-Pérez, Cristina; Brown, Grant W

    2018-01-01

    Profiling of DNA replication during progression through S phase allows a quantitative snapshot of replication origin usage and DNA replication fork progression. We present a method for using deep-sequencing data to profile DNA replication in S. cerevisiae.

  4. A Temporal Proteomic Map of Epstein-Barr Virus Lytic Replication in B Cells

    Directory of Open Access Journals (Sweden)

    Ina Ersing

    2017-05-01

    Epstein-Barr virus (EBV) replication contributes to multiple human diseases, including infectious mononucleosis, nasopharyngeal carcinoma, B cell lymphomas, and oral hairy leukoplakia. We performed systematic quantitative analyses of temporal changes in host and EBV proteins during lytic replication to gain insights into virus-host interactions, using conditional Burkitt lymphoma models of type I and II EBV infection. We quantified profiles of >8,000 cellular and 69 EBV proteins, including >500 plasma membrane proteins, providing temporal views of the lytic B cell proteome and EBV virome. Our approach revealed EBV-induced remodeling of cell cycle, innate and adaptive immune pathways, including upregulation of the complement cascade and proteasomal degradation of the B cell receptor complex, conserved between EBV types I and II. Cross-comparison with proteomic analyses of human cytomegalovirus infection and of a Kaposi-sarcoma-associated herpesvirus immunoevasin identified host factors targeted by multiple herpesviruses. Our results provide an important resource for studies of EBV replication.

  5. DrugSig: A resource for computational drug repositioning utilizing gene expression signatures.

    Directory of Open Access Journals (Sweden)

    Hongyu Wu

    Computational drug repositioning has proved to be an effective approach to developing new uses for existing drugs. However, currently existing strategies rely strongly on drug-response gene signatures that are scattered across separate experimental datasets, which results in inefficient output. A comprehensive database of drug-response gene signatures would therefore be very helpful to these methods. We collected drug-response microarray data and annotated related drug and target information from public databases and the scientific literature. By selecting the top 500 up-regulated and down-regulated genes as drug signatures, we manually established the DrugSig database. Currently DrugSig contains more than 1300 drugs, 7000 microarrays and 800 targets. Moreover, we developed signature-based and target-based functions to aid drug repositioning. The constructed database can serve as a resource to accelerate computational drug repositioning. Database URL: http://biotechlab.fudan.edu.cn/database/drugsig/.
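
    A sketch of the signature-construction step the record describes: rank genes by differential expression and keep the top 500 up- and down-regulated genes. The input format and function name are assumptions for illustration.

```python
def build_drug_signature(fold_changes: dict, k: int = 500):
    """Return (up, down) gene lists from a gene -> log fold-change map."""
    ranked = sorted(fold_changes, key=fold_changes.get, reverse=True)
    up = [g for g in ranked[:k] if fold_changes[g] > 0]      # most induced
    down = [g for g in ranked[-k:] if fold_changes[g] < 0]   # most repressed
    return up, down
```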

  6. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models; previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator is presented, including a discussion of the generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, are presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production and computation situations. The NPA incorporates a client/server interface for the transfer and translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs
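
    The RPC paradigm the paper describes can be sketched with Python's standard-library XML-RPC as a modern stand-in for the Sun RPC/XDR tooling of the era; here the library handles serialization much as XDR did. The service name and port are illustrative.

```python
# --- server process: expose a computation as a remote service ---
from xmlrpc.server import SimpleXMLRPCServer

def simulate(step_count):
    # stand-in for a co-processor style computation on the server
    return sum(i * i for i in range(step_count))

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(simulate)
# server.serve_forever()  # run this in one process...

# --- client process: call the remote service as if it were local ---
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
# result = proxy.simulate(10)  # ...then invoke it from another process
```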

  7. Induction of UV-resistant DNA replication in Escherichia coli: Induced stable DNA replication as an SOS function

    International Nuclear Information System (INIS)

    Kogoma, T.; Torrey, T.A.; Connaughton, M.J.

    1979-01-01

    The striking similarity between the treatments that induce SOS functions and those that result in stable DNA replication (continuous DNA replication in the absence of protein synthesis) prompted us to examine the possibility that stable DNA replication is a recA+ lexA+-dependent SOS function. In addition to the treatments previously reported, ultraviolet (UV) irradiation or treatment with mitomycin C was also found to induce stable DNA replication. The thermal treatment of tif-1 strains did not result in detectable levels of stable DNA replication, but nalidixic acid readily induced the activity in these strains. The induction of stable DNA replication with nalidixic acid was severely suppressed in tif-1 lexA mutant strains. The inhibitory activity of lexA3 was negated by the presence of the spr-5l mutation, an intragenic suppressor of lexA3. Induced stable DNA replication was found to be considerably more resistant to UV irradiation than normal replication in both a uvrA6 strain and a uvr+ strain. The UV-resistant replication occurred mostly in the semiconservative manner. The possible roles of stable DNA replication in repair of damaged DNA are discussed. (orig.)

  8. Chromatin replication and epigenome maintenance

    DEFF Research Database (Denmark)

    Alabert, Constance; Groth, Anja

    2012-01-01

    Stability and function of eukaryotic genomes are closely linked to chromatin structure and organization. During cell division the entire genome must be accurately replicated and the chromatin landscape reproduced on new DNA. Chromatin and nuclear structure influence where and when DNA replication initiates, whereas the replication process itself disrupts chromatin and challenges established patterns of genome regulation. Specialized replication-coupled mechanisms assemble new DNA into chromatin, but epigenome maintenance is a continuous process taking place throughout the cell cycle. If DNA...

  9. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    International Nuclear Information System (INIS)

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered

  10. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Bigeleisen, Jacob; Berne, Bruce J.; Cotton, F. Albert; Scheraga, Harold A.; Simmons, Howard E.; Snyder, Lawrence C.; Wiberg, Kenneth B.; Wipke, W. Todd

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered.

  11. 36 CFR 910.64 - Replication.

    Science.gov (United States)

    2010-07-01

    36 CFR 910.64 (Parks, Forests, and Public Property; Pennsylvania Avenue Development Corporation; General... Development Area; Glossary of Terms): Replication means the process of using modern methods...

  12. Replication stress-induced chromosome breakage is correlated with replication fork progression and is preceded by single-stranded DNA formation.

    Science.gov (United States)

    Feng, Wenyi; Di Rienzi, Sara C; Raghuraman, M K; Brewer, Bonita J

    2011-10-01

    Chromosome breakage as a result of replication stress has been hypothesized to be the direct consequence of defective replication fork progression, or "collapsed" replication forks. However, direct and genome-wide evidence that collapsed replication forks give rise to chromosome breakage is still lacking. Previously we showed that a yeast replication checkpoint mutant mec1-1, after transient exposure to replication impediment imposed by hydroxyurea (HU), failed to complete DNA replication, accumulated single-stranded DNA (ssDNA) at the replication forks, and fragmented its chromosomes. In this study, by following replication fork progression genome-wide via ssDNA detection and by direct mapping of chromosome breakage after HU exposure, we have tested the hypothesis that the chromosome breakage in mec1 cells occurs at collapsed replication forks. We demonstrate that sites of chromosome breakage indeed correlate with replication fork locations. Moreover, ssDNA can be detected prior to chromosome breakage, suggesting that ssDNA accumulation is the common precursor to double strand breaks at collapsed replication forks.

  13. Replication Research and Special Education

    Science.gov (United States)

    Travers, Jason C.; Cook, Bryan G.; Therrien, William J.; Coyne, Michael D.

    2016-01-01

    Replicating previously reported empirical research is a necessary aspect of an evidence-based field of special education, but little formal investigation into the prevalence of replication research in the special education research literature has been conducted. Various factors may explain the lack of attention to replication of special education…

  14. Eukaryotic DNA Replication Fork.

    Science.gov (United States)

    Burgers, Peter M J; Kunkel, Thomas A

    2017-06-20

    This review focuses on the biogenesis and composition of the eukaryotic DNA replication fork, with an emphasis on the enzymes that synthesize DNA and repair discontinuities on the lagging strand of the replication fork. Physical and genetic methodologies aimed at understanding these processes are discussed. The preponderance of evidence supports a model in which DNA polymerase ε (Pol ε) carries out the bulk of leading strand DNA synthesis at an undisturbed replication fork. DNA polymerases α and δ carry out the initiation of Okazaki fragment synthesis and its elongation and maturation, respectively. This review also discusses alternative proposals, including cellular processes during which alternative forks may be utilized, and new biochemical studies with purified proteins that are aimed at reconstituting leading and lagging strand DNA synthesis separately and as an integrated replication fork.

  15. The progression of replication forks at natural replication barriers in live bacteria

    NARCIS (Netherlands)

    Moolman, M.C.; Tiruvadi Krishnan, S; Kerssemakers, J.W.J.; de Leeuw, R.; Lorent, V.J.F.; Sherratt, David J.; Dekker, N.H.

    2016-01-01

    Protein-DNA complexes are one of the principal barriers the replisome encounters during replication. One such barrier is the Tus-ter complex, which is a direction dependent barrier for replication fork progression. The details concerning the dynamics of the replisome when encountering these

  16. Flock House virus subgenomic RNA3 is replicated and its replication correlates with transactivation of RNA2

    International Nuclear Information System (INIS)

    Eckerle, Lance D.; Albarino, Cesar G.; Ball, L. Andrew.

    2003-01-01

    The nodavirus Flock House virus has a bipartite genome composed of RNAs 1 and 2, which encode the catalytic component of the RNA-dependent RNA polymerase (RdRp) and the capsid protein precursor, respectively. In addition to catalyzing replication of the viral genome, the RdRp also transcribes from RNA1 a subgenomic RNA3, which is both required for and suppressed by RNA2 replication. Here, we show that in the absence of RNA1 replication, FHV RdRp replicated positive-sense RNA3 transcripts fully and copied negative-sense RNA3 transcripts into positive strands. The two nonstructural proteins encoded by RNA3 were dispensable for replication, but sequences in the 3'-terminal 58 nucleotides were required. RNA3 variants that failed to replicate also failed to transactivate RNA2. These results imply that RNA3 is naturally produced both by transcription from RNA1 and by subsequent RNA1-independent replication and that RNA3 replication may be necessary for transactivation of RNA2

  17. DNA Replication Control During Drosophila Development: Insights into the Onset of S Phase, Replication Initiation, and Fork Progression

    Science.gov (United States)

    Hua, Brian L.; Orr-Weaver, Terry L.

    2017-01-01

    Proper control of DNA replication is critical to ensure genomic integrity during cell proliferation. In addition, differential regulation of the DNA replication program during development can change gene copy number to influence cell size and gene expression. Drosophila melanogaster serves as a powerful organism to study the developmental control of DNA replication in various cell cycle contexts in a variety of differentiated cell and tissue types. Additionally, Drosophila has provided several developmentally regulated replication models to dissect the molecular mechanisms that underlie replication-based copy number changes in the genome, which include differential underreplication and gene amplification. Here, we review key findings and our current understanding of the developmental control of DNA replication in the contexts of the archetypal replication program as well as of underreplication and differential gene amplification. We focus on the use of these latter two replication systems to delineate many of the molecular mechanisms that underlie the developmental control of replication initiation and fork elongation. PMID:28874453

  18. DNA replication is an integral part of the mouse oocyte's reprogramming machinery.

    Directory of Open Access Journals (Sweden)

    Bingyuan Wang

    Many of the structural and mechanistic requirements of oocyte-mediated nuclear reprogramming remain elusive. Previous accounts that transcriptional reprogramming of somatic nuclei in mouse zygotes may be complete in 24-36 hours, far more rapidly than in other reprogramming systems, raise the question of whether mere exposure to the activated mouse ooplasm is sufficient to enact reprogramming in a nucleus. We therefore prevented DNA replication and cytokinesis, which ensue after nuclear transfer, in order to assess their requirement for transcriptional reprogramming of the key pluripotency genes Oct4 (Pou5f1) and Nanog in cloned mouse embryos. Using transcriptome and allele-specific analysis, we observed that hundreds of mRNAs, but not Oct4 and Nanog, became elevated in nucleus-transplanted oocytes without DNA replication. Progression through the first round of DNA replication was essential but not sufficient for transcriptional reprogramming of Oct4 and Nanog, whereas cytokinesis, and thereby cell-cell interactions, were dispensable for transcriptional reprogramming. Similar responses were also observed in embryos produced by fertilization in vitro. Our results link the occurrence of reprogramming to a previously unappreciated requirement of oocyte-mediated nuclear reprogramming, namely DNA replication. Nuclear transfer alone affords no immediate transition from a somatic to a pluripotent gene expression pattern unless DNA replication is also in place. This study is therefore a resource for appreciating that the quest for ever faster reprogramming methods may collide with a limit dictated by the cell cycle.

  19. Rapid and accurate species tree estimation for phylogeographic investigations using replicated subsampling.

    Science.gov (United States)

    Hird, Sarah; Kubatko, Laura; Carstens, Bryan

    2010-11-01

    We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset.
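
    A sketch of the replicated-subsampling loop: repeatedly draw a few alleles per species and a few loci, estimate a tree for each subsample, and report the most frequent topology. The tree-estimation step (STEM in the paper) is abstracted behind a caller-supplied function, and a real analysis would build a majority-rule consensus over bipartitions rather than over whole topology strings.

```python
import random
from collections import Counter

def replicated_subsample_tree(data, species, estimate_tree,
                              n_alleles=3, n_loci=8, n_replicates=100):
    """`data[locus][sp]` is a list of alleles; `estimate_tree` (assumed
    helper) returns a canonical topology string for a subsampled matrix."""
    topologies = Counter()
    for _ in range(n_replicates):
        loci = random.sample(list(data), n_loci)
        subsample = {locus: {sp: random.sample(data[locus][sp], n_alleles)
                             for sp in species}
                     for locus in loci}
        topologies[estimate_tree(subsample)] += 1
    return topologies.most_common(1)[0]  # (topology, replicate count)
```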

  20. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    International Nuclear Information System (INIS)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-01-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi–Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources. (paper)

  1. Resource management in utility and cloud computing

    CERN Document Server

    Zhao, Han

    2013-01-01

    This SpringerBrief reviews the existing market-oriented strategies for economically managing resource allocation in distributed systems. It describes three new schemes that address cost-efficiency, user incentives, and allocation fairness with regard to different scheduling contexts. The first scheme, taking the Amazon EC2 market as a case study, investigates optimal resource rental planning models based on linear integer programming and stochastic optimization techniques. This model is useful for exploring the interaction between the cloud infrastructure provider and the cloud resource c
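
    A toy version of the rental-planning question the brief studies: how many reserved instances to commit to when the remainder of a fluctuating demand is served on demand. Prices and demand are invented, and brute-force enumeration stands in for the integer and stochastic programs used in the book.

```python
def best_reservation(demand_per_hour, reserved_hourly=0.05, on_demand_hourly=0.10):
    """Pick the reserved-instance count minimising total cost; reserved
    capacity is paid for every hour whether it is used or not."""
    def cost(n_reserved):
        return sum(n_reserved * reserved_hourly +
                   max(d - n_reserved, 0) * on_demand_hourly
                   for d in demand_per_hour)
    return min(range(max(demand_per_hour) + 1), key=cost)

print(best_reservation([2, 3, 8, 10, 9, 4, 2, 1]))  # instances worth reserving
```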

  2. Effective ANT based Routing Algorithm for Data Replication in MANETs

    Directory of Open Access Journals (Sweden)

    N.J. Nithya Nandhini

    2013-12-01

    In a mobile ad hoc network, nodes often move and the topology keeps changing. Data packets can be forwarded from one node to another on demand. To increase data accessibility, data are replicated at nodes and made sharable to other nodes. It is commonly assumed that all mobile hosts cooperate to share their memory and to forward data packets, but in reality not all nodes share their resources for the benefit of others. Such nodes may act selfishly in sharing memory and forwarding data packets. This paper focuses on the selfishness of mobile nodes in replica allocation and on a routing protocol based on the ant colony algorithm to improve efficiency. The ant colony algorithm is used to reduce overhead in the mobile network, so that data access is more efficient than with other routing protocols. The results show the efficiency of the ant-based routing algorithm in replica allocation.
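
    A minimal sketch of the ant-colony step underlying such protocols: the next hop is chosen with probability proportional to pheromone, pheromone on every trail evaporates, and the trail actually used is reinforced. Parameter values are illustrative; the paper's exact update rules are not reproduced here.

```python
import random

def choose_next_hop(pheromone: dict) -> str:
    """Pick a neighbour with probability proportional to its pheromone."""
    hops, weights = zip(*pheromone.items())
    return random.choices(hops, weights=weights)[0]

def update_pheromone(pheromone: dict, used_hop: str,
                     quality: float, evaporation: float = 0.1) -> None:
    for hop in pheromone:
        pheromone[hop] *= (1.0 - evaporation)  # all trails evaporate
    pheromone[used_hop] += quality             # reinforce the successful hop
```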

  3. The Usage of informal computer based communication in the context of organization’s technological resources

    OpenAIRE

    Raišienė, Agota Giedrė; Jonušauskas, Steponas

    2011-01-01

    The purpose of the article is to analyze, theoretically and practically, the features of informal computer-based communication in the context of an organization's technological resources. Methodology: meta-analysis, survey and descriptive analysis. According to scientists, the functions of informal communication cover sharing of work-related information, coordination of team activities, spread of organizational culture and a feeling of interdependence and affinity. Also, informal communication widens the ...

  4. Virtual Replication of IoT Hubs in the Cloud: A Flexible Approach to Smart Object Management

    Directory of Open Access Journals (Sweden)

    Simone Cirani

    2018-03-01

    In future years, the Internet of Things is expected to interconnect billions of highly heterogeneous devices, denoted as "smart objects", enabling the development of innovative distributed applications. Smart objects are constrained sensor/actuator-equipped devices, in terms of computational power and available memory. In order to cope with the diverse physical connectivity technologies of smart objects, the Internet Protocol is foreseen as the common "language" for full interoperability and as a unifying factor for integration with the Internet. Large-scale platforms for interconnected devices are required to effectively manage resources provided by smart objects. In this work, we present a novel architecture for the management of large numbers of resources in a scalable, seamless, and secure way. The proposed architecture is based on a network element, denoted as IoT Hub, placed at the border of the constrained network, which implements the following functions: service discovery; border router; HTTP/Constrained Application Protocol (CoAP) and CoAP/CoAP proxy; cache; and resource directory. In order to protect smart objects (which cannot, because of their constrained nature, serve a large number of concurrent requests) and the IoT Hub (which serves as a gateway to the constrained network), we introduce the concept of a virtual IoT Hub replica: a Cloud-based "entity" replicating all the functions of a physical IoT Hub, which external clients query to access resources. IoT Hub replicas are constantly synchronized with the physical IoT Hub through a low-overhead protocol based on Message Queue Telemetry Transport (MQTT). An experimental evaluation, proving the feasibility and advantages of the proposed architecture, is presented.
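
    A minimal sketch of the replica-synchronisation idea, assuming the paho-mqtt 1.x client API: the physical IoT Hub publishes resource state changes over MQTT, and the Cloud replica mirrors them into a local cache that external clients query. The topic layout and broker address are made up.

```python
import json
import paho.mqtt.client as mqtt

replica_cache = {}  # resource topic -> last known representation

def on_message(client, userdata, msg):
    # Each message carries the current state of one smart-object resource.
    replica_cache[msg.topic] = json.loads(msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)
client.subscribe("iot-hub/resources/#")
# client.loop_forever()  # keep the replica synchronised with the hub
```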

  5. Resource-adaptive cognitive processes

    CERN Document Server

    Crocker, Matthew W

    2010-01-01

    This book investigates the adaptation of cognitive processes to limited resources. The central topics of this book are heuristics considered as results of the adaptation to resource limitations, through natural evolution in the case of humans, or through artificial construction in the case of computational systems; the construction and analysis of resource control in cognitive processes; and an analysis of resource-adaptivity within the paradigm of concurrent computation. The editors integrated the results of a collaborative 5-year research project that involved over 50 scientists. After a mot

  6. The progression of replication forks at natural replication barriers in live bacteria.

    Science.gov (United States)

    Moolman, M Charl; Tiruvadi Krishnan, Sriram; Kerssemakers, Jacob W J; de Leeuw, Roy; Lorent, Vincent; Sherratt, David J; Dekker, Nynke H

    2016-07-27

    Protein-DNA complexes are one of the principal barriers the replisome encounters during replication. One such barrier is the Tus-ter complex, which is a direction-dependent barrier for replication fork progression. The details concerning the dynamics of the replisome when encountering these Tus-ter barriers in the cell are poorly understood. By performing quantitative fluorescence microscopy with microfluidics, we investigate the effect on the replisome when encountering these barriers in live Escherichia coli cells. We make use of an E. coli variant that includes only an ectopic origin of replication, positioned such that one of the two replisomes encounters a Tus-ter barrier before the other replisome. This enables us to single out the effect of encountering a Tus-ter roadblock on an individual replisome. We demonstrate that the replisome remains stably bound after encountering a Tus-ter complex from the non-permissive direction. Furthermore, the replisome is only transiently blocked, and continues replication beyond the barrier. Additionally, we demonstrate that these barriers affect sister chromosome segregation by visualizing specific chromosomal loci in the presence and absence of the Tus protein. These observations demonstrate the resilience of the replication fork to natural barriers and the sensitivity of chromosome alignment to fork progression.

  7. Self-Replication of Localized Vegetation Patches in Scarce Environments

    Science.gov (United States)

    Bordeu, Ignacio; Clerc, Marcel G.; Couteron, Piere; Lefever, René; Tlidi, Mustapha

    2016-09-01

    Desertification due to climate change and increasing drought periods is a worldwide problem for both ecology and economy. Our ability to understand how vegetation manages to survive and propagate through arid and semiarid ecosystems may be useful in the development of future strategies to prevent desertification, preserve flora (and the fauna within), or even make use of scarce soil resources. In this paper, we study a robust phenomenon observed in semi-arid ecosystems, by which localized vegetation patches split in a process called self-replication. Localized patches of vegetation are visible in nature at various spatial scales. Even though they have been described in the literature, their growth mechanisms remain largely unexplored. Here, we develop an innovative statistical analysis based on real field observations to show that patches may exhibit deformation and splitting. This growth mechanism is the opposite of desertification, since it allows territories devoid of vegetation to be repopulated. We investigate these aspects by characterizing quantitatively, with a simple mathematical model, a new class of instabilities that leads to the observed self-replication phenomenon.

  8. Selective recruitment of nuclear factors to productively replicating herpes simplex virus genomes.

    Directory of Open Access Journals (Sweden)

    Jill A Dembowski

    2015-05-01

    Much of the HSV-1 life cycle is carried out in the cell nucleus, including the expression, replication, repair, and packaging of viral genomes. Viral proteins, as well as cellular factors, play essential roles in these processes. Isolation of proteins on nascent DNA (iPOND) was developed to label and purify cellular replication forks. We adapted aspects of this method to label viral genomes in order to both image and purify replicating HSV-1 genomes for the identification of associated proteins. Many viral and cellular factors were enriched on viral genomes, including factors that mediate DNA replication, repair, chromatin remodeling, transcription, and RNA processing. As infection proceeded, packaging and structural components were enriched to a greater extent. Among the more abundant proteins that copurified with genomes were the viral transcription factor ICP4 and the replication protein ICP8. Furthermore, all seven viral replication proteins were enriched on viral genomes, along with cellular PCNA and topoisomerases, while other cellular replication proteins were not detected. The chromatin-remodeling complexes present on viral genomes included the INO80, SWI/SNF, NURD, and FACT complexes, which may prevent chromatinization of the genome. Consistent with this conclusion, histones were not readily recovered with purified viral genomes, and imaging studies revealed an underrepresentation of histones on viral genomes. RNA polymerase II, the mediator complex, TFIID, TFIIH, and several other transcriptional activators and repressors were also affinity purified with viral DNA. The presence of INO80, NURD, SWI/SNF, mediator, TFIID, and TFIIH components is consistent with previous studies in which these complexes copurified with ICP4. Therefore, ICP4 is likely involved in the recruitment of these key cellular chromatin remodeling and transcription factors to viral genomes. Taken together, iPOND is a valuable method for the study of viral genome dynamics

  9. Replication of micro and nano surface geometries

    DEFF Research Database (Denmark)

    Hansen, Hans Nørgaard; Hocken, R.J.; Tosello, Guido

    2011-01-01

    The paper describes the state-of-the-art in replication of surface texture and topography at micro and nano scale. The description includes replication of surfaces in polymers, metals and glass. Three main technological areas enabled by surface replication processes are presented: manufacture of net-shape micro/nano surfaces, tooling (i.e. master making), and surface quality control (metrology, inspection). Replication processes and methods, as well as the metrology of surfaces to determine the degree of replication, are presented and classified. Examples from various application areas are given, including replication for surface texture measurements, surface roughness standards, manufacture of micro and nano structured functional surfaces, replicated surfaces for optical applications (e.g. optical gratings), and process chains based on combinations of repeated surface replication steps.

  10. A dynamic replication management strategy in distributed GIS

    Science.gov (United States)

    Pan, Shaoming; Xiong, Lian; Xu, Zhengquan; Chong, Yanwen; Meng, Qingxiang

    2018-03-01

    Replication strategy is one of the effective solutions for meeting service response-time requirements: data are prepared in advance to avoid the delay of reading them from disk. This paper presents a new method to create copies that considers the selection of the replica set, the number of copies for each replica, and the placement strategy for all copies. First, the popularity of each data item is computed, considering both historical access records and the timeliness of those records. Then, the replica set is selected based on recent popularity. An enhanced Q-value scheme is proposed to assign the number of copies for each replica. Finally, a copy placement strategy is designed to meet load-balance requirements. In addition, we present several experiments comparing the proposed method with other replication management strategies. The results show that the proposed model performs better than the other algorithms in all respects, and experiments with different parameters also demonstrate the effectiveness and adaptability of the proposed algorithm.
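
    A sketch of the first two steps described above: a popularity score in which each access record is discounted by its age, and a proportional assignment of copies to the most popular data. The exponential decay and the proportional rule are stand-ins for the paper's timeliness weighting and enhanced Q-value scheme.

```python
import math
import time

def popularity(access_times, half_life=86_400.0, now=None):
    """Sum of access records, each discounted by its age (half-life in s)."""
    now = time.time() if now is None else now
    return sum(math.exp(-math.log(2) * (now - t) / half_life)
               for t in access_times)

def assign_copies(popularities: dict, total_copies: int, replica_count: int):
    """Give each of the top `replica_count` items a share of the copy budget."""
    top = sorted(popularities, key=popularities.get, reverse=True)[:replica_count]
    weight = sum(popularities[d] for d in top)
    return {d: max(1, round(total_copies * popularities[d] / weight)) for d in top}
```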

  11. A New Replication Norm for Psychology

    Directory of Open Access Journals (Sweden)

    Etienne P LeBel

    2015-10-01

    In recent years, there has been growing concern regarding the replicability of findings in psychology, including a mounting number of prominent findings that have failed to replicate in high-powered independent replication attempts. In the face of this replicability “crisis of confidence”, several initiatives have been implemented to increase the reliability of empirical findings. In the current article, I propose a new replication norm that aims to further boost the dependability of findings in psychology. Paralleling the extant social norm that researchers should peer review about three times as many articles as they themselves publish per year, the new replication norm states that researchers should aim to independently replicate important findings in their own research areas in proportion to the number of original studies they themselves publish per year (e.g., a 4:1 original-to-replication studies ratio). I argue this simple approach could significantly advance our science by increasing the reliability and cumulative nature of our empirical knowledge base, accelerating our theoretical understanding of psychological phenomena, instilling a focus on quality rather than quantity, and by facilitating our transformation toward a research culture where executing and reporting independent direct replications is viewed as an ordinary part of the research process. To help promote the new norm, I delineate (1) how each of the major constituencies of the research process (i.e., funders, journals, professional societies, departments, and individual researchers) can incentivize replications and promote the new norm, and (2) the obstacles each constituency faces in supporting the new norm.

  12. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has demand for the computing resources needed for event reconstruction. We report on the evolution of resource usage, in terms of CPU and RAM, in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded beyond event reconstruction to include all workflows, from Monte Carlo generation through to end-user physics analysis. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries, collected into pre-formatted auto-generated Web pages that allow the ATLAS developer community to track the performance of their algorithms. This information is preferentially channelled to domain leaders and developers through JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse at the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC, and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
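
    A sketch of the kind of per-process sampling such monitoring relies on, assuming the third-party psutil package (not the ATLAS PerfMon code itself): record a job's resident memory at intervals so profiles can be plotted and compared across releases.

```python
import time
import psutil

def sample_rss(pid: int, interval: float = 1.0, samples: int = 5):
    """Return `samples` resident-set-size readings (bytes) for a process."""
    proc = psutil.Process(pid)
    readings = []
    for _ in range(samples):
        readings.append(proc.memory_info().rss)
        time.sleep(interval)
    return readings
```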

  13. Centromere replication timing determines different forms of genomic instability in Saccharomyces cerevisiae checkpoint mutants during replication stress.

    Science.gov (United States)

    Feng, Wenyi; Bachant, Jeff; Collingwood, David; Raghuraman, M K; Brewer, Bonita J

    2009-12-01

    Yeast replication checkpoint mutants lose viability following transient exposure to hydroxyurea, a replication-impeding drug. In an effort to understand the basis for this lethality, we discovered that different events are responsible for inviability in checkpoint-deficient cells harboring mutations in the mec1 and rad53 genes. By monitoring genomewide replication dynamics of cells exposed to hydroxyurea, we show that cells with a checkpoint deficient allele of RAD53, rad53K227A, fail to duplicate centromeres. Following removal of the drug, however, rad53K227A cells recover substantial DNA replication, including replication through centromeres. Despite this recovery, the rad53K227A mutant fails to achieve biorientation of sister centromeres during recovery from hydroxyurea, leading to secondary activation of the spindle assembly checkpoint (SAC), aneuploidy, and lethal chromosome segregation errors. We demonstrate that cell lethality from this segregation defect could be partially remedied by reinforcing bipolar attachment. In contrast, cells with the mec1-1 sml1-1 mutations suffer from severely impaired replication resumption upon removal of hydroxyurea. mec1-1 sml1-1 cells can, however, duplicate at least some of their centromeres and achieve bipolar attachment, leading to abortive segregation and fragmentation of incompletely replicated chromosomes. Our results highlight the importance of replicating yeast centromeres early and reveal different mechanisms of cell death due to differences in replication fork progression.

  14. Using Mosix for Wide-Area Computational Resources

    Science.gov (United States)

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources, usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added cost of dedicating resources.

  15. A URI-based approach for addressing fragments of media resources on the Web

    NARCIS (Netherlands)

    E. Mannens; D. van Deursen; R. Troncy (Raphael); S. Pfeiffer; C. Parker (Conrad); Y. Lafon; A.J. Jansen (Jack); M. Hausenblas; R. van de Walle

    2011-01-01

    To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g., from a word or an image within a Web page) as well as hyperlinking
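
    The addressing scheme at issue here is the W3C Media Fragments URI syntax, which names a fragment of a media resource directly in the URI; below are representative fragment forms and a tiny parser for the temporal dimension (the URIs themselves are illustrative).

```python
from urllib.parse import urlparse

# http://example.com/video.mp4#t=10,20               temporal: seconds 10-20
# http://example.com/video.mp4#xywh=160,120,320,240  spatial region
# http://example.com/video.mp4#track=audio           a single track

def temporal_fragment(uri):
    """Return (start, end) seconds for a '#t=start,end' fragment, else None."""
    frag = urlparse(uri).fragment
    if not frag.startswith("t="):
        return None
    start, _, end = frag[2:].partition(",")
    return float(start or 0), (float(end) if end else None)

print(temporal_fragment("http://example.com/video.mp4#t=10,20"))  # (10.0, 20.0)
```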

  16. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC, wherein the intensive components of an application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  17. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC, wherein the intensive components of an application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  18. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2011-12-01

    The purpose of the article is to analyze, theoretically and practically, the features of informal computer-based communication in the context of an organization’s technological resources. Methodology: meta-analysis, survey and descriptive analysis. Findings: according to scientists, the functions of informal communication cover sharing of work-related information, coordination of team activities, spread of organizational culture and a feeling of interdependence and affinity. Also, informal communication widens individuals’ recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations, because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization, so electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  19. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Agota Giedrė Raišienė

    2013-08-01

    Full Text Available The purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology: meta-analysis, survey and descriptive analysis. Findings: according to scientists, the functions of informal communication cover sharing of work-related information, coordination of team activities, spread of organizational culture and a feeling of interdependence and affinity. Informal communication also widens individuals' recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or redirects them outside the organization. So, electronic communication is not beneficial for developing ties in an informal organizational network. The empirical research showed that a significant part of courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and acquaintances shows that workers of the court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  20. The Inherent Asymmetry of DNA Replication.

    Science.gov (United States)

    Snedeker, Jonathan; Wooten, Matthew; Chen, Xin

    2017-10-06

    Semiconservative DNA replication has provided an elegant solution to the fundamental problem of how life is able to proliferate in a way that allows cells, organisms, and populations to survive and replicate many times over. Somewhat lost, however, in our admiration for this mechanism is an appreciation for the asymmetries that occur in the process of DNA replication. As we discuss in this review, these asymmetries arise as a consequence of the structure of the DNA molecule and the enzymatic mechanism of DNA synthesis. Increasing evidence suggests that asymmetries in DNA replication are able to play a central role in the processes of adaptation and evolution by shaping the mutagenic landscape of cells. Additionally, in eukaryotes, recent work has demonstrated that the inherent asymmetries in DNA replication may play an important role in the process of chromatin replication. As chromatin plays an essential role in defining cell identity, asymmetries generated during the process of DNA replication may play critical roles in cell fate decisions related to patterning and development.

  1. Replication dynamics of the yeast genome.

    Science.gov (United States)

    Raghuraman, M K; Winzeler, E A; Collingwood, D; Hunt, S; Wodicka, L; Conway, A; Lockhart, D J; Davis, R W; Brewer, B J; Fangman, W L

    2001-10-05

    Oligonucleotide microarrays were used to map the detailed topography of chromosome replication in the budding yeast Saccharomyces cerevisiae. The times of replication of thousands of sites across the genome were determined by hybridizing replicated and unreplicated DNAs, isolated at different times in S phase, to the microarrays. Origin activations take place continuously throughout S phase but with most firings near mid-S phase. Rates of replication fork movement vary greatly from region to region in the genome. The two ends of each of the 16 chromosomes are highly correlated in their times of replication. This microarray approach is readily applicable to other organisms, including humans.

  2. How job demands, resources, and burnout predict objective performance: a constructive replication.

    Science.gov (United States)

    Bakker, Arnold B; Van Emmerik, Hetty; Van Riet, Pim

    2008-07-01

    The present study uses the Job Demands-Resources model (Bakker & Demerouti, 2007) to examine how job characteristics and burnout (exhaustion and cynicism) contribute to explaining variance in objective team performance. A central assumption in the model is that working characteristics evoke two psychologically different processes. In the first process, job demands lead to constant psychological overtaxing and in the long run to exhaustion. In the second process, a lack of job resources precludes actual goal accomplishment, leading to cynicism. In the present study these two processes were used to predict objective team performance. A total of 176 employees from a temporary employment agency completed questionnaires on job characteristics and burnout. These self-reports were linked to information from the company's management information system about teams' (N=71) objective sales performance (actual sales divided by the stated objectives) during the 3 months after the questionnaire data collection period. The results of structural equation modeling analyses did not support the hypothesis that exhaustion mediates the relationship between job demands and performance, but confirmed that cynicism mediates the relationship between job resources and performance suggesting that work conditions influence performance particularly through the attitudinal component of burnout.

  3. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time, optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computational resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
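
    One concrete reading of "event driven" here: re-solve the MPC optimization only when the measured state drifts from the model prediction, rather than at every sampling instant. The sketch below illustrates that idea under stated assumptions (a scalar linear plant, and a hypothetical solve_mpc stand-in for a real constrained-QP solver); it is not the authors' algorithm.

    ```python
    import numpy as np

    # Scalar plant x+ = a*x + b*u, regulated toward the origin.
    a, b = 0.95, 0.5

    def solve_mpc(x):
        # Stand-in for a real MPC solve (normally a constrained QP over a
        # prediction horizon); a simple stabilizing feedback suffices here.
        return -0.8 * (a / b) * x

    rng = np.random.default_rng(1)
    x, x_pred, u = 5.0, 5.0, 0.0
    threshold = 0.05            # event condition: re-solve on large drift
    solves = 0

    for k in range(50):
        if k == 0 or abs(x - x_pred) > threshold:
            u = solve_mpc(x)    # event triggered: recompute the input
            x_pred = x          # re-synchronize the model prediction
            solves += 1
        x = a * x + b * u + rng.normal(0.0, 0.01)  # plant step with noise
        x_pred = a * x_pred + b * u                # prediction step

    print(f"|x| = {abs(x):.3f} after 50 steps using {solves} MPC solves")
    ```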

  4. High-Resolution Replication Profiles Define the Stochastic Nature of Genome Replication Initiation and Termination

    Directory of Open Access Journals (Sweden)

    Michelle Hawkins

    2013-11-01

    Full Text Available Eukaryotic genome replication is stochastic, and each cell uses a different cohort of replication origins. We demonstrate that interpreting high-resolution Saccharomyces cerevisiae genome replication data with a mathematical model allows quantification of the stochastic nature of genome replication, including the efficiency of each origin and the distribution of termination events. Single-cell measurements support the inferred values for stochastic origin activation time. A strain, in which three origins were inactivated, confirmed that the distribution of termination events is primarily dictated by the stochastic activation time of origins. Cell-to-cell variability in origin activity ensures that termination events are widely distributed across virtually the whole genome. We propose that the heterogeneity in origin usage contributes to genome stability by limiting potentially deleterious events from accumulating at particular loci.
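
    The central claim, that stochastic origin firing times dictate where termination events fall, can be illustrated with a toy Monte Carlo simulation. In the sketch below, the 200 kb segment, the three origin positions, the Gaussian firing times and the constant fork speed are all invented; termination sites are simply the meeting points of converging forks.

    ```python
    import random

    ORIGINS = [40, 100, 160]   # origin positions (kb) on a toy 200 kb segment
    FORK_V = 1.5               # fork speed, kb/min
    N_CELLS = 10_000

    def termination_sites():
        # Each origin fires at a stochastic time; two converging forks meet at
        # the midpoint shifted by half the firing-time difference times speed.
        times = [random.gauss(mu, 8) for mu in (20, 25, 22)]
        sites = []
        for (x1, t1), (x2, t2) in zip(zip(ORIGINS, times),
                                      zip(ORIGINS[1:], times[1:])):
            mid = (x1 + x2) / 2 + FORK_V * (t2 - t1) / 2
            sites.append(min(max(mid, x1), x2))  # crude passive-replication clamp
        return sites

    hist = {}
    for _ in range(N_CELLS):
        for s in termination_sites():
            bucket = int(s // 10) * 10
            hist[bucket] = hist.get(bucket, 0) + 1

    for kb in sorted(hist):
        print(f"{kb:3d}-{kb + 10:3d} kb: {hist[kb] / N_CELLS:.2f} terminations/cell")
    ```

    Because firing times vary from cell to cell, the simulated termination positions spread over broad regions rather than fixed loci, mirroring the paper's conclusion.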

  5. Energy efficiency models and optimization algorithm to enhance on-demand resource delivery in a cloud computing environment / Thusoyaone Joseph Moemi

    OpenAIRE

    Moemi, Thusoyaone Joseph

    2013-01-01

    Online hosted services are what is referred to as Cloud Computing. Access to these services is via the internet. It shifts the traditional IT resource ownership model to renting. Thus, the high cost of infrastructure cannot limit the less privileged from experiencing the benefits that this new paradigm brings. Therefore, cloud computing provides flexible services to cloud users in the form of software, platform and infrastructure as services. The goal behind cloud computing is to provi...

  6. DNA replication and post-replication repair in U.V.-sensitive mouse neuroblastoma cells

    International Nuclear Information System (INIS)

    Lavin, M.F.; McCombe, P.; Kidson, C.

    1976-01-01

    Mouse neuroblastoma cells differentiated when grown in the absence of serum; differentiation was reversed on the addition of serum. Differentiated cells were more sensitive to U.V.-radiation than proliferating cells. Whereas addition of serum to differentiated neuroblastoma cells normally resulted in immediate, synchronous entry into S phase, irradiation just before the addition of serum resulted in a long delay in the onset of DNA replication. During this lag period, incorporated ³H-thymidine appeared in the light density region of CsCl gradients, reflecting either repair synthesis or abortive replication. Post-replication repair (gap-filling) was found to be present in proliferating cells and at certain times in differentiated cells. It is suggested that the sensitivity of differentiated neuroblastoma cells to U.V.-radiation may have been due to ineffective post-replication repair or to deficiencies in more than one repair mechanism, with reduction in repair capacity beyond a critical threshold. (author)

  7. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  8. DATABASE REPLICATION IN HETEROGENEOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises today is an increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that is capable of replicating databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MSSQL database server running on Windows. The data will be replicated to MyS...
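
    A minimal sketch of the kind of cross-platform replication described, assuming Python's uniform DB-API: rows whose change-tracking column advanced are copied into an identically shaped target table. Here sqlite3 stands in for both ends so the sketch runs anywhere; in the abstract's scenario the source connection would come from an MSSQL driver (e.g. pyodbc) and the target from a MySQL driver, and a production system would use log-based capture rather than this polling loop. Table and column names are invented.

    ```python
    import sqlite3

    # Stand-ins for heterogeneous endpoints; the DB-API surface is the same.
    src = sqlite3.connect(":memory:")
    dst = sqlite3.connect(":memory:")
    for db in (src, dst):
        db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
                   " total REAL, ver INTEGER)")
    src.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(1, 9.5, 1), (2, 20.0, 1), (3, 7.25, 2)])

    def replicate(src, dst, last_ver):
        """Copy rows whose version column exceeds the last seen version."""
        rows = src.execute("SELECT id, total, ver FROM orders WHERE ver > ?",
                           (last_ver,)).fetchall()
        # Upsert syntax differs per platform (e.g. MySQL uses REPLACE INTO).
        dst.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
        dst.commit()
        return max([last_ver] + [r[2] for r in rows])

    cursor = replicate(src, dst, last_ver=0)
    print(dst.execute("SELECT COUNT(*) FROM orders").fetchone()[0],
          "rows replicated; version cursor at", cursor)
    ```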

  9. Overcoming natural replication barriers: differential helicase requirements.

    Science.gov (United States)

    Anand, Ranjith P; Shah, Kartik A; Niu, Hengyao; Sung, Patrick; Mirkin, Sergei M; Freudenreich, Catherine H

    2012-02-01

    DNA sequences that form secondary structures or bind protein complexes are known barriers to replication and potential inducers of genome instability. In order to determine which helicases facilitate DNA replication across these barriers, we analyzed fork progression through them in wild-type and mutant yeast cells, using 2-dimensional gel-electrophoretic analysis of the replication intermediates. We show that the Srs2 protein facilitates replication of hairpin-forming CGG/CCG repeats and prevents chromosome fragility at the repeat, whereas it does not affect replication of G-quadruplex forming sequences or a protein-bound repeat. Srs2 helicase activity is required for hairpin unwinding and fork progression. Also, the PCNA binding domain of Srs2 is required for its in vivo role of replication through hairpins. In contrast, the absence of Sgs1 or Pif1 helicases did not inhibit replication through structural barriers, though Pif1 did facilitate replication of a telomeric protein barrier. Interestingly, replication through a protein barrier but not a DNA structure barrier was modulated by nucleotide pool levels, illuminating a different mechanism by which cells can regulate fork progression through protein-mediated stall sites. Our analyses reveal fundamental differences in the replication of DNA structural versus protein barriers, with Srs2 helicase activity exclusively required for fork progression through hairpin structures.

  10. Exploiting replicative stress to treat cancer

    DEFF Research Database (Denmark)

    Dobbelstein, Matthias; Sørensen, Claus Storgaard

    2015-01-01

    DNA replication in cancer cells is accompanied by stalling and collapse of the replication fork and signalling in response to DNA damage and/or premature mitosis; these processes are collectively known as 'replicative stress'. Progress is being made to increase our understanding of the mechanisms...

  11. Chromatin maturation depends on continued DNA-replication

    International Nuclear Information System (INIS)

    Schlaeger, E.J.; Puelm, W.; Knippers, R.

    1983-01-01

    The structure of [³H]thymidine pulse-labeled chromatin in lymphocytes differs from that of non-replicating chromatin by several operational criteria which are related to the higher nuclease sensitivity of replicating chromatin. These structural features of replicating chromatin rapidly disappear when the [³H]thymidine pulse is followed by a chase in the presence of an excess of non-radioactive thymidine. However, when the rate of DNA replication is reduced, as in cycloheximide-treated lymphocytes, chromatin maturation is retarded. No chromatin maturation is observed when nuclei from pulse-labeled lymphocytes are incubated in vitro in the absence of DNA precursors. In contrast, when these nuclei are incubated under conditions known to be optimal for DNA replication, the structure of replicating chromatin is efficiently converted to that of 'mature', non-replicating chromatin. The authors conclude that the properties of nascent DNA and/or the distance from the replication fork are important factors in chromatin maturation. (Auth.)

  12. rMATS: robust and flexible detection of differential alternative splicing from replicate RNA-Seq data.

    Science.gov (United States)

    Shen, Shihao; Park, Juw Won; Lu, Zhi-xiang; Lin, Lan; Henry, Michael D; Wu, Ying Nian; Zhou, Qing; Xing, Yi

    2014-12-23

    Ultra-deep RNA sequencing (RNA-Seq) has become a powerful approach for genome-wide analysis of pre-mRNA alternative splicing. We previously developed multivariate analysis of transcript splicing (MATS), a statistical method for detecting differential alternative splicing between two RNA-Seq samples. Here we describe a new statistical model and computer program, replicate MATS (rMATS), designed for detection of differential alternative splicing from replicate RNA-Seq data. rMATS uses a hierarchical model to simultaneously account for sampling uncertainty in individual replicates and variability among replicates. In addition to the analysis of unpaired replicates, rMATS also includes a model specifically designed for paired replicates between sample groups. The hypothesis-testing framework of rMATS is flexible and can assess the statistical significance over any user-defined magnitude of splicing change. The performance of rMATS is evaluated by the analysis of simulated and real RNA-Seq data. rMATS outperformed two existing methods for replicate RNA-Seq data in all simulation settings, and RT-PCR yielded a high validation rate (94%) in an RNA-Seq dataset of prostate cancer cell lines. Our data also provide guiding principles for designing RNA-Seq studies of alternative splicing. We demonstrate that it is essential to incorporate biological replicates in the study design. Of note, pooling RNAs or merging RNA-Seq data from multiple replicates is not an effective approach to account for variability, and the result is particularly sensitive to outliers. The rMATS source code is freely available at rnaseq-mats.sourceforge.net/. As the popularity of RNA-Seq continues to grow, we expect rMATS will be useful for studies of alternative splicing in diverse RNA-Seq projects.
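
    The quantity rMATS models, the exon inclusion level (often called PSI), can be illustrated with a deliberately simplified calculation: PSI per replicate from inclusion/skipping read counts, then an ordinary Welch t-test across replicates. This is a didactic stand-in, not rMATS's hierarchical likelihood model, and all counts are invented.

    ```python
    from math import sqrt
    from statistics import mean, stdev

    def psi(inclusion_reads, skipping_reads, inc_len=2, skp_len=1):
        """Inclusion level, normalizing counts by effective isoform lengths."""
        i, s = inclusion_reads / inc_len, skipping_reads / skp_len
        return i / (i + s)

    # Invented (inclusion, skipping) counts per replicate in two groups.
    group1 = [psi(120, 30), psi(110, 40), psi(130, 25)]
    group2 = [psi(60, 80), psi(70, 75), psi(55, 90)]

    def welch_t(a, b):
        va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
        return (mean(a) - mean(b)) / sqrt(va + vb)

    print(f"mean PSI: {mean(group1):.2f} vs {mean(group2):.2f}, "
          f"Welch t = {welch_t(group1, group2):.1f}")
    ```

    The value of replicates is visible even here: with a single sample per group there is no variance estimate at all, which is one reason the paper argues against pooling RNAs or merging data across replicates.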

  13. The Escherichia coli Tus-Ter replication fork barrier causes site-specific DNA replication perturbation in yeast

    DEFF Research Database (Denmark)

    Larsen, Nicolai B; Sass, Ehud; Suski, Catherine

    2014-01-01

    Replication fork (RF) pausing occurs at both 'programmed' sites and non-physiological barriers (for example, DNA adducts). Programmed RF pausing is required for site-specific DNA replication termination in Escherichia coli, and this process requires the binding of the polar terminator protein, Tus...... as a versatile, site-specific, heterologous DNA replication-perturbing system, with a variety of potential applications....

  14. Open Educational Resources in Canada 2015

    Science.gov (United States)

    McGreal, Rory; Anderson, Terry; Conrad, Dianne

    2015-01-01

    Canada's important areas of expertise in open educational resources (OER) are beginning to be built upon or replicated more broadly in all education and training sectors. This paper provides an overview of the state of the art in OER initiatives and open higher education in general in Canada, providing insights into what is happening nationally…

  15. Piping data bank and erection system of Angra 2: structure, computational resources and systems

    International Nuclear Information System (INIS)

    Abud, P.R.; Court, E.G.; Rosette, A.C.

    1992-01-01

    The Piping Data Bank of Angra 2, called the Erection Management System, was developed to manage the piping erection of the Angra 2 nuclear power plant. Beyond the erection follow-up of piping and supports, it manages the piping design, material procurement, the flow of fabrication documents, the testing of welds, and material stocks at the warehouse. The work carried out to define the structure of the data bank, the computational resources and the systems is described here. (author)

  16. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.
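
    At bottom, the computer calculation reported here is a numerical integration of an isopach (thickness) grid. A minimal sketch under stated assumptions: square grid cells, a conversion factor of 1,770 short tons per acre-foot (a standard figure for subbituminous coal), and an invented thickness grid standing in for the drill-hole data.

    ```python
    # Grid-based tonnage: tons = cell area (acres) * thickness (ft) * factor.
    TONS_PER_ACRE_FOOT = 1_770     # typical density factor, subbituminous coal
    CELL_SIDE_FT = 2_640           # hypothetical half-mile grid cells
    CELL_ACRES = CELL_SIDE_FT ** 2 / 43_560   # 160 acres per cell

    # Invented isopach grid (feet of coal; 0 where the bed is absent).
    thickness_ft = [
        [12, 18, 25, 31],
        [ 9, 15, 22, 28],
        [ 0,  6, 14, 20],
    ]

    total_tons = sum(t * CELL_ACRES * TONS_PER_ACRE_FOOT
                     for row in thickness_ft for t in row)
    print(f"resource estimate: {total_tons / 1e6:.1f} million short tons")
    ```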

  17. Enzymatic recognition of DNA replication origins

    International Nuclear Information System (INIS)

    Stayton, M.M.; Bertsch, L.; Biswas, S.

    1983-01-01

    In this paper we discuss the process of recognition of the complementary-strand origin, with emphasis on RNA polymerase action in priming M13 DNA replication, the role of primase in G4 DNA replication, and the function of protein n, a priming protein, during primosome assembly. These phage systems do not require several of the bacterial DNA replication enzymes, particularly those involved in the regulation of chromosome copy number or the initiation of replication of duplex DNA. 51 references, 13 figures, 1 table

  18. Surface Microstructure Replication in Injection Moulding

    DEFF Research Database (Denmark)

    Hansen, Hans Nørgaard; Arlø, Uffe Rolf

    2005-01-01

    topography is transcribed onto the plastic part through complex mechanisms. This replication, however, is not perfect, and the replication quality depends on the plastic material properties, the topography itself, and the process conditions. This paper describes and discusses an investigation of injection...... moulding of surface microstructures. Emphasis is put on the ability to replicate surface microstructures under normal injection moulding conditions, notably with low cost materials at low mould temperatures. The replication of surface microstructures in injection moulding has been explored...... for Polypropylene at low mould temperatures. The process conditions were varied over the recommended process window for the material. The geometry of the obtained structures was analyzed. Evidence suggests that step height replication quality depends linearly on structure width in a certain range. Further...

  19. A Resource Service Model in the Industrial IoT System Based on Transparent Computing.

    Science.gov (United States)

    Li, Weimin; Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-03-26

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which actually constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for the industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including operating system (OS), programs and data on the server-side for the IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer and interface and management layer. We also present a detailed analysis on the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experiment prototype system.

  20. Replication Protein A (RPA) Phosphorylation Prevents RPA Association with Replication Centers

    OpenAIRE

    Vassin, Vitaly M.; Wold, Marc S.; Borowiec, James A.

    2004-01-01

    Mammalian replication protein A (RPA) undergoes DNA damage-dependent phosphorylation at numerous sites on the N terminus of the RPA2 subunit. To understand the functional significance of RPA phosphorylation, we expressed RPA2 variants in which the phosphorylation sites were converted to aspartate (RPA2D) or alanine (RPA2A). Although RPA2D was incorporated into RPA heterotrimers and supported simian virus 40 DNA replication in vitro, the RPA2D mutant was selectively unable to associate with re...

  1. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  2. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
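
    The replication being discretized is the standard static log-contract portfolio: out-of-the-money options weighted so that the piecewise-linear portfolio payoff matches f(K) = (2/T)((K - K*)/K* - ln(K/K*)) at the traded strikes, in the spirit of the Derman et al. construction. The sketch below computes those discrete weights; the strike grid and maturity are invented, and no claim is made about which of the paper's strategies this corresponds to.

    ```python
    from math import log

    def target_payoff(K, K_star, T):
        """Log-contract payoff whose replication accrues realized variance."""
        return (2.0 / T) * ((K - K_star) / K_star - log(K / K_star))

    def side_weights(levels, K_star, T):
        """Weights for strikes ordered outward from K_star (ascending calls,
        descending puts): each option lifts the portfolio slope to match the
        target on the segment to the next strike."""
        weights, acc = [], 0.0
        for K0, K1 in zip(levels, levels[1:]):
            slope = abs((target_payoff(K1, K_star, T) -
                         target_payoff(K0, K_star, T)) / (K1 - K0))
            weights.append(slope - acc)
            acc = slope
        return weights

    K_star, T = 100.0, 0.5                    # boundary strike, 6-month swap
    calls = [100, 110, 120, 130, 140]         # invented traded strikes
    puts = [100, 90, 80, 70, 60]
    for K, w in zip(calls, side_weights(calls, K_star, T)):
        print(f"call {K:3.0f}: weight {w:.6f}")
    for K, w in zip(puts, side_weights(puts, K_star, T)):
        print(f"put  {K:3.0f}: weight {w:.6f}")
    ```

    Truncating the strike grid and coarsening its spacing is exactly what separates a discrete replication price from the idealized continuous one.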

  3. Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

    2012-07-14

    The UCoMS research cluster has spearheaded three research areas since August 2004, including wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefront of pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators in their respective areas of expertise, working cooperatively on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.

  4. Surface microstructure replication in injection molding

    DEFF Research Database (Denmark)

    Theilade, Uffe Arlø; Hansen, Hans Nørgaard

    2006-01-01

    topography is transcribed onto the plastic part through complex mechanisms. This replication, however, is not perfect, and the replication quality depends on the plastic material properties, the topography itself, and the process conditions. This paper describes and discusses an investigation of injection...... molding of surface microstructures. The fundamental problem of surface microstructure replication has been studied. The research is based on specific microstructures as found in lab-on-a-chip products and on rough surfaces generated from EDM (electro discharge machining) mold cavities. Emphasis is put...... on the ability to replicate surface microstructures under normal injection-molding conditions, i.e., with commodity materials within typical process windows. It was found that within typical process windows the replication quality depends significantly on several process parameters, and especially the mold...

  5. Activation of human herpesvirus replication by apoptosis.

    Science.gov (United States)

    Prasad, Alka; Remick, Jill; Zeichner, Steven L

    2013-10-01

    A central feature of herpesvirus biology is the ability of herpesviruses to remain latent within host cells. Classically, exposure to inducing agents, like activating cytokines or phorbol esters that stimulate host cell signal transduction events, and epigenetic agents (e.g., butyrate) was thought to end latency. We recently showed that Kaposi's sarcoma-associated herpesvirus (KSHV, or human herpesvirus-8 [HHV-8]) has another, alternative emergency escape replication pathway that is triggered when KSHV's host cell undergoes apoptosis, characterized by the lack of a requirement for the replication and transcription activator (RTA) protein, accelerated late gene kinetics, and production of virus with decreased infectivity. Caspase-3 is necessary and sufficient to initiate the alternative replication program. HSV-1 was also recently shown to initiate replication in response to host cell apoptosis. These observations suggested that an alternative apoptosis-triggered replication program might be a general feature of herpesvirus biology and that apoptosis-initiated herpesvirus replication may have clinical implications, particularly for herpesviruses that almost universally infect humans. To explore whether an alternative apoptosis-initiated replication program is a common feature of herpesvirus biology, we studied cell lines latently infected with Epstein-Barr virus/HHV-4, HHV-6A, HHV-6B, HHV-7, and KSHV. We found that apoptosis triggers replication for each HHV studied, with caspase-3 being necessary and sufficient for HHV replication. An alternative apoptosis-initiated replication program appears to be a common feature of HHV biology. We also found that commonly used cytotoxic chemotherapeutic agents activate HHV replication, which suggests that treatments that promote apoptosis may lead to activation of latent herpesviruses, with potential clinical significance.

  6. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  7. The Development of an Individualized Instructional Program in Beginning College Mathematics Utilizing Computer Based Resource Units. Final Report.

    Science.gov (United States)

    Rockhill, Theron D.

    Reported is an attempt to develop and evaluate an individualized instructional program in pre-calculus college mathematics. Four computer based resource units were developed in the areas of set theory, relations and function, algebra, trigonometry, and analytic geometry. Objectives were determined by experienced calculus teachers, and…

  8. Evolution of Replication Machines

    Science.gov (United States)

    Yao, Nina Y.; O'Donnell, Mike E.

    2016-01-01

    The machines that decode and regulate genetic information require the translation, transcription and replication pathways essential to all living cells. Thus, it might be expected that all cells share the same basic machinery for these pathways that were inherited from the primordial ancestor cell from which they evolved. A clear example of this is found in the translation machinery that converts RNA sequence to protein. The translation process requires numerous structural and catalytic RNAs and proteins, the central factors of which are homologous in all three domains of life, bacteria, archaea and eukarya. Likewise, the central actor in transcription, RNA polymerase, shows homology among the catalytic subunits in bacteria, archaea and eukarya. In contrast, while some “gears” of the genome replication machinery are homologous in all domains of life, most components of the replication machine appear to be unrelated between bacteria and those of archaea and eukarya. This review will compare and contrast the central proteins of the “replisome” machines that duplicate DNA in bacteria, archaea and eukarya, with an eye to understanding the issues surrounding the evolution of the DNA replication apparatus. PMID:27160337

  9. DNA replication origins—where do we begin?

    Science.gov (United States)

    Prioleau, Marie-Noëlle; MacAlpine, David M.

    2016-01-01

    For more than three decades, investigators have sought to identify the precise locations where DNA replication initiates in mammalian genomes. The development of molecular and biochemical approaches to identify start sites of DNA replication (origins) based on the presence of defining and characteristic replication intermediates at specific loci led to the identification of only a handful of mammalian replication origins. The limited number of identified origins prevented a comprehensive and exhaustive search for conserved genomic features that were capable of specifying origins of DNA replication. More recently, the adaptation of origin-mapping assays to genome-wide approaches has led to the identification of tens of thousands of replication origins throughout mammalian genomes, providing an unprecedented opportunity to identify both genetic and epigenetic features that define and regulate their distribution and utilization. Here we summarize recent advances in our understanding of how primary sequence, chromatin environment, and nuclear architecture contribute to the dynamic selection and activation of replication origins across diverse cell types and developmental stages. PMID:27542827

  10. Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact

    Science.gov (United States)

    Frank, Jeremy

    2004-01-01

    We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques, called the Flow Balance Constraint, to tightly bound the amount of available resource for a set of partially ordered events with piecewise constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds, but at increased computational cost.
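
    The flavor of such bounds can be shown in a small sketch: given events with constant resource impacts and a partial order, a naive optimistic bound on the level just after an event adds all required predecessors plus every positive-impact event that may precede it (a pessimistic bound would use the negative ones instead). This illustrates only the problem setting that the Flow Balance Constraint tightens; it reproduces neither Laborie's nor Muscettola's technique nor the paper's.

    ```python
    # Events with constant resource impact; edges encode "must occur before".
    impact = {"a": +3, "b": -2, "c": +1, "d": -1}
    edges = [("a", "b"), ("a", "c")]   # d is unordered w.r.t. everything

    def before(x, y):
        """True if x is constrained to occur before y (transitive closure)."""
        frontier, seen = [x], set()
        while frontier:
            n = frontier.pop()
            for u, v in edges:
                if u == n and v not in seen:
                    seen.add(v)
                    frontier.append(v)
        return y in seen

    def upper_bound(e, init=0):
        """Optimistic level just after e: all predecessors have occurred,
        plus any positive event that is not forced to come after e."""
        level = init + impact[e]
        for other in impact:
            if other == e:
                continue
            if before(other, e):
                level += impact[other]          # required predecessor
            elif not before(e, other) and impact[other] > 0:
                level += impact[other]          # may precede e; be optimistic
        return level

    for e in impact:
        print(e, "optimistic upper bound:", upper_bound(e))
    ```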

  11. Computer modelling of the UK wind energy resource. Phase 2. Application of the methodology

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Makari, M; Newton, K; Ravenscroft, F; Whittaker, J

    1993-12-31

    This report presents the results of the second phase of a programme to estimate the UK wind energy resource. The overall objective of the programme is to provide quantitative resource estimates using a mesoscale (resolution about 1km) numerical model for the prediction of wind flow over complex terrain, in conjunction with digitised terrain data and wind data from surface meteorological stations. A network of suitable meteorological stations has been established and long term wind data obtained. Digitised terrain data for the whole UK were obtained, and wind flow modelling using the NOABL computer program has been performed. Maps of extractable wind power have been derived for various assumptions about wind turbine characteristics. Validation of the methodology indicates that the results are internally consistent, and in good agreement with available comparison data. Existing isovent maps, based on standard meteorological data which take no account of terrain effects, indicate that 10m annual mean wind speeds vary between about 4.5 and 7 m/s over the UK with only a few coastal areas over 6 m/s. The present study indicates that 28% of the UK land area had speeds over 6 m/s, with many hill sites having 10m speeds over 10 m/s. It is concluded that these `first order` resource estimates represent a substantial improvement over the presently available `zero order` estimates. The results will be useful for broad resource studies and initial site screening. Detailed resource evaluation for local sites will require more detailed local modelling or ideally long term field measurements. (12 figures, 14 tables, 21 references). (Author)
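
    For a sense of scale behind such figures, a mean wind speed can be converted into a first-order power density estimate. The sketch below assumes a Rayleigh speed distribution, for which the mean cubed speed is (6/pi) times the cube of the mean speed, and sea-level air density; this is a textbook estimate, not the NOABL methodology.

    ```python
    from math import pi

    RHO = 1.225   # air density at sea level, kg/m^3

    def mean_power_density(v_mean):
        """W/m^2 through the rotor plane, assuming Rayleigh-distributed
        speeds: E[v^3] = (6/pi) * v_mean**3."""
        return 0.5 * RHO * (6 / pi) * v_mean ** 3

    for v in (4.5, 6.0, 10.0):   # 10 m mean speeds quoted in the abstract
        print(f"mean speed {v:4.1f} m/s -> {mean_power_density(v):6.0f} W/m^2")
    ```

    The cubic dependence explains why the difference between a 6 m/s site and a 10 m/s hill site matters far more than the raw speeds suggest.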

  12. Chromosomal DNA replication of Vicia faba cells

    International Nuclear Information System (INIS)

    Ikushima, Takaji

    1976-01-01

    The chromosomal DNA replication of higher plant cells has been investigated by DNA fiber autoradiography. The nuclear DNA fibers of Vicia root meristematic cells are organized into many tandem arrays of replication units, or replicons, which exist as clusters with respect to replication. DNA is replicated bidirectionally from the initiation points at an average rate of 0.15 μm/min at 20 °C, and the average inter-initiation interval is about 16 μm. The manner of chromosomal DNA replication in this higher plant is similar to that found in other eukaryotic cells at a subchromosomal level. (auth.)

  13. Performance analysis of cloud computing services for many-tasks scientific computing

    NARCIS (Netherlands)

    Iosup, A.; Ostermann, S.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.

    2011-01-01

    Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a

  14. Surface micro topography replication in injection moulding

    DEFF Research Database (Denmark)

    Arlø, Uffe Rolf

    Thermoplastic injection moulding is a widely used industrial process that involves surface generation by replication. The surface topography of injection moulded plastic parts can be important for aesthetical or technical reasons. With the emergence of microengineering and nanotechnology additional...... importance of surface topography follows. In general the replication is not perfect and the topography of the plastic part differs from the inverse topography of the mould cavity. It is desirable to be able to control the degree of replication perfection or replication quality. This requires an understanding...... of the physical mechanisms of replication. Such understanding can lead to improved process design and facilitate in-line process quality control with respect to surface properties. The purpose of the project is to identify critical factors that affect topography replication quality and to obtain an understanding...

  15. DNA replication in ultraviolet light irradiated Chinese hamster cells: the nature of replicon inhibition and post-replication repair

    International Nuclear Information System (INIS)

    Doniger, J.

    1978-01-01

    DNA replication in ultraviolet light irradiated Chinese hamster cells was studied using techniques of DNA fiber autoradiography and alkaline sucrose sedimentation. Bidirectionally growing replicons were observed in the autoradiograms independent of the irradiation conditions. After a dose of 5 J/m² at 254 nm the rate of fork progression was the same as in unirradiated cells, while the rate of replication was reduced by 50%. After a dose of 10 J/m² the rate of fork progression was reduced 40%, while the replication rate was only 25% of normal. Therefore, at low doses of ultraviolet light irradiation, the inhibition of DNA replication is due to a reduction in the number of functioning replicons, while at higher doses the rate of fork progression is also slowed. Those replicons which no longer function after irradiation are blocked in fork movement rather than replicon initiation. After irradiation, pulse label was first incorporated into short nascent strands, the average size of which was approximately equal to the distance between pyrimidine dimers. Under conditions where post-replication repair occurs these short strands were eventually joined into larger pieces. Finally, the data show that slowing post-replication repair with caffeine does not slow fork movement. The results presented here support the post-replication repair model of 'gapped synthesis' and rule out a major role for 'replicative bypass'. (author)

  16. Computational modeling as a tool for water resources management: an alternative approach to problems of multiple uses

    Directory of Open Access Journals (Sweden)

    Haydda Manolla Chaves da Hora

    2012-04-01

    Full Text Available Today in Brazil there are many cases of incompatibility between the use of water and its availability. Due to the increase in the required variety and volume, the concept of multiple uses was created, as stated by Pinheiro et al. (2007). The use of the same resource to satisfy different needs with several restrictions (qualitative and quantitative) creates conflicts. Aiming to minimize these conflicts, this work was applied to the particular cases of Hydrographic Regions VI and VIII of Rio de Janeiro State, using computational modeling techniques (based on the MOHID software - Water Modeling System) as a tool for water resources management.

  17. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.
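
    The style of kinetic analysis SAAM supports starts from compartmental models such as the two-compartment system sketched below (generic, invented rate constants, integrated with scipy). This only illustrates the modeling formalism; it is unrelated to SAAM's actual fitting engine.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # Two-compartment model: plasma (q1) exchanges with tissue (q2), and
    # material is cleared irreversibly from plasma. Rates are illustrative.
    k12, k21, k01 = 0.4, 0.2, 0.1   # per hour

    def dq(q, t):
        q1, q2 = q
        return [-(k12 + k01) * q1 + k21 * q2,
                k12 * q1 - k21 * q2]

    t = np.linspace(0, 24, 7)       # hours after a unit bolus into plasma
    for ti, (q1, q2) in zip(t, odeint(dq, [1.0, 0.0], t)):
        print(f"t = {ti:4.1f} h  plasma = {q1:.3f}  tissue = {q2:.3f}")
    ```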

  18. Dynamics of Escherichia coli Chromosome Segregation during Multifork Replication

    DEFF Research Database (Denmark)

    Nielsen, Henrik Jørck; Youngren, Brenda; Hansen, Flemming G.

    2007-01-01

    Slowly growing Escherichia coli cells have a simple cell cycle, with replication and progressive segregation of the chromosome completed before cell division. In rapidly growing cells, initiation of replication occurs before the previous replication rounds are complete. At cell division, the chromosomes contain multiple replication forks and must be segregated while this complex pattern of replication is still ongoing. Here, we show that replication and segregation continue in step, starting at the origin and progressing to the replication terminus. Thus, early-replicated markers on the multiple-branched chromosomes continue to separate soon after replication to form separate protonucleoids, even though they are not segregated into different daughter cells until later generations. The segregation pattern follows the pattern of chromosome replication and does not follow the cell division cycle. No extensive...

  19. Nonequilibrium Entropic Bounds for Darwinian Replicators

    Directory of Open Access Journals (Sweden)

    Jordi Piñero

    2018-01-01

    Full Text Available Life evolved on our planet by means of a combination of Darwinian selection and innovations leading to higher levels of complexity. The emergence and selection of replicating entities is a central problem in prebiotic evolution. Theoretical models have shown how populations of different types of replicating entities exclude or coexist with other classes of replicators. Models are typically kinetic, based on standard replicator equations. On the other hand, the presence of thermodynamic constraints for these systems remains an open question. This is largely due to the lack of a general theory of statistical methods for systems far from equilibrium. Nonetheless, a first approach to this problem has been put forward in a series of novel developments falling under the rubric of the extended second law of thermodynamics. The work presented here is twofold: firstly, we review this theoretical framework and provide a brief description of the three fundamental replicator types in prebiotic evolution: parabolic, Malthusian and hyperbolic. Secondly, we employ these previously mentioned techniques to explore how replicators are constrained by thermodynamics. Finally, we comment and discuss where further research should be focused.
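
    The three replicator classes named in the abstract are conventionally distinguished by their growth law dx/dt = k*x^p: parabolic (p = 1/2), Malthusian (p = 1, simple exponential) and hyperbolic (p = 2, with a finite-time singularity). A short sketch under these standard definitions integrates the three laws side by side; the rate constant and horizon are arbitrary.

    ```python
    # dx/dt = k * x**p for the three canonical replicator growth laws.
    cases = {"parabolic (p=0.5)": 0.5,
             "malthusian (p=1)": 1.0,
             "hyperbolic (p=2)": 2.0}
    k, dt, steps = 0.5, 0.01, 3000   # integrate to t = 30 by forward Euler

    for name, p in cases.items():
        x = 0.1
        for _ in range(steps):
            x = min(x + dt * k * x ** p, 1e9)  # cap: hyperbolic growth diverges
        print(f"{name:18s} x(t=30) ~ {x:.3g}")
    ```

    The hyperbolic law hits the cap before t = 30 (its exact solution diverges at t = 1/(k*x0) = 20 here), illustrating the finite-time singularity that sets it apart from the other two classes.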

  20. Computational system to create an entry file for replicating I-125 seeds simulating brachytherapy case studies using the MCNPX code

    Directory of Open Access Journals (Sweden)

    Leonardo da Silva Boia

    2014-03-01

    decline for short distances. Cite this article as: Boia LS, Junior J, Menezes AF, Silva AX. Computational system to create an entry file for replicating I-125 seeds simulating brachytherapy case studies using the MCNPX code. Int J Cancer Ther Oncol 2014; 2(2):02023. DOI: http://dx.doi.org/10.14319/ijcto.0202.3

  1. Rescue from replication stress during mitosis.

    Science.gov (United States)

    Fragkos, Michalis; Naim, Valeria

    2017-04-03

    Genomic instability is a hallmark of cancer and a common feature of human disorders, characterized by growth defects, neurodegeneration, cancer predisposition, and aging. Recent evidence has shown that DNA replication stress is a major driver of genomic instability and tumorigenesis. Cells can undergo mitosis with under-replicated DNA or unresolved DNA structures, and specific pathways are dedicated to resolving these structures during mitosis, suggesting that mitotic rescue from replication stress (MRRS) is a key process influencing genome stability and cellular homeostasis. Deregulation of MRRS following oncogene activation or loss-of-function of caretaker genes may be the cause of chromosomal aberrations that promote cancer initiation and progression. In this review, we discuss the causes and consequences of replication stress, focusing on its persistence in mitosis as well as the mechanisms and factors involved in its resolution, and the potential impact of incomplete replication or aberrant MRRS on tumorigenesis, aging and disease.

  2. Personality and Academic Motivation: Replication, Extension, and Replication

    Science.gov (United States)

    Jones, Martin H.; McMichael, Stephanie N.

    2015-01-01

    Previous work examines the relationships between personality traits and intrinsic/extrinsic motivation. We replicate and extend previous work to examine how personality may relate to achievement goals, efficacious beliefs, and mindset about intelligence. Approximately 200 undergraduates responded to the survey, with 150 participants replicating…

  3. DNA replication origins-where do we begin?

    Science.gov (United States)

    Prioleau, Marie-Noëlle; MacAlpine, David M

    2016-08-01

    For more than three decades, investigators have sought to identify the precise locations where DNA replication initiates in mammalian genomes. The development of molecular and biochemical approaches to identify start sites of DNA replication (origins) based on the presence of defining and characteristic replication intermediates at specific loci led to the identification of only a handful of mammalian replication origins. The limited number of identified origins prevented a comprehensive and exhaustive search for conserved genomic features that were capable of specifying origins of DNA replication. More recently, the adaptation of origin-mapping assays to genome-wide approaches has led to the identification of tens of thousands of replication origins throughout mammalian genomes, providing an unprecedented opportunity to identify both genetic and epigenetic features that define and regulate their distribution and utilization. Here we summarize recent advances in our understanding of how primary sequence, chromatin environment, and nuclear architecture contribute to the dynamic selection and activation of replication origins across diverse cell types and developmental stages. © 2016 Prioleau and MacAlpine; Published by Cold Spring Harbor Laboratory Press.

  4. Pattern replication by confined dewetting

    NARCIS (Netherlands)

    Harkema, S.; Schäffer, E.; Morariu, M.D.; Steiner, U

    2003-01-01

    The dewetting of a polymer film in a confined geometry was employed in a pattern-replication process. The instability of dewetting films is pinned by a structured confining surface, thereby replicating its topographic pattern. Depending on the surface energy of the confining surface, two different

  5. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is to distribute the computing across a great number of distributed computers, rather than local computers ...

  6. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Tom; Yang, Xi

    2018-01-16

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyberinfrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which spanned the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves from a topology and service availability perspective within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The RAINS project developed MRSP includes the following key components: i) Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) Resource Computation Engine (RCE), iii) Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyberinfrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, i.e. “RAINS Computation Engine (RCE)”. The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which facilitates a variety of resource controllers to automatically generate
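
    A toy illustration of model-based computation over a multi-resource topology: resources as graph nodes, links carrying available bandwidth, and a path query subject to a bandwidth constraint. This is a generic graph computation with invented names; it does not reproduce MRML (an ontology-based model) or the RCE itself.

    ```python
    # Links between resource nodes, annotated with available bandwidth (Gbps).
    links = {("hostA", "switch1"): 10, ("switch1", "switch2"): 40,
             ("switch2", "storage1"): 10, ("switch1", "storage2"): 1}

    def neighbors(node):
        for (u, v), bw in links.items():
            if u == node:
                yield v, bw
            elif v == node:
                yield u, bw

    def find_path(src, dst, min_bw):
        """Breadth-first search for a path where every hop offers min_bw."""
        frontier, seen = [[src]], {src}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == dst:
                return path
            for nxt, bw in neighbors(path[-1]):
                if bw >= min_bw and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    print(find_path("hostA", "storage1", min_bw=5))  # via switch1, switch2
    print(find_path("hostA", "storage2", min_bw=5))  # None: 1 Gbps hop fails
    ```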

  7. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.
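
    Spike-in benchmarking of the sort described reduces to comparing a quantifier's estimates against known input concentrations. A minimal sketch with invented numbers standing in for the ERCC concentrations and a quantifier's output, using Spearman correlation as the (rank-based, hence relative) accuracy metric.

    ```python
    from scipy.stats import spearmanr

    # Invented stand-ins: known spike-in amounts vs. one method's estimates.
    known = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
    estimated = [0.4, 1.3, 1.1, 4.5, 7.2, 17.0, 15.0, 70.0]

    rho, p = spearmanr(known, estimated)
    print(f"Spearman rho = {rho:.3f} (p = {p:.2g})")
    ```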

  8. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  9. Adenovirus sequences required for replication in vivo.

    OpenAIRE

    Wang, K; Pearson, G D

    1985-01-01

    We have studied the in vivo replication properties of plasmids carrying deletion mutations within cloned adenovirus terminal sequences. Deletion mapping located the adenovirus DNA replication origin entirely within the first 67 bp of the adenovirus inverted terminal repeat. This region could be further subdivided into two functional domains: a minimal replication origin and an adjacent auxiliary region which boosted the efficiency of replication by more than 100-fold. The minimal origin occup...

  10. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    OpenAIRE

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  11. Mammalian RAD52 Functions in Break-Induced Replication Repair of Collapsed DNA Replication Forks.

    Science.gov (United States)

    Sotiriou, Sotirios K; Kamileri, Irene; Lugli, Natalia; Evangelou, Konstantinos; Da-Ré, Caterina; Huber, Florian; Padayachy, Laura; Tardy, Sebastien; Nicati, Noemie L; Barriot, Samia; Ochs, Fena; Lukas, Claudia; Lukas, Jiri; Gorgoulis, Vassilis G; Scapozza, Leonardo; Halazonetis, Thanos D

    2016-12-15

    Human cancers are characterized by the presence of oncogene-induced DNA replication stress (DRS), making them dependent on repair pathways such as break-induced replication (BIR) for damaged DNA replication forks. To better understand BIR, we performed a targeted siRNA screen for genes whose depletion inhibited G1 to S phase progression when oncogenic cyclin E was overexpressed. RAD52, a gene dispensable for normal development in mice, was among the top hits. In cells in which fork collapse was induced by oncogenes or chemicals, the Rad52 protein localized to DRS foci. Depletion of Rad52 by siRNA or knockout of the gene by CRISPR/Cas9 compromised restart of collapsed forks and led to DNA damage in cells experiencing DRS. Furthermore, in cancer-prone, heterozygous APC mutant mice, homozygous deletion of the Rad52 gene suppressed tumor growth and prolonged lifespan. We therefore propose that mammalian RAD52 facilitates repair of collapsed DNA replication forks in cancer cells. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
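
    The resampling logic described above can be sketched compactly: leave-one-out jackknife errors at each site, then a bootstrap over those errors to bound the regional total. In the sketch below the predictor is a deliberately trivial stand-in for the paper's local spatial model and the data are simulated; only the resampling structure mirrors the approach described.

```python
# Sketch: jackknife prediction errors at each site, then bootstrap bounds
# on the regional total. The mean "model" is a trivial placeholder for a
# local spatial predictor; data are simulated.
import random

random.seed(0)

def predict(train, site):
    return sum(v for _, v in train) / len(train)   # placeholder model

data = [(i, random.lognormvariate(3, 1)) for i in range(50)]  # (site, volume)

# Jackknife: leave one site out, predict it, record the error.
errors = []
for k, (site, observed) in enumerate(data):
    train = data[:k] + data[k + 1:]
    errors.append(predict(train, site) - observed)

# Bootstrap the jackknife errors to bound the total predicted volume.
total = sum(predict(data, s) for s, _ in data)
boots = sorted(total - sum(random.choice(errors) for _ in errors)
               for _ in range(2000))
lo, hi = boots[49], boots[-50]     # approximate 95% interval
print(f"total: {total:.1f}, ~95% bounds: ({lo:.1f}, {hi:.1f})")
```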

  13. Suppression of Poxvirus Replication by Resveratrol.

    Science.gov (United States)

    Cao, Shuai; Realegeno, Susan; Pant, Anil; Satheshkumar, Panayampalli S; Yang, Zhilong

    2017-01-01

    Poxviruses continue to cause serious diseases even after eradication of the historically deadly infectious human disease, smallpox. Poxviruses are currently being developed as vaccine vectors and cancer therapeutic agents. Resveratrol is a natural polyphenol stilbenoid found in plants that has been shown to inhibit or enhance replication of a number of viruses, but the effect of resveratrol on poxvirus replication is unknown. In the present study, we found that resveratrol dramatically suppressed the replication of vaccinia virus (VACV), the prototypic member of poxviruses, in various cell types. Resveratrol also significantly reduced the replication of monkeypox virus, a zoonotic virus that is endemic in Western and Central Africa and causes human mortality. The inhibitory effect of resveratrol on poxviruses is independent of VACV N1 protein, a potential resveratrol binding target. Further experiments demonstrated that resveratrol had little effect on VACV early gene expression, while it suppressed VACV DNA synthesis, and subsequently post-replicative gene expression.

  14. Suppression of Poxvirus Replication by Resveratrol

    Directory of Open Access Journals (Sweden)

    Shuai Cao

    2017-11-01

    Full Text Available Poxviruses continue to cause serious diseases even after eradication of the historically deadly infectious human disease, smallpox. Poxviruses are currently being developed as vaccine vectors and cancer therapeutic agents. Resveratrol is a natural polyphenol stilbenoid found in plants that has been shown to inhibit or enhance replication of a number of viruses, but the effect of resveratrol on poxvirus replication is unknown. In the present study, we found that resveratrol dramatically suppressed the replication of vaccinia virus (VACV, the prototypic member of poxviruses, in various cell types. Resveratrol also significantly reduced the replication of monkeypox virus, a zoonotic virus that is endemic in Western and Central Africa and causes human mortality. The inhibitory effect of resveratrol on poxviruses is independent of VACV N1 protein, a potential resveratrol binding target. Further experiments demonstrated that resveratrol had little effect on VACV early gene expression, while it suppressed VACV DNA synthesis, and subsequently post-replicative gene expression.

  15. Crinivirus replication and host interactions

    Directory of Open Access Journals (Sweden)

    Zsofia A Kiss

    2013-05-01

    Full Text Available Criniviruses comprise one of the genera within the family Closteroviridae. Members in this family are restricted to the phloem and rely on whitefly vectors of the genera Bemisia and/or Trialeurodes for plant-to-plant transmission. All criniviruses have bipartite, positive-sense ssRNA genomes, although there is an unconfirmed report of one having a tripartite genome. Lettuce infectious yellows virus (LIYV is the type species of the genus, the best studied so far of the criniviruses and the first for which a reverse genetics system was available. LIYV RNA 1 encodes for proteins predicted to be involved in replication, and alone is competent for replication in protoplasts. Replication results in accumulation of cytoplasmic vesiculated membranous structures which are characteristic of most studied members of the Closteroviridae. These membranous structures, often referred to as BYV-type vesicles, are likely sites of RNA replication. LIYV RNA 2 is replicated in trans when co-infecting cells with RNA 1, but is temporally delayed relative to RNA1. Efficient RNA 2 replication also is dependent on the RNA 1-encoded RNA binding protein, P34. No LIYV RNA 2-encoded proteins have been shown to affect RNA replication, but at least four, CP, CPm, Hsp70h, and p59 are virion structural components and CPm is a determinant of whitefly transmissibility. Roles of other LIYV RNA 2-encoded proteins are largely as yet unknown, but P26 is a non-virion protein that accumulates in cells as characteristic plasmalemma deposits which in plants are localized within phloem parenchyma and companion cells over plasmodesmata connections to sieve elements. The two remaining crinivirus-conserved RNA 2-encoded proteins are P5 and P9. P5 is 39 amino acid protein and is encoded at the 5’ end of RNA 2 as ORF1 and is part of the hallmark closterovirus gene array. The orthologous gene in BYV has been shown to play a role in cell-to-cell movement and indicated to be localized to the

  16. Effector-Triggered Self-Replication in Coupled Subsystems.

    Science.gov (United States)

    Komáromy, Dávid; Tezcan, Meniz; Schaeffer, Gaël; Marić, Ivana; Otto, Sijbren

    2017-11-13

    In living systems processes like genome duplication and cell division are carefully synchronized through subsystem coupling. If we are to create life de novo, similar control over essential processes such as self-replication need to be developed. Here we report that coupling two dynamic combinatorial subsystems, featuring two separate building blocks, enables effector-mediated control over self-replication. The subsystem based on the first building block shows only self-replication, whereas that based on the second one is solely responsive toward a specific external effector molecule. Mixing the subsystems arrests replication until the effector molecule is added, resulting in the formation of a host-effector complex and the liberation of the building block that subsequently engages in self-replication. The onset, rate and extent of self-replication is controlled by the amount of effector present. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Addressing the "Replication Crisis": Using Original Studies to Design Replication Studies with Appropriate Statistical Power.

    Science.gov (United States)

    Anderson, Samantha F; Maxwell, Scott E

    2017-01-01

    Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
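
    The gap between intended and actual power that drives this result is easy to reproduce in a small simulation. The sketch below uses a two-sample z-approximation and a simplified noise model for the observed effect size; these are assumptions for illustration, not the authors' exact simulation design.

```python
# Sketch: plan a replication's n from the *observed* original effect size,
# then compute the power actually achieved under the *true* effect size.
# Two-sample z-approximation; alpha = .05 two-sided.
import math, random
from statistics import NormalDist

Z = NormalDist()
z_crit = Z.inv_cdf(0.975)

def n_for_power(d, power=0.80):
    """Per-group n for a two-sample test of standardized effect d."""
    return math.ceil(2 * ((z_crit + Z.inv_cdf(power)) / d) ** 2)

def power_at(d, n):
    return 1 - Z.cdf(z_crit - d * math.sqrt(n / 2))

d_true, n_orig = 0.4, 50        # true effect; original study's per-group n
random.seed(0)
achieved = []
for _ in range(10000):
    # Observed original effect = true effect + sampling noise (simplified).
    d_obs = random.gauss(d_true, math.sqrt(2 / n_orig))
    if d_obs <= 0:
        continue
    n_rep = n_for_power(d_obs, 0.80)    # planned from the noisy estimate
    achieved.append(power_at(d_true, n_rep))
print(f"mean actual power: {sum(achieved)/len(achieved):.2f} (intended .80)")
```

    Because the planned sample size is a convex function of the noisy observed effect, the average achieved power falls below the intended .80, which is the phenomenon the abstract describes.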

  18. Proteome-wide analysis of SUMO2 targets in response to pathological DNA replication stress in human cells.

    Science.gov (United States)

    Bursomanno, Sara; Beli, Petra; Khan, Asif M; Minocherhomji, Sheroy; Wagner, Sebastian A; Bekker-Jensen, Simon; Mailand, Niels; Choudhary, Chunaram; Hickson, Ian D; Liu, Ying

    2015-01-01

    SUMOylation is a form of post-translational modification involving covalent attachment of SUMO (Small Ubiquitin-like Modifier) polypeptides to specific lysine residues in the target protein. In human cells, there are four SUMO proteins, SUMO1-4, with SUMO2 and SUMO3 forming a closely related subfamily. SUMO2/3, in contrast to SUMO1, are predominantly involved in the cellular response to certain stresses, including heat shock. Substantial evidence from studies in yeast has shown that SUMOylation plays an important role in the regulation of DNA replication and repair. Here, we report a proteomic analysis of proteins modified by SUMO2 in response to DNA replication stress in S phase in human cells. We have identified a panel of 22 SUMO2 targets with increased SUMOylation during DNA replication stress, many of which play key functions within the DNA replication machinery and/or in the cellular response to DNA damage. Interestingly, POLD3 was found modified most significantly in response to a low dose aphidicolin treatment protocol that promotes common fragile site (CFS) breakage. POLD3 is the human ortholog of POL32 in budding yeast, and has been shown to act during break-induced recombinational repair. We have also shown that deficiency of POLD3 leads to an increase in RPA-bound ssDNA when cells are under replication stress, suggesting that POLD3 plays a role in the cellular response to DNA replication stress. Considering that DNA replication stress is a source of genome instability, and that excessive replication stress is a hallmark of pre-neoplastic and tumor cells, our characterization of SUMO2 targets during a perturbed S-phase should provide a valuable resource for future functional studies in the fields of DNA metabolism and cancer biology. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Replication and robustness in developmental research.

    Science.gov (United States)

    Duncan, Greg J; Engel, Mimi; Claessens, Amy; Dowsett, Chantelle J

    2014-11-01

    Replications and robustness checks are key elements of the scientific method and a staple in many disciplines. However, leading journals in developmental psychology rarely include explicit replications of prior research conducted by different investigators, and few require authors to establish in their articles or online appendices that their key results are robust across estimation methods, data sets, and demographic subgroups. This article makes the case for prioritizing both explicit replications and, especially, within-study robustness checks in developmental psychology. It provides evidence on variation in effect sizes in developmental studies and documents strikingly different replication and robustness-checking practices in a sample of journals in developmental psychology and a sister behavioral science, applied economics. Our goal is not to show that any one behavioral science has a monopoly on best practices, but rather to show how journals from a related discipline address vital concerns of replication and generalizability shared by all social and behavioral sciences. We provide recommendations for promoting graduate training in replication and robustness-checking methods and for editorial policies that encourage these practices. Although some of our recommendations may shift the form and substance of developmental research articles, we argue that they would generate considerable scientific benefits for the field. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  20. A computational model for telomere-dependent cell-replicative aging.

    Science.gov (United States)

    Portugal, R D; Land, M G P; Svaiter, B F

    2008-01-01

    Telomere shortening provides a molecular basis for the Hayflick limit. Recent data suggest that telomere shortening also influences mitotic rate. We propose a stochastic growth model of this phenomenon, assuming that cell division in each time interval is a random process whose probability decreases linearly with telomere shortening. Computer simulations of the proposed stochastic telomere-regulated model provide a good approximation of the qualitative growth of cultured human mesenchymal stem cells.
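
    A minimal sketch of such a model follows: each cell divides in a given interval with probability proportional to its remaining telomere length, and every division shortens the telomeres of both daughters. All parameter values here are illustrative, not those fitted in the paper.

```python
# Sketch of a stochastic growth model in which each cell divides, per time
# step, with a probability that declines linearly as telomeres shorten.
import random

random.seed(0)
T0 = 12                    # initial telomere "units" (illustrative)

def step(cells):
    nxt = []
    for t in cells:
        # Division probability is linear in remaining telomere length.
        if t > 0 and random.random() < 0.5 * t / T0:
            nxt += [t - 1, t - 1]   # both daughters inherit shorter telomeres
        else:
            nxt.append(t)           # quiescent this step
    return nxt

cells = [T0]
for gen in range(120):
    cells = step(cells)
    if gen % 20 == 19:
        print(f"step {gen + 1:3d}: {len(cells):5d} cells")
# Growth slows and halts as telomeres erode: a Hayflick-style plateau.
```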

  1. Non‐Canonical Replication Initiation: You’re Fired!

    Directory of Open Access Journals (Sweden)

    Bazilė Ravoitytė

    2017-01-01

    Full Text Available The division of prokaryotic and eukaryotic cells produces two cells that inherit a perfect copy of the genetic material originally derived from the mother cell. The initiation of canonical DNA replication must be coordinated with the cell cycle to ensure the accuracy of genome duplication. Controlled replication initiation depends on a complex interplay of cis‐acting DNA sequences, the so‐called origins of replication (ori), with trans‐acting factors involved in the onset of DNA synthesis. The interplay of cis‐acting elements and trans‐acting factors ensures that cells initiate replication at sequence‐specific sites only once, and in a timely order, to avoid chromosomal endoreplication. However, chromosome breakage and excessive RNA:DNA hybrid formation can cause break‐induced (BIR) or transcription‐initiated replication (TIR), respectively. These non‐canonical replication events are expected to affect eukaryotic genome function and maintenance, and could be important for genome evolution and disease development. In this review, we describe the difference between canonical and non‐canonical DNA replication, and focus on mechanistic differences and common features between BIR and TIR. Finally, we discuss open issues on the factors and molecular mechanisms involved in TIR.

  2. Factors influencing microinjection molding replication quality

    Science.gov (United States)

    Vera, Julie; Brulez, Anne-Catherine; Contraires, Elise; Larochette, Mathieu; Trannoy-Orban, Nathalie; Pignon, Maxime; Mauclair, Cyril; Valette, Stéphane; Benayoun, Stéphane

    2018-01-01

    In recent years, there has been increased interest in producing and providing high-precision plastic parts that can be manufactured by microinjection molding: gears, pumps, optical grating elements, and so on. For all of these applications, the replication quality is essential. This study has two goals: (1) fabrication of high-precision parts using a conventional injection molding machine; (2) identification of robust parameters that ensure production quality. Thus, different technological solutions have been used: cavity vacuuming and the use of a mold coated with DLC or CrN deposits. AFM and SEM analyses were carried out to characterize the replication profile. The replication quality was studied in terms of the process parameters, coated and uncoated molds, and the crystallinity of the polymer. Specific studies were performed to quantify the replicability of injection molded parts (ABS, PC and PP). Analysis of the Taguchi experimental designs permits prioritization of the impact of each parameter on the replication quality. A discussion taking into account these new parameters and the thermal and spreading properties of the coatings is proposed. It appeared that, in general, increasing the mold temperature improves molten polymer filling of submicron features, except for the steel insert (for which the presence of a vacuum is the most important factor). Moreover, the DLC coating was the best coating for increasing the quality of the replication. This result could be explained by the lower thermal diffusivity of this coating. We noted that the viscosity of the polymers is not a primary factor in the replication quality.

  3. Realistic Vascular Replicator for TAVR Procedures.

    Science.gov (United States)

    Rotman, Oren M; Kovarovic, Brandon; Sadasivan, Chander; Gruberg, Luis; Lieber, Baruch B; Bluestein, Danny

    2018-04-13

    Transcatheter aortic valve replacement (TAVR) is an over-the-wire procedure for treatment of severe aortic stenosis (AS). TAVR valves are conventionally tested using simplified left heart simulators (LHS). While those provide baseline performance reliably, their aortic root geometries are far from the anatomical in situ configuration, often overestimating the valves' performance. We report on a novel benchtop patient-specific arterial replicator designed for testing TAVR and training interventional cardiologists in the procedure. The Replicator is an accurate model of the human upper body vasculature for training physicians in percutaneous interventions. It comprises a fully-automated Windkessel mechanism to recreate physiological flow conditions. Calcified aortic valve models were fabricated and incorporated into the Replicator, which was then used by an experienced cardiologist to perform the TAVR procedure with the Inovare valve. EOA, pressures, and angiograms were monitored pre- and post-TAVR. A St. Jude mechanical valve was tested as a reference that is less affected by the AS anatomy. Results for both valves in the Replicator were compared to the performance in a commercial ISO-compliant LHS. The AS anatomy in the Replicator resulted in a significant decrease in TAVR valve performance relative to the simplified LHS, with EOA and transvalvular pressures comparable to clinical data. Minor change was seen in the mechanical valve performance. The Replicator proved to be an effective platform for TAVR testing. Unlike an LHS with simplified geometry, it conservatively provides clinically-relevant outcomes that complement LHS testing. The Replicator can be most valuable for testing new valves under challenging patient anatomies, for physician training, and for procedural planning.

  4. Dynamic remodeling of lipids coincides with dengue virus replication in the midgut of Aedes aegypti mosquitoes.

    Directory of Open Access Journals (Sweden)

    Nunya Chotiwan

    2018-02-01

    Full Text Available We describe the first comprehensive analysis of the midgut metabolome of Aedes aegypti, the primary mosquito vector for arboviruses such as dengue, Zika, chikungunya and yellow fever viruses. Transmission of these viruses depends on their ability to infect, replicate and disseminate from several tissues in the mosquito vector. The metabolic environments within these tissues play crucial roles in these processes. Since these viruses are enveloped, viral replication, assembly and release occur on cellular membranes primed through the manipulation of host metabolism. Interference with this virus infection-induced metabolic environment is detrimental to viral replication in human and mosquito cell culture models. Here we present the first insight into the metabolic environment induced during arbovirus replication in Aedes aegypti. Using high-resolution mass spectrometry, we have analyzed the temporal metabolic perturbations that occur following dengue virus infection of the midgut tissue. This is the primary site of infection and replication, preceding systemic viral dissemination and transmission. We identified metabolites that exhibited a dynamic-profile across early-, mid- and late-infection time points. We observed a marked increase in the lipid content. An increase in glycerophospholipids, sphingolipids and fatty acyls was coincident with the kinetics of viral replication. Elevation of glycerolipid levels suggested a diversion of resources during infection from energy storage to synthetic pathways. Elevated levels of acyl-carnitines were observed, signaling disruptions in mitochondrial function and possible diversion of energy production. A central hub in the sphingolipid pathway that influenced dihydroceramide to ceramide ratios was identified as critical for the virus life cycle. This study also resulted in the first reconstruction of the sphingolipid pathway in Aedes aegypti. Given conservation in the replication mechanisms of several

  5. Dynamic remodeling of lipids coincides with dengue virus replication in the midgut of Aedes aegypti mosquitoes.

    Science.gov (United States)

    Chotiwan, Nunya; Andre, Barbara G; Sanchez-Vargas, Irma; Islam, M Nurul; Grabowski, Jeffrey M; Hopf-Jannasch, Amber; Gough, Erik; Nakayasu, Ernesto; Blair, Carol D; Belisle, John T; Hill, Catherine A; Kuhn, Richard J; Perera, Rushika

    2018-02-01

    We describe the first comprehensive analysis of the midgut metabolome of Aedes aegypti, the primary mosquito vector for arboviruses such as dengue, Zika, chikungunya and yellow fever viruses. Transmission of these viruses depends on their ability to infect, replicate and disseminate from several tissues in the mosquito vector. The metabolic environments within these tissues play crucial roles in these processes. Since these viruses are enveloped, viral replication, assembly and release occur on cellular membranes primed through the manipulation of host metabolism. Interference with this virus infection-induced metabolic environment is detrimental to viral replication in human and mosquito cell culture models. Here we present the first insight into the metabolic environment induced during arbovirus replication in Aedes aegypti. Using high-resolution mass spectrometry, we have analyzed the temporal metabolic perturbations that occur following dengue virus infection of the midgut tissue. This is the primary site of infection and replication, preceding systemic viral dissemination and transmission. We identified metabolites that exhibited a dynamic-profile across early-, mid- and late-infection time points. We observed a marked increase in the lipid content. An increase in glycerophospholipids, sphingolipids and fatty acyls was coincident with the kinetics of viral replication. Elevation of glycerolipid levels suggested a diversion of resources during infection from energy storage to synthetic pathways. Elevated levels of acyl-carnitines were observed, signaling disruptions in mitochondrial function and possible diversion of energy production. A central hub in the sphingolipid pathway that influenced dihydroceramide to ceramide ratios was identified as critical for the virus life cycle. This study also resulted in the first reconstruction of the sphingolipid pathway in Aedes aegypti. Given conservation in the replication mechanisms of several flaviviruses transmitted

  6. Cloud Computing:Strategies for Cloud Computing Adoption

    OpenAIRE

    Shimba, Faith

    2010-01-01

    The advent of cloud computing in recent years has sparked an interest from different organisations, institutions and users to take advantage of web applications. This is a result of the new economic model for the Information Technology (IT) department that cloud computing promises. The model promises a shift from an organisation required to invest heavily for limited IT resources that are internally managed, to a model where the organisation can buy or rent resources that are managed by a clo...

  7. Information resource management concepts for records managers

    Energy Technology Data Exchange (ETDEWEB)

    Seesing, P.R.

    1992-10-01

    Information Resource Management (IRM) is the label given to the various approaches used to foster greater accountability for the use of computing resources. It is a corporate philosophy that treats information as it would its other resources. There is a reorientation from simply tracking expenditures to considering the value of the data stored on that hardware. Accountability for computing resources is expanding beyond just the data processing (DP) or management information systems (MIS) manager to include senior organization management and user management. Management's goal for office automation is being refocused from saving money to improving productivity. A model developed by Richard Nolan (1982) illustrates the basic evolution of computer use in organizations. Computer Era: (1) Initiation (computer acquisition), (2) Contagion (intense system development), (3) Control (proliferation of management controls). Data Resource Era: (4) Integration (user service orientation), (5) Data Administration (corporate value of information), (6) Maturity (strategic approach to information technology). The first three stages mark the growth of traditional data processing and management information systems departments. The development of the IRM philosophy in an organization involves the restructuring of the DP organization and new management techniques. The three stages of the Data Resource Era represent the evolution of IRM. This paper examines each of them in greater detail.

  8. Information resource management concepts for records managers

    Energy Technology Data Exchange (ETDEWEB)

    Seesing, P.R.

    1992-10-01

    Information Resource Management (IRM) is the label given to the various approaches used to foster greater accountability for the use of computing resources. It is a corporate philosophy that treats information as it would its other resources. There is a reorientation from simply tracking expenditures to considering the value of the data stored on that hardware. Accountability for computing resources is expanding beyond just the data processing (DP) or management information systems (MIS) manager to include senior organization management and user management. Management's goal for office automation is being refocused from saving money to improving productivity. A model developed by Richard Nolan (1982) illustrates the basic evolution of computer use in organizations. Computer Era: (1) Initiation (computer acquisition), (2) Contagion (intense system development), (3) Control (proliferation of management controls). Data Resource Era: (4) Integration (user service orientation), (5) Data Administration (corporate value of information), (6) Maturity (strategic approach to information technology). The first three stages mark the growth of traditional data processing and management information systems departments. The development of the IRM philosophy in an organization involves the restructuring of the DP organization and new management techniques. The three stages of the Data Resource Era represent the evolution of IRM. This paper examines each of them in greater detail.

  9. Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities

    OpenAIRE

    Buyya, Rajkumar; Yeo, Chee Shin; Venugopal, Srikumar

    2008-01-01

    This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents...

  10. Public Library Training Program for Older Adults Addresses Their Computer and Health Literacy Needs. A Review of: Xie, B. (2011). Improving older adults’ e-health literacy through computer training using NIH online resources. Library & Information Science Research, 34, 63-71. doi:10.1016/j.lisr.2011.07.006

    Directory of Open Access Journals (Sweden)

    Cari Merkley

    2012-12-01

    Main Results – Participants showed significant decreases in their levels of computer anxiety, and significant increases in their interest in computers, at the end of the program (p < 0.01). Computer and web knowledge also increased among those completing the knowledge tests. Most participants (78%) indicated that something they had learned in the program impacted their health decision making, and just over half of respondents (55%) changed how they took medication as a result of the program. Participants were also very satisfied with the program’s delivery and format, with 97% indicating that they had learned a lot from the course. Most participants (68%) said that they wished the class had been longer, and there was full support for similar programming to be offered at public libraries. Participants also reported that they found the NIHSeniorHealth website more useful, but not significantly more usable, than MedlinePlus. Conclusion – The intervention as designed successfully addressed issues of computer and health literacy with older adult participants. By using existing resources, such as public library computer facilities and curricula developed by the National Institutes of Health, the intervention also provides a model that could be easily replicated in other locations without the need for significant financial resources.

  11. Replicating chromatin: a tale of histones

    DEFF Research Database (Denmark)

    Groth, Anja

    2009-01-01

    Chromatin serves structural and functional roles crucial for genome stability and correct gene expression. This organization must be reproduced on daughter strands during replication to maintain proper overlay of epigenetic fabric onto genetic sequence. Nucleosomes constitute the structural framework of chromatin and carry information to specify higher-order organization and gene expression. When replication forks traverse the chromosomes, nucleosomes are transiently disrupted, allowing the replication machinery to gain access to DNA. Histone recycling, together with new deposition, ensures...

  12. Varicella-zoster virus (VZV) origin of DNA replication oriS influences origin-dependent DNA replication and flanking gene transcription.

    Science.gov (United States)

    Khalil, Mohamed I; Sommer, Marvin H; Hay, John; Ruyechan, William T; Arvin, Ann M

    2015-07-01

    The VZV genome has two origins of DNA replication (oriS), each of which consists of an AT-rich sequence and three origin binding protein (OBP) sites called Box A, C and B. In these experiments, mutation of the CGC core sequence in Boxes A and C not only inhibited DNA replication but also inhibited both ORF62 and ORF63 expression in reporter gene assays. In contrast, the Box B mutation did not influence DNA replication or flanking gene transcription. These results suggest that efficient DNA replication enhances ORF62 and ORF63 transcription. Recombinant viruses carrying these mutations in both sites, and one with a deletion of the whole oriS, were constructed. Surprisingly, the recombinant virus lacking both copies of oriS retained the capacity to replicate in melanoma and HELF cells, suggesting that VZV has another origin of DNA replication. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Saturday Institute for Manhood, Brotherhood Actualization. Replication Manual [and] Blueprint Resource Manual.

    Science.gov (United States)

    Wholistic Stress Control Inst., Atlanta, GA.

    The Saturday Institute for Manhood, Brotherhood Actualization (SIMBA) is a collaborative effort of 12 community organizations that combine resources and ideas to reduce risk factors and increase resilience for young African American males. The program offers youth, aged 9 to 16, who reside at the Lorenzo Benn Youth Development Campus, training…

  14. EDDA: An Efficient Distributed Data Replication Algorithm in VANETs.

    Science.gov (United States)

    Zhu, Junyu; Huang, Chuanhe; Fan, Xiying; Guo, Sipei; Fu, Bin

    2018-02-10

    Efficient data dissemination in vehicular ad hoc networks (VANETs) is a challenging issue due to the dynamic nature of the network. To improve the performance of data dissemination, we study distributed data replication algorithms in VANETs for exchanging information and computing in an arbitrarily-connected network of vehicle nodes. To achieve low dissemination delay and improve the network performance, we control the number of message copies that can be disseminated in the network and then propose an efficient distributed data replication algorithm (EDDA). The key idea is to let the data carrier distribute the data dissemination tasks to multiple nodes to speed up the dissemination process. We calculate the number of communication stages for the network to enter into a balanced status and show that the proposed distributed algorithm can converge to a consensus in a small number of communication stages. Most of the theoretical results described in this paper concern the complexity of network convergence. Lower and upper bounds are also provided in the analysis of the algorithm. Simulation results show that the proposed EDDA can efficiently disseminate messages to vehicles in a specific area with low dissemination delay and system overhead.

  15. EDDA: An Efficient Distributed Data Replication Algorithm in VANETs

    Science.gov (United States)

    Zhu, Junyu; Huang, Chuanhe; Fan, Xiying; Guo, Sipei; Fu, Bin

    2018-01-01

    Efficient data dissemination in vehicular ad hoc networks (VANETs) is a challenging issue due to the dynamic nature of the network. To improve the performance of data dissemination, we study distributed data replication algorithms in VANETs for exchanging information and computing in an arbitrarily-connected network of vehicle nodes. To achieve low dissemination delay and improve the network performance, we control the number of message copies that can be disseminated in the network and then propose an efficient distributed data replication algorithm (EDDA). The key idea is to let the data carrier distribute the data dissemination tasks to multiple nodes to speed up the dissemination process. We calculate the number of communication stages for the network to enter into a balanced status and show that the proposed distributed algorithm can converge to a consensus in a small number of communication stages. Most of the theoretical results described in this paper concern the complexity of network convergence. Lower and upper bounds are also provided in the analysis of the algorithm. Simulation results show that the proposed EDDA can efficiently disseminate messages to vehicles in a specific area with low dissemination delay and system overhead. PMID:29439443
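
    The general idea of copy-limited dissemination, bounding the number of message copies while letting carriers hand off part of their dissemination budget, can be sketched generically. The code below is a binary spray-and-wait-style illustration over a synthetic contact trace; it is not an implementation of EDDA itself, whose exact rules are given in the paper.

```python
# Generic sketch of copy-limited dissemination: a carrier holding more than
# one copy hands half of its budget to each uninfected node it meets; nodes
# holding a single copy keep it (the "wait" phase). Not the EDDA algorithm.
import random

def spray(contacts, initial_copies=16):
    carriers = {"source": initial_copies}   # node -> remaining copy budget
    delivered = {"source"}
    for a, b in contacts:                   # time-ordered contact trace
        for u, v in ((a, b), (b, a)):
            if carriers.get(u, 0) > 1 and v not in delivered:
                half = carriers[u] // 2     # hand over half the budget
                carriers[u] -= half
                carriers[v] = half
                delivered.add(v)
    return delivered

random.seed(1)
nodes = [f"v{i}" for i in range(30)] + ["source"]
trace = [(random.choice(nodes), random.choice(nodes)) for _ in range(200)]
print(f"reached {len(spray(trace)) - 1} vehicles with a 16-copy budget")
```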

  16. Modes of DNA repair and replication

    International Nuclear Information System (INIS)

    Hanawalt, P.; Kondo, S.

    1979-01-01

    Modes of DNA repair and replication require close coordination as well as some overlap of enzyme functions. Some classes of recovery-deficient mutants may have defects in replication rather than repair modes. Lesions such as the pyrimidine dimers produced by ultraviolet irradiation block normal DNA replication in vivo and in vitro. DNA synthesis by the DNA polymerase I of E. coli is blocked one nucleotide away from the dimerized pyrimidines in template strands. Thus, some DNA polymerases seem to be unable to incorporate nucleotides opposite non-pairing lesions in template DNA strands. Lesions in template DNA strands may block the sequential addition of nucleotides in the synthesis of daughter strands. Normal replication utilizes a constitutive "error-free" mode that copies DNA templates with high fidelity, but which may be totally blocked at a lesion that obscures the appropriate base-pairing specificity. It might be expected that a modified replication system would exhibit a generally high error frequency. The error rate of DNA polymerases may be controlled by the degree of phosphorylation of the enzyme. The inducible SOS system is controlled by recA genes that also control the pathways for recombination. It is possible that the SOS system involves some process other than the modification of a blocked replication apparatus to permit error-prone transdimer synthesis. (Yamashita, S.)

  17. Quantitative analysis of replication-related mutation and selection pressures in bacterial chromosomes and plasmids using generalised GC skew index

    Directory of Open Access Journals (Sweden)

    Suzuki Haruo

    2009-12-01

    Full Text Available Background: Due to their bi-directional replication machinery starting from a single finite origin, bacterial genomes show a characteristic nucleotide compositional bias between the two replichores, which can be visualised through GC skew, (C-G)/(C+G). Although this polarisation is used for computational prediction of replication origins in many bacterial genomes, the degree of GC skew visibility varies widely among different species, necessitating a quantitative measurement of GC skew strength in order to provide confidence measures for GC skew-based predictions of replication origins. Results: Here we discuss a quantitative index for the measurement of GC skew strength, named the generalised GC skew index (gGCSI), which is applicable to genomes of any length, including bacterial chromosomes and plasmids. We demonstrate that gGCSI is independent of the window size and can thus be used to compare genomes of different sizes, such as bacterial chromosomes and plasmids. It can suggest the existence of different replication mechanisms in archaea and of rolling-circle replication in plasmids. Correlation of gGCSI values between plasmids and their corresponding host chromosomes suggests that, within the same strain, these replicons have reproduced using the same replication machinery and thus exhibit similar strengths of replication strand skew. Conclusions: gGCSI can be applied to genomes of any length and thus allows comparative study of replication-related mutation and selection pressures in genomes of different lengths, such as bacterial chromosomes and plasmids. Using gGCSI, we showed that replication-related mutation or selection pressure is similar for replicons with similar machinery.
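
    The windowed GC skew on which such indices build is straightforward to compute. The sketch below computes per-window skew and the cumulative skew whose extrema are commonly used to locate replication origins; the gGCSI normalization itself is defined in the paper and is not reproduced here.

```python
# Minimal sketch: windowed GC skew, (C-G)/(C+G), along a genome sequence.
# The minimum of the cumulative skew is a common origin-of-replication
# predictor; the gGCSI normalization is defined in the paper, not here.
def gc_skew(seq, window=1000):
    skews = []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window]
        c, g = w.count("C"), w.count("G")
        skews.append((c - g) / (c + g) if c + g else 0.0)
    return skews

def cumulative(skews):
    total, out = 0.0, []
    for s in skews:
        total += s
        out.append(total)
    return out

seq = "ATGC" * 5000   # placeholder; use a real chromosome in practice
sk = gc_skew(seq)
cum = cumulative(sk)
print(f"windows: {len(sk)}, min cumulative skew at window {cum.index(min(cum))}")
```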

  18. Charter School Replication. Policy Guide

    Science.gov (United States)

    Rhim, Lauren Morando

    2009-01-01

    "Replication" is the practice of a single charter school board or management organization opening several more schools that are each based on the same school model. The most rapid strategy to increase the number of new high-quality charter schools available to children is to encourage the replication of existing quality schools. This policy guide…

  19. "Replicability and other features of a high-quality science: Toward a balanced and empirical approach": Correction to Finkel et al. (2017).

    Science.gov (United States)

    2017-11-01

    of commandeering resources that would have been better invested in other studies. In their critique of FER2015, LeBel, Campbell, and Loving (2016) concluded, based on simulated data, that ever-larger samples are better for the efficiency of scientific discovery (i.e., that there are no tradeoffs). As demonstrated here, however, this conclusion holds only when the replicator's resources are considered in isolation. If we widen the assumptions to include the original researcher's resources as well, which is necessary if the goal is to consider resource investment for the field as a whole, the conclusion changes radically, and strongly supports a tradeoff-based analysis. In general, as psychologists seek to strengthen our science, we must complement our much-needed work on increasing replicability with careful attention to the other features of a high-quality science. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. The pilot way to Grid resources using glideinWMS

    CERN Document Server

    Sfiligoi, Igor; Holzman, Burt; Mhashilkar, Parag; Padhi, Sanjay; Würthwein, Frank

    Grid computing has become very popular in big and widespread scientific communities with high computing demands, like high energy physics. Computing resources are being distributed over many independent sites with only a thin layer of grid middleware shared between them. This deployment model has proven to be very convenient for computing resource providers, but has introduced several problems for the users of the system, the three major ones being the complexity of job scheduling, the non-uniformity of compute resources, and the lack of good job monitoring. Pilot jobs address all the above problems by creating a virtual private computing pool on top of grid resources. This paper presents both the general pilot concept and a concrete implementation, called glideinWMS, deployed in the Open Science Grid.
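
    The pilot pattern itself, a placeholder job that first validates its worker node and only then pulls real user work from a central queue, can be sketched generically. The code below illustrates the concept only; it is not glideinWMS code, and the queue and validation details are invented for exposition.

```python
# Generic sketch of the pilot-job pattern: a pilot lands on a grid worker,
# validates the local environment, then pulls user jobs from a central
# queue, hiding site heterogeneity from users. Not actual glideinWMS code.
import queue

def node_ok():
    # Stand-in for real validation: scratch space, software, networking...
    return True

def run(job):
    print(f"running {job} on validated worker")

def pilot(central_queue):
    if not node_ok():
        return                      # bad node: exit without taking user jobs
    while True:
        try:
            job = central_queue.get_nowait()
        except queue.Empty:
            break                   # queue drained: pilot terminates
        run(job)

q = queue.Queue()
for i in range(3):
    q.put(f"user-job-{i}")
pilot(q)
```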

  1. National Uranium Resource Evaluation Program. Hydrogeochemical and Stream Sediment Reconnaissance Basic Data Reports Computer Program Requests Manual

    International Nuclear Information System (INIS)

    1980-01-01

    This manual is intended to aid those who are unfamiliar with ordering computer output for verification and preparation of Uranium Resource Evaluation (URE) Project reconnaissance basic data reports. The manual is also intended to help standardize the procedures for preparing the reports. Each section describes a program or group of related programs. The sections are divided into three parts: Purpose, Request Forms, and Requested Information

  2. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    International Nuclear Information System (INIS)

    Capone, V; Esposito, R; Pardi, S; Taurino, F; Tortone, G

    2012-01-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  3. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  4. Replication of kinetoplast minicircle DNA

    International Nuclear Information System (INIS)

    Sheline, C.T.

    1989-01-01

    These studies describe the isolation and characterization of early minicircle replication intermediates from Crithidia fasciculata and Leishmania tarentolae, the mitochondrial localization of a type II topoisomerase (TIImt) in C. fasciculata, and the implication of the aforementioned TIImt in minicircle replication in L. tarentolae. Early minicircle replication intermediates from C. fasciculata were identified and characterized using isolated kinetoplasts to incorporate radiolabeled nucleotides into their DNA. The pulse-label in an apparent theta-type intermediate chased into two daughter molecules. A uniquely gapped, ribonucleotide-primed, knotted molecule represents the leading strand in the model proposed, and a highly gapped molecule represents the lagging strand. This theta intermediate is repaired in vitro to a doubly nicked catenated dimer, which was shown to result from the replication of a single parental molecule. Very similar intermediates were found in the heterogeneous population of minicircles of L. tarentolae. The sites of the Leishmania-specific discontinuities were mapped and shown to lie within the universally conserved sequence blocks, in identical positions as compared to C. fasciculata and Trypanosoma equiperdum

  5. Manual of Cupule Replication Technology

    Directory of Open Access Journals (Sweden)

    Giriraj Kumar

    2015-09-01

    Full Text Available Throughout the world, iconic rock art is preceded by non-iconic rock art. Cupules (manmade, roughly semi-hemispherical depressions on rocks) form the major bulk of early non-iconic rock art globally. The antiquity of cupules extends back to the Lower Paleolithic in Asia and Africa, hundreds of thousands of years ago. When one observes these cupules, the inquisitive mind poses many questions with regard to their technology: the reasons for selecting the site, which rocks were used to make the hammer stones, the skill and cognitive abilities employed to create the different types of cupules, the objective of their creation, their age, and so on. Replication of the cupules can provide satisfactory answers to some of these questions. Comparison of the hammer stones and cupules produced by the replication process with those obtained from excavation can provide support to observations. This paper presents a manual of cupule replication technology based on our experience of cupule replication on hard quartzite rock near Daraki-Chattan in the Chambal Basin, India.

  6. Targeting DNA Replication Stress for Cancer Therapy

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2016-08-01

    Full Text Available The human cellular genome is under constant stress from extrinsic and intrinsic factors, which can lead to DNA damage and defective replication. In normal cells, the DNA damage response (DDR), mediated by various checkpoints, will either activate the DNA repair system or induce cellular apoptosis/senescence, thereby maintaining overall genomic integrity. Cancer cells, however, due to constitutive growth signaling and defective DDR, may exhibit “replication stress”, a phenomenon unique to cancer cells that is described as the perturbation of error-free DNA replication and a slow-down of DNA synthesis. Although replication stress has been proven to induce genomic instability and tumorigenesis, recent studies have counterintuitively shown that further enhancing replication stress, by loosening the remaining checkpoints in cancer cells to induce catastrophic failure of proliferation, may provide an alternative therapeutic approach. In this review, we discuss the rationale for enhancing replicative stress in cancer cells, past approaches using traditional radiation and chemotherapy, and emerging approaches targeting the signaling cascades induced by DNA damage. We also summarize current clinical trials exploring these strategies and propose future research directions, including the use of combination therapies and the identification of potential new targets and biomarkers to track and predict treatment responses to targeting DNA replication stress.

  7. Global profiling of DNA replication timing and efficiency reveals that efficient replication/firing occurs late during S-phase in S. pombe.

    Directory of Open Access Journals (Sweden)

    Majid Eshaghi

    Full Text Available BACKGROUND: During S. pombe S-phase, initiation of DNA replication occurs at multiple sites (origins) that are enriched with AT-rich sequences, at various times. Current studies of genome-wide DNA replication profiles have focused on DNA replication timing and origin location. However, the replication and/or firing efficiency of the individual origins on the genomic scale remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: Using genome-wide ORF-specific DNA microarray analysis, we show that in S. pombe, individual origins fire with varying efficiencies and at different times during S-phase. The increase in DNA copy number plotted as a function of time is well approximated by a near-sigmoidal model, when considering the replication start and end timings at individual loci in cells released from HU-arrest. Replication efficiencies differ from origin to origin, depending on the origin's firing efficiency. We have found that DNA replication is inefficient early in S-phase, due to inefficient firing at origins. Efficient replication occurs later, attributed to efficient but late-firing origins. Furthermore, profiles of replication timing in cds1Delta cells are abnormal, due to the failure to resume replication at collapsed forks. The majority of the inefficient origins, but not the efficient ones, are found to fire in cds1Delta cells after HU removal, owing to the firing at the remaining unused (inefficient) origins during HU treatment. CONCLUSIONS/SIGNIFICANCE: Taken together, our results indicate that efficient DNA replication/firing occurs late in S-phase progression in cells after HU removal, due to efficient late-firing origins. Additionally, checkpoint kinase Cds1p is required for maintaining efficient replication/firing late in S-phase. We further propose that efficient late-firing origins are essential for ensuring completion of DNA duplication by the end of S-phase.
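
    The near-sigmoidal rise in copy number at a locus admits a compact form; the parameterization below is an assumption for illustration, not the paper's exact fit.

```latex
% Copy number at a locus rising from 1 to 2 across S-phase; t_{1/2} is the
% locus's replication time and k reflects firing efficiency (steepness).
% An assumed parameterization, not the paper's fitted model.
N(t) = 1 + \frac{1}{1 + e^{-k\,(t - t_{1/2})}}
```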

  8. Framework for Computation Offloading in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dejan Kovachev

    2012-12-01

    Full Text Available The inherently limited processing power and battery lifetime of mobile phones hinder the possible execution of computationally intensive applications like content-based video analysis or 3D modeling. Offloading computationally intensive application parts from the mobile platform into a remote cloud infrastructure or nearby idle computers addresses this problem. This paper presents our Mobile Augmentation Cloud Services (MACS) middleware, which enables adaptive extension of Android application execution from a mobile client into the cloud. Applications are developed using the standard Android development pattern. The middleware does the heavy lifting of adaptive application partitioning, resource monitoring and computation offloading. These elastic mobile applications can run as usual mobile applications, but they can also use remote computing resources transparently. Two prototype applications using the MACS middleware demonstrate the benefits of the approach. The evaluation shows that applications which involve costly computations can benefit from offloading, with around 95% energy savings and significant performance gains compared to local execution only.
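
    The underlying offloading tradeoff, computing locally versus paying the transfer-plus-wait cost of remote execution, is commonly formalized with a simple energy test in the literature. The sketch below illustrates that generic test; all parameter values are invented, and this is not the MACS partitioning policy itself.

```python
# Sketch of a textbook offloading energy test: offload when the energy to
# compute locally exceeds the energy to ship data plus idle-wait for the
# cloud result. A generic model, not the MACS middleware's actual policy.
def should_offload(cycles, data_bytes, bandwidth_bps,
                   p_compute=0.9, p_tx=1.3, p_idle=0.3,   # watts (assumed)
                   mobile_hz=1e9, speedup=10.0):
    e_local = p_compute * cycles / mobile_hz       # local compute energy (J)
    t_tx = 8 * data_bytes / bandwidth_bps          # transfer time (s)
    t_cloud = cycles / (mobile_hz * speedup)       # remote compute time (s)
    e_remote = p_tx * t_tx + p_idle * t_cloud      # send + idle-wait energy
    return e_remote < e_local

# Heavy computation, little data: a good offloading candidate.
print(should_offload(cycles=5e10, data_bytes=2e5, bandwidth_bps=5e6))
```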

  9. pUL34 binding near the human cytomegalovirus origin of lytic replication enhances DNA replication and viral growth.

    Science.gov (United States)

    Slayton, Mark; Hossain, Tanvir; Biegalke, Bonita J

    2018-05-01

    The human cytomegalovirus (HCMV) UL34 gene encodes sequence-specific DNA-binding proteins (pUL34) which are required for viral replication. Interactions of pUL34 with DNA binding sites repress transcription of two viral immune evasion genes, US3 and US9. Twelve additional predicted pUL34-binding sites are present in the HCMV genome (strain AD169), with three binding sites concentrated near the HCMV origin of lytic replication (oriLyt). We used ChIP-seq analysis of pUL34-DNA interactions to confirm that pUL34 binds to the oriLyt region during infection. Mutagenesis of the UL34-binding sites in an oriLyt-containing plasmid significantly reduced viral-mediated oriLyt-dependent DNA replication. Mutagenesis of these sites in the HCMV genome reduced the replication efficiencies of the resulting viruses. Protein-protein interaction analyses demonstrated that pUL34 interacts with the viral proteins IE2, UL44, and UL84, which are essential for viral DNA replication, suggesting that pUL34-DNA interactions in the oriLyt region are involved in the DNA replication cascade. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand, as limits and caps on usage are imposed. Our trial workflows allow us t...
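
    As an illustration of purchasing worker-node capacity on demand, the sketch below requests a batch of EC2 instances with boto3; the AMI ID, key name and instance type are placeholders, and CMS's actual integration used its own provisioning layer rather than this call:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Request up to ten worker nodes; EC2 may grant fewer (usage caps apply,
        # as noted above). ImageId and KeyName are hypothetical placeholders.
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",
            InstanceType="c5.xlarge",
            MinCount=1,
            MaxCount=10,
            KeyName="worker-node-key",
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "cms-worker"}],
            }],
        )
        print([i["InstanceId"] for i in response["Instances"]])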

  11. Replication of bacteriophage lambda DNA

    International Nuclear Information System (INIS)

    Tsurimoto, T.; Matsubara, K.

    1983-01-01

    In this paper, results of studies on the mechanism of bacteriophage lambda DNA replication, obtained using molecular biological and biochemical approaches, are reported. The purification of the initiator proteins, O and P, and the role of the O and P proteins in the initiation of lambda DNA replication through interactions with specific DNA sequences are described. 47 references, 15 figures

  12. Direct Visualization of DNA Replication Dynamics in Zebrafish Cells.

    Science.gov (United States)

    Kuriya, Kenji; Higashiyama, Eriko; Avşar-Ban, Eriko; Tamaru, Yutaka; Ogata, Shin; Takebayashi, Shin-ichiro; Ogata, Masato; Okumura, Katsuzumi

    2015-12-01

    Spatiotemporal regulation of DNA replication in the S-phase nucleus has been extensively studied in mammalian cells because it is tightly coupled with the regulation of other nuclear processes such as transcription. However, little is known about replication dynamics in nonmammalian cells. Here, we analyzed the DNA replication processes of zebrafish (Danio rerio) cells through direct visualization of replicating DNA in the nucleus and on DNA fiber molecules isolated from the nucleus. We found that zebrafish chromosomal DNA at the nuclear interior was replicated first, followed by replication of DNA at the nuclear periphery, which is reminiscent of the spatiotemporal regulation of mammalian DNA replication. However, the relative duration of interior DNA replication in zebrafish cells was longer compared to mammalian cells, possibly reflecting zebrafish-specific genomic organization. The rate of replication fork progression and the ori-to-ori distance measured by the DNA combing technique were ∼1.4 kb/min and 100 kb, respectively, which are comparable to those in mammalian cells. To our knowledge, this is the first report to measure replication dynamics in zebrafish cells.

  13. Zinc Salts Block Hepatitis E Virus Replication by Inhibiting the Activity of Viral RNA-Dependent RNA Polymerase.

    Science.gov (United States)

    Kaushik, Nidhi; Subramani, Chandru; Anang, Saumya; Muthumohan, Rajagopalan; Shalimar; Nayak, Baibaswata; Ranjith-Kumar, C T; Surjit, Milan

    2017-11-01

    Hepatitis E virus (HEV) causes an acute, self-limiting hepatitis in healthy individuals and leads to chronic disease in immunocompromised individuals. HEV infection in pregnant women results in a more severe outcome, with mortality rates of up to 30%. Though the virus usually causes sporadic infection, epidemics have been reported in developing and resource-starved countries. No specific antiviral exists against HEV. A combination of interferon and ribavirin therapy has been used to control the disease with some success. Zinc is an essential micronutrient that plays crucial roles in multiple cellular processes. Zinc salts are known to be effective in reducing infections caused by a few viruses. Here, we investigated the effect of zinc salts on HEV replication. In a human hepatoma cell (Huh7) culture model, zinc salts inhibited the replication of genotype 1 (g-1) and g-3 HEV replicons and g-1 HEV infectious genomic RNA in a dose-dependent manner. Analysis of a replication-defective mutant of g-1 HEV genomic RNA under similar conditions ruled out the possibility of zinc salts acting on replication-independent processes. An ORF4-Huh7 cell line-based infection model of g-1 HEV further confirmed the above observations. Zinc salts did not show any effect on the entry of g-1 HEV into the host cell. Furthermore, our data reveal that zinc salts directly inhibit the activity of the viral RNA-dependent RNA polymerase (RdRp), leading to inhibition of viral replication. Taken together, these studies unravel the ability of zinc salts to inhibit HEV replication, suggesting their possible therapeutic value in controlling HEV infection. IMPORTANCE Hepatitis E virus (HEV) is a public health concern in resource-starved countries due to frequent outbreaks. It is also emerging as a health concern in developed countries owing to its ability to cause acute and chronic infection in organ transplant and immunocompromised individuals. Although antivirals such as ribavirin have been used

  14. Insulated hsp70B' promoter: stringent heat-inducible activity in replication-deficient, but not replication-competent adenoviruses.

    Science.gov (United States)

    Rohmer, Stanimira; Mainka, Astrid; Knippertz, Ilka; Hesse, Andrea; Nettelbeck, Dirk M

    2008-04-01

    Key to the realization of gene therapy is the development of efficient and targeted gene transfer vectors. Therapeutic gene transfer by replication-deficient or more recently by conditionally replication-competent/oncolytic adenoviruses has shown much promise. For specific applications, however, it will be advantageous to provide vectors that allow for external control of gene expression. The efficient cellular heat shock system in combination with available technology for focused and controlled hyperthermia suggests heat-regulated transcription control as a promising tool for this purpose. We investigated the feasibility of a short fragment of the human hsp70B' promoter, with and without upstream insulator elements, for the regulation of transgene expression by replication-deficient or oncolytic adenoviruses. Two novel adenoviral vectors with an insulated hsp70B' promoter were developed and showed stringent heat-inducible gene expression with induction ratios up to 8000-fold. In contrast, regulation of gene expression from the hsp70B' promoter without insulation was suboptimal. In replication-competent/oncolytic adenoviruses regulation of the hsp70B' promoter was lost specifically during late replication in permissive cells and could not be restored by the insulators. We developed novel adenovirus gene transfer vectors that feature improved and stringent regulation of transgene expression from the hsp70B' promoter using promoter insulation. These vectors have potential for gene therapy applications that benefit from external modulation of therapeutic gene expression or for combination therapy with hyperthermia. Furthermore, our study reveals that vector replication can deregulate inserted cellular promoters, an observation which is of relevance for the development of replication-competent/oncolytic gene transfer vectors. (c) 2008 John Wiley & Sons, Ltd.

  15. Replication assessment of surface texture at sub-micrometre scale

    DEFF Research Database (Denmark)

    Quagliotti, Danilo; Tosello, Guido; Hansen, Hans Nørgaard

    2017-01-01

    A replication process reproduces a master geometry by conveying it to a substrate material. It is typically induced by means of different energy sources (usually heat and force) and a direct physical contact between the master and the substrate. Because of the replicating nature of molding processes, the required specifications for the manufacture of micro molded components must be ensured by means of a metrological approach to surface replication and to dimensional control of both the master geometry and the replicated substrate [3]-[4]. In this work, replication was assessed by the replication fidelity, i.e., by comparing the produced parts with the tool used to replicate the geometry. Furthermore, the uncertainty of the replication fidelity was obtained by propagating the uncertainties evaluated for both masters and replicas.

  16. Mechanisms and regulation of DNA replication initiation in eukaryotes.

    Science.gov (United States)

    Parker, Matthew W; Botchan, Michael R; Berger, James M

    2017-04-01

    Cellular DNA replication is initiated through the action of multiprotein complexes that recognize replication start sites in the chromosome (termed origins) and facilitate duplex DNA melting within these regions. In a typical cell cycle, initiation occurs only once per origin and each round of replication is tightly coupled to cell division. To avoid aberrant origin firing and re-replication, eukaryotes tightly regulate two events in the initiation process: loading of the replicative helicase, MCM2-7, onto chromatin by the origin recognition complex (ORC), and subsequent activation of the helicase by its incorporation into a complex known as the CMG. Recent work has begun to reveal the details of an orchestrated and sequential exchange of initiation factors on DNA that give rise to a replication-competent complex, the replisome. Here, we review the molecular mechanisms that underpin eukaryotic DNA replication initiation - from selecting replication start sites to replicative helicase loading and activation - and describe how these events are often distinctly regulated across different eukaryotic model organisms.

  17. Updates and solution to the 21st century computer virus scourge ...

    African Journals Online (AJOL)

    The computer virus scourge continues to be a problem that the Information Technology (IT) industry must address. A computer virus is malicious program code which can replicate itself and spread infection to a large number of possible hosts, causing damage to computer programs, files, databases and data in general.

  18. Security of fixed and wireless computer networks

    NARCIS (Netherlands)

    Verschuren, J.; Degen, A.J.G.; Veugen, P.J.M.

    2003-01-01

    A few decades ago, most computers were stand-alone machines: they were able to process information using their own resources. Later, computer systems were connected to each other, enabling a computer system to exchange data with another computer and to use the resources of another computer. With the

  19. Protection of Mission-Critical Applications from Untrusted Execution Environment: Resource Efficient Replication and Migration of Virtual Machines

    Science.gov (United States)

    2015-09-28

    Continuous replication and live migration of Virtual Machines (VMs) are used to protect mission-critical applications from an untrusted execution environment. Protected and backup hosts are connected by an internal LAN, resembling the typical setup in a virtualized datacenter. (Grant FA9550-10-1-0393; author Kang G. Shin; Distribution A - Approved for Public Release.)

  20. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, a short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC formats.

  1. Identifying the impact of G-quadruplexes on Affymetrix 3' arrays using cloud computing.

    Science.gov (United States)

    Memon, Farhat N; Owen, Anne M; Sanchez-Graillet, Olivia; Upton, Graham J G; Harrison, Andrew P

    2010-01-15

    A tetramer quadruplex structure is formed by four parallel strands of DNA/RNA containing runs of guanine. These quadruplexes are able to form because guanine can Hoogsteen hydrogen bond to other guanines, and a tetrad of guanines can form a stable arrangement. Recently we have discovered that probes on Affymetrix GeneChips that contain runs of guanine do not measure gene expression reliably. We associate this finding with the likelihood that quadruplexes are forming on the surface of GeneChips. In order to cope with the rapidly expanding size of GeneChip array datasets in the public domain, we are exploring the use of cloud computing to replicate our experiments on 3' arrays to look at the effect of the location of G-spots (runs of guanines). Cloud computing is a recently introduced high-performance solution that takes advantage of the computational infrastructure of large organisations such as Amazon and Google. We expect that cloud computing will become widely adopted because it enables bioinformaticians to avoid capital expenditure on expensive computing resources and to pay a cloud computing provider only for what is used. Moreover, beyond financial efficiency, cloud computing is an ecologically friendly technology: it enables efficient data-sharing, and we expect it to be faster for development purposes. Here we propose the advantageous use of cloud computing to perform a large data-mining analysis of public domain 3' arrays.
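
    Screening probes for runs of guanine is straightforward to express in code. The sketch below is a minimal illustration with hypothetical probe sequences and an assumed threshold of four consecutive guanines; the exact threshold used in the study may differ:

        import re

        # A run of four or more G's is the usual minimum for one quadruplex stem;
        # this cutoff is an illustrative assumption.
        G_RUN = re.compile(r"G{4,}")

        def has_g_spot(probe_seq: str) -> bool:
            return bool(G_RUN.search(probe_seq.upper()))

        probes = ["ATCGGGGGTACGTAACGTAGCTAGC",   # hypothetical probe sequences
                  "ATCGATCGATCGATCGATCGATCGA"]
        flagged = [p for p in probes if has_g_spot(p)]
        print(flagged)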

  2. Commercial Building Partnerships Replication and Diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Antonopoulos, Chrissi A.; Dillon, Heather E.; Baechler, Michael C.

    2013-09-16

    This study presents findings from survey and interview data investigating replication efforts of Commercial Building Partnership (CBP) partners that worked directly with the Pacific Northwest National Laboratory (PNNL). PNNL partnered directly with 12 organizations on new and retrofit construction projects, which represented approximately 28 percent of the entire U.S. Department of Energy (DOE) CBP program. Through a feedback survey mechanism, along with personal interviews, PNNL gathered quantitative and qualitative data relating to replication efforts by each organization. These data were analyzed to provide insight into two primary research areas: 1) CBP partners’ replication of technologies and approaches used in the CBP project across the rest of the organization’s building portfolio (including replication verification), and 2) the market potential for technology diffusion into the total U.S. commercial building stock as a direct result of the CBP program. The first area of this research focused specifically on replication efforts underway or planned by each CBP program participant. Factors that impact replication include motivation, organizational structure and the objectives firms have for implementation of energy efficient technologies. Comparing these factors between different CBP partners revealed patterns in motivation for constructing energy efficient buildings, along with better insight into market trends for green building practices. The second area of this research develops a diffusion of innovations model to analyze potential broad market impacts of the CBP program on the commercial building industry in the United States.
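
    Diffusion-of-innovations analyses of this kind often build on the Bass model, in which new adoption is driven by innovation (coefficient p) and imitation (coefficient q). The sketch below integrates that model numerically; the coefficient values are textbook illustrations, not estimates from the CBP study:

        # Bass diffusion: new adopters = (p + q * F) * (remaining market),
        # where F is the cumulative fraction that has already adopted.
        def bass_adoption(p=0.03, q=0.38, market_size=1.0, years=20):
            yearly, cumulative = [], 0.0
            for _ in range(years):
                new = (p + q * cumulative / market_size) * (market_size - cumulative)
                cumulative += new
                yearly.append(new)
            return yearly

        adoption = bass_adoption()
        print(f"peak adoption in year {adoption.index(max(adoption)) + 1}")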

  3. CLOUD COMPUTING OVERVIEW AND CHALLENGES: A REVIEW PAPER

    OpenAIRE

    Satish Kumar*, Vishal Thakur, Payal Thakur, Ashok Kumar Kashyap

    2017-01-01

    Cloud computing era is the most resourceful, elastic, utilized and scalable period for internet technology to use the computing resources over the internet successfully. Cloud computing did not provide only the speed, accuracy, storage capacity and efficiency for computing but it also lead to propagate the green computing and resource utilization. In this research paper, a brief description of cloud computing, cloud services and cloud security challenges is given. Also the literature review o...

  4. The Genomic Replication of the Crenarchaeal Virus SIRV2

    DEFF Research Database (Denmark)

    Martinez Alvarez, Laura

    reinitiation events may partially explain the branched topology of the viral replication intermediates. We also analyzed the intracellular location of viral replication, showing the formation of viral peripheral replication centers in SIRV2-infected cells, where viral DNA synthesis and replication...

  5. The Role of the Transcriptional Response to DNA Replication Stress.

    Science.gov (United States)

    Herlihy, Anna E; de Bruin, Robertus A M

    2017-03-02

    During DNA replication many factors can result in DNA replication stress. The DNA replication stress checkpoint prevents the accumulation of replication stress-induced DNA damage and the potential ensuing genome instability. A critical role for post-translational modifications, such as phosphorylation, in the replication stress checkpoint response has been well established. However, recent work has revealed an important role for transcription in the cellular response to DNA replication stress. In this review, we will provide an overview of current knowledge of the cellular response to DNA replication stress with a specific focus on the DNA replication stress checkpoint transcriptional response and its role in the prevention of replication stress-induced DNA damage.

  7. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to a TV audience with varied video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
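
    A benchmark-driven allocator of the kind described can be reduced to a lookup plus a safety margin. The sketch below is illustrative only; the task keys and core counts are invented placeholders, not the benchmarks established in the paper:

        # Estimated cores needed for real-time transcoding, keyed by
        # (input codec, resolution, frame rate); figures are placeholders.
        BENCHMARKS = {
            ("h264", "1080p", 30): 2.0,
            ("h264", "720p", 30): 1.0,
            ("mpeg4", "1080p", 25): 1.5,
        }

        def cores_for_job(codec, resolution, fps, safety_margin=1.2):
            base = BENCHMARKS.get((codec, resolution, fps))
            if base is None:
                base = max(BENCHMARKS.values())   # unknown task: assume worst case
            return base * safety_margin

        print(cores_for_job("h264", "1080p", 30))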

  8. DNA Copy-Number Control through Inhibition of Replication Fork Progression

    Directory of Open Access Journals (Sweden)

    Jared T. Nordman

    2014-11-01

    Full Text Available Proper control of DNA replication is essential to ensure faithful transmission of genetic material and to prevent the chromosomal aberrations that can drive cancer progression and developmental disorders. DNA replication is regulated primarily at the level of initiation and is under strict cell-cycle regulation. Importantly, DNA replication is highly influenced by developmental cues. In Drosophila, specific regions of the genome are repressed for DNA replication during differentiation by the SNF2 domain-containing protein SUUR through an unknown mechanism. We demonstrate that SUUR is recruited to active replication forks and mediates the repression of DNA replication by directly inhibiting replication fork progression rather than functioning as a replication fork barrier. Mass spectrometry analysis identified the replicative helicase component CDC45 as a SUUR-associated protein, supporting a role for SUUR directly at replication forks. Our results reveal that control of eukaryotic DNA copy number can occur through the inhibition of replication fork progression.

  9. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    Science.gov (United States)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
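
    CondorPy's role is to generate and manage HTCondor job descriptions from Python. Rather than assume its exact API, the sketch below shows the underlying pattern it automates: write a submit description for a batch of model runs and hand it to condor_submit (the executable name and arguments are hypothetical):

        import subprocess, tempfile, textwrap

        # Describe ten hydrologic model runs as one HTCondor job cluster.
        submit_description = textwrap.dedent("""\
            executable = run_hydro_model.sh
            arguments  = --scenario $(Process)
            output     = model_$(Process).out
            error      = model_$(Process).err
            log        = model.log
            queue 10
        """)

        with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
            f.write(submit_description)
            submit_file = f.name

        subprocess.run(["condor_submit", submit_file], check=True)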

  10. An Evaluation of Copy Cover and Compare Spelling Intervention for an Elementary Student with Learning Disabilities: A Replication

    Science.gov (United States)

    Breach, Celena; McLaughlin, T. F.; Derby, K. Mark

    2016-01-01

    The purpose of this study was to increase the spelling performance of a 4th grade student with learning disabilities. The second objective was to replicate the documented efficacy of Copy, Cover, and Compare (CCC) in spelling. The study was conducted in a resource room in a low socio-economic school in the Pacific Northwest. The skill…

  11. Initiation of Replication in Escherichia coli

    DEFF Research Database (Denmark)

    Frimodt-Møller, Jakob

    The circular chromosome of Escherichia coli is replicated by two replisomes assembled at the unique origin, moving in opposite directions until they meet in the less well defined terminus. The key protein in initiation of replication, DnaA, facilitates the unwinding of double-stranded DNA to single-stranded DNA in oriC. Although DnaA is able to bind both ADP and ATP, it is only active in initiation when bound to ATP. Although initiation of replication, and its regulation, has been thoroughly investigated, it is still not fully understood. The overall aim of the thesis was to investigate the regulation of initiation, the effect on the cell when regulation fails, and whether regulation is interlinked with chromosomal organization. This thesis uncovers that there exists a subtle balance between chromosome replication and reactive oxygen species (ROS)-inflicted DNA damage. Thus, failure in regulation

  12. A resource management architecture for metacomputing systems.

    Energy Technology Data Exchange (ETDEWEB)

    Czajkowski, K.; Foster, I.; Karonis, N.; Kesselman, C.; Martin, S.; Smith, W.; Tuecke, S.

    1999-08-24

    Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.
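
    The division of labor among local managers, brokers and co-allocators can be sketched schematically. The classes below are an illustrative toy model of that architecture, not the Globus toolkit's actual interfaces:

        # Toy model of the architecture: site-autonomous local managers grant
        # resources; a co-allocator acquires a multi-site request all-or-nothing.
        class LocalManager:
            def __init__(self, site, cpus):
                self.site, self.free = site, cpus

            def try_allocate(self, cpus):
                if self.free >= cpus:
                    self.free -= cpus
                    return f"{self.site}:{cpus}cpus"
                return None

        class CoAllocator:
            def allocate(self, managers, per_site_cpus):
                grants = [m.try_allocate(per_site_cpus) for m in managers]
                if all(grants):
                    return grants
                return None   # a real co-allocator would also release partial grants

        sites = [LocalManager("siteA", 64), LocalManager("siteB", 32)]
        print(CoAllocator().allocate(sites, 16))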

  13. COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...

    Science.gov (United States)

    This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).

  14. Ultrastructural Characterization of Zika Virus Replication Factories

    Directory of Open Access Journals (Sweden)

    Mirko Cortese

    2017-02-01

    Full Text Available Summary: A global concern has emerged with the pandemic spread of Zika virus (ZIKV) infections that can cause severe neurological symptoms in adults and newborns. ZIKV is a positive-strand RNA virus replicating in virus-induced membranous replication factories (RFs). Here we used various imaging techniques to investigate the ultrastructural details of ZIKV RFs and their relationship with host cell organelles. Analyses of human hepatic cells and neural progenitor cells infected with ZIKV revealed endoplasmic reticulum (ER) membrane invaginations containing pore-like openings toward the cytosol, reminiscent of RFs in Dengue virus-infected cells. Both the MR766 African strain and the H/PF/2013 Asian strain, the latter linked to neurological diseases, induce RFs of similar architecture. Importantly, ZIKV infection causes a drastic reorganization of microtubules and intermediate filaments, forming cage-like structures surrounding the viral RF. Consistently, ZIKV replication is suppressed by cytoskeleton-targeting drugs. Thus, ZIKV RFs are tightly linked to rearrangements of the host cell cytoskeleton. Cortese et al. show that ZIKV infection in both human hepatoma and neuronal progenitor cells induces drastic structural modification of the cellular architecture. Microtubules and intermediate filaments surround the viral replication factory, which is composed of vesicles corresponding to ER membrane invaginations toward the ER lumen. Importantly, alteration of microtubule flexibility impairs ZIKV replication. Keywords: Zika virus, flavivirus, human neural progenitor cells, replication factories, replication organelles, microtubules, intermediate filaments, electron microscopy, electron tomography, live-cell imaging

  15. Tombusviruses upregulate phospholipid biosynthesis via interaction between p33 replication protein and yeast lipid sensor proteins during virus replication in yeast

    International Nuclear Information System (INIS)

    Barajas, Daniel; Xu, Kai; Sharma, Monika; Wu, Cheng-Yu; Nagy, Peter D.

    2014-01-01

    Positive-stranded RNA viruses induce new membranous structures and promote membrane proliferation in infected cells to facilitate viral replication. In this paper, the authors show that a plant-infecting tombusvirus upregulates transcription of phospholipid biosynthesis genes, such as INO1, OPI3 and CHO1, and increases phospholipid levels in the yeast model host. This is accomplished by the viral p33 replication protein, which interacts with the Opi1p FFAT domain protein and the Scs2p VAP protein. Opi1p and Scs2p are phospholipid sensor proteins and they repress the expression of phospholipid genes. Accordingly, deletion of the OPI1 transcription repressor in yeast has a stimulatory effect on TBSV RNA accumulation and enhances tombusvirus replicase activity in an in vitro assay. Altogether, the presented data convincingly demonstrate that de novo lipid biosynthesis is required for optimal TBSV replication. Overall, this work reveals that a (+)RNA virus reprograms the phospholipid biosynthesis pathway in a unique way to facilitate its replication in yeast cells. - Highlights: • Tombusvirus p33 replication protein interacts with an FFAT-domain host protein. • Tombusvirus replication leads to upregulation of phospholipids. • Tombusvirus replication depends on de novo lipid synthesis. • Deletion of the FFAT-domain host protein enhances TBSV replication. • TBSV rewires host phospholipid synthesis

  16. Regulation of beta cell replication

    DEFF Research Database (Denmark)

    Lee, Ying C; Nielsen, Jens Høiriis

    2008-01-01

    Beta cell mass, at any given time, is governed by cell differentiation, neogenesis, increased or decreased cell size (cell hypertrophy or atrophy), cell death (apoptosis), and beta cell proliferation. Nutrients, hormones and growth factors, coupled with their signalling intermediates, have been suggested to play a role in beta cell mass regulation. In addition, genetic mouse model studies have indicated that cyclins and cyclin-dependent kinases that determine cell cycle progression are involved in beta cell replication, and more recently, menin in association with cyclin-dependent kinase inhibitors has been demonstrated to be important in beta cell growth. In this review, we consider and highlight some aspects of cell cycle regulation in relation to beta cell replication. The role of cell cycle regulation in beta cell replication is known mostly from studies in rodent models, but whether

  17. Phosphorylation of NS5A Serine-235 is essential to hepatitis C virus RNA replication and normal replication compartment formation

    Energy Technology Data Exchange (ETDEWEB)

    Eyre, Nicholas S., E-mail: nicholas.eyre@adelaide.edu.au [School of Biological Sciences and Research Centre for Infectious Diseases, University of Adelaide, Adelaide (Australia); Centre for Cancer Biology, SA Pathology, Adelaide (Australia); Hampton-Smith, Rachel J.; Aloia, Amanda L. [School of Biological Sciences and Research Centre for Infectious Diseases, University of Adelaide, Adelaide (Australia); Centre for Cancer Biology, SA Pathology, Adelaide (Australia); Eddes, James S. [Adelaide Proteomics Centre, School of Biological Sciences, University of Adelaide, Adelaide (Australia); Simpson, Kaylene J. [Victorian Centre for Functional Genomics, Peter MacCallum Cancer Centre, East Melbourne (Australia); The Sir Peter MacCallum Department of Oncology, University of Melbourne, Parkville (Australia); Hoffmann, Peter [Adelaide Proteomics Centre, School of Biological Sciences, University of Adelaide, Adelaide (Australia); Institute for Photonics and Advanced Sensing (IPAS), University of Adelaide, Adelaide (Australia); Beard, Michael R. [School of Biological Sciences and Research Centre for Infectious Diseases, University of Adelaide, Adelaide (Australia); Centre for Cancer Biology, SA Pathology, Adelaide (Australia)

    2016-04-15

    Hepatitis C virus (HCV) NS5A protein is essential for HCV RNA replication and virus assembly. Here we report the identification of NS5A phosphorylation sites Ser-222, Ser-235 and Thr-348 during an infectious HCV replication cycle and demonstrate that Ser-235 phosphorylation is essential for HCV RNA replication. Confocal microscopy revealed that both phosphoablatant (S235A) and phosphomimetic (S235D) mutants redistribute NS5A to large juxta-nuclear foci that display altered colocalization with known replication complex components. Using electron microscopy (EM) we found that S235D alters virus-induced membrane rearrangements while EM using ‘APEX2’-tagged viruses demonstrated S235D-mediated enrichment of NS5A in irregular membranous foci. Finally, using a customized siRNA screen of candidate NS5A kinases and subsequent analysis using a phospho-specific antibody, we show that phosphatidylinositol-4 kinase III alpha (PI4KIIIα) is important for Ser-235 phosphorylation. We conclude that Ser-235 phosphorylation of NS5A is essential for HCV RNA replication and normal replication complex formation and is regulated by PI4KIIIα. - Highlights: • NS5A residues Ser-222, Ser-235 and Thr-348 are phosphorylated during HCV infection. • Phosphorylation of Ser-235 is essential to HCV RNA replication. • Mutation of Ser-235 alters replication compartment localization and morphology. • Phosphatidylinositol-4 kinase III alpha is important for Ser-235 phosphorylation.

  18. COMPARATIVE STUDY OF CLOUD COMPUTING AND MOBILE CLOUD COMPUTING

    OpenAIRE

    Nidhi Rajak*, Diwakar Shukla

    2018-01-01

    The present era is one of Information and Communication Technology (ICT), and much research is ongoing in Cloud Computing and Mobile Cloud Computing, on topics such as security issues, data management, load balancing and so on. Cloud computing provides services to the end user over the Internet, and the primary objectives of this computing are resource sharing and pooling among end users. Mobile Cloud Computing is a combination of Cloud Computing and Mobile Computing. Here, data is stored in...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  20. Uncoupling of Sister Replisomes during Eukaryotic DNA Replication

    NARCIS (Netherlands)

    Yardimci, Hasan; Loveland, Anna B.; Habuchi, Satoshi; van Oijen, Antoine M.; Walter, Johannes C.

    2010-01-01

    The duplication of eukaryotic genomes involves the replication of DNA from multiple origins of replication. In S phase, two sister replisomes assemble at each active origin, and they replicate DNA in opposite directions. Little is known about the functional relationship between sister replisomes.

  1. Visualizing Single-molecule DNA Replication with Fluorescence Microscopy

    NARCIS (Netherlands)

    Tanner, Nathan A.; Loparo, Joseph J.; Oijen, Antoine M. van

    2009-01-01

    We describe a simple fluorescence microscopy-based real-time method for observing DNA replication at the single-molecule level. A circular, forked DNA template is attached to a functionalized glass coverslip and replicated extensively after introduction of replication proteins and nucleotides. The

  2. Dynamic behavior of DNA replication domains

    NARCIS (Netherlands)

    Manders, E. M.; Stap, J.; Strackee, J.; van Driel, R.; Aten, J. A.

    1996-01-01

    Like many nuclear processes, DNA replication takes place in distinct domains that are scattered throughout the S-phase nucleus. Recently we have developed a fluorescent double-labeling procedure that allows us to visualize nascent DNA simultaneously with "newborn" DNA that had replicated earlier in

  3. Using Multiple Seasonal Holt-Winters Exponential Smoothing to Predict Cloud Resource Provisioning

    OpenAIRE

    Ashraf A. Shahin

    2016-01-01

    Elasticity is one of the key features of cloud computing that attracts many SaaS providers seeking to minimize their services' cost. Cost is minimized by automatically provisioning and releasing computational resources depending on actual computational needs. However, the delay in starting up new virtual resources can cause Service Level Agreement violations. Consequently, predicting cloud resource provisioning has gained a lot of attention as a way to scale computational resources in advance. However, most of current approac...
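
    As a baseline for this kind of prediction, the single-seasonal Holt-Winters model is available off the shelf; the paper's multiple-seasonal variant layers additional cycles (e.g. weekly on top of daily) onto the same idea. The sketch below fits the standard model to synthetic hourly CPU demand with a daily cycle:

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        # Two weeks of synthetic hourly demand with a 24-hour seasonal cycle.
        hours = np.arange(24 * 14)
        rng = np.random.default_rng(0)
        demand = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

        model = ExponentialSmoothing(
            demand, trend="add", seasonal="add", seasonal_periods=24
        ).fit()
        forecast = model.forecast(24)   # provision resources a day in advance
        print(forecast.round(1))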

  4. The evolutionary ecology of molecular replicators.

    Science.gov (United States)

    Nee, Sean

    2016-08-01

    By reasonable criteria, life on the Earth consists mainly of molecular replicators. These include viruses, transposons, transpovirons, coviruses and many more, with continuous new discoveries like Sputnik Virophage. Their study is inherently multidisciplinary, spanning microbiology, genetics, immunology and evolutionary theory, and the current view is that taking a unified approach has great power and promise. We support this with a new, unified model of their evolutionary ecology, using contemporary evolutionary theory coupling the Price equation with game theory to study the consequences of the molecular replicators' promiscuous use of each other's gene products for their natural history and evolutionary ecology. Even at this simple expository level, we can make a firm prediction of a new class of replicators exploiting viruses such as lentiviruses like SIVs, a family which includes HIV: these have been explicitly stated in the primary literature to be non-existent. Closely connected to this departure is the view that multicellular organism immunology is more about the management of chronic infections than the elimination of acute ones, and the new understandings emerging are changing our view of the kind of theatre we ourselves provide for the evolutionary play of molecular replicators. This study adds molecular replicators to bacteria in the emerging field of sociomicrobiology.
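
    For reference, the Price equation mentioned above has the standard textbook form (stated here in general notation, not the cited paper's):

        \bar{w}\,\Delta\bar{z} \,=\, \mathrm{Cov}(w_i, z_i) \,+\, \mathrm{E}(w_i\,\Delta z_i)

    where z_i is the trait value of replicator type i, w_i its fitness, the covariance term captures selection among replicators, and the expectation term captures transmission bias during replication.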

  5. Sustainable computational science: the ReScience initiative

    Directory of Open Access Journals (Sweden)

    Nicolas P. Rougier

    2017-12-01

    Full Text Available Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research is reproducible. But this is not exactly true. James Buckheit and David Donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. The actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer-reviews. Existing journals have been slow to adapt: source codes are rarely requested and are hardly ever actually executed to check that they produce the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from other traditional scientific journals. ReScience resides on GitHub where each new implementation of a computational study is made available together with comments, explanations, and software tests.

  6. SOCR: Statistics Online Computational Resource

    OpenAIRE

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis...

  7. A Fuzzy Modeling Approach for Replicated Response Measures Based on Fuzzification of Replications with Descriptive Statistics and Golden Ratio

    Directory of Open Access Journals (Sweden)

    Özlem TÜRKŞEN

    2018-03-01

    Full Text Available Some experimental designs are composed of replicated response measures in which the replications cannot be identified exactly and may have uncertainty different than randomness. In that case, classical regression analysis may not be appropriate for modeling the designed data because of the violation of probabilistic modeling assumptions, and fuzzy regression analysis can be used as a modeling tool instead. In this study, the replicated response values are newly formed into fuzzy numbers by using descriptive statistics of the replications and the golden ratio. The main aim of the study is to obtain the most suitable fuzzy model for replicated response measures through fuzzification of the replicated values, taking into account the data structure of the replications in a statistical framework. Here, the response and the unknown model coefficients are considered as triangular type-1 fuzzy numbers (TT1FNs), whereas the inputs are crisp. Predicted fuzzy models are obtained according to the proposed fuzzification rules by using the Fuzzy Least Squares (FLS) approach. The performances of the predicted fuzzy models are compared by using the Root Mean Squared Error (RMSE) criterion. A data set from the literature, called the wheel cover component data set, is used to illustrate the performance of the proposed approach, and the obtained results are discussed. The calculation results show that the combined formulation of descriptive statistics and the golden ratio is the most preferable fuzzification rule according to the well-known decision-making method TOPSIS, for this data set.
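
    One plausible reading of such a fuzzification rule is sketched below: centre each triangular fuzzy number at the mean of the replications and scale its spreads by the standard deviation over the golden ratio. The rule as written here is an illustrative assumption; the paper's exact formulation may differ:

        import statistics

        PHI = (1 + 5 ** 0.5) / 2   # golden ratio

        def to_triangular_fuzzy(replications):
            # (left, centre, right) of a triangular type-1 fuzzy number built
            # from descriptive statistics of the replicated responses.
            m = statistics.mean(replications)
            s = statistics.stdev(replications)
            return (m - s / PHI, m, m + s / PHI)

        print(to_triangular_fuzzy([10.2, 9.8, 10.5]))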

  8. Rapid transient production in plants by replicating and non-replicating vectors yields high quality functional anti-HIV antibody.

    Directory of Open Access Journals (Sweden)

    Frank Sainsbury

    2010-11-01

    Full Text Available The capacity of plants and plant cells to produce large amounts of recombinant protein has been well established. Due to advantages in terms of speed and yield, attention has recently turned towards the use of transient expression systems, including viral vectors, to produce proteins of pharmaceutical interest in plants. However, the effects of such high level expression from viral vectors and concomitant effects on host cells may affect the quality of the recombinant product. To assess the quality of antibodies transiently expressed to high levels in plants, we have expressed and characterised the human anti-HIV monoclonal antibody, 2G12, using both replicating and non-replicating systems based on deleted versions of Cowpea mosaic virus (CPMV) RNA-2. The highest yield (approximately 100 mg/kg wet weight leaf tissue) of affinity purified 2G12 was obtained when the non-replicating CPMV-HT system was used and the antibody was retained in the endoplasmic reticulum (ER). Glycan analysis by mass spectrometry showed that the glycosylation pattern was determined exclusively by whether the antibody was retained in the ER and did not depend on whether a replicating or non-replicating system was used. Characterisation of the binding and neutralisation properties of all the purified 2G12 variants from plants showed that these were generally similar to those of the Chinese hamster ovary (CHO) cell-produced 2G12. Overall, the results demonstrate that replicating and non-replicating CPMV-based vectors are able to direct the production of a recombinant IgG similar in activity to the CHO-produced control. Thus, a complex recombinant protein was produced with no apparent effect on its biochemical properties using either high-level expression or viral replication. The speed with which a recombinant pharmaceutical with excellent biochemical characteristics can be produced transiently in plants makes CPMV-based expression vectors an attractive option for

  9. Dynamics of picornavirus RNA replication within infected cells

    DEFF Research Database (Denmark)

    Belsham, Graham; Normann, Preben

    2008-01-01

    Replication of many picornaviruses is inhibited by low concentrations of guanidine. Guanidine-resistant mutants are readily isolated, and the mutations map to the coding region for the 2C protein. Using in vitro replication assays, it has been determined previously that guanidine blocks the initiation of negative-strand synthesis. We have now examined the dynamics of RNA replication, measured by quantitative RT-PCR, within cells infected with either swine vesicular disease virus (an enterovirus) or foot-and-mouth disease virus, as regulated by the presence or absence of guanidine. Following the removal of guanidine from the infected cells, RNA replication occurs after a significant lag phase. This restoration of RNA synthesis requires de novo protein synthesis. Viral RNA can be maintained for at least 72 h within cells in the absence of apparent replication but guanidine-resistant virus can

  10. Pyrimidine dimers block simian virus 40 replication forks

    International Nuclear Information System (INIS)

    Berger, C.A.; Edenberg, H.J.

    1986-01-01

    UV light produces lesions, predominantly pyrimidine dimers, which inhibit DNA replication in mammalian cells. The mechanism of inhibition is controversial: is synthesis of a daughter strand halted at a lesion while the replication fork moves on and reinitiates downstream, or is fork progression itself blocked for some time at the site of a lesion? We directly addressed this question by using electron microscopy to examine the distances of replication forks from the origin in unirradiated and UV-irradiated simian virus 40 chromosomes. If UV lesions block replication fork progression, the forks should be asymmetrically located in a large fraction of the irradiated molecules; if replication forks move rapidly past lesions, the forks should be symmetrically located. A large fraction of the simian virus 40 replication forks in irradiated molecules were asymmetrically located, demonstrating that UV lesions present at the frequency of pyrimidine dimers block replication forks. As a mechanism for this fork blockage, we propose that polymerization of the leading strand makes a significant contribution to the energetics of fork movement, so any lesion in the template for the leading strand which blocks polymerization should also block fork movement

  11. Autonomous model protocell division driven by molecular replication.

    Science.gov (United States)

    Taylor, J W; Eghtesadi, S A; Points, L J; Liu, T; Cronin, L

    2017-08-10

    The coupling of compartmentalisation with molecular replication is thought to be crucial for the emergence of the first evolvable chemical systems. Minimal artificial replicators have been designed based on molecular recognition, inspired by the template copying of DNA, but none has yet been coupled to compartmentalisation. Here, we present an oil-in-water droplet system comprising an amphiphilic imine dissolved in chloroform that catalyses its own formation by bringing together a hydrophilic and a hydrophobic precursor, which leads to repeated droplet division. We demonstrate that the presence of the amphiphilic replicator, by lowering the interfacial tension between droplets of the reaction mixture and the aqueous phase, causes them to divide. Periodic sampling by a droplet-robot demonstrates that the extent of fission increases as the reaction progresses, producing more compartments with increased self-replication. This bridges a divide, showing how replication at the molecular level can be used to drive macroscale droplet fission. Coupling compartmentalisation and molecular replication is essential for the development of evolving chemical systems. Here the authors show an oil-in-water droplet containing a self-replicating amphiphilic imine that can undergo repeated droplet division.

  12. Initiation preference at a yeast origin of replication.

    Science.gov (United States)

    Brewer, B J; Fangman, W L

    1994-04-12

    Replication origins in the yeast Saccharomyces cerevisiae are identified as autonomous replication sequence (ARS) elements. To examine the effect of origin density on replication initiation, we have analyzed the replication of a plasmid that contains two copies of the same origin, ARS1. The activation of origins and the direction that replication forks move through flanking sequences can be physically determined by analyzing replication intermediates on two-dimensional agarose gels. We find that only one of the two identical ARSs on the plasmid initiates replication on any given plasmid molecule; that is, this close spacing of ARSs results in an apparent interference between the potential origins. Moreover, in the particular plasmid that we constructed, one of the two identical copies of ARS1 is used four times more frequently than the other one. These results show that the plasmid context is critical for determining the preferred origin. This origin preference is also exhibited when the tandem copies of ARS1 are introduced into a yeast chromosome. The sequences responsible for establishing the origin preference have been identified by deletion analysis and are found to reside in a portion of the yeast URA3 gene.

  13. Gene organization inside replication domains in mammalian genomes

    Science.gov (United States)

    Zaghloul, Lamia; Baker, Antoine; Audit, Benjamin; Arneodo, Alain

    2012-11-01

    We investigate the large-scale organization of human genes with respect to "master" replication origins that were previously identified as bordering nucleotide compositional skew domains. We separate genes into two categories depending on their CpG enrichment at the promoter, which can be considered a marker of germline DNA methylation. Using expression data in mouse, we confirm that CpG-rich genes are highly expressed in the germline whereas CpG-poor genes are in a silent state. We further show that, whether tissue-specific or broadly expressed (housekeeping genes), the CpG-rich genes are over-represented close to the replication skew domain borders, suggesting some coordination of replication and transcription. We also reveal that transcription of the longest CpG-rich genes is co-oriented with replication fork progression, so that the promoters of these transcriptionally active genes are located in the accessible open chromatin environment surrounding the master replication origins that border the replication skew domains. The observation of a similar gene organization in the mouse genome confirms the interplay of replication, transcription and chromatin structure as the cornerstone of mammalian genome architecture.

  14. Mapping replication origins in yeast chromosomes.

    Science.gov (United States)

    Brewer, B J; Fangman, W L

    1991-07-01

    The replicon hypothesis, first proposed in 1963 by Jacob and Brenner, states that DNA replication is controlled at sites called origins. Replication origins have been well studied in prokaryotes. However, the study of eukaryotic chromosomal origins has lagged behind, because until recently there has been no method for reliably determining the identity and location of origins from eukaryotic chromosomes. Here, we review a technique we developed with the yeast Saccharomyces cerevisiae that allows both the mapping of replication origins and an assessment of their activity. Two-dimensional agarose gel electrophoresis and Southern hybridization with total genomic DNA are used to determine whether a particular restriction fragment acquires the branched structure diagnostic of replication initiation. The technique has been used to localize origins in yeast chromosomes and assess their initiation efficiency. In some cases, origin activation is dependent upon the surrounding context. The technique is also being applied to a variety of eukaryotic organisms.

  15. Assembly of Slx4 signaling complexes behind DNA replication forks.

    Science.gov (United States)

    Balint, Attila; Kim, TaeHyung; Gallo, David; Cussiol, Jose Renato; Bastos de Oliveira, Francisco M; Yimit, Askar; Ou, Jiongwen; Nakato, Ryuichiro; Gurevich, Alexey; Shirahige, Katsuhiko; Smolka, Marcus B; Zhang, Zhaolei; Brown, Grant W

    2015-08-13

    Obstructions to replication fork progression, referred to collectively as DNA replication stress, challenge genome stability. In Saccharomyces cerevisiae, cells lacking RTT107 or SLX4 show genome instability and sensitivity to DNA replication stress and are defective in the completion of DNA replication during recovery from replication stress. We demonstrate that Slx4 is recruited to chromatin behind stressed replication forks, in a region that is spatially distinct from that occupied by the replication machinery. Slx4 complex formation is nucleated by Mec1 phosphorylation of histone H2A, which is recognized by the constitutive Slx4 binding partner Rtt107. Slx4 is essential for recruiting the Mec1 activator Dpb11 behind stressed replication forks, and Slx4 complexes are important for full activity of Mec1. We propose that Slx4 complexes promote robust checkpoint signaling by Mec1 by stably recruiting Dpb11 within a discrete domain behind the replication fork, during DNA replication stress. © 2015 The Authors.

  16. Checkpoint responses to replication stalling: inducing tolerance and preventing mutagenesis

    Energy Technology Data Exchange (ETDEWEB)

    Kai, Mihoko; Wang, Teresa S.-F

    2003-11-27

    Replication mutants often exhibit a mutator phenotype characterized by point mutations, single base frameshifts, and the deletion or duplication of sequences flanked by homologous repeats. Mutation in genes encoding checkpoint proteins can significantly affect the mutator phenotype. Here, we use fission yeast (Schizosaccharomyces pombe) as a model system to discuss the checkpoint responses to replication perturbations induced by replication mutants. Checkpoint activation induced by a DNA polymerase mutant, aside from delay of mitotic entry, up-regulates the translesion polymerase DinB (Polκ). Checkpoint Rad9-Rad1-Hus1 (9-1-1) complex, which is loaded onto chromatin by the Rad17-Rfc2-5 checkpoint complex in response to replication perturbation, recruits DinB onto chromatin to generate the point mutations and single nucleotide frameshifts in the replication mutator. This chain of events reveals a novel checkpoint-induced tolerance mechanism that allows cells to cope with replication perturbation, presumably to make possible restarting stalled replication forks. Fission yeast Cds1 kinase plays an essential role in maintaining DNA replication fork stability in the face of DNA damage and replication fork stalling. Cds1 kinase is known to regulate three proteins that are implicated in maintaining replication fork stability: Mus81-Eme1, a hetero-dimeric structure-specific endonuclease complex; Rqh1, a RecQ-family helicase involved in suppressing inappropriate recombination during replication; and Rad60, a protein required for recombinational repair during replication. These Cds1-regulated proteins are thought to cooperatively prevent mutagenesis and maintain replication fork stability in cells under replication stress. These checkpoint-regulated processes allow cells to survive replication perturbation by preventing stalled replication forks from degenerating into deleterious DNA structures resulting in genomic instability and cancer development.

  18. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.]

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  19. Security in a Replicated Metadata Catalogue

    CERN Document Server

    Koblitz, B

    2007-01-01

    The gLite-AMGA metadata catalogue has been developed by NA4 to provide simple relational metadata access for the EGEE user community. As advanced features, which will be the focus of this presentation, AMGA provides very fine-grained security also in connection with the built-in support for replication and federation of metadata. AMGA is extensively used by the biomedical community to store medical image metadata, by digital libraries, in HEP for logging and bookkeeping data, and in the climate community. The biomedical community intends to deploy a distributed metadata system for medical images consisting of various sites, which range from hospitals to computing centres. Only safe sharing of the highly sensitive metadata as provided in AMGA makes such a scenario possible. Other scenarios are digital libraries, which federate copyright-protected (meta-) data into a common catalogue. The biomedical and digital library systems have been deployed using a centralized structure already for some time. They now intend to decentralize ...

  20. Hyperthermia stimulates HIV-1 replication.

    Directory of Open Access Journals (Sweden)

    Ferdinand Roesch

    Full Text Available HIV-infected individuals may experience fever episodes. Fever is an elevation of the body temperature accompanied by inflammation. It is usually beneficial for the host through enhancement of immunological defenses. In cultures, transient non-physiological heat shock (42-45°C) and Heat Shock Proteins (HSPs) modulate HIV-1 replication, through poorly defined mechanisms. The effect of physiological hyperthermia (38-40°C) on HIV-1 infection has not been extensively investigated. Here, we show that culturing primary CD4+ T lymphocytes and cell lines at a fever-like temperature (39.5°C) increased the efficiency of HIV-1 replication by 2 to 7 fold. Hyperthermia did not facilitate viral entry or reverse transcription, but increased Tat transactivation of the LTR viral promoter. Hyperthermia also boosted HIV-1 reactivation in a model of latently-infected cells. By imaging HIV-1 transcription, we further show that Hsp90 co-localized with actively transcribing provirus, and this phenomenon was enhanced at 39.5°C. The Hsp90 inhibitor 17-AAG abrogated the increase of HIV-1 replication in hyperthermic cells. Altogether, our results indicate that fever may directly stimulate HIV-1 replication, in a process involving Hsp90 and facilitation of Tat-mediated LTR activity.

  1. EDDA: An Efficient Distributed Data Replication Algorithm in VANETs

    Directory of Open Access Journals (Sweden)

    Junyu Zhu

    2018-02-01

    Full Text Available Efficient data dissemination in vehicular ad hoc networks (VANETs) is a challenging issue due to the dynamic nature of the network. To improve the performance of data dissemination, we study distributed data replication algorithms in VANETs for exchanging information and computing in an arbitrarily-connected network of vehicle nodes. To achieve low dissemination delay and improve the network performance, we control the number of message copies that can be disseminated in the network and then propose an efficient distributed data replication algorithm (EDDA). The key idea is to let the data carrier distribute the data dissemination tasks to multiple nodes to speed up the dissemination process. We calculate the number of communication stages for the network to enter into a balanced status and show that the proposed distributed algorithm can converge to a consensus in a small number of communication stages. Most of the theoretical results described in this paper are to study the complexity of network convergence. The lower bound and upper bound are also provided in the analysis of the algorithm. Simulation results show that the proposed EDDA can efficiently disseminate messages to vehicles in a specific area with low dissemination delay and system overhead.
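
    The record above does not reproduce the EDDA pseudocode, so the following Python sketch only illustrates the core idea it describes: a bounded number of message copies whose dissemination task the current carrier splits with nodes it encounters (in the style of binary spray-and-wait). All names and parameters here (Node, on_encounter, a budget of 16 copies) are illustrative assumptions, not the published algorithm.

    import random

    class Node:
        def __init__(self, node_id):
            self.node_id = node_id
            self.copies = 0  # replication budget this node holds for the message

    def on_encounter(a, b):
        # When two nodes meet, a carrier with more than one copy hands half of
        # its budget to an empty-handed peer, distributing the dissemination
        # task across multiple carriers.
        if a.copies > 1 and b.copies == 0:
            half = a.copies // 2
            a.copies, b.copies = a.copies - half, half
        elif b.copies > 1 and a.copies == 0:
            half = b.copies // 2
            b.copies, a.copies = b.copies - half, half

    def simulate(num_nodes=50, total_copies=16, encounters=500, seed=1):
        random.seed(seed)
        nodes = [Node(i) for i in range(num_nodes)]
        nodes[0].copies = total_copies  # the source seeds the copy budget
        for _ in range(encounters):
            a, b = random.sample(nodes, 2)
            on_encounter(a, b)
        return sum(1 for n in nodes if n.copies > 0)

    print("carriers after simulation:", simulate())

    Capping total_copies bounds the system overhead, while the halving rule spreads the copies over carriers in a logarithmic number of hand-offs, which is the flavor of the convergence analysis mentioned above.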

  2. Replication and Robustness in Developmental Research

    Science.gov (United States)

    Duncan, Greg J.; Engel, Mimi; Claessens, Amy; Dowsett, Chantelle J.

    2014-01-01

    Replications and robustness checks are key elements of the scientific method and a staple in many disciplines. However, leading journals in developmental psychology rarely include explicit replications of prior research conducted by different investigators, and few require authors to establish in their articles or online appendices that their key…

  3. Biomarkers of replicative senescence revisited

    DEFF Research Database (Denmark)

    Nehlin, Jan

    2016-01-01

    Biomarkers of replicative senescence can be defined as those ultrastructural and physiological variations as well as molecules whose changes in expression, activity or function correlate with aging, as a result of the gradual exhaustion of replicative potential and a state of permanent cell cycle arrest. The biomarkers that characterize the path to an irreversible state of cell cycle arrest due to proliferative exhaustion may also be shared by other forms of senescence-inducing mechanisms. Validation of senescence markers is crucial in circumstances where quiescence or temporary growth arrest may be triggered or is thought to be induced. Pre-senescence biomarkers are also important to consider, as their presence indicates that induction of aging processes is taking place. The bona fide pathway leading to replicative senescence that has been extensively characterized is a consequence of gradual reduction

  4. Computer-aided proofs for multiparty computation with active security

    DEFF Research Database (Denmark)

    Haagh, Helene; Karbyshev, Aleksandr; Oechsner, Sabine

    2018-01-01

    Secure multi-party computation (MPC) is a general cryptographic technique that allows distrusting parties to compute a function of their individual inputs, while only revealing the output of the function. It has found applications in areas such as auctioning, email filtering, and secure teleconference. Given its importance, it is crucial that the protocols are specified and implemented correctly. In the programming language community it has become good practice to use computer proof assistants to verify correctness proofs. In the field of cryptography, EasyCrypt is the state-of-the-art proof assistant, which has been used to verify constructions such as public-key encryption, signatures, garbled circuits and differential privacy. Here we show for the first time that it can also be used to prove security of MPC against a malicious adversary. We formalize additive and replicated secret sharing schemes and apply them to Maurer's MPC protocol for secure...
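
    Since the record mentions additive and replicated secret sharing, a minimal Python sketch of the additive scheme may help fix ideas (replicated sharing then hands each party several overlapping additive shares). The prime modulus and helper names are illustrative assumptions, not taken from the paper or from EasyCrypt.

    import random

    P = 2**61 - 1  # prime field modulus (an arbitrary illustrative choice)

    def share(secret, n):
        # Split a secret into n random shares that sum to it modulo P.
        shares = [random.randrange(P) for _ in range(n - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    def add_shares(a, b):
        # Addition of shared secrets is local: parties add share by share.
        return [(x + y) % P for x, y in zip(a, b)]

    x, y = 12345, 67890
    sx, sy = share(x, 3), share(y, 3)
    assert reconstruct(add_shares(sx, sy)) == (x + y) % P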

  5. Enzymes involved in organellar DNA replication in photosynthetic eukaryotes.

    Science.gov (United States)

    Moriyama, Takashi; Sato, Naoki

    2014-01-01

    Plastids and mitochondria possess their own genomes. Although the replication mechanisms of these organellar genomes remain unclear in photosynthetic eukaryotes, several organelle-localized enzymes related to genome replication, including DNA polymerase, DNA primase, DNA helicase, DNA topoisomerase, single-stranded DNA maintenance protein, DNA ligase, primer removal enzyme, and several DNA recombination-related enzymes, have been identified. In the reference Eudicot plant Arabidopsis thaliana, the replication-related enzymes of plastids and mitochondria are similar because many of them are dual targeted to both organelles, whereas in the red alga Cyanidioschyzon merolae, plastids and mitochondria contain different replication machinery components. The enzymes involved in organellar genome replication in green plants and red algae were derived from different origins, including proteobacterial, cyanobacterial, and eukaryotic lineages. In the present review, we summarize the available data for enzymes related to organellar genome replication in green plants and red algae. In addition, based on the type and distribution of replication enzymes in photosynthetic eukaryotes, we discuss the transitional history of replication enzymes in the organelles of plants.

  6. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    Science.gov (United States)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient

  7. Viral hijacking of a replicative helicase loader and its implications for helicase loading control and phage replication

    Energy Technology Data Exchange (ETDEWEB)

    Hood, Iris V.; Berger, James M.

    2016-05-31

    Replisome assembly requires the loading of replicative hexameric helicases onto origins by AAA+ ATPases. How loader activity is appropriately controlled remains unclear. Here, we use structural and biochemical analyses to establish how an antimicrobial phage protein interferes with the function of the Staphylococcus aureus replicative helicase loader, DnaI. The viral protein binds to the loader's AAA+ ATPase domain, allowing binding of the host replicative helicase but impeding loader self-assembly and ATPase activity. Close inspection of the complex highlights an unexpected locus for the binding of an interdomain linker element in DnaI/DnaC-family proteins. We find that the inhibitor protein is genetically coupled to a phage-encoded homolog of the bacterial helicase loader, which we show binds to the host helicase but not to the inhibitor itself. These findings establish a new approach by which viruses can hijack host replication processes and explain how loader activity is internally regulated to prevent aberrant auto-association.

  8. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  9. The transcription elongation factor Bur1-Bur2 interacts with replication protein A and maintains genome stability during replication stress

    DEFF Research Database (Denmark)

    Clausing, Emanuel; Mayer, Andreas; Chanarat, Sittinan

    2010-01-01

    Multiple DNA-associated processes such as DNA repair, replication, and recombination are crucial for the maintenance of genome integrity. Here, we show a novel interaction between the transcription elongation factor Bur1-Bur2 and replication protein A (RPA), the eukaryotic single-stranded DNA-binding protein with functions in DNA repair, recombination, and replication. Bur1 interacted via its C-terminal domain with RPA, and bur1-ΔC mutants showed a deregulated DNA damage response accompanied by increased sensitivity to DNA damage and replication stress as well as increased levels of persisting Rad52 foci. Interestingly, the DNA damage sensitivity of an rfa1 mutant was suppressed by bur1 mutation, further underscoring a functional link between these two protein complexes. The transcription elongation factor Bur1-Bur2 interacts with RPA and maintains genome integrity during DNA replication stress.

  10. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    International Nuclear Information System (INIS)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-01-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland. (orig.)
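
    The QUARTS-style arithmetic behind such projections is simple to sketch from incidence and radiotherapy utilization (RTU) rates. The benchmark ratios used below (450 patients per TRT unit per year, 225 per RO, 500 per MP, 150 per RTT) are stand-in assumptions for illustration; the study's exact parameters may differ.

    import math

    def radiotherapy_needs(cancer_incidence, rtu_rate,
                           patients_per_trt=450.0,   # assumed benchmark
                           patients_per_ro=225.0,    # assumed benchmark
                           patients_per_mp=500.0,    # assumed benchmark
                           patients_per_rtt=150.0):  # assumed benchmark
        # Patients needing radiotherapy, then equipment and staff estimates.
        rt_patients = cancer_incidence * rtu_rate
        return {
            "rt_patients": round(rt_patients),
            "TRT": math.ceil(rt_patients / patients_per_trt),
            "RO": math.ceil(rt_patients / patients_per_ro),
            "MP": math.ceil(rt_patients / patients_per_mp),
            "RTT": math.ceil(rt_patients / patients_per_rtt),
        }

    # The 2015 figures quoted above (30,999 of 45,903 patients) imply an
    # overall RTU rate of roughly 0.675.
    print(radiotherapy_needs(45903, 30999 / 45903))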

  11. Modeling DNA Replication.

    Science.gov (United States)

    Bennett, Joan

    1998-01-01

    Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)

  12. Distinct functions of human RecQ helicases during DNA replication.

    Science.gov (United States)

    Urban, Vaclav; Dobrovolna, Jana; Janscak, Pavel

    2017-06-01

    DNA replication is the most vulnerable process of DNA metabolism in proliferating cells and therefore it is tightly controlled and coordinated with processes that maintain genomic stability. Human RecQ helicases are among the most important factors involved in the maintenance of replication fork integrity, especially under conditions of replication stress. RecQ helicases promote recovery of replication forks being stalled due to different replication roadblocks of either exogenous or endogenous source. They prevent generation of aberrant replication fork structures and replication fork collapse, and are involved in proper checkpoint signaling. The essential role of human RecQ helicases in the genome maintenance during DNA replication is underlined by association of defects in their function with cancer predisposition. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Evolution of complexity in RNA-like replicator systems

    Directory of Open Access Journals (Sweden)

    Hogeweg Paulien

    2008-03-01

    Full Text Available Abstract Background The evolution of complexity is among the most important questions in biology. The evolution of complexity is often observed as the increase of genetic information or that of the organizational complexity of a system. It is well recognized that the formation of biological organization – be it of molecules or ecosystems – is ultimately instructed by the genetic information, whereas it is also true that the genetic information is functional only in the context of the organization. Therefore, to obtain a more complete picture of the evolution of complexity, we must study the evolution of both information and organization. Results Here we investigate the evolution of complexity in a simulated RNA-like replicator system. The simplicity of the system allows us to explicitly model the genotype-phenotype-interaction mapping of individual replicators, whereby we avoid preconceiving the functionality of genotypes (information) or the ecological organization of replicators in the model. In particular, the model assumes that interactions among replicators – to replicate or to be replicated – depend on their secondary structures and base-pair matching. The results showed that a population of replicators, originally consisting of one genotype, evolves to form a complex ecosystem of up to four species. During this diversification, the species evolve through acquiring unique genotypes with distinct ecological functionality. The analysis of this diversification reveals that parasitic replicators, which have been thought to destabilize the replicator's diversity, actually promote the evolution of diversity through generating a novel "niche" for catalytic replicators. This also makes the current replicator system extremely stable upon the evolution of parasites. The results also show that the stability of the system crucially depends on the spatial pattern formation of replicators. Finally, the evolutionary dynamics is shown to

  14. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available Computed tomography (CT) of the sinuses ...

  15. Molecular Mechanisms of DNA Replication Checkpoint Activation

    Directory of Open Access Journals (Sweden)

    Bénédicte Recolin

    2014-03-01

    Full Text Available The major challenge of the cell cycle is to deliver an intact, and fully duplicated, genetic material to the daughter cells. To this end, progression of DNA synthesis is monitored by a feedback mechanism known as replication checkpoint that is untimely linked to DNA replication. This signaling pathway ensures coordination of DNA synthesis with cell cycle progression. Failure to activate this checkpoint in response to perturbation of DNA synthesis (replication stress) results in forced cell division leading to chromosome fragmentation, aneuploidy, and genomic instability. In this review, we will describe current knowledge of the molecular determinants of the DNA replication checkpoint in eukaryotic cells and discuss a model of activation of this signaling pathway crucial for maintenance of genomic stability.

  16. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226583; The ATLAS collaboration; Filipčič, Andrej; Guan, Wen; Tsulaia, Vakhtang; Walker, Rodney; Wenaus, Torre

    2017-01-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from the resources that comprise the Grid computing of most experiments, therefore exploiting these resources requires a change in strategy for the experiment. The resources may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The ARC CE with its non-intrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the Event Service primarily to address the issue of jobs that can be terminated at any point when opportunistic resources are needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in...

  18. A new MCM modification cycle regulates DNA replication initiation.

    Science.gov (United States)

    Wei, Lei; Zhao, Xiaolan

    2016-03-01

    The MCM DNA helicase is a central regulatory target during genome replication. MCM is kept inactive during G1, and it initiates replication after being activated in S phase. During this transition, the only known chemical change to MCM is the gain of multisite phosphorylation that promotes cofactor recruitment. Because replication initiation is intimately linked to multiple biological cues, additional changes to MCM can provide further regulatory points. Here, we describe a yeast MCM SUMOylation cycle that regulates replication. MCM subunits undergo SUMOylation upon loading at origins in G1 before MCM phosphorylation. MCM SUMOylation levels then decline as MCM phosphorylation levels rise, thus suggesting an inhibitory role of MCM SUMOylation during replication. Indeed, increasing MCM SUMOylation impairs replication initiation, partly through promoting the recruitment of a phosphatase that decreases MCM phosphorylation and activation. We propose that MCM SUMOylation counterbalances kinase-based regulation, thus ensuring accurate control of replication initiation.

  19. Research Computing and Data for Geoscience

    OpenAIRE

    Smith, Preston

    2015-01-01

    This presentation will discuss the data storage and computational resources available for GIS researchers at Purdue.

  20. Modifications of the 3 '-UTR stem-loop of infectious bursal disease virus are allowed without influencing replication or virulence

    NARCIS (Netherlands)

    Boot, H.J.; Pritz-Verschuren, S.B.E.

    2004-01-01

    Many questions regarding the initiation of replication and translation of the segmented, double-stranded RNA genome of infectious bursal disease virus (IBDV) remain to be solved. Computer analysis shows that the non-polyadenylated extreme 3'-untranslated regions (UTRs) of the coding strand of both

  1. A CI-Independent Form of Replicative Inhibition: Turn Off of Early Replication of Bacteriophage Lambda

    Science.gov (United States)

    Hayes, Sidney; Horbay, Monique A.; Hayes, Connie

    2012-01-01

    Several earlier studies have described an unusual exclusion phenotype exhibited by cells with plasmids carrying a portion of the replication region of phage lambda. Cells exhibiting this inhibition phenotype (IP) prevent the plating of homo-immune and hybrid hetero-immune lambdoid phages. We have attempted to define aspects of IP, and show that it is directed to repλ phages. IP was observed in cells with plasmids containing a λ DNA fragment including oop, encoding a short OOP micro RNA, and part of the lambda origin of replication, oriλ, defined by iteron sequences ITN1-4 and an adjacent high AT-rich sequence. Transcription of the intact oop sequence from its promoter, pO, is required for IP, as are iterons ITN3–4, but not the high AT-rich portion of oriλ. The results suggest that IP silencing is directed to theta mode replication initiation from an infecting repλ genome, or an induced repλ prophage. Phage mutations suppressing IP (Sip) map within or adjacent to cro, or in O, or both. Our results for plasmid-based IP suggest the hypothesis that there is a natural mechanism for silencing early theta-mode replication initiation, i.e., the buildup of λ genomes with oop + oriλ+ sequence. PMID:22590552

  2. Computational Science at the Argonne Leadership Computing Facility

    Science.gov (United States)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  3. Replication of cultured lung epithelial cells

    International Nuclear Information System (INIS)

    Guzowski, D.; Bienkowski, R.

    1986-01-01

    The authors have investigated the conditions necessary to support replication of lung type 2 epithelial cells in culture. Cells were isolated from mature fetal rabbit lungs (29d gestation) and cultured on feeder layers of mitotically inactivated 3T3 fibroblasts. The epithelial nature of the cells was demonstrated by indirect immunofluorescent staining for keratin and by polyacid dichrome stain. Ultrastructural examination during the first week showed that the cells contained myofilaments, microvilli and lamellar bodies (markers for type 2 cells). The following changes were observed after the first week: increase in cell size; loss of lamellar bodies and appearance of multivesicular bodies; increase in rough endoplasmic reticulum and golgi; increase in tonofilaments and well-defined junctions. General cell morphology was good for up to 10 wk. Cells cultured on a plastic surface degenerated after 1 wk. Cell replication was assayed by autoradiography of cultures exposed to (3H)-thymidine and by direct cell counts. The cells did not replicate during the first week; however, between 2-10 wk the cells incorporated the label and went through approximately 6 population doublings. They have demonstrated that lung alveolar epithelial cells can replicate in culture if they are maintained on an appropriate substrate. The coincidence of ability to replicate and loss of markers for differentiation may reflect the dichotomy between growth and differentiation commonly observed in developing systems

  4. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    Science.gov (United States)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most of the existing software is desktop-based, designed to work on a single computer, which represents a major limitation in many ways, starting from limited computer processing, storage power, accessibility, availability, etc. The only feasible solution lies in the web and cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first one (VM1) is running on Amazon web services (AWS) and the second one (VM2) is running on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time and accessible from everywhere; it is scalable, works in a distributed computer environment, creates a real-time multiuser collaboration platform, is built from interoperable code and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), 3) user management. The web services are running on two VMs that are communicating over the internet providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state
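
    As a rough illustration of the three-service architecture described above (the record does not give the prototype's actual code or stack), here is a minimal Python sketch using Flask, assuming that library is available; all route names and the toy model computation are assumptions.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    FEATURES = {}  # data infrastructure (DI): store geospatial objects

    @app.route("/di/features/<name>", methods=["GET", "PUT"])
    def feature(name):
        if request.method == "PUT":
            FEATURES[name] = request.get_json()
            return jsonify({"stored": name})
        return jsonify(FEATURES.get(name, {}))

    @app.route("/wrm/run", methods=["POST"])
    def run_model():
        # water resources modelling (WRM): a placeholder computation standing
        # in for a real hydrological model
        params = request.get_json() or {}
        return jsonify({"total_inflow": sum(params.get("inflows", []))})

    USERS = set()  # user management service

    @app.route("/users/<name>", methods=["POST"])
    def add_user(name):
        USERS.add(name)
        return jsonify({"users": sorted(USERS)})

    if __name__ == "__main__":
        app.run()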

  5. Spacetime replication of continuous variable quantum information

    International Nuclear Information System (INIS)

    Hayden, Patrick; Nezami, Sepehr; Salton, Grant; Sanders, Barry C

    2016-01-01

    The theory of relativity requires that no information travel faster than light, whereas the unitarity of quantum mechanics ensures that quantum information cannot be cloned. These conditions provide the basic constraints that appear in information replication tasks, which formalize aspects of the behavior of information in relativistic quantum mechanics. In this article, we provide continuous variable (CV) strategies for spacetime quantum information replication that are directly amenable to optical or mechanical implementation. We use a new class of homologically constructed CV quantum error correcting codes to provide efficient solutions for the general case of information replication. As compared to schemes encoding qubits, our CV solution requires half as many shares per encoded system. We also provide an optimized five-mode strategy for replicating quantum information in a particular configuration of four spacetime regions designed not to be reducible to previously performed experiments. For this optimized strategy, we provide detailed encoding and decoding procedures using standard optical apparatus and calculate the recovery fidelity when finite squeezing is used. As such we provide a scheme for experimentally realizing quantum information replication using quantum optics. (paper)

  6. COPI is required for enterovirus 71 replication.

    Directory of Open Access Journals (Sweden)

    Jianmin Wang

    Full Text Available Enterovirus 71 (EV71), a member of the Picornaviridae family, is found in Asian countries where it causes a wide range of human diseases. No effective therapy is available for the treatment of these infections. Picornaviruses undergo RNA replication in association with membranes of infected cells. COPI and COPII have been shown to be involved in the formation of picornavirus-induced vesicles. Replication of several picornaviruses, including poliovirus and Echovirus 11 (EV11), is dependent on COPI or COPII. Here, we report that COPI, but not COPII, is required for EV71 replication. Replication of EV71 was inhibited by brefeldin A and golgicide A, inhibitors of COPI activity. Furthermore, we found EV71 2C protein interacted with COPI subunits by co-immunoprecipitation and GST pull-down assay, indicating that COPI coatomer might be directed to the viral replication complex through viral 2C protein. Additionally, because the pathway is conserved among different species of enteroviruses, it may represent a novel target for antiviral therapies.

  7. DNA replication after mutagenic treatment in Hordeum vulgare.

    Science.gov (United States)

    Kwasniewska, Jolanta; Kus, Arita; Swoboda, Monika; Braszewska-Zalewska, Agnieszka

    2016-12-01

    The temporal and spatial properties of DNA replication in plants related to DNA damage and mutagenesis is poorly understood. Experiments were carried out to explore the relationships between DNA replication, chromatin structure and DNA damage in nuclei from barley root tips. We quantitavely analysed the topological organisation of replication foci using pulse EdU labelling during the S phase and its relationship with the DNA damage induced by mutagenic treatment with maleic hydrazide (MH), nitroso-N-methyl-urea (MNU) and gamma ray. Treatment with mutagens did not change the characteristic S-phase patterns in the nuclei; however, the frequencies of the S-phase-labelled cells after treatment differed from those observed in the control cells. The analyses of DNA replication in barley nuclei were extended to the micronuclei induced by mutagens. Replication in the chromatin of the micronuclei was rare. The results of simultanous TUNEL reaction to identify cells with DNA strand breaks and the labelling of the S-phase cells with EdU revealed the possibility of DNA replication occurring in damaged nuclei. For the first time, the intensity of EdU fluorescence to study the rate of DNA replication was analysed. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Dynamic Resource Allocation with the arcControlTower

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Nilsen, Jon Kerr

    2015-01-01

    Distributed computing resources available for high-energy physics research are becoming less dedicated to one type of workflow and researchers’ workloads are increasingly exploiting modern computing technologies such as parallelism. The current pilot job management model used by many experiments relies on static dedicated resources and cannot easily adapt to these changes. The model used for ATLAS in Nordic countries and some other places enables a flexible job management system based on dynamic resources allocation. Rather than a fixed set of resources managed centrally, the model allows resources to be requested on the fly. The ARC Computing Element (ARC-CE) and ARC Control Tower (aCT) are the key components of the model. The aCT requests jobs from the ATLAS job management system (PanDA) and submits a fully-formed job description to ARC-CEs. ARC-CE can then dynamically request the required resources from the underlying batch system. In this paper we describe the architecture of the model and the experienc...

  9. The Design of Finite State Machine for Asynchronous Replication Protocol

    Science.gov (United States)

    Wang, Yanlong; Li, Zhanhuai; Lin, Wei; Hei, Minglei; Hao, Jianhua

    Data replication is a key way to design a disaster tolerance system and to achieve reliability and availability. It is difficult for a replication protocol to deal with the diverse and complex environment. This means that data is less well replicated than it ought to be. To reduce data loss and to optimize replication protocols, we (1) present a finite state machine, (2) run it to manage an asynchronous replication protocol and (3) report a simple evaluation of the asynchronous replication protocol based on our state machine. We show that our state machine keeps the asynchronous replication protocol running in the proper state, to the largest extent possible, in the event of the various failures that can occur. It can also be helpful in building replication-based disaster tolerance systems that ensure business continuity.
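
    To make the idea concrete, here is a minimal Python sketch (not the authors' design) of a finite state machine supervising an asynchronous replication protocol: a transition table maps (state, event) pairs to next states, and the state and event names below are illustrative assumptions.

    TRANSITIONS = {
        ("IDLE", "start"): "SYNCING",
        ("SYNCING", "caught_up"): "CONSISTENT",
        ("CONSISTENT", "write"): "SYNCING",
        ("SYNCING", "link_down"): "DEGRADED",
        ("CONSISTENT", "link_down"): "DEGRADED",
        ("DEGRADED", "link_up"): "RECOVERING",
        ("RECOVERING", "caught_up"): "CONSISTENT",
    }

    class ReplicationFSM:
        def __init__(self):
            self.state = "IDLE"

        def handle(self, event):
            # Unknown (state, event) pairs are ignored rather than crashing,
            # which is what keeps the replica in a well-defined state when
            # unexpected events occur.
            self.state = TRANSITIONS.get((self.state, event), self.state)
            return self.state

    fsm = ReplicationFSM()
    for ev in ["start", "caught_up", "write", "link_down", "link_up", "caught_up"]:
        print(ev, "->", fsm.handle(ev))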

  10. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    Science.gov (United States)

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.
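
    For reference, the robustness of magic used above admits a compact definition as a minimal one-norm over stabilizer decompositions (written here from our reading of the cited paper; normalization conventions may differ):

    \mathcal{R}(\rho) \;=\; \min_{x} \Big\{ \|x\|_{1} \;:\; \rho = \sum_{i} x_{i}\, |s_{i}\rangle\langle s_{i}| \Big\},

    where the sum ranges over the pure stabilizer states |s_i⟩ on the given number of qubits; the classical simulation overhead of the Gottesman-Knill-type scheme then grows roughly as \mathcal{R}(\rho)^{2}.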

  11. Computing and Communications Infrastructure for Network-Centric Warfare: Exploiting COTS, Assuring Performance

    Science.gov (United States)

    2004-06-01

    remote databases, has seen little vendor acceptance. Each database (Oracle, DB2, MySQL, etc.) has its own client-server protocol. Therefore each...existing standards – SQL, X.500/LDAP, FTP, etc. • View information dissemination as selective replication – state-oriented vs. message-oriented...allowing the application to start. The resource management system would serve as a broker to the resources, making sure that resources are not

  12. The Alleged Crisis and the Illusion of Exact Replication.

    Science.gov (United States)

    Stroebe, Wolfgang; Strack, Fritz

    2014-01-01

    There has been increasing criticism of the way psychologists conduct and analyze studies. These critiques as well as failures to replicate several high-profile studies have been used as justification to proclaim a "replication crisis" in psychology. Psychologists are encouraged to conduct more "exact" replications of published studies to assess the reproducibility of psychological research. This article argues that the alleged "crisis of replicability" is primarily due to an epistemological misunderstanding that emphasizes the phenomenon instead of its underlying mechanisms. As a consequence, a replicated phenomenon may not serve as a rigorous test of a theoretical hypothesis because identical operationalizations of variables in studies conducted at different times and with different subject populations might test different theoretical constructs. Therefore, we propose that for meaningful replications, attempts at reinstating the original circumstances are not sufficient. Instead, replicators must ascertain that conditions are realized that reflect the theoretical variable(s) manipulated (and/or measured) in the original study. © The Author(s) 2013.

  13. DNA replication stress restricts ribosomal DNA copy number.

    Science.gov (United States)

    Salim, Devika; Bradford, William D; Freeland, Amy; Cady, Gillian; Wang, Jianmin; Pruitt, Steven C; Gerton, Jennifer L

    2017-09-01

    Ribosomal RNAs (rRNAs) in budding yeast are encoded by ~100-200 repeats of a 9.1kb sequence arranged in tandem on chromosome XII, the ribosomal DNA (rDNA) locus. Copy number of rDNA repeat units in eukaryotic cells is maintained far in excess of the requirement for ribosome biogenesis. Despite the importance of the repeats for both ribosomal and non-ribosomal functions, it is currently not known how "normal" copy number is determined or maintained. To identify essential genes involved in the maintenance of rDNA copy number, we developed a droplet digital PCR based assay to measure rDNA copy number in yeast and used it to screen a yeast conditional temperature-sensitive mutant collection of essential genes. Our screen revealed that low rDNA copy number is associated with compromised DNA replication. Further, subculturing yeast under two separate conditions of DNA replication stress selected for a contraction of the rDNA array independent of the replication fork blocking protein, Fob1. Interestingly, cells with a contracted array grew better than their counterparts with normal copy number under conditions of DNA replication stress. Our data indicate that DNA replication stresses select for a smaller rDNA array. We speculate that this liberates scarce replication factors for use by the rest of the genome, which in turn helps cells complete DNA replication and continue to propagate. Interestingly, tumors from mini chromosome maintenance 2 (MCM2)-deficient mice also show a loss of rDNA repeats. Our data suggest that a reduction in rDNA copy number may indicate a history of DNA replication stress, and that rDNA array size could serve as a diagnostic marker for replication stress. Taken together, these data begin to suggest the selective pressures that combine to yield a "normal" rDNA copy number.
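
    The droplet digital PCR arithmetic behind such a copy-number assay is simple enough to sketch. The Poisson correction for partitioned PCR is standard; the target/reference framing and the example counts below are illustrative assumptions, not the authors' exact pipeline.

    import math

    def ddpcr_concentration(positive, total, droplet_volume_ul=0.00085):
        # Poisson-corrected copies per microliter: with fraction p of droplets
        # positive, the mean copies per droplet is lambda = -ln(1 - p).
        p = positive / total
        return -math.log(1.0 - p) / droplet_volume_ul

    def rdna_copy_number(rdna_pos, rdna_total, ref_pos, ref_total):
        # rDNA repeats per genome, normalized to a single-copy reference locus
        # measured at the same template dilution.
        return (ddpcr_concentration(rdna_pos, rdna_total)
                / ddpcr_concentration(ref_pos, ref_total))

    # Hypothetical counts: 9,000 of 20,000 droplets positive for rDNA versus
    # 120 of 20,000 for the reference gives roughly 99 repeats per genome.
    print(round(rdna_copy_number(9000, 20000, 120, 20000)))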

  16. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    Science.gov (United States)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.

  17. Chaotic interactions of self-replicating RNA.

    Science.gov (United States)

    Forst, C V

    1996-03-01

    A general system of high-order differential equations describing complex dynamics of replicating biomolecules is given. Symmetry relations and coordinate transformations of general replication systems leading to topologically equivalent systems are derived. Three chaotic attractors observed in Lotka-Volterra equations of dimension n = 3 are shown to represent three cross-sections of one and the same chaotic regime. Also a fractal torus in a generalized three-dimensional Lotka-Volterra model has been linked to one of the chaotic attractors. The strange attractors are studied in the equivalent four-dimensional catalytic replicator network. The fractal torus has been examined in adapted Lotka-Volterra equations. Analytic expressions are derived for the Lyapunov exponents of the flow in the replicator system. Lyapunov spectra for different pathways into chaos have been calculated. In the generalized Lotka-Volterra system a second inner rest point, coexisting with (quasi-)periodic orbits, can be observed, with an abundance of different bifurcations. Pathways from chaotic tori, via quasi-periodic tori, via limit cycles, via multi-periodic orbits emerging out of period-doubling bifurcations, to "simple" chaotic attractors can be found.
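
    The replicator flow underlying these systems is easy to write down and integrate numerically. The Python sketch below integrates a three-species replicator equation, dx_i/dt = x_i((Ax)_i - x·Ax), with an arbitrary illustrative interaction matrix, not one of the chaotic parameter sets from the paper.

    import numpy as np

    A = np.array([[0.0,  0.5, -0.1],
                  [-0.5, 0.0,  0.6],
                  [0.4, -0.6,  0.0]])  # illustrative interaction matrix

    def replicator_rhs(x):
        # dx_i/dt = x_i * ((Ax)_i - x^T A x): growth relative to mean fitness
        fitness = A @ x
        return x * (fitness - x @ fitness)

    def integrate(x0, dt=0.01, steps=10000):
        # fixed-step RK4 integration of the flow on the probability simplex
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            k1 = replicator_rhs(x)
            k2 = replicator_rhs(x + 0.5 * dt * k1)
            k3 = replicator_rhs(x + 0.5 * dt * k2)
            k4 = replicator_rhs(x + dt * k3)
            x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        return x

    print(integrate([0.5, 0.3, 0.2]))

    An n-dimensional Lotka-Volterra system is equivalent to an (n+1)-species replicator system of this form, which is the correspondence the abstract exploits between its n = 3 Lotka-Volterra equations and the four-dimensional catalytic replicator network.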

  18. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    Directory of Open Access Journals (Sweden)

    P. O. Umenne

    2012-12-01

    Full Text Available Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and when the task terminates, the Agents send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. Swarm and HYDRA computer architectures for Agents' execution were developed at the University of Surrey, UK in the 90s. The objective of the research was to develop a software-based computer architecture on which Agents execution could be explored. The combination of Intelligent Agents and HYDRA computer architecture gave rise to a new computer concept: the NET-Computer, in which the computing resources reside on the Internet. The Internet computers form the hardware and software resources, and the user is provided with a simple interface to access the Internet and run user tasks. The Agents autonomously roam the Internet (the NET-Computer), executing the tasks. A growing segment of the Internet is E-Commerce for online shopping for products and services. The Internet computing resources provide a marketplace for product suppliers and consumers alike. Consumers are looking for suppliers selling products and services, while suppliers are looking for buyers. Searching the vast amount of information available on the Internet causes a great deal of problems for both consumers and suppliers. Intelligent Agents executing on the NET-Computer can surf through the Internet and select specific information of interest to the user. The simulation results show that Intelligent Agents executing on the HYDRA computer architecture could be applied in E-Commerce.

  19. Insights into the Initiation of Eukaryotic DNA Replication.

    Science.gov (United States)

    Bruck, Irina; Perez-Arnaiz, Patricia; Colbert, Max K; Kaplan, Daniel L

    2015-01-01

    The initiation of DNA replication is a highly regulated event in eukaryotic cells to ensure that the entire genome is copied once and only once during S phase. The primary target of cellular regulation of eukaryotic DNA replication initiation is the assembly and activation of the replication fork helicase, the 11-subunit assembly that unwinds DNA at a replication fork. The replication fork helicase, called CMG for Cdc45-Mcm2-7, and GINS, assembles in S phase from the constituent Cdc45, Mcm2-7, and GINS proteins. The assembly and activation of the CMG replication fork helicase during S phase is governed by 2 S-phase specific kinases, CDK and DDK. CDK stimulates the interaction between Sld2, Sld3, and Dpb11, 3 initiation factors that are each required for the initiation of DNA replication. DDK, on the other hand, phosphorylates the Mcm2, Mcm4, and Mcm6 subunits of the Mcm2-7 complex. Sld3 recruits Cdc45 to Mcm2-7 in a manner that depends on DDK, and recent work suggests that Sld3 binds directly to Mcm2-7 and also to single-stranded DNA. Furthermore, recent work demonstrates that Sld3 and its human homolog Treslin substantially stimulate DDK phosphorylation of Mcm2. These data suggest that the initiation factor Sld3/Treslin coordinates the assembly and activation of the eukaryotic replication fork helicase by recruiting Cdc45 to Mcm2-7, stimulating DDK phosphorylation of Mcm2, and binding directly to single-stranded DNA as the origin is melted.

  20. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope are hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  1. Resource utilization and costs during the initial years of lung cancer screening with computed tomography in Canada.

    Science.gov (United States)

    Cressman, Sonya; Lam, Stephen; Tammemagi, Martin C; Evans, William K; Leighl, Natasha B; Regier, Dean A; Bolbocean, Corneliu; Shepherd, Frances A; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R; Mayo, John R; McWilliams, Annette; Couture, Christian; English, John C; Goffin, John; Hwang, David M; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J; Goss, Glenwood D; Nicholas, Garth; Seely, Jean M; Sekhon, Harmanjatinder S; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D; Tan, Wan C; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J

    2014-10-01

    It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer's perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400-$505) for the initial 18 months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553-$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone ($47,792; 95% CI, $43,254-$52,200; p = 0.061). In the Pan-Canadian study, the average cost to screen individuals at high risk of developing lung cancer using LDCT and the average initial cost of curative-intent treatment were lower than the average per-person cost of treating advanced-stage lung cancer, which infrequently results in a cure.
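    A quick back-of-envelope comparison of the reported figures can be scripted as below. Note that it naively adds the per-screened-person cost to the per-treated-patient cost and ignores the many screened participants who never require treatment, so it is illustrative only.

    ```python
    # Back-of-envelope check on the figures reported in the record above.
    screening = 453        # mean per-person LDCT screening cost, first 18 months (CAD)
    curative = 33_344      # mean 2-year cost of curative-intent surgery (CAD)
    advanced = 47_792      # mean cost of advanced-stage treatment (CAD)

    print(screening + curative)               # 33797: screening plus curative care
    print(advanced - (screening + curative))  # 13995 difference per treated case
    ```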

  2. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    Directory of Open Access Journals (Sweden)

    Shyamala Loganathan

    2015-01-01

    Full Text Available Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service to the user are the ultimate goals, and efficient scheduling is needed to achieve them. This research reports an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, such as the V-MCT and priority scheduling algorithms.
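    The abstract does not give the algorithm itself; the sketch below is a minimal illustration of scheduling that considers job type and resource availability, in the spirit of the description, and is not the authors' method. All class and field names are assumptions.

    ```python
    # Minimal type- and availability-aware scheduler: a job goes to the
    # least-loaded VM that can host its declared type; otherwise it waits.
    from dataclasses import dataclass

    @dataclass
    class VM:
        name: str
        free_cpu: int
        types: set                    # job types this VM can run

    @dataclass
    class Job:
        name: str
        cpu: int
        type: str

    def schedule(job, vms):
        candidates = [v for v in vms if job.type in v.types and v.free_cpu >= job.cpu]
        if not candidates:
            return None               # queue the job until capacity frees up
        best = max(candidates, key=lambda v: v.free_cpu)   # least-loaded first
        best.free_cpu -= job.cpu
        return best.name

    vms = [VM("vm1", 4, {"batch"}), VM("vm2", 8, {"batch", "interactive"})]
    print(schedule(Job("j1", 2, "interactive"), vms))      # vm2
    ```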

  3. Autonomous replication of plasmids bearing monkey DNA origin-enriched sequences

    International Nuclear Information System (INIS)

    Frappier, L.; Zannis-Hadjopoulos, M.

    1987-01-01

    Twelve clones of origin-enriched sequences (ORS) isolated from early replicating monkey (CV-1) DNA were examined for transient episomal replication in transfected CV-1, COS-7, and HeLa cells. Plasmid DNA was isolated at time intervals after transfection and screened by the Dpn I resistance assay or by the bromodeoxyuridine substitution assay to differentiate between input and replicated DNA. The authors identified four monkey ORS (ORS3, -8, -9, and -12) that can support plasmid replication in mammalian cells. This replication is carried out in a controlled and semiconservative manner characteristic of mammalian replicons. ORS replication was most efficient in HeLa cells. Electron microscopy showed ORS8 and ORS12 plasmids of the correct size with replication bubbles. Using a unique restriction site in ORS12, they mapped the replication bubble within the monkey DNA sequence.

  4. Materials Chemistry and Performance of Silicone-Based Replicating Compounds.

    Energy Technology Data Exchange (ETDEWEB)

    Brumbach, Michael T.; Mirabal, Alex James; Kalan, Michael; Trujillo, Ana B; Hale, Kevin

    2014-11-01

    Replicating compounds are used to cast reproductions of surface features on a variety of materials. Replicas allow for quantitative measurements and recordkeeping on parts that may otherwise be difficult to measure or maintain. In this study, the chemistry and replicating capability of several replicating compounds were investigated. Additionally, the residue remaining on material surfaces upon removal of replicas was quantified. Cleaning practices were tested for several different replicating compounds. For all replicating compounds investigated, a thin silicone residue was left by the replica. For some compounds, additional inorganic species could be identified in the residue. Simple solvent cleaning could remove some residue.

  5. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  6. Uncertain Context Factors in ERP Project Estimation are an Asset: Insights from a Semi-Replication Case Study in a Financial Services Firm

    NARCIS (Netherlands)

    Daneva, Maia

    This paper reports on the findings of a case study in a company in the financial services sector in which we replicated the use of a previously published approach to systematically balance the contextual uncertainties in the estimation of Enterprise Resource Planning (ERP) projects. The approach is

  7. Bayesian tests to quantify the result of a replication attempt

    NARCIS (Netherlands)

    Verhagen, J.; Wagenmakers, E.-J.

    2014-01-01

    Replication attempts are essential to the empirical sciences. Successful replication attempts increase researchers’ confidence in the presence of an effect, whereas failed replication attempts induce skepticism and doubt. However, it is often unclear to what extent a replication attempt results in

  8. Spatio-temporal re-organization of replication foci accompanies replication domain consolidation during human pluripotent stem cell lineage specification

    Science.gov (United States)

    Wilson, Korey A.; Elefanty, Andrew G.; Stanley, Edouard G.; Gilbert, David M.

    2016-01-01

    ABSTRACT Lineage specification of both mouse and human pluripotent stem cells (PSCs) is accompanied by spatial consolidation of chromosome domains and temporal consolidation of their replication timing. Replication timing and chromatin organization are both established during G1 phase at the timing decision point (TDP). Here, we have developed live cell imaging tools to track spatio-temporal replication domain consolidation during differentiation. First, we demonstrate that the fluorescence ubiquitination cell cycle indicator (Fucci) system is incapable of demarcating G1/S or G2/M cell cycle transitions. Instead, we employ a combination of fluorescent PCNA to monitor S phase progression, cytokinesis to demarcate mitosis, and fluorescent nucleotides to label early and late replication foci and track their 3D organization into sub-nuclear chromatin compartments throughout all cell cycle transitions. We find that, as human PSCs differentiate, the length of S phase devoted to replication of spatially clustered replication foci increases, coincident with global compartmentalization of domains into temporally clustered blocks of chromatin. Importantly, re-localization and anchorage of domains was completed prior to the onset of S phase, even in the context of an abbreviated PSC G1 phase. This approach can also be employed to investigate cell fate transitions in single PSCs, which could be seen to differentiate preferentially from G1 phase. Together, our results establish real-time, live-cell imaging methods for tracking cell cycle transitions during human PSC differentiation that can be applied to study chromosome domain consolidation and other aspects of lineage specification. PMID:27433885

  9. A Molecular Toolbox to Engineer Site-Specific DNA Replication Perturbation.

    Science.gov (United States)

    Larsen, Nicolai B; Hickson, Ian D; Mankouri, Hocine W

    2018-01-01

    Site-specific arrest of DNA replication is a useful tool for analyzing cellular responses to DNA replication perturbation. The E. coli Tus-Ter replication barrier can be reconstituted in eukaryotic cells as a system to engineer an unscheduled collision between a replication fork and an "alien" impediment to DNA replication. To further develop this system as a versatile tool, we describe a set of reagents and a detailed protocol that can be used to engineer Tus-Ter barriers into any locus in the budding yeast genome. Because the Tus-Ter complex is a bipartite system with intrinsic DNA replication-blocking activity, the reagents and protocols developed and validated in yeast could also be optimized to engineer site-specific replication fork barriers into other eukaryotic cell types.

  10. A Molecular Toolbox to Engineer Site-Specific DNA Replication Perturbation

    DEFF Research Database (Denmark)

    Larsen, Nicolai B; Hickson, Ian D; Mankouri, Hocine W

    2018-01-01

    " impediment to DNA replication. To further develop this system as a versatile tool, we describe a set of reagents and a detailed protocol that can be used to engineer Tus-Ter barriers into any locus in the budding yeast genome. Because the Tus-Ter complex is a bipartite system with intrinsic DNA replication......Site-specific arrest of DNA replication is a useful tool for analyzing cellular responses to DNA replication perturbation. The E. coli Tus-Ter replication barrier can be reconstituted in eukaryotic cells as a system to engineer an unscheduled collision between a replication fork and an "alien......-blocking activity, the reagents and protocols developed and validated in yeast could also be optimized to engineer site-specific replication fork barriers into other eukaryotic cell types....

  11. Implications of “too good to be true” for replication, theoretical claims, and experimental design: An example using prominent studies of racial bias

    Directory of Open Access Journals (Sweden)

    Greg Francis

    2016-09-01

    Full Text Available In response to concerns about the validity of empirical findings in psychology, some scientists use replication studies as a way to validate good science and to identify poor science. Such efforts are resource intensive and are sometimes controversial (with accusations of researcher incompetence when a replication fails to show a previous result). An alternative approach is to examine the statistical properties of the reported literature to identify some cases of poor science. This review discusses some details of this process for prominent findings about racial bias, where a set of studies seems too good to be true. This kind of analysis is based on the original studies, so it avoids criticism from the original authors about the validity of replication studies. The analysis is also much easier to perform than a new empirical study. A variation of the analysis can also be used to explore whether it makes sense to run a replication study. As demonstrated here, there are situations where the existing data suggest that a direct replication of a set of studies is not worth the effort. Such a conclusion should motivate scientists to generate alternative experimental designs that better test theoretical ideas.
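    One common formalization of a "too good to be true" check (in the spirit of the analysis described, though not necessarily the author's exact procedure) multiplies the estimated power of each study: if every study in a set was reported as significant but the joint probability of that happening is small, the set is suspect. A sketch with hypothetical effect and sample sizes:

    ```python
    # Estimate the power of each reported two-sample test, multiply the
    # powers, and flag a uniformly significant set whose joint success
    # probability is implausibly low (0.1 is a conventional threshold).
    from statsmodels.stats.power import TTestIndPower
    import numpy as np

    studies = [        # (Cohen's d, per-group n) -- placeholders, not real data
        (0.45, 30), (0.40, 28), (0.50, 25), (0.42, 32), (0.48, 27),
    ]
    analysis = TTestIndPower()
    powers = [analysis.power(effect_size=d, nobs1=n, alpha=0.05) for d, n in studies]
    p_all_success = np.prod(powers)
    print(powers)
    print(p_all_success)   # < 0.1 suggests the set is "too good to be true"
    ```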

  12. Structural properties of replication origins in yeast DNA sequences

    International Nuclear Information System (INIS)

    Cao Xiaoqin; Zeng Jia; Yan Hong

    2008-01-01

    Sequence-dependent DNA flexibility is an important structural property originating from the DNA 3D structure. In this paper, we investigate the DNA flexibility of the budding yeast (S. cerevisiae) replication origins on a genome-wide scale using flexibility parameters from two different models, the trinucleotide and the tetranucleotide models. Based on analyzing average flexibility profiles of 270 replication origins, we find that yeast replication origins are significantly rigid compared with their surrounding genomic regions. To further understand this highly distinctive property of replication origins, we compare the flexibility patterns between yeast replication origins and promoters, and find that they both contain significantly rigid DNA. Our results suggest that DNA flexibility is an important factor that helps proteins recognize and bind their target sites in order to initiate DNA replication. Inspired by the role of the rigid region in promoters, we speculate that the rigid replication origins may facilitate binding of proteins, including the origin recognition complex (ORC), Cdc6, Cdt1 and the MCM2-7 complex.
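    A flexibility profile of the kind analyzed above can be computed by averaging model parameters over a sliding window. The sketch below uses a tiny hypothetical trinucleotide parameter table; real models tabulate all 64 trinucleotides (or 256 tetranucleotides), and the values here are placeholders.

    ```python
    # Sliding-window flexibility profile from trinucleotide parameters.
    import numpy as np

    FLEX = {"AAA": 0.9, "AAT": 1.1, "ATT": 1.3, "TTC": 2.0, "TCG": 2.8,
            "CGA": 2.5}        # hypothetical flexibility parameters
    DEFAULT = 1.5              # placeholder for trinucleotides not listed

    def profile(seq, window=51):
        # Value for each overlapping trinucleotide, then a moving average.
        tri = np.array([FLEX.get(seq[i:i+3], DEFAULT) for i in range(len(seq) - 2)])
        kernel = np.ones(window) / window
        return np.convolve(tri, kernel, mode="valid")

    seq = "AATTCGAAATTTCGA" * 20
    p = profile(seq)
    print(p.min(), p.mean())   # dips below the mean mark locally rigid DNA
    ```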

  13. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use; it allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  14. From structure to mechanism—understanding initiation of DNA replication

    Science.gov (United States)

    Riera, Alberto; Barbon, Marta; Noguchi, Yasunori; Reuter, L. Maximilian; Schneider, Sarah; Speck, Christian

    2017-01-01

    DNA replication results in the doubling of the genome prior to cell division. This process requires the assembly of 50 or more protein factors into a replication fork. Here, we review recent structural and biochemical insights that start to explain how specific proteins recognize DNA replication origins, load the replicative helicase on DNA, unwind DNA, synthesize new DNA strands, and reassemble chromatin. We focus on the minichromosome maintenance (MCM2–7) proteins, which form the core of the eukaryotic replication fork, as this complex undergoes major structural rearrangements in order to engage with DNA, regulate its DNA-unwinding activity, and maintain genome stability. PMID:28717046

  15. MYC and the Control of DNA Replication

    Science.gov (United States)

    Dominguez-Sola, David; Gautier, Jean

    2014-01-01

    The MYC oncogene is a multifunctional protein that is aberrantly expressed in a significant fraction of tumors from diverse tissue origins. Because of its multifunctional nature, it has been difficult to delineate the exact contributions of MYC’s diverse roles to tumorigenesis. Here, we review the normal role of MYC in regulating DNA replication as well as its ability to generate DNA replication stress when overexpressed. Finally, we discuss the possible mechanisms by which replication stress induced by aberrant MYC expression could contribute to genomic instability and cancer. PMID:24890833

  16. Organization of Replication of Ribosomal DNA in Saccharomyces cerevisiae

    NARCIS (Netherlands)

    Linskens, Maarten H.K.; Huberman, Joel A.

    1988-01-01

    Using recently developed replicon mapping techniques, we have analyzed the replication of the ribosomal DNA in Saccharomyces cerevisiae. The results show that (i) the functional origin of replication colocalizes with an autonomously replicating sequence element previously mapped to the

  17. Cyclophilin B facilitates the replication of Orf virus.

    Science.gov (United States)

    Zhao, Kui; Li, Jida; He, Wenqi; Song, Deguang; Zhang, Ximu; Zhang, Di; Zhou, Yanlong; Gao, Feng

    2017-06-15

    Viruses interact with host cellular factors to construct a more favourable environment for their efficient replication. Expression of cyclophilin B (CypB), a cellular peptidyl-prolyl cis-trans isomerase (PPIase), was found to be significantly up-regulated. Recently, a number of studies have shown that CypB is important in the replication of several viruses, including Japanese encephalitis virus (JEV), hepatitis C virus (HCV) and human papillomavirus type 16 (HPV 16). However, the function of cellular CypB in ORFV replication has not yet been explored. The suppression subtractive hybridization (SSH) technique was applied to identify genes differentially expressed in ORFV-infected MDBK cells at an early phase of infection. Cellular CypB was confirmed to be significantly up-regulated by quantitative reverse transcription-PCR (qRT-PCR) analysis and Western blotting. The role of CypB in ORFV infection was further determined using Cyclosporin A (CsA) and RNA interference (RNAi), and the effect of CypB gene silencing on ORFV replication was assessed by 50% tissue culture infectious dose (TCID50) assay and qRT-PCR. In the present study, CypB was found to be significantly up-regulated in ORFV-infected MDBK cells at an early phase of infection. CsA exhibited suppressive effects on ORFV replication through the inhibition of CypB, and silencing of the CypB gene inhibited the replication of ORFV in MDBK cells. In conclusion, these data suggest that CypB is critical for the efficient replication of the ORFV genome.

  18. Autophagy Facilitates Salmonella Replication in HeLa Cells

    Science.gov (United States)

    Yu, Hong B.; Croxen, Matthew A.; Marchiando, Amanda M.; Ferreira, Rosana B. R.; Cadwell, Ken; Foster, Leonard J.; Finlay, B. Brett

    2014-01-01

    ABSTRACT Autophagy is a process whereby a double-membrane structure (autophagosome) engulfs unnecessary cytosolic proteins, organelles, and invading pathogens and delivers them to the lysosome for degradation. We examined the fate of cytosolic Salmonella targeted by autophagy and found that autophagy-targeted Salmonella present in the cytosol of HeLa cells correlates with intracellular bacterial replication. Real-time analyses revealed that a subset of cytosolic Salmonella extensively associates with autophagy components p62 and/or LC3 and replicates quickly, whereas intravacuolar Salmonella shows no or very limited association with p62 or LC3 and replicates much more slowly. Replication of cytosolic Salmonella in HeLa cells is significantly decreased when autophagy components are depleted. Eventually, hyperreplication of cytosolic Salmonella potentiates cell detachment, facilitating the dissemination of Salmonella to neighboring cells. We propose that Salmonella benefits from autophagy for its cytosolic replication in HeLa cells. PMID:24618251

  19. Stabilization of Reversed Replication Forks by Telomerase Drives Telomere Catastrophe.

    Science.gov (United States)

    Margalef, Pol; Kotsantis, Panagiotis; Borel, Valerie; Bellelli, Roberto; Panier, Stephanie; Boulton, Simon J

    2018-01-25

    Telomere maintenance critically depends on the distinct activities of telomerase, which adds telomeric repeats to solve the end replication problem, and RTEL1, which dismantles DNA secondary structures at telomeres to facilitate replisome progression. Here, we establish that reversed replication forks are a pathological substrate for telomerase and the source of telomere catastrophe in Rtel1 -/- cells. Inhibiting telomerase recruitment to telomeres, but not its activity, or blocking replication fork reversal through PARP1 inhibition or depleting UBC13 or ZRANB3 prevents the rapid accumulation of dysfunctional telomeres in RTEL1-deficient cells. In this context, we establish that telomerase binding to reversed replication forks inhibits telomere replication, which can be mimicked by preventing replication fork restart through depletion of RECQ1 or PARG. Our results lead us to propose that telomerase inappropriately binds to and inhibits restart of reversed replication forks within telomeres, which compromises replication and leads to critically short telomeres. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Chronic DNA Replication Stress Reduces Replicative Lifespan of Cells by TRP53-Dependent, microRNA-Assisted MCM2-7 Downregulation.

    Directory of Open Access Journals (Sweden)

    Gongshi Bai

    2016-01-01

    Full Text Available Circumstances that compromise efficient DNA replication, such as disruptions to replication fork progression, cause a state known as DNA replication stress (RS). Whereas normally proliferating cells experience low levels of RS, excessive RS from intrinsic or extrinsic sources can trigger cell cycle arrest and senescence. Here, we report that a key driver of RS-induced senescence is active downregulation of the Minichromosome Maintenance 2-7 (MCM2-7) factors that are essential for replication origin licensing and which constitute the replicative helicase core. Proliferating cells produce high levels of MCM2-7 that enable formation of dormant origins that can be activated in response to acute, experimentally-induced RS. However, little is known about how physiological RS levels impact MCM2-7 regulation. We found that chronic exposure of primary mouse embryonic fibroblasts (MEFs) to either genetically-encoded or environmentally-induced RS triggered gradual MCM2-7 repression, followed by inhibition of replication and senescence that could be accelerated by MCM hemizygosity. The MCM2-7 reduction in response to RS is TRP53-dependent, and involves a group of Trp53-dependent miRNAs, including the miR-34 family, that repress MCM expression in replication-stressed cells before they undergo terminal cell cycle arrest. miR-34 ablation partially rescued MCM2-7 downregulation and genomic instability in mice with endogenous RS. Together, these data demonstrate that active MCM2-7 repression is a physiologically important mechanism for RS-induced cell cycle arrest and genome maintenance on an organismal level.

  1. Cloud Computing Security: A Survey

    OpenAIRE

    Khalil, Issa; Khreishah, Abdallah; Azeem, Muhammad

    2014-01-01

    Cloud computing is an emerging technology paradigm that migrates current technological and computing concepts into utility-like solutions similar to electricity and water systems. Clouds bring out a wide range of benefits including configurable computing resources, economic savings, and service flexibility. However, security and privacy concerns are shown to be the primary obstacles to a wide adoption of clouds. The new concepts that clouds introduce, such as multi-tenancy, resource sharing a...

  2. Hepatitis C Virus Replication Depends on Endosomal Cholesterol Homeostasis.

    Science.gov (United States)

    Stoeck, Ina Karen; Lee, Ji-Young; Tabata, Keisuke; Romero-Brey, Inés; Paul, David; Schult, Philipp; Lohmann, Volker; Kaderali, Lars; Bartenschlager, Ralf

    2018-01-01

    Similar to other positive-strand RNA viruses, hepatitis C virus (HCV) causes massive rearrangements of intracellular membranes, resulting in a membranous web (MW) composed of predominantly double-membrane vesicles (DMVs), the presumed sites of RNA replication. DMVs are enriched for cholesterol, but mechanistic details on the source and recruitment of cholesterol to the viral replication organelle are only partially known. Here we focused on selected lipid transfer proteins implicated in direct lipid transfer at various endoplasmic reticulum (ER)-membrane contact sites. RNA interference (RNAi)-mediated knockdown identified several hitherto unknown HCV dependency factors, such as steroidogenic acute regulatory protein-related lipid transfer domain protein 3 (STARD3), oxysterol-binding protein-related protein 1A and -B (OSBPL1A and -B), and Niemann-Pick-type C1 (NPC1), all residing at late endosome and lysosome membranes and required for efficient HCV RNA replication but not for replication of the closely related dengue virus. Focusing on NPC1, we found that knockdown or pharmacological inhibition caused cholesterol entrapment in lysosomal vesicles concomitant with decreased cholesterol abundance at sites containing the viral replicase factor NS5A. In untreated HCV-infected cells, unesterified cholesterol accumulated at the perinuclear region, partially colocalizing with NS5A at DMVs, arguing for NPC1-mediated endosomal cholesterol transport to the viral replication organelle. Consistent with cholesterol being an important structural component of DMVs, reducing NPC1-dependent endosomal cholesterol transport impaired MW integrity. This suggests that HCV usurps lipid transfer proteins, such as NPC1, at ER-late endosome/lysosome membrane contact sites to recruit cholesterol to the viral replication organelle, where it contributes to MW functionality. IMPORTANCE A key feature of the replication of positive-strand RNA viruses is the rearrangement of the host cell

  3. The dynamic management system for grid resources information of IHEP

    International Nuclear Information System (INIS)

    Gu Ming; Sun Gongxing; Zhang Weiyi

    2003-01-01

    The Grid information system is an essential base for building a Grid computing environment: it collects the information of every resource in a Grid in a timely manner, and provides an entire information view of all resources to the other components in a Grid computing system. Grid technology can strongly support the computing needs of HEP (High Energy Physics), with its big-science and multi-organization features. In this article, the architecture and implementation of a dynamic management system are described, based on the Grid and LDAP (Lightweight Directory Access Protocol), including a Web-based design for resource information collecting, querying and modifying. (authors)
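    LDAP-based information systems of this kind are typically queried as in the sketch below, which uses the third-party Python ldap3 package. The host, port, base DN, and attribute names are placeholders, since each deployment (e.g. Globus MDS) defines its own schema.

    ```python
    # Query a Grid information service over LDAP and list resource entries.
    from ldap3 import Server, Connection, ALL

    server = Server("ldap://giis.example.org:2135", get_info=ALL)  # placeholder host
    conn = Connection(server, auto_bind=True)                      # anonymous bind

    conn.search(
        search_base="Mds-Vo-name=local,o=grid",     # placeholder base DN
        search_filter="(objectClass=*)",
        attributes=["cpuTotal", "loadAverage"],     # placeholder attributes
    )
    for entry in conn.entries:
        print(entry.entry_dn, entry.entry_attributes_as_dict)
    ```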

  4. Designing and Implementing a Parenting Resource Center for Pregnant Teens

    Science.gov (United States)

    Broussard, Anne B; Broussard, Brenda S

    2009-01-01

    The Resource Center for Young Parents-To-Be is a longstanding and successful grant-funded project that was initiated as a response to an identified community need. Senior-level baccalaureate nursing students and their maternity-nursing instructors are responsible for staffing the resource center's weekly sessions, which take place at a public school site for pregnant adolescents. Childbirth educators interested in working with this population could assist in replicating this exemplary clinical project in order to provide prenatal education to this vulnerable and hard-to-reach group. PMID:20190852

  5. Geminin: a major DNA replication safeguard in higher eukaryotes

    DEFF Research Database (Denmark)

    Melixetian, Marina; Helin, Kristian

    2004-01-01

    Eukaryotes have evolved multiple mechanisms to restrict DNA replication to once per cell cycle. These mechanisms prevent relicensing of origins of replication after initiation of DNA replication in S phase until the end of mitosis. Most of our knowledge of mechanisms controlling prereplication...

  6. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
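    The underlying recurrence is simple to state: ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1), after which any rectangle sum costs four lookups. The serial reference version below is the computation that the paper's row-parallel hardware algorithms decompose; it is a sketch for orientation, not the proposed architecture.

    ```python
    # Integral image via the standard recurrence, with a zero-padded border
    # so the x = 0 and y = 0 cases need no special handling.
    import numpy as np

    def integral_image(img):
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                # One addition chain per pixel: current + left + above - diagonal.
                ii[y+1, x+1] = img[y, x] + ii[y, x+1] + ii[y+1, x] - ii[y, x]
        return ii

    def rect_sum(ii, top, left, h, w):
        # Four lookups, independent of rectangle size.
        return ii[top+h, left+w] - ii[top, left+w] - ii[top+h, left] + ii[top, left]

    img = np.arange(16, dtype=np.int64).reshape(4, 4)
    ii = integral_image(img)
    assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
    print(rect_sum(ii, 1, 1, 2, 2))   # 30
    ```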

  7. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  8. Optimal data replication: A new approach to optimizing parallel EM algorithms on a mesh-connected multiprocessor for 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.

    1995-01-01

    The EM algorithm promises an estimated image with the maximal likelihood for 3D PET image reconstruction. However, due to its long computation time, the EM algorithm has not been widely used in practice. While several parallel implementations of the EM algorithm have been developed to make the EM algorithm feasible, they do not guarantee an optimal parallelization efficiency. In this paper, the authors propose a new parallel EM algorithm which maximizes the performance by optimizing data replication on a mesh-connected message-passing multiprocessor. To optimize data replication, the authors have formally derived the optimal allocation of shared data, group sizes, integration and broadcasting of replicated data as well as the scheduling of shared data accesses. The proposed parallel EM algorithm has been implemented on an iPSC/860 with 16 PEs. The experimental and theoretical results, which are consistent with each other, have shown that the proposed parallel EM algorithm could improve performance substantially over those using unoptimized data replication
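    For orientation, the serial ML-EM update that such parallel schemes accelerate is the standard multiplicative iteration sketched below; the system matrix is a random stand-in for a real PET geometry, and the partitioning/replication strategy of the paper is not reproduced here.

    ```python
    # Reference ML-EM iteration: with system matrix A (detector bins x voxels),
    # measured counts y, and current image lam, each step computes
    #   lam <- lam / sum_i(A) * A^T ( y / (A lam) ).
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((200, 50))            # hypothetical system matrix
    truth = rng.random(50)
    y = rng.poisson(A @ truth * 100)     # simulated detector counts

    lam = np.ones(50)                    # flat initial image
    sens = A.sum(axis=0)                 # per-voxel sensitivity
    for _ in range(50):
        proj = A @ lam                           # forward projection
        lam = lam / sens * (A.T @ (y / proj))    # multiplicative update
    print(np.corrcoef(lam, truth * 100)[0, 1])   # estimate tracks the truth
    ```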

  9. Optical tweezers reveal how proteins alter replication

    Science.gov (United States)

    Chaurasiya, Kathy

    Single molecule force spectroscopy is a powerful method that explores the DNA interaction properties of proteins involved in a wide range of fundamental biological processes such as DNA replication, transcription, and repair. We use optical tweezers to capture and stretch a single DNA molecule in the presence of proteins that bind DNA and alter its mechanical properties. We quantitatively characterize the DNA binding mechanisms of proteins in order to provide a detailed understanding of their function. In this work, we focus on proteins involved in replication of Escherichia coli (E. coli ), endogenous eukaryotic retrotransposons Ty3 and LINE-1, and human immunodeficiency virus (HIV). DNA polymerases replicate the entire genome of the cell, and bind both double-stranded DNA (dsDNA) and single-stranded DNA (ssDNA) during DNA replication. The replicative DNA polymerase in the widely-studied model system E. coli is the DNA polymerase III subunit alpha (DNA pol III alpha). We use optical tweezers to determine that UmuD, a protein that regulates bacterial mutagenesis through its interactions with DNA polymerases, specifically disrupts alpha binding to ssDNA. This suggests that UmuD removes alpha from its ssDNA template to allow DNA repair proteins access to the damaged DNA, and to facilitate exchange of the replicative polymerase for an error-prone translesion synthesis (TLS) polymerase that inserts nucleotides opposite the lesions, so that bacterial DNA replication may proceed. This work demonstrates a biophysical mechanism by which E. coli cells tolerate DNA damage. Retroviruses and retrotransposons reproduce by copying their RNA genome into the nuclear DNA of their eukaryotic hosts. Retroelements encode proteins called nucleic acid chaperones, which rearrange nucleic acid secondary structure and are therefore required for successful replication. The chaperone activity of these proteins requires strong binding affinity for both single- and double-stranded nucleic

  10. From structure to mechanism-understanding initiation of DNA replication.

    Science.gov (United States)

    Riera, Alberto; Barbon, Marta; Noguchi, Yasunori; Reuter, L Maximilian; Schneider, Sarah; Speck, Christian

    2017-06-01

    DNA replication results in the doubling of the genome prior to cell division. This process requires the assembly of 50 or more protein factors into a replication fork. Here, we review recent structural and biochemical insights that start to explain how specific proteins recognize DNA replication origins, load the replicative helicase on DNA, unwind DNA, synthesize new DNA strands, and reassemble chromatin. We focus on the minichromosome maintenance (MCM2-7) proteins, which form the core of the eukaryotic replication fork, as this complex undergoes major structural rearrangements in order to engage with DNA, regulate its DNA-unwinding activity, and maintain genome stability. © 2017 Riera et al.; Published by Cold Spring Harbor Laboratory Press.

  11. Murine leukemia virus (MLV) replication monitored with fluorescent proteins

    Directory of Open Access Journals (Sweden)

    Bittner Alexandra

    2004-12-01

    Full Text Available Abstract Background Cancer gene therapy will benefit from vectors that are able to replicate in tumor tissue and cause a bystander effect. Replication-competent murine leukemia virus (MLV) has been described to have potential as a cancer therapeutic; however, MLV infection does not cause a cytopathic effect in the infected cell, and viral replication can only be studied by immunostaining or measurement of reverse transcriptase activity. Results We inserted the coding sequences for green fluorescent protein (GFP) into the proline-rich region (PRR) of the ecotropic envelope protein (Env) and were able to fluorescently label MLV. This allowed us to directly monitor viral replication and attachment to target cells by flow cytometry. We used this method to study viral replication of recombinant MLVs and split viral genomes, which were generated by replacement of the MLV env gene with the red fluorescent protein (RFP) and separately cloning GFP-Env into a retroviral vector. Co-transfection of both plasmids into target cells resulted in the generation of semi-replicative vectors, and the two-color labeling allowed us to determine the distribution of the individual genomes in the target cells and was indicative of the occurrence of recombination events. Conclusions Fluorescently labeled MLVs are excellent tools for the study of factors that influence viral replication and can be used to optimize MLV-based replication-competent viruses or vectors for gene therapy.

  12. Cytoplasmic ATR Activation Promotes Vaccinia Virus Genome Replication

    Directory of Open Access Journals (Sweden)

    Antonio Postigo

    2017-05-01

    Full Text Available In contrast to most DNA viruses, poxviruses replicate their genomes in the cytoplasm without host involvement. We find that vaccinia virus induces cytoplasmic activation of ATR early during infection, before genome uncoating, which is unexpected because ATR plays a fundamental nuclear role in maintaining host genome integrity. ATR, RPA, INTS7, and Chk1 are recruited to cytoplasmic DNA viral factories, suggesting canonical ATR pathway activation. Consistent with this, pharmacological and RNAi-mediated inhibition of canonical ATR signaling suppresses genome replication. RPA and the sliding clamp PCNA interact with the viral polymerase E9 and are required for DNA replication. Moreover, the ATR activator TOPBP1 promotes genome replication and associates with the viral replisome component H5. Our study suggests that, in contrast to long-held beliefs, vaccinia recruits conserved components of the eukaryote DNA replication and repair machinery to amplify its genome in the host cytoplasm.

  13. DPSO resource load balancing algorithm in a cloud computing environment

    Institute of Scientific and Technical Information of China (English)

    冯小靖; 潘郁

    2013-01-01

    Load balancing is one of the hot issues in cloud computing. A discrete particle swarm optimization (DPSO) algorithm is used to study load balancing in a cloud computing environment. Because resource demand changes dynamically and the demands placed on individual servers are low, each resource management node serves as a node of the topological structure, and an appropriate resource-task allocation model is established and solved with DPSO. Verification results show that the algorithm improves resource utilization and the load balance of cloud resources.
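    The abstract gives no pseudocode; the following is a minimal discrete-PSO sketch for the task-to-node assignment problem it describes, with load imbalance as the fitness. The move rule (probabilistically copying entries from personal and global bests, plus a small mutation) is one common DPSO discretization, not necessarily the authors' formulation, and all parameter values are placeholders.

    ```python
    # Discrete PSO for assigning tasks to resource nodes to balance load.
    import numpy as np

    rng = np.random.default_rng(1)
    task_cost = rng.integers(1, 10, size=20)   # hypothetical task demands
    n_nodes, n_particles, iters = 4, 30, 100

    def imbalance(assign):
        # Fitness: spread of per-node load (lower = better balanced).
        loads = np.bincount(assign, weights=task_cost, minlength=n_nodes)
        return loads.std()

    pos = rng.integers(0, n_nodes, size=(n_particles, task_cost.size))
    pbest = pos.copy()
    pbest_f = np.array([imbalance(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        for i in range(n_particles):
            r = rng.random(task_cost.size)
            pos[i] = np.where(r < 0.4, pbest[i], pos[i])              # pull to pbest
            pos[i] = np.where((r >= 0.4) & (r < 0.7), gbest, pos[i])  # pull to gbest
            mut = rng.random(task_cost.size) < 0.05                   # exploration
            pos[i][mut] = rng.integers(0, n_nodes, mut.sum())
            f = imbalance(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i].copy(), f
        gbest = pbest[pbest_f.argmin()].copy()

    print(imbalance(gbest))   # near-zero spread = well-balanced nodes
    ```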

  14. The DNA Replication Stress Hypothesis of Alzheimer’s Disease

    Directory of Open Access Journals (Sweden)

    Yuri B. Yurov

    2011-01-01

    Full Text Available A well-recognized theory of Alzheimer’s disease (AD) pathogenesis suggests ectopic cell cycle events to mediate neurodegeneration. Vulnerable neurons of the AD brain exhibit biomarkers of cell cycle progression and DNA replication, suggesting a reentry into the cell cycle. Chromosome reduplication without proper cell cycle completion and mitotic division probably causes neuronal cell dysfunction and death. However, this theory seems to require some inputs in accordance with the generally recognized amyloid cascade theory, as well as to explain causes and consequences of genomic instability (aneuploidy) in the AD brain. We propose that unscheduled and incomplete DNA replication (replication stress) destabilizes the (epi)genomic landscape in the brain and leads to a DNA replication “catastrophe” causing cell death during the S phase (replicative cell death). DNA replication stress can be a key element of the pathogenetic cascade explaining the interplay between ectopic cell cycle events and genetic instabilities in the AD brain. Abnormal cell cycle reentry and somatic genome variations can be used for updating the cell cycle theory, introducing replication stress as a missing link between cell genetics and neurobiology of AD.

  15. Cloud Computing Security: Latest Issues &amp; Countermeasures

    Directory of Open Access Journals (Sweden)

    Shelveen Pandey

    2015-08-01

    Full Text Available Abstract Cloud computing describes effective computing services provided by a third-party organization, known as a cloud service provider, for organizations to perform different tasks over the internet for a fee. A cloud service provider's computing resources are dynamically reallocated on demand, and its infrastructure, platform, software, and other resources are shared by multiple corporate and private clients. With the steady increase over the years in the number of cloud computing subscribers to these shared resources, security on the cloud is a growing concern. In this review paper, the current cloud security issues and practices are described, and a few innovative solutions are proposed that can help improve cloud computing security in the future.

  16. Cloud Computing: Architecture and Services

    OpenAIRE

    Ms. Ravneet Kaur

    2018-01-01

    Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid. It is a method for delivering information technology (IT) services where resources are retrieved from the Internet through web-based tools and applications, as opposed to a direct connection to a server. Rather than keeping files on a proprietary hard drive or local storage device, cloud-based storage makes it possib...

  17. DIaaS: Resource Management System for the Intra-Cloud with On-Premise Desktops

    Directory of Open Access Journals (Sweden)

    Hyun-Woo Kim

    2017-01-01

    Full Text Available Infrastructure as a service with desktops (DIaaS) based on the extensible mark-up language (XML) is herein proposed to utilize surplus resources. DIaaS is a traditional surplus-resource integrated management technology. It is designed to provide fast work distribution and computing services based on user service requests, as well as storage services, through desktop-based distributed computing and storage resource integration. DIaaS includes a nondisruptive resource service and an auto-scalable scheme to enhance the availability and scalability of intra-cloud computing resources. A performance evaluation of the proposed scheme measured the clustering performance time for surplus resource utilization. The results showed improvement in computing and storage services in a connection of at least two computers compared to the traditional method for high-availability measurement of nondisruptive services. Furthermore, an artificial server error environment was used to create a clustering delay for computing and storage services and for nondisruptive services. It was compared to the Hadoop distributed file system (HDFS).

  18. Four PPPPerspectives on Computational Creativity in theory and in practice

    OpenAIRE

    Jordanous, Anna

    2016-01-01

    Computational creativity is the modelling, simulating or replicating of creativity computationally. In examining and learning from these `creative systems', from what perspective should the creativity of a system be considered? Are we interested in the creativity of the system's output? Or of its creative processes? Features of the system? Or how it operates within its environment? Traditionally computational creativity has focused more on creative systems' products or processes, though this ...

  19. Regulated eukaryotic DNA replication origin firing with purified proteins.

    Science.gov (United States)

    Yeeles, Joseph T P; Deegan, Tom D; Janska, Agnieszka; Early, Anne; Diffley, John F X

    2015-03-26

    Eukaryotic cells initiate DNA replication from multiple origins, which must be tightly regulated to promote precise genome duplication in every cell cycle. To accomplish this, initiation is partitioned into two temporally discrete steps: a double hexameric minichromosome maintenance (MCM) complex is first loaded at replication origins during G1 phase, and then converted to the active CMG (Cdc45-MCM-GINS) helicase during S phase. Here we describe the reconstitution of budding yeast DNA replication initiation with 16 purified replication factors, made from 42 polypeptides. Origin-dependent initiation recapitulates regulation seen in vivo. Cyclin-dependent kinase (CDK) inhibits MCM loading by phosphorylating the origin recognition complex (ORC) and promotes CMG formation by phosphorylating Sld2 and Sld3. Dbf4-dependent kinase (DDK) promotes replication by phosphorylating MCM, and can act either before or after CDK. These experiments define the minimum complement of proteins, protein kinase substrates and co-factors required for regulated eukaryotic DNA replication.

  20. DNA replication stress and cancer chemotherapy.

    Science.gov (United States)

    Kitao, Hiroyuki; Iimori, Makoto; Kataoka, Yuki; Wakasa, Takeshi; Tokunaga, Eriko; Saeki, Hiroshi; Oki, Eiji; Maehara, Yoshihiko

    2018-02-01

    DNA replication is one of the fundamental biological processes in which dysregulation can cause genome instability. This instability is one of the hallmarks of cancer and confers genetic diversity during tumorigenesis. Numerous experimental and clinical studies have indicated that most tumors have experienced and overcome the stresses caused by the perturbation of DNA replication, which is also referred to as DNA replication stress (DRS). When we consider therapeutic approaches for tumors, it is important to exploit the differences in DRS between tumor and normal cells. In this review, we introduce the current understanding of DRS in tumors and discuss the underlying mechanism of cancer therapy from the aspect of DRS. © 2017 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.