WorldWideScience

Sample records for replicated resources computing

  1. Cost and Performance-Based Resource Selection Scheme for Asynchronous Replicated System in Utility-Based Computing Environment

    Directory of Open Access Journals (Sweden)

    Wan Nor Shuhadah Wan Nik

    2017-04-01

    Full Text Available A resource selection problem for asynchronous replicated systems in a utility-based computing environment is addressed in this paper. The need for special attention to this problem lies in the fact that most existing replication schemes in such computing systems either implicitly support synchronous replication or consider only read-only jobs. The problem is complex to solve, as two main issues must be addressed simultaneously: 1) the difficulty of predicting the performance of resources in terms of job response time, and 2) the need for an efficient mechanism to measure the trade-off between performance and the monetary cost incurred on resources, so that cost is kept to a minimum while low job response time is preserved. Therefore, a simple yet efficient algorithm that deals with the complexity of the resource selection problem in utility-based computing systems is proposed in this paper. The problem is formulated as a Multi Criteria Decision Making (MCDM) problem. The advantages of the algorithm are two-fold. On the one hand, it hides the complexity of the resource selection process without neglecting important components that affect job response time; the difficulty of estimating job response time is captured by representing it in terms of different QoS criteria levels at each resource. On the other hand, this representation further relaxes the complexity of measuring the trade-off between performance and the monetary cost incurred on resources. The experiments show that the proposed resource selection scheme achieves appealing results, with good system performance and low monetary cost compared to existing algorithms.
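
    The abstract does not spell out the scoring scheme, so the sketch below uses one common MCDM approach, simple additive weighting over normalized criteria, to illustrate the kind of cost/performance trade-off involved. The criteria names, weights and per-site figures are hypothetical, not taken from the paper.

```python
# Hedged sketch of a weighted-sum (simple additive weighting) selection over
# QoS criteria. Criteria, weights and figures are illustrative assumptions,
# not the scheme defined in the paper.

def normalise(values: dict, lower_is_better: bool) -> dict:
    """Scale a criterion to [0, 1] so that 1 is always the most desirable value."""
    lo, hi = min(values.values()), max(values.values())
    if hi == lo:
        return {k: 1.0 for k in values}
    return {k: (hi - v) / (hi - lo) if lower_is_better else (v - lo) / (hi - lo)
            for k, v in values.items()}

def select_resource(resources: dict, weights: dict):
    """resources: name -> {criterion: value}; weights: criterion -> (weight, lower_is_better)."""
    scored = {name: 0.0 for name in resources}
    for criterion, (w, lower_is_better) in weights.items():
        column = normalise({n: r[criterion] for n, r in resources.items()}, lower_is_better)
        for name in scored:
            scored[name] += w * column[name]
    return max(scored, key=scored.get), scored

if __name__ == "__main__":
    resources = {
        "site-A": {"response_time_s": 120, "cost_per_hour": 0.40, "reliability": 0.99},
        "site-B": {"response_time_s": 90,  "cost_per_hour": 0.65, "reliability": 0.95},
        "site-C": {"response_time_s": 200, "cost_per_hour": 0.10, "reliability": 0.90},
    }
    weights = {                       # (importance, lower value is better?)
        "response_time_s": (0.5, True),
        "cost_per_hour":   (0.3, True),
        "reliability":     (0.2, False),
    }
    print(select_resource(resources, weights))
```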

  2. Replicated Data Management for Mobile Computing

    CERN Document Server

    Douglas, Terry

    2008-01-01

    Managing data in a mobile computing environment invariably involves caching or replication. In many cases, a mobile device has access only to data that is stored locally, and much of that data arrives via replication from other devices, PCs, and services. Given portable devices with limited resources, weak or intermittent connectivity, and security vulnerabilities, data replication serves to increase availability, reduce communication costs, foster sharing, and enhance survivability of critical information. Mobile systems have employed a variety of distributed architectures from client-server

  3. The evolution of self-replicating computer organisms

    Science.gov (United States)

    Pargellis, A. N.

    A computer model is described that explores some of the possible behavior of biological life during the early stages of evolution. The simulation starts with a primordial soup composed of randomly generated sequences of computer operations selected from a basis set of 16 opcodes. With a probability of about 10⁻⁴, these sequences spontaneously generate large and inefficient self-replicating “organisms”. Driven by mutations, these protobiotic ancestors more efficiently generate offspring by initially eliminating unnecessary code. Later they increase their complexity by adding additional subroutines as they compete for the system's two limited resources, computer memory and CPU time. The ensuing biology includes replicating hosts, parasites and colonies.

  4. LHCb Computing Resources: 2017 requests

    CERN Document Server

    Bozzi, Concezio

    2016-01-01

    This document presents an assessment of computing resources needed by LHCb in 2017, as resulting from the accumulated experience in Run2 data taking and recent changes in the LHCb computing model parameters.

  5. Quantifying Resource Use in Computations

    CERN Document Server

    van Son, R J J H

    2009-01-01

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance in cryptanalysis, and in neuroscience, for instance in comparative neuro-anatomy. A System versus Environment game formalism based on Computability Logic is proposed that makes it possible to define a computational work function describing the theoretical and physical resources needed to perform any purely algorithmic computation. Within this formalism, the cost of a computation is defined as the sum of information storage over the steps of the computation. The size of the computational device, e.g., the action table of a Universal Turing Machine, the number of transistors in silicon, or the number and complexity of synapses in a neural net, is explicitly included in the computational cost. The proposed cost function leads in a na...
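
    The abstract defines the cost of a computation as the sum of information storage over the steps of the computation, with the size of the device included explicitly. A hedged rendering of that definition as a formula (the symbols below are ours, not the paper's notation):

```latex
% Cost of a T-step computation: device description size plus the information
% stored at each step; S_device and S_state(t) are illustrative symbols.
C \;=\; S_{\mathrm{device}} \;+\; \sum_{t=1}^{T} S_{\mathrm{state}}(t)
```

    Here S_device covers the storage needed to describe the computational device (e.g. a Turing machine's action table, a transistor count, or a synapse count) and S_state(t) is the information stored at step t.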

  6. Quantifying resource use in computations

    NARCIS (Netherlands)

    van Son, R.J.J.H.

    2009-01-01

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance, in

  8. Aggregated Computational Toxicology Online Resource

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...

  9. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California (now FusionGeo Inc., of The Woodlands, Texas), to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  10. Replication-based Inference Algorithms for Hard Computational Problems

    OpenAIRE

    Alamino, Roberto C.; Neirotti, Juan P.; Saad, David

    2013-01-01

    Inference algorithms based on evolving interactions between replicated solutions are introduced and analyzed on a prototypical NP-hard problem - the capacity of the binary Ising perceptron. The efficiency of the algorithm is examined numerically against that of the parallel tempering algorithm, showing improved performance in terms of the results obtained, computing requirements and simplicity of implementation.

  11. Replication of Space-Shuttle Computers in FPGAs and ASICs

    Science.gov (United States)

    Ferguson, Roscoe C.

    2008-01-01

    A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.

  12. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    Full Text Available The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  14. Web-Based Computing Resource Agent Publishing

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Web-based computing resource publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources on the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework are proposed and discussed in detail.

  15. Resource Management in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Andrei IONESCU

    2015-01-01

    Full Text Available Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications for how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task, as it involves vastly different architectures, and the process is outside the scope of human users. Using these resources from applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  16. Adaptive computational resource allocation for sensor networks

    Institute of Scientific and Technical Information of China (English)

    WANG Dian-hong; FEI E; YAN Yu-jie

    2008-01-01

    To efficiently utilize the limited computational resources in real-time sensor networks, this paper focuses on the challenge of computational resource allocation in sensor networks and provides a solution based on economic methods. It designs a microeconomic system in which the applications distribute their computational resource consumption across sensor networks by means of mobile agents. Further, it proposes a market-based computational resource allocation policy named MCRA which satisfies the uniform consumption of computational energy in the network and the optimal division of a single computational capacity among multiple tasks. The simulation in a target-tracing scenario demonstrates that MCRA realizes an efficient allocation of computational resources according to the priority of tasks, achieves superior allocation and equilibrium performance compared to traditional allocation policies, and ultimately prolongs the system lifetime.
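
    The abstract does not detail the MCRA mechanism, so the sketch below shows a generic proportional-share market in which tasks bid according to their priority and a sensor node sells its computational budget in proportion to the bids. The bidding rule and figures are illustrative assumptions, not the policy defined in the paper.

```python
# Hedged sketch of a market-based allocation: tasks bid with their priority and
# a sensor node divides its computational budget in proportion to the bids.

def allocate_cpu(node_budget_mips: float, tasks: dict) -> dict:
    """tasks: name -> priority (acts as the bid); returns name -> MIPS granted."""
    total_bid = sum(tasks.values())
    if total_bid == 0:
        return {name: 0.0 for name in tasks}
    return {name: node_budget_mips * bid / total_bid for name, bid in tasks.items()}

if __name__ == "__main__":
    # A sensor node with 200 MIPS to sell and three tasks of unequal priority.
    grants = allocate_cpu(200.0, {"target-tracking": 5, "routing": 2, "logging": 1})
    for name, mips in grants.items():
        print(f"{name:15s} {mips:6.1f} MIPS")
```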

  17. DESIGN SAMPLING AND REPLICATION ASSIGNMENT UNDER FIXED COMPUTING BUDGET

    Institute of Scientific and Technical Information of China (English)

    Loo Hay LEE; Ek Peng CHEW

    2005-01-01

    For many real-world problems, when the design space is huge and unstructured and time-consuming simulation is needed to estimate the performance measure, it is important to decide how many designs to sample and how long to run each design alternative, given that we have only a fixed amount of computing time. In this paper, we present a simulation study on how the distribution of the performance measures and the distribution of the estimation errors/noises affect this decision. From the analysis, it is observed that when the underlying distribution of the noise is bounded and there is a high chance of obtaining the smallest noise, the decision is to sample as many designs as possible; but if the noise is unbounded, it becomes important to reduce the noise level first by assigning more replications to each design. On the other hand, if the distribution of the performance measure indicates a high chance of obtaining good designs, the suggestion is also to reduce the noise level; otherwise, more designs need to be sampled to increase the chance of finding good designs. For the special case in which the distributions of both the performance measures and the noise are normal, we are able to estimate the number of designs to sample and the number of replications to run in order to obtain the best performance.
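
    A small Monte Carlo experiment makes the trade-off concrete: the same budget B is split into k sampled designs with r = B/k replications each, and the expected true quality of the selected design is compared across splits. The normal distributions and parameter values below are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: split a fixed computing budget B into (designs k) x
# (replications r = B // k), with normally distributed design performance and
# simulation noise; report the average true value of the design picked.
import random
import statistics

def expected_best(budget, k, sigma_design=1.0, sigma_noise=2.0, trials=2000):
    """Average *true* performance of the selected design (smaller = better)."""
    r = budget // k
    total = 0.0
    for _ in range(trials):
        truths = [random.gauss(0.0, sigma_design) for _ in range(k)]
        # Observed mean over r noisy replications for each sampled design.
        observed = [
            t + statistics.mean(random.gauss(0.0, sigma_noise) for _ in range(r))
            for t in truths
        ]
        total += truths[min(range(k), key=observed.__getitem__)]
    return total / trials

if __name__ == "__main__":
    B = 120
    for k in (2, 5, 10, 20, 40):   # candidate splits of the same budget
        print(f"k={k:3d} r={B // k:3d}  selected true perf ~ {expected_best(B, k):+.3f}")
```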

  18. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for, CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  19. Resourceful Computing in Unstructured Environments

    Science.gov (United States)

    1991-07-31

  20. Protocols for Bio-Inspired Resource Discovery and Erasure Coded Replication in P2P Networks

    CERN Document Server

    Thampi, Sabu M

    2010-01-01

    Efficient resource discovery and availability improvement are very important issues in unstructured P2P networks. In this paper, a bio-inspired resource discovery scheme inspired by the principle of elephant migration is proposed. A replication scheme based on Q-learning and erasure codes is also introduced. Simulation results show that the proposed schemes significantly increase the query success rate and availability, and reduce network traffic, as the resources are effectively distributed to well-performing nodes.
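
    The abstract names Q-learning as the basis of the replication scheme without giving its details; the sketch below shows a standard one-step Q-learning loop applied to choosing which node should host the next replica fragment. The state, action and reward definitions are our illustrative assumptions, not the paper's.

```python
# Hedged sketch of a Q-learning replica-placement loop. The reward here is
# whether a fragment stored on the chosen node could later be retrieved.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration
Q = defaultdict(float)                          # Q[(node, "host_fragment")] -> value

def choose_node(nodes):
    """Epsilon-greedy choice of the node that should host the next fragment."""
    if random.random() < EPSILON:
        return random.choice(nodes)
    return max(nodes, key=lambda n: Q[(n, "host_fragment")])

def update(node, reward, nodes):
    """One-step Q-learning update after observing the retrieval reward."""
    best_next = max(Q[(n, "host_fragment")] for n in nodes)
    key = (node, "host_fragment")
    Q[key] += ALPHA * (reward + GAMMA * best_next - Q[key])

if __name__ == "__main__":
    nodes = ["n1", "n2", "n3", "n4"]
    uptime = {"n1": 0.95, "n2": 0.60, "n3": 0.80, "n4": 0.99}   # hidden node quality
    for _ in range(500):
        n = choose_node(nodes)
        reward = 1.0 if random.random() < uptime[n] else -1.0   # retrieval success?
        update(n, reward, nodes)
    print(sorted(((round(Q[(n, "host_fragment")], 2), n) for n in nodes), reverse=True))
```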

  1. Resource management in mobile computing environments

    CERN Document Server

    Mavromoustakis, Constandinos X; Mastorakis, George

    2014-01-01

    This book reports the latest advances on the design and development of mobile computing systems, describing their applications in the context of modeling, analysis and efficient resource management. It explores the challenges on mobile computing and resource management paradigms, including research efforts and approaches recently carried out in response to them to address future open-ended issues. The book includes 26 rigorously refereed chapters written by leading international researchers, providing the readers with technical and scientific information about various aspects of mobile computing, from basic concepts to advanced findings, reporting the state-of-the-art on resource management in such environments. It is mainly intended as a reference guide for researchers and practitioners involved in the design, development and applications of mobile computing systems, seeking solutions to related issues. It also represents a useful textbook for advanced undergraduate and graduate courses, addressing special t...

  2. Efficient Resource Management in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rushikesh Shingade

    2015-12-01

    Full Text Available Cloud computing is one of the most widely used technologies for providing cloud services to users, who are charged for the services they receive. With a very large number of resources, the performance of cloud resource management policies is difficult to evaluate and optimize efficiently. There are different simulation toolkits available for simulating and modelling the cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In the proposed Efficient Resource Management in Cloud Computing (EFRE) model, CloudSim is used as a simulation toolkit that allows simulation of a DataCenter in a cloud computing system. The CloudSim toolkit also supports the creation of multiple virtual machines (VMs) on a node of a DataCenter, where cloudlets (user requests) are assigned to virtual machines by scheduling policies. In this paper, the Time-Shared and Space-Shared allocation policies are used for scheduling the cloudlets and are compared against metrics such as total execution time, number of resources and the resource allocation algorithm. CloudSim has been used for the simulations, and the simulation results demonstrate that the resource management is effective.
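
    To make the difference between the two policies named above concrete, the sketch below computes cloudlet finish times under each: Space-Shared runs cloudlets one after another at full speed, Time-Shared divides the capacity among all active cloudlets. This is plain Python, not the CloudSim API, and the lengths and MIPS values are made up.

```python
# Hedged illustration of Space-Shared vs Time-Shared scheduling of cloudlets.
# Lengths are in million instructions (MI), capacity in MIPS.

def space_shared_finish_times(lengths, mips):
    """Cloudlets run sequentially at full speed; return finish time of each."""
    t, finish = 0.0, []
    for mi in lengths:
        t += mi / mips
        finish.append(t)
    return finish

def time_shared_finish_times(lengths, mips):
    """All cloudlets share the capacity equally; simulate until each completes."""
    remaining = list(lengths)
    finish = [None] * len(lengths)
    t = 0.0
    while any(f is None for f in finish):
        active = [i for i, f in enumerate(finish) if f is None]
        share = mips / len(active)                      # equal share of capacity
        step = min(remaining[i] for i in active) / share
        t += step
        for i in active:
            remaining[i] -= share * step
            if remaining[i] <= 1e-9:
                finish[i] = t
    return finish

if __name__ == "__main__":
    lengths, mips = [4000, 2000, 1000], 1000
    print("space-shared:", space_shared_finish_times(lengths, mips))   # [4.0, 6.0, 7.0]
    print("time-shared: ", time_shared_finish_times(lengths, mips))    # [7.0, 5.0, 3.0]
```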

  3. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Full Text Available Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important for education, security monitoring, and so on. However, their huge volume, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for turning video resource management into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  4. Framework of Resource Management for Intercloud Computing

    Directory of Open Access Journals (Sweden)

    Mohammad Aazam

    2014-01-01

    Full Text Available There has been a very rapid increase in digital media content, due to which the media cloud is gaining importance. The cloud computing paradigm provides management of resources and helps create an extended portfolio of services. Through cloud computing, not only are services managed more efficiently, but service discovery is also made possible. To handle the rapid increase in content, the media cloud plays a very vital role, but it is not possible for standalone clouds to handle everything with increasing user demands. For scalability and better service provisioning, clouds at times have to communicate with other clouds and share their resources. This scenario is called Intercloud computing or cloud federation. The study of Intercloud computing is still in its early stages, and resource management is one of its key concerns. Existing studies discuss this issue only in a trivial and simplistic way. In this study, we present a resource management model, keeping in view different types of services, different customer types, customer characteristics, pricing, and refunding. The presented framework was implemented using Java and NetBeans 8.0 and evaluated using the CloudSim 3.0.3 toolkit. The presented results and their discussion validate our model and its efficiency.

  5. COMPUTATIONAL RESOURCES FOR BIOFUEL FEEDSTOCK SPECIES

    Energy Technology Data Exchange (ETDEWEB)

    Buell, Carol Robin [Michigan State University; Childs, Kevin L [Michigan State University

    2013-05-07

    While current production of ethanol as a biofuel relies on starch and sugar inputs, it is anticipated that sustainable production of ethanol for biofuel use will utilize lignocellulosic feedstocks. Candidate plant species to be used for lignocellulosic ethanol production include a large number of species within the Grass, Pine and Birch plant families. For these biofuel feedstock species, there are variable amounts of genome sequence resources available, ranging from complete genome sequences (e.g. sorghum, poplar) to transcriptome data sets (e.g. switchgrass, pine). These data sets are not only dispersed in location but also disparate in content. It will be essential to leverage and improve these genomic data sets for the improvement of biofuel feedstock production. The objectives of this project were to provide computational tools and resources for data-mining genome sequence/annotation and large-scale functional genomic datasets available for biofuel feedstock species. We have created a Bioenergy Feedstock Genomics Resource that provides a web-based portal or clearing house for genomic data for plant species relevant to biofuel feedstock production. Sequence data from a total of 54 plant species are included in the Bioenergy Feedstock Genomics Resource including model plant species that permit leveraging of knowledge across taxa to biofuel feedstock species. We have generated additional computational analyses of these data, including uniform annotation, to facilitate genomic approaches to improved biofuel feedstock production. These data have been centralized in the publicly available Bioenergy Feedstock Genomics Resource (http://bfgr.plantbiology.msu.edu/).

  6. Reliable Data-Replication Using Grid Computing Tools

    CERN Document Server

    Sonnick, D

    2009-01-01

    The LHCb detector at CERN is a physics experiment to measure rare b-decays after the collision of protons in the Large Hadron Collider ring. The measured collisions are called “Events”. These events contain the data necessary to analyze and reconstruct the decays. The events are sent to speed-optimized writer processes which write them into files on a local hard disk cluster. Because space on the hard disk cluster is limited, the data need to be replicated to a long-term storage system. This diploma thesis presents the design and implementation of software which replicates the data in a reliable manner. In addition, this software registers the data in special databases to prepare the subsequent analyses and reconstructions. Because the software used in the LHCb experiment is still under development, there is a special need for reliability to deal with error situations or inconsistent data. The subject of this diploma thesis was also presented at the “17th ...

  7. Optimised resource construction for verifiable quantum computation

    Science.gov (United States)

    Kashefi, Elham; Wallden, Petros

    2017-04-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph.

  8. Limitation of computational resource as physical principle

    CERN Document Server

    Ozhigov, Y I

    2003-01-01

    Limitation of computational resources is considered as a universal principle that is as fundamental for simulation as physical laws are. It claims that all experimentally verifiable implications of physical laws can be simulated by effective classical algorithms. It is demonstrated through a completely deterministic approach proposed for the simulation of biopolymer assembly. A state of a molecule during its assembly is described in terms of the reduced density matrix, permitting only limited tunneling. An assembly is treated as a sequence of elementary scatterings of simple molecules from the environment on the point of assembly. A decoherence is treated as a forced measurement of the quantum state resulting from the shortage of computational resources. All results of measurements are determined by a choice from a limited number of special options of a nonphysical nature which stay unchanged until the completion of assembly; we do not use random number generators. Observations of equal states during the ...

  9. Automating usability of ATLAS Distributed Computing resources

    CERN Document Server

    "Tupputi, S A; The ATLAS collaboration

    2013-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic exclusion/recovery of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources which feature non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the outcome of site-by-site SAM (Site Availability Test) SRM tests. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites.

  10. Architecturing Conflict Handling of Pervasive Computing Resources

    OpenAIRE

    Jakob, Henner; Consel, Charles; Loriant, Nicolas

    2011-01-01

    Pervasive computing environments are created to support human activities in different domains (e.g., home automation and healthcare). To do so, applications orchestrate deployed services and devices. In a realistic setting, applications are bound to conflict in their usage of shared resources, e.g., controlling doors for security and fire evacuation purposes. These conflicts can have critical effects on the physical world, putting people and assets at risk. This paper ...

  11. LHCb Computing Resource usage in 2015 (II)

    CERN Document Server

    Bozzi, Concezio

    2016-01-01

    This document reports the usage of computing resources by the LHCb collaboration during the period January 1st – December 31st 2015. The data in the following sections have been compiled from the EGI Accounting portal: https://accounting.egi.eu. For LHCb specific information, the data are taken from the DIRAC Accounting at the LHCb DIRAC Web portal: http://lhcb-portal-dirac.cern.ch.

  12. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  13. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  14. Automating usability of ATLAS Distributed Computing resources

    Science.gov (United States)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes history of storage monitoring tests outcome. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage areas monitoring and central management at all levels. Such review has involved the reordering and optimization of SAM tests deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring the storage resources status with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, the human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB working principles and features. We present also the decrease of human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
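
    The abstract describes an inference over the history of storage monitoring tests but does not give the decision rule; the sketch below shows one plausible windowed rule (blacklist a storage area when recent tests are mostly failures, whitelist it again once they recover). The window size and thresholds are illustrative assumptions, not SAAB's actual parameters.

```python
# Hedged sketch of an automatic exclusion/recovery rule over monitoring-test
# history, in the spirit of the SAAB tool described above.
from collections import deque

WINDOW, BLACKLIST_AT, WHITELIST_AT = 12, 0.75, 0.90   # failure / success fractions

class StorageAreaStatus:
    def __init__(self):
        self.history = deque(maxlen=WINDOW)   # True = test passed, False = failed
        self.blacklisted = False

    def record(self, passed: bool) -> bool:
        """Feed one test outcome; return the current blacklist state."""
        self.history.append(passed)
        if len(self.history) == WINDOW:
            success_rate = sum(self.history) / WINDOW
            if not self.blacklisted and 1.0 - success_rate >= BLACKLIST_AT:
                self.blacklisted = True            # automatic exclusion
            elif self.blacklisted and success_rate >= WHITELIST_AT:
                self.blacklisted = False           # automatic recovery
        return self.blacklisted

if __name__ == "__main__":
    s = StorageAreaStatus()
    outcomes = [True] * 6 + [False] * 12 + [True] * 12
    for i, ok in enumerate(outcomes):
        print(i, ok, s.record(ok))
```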

  15. Computational Investigations on Polymerase Actions in Gene Transcription and Replication Combining Physical Modeling and Atomistic Simulations

    OpenAIRE

    Yu, Jin

    2015-01-01

    Polymerases are protein enzymes that move along nucleic acid chains and catalyze template-based polymerization reactions during gene transcription and replication. The polymerases also substantially improve transcription or replication fidelity through the non-equilibrium enzymatic cycles. We briefly review computational efforts that have been made toward understanding mechano-chemical coupling and fidelity control mechanisms of the polymerase elongation. The polymerases are regarded as molec...

  16. Resource Optimization Based on Demand in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ramakrishnan Ramanathan

    2014-10-01

    Full Text Available Cloud computing gives the opportunity to dynamically scale the computing resources for an application. Cloud computing consists of a large number of resources, called a resource pool. These resources are shared among cloud consumers using virtualization technology; the virtualization technologies employed in the cloud environment enable resource consolidation and management. A cloud consists of physical and virtual resources. From the cloud provider's perspective, cloud performance is important for predicting the dynamic nature of users, user demands and application demands. From the cloud consumer's perspective, the job should be completed on time with minimum cost and limited resources. Finding the optimum resource allocation is difficult in huge systems like clusters, data centres and grids. In this study we present two types of resource allocation schemes, Commitment Allocation (CA) and Over Commitment Allocation (OCA), at the physical and virtual resource levels. These resource allocation schemes help to identify virtual resource utilization and physical resource availability.

  17. Modeling a Dynamic Data Replication Strategy to Increase System Availability in Cloud Computing Environments

    Institute of Scientific and Technical Information of China (English)

    Da-Wei Sun; Gui-Ran Chang; Shang Gao; Li-Zhong Jin; Xing-Wei Wang

    2012-01-01

    Failures are normal rather than exceptional in cloud computing environments. To improve system availability, replicating the popular data to multiple suitable locations is an advisable choice, as users can access the data from a nearby site. This is, however, not the case for replicas which must have a fixed number of copies at several locations. How to decide a reasonable number and the right locations for replicas has become a challenge in cloud computing. In this paper, a dynamic data replication strategy is put forward, together with a brief survey of replication strategies suitable for distributed computing environments. It includes: 1) analyzing and modeling the relationship between system availability and the number of replicas; 2) evaluating and identifying popular data and triggering a replication operation when the data popularity passes a dynamic threshold; 3) calculating a suitable number of copies to meet a reasonable system byte-effective-rate requirement and placing replicas among data nodes in a balanced way; 4) designing the dynamic data replication algorithm for a cloud. Experimental results demonstrate the efficiency and effectiveness of the improvement brought to a cloud system by the proposed strategy.
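
    The first step listed above models the relationship between system availability and the number of replicas. A standard way to express that relationship (our assumption, not necessarily the paper's exact model) treats replica failures as independent, so availability grows as A(n) = 1 - (1 - p)^n and the minimum replica count for a target availability follows by taking logarithms:

```python
# Hedged sketch of availability vs. replica count under independent failures.
import math

def availability(p_replica: float, n: int) -> float:
    """System availability of a data item with n independent replicas."""
    return 1.0 - (1.0 - p_replica) ** n

def min_replicas(p_replica: float, target: float) -> int:
    """Smallest n such that availability(p_replica, n) >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_replica))

if __name__ == "__main__":
    p, target = 0.92, 0.99999            # per-replica availability, required availability
    n = min_replicas(p, target)
    print(n, availability(p, n))          # 5 replicas give ~0.999997
```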

  18. JOSHUA: Symmetric Active/Active Replication for Highly Available HPC Job and Resource Management

    Energy Technology Data Exchange (ETDEWEB)

    Uhlemann, Kai [ORNL; Engelmann, Christian [ORNL; Scott, Steven L [ORNL

    2006-01-01

    Most of today's HPC systems employ a single head node for control, which represents a single point of failure as it interrupts an entire HPC system upon failure. Furthermore, it is also a single point of control, as it disables an entire HPC system until repair. One of the most important HPC system services running on the head node is job and resource management. If it goes down, all currently running jobs lose the service they report back to and have to be restarted once the head node is up and running again. With this paper, we present a generic approach for providing symmetric active/active replication for highly available HPC job and resource management. The JOSHUA solution provides a virtually synchronous environment for continuous availability without any interruption of service and without any loss of state. Replication is performed externally via the PBS service interface without the need to modify any service code. Test results as well as a reliability analysis of our proof-of-concept prototype implementation show that continuous availability can be provided by JOSHUA with an acceptable performance trade-off.

  19. Optimal Joint Multiple Resource Allocation Method for Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2011-01-01

    Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources. To provide cloud computing services economically, it is important to optimize resource allocation under the assumption that the required resource can be taken from a shared resource pool. In addition, to be able to provide processing ability and storage capacity, it is necessary to allocate bandwidth to access them at the same time. This paper proposes an optimal resource allocation method for cloud computing environments. First, this paper develops a resource allocation model of cloud computing environments, assuming both processing ability and bandwidth are allocated simultaneously to each service request and rented out on an hourly basis. The allocated resources are dedicated to each service request. Next, this paper proposes an optimal joint multiple resource allocation method, based on the above resource allocation model. It is demonstrated by simulation evaluation that the p...

  20. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  1. Contract on using computer resources of another

    Directory of Open Access Journals (Sweden)

    Cvetković Mihajlo

    2016-01-01

    Full Text Available Contractual relations involving the use of another's property are quite common. Yet, the use of computer resources of others over the Internet and the legal transactions arising thereof certainly diverge from the traditional framework embodied in the special part of contract law dealing with this issue. Modern performance concepts (such as infrastructure, software or platform offered as high-tech services) are highly unlikely to be described by terminology derived from Roman law. The overwhelming novelty of high-tech services obscures the disadvantageous position of contracting parties. In most cases, service providers are global multinational companies which tend to secure their own unjustified privileges and gains by providing lengthy and intricate contracts, often comprising a number of legal documents. General terms and conditions in these service provision contracts are further complicated by the 'service level agreement', rules of conduct and (non)confidentiality guarantees. Without giving the issue a second thought, users easily accept the pre-fabricated offer without reservations, unaware that such a pseudo-gratuitous contract actually conceals a highly lucrative and mutually binding agreement. The author examines the extent to which the legal provisions governing the sale of goods and services, lease, loan and commodatum may apply to 'cloud computing' contracts, and analyses the scope and advantages of contractual consumer protection, as a relatively new area in contract law. The termination of a service contract between the provider and the user features specific post-contractual obligations which are inherent to an online environment.

  2. THE STRATEGY OF RESOURCE MANAGEMENT BASED ON GRID COMPUTING

    Institute of Scientific and Technical Information of China (English)

    Wang Ruchuan; Han Guangfa; Wang Haiyan

    2006-01-01

    This paper analyzes the shortcomings of the traditional resource management method of grid computing based on virtual organizations. It supports the concept of improving resource management with mobile agents and gives the improved resource management model. Also pointed out are the methodology for improving resource management and the way to realize it in practice.

  3. Dynamic Resource Management and Job Scheduling for High Performance Computing

    OpenAIRE

    2016-01-01

    Job scheduling and resource management play an essential role in high-performance computing. Supercomputing resources are usually managed by a batch system, which is responsible for the effective mapping of jobs onto resources (i.e., compute nodes). From the system perspective, a batch system must ensure high system utilization and throughput, while from the user perspective it must ensure fast response times and fairness when allocating resources across jobs. Parallel jobs can be divide...

  4. Computational Investigations on Polymerase Actions in Gene Transcription and Replication Combining Physical Modeling and Atomistic Simulations

    CERN Document Server

    Yu, Jin

    2015-01-01

    Polymerases are protein enzymes that move along nucleic acid chains and catalyze template-based polymerization reactions during gene transcription and replication. The polymerases also substantially improve transcription or replication fidelity through non-equilibrium enzymatic cycles. We briefly review computational efforts that have been made toward understanding the mechano-chemical coupling and fidelity control mechanisms of polymerase elongation. The polymerases are regarded as molecular information motors during the elongation process. A full spectrum of computational approaches, across multiple time and length scales, is required to understand the full polymerase functional cycle. We stay away from quantum mechanics based approaches to polymerase catalysis, owing to abundant former surveys, and address only the statistical physics modeling approach and the all-atom molecular dynamics simulation approach. We organize this review around our own modeling and simulation practices on a single-subunit T7 RNA poly...

  5. LHCb Computing Resources: 2019 requests and reassessment of 2018 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents the computing resources needed by LHCb in 2019 and a reassessment of the 2018 requests, as resulting from the current experience of Run2 data taking and minor changes in the LHCb computing model parameters.

  6. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  7. Computational inference of replication and transcription activator regulator activity in herpesvirus from gene expression data

    OpenAIRE

    Recchia, A; Wit, E; Vinciotti, V; Kellam, P

    2008-01-01

    One of the main aims of system biology is to understand the structure and dynamics of genomic systems. A computational approach, facilitated by new technologies for high-throughput quantitative experimental data, is put forward to investigate the regulatory system of dynamic interaction among genes in Kaposi's sarcoma-associated herpesvirus network after induction of lytic replication. A reconstruction of transcription factor activity and gene-regulatory kinetics using data from a time-course...

  8. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing; so far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  9. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
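
    The abstract motivates the need for CPU-time and memory estimates at job submission but does not describe the estimation model used. The sketch below shows one simple baseline (a least-squares fit of past runtime and memory against input size, padded with a safety margin), purely as an illustration and not the system presented in the paper.

```python
# Hedged sketch of a baseline resource estimator: fit runtime and memory
# against input size from past executions and pad with a safety margin.

def fit_line(xs, ys):
    """Least-squares fit y = a*x + b for a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def estimate(history, new_size_mb, margin=1.5):
    """history: list of (input_size_mb, runtime_s, memory_mb) from past runs."""
    sizes = [h[0] for h in history]
    rt_a, rt_b = fit_line(sizes, [h[1] for h in history])
    mem_a, mem_b = fit_line(sizes, [h[2] for h in history])
    return {
        "runtime_s": margin * (rt_a * new_size_mb + rt_b),
        "memory_mb": margin * (mem_a * new_size_mb + mem_b),
    }

if __name__ == "__main__":
    past = [(100, 620, 900), (250, 1500, 1800), (400, 2300, 2700), (800, 4700, 5100)]
    print(estimate(past, new_size_mb=600))   # request to pass to the batch scheduler
```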

  10. PERFORMANCE IMPROVEMENT IN CLOUD COMPUTING USING RESOURCE CLUSTERING

    Directory of Open Access Journals (Sweden)

    G. Malathy

    2013-01-01

    Full Text Available Cloud computing is a computing paradigm in which the various tasks are assigned to a combination of connections, software and services that can be accessed over the network. The computing resources and services can be efficiently delivered and utilized, making the vision of computing utility realizable. In various applications, services with a large number of tasks have to execute with minimum inter-task communication. The applications are likely to exhibit different patterns and levels, and the distributed resources organize into various topologies for information and query dissemination. In a distributed system, resource discovery is a significant process for finding appropriate nodes; earlier resource discovery mechanisms in cloud systems rely on recent observations. In this study, the resource usage distributions of groups of nodes with identical resource usage patterns are identified and kept as clusters, an approach named resource clustering. The resource clustering approach is modeled using CloudSim, a toolkit for modeling and simulating cloud computing environments, and the evaluation improves the performance of the system in the usage of the resources. Results show that resource clusters are able to provide high accuracy for resource discovery.
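
    The abstract does not state how the usage-based clusters are formed; a minimal sketch of the idea is to group nodes whose resource usage vectors (CPU, memory, network fractions) lie close together, here with a tiny k-means. The feature choice, k and the sample values are illustrative assumptions.

```python
# Hedged sketch of "resource clustering": group nodes with similar usage
# patterns using a small hand-written k-means over (cpu, mem, net) vectors.
import math
import random

def kmeans(points, k, iters=50):
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each node's usage vector to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Recompute centers as the mean of each non-empty cluster.
        for i, members in enumerate(clusters):
            if members:
                centers[i] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, clusters

if __name__ == "__main__":
    random.seed(1)
    # (cpu, mem, net) utilisation per node -- made-up monitoring samples.
    usage = [(0.9, 0.8, 0.2), (0.85, 0.75, 0.3), (0.1, 0.2, 0.9),
             (0.15, 0.25, 0.85), (0.5, 0.5, 0.5), (0.55, 0.45, 0.5)]
    centers, clusters = kmeans(usage, k=3)
    for c, members in zip(centers, clusters):
        print(tuple(round(x, 2) for x in c), members)
```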

  11. Resource Centered Computing delivering high parallel performance

    OpenAIRE

    2014-01-01

    Modern parallel programming requires a combination of different paradigms, expertise and tuning, that correspond to the different levels in today's hierarchical architectures. To cope with the inherent difficulty, ORWL (ordered read-write locks) presents a new paradigm and toolbox centered around local or remote resources, such as data, processors or accelerators. ORWL programmers describe their computation in terms of access to these resources during critical sections. Exclu...

  12. Improved Self Fused Check pointing Replication for Handling Multiple Faults in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sanjay Bansal

    2012-06-01

    Full Text Available The performance of the checkpointing-replication fault tolerance technique is severely bottlenecked by the handling of the many replicas generated across a large number of nodes to tolerate multiple faults, such as failures of several nodes or processes. In fusion-based approaches, the checkpointing replicas stored at a large number of computing nodes are aggregated into a data structure so that they can be handled efficiently through the fused data structure; this imposes the high overhead of fusing a large number of checkpointing replicas. In this paper, a self-fused checkpointing replication (SFCR) scheme for cloud computing is proposed. All checkpointing replicas assigned to a particular node are stored in a self-fused shared checkpointing-replicas file, already created and located at every node, rather than being stored as separate checkpointing elements and fused afterwards. This eliminates the need for further fusing of the checkpointing replicas stored at different storage nodes, since the replicas assigned to a particular node are stored in an already created fused file at every storage node. It improves performance without affecting the specified fault-tolerance capabilities, as the failure of any node results in the loss of all its replicas irrespective of whether they reside in separate checkpointing files or in a shared fused file. The cost of maintaining this set of self-fused shared files is obviously lower than that of many separately replicated files, in terms of the time and effort it takes to create and update a file. Thus, the proposed approach enhances performance without compromising the specified fault-tolerance capability. At the same time, when the system appears prone to many faults, specific self-fused shared files consisting of important and critical data can be further replicated at run time to enhance the fault-tolerant capability of a group of important and critical nodes or processes. Thus, it also
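
    The paper's exact file layout is not given; a minimal sketch of the self-fused idea is that each storage node appends every incoming checkpoint replica as a length-prefixed record to a single shared fused file instead of writing one file per replica. The file name and record format below are our illustrative assumptions.

```python
# Hedged sketch of appending checkpoint replicas to one fused file per node.
import struct

FUSED_FILE = "fused_checkpoints.bin"   # one such file exists on every storage node

def append_replica(path: str, process_id: int, checkpoint: bytes) -> None:
    """Append one checkpoint replica as a (process_id, length, payload) record."""
    with open(path, "ab") as f:
        f.write(struct.pack("!IQ", process_id, len(checkpoint)))
        f.write(checkpoint)

def read_replicas(path: str):
    """Yield (process_id, checkpoint_bytes) records back from the fused file."""
    with open(path, "rb") as f:
        header = f.read(12)
        while len(header) == 12:
            pid, size = struct.unpack("!IQ", header)
            yield pid, f.read(size)
            header = f.read(12)

if __name__ == "__main__":
    append_replica(FUSED_FILE, 7, b"state-of-process-7")
    append_replica(FUSED_FILE, 9, b"state-of-process-9")
    for pid, blob in read_replicas(FUSED_FILE):
        print(pid, blob)
```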

  13. Resource management in utility and cloud computing

    CERN Document Server

    Zhao, Han

    2013-01-01

    This SpringerBrief reviews the existing market-oriented strategies for economically managing resource allocation in distributed systems. It describes three new schemes that address cost-efficiency, user incentives, and allocation fairness with regard to different scheduling contexts. The first scheme, taking the Amazon EC2 market as a case study, investigates optimal resource rental planning models based on linear integer programming and stochastic optimization techniques. This model is useful to explore the interaction between the cloud infrastructure provider and the cloud resource c

  14. LHCb Computing Resources: 2018 requests and preview of 2019 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents a reassessment of computing resources needed by LHCb in 2018 and a preview of computing requests for 2019, as resulting from the current experience of Run2 data taking and recent changes in the LHCb computing model parameters.

  15. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Full Text Available Today cloud computing has become a key technology for the online allotment of computing resources and the online storage of user data at lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there has been a growing need for resource management strategies in cloud computing environments that encompass both end-user satisfaction and a high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines (matchmaking). The main objective of this paper is to propose a matchmaking strategy between incoming requests and the various resources in the cloud environment that satisfies user requirements and balances the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a dynamic weight active monitor (DWAM) load balancing algorithm which allocates incoming requests on the fly to all available virtual machines in an efficient manner, in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which demonstrates the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results show that the proposed algorithm markedly improves response time, data processing time and resource utilization compared with the Active Monitor and VM-assign algorithms.
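
    A simplified view of such a weighted dispatcher is sketched below. It is not the published DWAM algorithm; the weights, the metric names and their normalization to [0, 1] are assumptions made for illustration only.

```python
def pick_vm(vms, weights=(0.4, 0.3, 0.3)):
    """Return the VM with the lowest weighted cost (lower is better).
    Each VM is a dict with estimated response_time, processing_time and utilization,
    all normalized to [0, 1] for this illustration."""
    w_resp, w_proc, w_util = weights

    def cost(vm):
        return (w_resp * vm["response_time"] +
                w_proc * vm["processing_time"] +
                w_util * vm["utilization"])

    return min(vms, key=cost)

vms = [
    {"id": "vm-1", "response_time": 0.2, "processing_time": 0.5, "utilization": 0.9},
    {"id": "vm-2", "response_time": 0.4, "processing_time": 0.3, "utilization": 0.4},
    {"id": "vm-3", "response_time": 0.3, "processing_time": 0.4, "utilization": 0.2},
]
print(pick_vm(vms)["id"])  # dispatch the incoming request to this VM
```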

  16. Discovery of replicating circular RNAs by RNA-seq and computational algorithms.

    Directory of Open Access Journals (Sweden)

    Zhixiang Zhang

    2014-12-01

    Full Text Available Replicating circular RNAs are independent plant pathogens known as viroids, or act to modulate the pathogenesis of plant and animal viruses as their satellite RNAs. The rate of discovery of these subviral pathogens was low over the past 40 years because the classical approaches are technically demanding and time-consuming. We previously described an approach for homology-independent discovery of replicating circular RNAs by analysing the total small RNA populations from samples of diseased tissues with a computational program known as progressive filtering of overlapping small RNAs (PFOR). However, PFOR written in PERL language is extremely slow and is unable to discover those subviral pathogens that do not trigger in vivo accumulation of extensively overlapping small RNAs. Moreover, PFOR is yet to identify a new viroid capable of initiating independent infection. Here we report the development of PFOR2 that adopted parallel programming in the C++ language and was 3 to 8 times faster than PFOR. A new computational program was further developed and incorporated into PFOR2 to allow the identification of circular RNAs by deep sequencing of long RNAs instead of small RNAs. PFOR2 analysis of the small RNA libraries from grapevine and apple plants led to the discovery of Grapevine latent viroid (GLVd) and Apple hammerhead viroid-like RNA (AHVd-like RNA), respectively. GLVd was proposed as a new species in the genus Apscaviroid, because it contained the typical structural elements found in this group of viroids and initiated independent infection in grapevine seedlings. AHVd-like RNA encoded a biologically active hammerhead ribozyme in both polarities, and was not specifically associated with any of the viruses found in apple plants. We propose that these computational algorithms have the potential to discover novel circular RNAs in plants, invertebrates and vertebrates regardless of whether they replicate and/or induce the in vivo accumulation of small RNAs.

  17. Discovery of replicating circular RNAs by RNA-seq and computational algorithms.

    Science.gov (United States)

    Zhang, Zhixiang; Qi, Shuishui; Tang, Nan; Zhang, Xinxin; Chen, Shanshan; Zhu, Pengfei; Ma, Lin; Cheng, Jinping; Xu, Yun; Lu, Meiguang; Wang, Hongqing; Ding, Shou-Wei; Li, Shifang; Wu, Qingfa

    2014-12-01

    Replicating circular RNAs are independent plant pathogens known as viroids, or act to modulate the pathogenesis of plant and animal viruses as their satellite RNAs. The rate of discovery of these subviral pathogens was low over the past 40 years because the classical approaches are technically demanding and time-consuming. We previously described an approach for homology-independent discovery of replicating circular RNAs by analysing the total small RNA populations from samples of diseased tissues with a computational program known as progressive filtering of overlapping small RNAs (PFOR). However, PFOR written in PERL language is extremely slow and is unable to discover those subviral pathogens that do not trigger in vivo accumulation of extensively overlapping small RNAs. Moreover, PFOR is yet to identify a new viroid capable of initiating independent infection. Here we report the development of PFOR2 that adopted parallel programming in the C++ language and was 3 to 8 times faster than PFOR. A new computational program was further developed and incorporated into PFOR2 to allow the identification of circular RNAs by deep sequencing of long RNAs instead of small RNAs. PFOR2 analysis of the small RNA libraries from grapevine and apple plants led to the discovery of Grapevine latent viroid (GLVd) and Apple hammerhead viroid-like RNA (AHVd-like RNA), respectively. GLVd was proposed as a new species in the genus Apscaviroid, because it contained the typical structural elements found in this group of viroids and initiated independent infection in grapevine seedlings. AHVd-like RNA encoded a biologically active hammerhead ribozyme in both polarities, and was not specifically associated with any of the viruses found in apple plants. We propose that these computational algorithms have the potential to discover novel circular RNAs in plants, invertebrates and vertebrates regardless of whether they replicate and/or induce the in vivo accumulation of small RNAs.
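
    The filtering idea behind PFOR can be conveyed with a toy sketch. The code below is in no way the PFOR/PFOR2 software; it assumes error-free reads on a single strand, an arbitrary minimum overlap k, and a tiny made-up circular sequence, and simply discards reads that do not overlap any other read.

```python
def overlaps(a, b, k):
    """True if a suffix of read a (at least k nt long) equals a prefix of read b."""
    max_len = min(len(a), len(b))
    return any(a[-n:] == b[:n] for n in range(k, max_len + 1))

def progressive_filter(reads, k=4):
    """Iteratively drop reads that overlap no other read by at least k bases."""
    reads = list(reads)
    changed = True
    while changed:
        kept = [r for r in reads
                if any(overlaps(r, s, k) or overlaps(s, r, k)
                       for s in reads if s is not r)]
        changed = len(kept) != len(reads)
        reads = kept
    return reads

# toy reads sampled around the hypothetical circular sequence "ATGCCGTTAGC";
# the last read is unrelated noise and should be filtered out
reads = ["ATGCCGTT", "CGTTAGCA", "TAGCATGC", "GGGGGGGG"]
print(progressive_filter(reads))
```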

  19. Research on Cloud Computing Resources Provisioning Based on Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Zhiping Peng

    2015-01-01

    Full Text Available As one of the core issues in cloud computing, resource management adopts virtualization technology to shield the underlying resource heterogeneity and complexity, which allows the massive distributed resources to form a unified giant resource pool. Efficient resource provisioning can then be achieved by implementing rational resource management methods and techniques. How to manage cloud computing resources effectively therefore becomes a challenging research topic. By analyzing the execution progress of a user job in the cloud computing environment, we propose a novel resource provisioning scheme based on reinforcement learning and queuing theory. With the introduction of the concepts of Segmentation Service Level Agreement (SSLA) and Utilization Unit Time Cost (UUTC), we view the resource provisioning problem in cloud computing as a sequential decision issue; we then design a novel optimization objective function and employ reinforcement learning to solve it. Experimental results not only demonstrate the effectiveness of the proposed scheme, but also show that it outperforms common methods in terms of resource utilization rate, SLA collision avoidance and user costs.
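
    A generic tabular Q-learning loop in the same spirit is sketched below. It does not reproduce the paper's SSLA/UUTC formulation; the state discretization, the reward function and the workload model are invented here purely for illustration.

```python
import random

ACTIONS = (-1, 0, +1)                     # remove a VM, keep, add a VM
LOADS = range(5)                          # coarse load levels (hypothetical discretization)
MAX_VMS = 5

def reward(load, vms):
    """Hypothetical reward: penalize SLA risk (load above capacity) and running cost."""
    return -(10 * max(0, load - vms) + 2 * vms)

Q = {((l, v), a): 0.0 for l in LOADS for v in range(1, MAX_VMS + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2
random.seed(0)

load, vms = 2, 1
for _ in range(20000):
    state = (load, vms)
    # epsilon-greedy action selection
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    vms = min(MAX_VMS, max(1, vms + action))
    load = random.choice(LOADS)           # stand-in for the next observed workload level
    r = reward(load, vms)
    best_next = max(Q[((load, vms), a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])

# learned action when the observed load is high and only one VM is running
print(max(ACTIONS, key=lambda a: Q[((4, 1), a)]))
```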

  20. The (non-)replicability of regulatory resource depletion: A field report employing non-invasive brain stimulation

    Science.gov (United States)

    Martijn, Carolien; Alberts, Hugo J. E. M.; Thomson, Alix C.; David, Bastian; Kessler, Daniel

    2017-01-01

    Cognitive effort and self-control are exhausting. Although evidence is ambiguous, behavioural studies have repeatedly suggested that control-demanding tasks seem to deplete a limited cache of self-regulatory resources leading to performance degradations and fatigue. While resource depletion has indirectly been associated with a decline in right prefrontal cortex capacity, its precise neural underpinnings have not yet been revealed. This study consisted of two independent experiments, which set out to investigate the causal role of the right dorsolateral prefrontal cortex (DLPFC) in a classic dual phase depletion paradigm employing non-invasive brain stimulation. In Experiment 1 we demonstrated a general depletion effect, which was significantly eliminated by anodal transcranial Direct Current Stimulation to the right DLPFC. In Experiment 2, however, we failed to replicate the basic psychological depletion effect within a second independent sample. The dissimilar results are discussed in the context of the current ‘replication crisis’ and suggestions for future studies are offered. While our current results do not allow us to firmly argue for or against the existence of resource depletion, we outline why it is crucial to further clarify which specific external and internal circumstances lead to limited replicability of the described effect. We showcase and discuss the current inter-lab replication problem based on two independent samples tested within one research group (intra-lab). PMID:28362843

  1. Multiple Computing Task Scheduling Method Based on Dynamic Data Replication and Hierarchical Strategy

    Directory of Open Access Journals (Sweden)

    Xiang Zhou

    2014-02-01

    Full Text Available To address the problem of how to carry out task scheduling and data replication effectively in the grid and reduce task execution time, this paper proposes a task scheduling algorithm and an optimum dynamic data replication algorithm, and builds a scheme that effectively combines the two. First, the scheme adopts the ISS algorithm, which considers the length of the task waiting queue, the location of the data a task demands and the computing capacity of each site; using hierarchical scheduling over the network structure, it computes a properly weighted comprehensive task cost and searches out the best compute node area. The ODHRA algorithm is then adopted to analyze data transmission time, memory access latency, the copy requests waiting in the queue and the distance between nodes, and to choose the best replica location among the many copies, combined with copy placement and copy management, in order to reduce file access time. Simulation results show that the proposed scheme offers better performance than other algorithms in terms of average task execution time.

  2. Resource Provisioning in SLA-Based Cluster Computing

    Science.gov (United States)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation. It has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of services (QoS) and a fee agreed between a customer and an application service provider. It plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of cluster computing resources used by an application service provider for an e-business application that often requires parallel computation for high service performance, availability, and reliability while satisfying a QoS and a fee negotiated between a customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.

  3. Resource-efficient linear optical quantum computation.

    Science.gov (United States)

    Browne, Daniel E; Rudolph, Terry

    2005-07-01

    We introduce a scheme for linear optics quantum computation, that makes no use of teleported gates, and requires stable interferometry over only the coherence length of the photons. We achieve a much greater degree of efficiency and a simpler implementation than previous proposals. We follow the "cluster state" measurement based quantum computational approach, and show how cluster states may be efficiently generated from pairs of maximally polarization entangled photons using linear optical elements. We demonstrate the universality and usefulness of generic parity measurements, as well as introducing the use of redundant encoding of qubits to enable utilization of destructive measurements--both features of use in a more general context.

  4. Resource requirements for digital computations on electrooptical systems.

    Science.gov (United States)

    Eshaghian, M M; Panda, D K; Kumar, V K

    1991-03-10

    In this paper we study the resource requirements of electrooptical organizations in performing digital computing tasks. We define a generic model of parallel computation using optical interconnects, called the optical model of computation (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Using this model we derive relationships between information transfer and computational resources in solving a given problem. To illustrate our results, we concentrate on a computationally intensive operation, 2-D digital image convolution. Irrespective of the input/output scheme and the order of computation, we show a lower bound of Ω(nw) on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  5. Resource requirements for digital computations on electrooptical systems

    Science.gov (United States)

    Eshaghian, Mary M.; Panda, Dhabaleswar K.; Kumar, V. K. Prasanna

    1991-03-01

    The resource requirements of electrooptical organizations in performing digital computing tasks are studied via a generic model of parallel computation using optical interconnects, called the 'optical model of computation' (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Relationships between information transfer and computational resources in solving a given problem are derived. A computationally intensive operation, two-dimensional digital image convolution is undertaken. Irrespective of the input/output scheme and the order of computation, a lower bound of Omega(nw) is obtained on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  6. Data-centric computing on distributed resources

    NARCIS (Netherlands)

    Cushing, R.S.

    2015-01-01

    Distributed computing has always been a challenge due to the NP-completeness of finding optimal underlying management routines. The advent of big data increases the dimensionality of the problem whereby data partitionability, processing complexity and locality play a crucial role in the effectiveness.

  7. Allocation Strategies of Virtual Resources in Cloud-Computing Networks

    Directory of Open Access Journals (Sweden)

    D.Giridhar Kumar

    2014-11-01

    Full Text Available In distributed computing, cloud computing facilitates a pay-per-use model according to user demand and requirements. A collection of virtual machines, including both computational and storage resources, forms the cloud. In cloud computing, the main objective is to provide efficient access to remote and geographically distributed resources. The cloud faces many challenges; one of them is the scheduling/allocation problem. Scheduling refers to a set of policies to control the order of work to be performed by a computer system. A good scheduler adapts its allocation strategy according to the changing environment and the type of task. In this paper we consider FCFS and Round Robin scheduling, in addition to a Linear Integer Programming approach to resource allocation.

  8. A global resource for computational chemistry

    OpenAIRE

    2004-01-01

    Describes the creation and curation of the ca. 200,000 molecules and calculations deposited in this collection (WWMM). This article has been submitted to the Journal of Molecular Modeling (Springer), which allows self-archiving of preprints (but not postprints) - ROMEO-yellow. A modular distributable system has been built for high-throughput computation of molecular structures and properties. It has been used to process 250K compounds from the NCI database and to make the results searchabl...

  9. Computational investigations on polymerase actions in gene transcription and replication: Combining physical modeling and atomistic simulations

    Science.gov (United States)

    Jin, Yu

    2016-01-01

    Polymerases are protein enzymes that move along nucleic acid chains and catalyze template-based polymerization reactions during gene transcription and replication. The polymerases also substantially improve transcription or replication fidelity through the non-equilibrium enzymatic cycles. We briefly review computational efforts that have been made toward understanding the mechano-chemical coupling and fidelity control mechanisms of polymerase elongation. The polymerases are regarded as molecular information motors during the elongation process. A full spectrum of computational approaches across multiple time and length scales is required to understand the full polymerase functional cycle. We stay away from quantum-mechanics-based approaches to polymerase catalysis due to abundant former surveys, while addressing statistical physics modeling approaches along with all-atom molecular dynamics simulation studies. We organize this review around our own modeling and simulation practices on a single subunit T7 RNA polymerase, and summarize commensurate studies on structurally similar DNA polymerases as well. For multi-subunit RNA polymerases that have been actively studied in recent years, we leave systematic reviews of the simulation achievements to the latest computational chemistry surveys, while covering only representative studies published very recently, including our own work modeling structure-based elongation kinetics of yeast RNA polymerase II. In the end, we briefly go through physical modeling of elongation pauses and backtracking activities of the multi-subunit RNAPs. We emphasize the fluctuation and control mechanisms of the polymerase actions, highlight the non-equilibrium nature of the operation system, and try to build some perspectives toward understanding the polymerase impacts from the single molecule level to a genome-wide scale. Project supported by the National Natural Science Foundation (Grant No. 11275022).

  10. Cloud Scheduler: a resource manager for distributed compute clouds

    CERN Document Server

    Armstrong, P; Bishop, A; Charbonneau, A; Desmarais, R; Fransham, K; Hill, N; Gable, I; Gaudet, S; Goliath, S; Impey, R; Leavett-Brown, C; Ouellete, J; Paterson, M; Pritchet, C; Penfold-Brown, D; Podaima, W; Schade, D; Sobie, R J

    2010-01-01

    The availability of Infrastructure-as-a-Service (IaaS) computing clouds gives researchers access to a large set of new resources for running complex scientific applications. However, exploiting cloud resources for large numbers of jobs requires significant effort and expertise. In order to make it simple and transparent for researchers to deploy their applications, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. Cloud Scheduler boots and manages the user-customized virtual machines in response to a user's job submission. We describe the motivation and design of the Cloud Scheduler and present results on its use on both science and commercial clouds.

  11. Computer Usage as Instructional Resources for Vocational Training in Nigeria

    Science.gov (United States)

    Oguzor, Nkasiobi Silas

    2011-01-01

    The use of computers has become the driving force in the delivery of instruction of today's vocational education and training (VET) in Nigeria. Though computers have become an increasingly accessible resource for educators to use in their teaching activities, most teachers are still unable to integrate it in their teaching and learning processes.…

  12. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  13. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  14. Shared resource control between human and computer

    Science.gov (United States)

    Hendler, James; Wilson, Reid

    1989-01-01

    The advantages of an AI system of actively monitoring human control of a shared resource (such as a telerobotic manipulator) are presented. A system is described in which a simple AI planning program gains efficiency by monitoring human actions and recognizing when the actions cause a change in the system's assumed state of the world. This enables the planner to recognize when an interaction occurs between human actions and system goals, and allows maintenance of an up-to-date knowledge of the state of the world and thus informs the operator when human action would undo a goal achieved by the system, when an action would render a system goal unachievable, and efficiently replans the establishment of goals after human intervention.

  15. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  16. Computer Resources Handbook for Flight Critical Systems.

    Science.gov (United States)

    1985-01-01

  17. CPT White Paper on Tier-1 Computing Resource Needs

    CERN Document Server

    CERN. Geneva. CPT Project

    2006-01-01

    In the summer of 2005, CMS like the other LHC experiments published a Computing Technical Design Report (C-TDR) for the LHCC, which describes the CMS computing models as a distributed system of Tier-0, Tier-1, and Tier-2 regional computing centers, and the CERN analysis facility, the CMS-CAF. The C-TDR contains information on resource needs for the different computing tiers that are derived from a set of input assumptions and desiderata on how to achieve high-throughput and a robust computing environment. At the CERN Computing Resources Review Board meeting in October 2005, the funding agencies agreed on a Memorandum of Understanding (MoU) describing the worldwide collaboration on LHC computing (WLCG). In preparation for this meeting the LCG project had put together information from countries regarding their pledges for computing resources at Tier-1 and Tier-2 centers. These pledges include the amount of CPU power, disk storage, tape storage library space, and network connectivity for each of the LHC experime...

  18. Dynamic computing resource allocation in online flood monitoring and prediction

    Science.gov (United States)

    Kuchar, S.; Podhoranyi, M.; Vavrik, R.; Portero, A.

    2016-08-01

    This paper presents tools and methodologies for dynamic allocation of high performance computing resources during operation of the Floreon+ online flood monitoring and prediction system. The resource allocation is done throughout the execution of supported simulations to meet the required service quality levels for system operation. It also ensures flexible reactions to changing weather and flood situations, as it is not economically feasible to operate online flood monitoring systems in the full performance mode during non-flood seasons. Different service quality levels are therefore described for different flooding scenarios, and the runtime manager controls them by allocating only minimal resources currently expected to meet the deadlines. Finally, an experiment covering all presented aspects of computing resource allocation in rainfall-runoff and Monte Carlo uncertainty simulation is performed for the area of the Moravian-Silesian region in the Czech Republic.

  19. Application-adaptive resource scheduling in a computational grid

    Institute of Scientific and Technical Information of China (English)

    LUAN Cui-ju; SONG Guang-hua; ZHENG Yao

    2006-01-01

    Selecting appropriate resources for running a job efficiently is one of the common objectives in a computational grid. Resource scheduling should consider the specific characteristics of the application, and decide the metrics to be used accordingly. This paper presents a distributed resource scheduling framework mainly consisting of a job scheduler and a local scheduler. In order to meet the requirements of different applications, we adopt HGSA, a Heuristic-based Greedy Scheduling Algorithm, to schedule jobs in the grid, where the heuristic knowledge is the metric weights of the computing resources and the metric workload impact factors. The metric weight is used to control the effect of the metric on the application. For different applications, only metric weights and the metric workload impact factors need to be changed, while the scheduling algorithm remains the same. Experimental results are presented to demonstrate the adaptability of the HGSA.

  20. A Hybrid Approach for Scheduling and Replication based on Multi-criteria Decision Method in Grid Computing

    Directory of Open Access Journals (Sweden)

    Nadia Hadi

    2012-09-01

    Full Text Available Grid computing environments have emerged in response to scientists' demand for very high computing power and storage capacity. One of the challenges in using these environments is performance. To improve performance, scheduling and replication techniques are used. In this paper we propose an approach to task scheduling combined with data replication decisions based on a multi-criteria principle. The aim is to improve performance by reducing the response time of tasks and the load of the system. This hybrid approach is based on a non-hierarchical model that allows scalability.

  1. A Distributed OpenCL Framework using Redundant Computation and Data Replication

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Junghyun [Seoul National University, Korea; Gangwon, Jo [Seoul National University, Korea; Jaehoon, Jung [Seoul National University, Korea; Lee, Jaejin [Seoul National University, Korea

    2016-01-01

    Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with an illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.

  2. EST analysis pipeline: use of distributed computing resources.

    Science.gov (United States)

    González, Francisco Javier; Vizcaíno, Juan Antonio

    2011-01-01

    This chapter describes how a pipeline for the analysis of expressed sequence tag (EST) data can be -implemented, based on our previous experience generating ESTs from Trichoderma spp. We focus on key steps in the workflow, such as the processing of raw data from the sequencers, the clustering of ESTs, and the functional annotation of the sequences using BLAST, InterProScan, and BLAST2GO. Some of the steps require the use of intensive computing power. Since these resources are not available for small research groups or institutes without bioinformatics support, an alternative will be described: the use of distributed computing resources (local grids and Amazon EC2).

  3. Using OSG Computing Resources with (iLC)Dirac

    CERN Document Server

    Sailer, Andre

    2017-01-01

    CPU cycles for small experiments and projects can be scarce, so making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were develo...

  4. Load Balancing in Local Computational Grids within Resource Allocation Process

    Directory of Open Access Journals (Sweden)

    Rouhollah Golmohammadi

    2012-11-01

    Full Text Available A suitable resource allocation method in computational grids should schedule resources in a way that meets the requirements of both users and resource providers; i.e., the maximum number of tasks should be completed within their time and budget constraints while the load is distributed equally between resources. This is a decision-making problem in which the scheduler must select one resource from among all of them. Because different properties of the resources affect this decision, it is a multi-criteria decision-making problem. The goal of the decision-making process is to balance the load and complete the tasks within their defined constraints. The proposed algorithm is an analytic hierarchy process based Resource Allocation (ARA) method. This method estimates a preference value for each resource and then selects the appropriate resource based on these values. Simulations show that the ARA method decreases the task failure rate by at least 48% and increases the balance factor by more than 3.4%.
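
    The AHP step at the heart of such a method can be illustrated as follows. The pairwise-comparison matrix, the criteria and the candidate resource scores below are hypothetical, and the normalized-column-average approximation is used instead of the full principal-eigenvector computation; this is a sketch, not the ARA method itself.

```python
import numpy as np

# hypothetical pairwise comparisons of three resource criteria
# (CPU speed, free memory, network bandwidth) on Saaty's 1-9 scale
A = np.array([
    [1.0, 3.0, 5.0],    # CPU vs (CPU, memory, bandwidth)
    [1/3, 1.0, 2.0],    # memory vs ...
    [1/5, 1/2, 1.0],    # bandwidth vs ...
])

# normalized-column-average approximation of the principal eigenvector
weights = (A / A.sum(axis=0)).mean(axis=1)
print("criterion weights:", weights.round(3))

# score each candidate resource with the derived weights (criterion values in [0, 1])
resources = {
    "node-A": np.array([0.9, 0.4, 0.7]),
    "node-B": np.array([0.6, 0.8, 0.9]),
}
best = max(resources, key=lambda r: float(weights @ resources[r]))
print("selected resource:", best)
```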

  5. A reminder on millisecond timing accuracy and potential replication failure in computer-based psychology experiments: An open letter.

    Science.gov (United States)

    Plant, Richard R

    2016-03-01

    There is an ongoing 'replication crisis' across the field of psychology in which researchers, funders, and members of the public are questioning the results of some scientific studies and the validity of the data they are based upon. However, few have considered that a growing proportion of research in modern psychology is conducted using a computer. Could it simply be that the hardware and software, or experiment generator, being used to run the experiment are themselves a cause of millisecond timing error and subsequent replication failure? This article serves as a reminder that millisecond timing accuracy in psychology studies remains an important issue and that care needs to be taken to ensure that studies can be replicated on current computer hardware and software.
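
    As a quick practical companion to this reminder, the sketch below (not from the article) estimates the resolution of Python's high-resolution monotonic clock on the machine it runs on; it says nothing about display, input or driver latencies, which still require dedicated chronometry.

```python
import time

def estimate_timer_resolution(samples=100000):
    """Smallest positive difference observed between consecutive perf_counter reads."""
    smallest = float("inf")
    for _ in range(samples):
        t1 = time.perf_counter()
        t2 = time.perf_counter()
        if t2 > t1:
            smallest = min(smallest, t2 - t1)
    return smallest

print(f"approximate timer resolution: {estimate_timer_resolution() * 1e9:.0f} ns")
```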

  6. A Semi-Preemptive Computational Service System with Limited Resources and Dynamic Resource Ranking

    Directory of Open Access Journals (Sweden)

    Fang-Yie Leu

    2012-03-01

    Full Text Available In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current length of waiting queue, CPU utilization and bandwidth. With the scores, resource nodes are classified into three levels. User requests based on their time constraints are also classified into three types. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for the requests of higher type, it preempts the resource that is now executing a lower type request so that the request of higher type can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), users of which can then exploit the convenient mobile network services and the wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
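
    The semi-preemptive idea can be sketched as follows. The level/type matching rule and the preemption policy below are simplified assumptions made for illustration; they are not the SPA as published.

```python
class Resource:
    """A compute node with a quality level (1 = best) and at most one running job."""
    def __init__(self, name, level):
        self.name, self.level = name, level
        self.running = None                 # (request_type, request_id) or None

def schedule(resources, req_type, req_id):
    """Semi-preemptive assignment: prefer a free node whose level matches the
    request type (1 = most time-constrained); if all nodes are busy, preempt the
    node running the least urgent job, but only for a strictly more urgent request."""
    free = [r for r in resources if r.running is None]
    if free:
        node = min(free, key=lambda r: abs(r.level - req_type))
        node.running = (req_type, req_id)
        return f"{req_id} -> {node.name}"
    victim = max(resources, key=lambda r: r.running[0])
    if victim.running[0] > req_type:
        preempted = victim.running[1]
        victim.running = (req_type, req_id)
        return f"{req_id} -> {victim.name} (preempted {preempted})"
    return f"{req_id} queued"

nodes = [Resource("fast", 1), Resource("medium", 2), Resource("slow", 3)]
for req_type, req_id in [(3, "job-a"), (2, "job-b"), (1, "job-c"), (1, "job-d")]:
    print(schedule(nodes, req_type, req_id))
```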

  7. A Survey on Resource Allocation Strategies in Cloud Computing

    Directory of Open Access Journals (Sweden)

    V.Vinothina

    2012-06-01

    Full Text Available Cloud computing has become a new-age technology with huge potential in enterprises and markets. Clouds make it possible to access applications and associated data from anywhere. Companies are able to rent resources from the cloud for storage and other computational purposes so that their infrastructure cost can be reduced significantly. Further, they can make use of company-wide access to applications based on a pay-as-you-go model, so there is no need to obtain licenses for individual products. However, one of the major pitfalls in cloud computing is optimizing the resources being allocated. Because of the uniqueness of the model, resource allocation is performed with the objective of minimizing the associated costs. The other challenges of resource allocation are meeting customer demands and application requirements. In this paper, various resource allocation strategies and their challenges are discussed in detail. It is believed that this paper will benefit both cloud users and researchers in overcoming the challenges faced.

  8. GridFactory - Distributed computing on ephemeral resources

    DEFF Research Database (Denmark)

    Orellana, Frederik; Niinimaki, Marko

    2011-01-01

    A novel batch system for high throughput computing is presented. The system is specifically designed to leverage virtualization and web technology to facilitate deployment on cloud and other ephemeral resources. In particular, it implements a security model suited for forming collaborations...

  9. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    OpenAIRE

    Cirasella, Jill

    2009-01-01

    This article is an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news.

  10. Active resources concept of computation for enterprise software

    Directory of Open Access Journals (Sweden)

    Koryl Maciej

    2017-06-01

    Full Text Available Traditional computational models for enterprise software are still to a great extent centralized. However, the rapid growth of modern computation techniques and frameworks means that contemporary software is becoming more and more distributed. Towards the development of a new, complete and coherent solution for distributed enterprise software construction, a synthesis of three well-grounded concepts is proposed: the Domain-Driven Design technique of software engineering, the REST architectural style and the actor model of computation. As a result, a new resources-based framework arises, which after its first cases of use appears useful and worthy of further research.

  11. Computing Resource And Work Allocations Using Social Profiles

    Directory of Open Access Journals (Sweden)

    Peter Lavin

    2013-01-01

    Full Text Available If several distributed and disparate computer resources exist, many of which have been created for different and diverse reasons, and several large scale computing challenges also exist with similar diversity in their backgrounds, then one problem which arises in trying to assemble enough of these resources to address such challenges is the need to align and accommodate the different motivations and objectives which may lie behind the existence of both the resources and the challenges. Software agents are offered as a mainstream technology for modelling the types of collaborations and relationships needed to do this. As an initial step towards forming such relationships, agents need a mechanism to consider social and economic backgrounds. This paper explores addressing social and economic differences using a combination of textual descriptions known as social profiles and search engine technology, both of which are integrated into an agent technology.

  12. Exploiting multicore compute resources in the CMS experiment

    Science.gov (United States)

    Ramírez, J. E.; Pérez-Calero Yzquierdo, A.; Hernández, J. M.; CMS Collaboration

    2016-10-01

    CMS has developed a strategy to efficiently exploit the multicore architecture of the compute resources accessible to the experiment. A coherent use of the multiple cores available in a compute node yields substantial gains in terms of resource utilization. The implemented approach makes use of the multithreading support of the event processing framework and the multicore scheduling capabilities of the resource provisioning system. Multicore slots are acquired and provisioned by means of multicore pilot agents which internally schedule and execute single and multicore payloads. Multicore scheduling and multithreaded processing are currently used in production for online event selection and prompt data reconstruction. More workflows are being adapted to run in multicore mode. This paper presents a review of the experience gained in the deployment and operation of the multicore scheduling and processing system, the current status and future plans.

  13. Recognition of Computer Viruses by Detecting Their Gene of Self Replication

    Science.gov (United States)

    2006-03-01

    … addresses that are shared with write access. It uses entry-point obscuring (EPO) and an encryption method that is both very simple to implement and very … and code-optimized at a lower level. Replication includes a split-inject-regenerate mechanism for the virus body; replication includes a correct … sections and a file alignment: ghost = S_header + S_sec1 + S_sec2 + … + S_secN. … A viral code regeneration mechanism is indicative of cavity replication. Unless …

  14. The Grid Resource Broker, A Ubiquitous Grid Computing Framework

    Directory of Open Access Journals (Sweden)

    Giovanni Aloisio

    2002-01-01

    Full Text Available Portals to computational/data grids provide the scientific community with a friendly environment in order to solve large-scale computational problems. The Grid Resource Broker (GRB) is a grid portal that allows trusted users to create and handle computational/data grids on the fly exploiting a simple and friendly web-based GUI. GRB provides location-transparent secure access to Globus services, automatic discovery of resources matching the user's criteria, selection and scheduling on behalf of the user. Moreover, users are not required to learn Globus and they do not need to write specialized code or to rewrite their existing legacy codes. We describe GRB architecture, its components and current GRB features addressing the main differences between our approach and related work in the area.

  15. Computing Bounds on Resource Levels for Flexible Plans

    Science.gov (United States)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · O(maxflow(N))), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x, and O(maxflow(N)) is the measure of complexity (and thus of cost) of a maximum-flow computation.
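
    The polynomial max-flow construction itself is too involved for a short sketch, but the quantity being computed can be illustrated by brute force on a tiny flexible plan: enumerate every fixed-time instantiation allowed by the start-time windows and record, at each time point, the minimum and maximum resource level. The plan below is hypothetical, omits inter-activity temporal constraints, and is only a definition-by-enumeration of the envelope, not the polynomial algorithm described in the record.

```python
from itertools import product

# tiny hypothetical flexible plan: (earliest_start, latest_start, duration, resource_use)
activities = [
    (0, 2, 3, 2),   # uses 2 resource units while running
    (1, 4, 2, 1),
    (2, 3, 2, 3),
]
horizon = 10

def level(starts, t):
    """Resource level at time t for one fixed-time instantiation."""
    return sum(u for (es, ls, d, u), s in zip(activities, starts)
               if s <= t < s + d)

lower = [float("inf")] * horizon
upper = [float("-inf")] * horizon
# enumerate every fixed-time schedule permitted by the start windows
for starts in product(*[range(es, ls + 1) for es, ls, d, u in activities]):
    for t in range(horizon):
        lv = level(starts, t)
        lower[t] = min(lower[t], lv)
        upper[t] = max(upper[t], lv)

for t in range(horizon):
    print(f"t={t}: envelope [{lower[t]}, {upper[t]}]")
```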

  16. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    Science.gov (United States)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie; Atlas Collaboration

    2014-06-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources become available to the HEP community. The new cloud technologies also come with new challenges, and one such challenge is the contextualization of computing resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform, Google Compute Engine (GCE), upload of users' virtual machine images is not possible. This precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate the contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  17. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).
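
    The control loop of such an on-demand cloud manager can be sketched as follows. The cloud API here is a stub and the sizing rule (jobs per VM, VM cap) is an assumption; ROCED itself is a full manager with real cloud adapters and batch-system integration.

```python
import math

class FakeCloud:
    """Stand-in for a cloud API (OpenStack, EC2, ...); not a real client."""
    def __init__(self):
        self.machines = []
    def boot(self, n):
        self.machines += [f"vm-{len(self.machines) + i}" for i in range(n)]
    def terminate(self, n):
        for _ in range(min(n, len(self.machines))):
            self.machines.pop()

def reconcile(cloud, queued_jobs, jobs_per_vm=4, max_vms=10):
    """One iteration of the demand-driven scaling loop: boot or terminate
    virtual worker nodes so capacity roughly matches the queued demand."""
    needed = min(max_vms, math.ceil(queued_jobs / jobs_per_vm)) if queued_jobs else 0
    running = len(cloud.machines)
    if needed > running:
        cloud.boot(needed - running)
    elif needed < running:
        cloud.terminate(running - needed)
    return len(cloud.machines)

cloud = FakeCloud()
for demand in (0, 12, 30, 6, 0):            # queued jobs observed over time
    print(demand, "queued ->", reconcile(cloud, demand), "VMs running")
```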

  18. IMPROVING FAULT TOLERANT RESOURCE OPTIMIZED AWARE JOB SCHEDULING FOR GRID COMPUTING

    Directory of Open Access Journals (Sweden)

    K. Nirmala Devi

    2014-01-01

    Full Text Available Workflow brokers of existing Grid Scheduling Systems lack a cooperation mechanism, which causes inefficient scheduling of applications over distributed resources and also worsens the utilization of various resources, including network bandwidth and computational cycles. Furthermore, considering the literature, all of these existing brokering systems have primarily evolved around centralized hierarchical or client/server models. In such models, vital responsibilities such as resource discovery are delegated to centralized server machines, so they suffer from well-known disadvantages regarding single points of failure, scalability and network congestion on links leading to the server. In order to overcome these issues, we implement a new approach for decentralized cooperative workflow scheduling in a dynamically distributed resource sharing environment of Grids. The various actors in the system, namely the users belonging to multiple control domains, the workflow brokers and the resources, work together to enable a single cooperative resource sharing environment. However, this approach ignored the fact that each grid site may have its own fault-tolerance strategy, because each site is itself an autonomous domain. For instance, if a grid site uses a job check-pointing mechanism, each computation node must be able to periodically transmit the transient state of the job execution to the server; when a job fails, it migrates to another computational node and resumes from the last stored checkpoint. A Glowworm Swarm Optimization (GSO) algorithm for job scheduling is used to address the issue of heterogeneity in the fault tolerance of the computational grid, and a Weighted GSO that overcomes the position-update imperfections of general GSO in a more efficient manner is shown in the comparison analysis. The system supports four kinds of fault-tolerance mechanisms, including job migration, job retry, check-pointing and

  19. Common accounting system for monitoring the ATLAS Distributed Computing resources

    CERN Document Server

    Karavakis, E; The ATLAS collaboration; Campana, S; Gayazov, S; Jezequel, S; Saiz, P; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  20. Research on message resource optimization in computer supported collaborative design

    Institute of Scientific and Technical Information of China (English)

    张敬谊; 张申生; 陈纯; 王波

    2004-01-01

    An adaptive mechanism is presented to reduce bandwidth usage and to optimize the use of computing resources of heterogeneous computer mixes utilized in CSCD to reach the goal of collaborative design in distributed-synchronous mode. The mechanism is realized on a C/S architecture based on operation information sharing. Firstly, messages are aggregated into packets on the client. Secondly, an outgoing-message weight priority queue with traffic adjusting technique is cached on the server. Thirdly, an incoming-message queue is cached on the client. At last, the results of implementing the proposed scheme in a simple collaborative design environment are presented.

  1. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

    Full Text Available By leveraging cloud computing such as Infrastructure as a Service (IaaS), the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet applications. With this increasing trend, resource allocation that both assures QoS via Service Level Agreements (SLAs) and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming services. On the other hand, IaaS providers are also concerned with their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource allocation problem as a Multicriteria Decision Making problem and propose an effective trade-off approach based on a goal programming model. To validate its effectiveness, a cloud architecture for a streaming application is addressed and extensive analysis is performed for the related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.
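    Since the record does not reproduce the paper's goal-programming model, the following is a generic weighted goal-programming form for trading off service-oriented and infrastructure-oriented criteria; the criteria f_k, targets g_k, weights w_k and decision set X are placeholder symbols.

      \begin{align*}
      \min_{x,\; d^{+},\, d^{-}} \quad & \sum_{k} w_k \,\bigl(d_k^{+} + d_k^{-}\bigr) \\
      \text{s.t.} \quad & f_k(x) + d_k^{-} - d_k^{+} = g_k \quad \text{for each criterion } k, \\
      & x \in X, \qquad d_k^{+},\, d_k^{-} \ge 0,
      \end{align*}

    where f_k(x) is the achieved level of criterion k (for example SLA response time, provider profit or energy use), g_k is its target, and the deviations d_k^+ and d_k^- measure over- and under-achievement penalized with weights w_k.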

  2. Energy based Efficient Resource Scheduling in Green Computing

    Directory of Open Access Journals (Sweden)

    B.Vasumathi,

    2015-11-01

    Full Text Available Cloud Computing is an evolving area of efficient utilization of computing resources. Data centers accommodating Cloud applications consume massive quantities of energy, contributing to high operating expenditures and carbon footprints. Hence, Green Cloud computing solutions are required not only to save energy for the environment but also to reduce operating costs. In this paper, we focus on the development of an energy-based resource scheduling framework and present an algorithm that considers the synergy between the various data center infrastructures (i.e., software, hardware, etc.) and performance. Specifically, this paper proposes (a) architectural principles for energy-efficient management of Clouds and (b) energy-efficient resource allocation strategies and a scheduling algorithm considering Quality of Service (QoS) requirements. The performance of the proposed algorithm has been evaluated against the existing energy-based scheduling algorithms. The experimental results demonstrate that this approach is effective in minimizing the cost and energy consumption of Cloud applications, thus moving towards the achievement of Green Clouds.

  3. How job demands, resources, and burnout predict objective performance: a constructive replication.

    Science.gov (United States)

    Bakker, Arnold B; Van Emmerik, Hetty; Van Riet, Pim

    2008-07-01

    The present study uses the Job Demands-Resources model (Bakker & Demerouti, 2007) to examine how job characteristics and burnout (exhaustion and cynicism) contribute to explaining variance in objective team performance. A central assumption in the model is that working characteristics evoke two psychologically different processes. In the first process, job demands lead to constant psychological overtaxing and in the long run to exhaustion. In the second process, a lack of job resources precludes actual goal accomplishment, leading to cynicism. In the present study these two processes were used to predict objective team performance. A total of 176 employees from a temporary employment agency completed questionnaires on job characteristics and burnout. These self-reports were linked to information from the company's management information system about teams' (N=71) objective sales performance (actual sales divided by the stated objectives) during the 3 months after the questionnaire data collection period. The results of structural equation modeling analyses did not support the hypothesis that exhaustion mediates the relationship between job demands and performance, but confirmed that cynicism mediates the relationship between job resources and performance suggesting that work conditions influence performance particularly through the attitudinal component of burnout.

  4. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    CERN Document Server

    Öhman, H; The ATLAS collaboration; Hendrix, V

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. With the new cloud technologies come also new challenges, and one such is the contextualization of cloud resources with regard to requirements of the user and his experiment. In particular on Google's new cloud platform Google Compute Engine (GCE) upload of user's virtual machine images is not possible, which precludes application of ready to use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration, dynamic resource scaling, and high degree of scalability.

  5. Pre-allocation Strategies of Computational Resources in Cloud Computing using Adaptive Resonance Theory-2

    CERN Document Server

    Nair, T R Gopalakrishnan

    2012-01-01

    One of the major challenges of cloud computing is the management of request-response coupling and of optimal allocation strategies for computational resources across the various types of service requests. In normal situations, the intelligence required to classify the nature and order of requests with standard methods is insufficient, because requests arrive in a random fashion and are meant for multiple resources with different priority orders and varieties. Hence, it becomes essential to identify the trends of the different request streams in every category through automatic classification and to organize pre-allocation strategies in a predictive way. This calls for designs of intelligent modes of interaction between the client request and the cloud computing resource manager. This paper discusses the corresponding scheme using Adaptive Resonance Theory-2.

  6. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.

  7. Computer-aided identification, synthesis and evaluation of substituted thienopyrimidines as novel inhibitors of HCV replication.

    Science.gov (United States)

    Bassetto, Marcella; Leyssen, Pieter; Neyts, Johan; Yerukhimovich, Mark M; Frick, David N; Brancale, Andrea

    2016-11-10

    A structure-based virtual screening technique was applied to the study of the HCV NS3 helicase, with the aim to find novel inhibitors of the HCV replication. A library of ∼450000 commercially available compounds was analysed in silico and 21 structures were selected for biological evaluation in the HCV replicon assay. One hit characterized by a substituted thieno-pyrimidine scaffold was found to inhibit the viral replication with an EC50 value in the sub-micromolar range and a good selectivity index. Different series of novel thieno-pyrimidine derivatives were designed and synthesised; several new structures showed antiviral activity in the low or sub-micromolar range.

  8. MADLVF: An Energy Efficient Resource Utilization Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    J.K. Verma

    2014-06-01

    Full Text Available The last few decades have witnessed steep growth in the demand for higher computational power, largely due to the shift from the industrial age to the Information and Communication Technology (ICT) age brought about by the digital revolution. This trend in demand led to the establishment of large-scale data centers at geographically separate locations. These data centers consume a large amount of electrical energy, which results in very high operating cost and a large amount of carbon dioxide (CO2) emission due to resource underutilization. We propose the MADLVF algorithm to overcome problems such as resource underutilization, high energy consumption, and large CO2 emissions. Further, we present a comparative study between the proposed algorithm and the MADRS algorithms, showing that the proposed methodology outperforms the existing one in terms of energy consumption and the number of VM migrations.

  10. Mobile devices and computing cloud resources allocation for interactive applications

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2017-06-01

    Full Text Available Using mobile devices such as smartphones or iPads for various interactive applications is currently very common. In the case of complex applications, e.g. chess games, the capabilities of these devices are insufficient to run the application in real time. One of the solutions is to use cloud computing. However, there is an optimization problem of mobile device and cloud resources allocation. An iterative heuristic algorithm for application distribution is proposed. The algorithm minimizes the energy cost of application execution with constrained execution time.
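    The record describes the iterative heuristic only at a high level, so the sketch below shows one plausible greedy variant under stated assumptions: each task is placed on the mobile device or in the cloud, choosing the lower-energy option whose estimated completion still fits the deadline; the task fields and the sequential-execution time model are hypothetical.

      def place_tasks(tasks, deadline):
          """tasks: dicts with assumed fields 'name', 'local_energy', 'local_time',
          'tx_energy', 'cloud_time'; tasks are assumed to run one after another."""
          placement, elapsed, energy = {}, 0.0, 0.0
          for t in sorted(tasks, key=lambda t: t["cloud_time"]):   # simple ordering heuristic
              options = [("local", t["local_energy"], t["local_time"]),
                         ("cloud", t["tx_energy"], t["cloud_time"])]
              # keep only placements that still respect the remaining time budget;
              # fall back to all options if the deadline can no longer be met
              feasible = [o for o in options if elapsed + o[2] <= deadline]
              where, e, d = min(feasible or options, key=lambda o: o[1])  # cheapest energy
              placement[t["name"]] = where
              elapsed += d
              energy += e
          return placement, energy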

  11. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  12. Enabling Grid Computing resources within the KM3NeT computing model

    Science.gov (United States)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  13. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    Science.gov (United States)

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  15. An Optimal Solution of Resource Provisioning Cost in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Arun Pandian

    2013-03-01

    Full Text Available In cloud computing, providing optimal resources to users is increasingly important. Users access a pool of computing resources over the Internet, and cloud providers charge for these resources based on usage under two pricing plans: reservation and on-demand. Resources are provisioned through a cloud resource provisioning model, in which cost is high because optimizing resource cost under uncertainty is difficult. The uncertain provisioning cost has three components: on-demand cost, reservation cost and expending cost, and this uncertainty makes it hard to reach an optimal provisioning cost in cloud computing. Stochastic Integer Programming is therefore applied to obtain the optimal resource provisioning cost; in particular, two-stage Stochastic Integer Programming with recourse handles the complexity of optimization under uncertainty. The stochastic program is reformulated as its Deterministic Equivalent Formulation, which accounts for the probability distribution over all scenarios, to reduce the on-demand cost. Benders Decomposition breaks the resource optimization problem into multiple subproblems to reduce the on-demand and reservation costs, and Sample Average Approximation reduces the number of scenarios in the optimization problem, lowering the reservation and expending costs.
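    For orientation, here is a generic two-stage stochastic program for the reservation versus on-demand decision; the reserved capacity r, the scenario demand d_omega and the unit costs are placeholder symbols rather than the paper's notation.

      \begin{align*}
      \min_{r \ge 0} \quad & c^{\mathrm{res}}\, r \;+\; \mathbb{E}_{\omega}\bigl[\, Q(r,\omega) \,\bigr], \\
      Q(r,\omega) \;=\; \min_{u_\omega,\, o_\omega \ge 0} \quad & c^{\mathrm{use}}\, u_\omega + c^{\mathrm{od}}\, o_\omega
      \quad \text{s.t.} \quad u_\omega + o_\omega \ge d_\omega, \quad u_\omega \le r,
      \end{align*}

    where the first stage fixes the reserved capacity before demand is known, and the second stage pays usage and on-demand costs once the demand scenario omega is revealed. Replacing the expectation by an average over sampled scenarios gives the Sample Average Approximation, and the scenario subproblems Q(r, omega) are natural targets for Benders decomposition.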

  16. Distributed Cloud Computing Environment Enhanced With Capabilities for Wide-Area Migration and Replication Of Virtual Machines

    Directory of Open Access Journals (Sweden)

    Young-Chul Shim

    2013-12-01

    Full Text Available When a network application is implemented as a virtual machine on a cloud and is used by a large number of users, the location of the virtual machine should be selected carefully so that the response time experienced by users is minimized. As the user population moves and/or increases, the virtual machine may need to be migrated to a new location or replicated at many locations over a wide-area network. Virtual machine migration and replication have been studied extensively, but in most cases they are limited to a single subnetwork in order to maintain service continuity. In this paper we introduce a distributed cloud computing environment which facilitates the migration and replication of a virtual machine over a wide-area network. The mechanism is provided by an overlay network of smart routers, each of which connects a cooperating data center to the Internet. The proposed approach is analyzed and compared with related works.

  17. APOBEC3G-Augmented Stem Cell Therapy to Modulate HIV Replication: A Computational Study.

    Directory of Open Access Journals (Sweden)

    Iraj Hosseini

    Full Text Available The interplay between the innate immune system restriction factor APOBEC3G and the HIV protein Vif is a key host-retrovirus interaction. APOBEC3G can counteract HIV infection in at least two ways: by inducing lethal mutations on the viral cDNA; and by blocking steps in reverse transcription and viral integration into the host genome. HIV-Vif blocks these antiviral functions of APOBEC3G by impeding its encapsulation. Nonetheless, it has been shown that overexpression of APOBEC3G, or interfering with APOBEC3G-Vif binding, can efficiently block in vitro HIV replication. Some clinical studies have also suggested that high levels of APOBEC3G expression in HIV patients are correlated with increased CD4+ T cell count and low levels of viral load; however, other studies have reported contradictory results and challenged this observation. Stem cell therapy to replace a patient's immune cells with cells that are more HIV-resistant is a promising approach. Pre-implantation gene transfection of these stem cells can augment the HIV-resistance of progeny CD4+ T cells. As a protein, APOBEC3G has the advantage that it can be genetically encoded, while small molecules cannot. We have developed a mathematical model to quantitatively study the effects on in vivo HIV replication of therapeutic delivery of CD34+ stem cells transfected to overexpress APOBEC3G. Our model suggests that stem cell therapy resulting in a high fraction of APOBEC3G-overexpressing CD4+ T cells can effectively inhibit in vivo HIV replication. We extended our model to simulate the combination of APOBEC3G therapy with other biological activities, to estimate the likelihood of improved outcomes.

  18. A resource-sharing model based on a repeated game in fog computing

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2017-03-01

    Full Text Available With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.

  19. A resource-sharing model based on a repeated game in fog computing.

    Science.gov (United States)

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.

  20. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    Science.gov (United States)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less busy, nodes. In accordance with the algorithm (SIDA for short), load sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
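    A minimal sketch of the dual-mode, server-initiated transfer logic described above, under stated assumptions: the threshold values, the wakeup-timer handling and the node interface are illustrative, and the workload indicator is simply the local queue length divided by the local service rate.

      class Node:
          """Toy model of a computing node for server-initiated load sharing."""
          def __init__(self, name, service_rate, high=5.0, low=1.0):
              self.name, self.queue, self.service_rate = name, [], service_rate
              self.high, self.low = high, low      # assumed threshold levels

          def workload(self):
              # combined indicator: local queue length scaled by the local service rate
              return len(self.queue) / max(self.service_rate, 1e-9)

          def maybe_pull_job(self, peers, idle_timer_fired=False):
              """Pull one job from the most burdened peer, triggered either when a job
              finishes and this node is below its low threshold, or when the wakeup
              timer fires while the node is idle."""
              if not idle_timer_fired and self.workload() > self.low:
                  return None
              donors = [p for p in peers if p.queue and p.workload() > p.high]
              if not donors:
                  return None
              donor = max(donors, key=lambda p: p.workload())
              job = donor.queue.pop(0)
              self.queue.append(job)
              return job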

  1. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  3. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    Modern day continued demand for resource hungry services and applications in IT sector has led to development of Cloud computing. Cloud computing environment involves high cost infrastructure on one hand and need high scale computational resources on the other hand. These resources need to be provisioned (allocation and scheduling) to the end users in most efficient manner so that the tremendous capabilities of cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  4. Homology-independent discovery of replicating pathogenic circular RNAs by deep sequencing and a new computational algorithm.

    Science.gov (United States)

    Wu, Qingfa; Wang, Ying; Cao, Mengji; Pantaleo, Vitantonio; Burgyan, Joszef; Li, Wan-Xiang; Ding, Shou-Wei

    2012-03-06

    A common challenge in pathogen discovery by deep sequencing approaches is to recognize viral or subviral pathogens in samples of diseased tissue that share no significant homology with a known pathogen. Here we report a homology-independent approach for discovering viroids, a distinct class of free circular RNA subviral pathogens that encode no protein and are known to infect plants only. Our approach involves analyzing the sequences of the total small RNAs of the infected plants obtained by deep sequencing with a unique computational algorithm, progressive filtering of overlapping small RNAs (PFOR). Viroid infection triggers production of viroid-derived overlapping siRNAs that cover the entire genome with high densities. PFOR retains viroid-specific siRNAs for genome assembly by progressively eliminating nonoverlapping small RNAs and those that overlap but cannot be assembled into a direct repeat RNA, which is synthesized from circular or multimeric repeated-sequence templates during viroid replication. We show that viroids from the two known families are readily identified and their full-length sequences assembled by PFOR from small RNAs sequenced from infected plants. PFOR analysis of a grapevine library further identified a viroid-like circular RNA 375 nt long that shared no significant sequence homology with known molecules and encoded active hammerhead ribozymes in RNAs of both plus and minus polarities, which presumably self-cleave to release monomer from multimeric replicative intermediates. A potential application of the homology-independent approach for viroid discovery in plant and animal species where RNA replication triggers the biogenesis of siRNAs is discussed.

  5. The Mechanism of Resource Dissemination and Resource Discovery for Computational Grid

    Institute of Scientific and Technical Information of China (English)

    武秀川; 鞠九滨

    2003-01-01

    A computational Grid is a large-scale distributed computing environment. Its resource management discovers, locates and allocates resources for users within the grid environment when they request those resources, or coordinates resources that cooperate in order to finish a large computation. These tasks are accomplished by the resource dissemination and resource discovery mechanisms of the grid resource management system. In this paper, some problems concerning resource dissemination and resource discovery are discussed and analyzed, and future work on this topic is proposed.

  6. Computational inference of replication and transcription activator regulator activity in herpesvirus from gene expression data

    NARCIS (Netherlands)

    Recchia, A.; Wit, E.; Vinciotti, V.; Kellam, P.

    2008-01-01

    One of the main aims of system biology is to understand the structure and dynamics of genomic systems. A computational approach, facilitated by new technologies for high-throughput quantitative experimental data, is put forward to investigate the regulatory system of dynamic interaction among genes

  8. An Improved Constraint Based Resource Scheduling Approach Using Job Grouping Strategy in Grid Computing

    Directory of Open Access Journals (Sweden)

    Payal Singhal

    2013-01-01

    Full Text Available Grid computing is a collection of distributed resources interconnected by networks that presents a unified virtual computing resource view to the user. An important responsibility of grid computing is resource management, together with techniques that allow the user to optimize job completion time and achieve good throughput; designing an efficient scheduler and implementing it is a significant challenge. In this paper, a constraint-based job and resource scheduling algorithm is proposed. Four constraints are taken into account for grouping the jobs: resource memory, job memory, job MI (million instructions) and, as the fourth constraint, the L2 cache of the resource. Our implementation reduces the processing time by adding the fourth constraint, the resource's L2 cache, when groups are allocated to resources for parallel computing. The L2 cache is part of the computer's processor; it is a small, extremely fast memory that increases the computer's performance, and using more constraints on the resource and job can increase the efficiency further. The work has been done in MATLAB using the Parallel Computing Toolbox: all constraints are calculated using different MATLAB functions and jobs are allocated to resources accordingly. Resource memory, cache, job memory size and job MI are the key factors for grouping jobs according to the available capability of the selected resource, and processing time is used to analyze the feasibility of the algorithms.
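    A minimal sketch of the grouping idea under stated assumptions: jobs are packed into a group as long as the group's total MI, memory and working-set size stay within the selected resource's MIPS times a granularity time, its memory and its L2 cache; all field names and the granularity value are hypothetical.

      def group_jobs(jobs, resource, granularity=10.0):
          """jobs: dicts with assumed fields 'mi', 'mem', 'data_kb';
          resource: dict with assumed fields 'mips', 'mem', 'l2_kb'."""
          caps = (resource["mips"] * granularity,   # MI one group may contain
                  resource["mem"],                  # memory available on the resource
                  resource["l2_kb"])                # simplified stand-in for the L2 cache constraint
          groups, current, sums = [], [], [0.0, 0.0, 0.0]
          for job in jobs:
              need = (job["mi"], job["mem"], job["data_kb"])
              # close the current group when adding this job would violate any constraint
              if current and any(s + n > c for s, n, c in zip(sums, need, caps)):
                  groups.append(current)
                  current, sums = [], [0.0, 0.0, 0.0]
              current.append(job)
              sums = [s + n for s, n in zip(sums, need)]
          if current:
              groups.append(current)
          return groups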

  9. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has been widely used. In particular, the success of object-oriented and multimedia technology has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in the computer field, and graphics technology is being applied ever more widely in various domains. In recent years, with the development of the social economy and especially the rapid development of information technology, the traditional way of managing communication resources can no longer meet the needs of resource management effectively: it still relies on the original management tools and methods for managing and maintaining resources and equipment, which causes many problems. It is very difficult for non-professionals to understand the equipment and the overall situation in communication resource management, resource utilization is relatively low, and managers cannot quickly and accurately grasp the state of resources. Aiming at the above problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  10. wolfPAC: building a high-performance distributed computing network for phylogenetic analysis using 'obsolete' computational resources.

    Science.gov (United States)

    Reeves, Patrick A; Friedman, Philip H; Richards, Christopher M

    2005-01-01

    wolfPAC is an AppleScript-based software package that facilitates the use of numerous, remotely located Macintosh computers to perform computationally-intensive phylogenetic analyses using the popular application PAUP* (Phylogenetic Analysis Using Parsimony). It has been designed to utilise readily available, inexpensive processors and to encourage sharing of computational resources within the worldwide phylogenetics community.

  11. Fault tolerant workflow scheduling based on replication and resubmission of tasks in Cloud Computing

    OpenAIRE

    Jayadivya S K; Jaya Nirmala S; Mary Saira Bhanu S

    2012-01-01

    The aim of a workflow scheduling system is to schedule workflows within the user-given deadline to achieve a good success rate. A workflow is a set of tasks processed in a predefined order based on their data and control dependencies. Scheduling these workflows in a computing environment, such as a cloud environment, is an NP-complete problem, and it becomes more challenging when failures of tasks are considered. To overcome these failures, the workflow scheduling system should be fault tolerant. In thi...

  12. SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    CERN Document Server

    Buyya, Rajkumar; Calheiros, Rodrigo N

    2012-01-01

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services and meet users' quality expectations. Existing resource management systems in data centers are yet to support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system to target the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports the integration of market-based provisioning policies and virtualisation technologies for flexible alloc...

  13. The Relative Effectiveness of Computer-Based and Traditional Resources for Education in Anatomy

    Science.gov (United States)

    Khot, Zaid; Quinlan, Kaitlyn; Norman, Geoffrey R.; Wainman, Bruce

    2013-01-01

    There is increasing use of computer-based resources to teach anatomy, although no study has compared computer-based learning to traditional. In this study, we examine the effectiveness of three formats of anatomy learning: (1) a virtual reality (VR) computer-based module, (2) a static computer-based module providing Key Views (KV), (3) a plastic…

  14. INJECT AN ELASTIC GRID COMPUTING TECHNIQUES TO OPTIMAL RESOURCE MANAGEMENT TECHNIQUE OPERATIONS

    Directory of Open Access Journals (Sweden)

    R. Surendran

    2013-01-01

    Full Text Available Resource sharing on the Internet has evolved from the dynamic techniques of grid computing: dynamic grid computing is resource sharing across large-scale, high-performance computing networks worldwide. Existing systems offer limited innovation in the resource management process. In the proposed work, grid computing is treated as Internet-based computing for Optimal Resource Management Technique Operations (ORMTO). ORMTO comprise an elastic scheduling algorithm, prediction of the best grid node for a task, fault-tolerant resource selection, perfect resource co-allocation, grid-balanced resource matchmaking, agent-based grid services and wireless mobile resource access. The various resource management techniques are surveyed against performance measurement factors such as time complexity, space complexity and energy complexity in order to identify the ORMTO for grid computing. The objectives of ORMTO are to provide efficient, automatic resource co-allocation for a user who submits a job without grid knowledge, to design a grid service (portal) that selects the best fault-tolerant resource for a given task in a fast, secure and efficient manner, and to provide an enhanced grid balancing system for multi-tasking via hybrid-topology-based grid ranking. Good Quality of Service (QoS) parameters play an important role in all resource management techniques, and the proposed ORMTO system uses a larger number of QoS parameters to improve on the existing techniques.

  15. Effective Computer Resource Management: Keeping the Tail from Wagging the Dog.

    Science.gov (United States)

    Sampson, James P., Jr.

    1982-01-01

    Predicts that student services will be increasingly influenced by computer technology. Suggests this resource be managed effectively to minimize potential problems and prevent a mechanistic and impersonal environment. Urges student personnel workers to assume active responsibility for planning, evaluating, and operating computer resources. (JAC)

  16. Economic-based Distributed Resource Management and Scheduling for Grid Computing

    CERN Document Server

    Buyya, R

    2002-01-01

    Computational Grids, emerging as an infrastructure for next generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service-based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates the users to trade-off bet...

  17. Wide-Area Computing: Resource Sharing on a Large Scale

    Science.gov (United States)

    1999-01-01

  18. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    This paper analyzes the decision-making problem confronting SMEs considering the adoption of cloud computing as an alternative to in-house computing services provision. The economics of choosing between in-house computing and a cloud alternative is analyzed by comparing the total economic costs o...

  19. Professional Computer Education Organizations--A Resource for Administrators.

    Science.gov (United States)

    Ricketts, Dick

    Professional computer education organizations serve a valuable function by generating, collecting, and disseminating information concerning the role of the computer in education. This report touches briefly on the reasons for the rapid and successful development of professional computer education organizations. A number of attributes of effective…

  20. Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Lingna He

    2012-09-01

    Full Text Available Cloud computing is a new business computing mode intended to replace traditional Internet software usage patterns and enterprise management modes, and the resource scheduling strategy is one of its key technologies. Based on a study of the cloud computing system structure and its mode of operation, this paper focuses on the job scheduling and resource allocation problems in cloud computing using the ant colony algorithm, and presents a detailed analysis and design of a concrete implementation of cloud resource scheduling. Simulation experiments in the CloudSim environment show that the algorithm achieves better scheduling performance and load balance than the general algorithm.
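    The record does not spell out the ant colony variant used, so below is a minimal, generic ant-colony sketch that assigns tasks to virtual machines and reinforces the best-so-far assignment; the pheromone and heuristic parameters and the execution-time matrix are illustrative assumptions.

      import random

      def aco_schedule(exec_time, n_ants=10, n_iter=50,
                       alpha=1.0, beta=2.0, rho=0.5, q=1.0):
          """exec_time[t][m]: assumed estimated runtime of task t on machine m (> 0)."""
          n_tasks, n_machines = len(exec_time), len(exec_time[0])
          tau = [[1.0] * n_machines for _ in range(n_tasks)]   # pheromone trails
          best, best_makespan = None, float("inf")
          for _ in range(n_iter):
              for _ in range(n_ants):
                  loads, assign = [0.0] * n_machines, []
                  for t in range(n_tasks):
                      # desirability combines pheromone with a 1/runtime heuristic
                      w = [tau[t][m] ** alpha * (1.0 / exec_time[t][m]) ** beta
                           for m in range(n_machines)]
                      m = random.choices(range(n_machines), weights=w)[0]
                      assign.append(m)
                      loads[m] += exec_time[t][m]
                  makespan = max(loads)
                  if makespan < best_makespan:
                      best, best_makespan = assign, makespan
              # evaporate all trails, then reinforce the best-so-far assignment
              tau = [[(1 - rho) * p for p in row] for row in tau]
              for t, m in enumerate(best):
                  tau[t][m] += q / best_makespan
          return best, best_makespan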

  1. Computational system to create an entry file for replicating I-125 seeds simulating brachytherapy case studies using the MCNPX code

    Directory of Open Access Journals (Sweden)

    Leonardo da Silva Boia

    2014-03-01

    decline for short distances. Cite this article as: Boia LS, Junior J, Menezes AF, Silva AX. Computational system to create an entry file for replicating I-125 seeds simulating brachytherapy case studies using the MCNPX code. Int J Cancer Ther Oncol 2014; 2(2):02023. DOI: http://dx.doi.org/10.14319/ijcto.0202.3

  2. Relational Computing Using HPC Resources: Services and Optimizations

    OpenAIRE

    2015-01-01

    Computational epidemiology involves processing, analysing and managing large volumes of data. Such massive datasets cannot be handled efficiently by using traditional standalone database management systems, owing to their limitation in the degree of computational efficiency and bandwidth to scale to large volumes of data. In this thesis, we address management and processing of large volumes of data for modeling, simulation and analysis in epidemiological studies. Traditionally, compute intens...

  3. Science and Technology Resources on the Internet: Computer Security.

    Science.gov (United States)

    Kinkus, Jane F.

    2002-01-01

    Discusses issues related to computer security, including confidentiality, integrity, and authentication or availability; and presents a selected list of Web sites that cover the basic issues of computer security under subject headings that include ethics, privacy, kids, antivirus, policies, cryptography, operating system security, and biometrics.…

  4. The gap between research and practice: a replication study on the HR professionals' beliefs about effective human resource practices

    NARCIS (Netherlands)

    Sanders, Karin; Riemsdijk, van Maarten; Groen, Bianca

    2008-01-01

    In 2002 Rynes, Colbert and Brown asked human resource (HR) professionals to what extent they agreed with various HR research findings. Responses from 959 American participants showed that there are large discrepancies between research findings and practitioners' beliefs about effective human resourc

  5. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Science.gov (United States)

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
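    To illustrate the revenue-sharing step, here is a small brute-force Shapley value computation over coalition orderings; the characteristic function (the revenue a set of providers earns together) and the toy numbers are hypothetical stand-ins for the paper's model, and the exact method is only practical for small coalitions.

      from itertools import permutations

      def shapley_values(players, value):
          """value: function mapping a frozenset of players to that coalition's revenue."""
          totals = {p: 0.0 for p in players}
          orders = list(permutations(players))
          for order in orders:
              coalition = frozenset()
              for p in order:
                  joined = coalition | {p}
                  # marginal revenue player p adds by joining this coalition
                  totals[p] += value(joined) - value(coalition)
                  coalition = joined
          return {p: t / len(orders) for p, t in totals.items()}

      # toy usage: three providers whose pooled spare resources earn more together
      revenue = {frozenset(): 0, frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 2,
                 frozenset("AB"): 3, frozenset("AC"): 4, frozenset("BC"): 4, frozenset("ABC"): 6}
      print(shapley_values("ABC", lambda s: revenue[s]))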

  6. Computer Simulation and Digital Resources for Plastic Surgery Psychomotor Education.

    Science.gov (United States)

    Diaz-Siso, J Rodrigo; Plana, Natalie M; Stranix, John T; Cutting, Court B; McCarthy, Joseph G; Flores, Roberto L

    2016-10-01

    Contemporary plastic surgery residents are increasingly challenged to learn a greater number of complex surgical techniques within a limited period. Surgical simulation and digital education resources have the potential to address some limitations of the traditional training model, and have been shown to accelerate knowledge and skills acquisition. Although animal, cadaver, and bench models are widely used for skills and procedure-specific training, digital simulation has not been fully embraced within plastic surgery. Digital educational resources may play a future role in a multistage strategy for skills and procedures training. The authors present two virtual surgical simulators addressing procedural cognition for cleft repair and craniofacial surgery. Furthermore, the authors describe how partnerships among surgical educators, industry, and philanthropy can be a successful strategy for the development and maintenance of digital simulators and educational resources relevant to plastic surgery training. It is our responsibility as surgical educators not only to create these resources, but to demonstrate their utility for enhanced trainee knowledge and technical skills development. Currently available digital resources should be evaluated in partnership with plastic surgery educational societies to guide trainees and practitioners toward effective digital content.

  7. Quantum computing with incoherent resources and quantum jumps.

    Science.gov (United States)

    Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R

    2012-04-27

    Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.

  8. National Resource for Computation in Chemistry (NRCC). Attached scientific processors for chemical computations: a report to the chemistry community

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, N.S.

    1980-01-01

    The demands of chemists for computational resources are well known and have been amply documented. The best and most cost-effective means of providing these resources is still open to discussion, however. This report surveys the field of attached scientific processors (array processors) and attempts to indicate their present and possible future use in computational chemistry. Array processors have the possibility of providing very cost-effective computation. This report attempts to provide information that will assist chemists who might be considering the use of an array processor for their computations. It describes the general ideas and concepts involved in using array processors, the commercial products that are available, and the experiences reported by those currently using them. In surveying the field of array processors, the author makes certain recommendations regarding their use in computational chemistry. 5 figures, 1 table (RWR)

  9. iTools: a framework for classification, categorization and integration of computational biology resources.

    Science.gov (United States)

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management

  10. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    Full Text Available The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  11. Optimal Computing Resource Management Based on Utility Maximization in Mobile Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Haoyu Meng

    2017-01-01

    Full Text Available Mobile crowdsourcing, as an emerging service paradigm, enables the computing resource requestor (CRR to outsource computation tasks to each computing resource provider (CRP. Considering the importance of pricing as an essential incentive to coordinate the real-time interaction among the CRR and CRPs, in this paper, we propose an optimal real-time pricing strategy for computing resource management in mobile crowdsourcing. Firstly, we analytically model the CRR and CRPs behaviors in form of carefully selected utility and cost functions, based on concepts from microeconomics. Secondly, we propose a distributed algorithm through the exchange of control messages, which contain the information of computing resource demand/supply and real-time prices. We show that there exist real-time prices that can align individual optimality with systematic optimality. Finally, we also take account of the interaction among CRPs and formulate the computing resource management as a game with Nash equilibrium achievable via best response. Simulation results demonstrate that the proposed distributed algorithm can potentially benefit both the CRR and CRPs. The coordinator in mobile crowdsourcing can thus use the optimal real-time pricing strategy to manage computing resources towards the benefit of the overall system.
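    For orientation, here is a generic utility-maximization view of this setup together with a price update of the kind such distributed algorithms typically use; U, the provider costs c_i and the step size epsilon are placeholder symbols, not the paper's exact functions.

      \begin{align*}
      \text{system objective:}\quad & \max_{x_i \ge 0}\; U\Bigl(\sum_i x_i\Bigr) - \sum_i c_i(x_i), \\
      \text{CRP } i \text{ at price } \lambda:\quad & \max_{x_i \ge 0}\; \lambda\, x_i - c_i(x_i),
      \qquad \text{CRR:}\quad \max_{d \ge 0}\; U(d) - \lambda\, d, \\
      \text{price update:}\quad & \lambda^{(k+1)} = \Bigl[\lambda^{(k)} + \epsilon\bigl(d^{(k)} - \textstyle\sum_i x_i^{(k)}\bigr)\Bigr]^{+},
      \end{align*}

    where U is the requestor's utility, c_i are the providers' costs, and the real-time price lambda rises whenever the requested demand exceeds the computing resources offered, which is the sense in which suitable prices can align individual optimality with system optimality.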

  12. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    Science.gov (United States)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is two fold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16 week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent T-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in

  13. An Efficient Algorithm for Resource Allocation in Parallel and Distributed Computing Systems

    Directory of Open Access Journals (Sweden)

    S.F. El-Zoghdy

    2013-03-01

    Full Text Available Resource allocation in heterogeneous parallel and distributed computing systems is the process of allocating user tasks to processing elements for execution such that some performance objective is optimized. In this paper, a new resource allocation algorithm for the computing grid environment is proposed. It takes into account the heterogeneity of the computational resources and resolves the single-point-of-failure problem from which many current algorithms suffer. In this algorithm, any site manager receives two kinds of tasks, namely remote tasks arriving from its associated local grid manager and local tasks submitted directly to the site manager by local users in its domain. It allocates the grid workload based on the resource occupation ratio and the communication cost. The grid overall mean task response time is considered as the main performance metric to be minimized. The simulation results show that the proposed resource allocation algorithm improves the grid overall mean task response time.
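
    As a rough illustration of the selection rule suggested by the abstract, the sketch below scores each candidate site by a weighted combination of its resource occupation ratio and its communication cost and picks the minimum; the weights and field names are assumptions, not the paper's actual formula.

```python
# Hypothetical site-selection rule: lowest weighted mix of occupation ratio
# and communication cost wins.  Weights and field names are illustrative.

def select_site(sites, w_occupation=0.7, w_comm=0.3):
    """sites: list of dicts with 'name', 'busy', 'capacity', 'comm_cost'."""
    def score(site):
        occupation = site["busy"] / site["capacity"]
        return w_occupation * occupation + w_comm * site["comm_cost"]
    return min(sites, key=score)

if __name__ == "__main__":
    sites = [
        {"name": "site-A", "busy": 40, "capacity": 100, "comm_cost": 0.2},
        {"name": "site-B", "busy": 10, "capacity": 50,  "comm_cost": 0.6},
    ]
    print(select_site(sites)["name"])
```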

  14. Justification of Filter Selection for Robot Balancing in Conditions of Limited Computational Resources

    Science.gov (United States)

    Momot, M. V.; Politsinskaia, E. V.; Sushko, A. V.; Semerenko, I. A.

    2016-08-01

    The paper considers the problem of selecting a mathematical filter for balancing a wheeled robot under limited computational resources. A solution based on a complementary filter is proposed.
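
    A complementary filter of the kind referred to here is cheap enough for a small embedded controller. The sketch below is the generic textbook form, not the paper's specific implementation; the blend factor alpha and the fake sensor readings are illustrative.

```python
# Generic complementary filter: fuse an integrated gyro estimate with an
# accelerometer-derived angle.  Alpha and the fake readings are assumptions.

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Return the new tilt estimate from the previous estimate and sensors."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

if __name__ == "__main__":
    angle = 0.0
    dt = 0.01
    for step in range(100):
        gyro_rate = 0.5                  # rad/s, fake gyro reading
        accel_angle = 0.5 * step * dt    # rad, fake accelerometer angle
        angle = complementary_filter(angle, gyro_rate, accel_angle, dt)
    print(f"estimated angle: {angle:.3f} rad")
```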

  15. Relaxed resource advance reservation policy in grid computing

    Institute of Scientific and Technical Information of China (English)

    XIAO Peng; HU Zhi-gang

    2009-01-01

    The advance reservation technique has been widely applied in many grid systems to provide end-to-end quality of service (QoS). However, it results in a low resource utilization rate and a high rejection rate when the reservation rate is high. To mitigate these negative effects of advance reservation, a relaxed advance reservation policy is proposed, which allows new reservation requests that overlap existing reservations to be accepted under certain conditions. Both the benefits and the risks of the proposed policy are presented theoretically. The experimental results show that the policy achieves a higher resource utilization rate and a lower rejection rate than the conventional reservation policy and the backfilling technique. In addition, the policy adapts better when grid systems experience a high reservation rate.
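
    The abstract does not spell out the acceptance conditions, so the sketch below assumes one plausible rule: an overlapping request is admitted only while the combined requested capacity in its window stays under a relaxation threshold times the site capacity. All names and numbers are hypothetical.

```python
# Assumed acceptance rule for a relaxed reservation policy; the real
# condition in the paper may differ.

def overlaps(a, b):
    return a["start"] < b["end"] and b["start"] < a["end"]

def accept_relaxed(new, existing, capacity, relax=1.2):
    """Return True if 'new' may be admitted under the relaxed policy."""
    overlap_load = sum(r["cpus"] for r in existing if overlaps(new, r))
    return new["cpus"] + overlap_load <= relax * capacity

if __name__ == "__main__":
    booked = [{"start": 10, "end": 20, "cpus": 80}]
    request = {"start": 15, "end": 25, "cpus": 30}
    print(accept_relaxed(request, booked, capacity=100))   # True: 110 <= 120
```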

  16. Research on Digital Agricultural Information Resources Sharing Plan Based on Cloud Computing

    OpenAIRE

    2011-01-01

    Part 1: Decision Support Systems, Intelligent Systems and Artificial Intelligence Applications; International audience; In order to provide agricultural workers with customized, visual, multi-perspective and multi-level active services, we conduct research on a digital agricultural information resource sharing plan based on cloud computing, to integrate and publish digital agricultural information resources efficiently and in a timely manner. Based on cloud computing and virtualization technology, w...

  17. Regional research exploitation of the LHC a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

    A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yard-stick. Other input parameters were: assumptions for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

  18. Efficient Qos Based Resource Scheduling Using PAPRIKA Method for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Hilda Lawrance

    2013-03-01

    Full Text Available Cloud computing is increasingly being used in enterprises and business markets for serving demanding jobs. The performance of resource scheduling in cloud computing is important due to the increase in the number of users, services and types of services. Resource scheduling is influenced by many factors such as CPU speed, memory, bandwidth, etc. Therefore, resource scheduling can be modeled as a multi-criteria decision making problem. This study proposes an efficient QoS-based resource scheduling algorithm using potentially all pairwise rankings of all possible alternatives (PAPRIKA). The tasks are ordered based on the QoS parameters and the resources are allocated to the appropriate tasks based on the PAPRIKA method and user satisfaction. The scheduling algorithm was simulated with the CloudSim toolkit. The experiments show that the algorithm reduces task completion time and improves the resource utilization rate.
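
    The full PAPRIKA pairwise-ranking procedure is not reproduced here; in the sketch below a simple weighted QoS score stands in for the ranking it would produce, and tasks are then greedily assigned to the resource that finishes them earliest. Weights, fields, and resource names are illustrative assumptions.

```python
# Weighted QoS score as a stand-in for a PAPRIKA-derived task ranking,
# followed by greedy earliest-finish assignment.  All data are made up.

WEIGHTS = {"deadline": 0.5, "priority": 0.3, "budget": 0.2}  # assumed QoS weights

def qos_score(task):
    return sum(WEIGHTS[k] * task[k] for k in WEIGHTS)

def schedule(tasks, resources):
    """resources: dict name -> speed; returns task name -> resource mapping."""
    finish = {r: 0.0 for r in resources}
    plan = {}
    for task in sorted(tasks, key=qos_score, reverse=True):
        best = min(resources, key=lambda r: finish[r] + task["length"] / resources[r])
        finish[best] += task["length"] / resources[best]
        plan[task["name"]] = best
    return plan

if __name__ == "__main__":
    tasks = [
        {"name": "t1", "length": 8, "deadline": 0.9, "priority": 0.5, "budget": 0.2},
        {"name": "t2", "length": 3, "deadline": 0.4, "priority": 0.9, "budget": 0.7},
    ]
    print(schedule(tasks, {"vm-small": 1.0, "vm-large": 4.0}))
```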

  19. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real world market, there exist various economic models for setting the price for goods based on supply and demand and their value to the user. They include commodity markets, posted prices, tenders, and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
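
    Two of the pricing models named above can be illustrated in a few lines. The sketch below contrasts a posted-price sale with a second-price (Vickrey) auction; the bids and the posted price are made-up numbers, and a real Grid economy such as Nimrod/G layers deadlines, budgets, and brokering on top of this.

```python
# Toy illustration of two pricing models: posted price and a second-price
# (Vickrey) auction.  Bids and prices are invented for the example.

def posted_price_sale(posted, bids):
    """Sell to every consumer willing to pay the posted price."""
    return [name for name, bid in bids.items() if bid >= posted]

def second_price_auction(bids):
    """Vickrey auction: highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price_paid = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price_paid

if __name__ == "__main__":
    bids = {"user-A": 12.0, "user-B": 9.5, "user-C": 7.0}
    print(posted_price_sale(10.0, bids))     # ['user-A']
    print(second_price_auction(bids))        # ('user-A', 9.5)
```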

  20. Multi-Programmatic and Institutional Computing Capacity Resource Attachment 2 Statement of Work

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M

    2002-04-15

    Lawrence Livermore National Laboratory (LLNL) has identified high-performance computing as a critical competency necessary to meet the goals of LLNL's scientific and engineering programs. Leadership in scientific computing demands the availability of a stable, powerful, well-balanced computational infrastructure, and it requires research directed at advanced architectures, enabling numerical methods and computer science. To encourage all programs to benefit from the huge investment being made by the Advanced Simulation and Computing Program (ASCI) at LLNL, and to provide a mechanism to facilitate multi-programmatic leveraging of resources and access to high-performance equipment by researchers, M&IC was created. The Livermore Computing (LC) Center, a part of the Computations Directorate Integrated Computing and Communications (ICC) Department, can be viewed as composed of two facilities, one open and one secure. This acquisition is focused on the M&IC resources in the Open Computing Facility (OCF). For the M&IC program, recent efforts and expenditures have focused on enhancing capacity and stabilizing the TeraCluster 2000 (TC2K) resource. Capacity is a measure of the ability to process a varied workload from many scientists simultaneously. Capability represents the ability to deliver a very large system to run scientific calculations at large scale. In this procurement action, we intend to significantly increase the capability of the M&IC resource to address multiple teraFLOP/s problems, as well as to increase the capacity to do many 100 gigaFLOP/s calculations.

  1. Computers and Resource-Based History Teaching: A UK Perspective.

    Science.gov (United States)

    Spaeth, Donald A.; Cameron, Sonja

    2000-01-01

    Presents an overview of developments in computer-aided history teaching for higher education in the United Kingdom and the United States. Explains that these developments have focused on providing students with access to primary sources to enhance their understanding of historical methods and content. (CMK)

  2. Grid Computing: A Collaborative Approach in Distributed Environment for Achieving Parallel Performance and Better Resource Utilization

    Directory of Open Access Journals (Sweden)

    Sashi Tarun

    2011-01-01

    Full Text Available From the very beginning, various measures have been taken to better utilize the limited resources available in a computer system, because much of the time a system sits idle and cannot exploit its resources and capabilities as a whole, causing low performance. Parallel computing can work efficiently where operations are handled independently by multiple processors: all processing units work in parallel and increase system throughput without resource allocation conflicts among them, but this is effective only within a single machine. Today, establishing and maintaining a high-speed computational environment in a distributed scenario is a challenging task, because operations no longer depend on a single resource but on interactions with other resources across a vast network architecture. Current resource management systems work smoothly only when the resources lie within their own clusters or local organizations, or are distributed among many users who need processing power; in a vast distributed environment, performing operational activities is difficult because data is not maintained in a centralized location but is geographically dispersed over multiple remote computer systems. Computers in a distributed environment depend on multiple resources to complete their tasks, so effective performance with high resource availability for each computer is the major concern. To solve this problem, a new approach called the "Grid Computing" environment was coined. A grid uses middleware to coordinate disparate resources across a network, allowing users to function as a virtual whole and making computing fast. In this paper I want to

  3. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  5. Adaptive workflow scheduling in grid computing based on dynamic resource availability

    Directory of Open Access Journals (Sweden)

    Ritu Garg

    2015-06-01

    Full Text Available Grid computing enables large-scale resource sharing and collaboration for solving advanced science and engineering applications. Central to grid computing is the scheduling of application tasks to resources. Various strategies have been proposed, including static and dynamic strategies: the former schedules tasks to resources before the actual execution time, while the latter schedules them at execution time. Static scheduling performs better, but it is not suitable for a dynamic grid environment. The lack of dedicated resources and variations in their availability at run time make this scheduling a great challenge. In this study, we propose an adaptive approach to scheduling workflow tasks (dependent tasks) on dynamic grid resources based on a rescheduling method. It deals with the heterogeneous dynamic grid environment, where fluctuations in the availability of computing nodes and link bandwidth are inevitable due to local load or load imposed by other users. The proposed adaptive workflow scheduling (AWS) approach involves initial static scheduling, resource monitoring, and rescheduling, with the aim of achieving the minimum execution time for the workflow application. The approach differs from other techniques in the literature in that it considers changes in resource (host and link) availability and the impact of existing load on the grid resources. Simulation results using randomly generated task graphs and task graphs corresponding to real-world problems (GE and FFT) demonstrate that the proposed algorithm is able to deal with fluctuations in resource availability and provides overall optimal performance.
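
    A minimal skeleton of such a monitor-and-reschedule cycle is sketched below: an initial static schedule is rebuilt for the still-pending tasks whenever the observed speed of a resource drifts too far from the value assumed when the schedule was made. The greedy scheduling rule, the drift threshold, and the host names are assumptions, not the AWS algorithm itself.

```python
# Monitor-and-reschedule skeleton; the greedy rule and threshold are assumed.

def static_schedule(tasks, speeds):
    """Greedy placeholder: send each task to the host that frees up first."""
    finish = {h: 0.0 for h in speeds}
    plan = {}
    for task, work in tasks.items():
        host = min(speeds, key=lambda h: finish[h] + work / speeds[h])
        finish[host] += work / speeds[host]
        plan[task] = host
    return plan

def reschedule_if_needed(plan, pending, assumed, observed, tolerance=0.3):
    """Rebuild the plan for pending tasks if any host speed drifted too far."""
    drifted = any(abs(observed[h] - assumed[h]) / assumed[h] > tolerance
                  for h in assumed)
    if drifted:
        return static_schedule(pending, observed)
    return {t: plan[t] for t in pending}

if __name__ == "__main__":
    speeds = {"hostA": 10.0, "hostB": 8.0}            # GFLOP/s assumed at schedule time
    tasks = {"t1": 40, "t2": 80, "t3": 20, "t4": 60}  # GFLOP of work per task
    plan = static_schedule(tasks, speeds)
    observed = {"hostA": 4.0, "hostB": 8.0}           # hostA became loaded
    pending = {"t3": 20, "t4": 60}
    print(reschedule_if_needed(plan, pending, speeds, observed))
```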

  6. Quantum Computing Resource Estimate of Molecular Energy Simulation

    CERN Document Server

    Whitfield, James D; Aspuru-Guzik, Alán

    2010-01-01

    Over the last century, ingenious physical and mathematical insights paired with rapidly advancing technology have allowed the field of quantum chemistry to advance dramatically. However, efficient methods for the exact simulation of quantum systems on classical computers do not exist. The present paper reports an extension of one of the authors' previous work [Aspuru-Guzik et al., Science 309, p. 1704 (2005)] where it was shown that the chemical Hamiltonian can be efficiently simulated using a quantum computer. In particular, we report in detail how a set of molecular integrals can be used to create a quantum circuit that allows the energy of a molecular system with fixed nuclear geometry to be extracted using the phase estimation algorithm proposed by Abrams and Lloyd [Phys. Rev. Lett. 83, p. 5165 (1999)]. We extend several known results related to this idea and present numerical examples of the state preparation procedure required in the algorithm. With future quantum devices in mind, we provide a compl...

  7. MCPLOTS. A particle physics resource based on volunteer computing

    Energy Technology Data Exchange (ETDEWEB)

    Karneyeu, A. [Joint Inst. for Nuclear Research, Moscow (Russian Federation); Mijovic, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Irfu/SPP, CEA-Saclay, Gif-sur-Yvette (France); Prestel, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Lund Univ. (Sweden). Dept. of Astronomy and Theoretical Physics; Skands, P.Z. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2013-07-15

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform.

  8. MCPLOTS: a particle physics resource based on volunteer computing

    CERN Document Server

    Karneyeu, A; Prestel, S; Skands, P Z

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME platform.

  9. The gap between research and practice: a replication study on the HR professionals' beliefs about effective human resource practices

    NARCIS (Netherlands)

    Sanders, Karin; van Riemsdijk, Maarten; Groen, B.A.C.

    2008-01-01

    In 2002 Rynes, Colbert and Brown asked human resource (HR) professionals to what extent they agreed with various HR research findings. Responses from 959 American participants showed that there are large discrepancies between research findings and practitioners' beliefs about effective human resource practices.

  10. A Resource Scheduling Strategy in Cloud Computing Based on Multi-agent Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Wuxue Jiang

    2013-11-01

    Full Text Available Resource scheduling strategies in cloud computing are used either to improve system operating efficiency or to improve user satisfaction. This paper presents an integrated scheduling strategy considering both resource credibility and user satisfaction. It takes user satisfaction as the objective function, treats resource credibility as a part of user satisfaction, and realizes optimal scheduling by using a genetic algorithm. We subsequently integrate this scheduling strategy into agents and propose a cloud computing system architecture based on multi-agent technology. The numerical results show that this scheduling strategy improves not only the system operating efficiency but also user satisfaction.
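
    A toy genetic-algorithm sketch in this spirit is shown below: a chromosome maps each task to a resource, and the fitness mixes a resource-credibility term (standing in for user satisfaction) with makespan. The encoding, weights, and data are illustrative assumptions rather than the paper's exact formulation.

```python
import random

# Toy GA for task-to-resource mapping; fitness mixes assumed credibility
# scores (a stand-in for user satisfaction) with makespan.

TASK_LENGTH = [4, 8, 2, 6]            # abstract work units per task
RESOURCE_SPEED = [2.0, 1.0, 4.0]      # work units per second
CREDIBILITY = [0.9, 0.6, 0.8]         # assumed credibility per resource

def fitness(chromosome):
    finish = [0.0] * len(RESOURCE_SPEED)
    cred = 0.0
    for task, res in enumerate(chromosome):
        finish[res] += TASK_LENGTH[task] / RESOURCE_SPEED[res]
        cred += CREDIBILITY[res]
    return cred / len(chromosome) - 0.1 * max(finish)

def evolve(pop_size=20, generations=50, mutation=0.1):
    pop = [[random.randrange(len(RESOURCE_SPEED)) for _ in TASK_LENGTH]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TASK_LENGTH))
            child = a[:cut] + b[cut:]
            if random.random() < mutation:
                child[random.randrange(len(child))] = random.randrange(len(RESOURCE_SPEED))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())
```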

  11. Distributed Computation Resources for Earth System Grid Federation (ESGF)

    Science.gov (United States)

    Duffy, D.; Doutriaux, C.; Williams, D. N.

    2014-12-01

    The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, has published a series of papers in their Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts. They studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting total data of approximately five petabytes are stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that data analysis capabilities currently in use will be inadequate to allow for the necessary science to be done with AR6 data—the data will just be too big. A major paradigm shift from downloading data to local systems to perform data analytics must evolve to moving the analysis routines to the data and performing these computations on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage proximal computational

  12. Mobile Cloud Computing: Resource Discovery, Session Connectivity and Other Open Issues

    NARCIS (Netherlands)

    Schüring, Markus; Karagiannis, Georgios

    2011-01-01

    Abstract—Cloud computing can be considered as a model that provides network access to a shared pool of resources, such as storage and computing power, which can be rapidly provisioned and released with minimal management effort. This paper describes a research activity in the area of mobile cloud computing.

  13. Young Children's Exploration of Semiotic Resources during Unofficial Computer Activities in the Classroom

    Science.gov (United States)

    Bjorkvall, Anders; Engblom, Charlotte

    2010-01-01

    The article describes and discusses the learning potential of unofficial techno-literacy activities in the classroom with regards to Swedish 7-8-year-olds' exploration of semiotic resources when interacting with computers. In classroom contexts where every child works with his or her own computer, such activities tend to take up a substantial…

  14. The portability of computer-related educational resources : summary and directions for further research

    NARCIS (Netherlands)

    De Diana, Italo; Collis, Betty A.

    1990-01-01

    In this Special Issue of the Journal of Research on Computing in Education, the portability of computer-related educational resources has been examined by a number of researchers and practitioners, reflecting various backgrounds, cultures, and experiences. A first iteration of a general model of fac

  15. Orchestrating the XO Computer with Digital and Conventional Resources to Teach Mathematics

    Science.gov (United States)

    Díaz, A.; Nussbaum, M.; Varela, I.

    2015-01-01

    Recent research has suggested that simply providing each child with a computer does not lead to an improvement in learning. Given that dozens of countries across the world are purchasing computers for their students, we ask which elements are necessary to improve learning when introducing digital resources into the classroom. Understood the…

  17. A Comparative Study on Resource Allocation Policies in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Bhavani B H

    2015-11-01

    Full Text Available Cloud computing is one of the latest models used for sharing a pool of resources such as CPU, memory, network bandwidth, hard drives, etc. over the Internet. These resources are requested by the cloud user and are used on a rented basis, just like electricity, water, LPG, etc. When requests are made by the cloud user, allocation has to be done by the cloud service provider. With the limited amount of resources available, resource allocation becomes a challenging task for the cloud service provider as the resources have to be virtualized and allocated. These resources can be allocated dynamically or statically based on the type of request made by the cloud user and depending on the application. In this paper, a survey of both static and dynamic allocation techniques is presented, along with a comparison of the two.

  18. Monitoring of computing resource utilization of the ATLAS experiment

    CERN Document Server

    Rousseau, D; The ATLAS collaboration; Vukotic, I; Aidel, O; Schaffer, RD; Albrand, S

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  19. Monitoring of computing resource utilization of the ATLAS experiment

    Science.gov (United States)

    Rousseau, David; Dimitrov, Gancho; Vukotic, Ilija; Aidel, Osman; Schaffer, Rd; Albrand, Solveig

    2012-12-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  20. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models over standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  1. Resource pre-allocation algorithms for low-energy task scheduling of cloud computing

    Institute of Scientific and Technical Information of China (English)

    Xiaolong Xu; Lingling Cao; Xinheng Wang

    2016-01-01

    In order to lower the power consumption and improve the coefficient of resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on the "shut down the redundant, turn on the demanded" strategy. Firstly, a green cloud computing model is presented, abstracting the task scheduling problem to a virtual machine deployment issue via virtualization technology. Secondly, the future workload of the system needs to be predicted: a cubic exponential smoothing algorithm based on the conservative control (CESCC) strategy is proposed, combined with the current state and resource distribution of the system, in order to calculate the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. In order to reduce the power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy keep resource pre-allocation in step with demand, and improve the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize the utilization of resources, and greatly reduce the power consumption of cloud computing systems.
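
    The prediction step can be illustrated with textbook Brown (cubic) triple exponential smoothing, which is assumed here to approximate the paper's CESCC; the conservative-control adjustment is mimicked simply by never forecasting below the last observation. Alpha and the sample load history are illustrative.

```python
# Brown's triple (cubic) exponential smoothing used as an assumed stand-in
# for the CESCC prediction step; the floor at the last observation is a
# crude approximation of "conservative control".

def predict_next(history, alpha=0.5):
    s1 = s2 = s3 = history[0]
    for x in history:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - (10 - 8 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
    forecast = a + b + 0.5 * c            # one step ahead (m = 1)
    return max(forecast, history[-1])     # conservative: avoid under-provisioning

if __name__ == "__main__":
    loads = [40, 42, 45, 50, 53, 60, 64]  # e.g. requested VMs per period
    print(round(predict_next(loads), 1))
```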

  2. ARMS: An Agent-Based Resource Management System for Grid Computing

    Directory of Open Access Journals (Sweden)

    Junwei Cao

    2002-01-01

    Full Text Available Resource management is an important component of a grid computing infrastructure. The scalability and adaptability of such systems are two key challenges that must be addressed. In this work an agent-based resource management system, ARMS, is implemented for grid computing. ARMS utilises the performance prediction techniques of the PACE toolkit to provide quantitative data regarding the performance of complex applications running on a local grid resource. At the meta-level, a hierarchy of homogeneous agents are used to provide a scalable and adaptable abstraction of the system architecture. Each agent is able to cooperate with other agents and thereby provide service advertisement and discovery for the scheduling of applications that need to utilise grid resources. A case study with corresponding experimental results is included to demonstrate the efficiency of the resource management and scheduling system.

  3. PDBparam: Online Resource for Computing Structural Parameters of Proteins.

    Science.gov (United States)

    Nagarajan, R; Archana, A; Thangakani, A Mary; Jemimah, S; Velmurugan, D; Gromiha, M Michael

    2016-01-01

    Understanding the structure-function relationship in proteins is a longstanding goal in molecular and computational biology. The development of structure-based parameters has helped to relate the structure with the function of a protein. Although several structural features have been reported in the literature, no single server can calculate a wide-ranging set of structure-based features from protein three-dimensional structures. In this work, we have developed a web-based tool, PDBparam, for computing more than 50 structure-based features for any given protein structure. These features are classified into four major categories: (i) interresidue interactions, which include short-, medium-, and long-range interactions, contact order, long-range order, total contact distance, contact number, and multiple contact index, (ii) secondary structure propensities such as α-helical propensity, β-sheet propensity, and propensity of amino acids to exist at various positions of α-helix and amino acid compositions in high B-value regions, (iii) physicochemical properties containing ionic interactions, hydrogen bond interactions, hydrophobic interactions, disulfide interactions, aromatic interactions, surrounding hydrophobicity, and buriedness, and (iv) identification of binding site residues in protein-protein, protein-nucleic acid, and protein-ligand complexes. The server can be freely accessed at http://www.iitm.ac.in/bioinfo/pdbparam/. We suggest the use of PDBparam as an effective tool for analyzing protein structures.
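
    One of the listed inter-residue parameters, relative contact order, can be computed directly from coordinates. The sketch below uses C-alpha atoms and an 8 Å cutoff, which are common choices in the literature; PDBparam's exact definition may differ.

```python
import math

# Relative contact order from C-alpha coordinates; cutoff and atom choice
# are common conventions, not necessarily the server's exact definition.

def relative_contact_order(ca_coords, cutoff=8.0):
    """ca_coords: list of (x, y, z) tuples, one per residue, in sequence order."""
    n = len(ca_coords)
    contacts, separation_sum = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy, dz = (ca_coords[i][k] - ca_coords[j][k] for k in range(3))
            if math.sqrt(dx * dx + dy * dy + dz * dz) <= cutoff:
                contacts += 1
                separation_sum += j - i
    if contacts == 0:
        return 0.0
    return separation_sum / (contacts * n)

if __name__ == "__main__":
    # Three fake residues ~3.8 A apart along x, just to exercise the code.
    print(relative_contact_order([(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]))
```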

  4. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  5. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2010-01-01

    GoeGrid is a grid resource center located in Goettingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center will be presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster will be detailed. The benefits are an efficient use of computer and manpower resources. Further interdisciplinary projects include commonly organized courses for students of all fields to support education on grid computing.

  6. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research work horse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, researchers can count on Jazz to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  7. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

    Full Text Available Mobile cloud computing (MCC), which combines mobile computing and the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When offloading a complicated multitask application to the MCC environment, each task executes individually according to its own computation, storage, and bandwidth requirements. Due to the user's mobility, the provided resources exhibit different performance metrics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, how to choose the appropriate offload destination and orchestrate the resources for multiple tasks is a challenging problem. This paper realizes programmable resource provisioning for heterogeneous energy-constrained computing environments, where a software-defined controller is responsible for resource orchestration, offloading, and migration. The resource orchestration is formulated as a multiobjective optimization problem over the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain approximately optimal solutions. Simulation results show that the solutions for all of the studied cases come close to the Pareto optimum and surpass the comparison algorithm in approximation, coverage, and execution time.
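
    A minimal particle-swarm sketch of this kind of scalarized multi-objective search is given below; the energy, cost, and availability terms and their weights are invented for illustration and do not reproduce the paper's model.

```python
import random

# Basic PSO minimizing an assumed weighted sum of energy, cost, and
# unavailability over a vector of per-task resource shares in [0, 1].

DIM = 4                                   # e.g. shares for four tasks

def objective(x):
    energy = sum(v * v for v in x)                   # grows with allocated share
    cost = sum(0.5 * v for v in x)
    unavailability = sum((1.0 - v) ** 2 for v in x)  # penalise starving a task
    return 0.4 * energy + 0.3 * cost + 0.3 * unavailability

def pso(particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    xs = [[random.random() for _ in range(DIM)] for _ in range(particles)]
    vs = [[0.0] * DIM for _ in range(particles)]
    pbest = [list(x) for x in xs]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(particles):
            for d in range(DIM):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], 0.0), 1.0)
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = min(pbest, key=objective)
    return gbest, objective(gbest)

if __name__ == "__main__":
    best, value = pso()
    print([round(v, 3) for v in best], round(value, 3))
```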

  8. A Novel Approach for Resource Discovery using Random Projection on Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    M.N.Faruk

    2013-04-01

    Full Text Available Cloud computing offers different types of utilities to the IT industry. Generally, resources are scattered across clouds, so the ability to find which resources are available at each cloud must be provided; this is again an important criterion in distributed systems. This paper investigates the problem of locating resources that are multivariate in nature, and of locating the relevant dimensions of resources available at the same cloud. It also applies a random projection on each cloud to discover the possible resources at each iteration, with the outcome of each iteration updated in a collision matrix. All the discovered elements are updated at the management fabric. The paper also describes the feasibility of discovering the different types of resources available at each cloud.
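
    The core idea can be sketched as projecting multi-attribute resource descriptors through a random Gaussian matrix and matching a query by distance in the reduced space; the attributes, dimensions, and the nearest-neighbour matching rule below are illustrative assumptions, and the paper's collision-matrix bookkeeping is not reproduced.

```python
import math
import random

# Random projection of resource descriptors followed by nearest-neighbour
# matching of a query; data and dimensions are illustrative.

def random_matrix(rows, cols, seed=42):
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0 / math.sqrt(rows)) for _ in range(cols)]
            for _ in range(rows)]

def project(vector, matrix):
    return [sum(row[j] * vector[j] for j in range(len(vector))) for row in matrix]

def discover(query, resources, matrix):
    """Return the resource whose projection is closest to the projected query."""
    pq = project(query, matrix)
    def dist(item):
        pv = project(item[1], matrix)
        return sum((a - b) ** 2 for a, b in zip(pq, pv))
    return min(resources.items(), key=dist)[0]

if __name__ == "__main__":
    # Attributes: [cpu cores, memory GB, disk TB, bandwidth Gbps, gpu count]
    resources = {
        "cloud-A": [16, 64, 2, 10, 0],
        "cloud-B": [64, 256, 10, 40, 4],
    }
    matrix = random_matrix(3, 5)          # project 5 attributes down to 3
    print(discover([60, 200, 8, 30, 2], resources, matrix))
```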

  9. PhoenixCloud: Provisioning Resources for Heterogeneous Workloads in Cloud Computing

    CERN Document Server

    Zhan, Jianfeng; Shi, Weisong; Gong, Shimin; Zang, Xiutao

    2010-01-01

    As more and more service providers choose Cloud platforms, which are provided by third-party resource providers, resource providers need to provision resources for heterogeneous workloads in different Cloud scenarios. Taking into account the dramatic differences between heterogeneous workloads, can we coordinately provision resources for heterogeneous workloads in Cloud computing? In this paper we focus on this important issue, which few previous works have investigated. Our contributions are threefold: (1) we respectively propose a coordinated resource provisioning solution for heterogeneous workloads in two typical Cloud scenarios: first, a large organization operates a private Cloud for two heterogeneous workloads; second, a large organization or two service providers running heterogeneous workloads revert to a public Cloud; (2) we build an agile system, PhoenixCloud, that enables a resource provider to create coordinated runtime environments on demand for heterogeneous workloads when they are consolidated on a C...

  10. Case study of an application of computer mapping in oil-shale resource mapping

    Energy Technology Data Exchange (ETDEWEB)

    Davis, F.G.F. Jr.; Smith, J.W.

    1979-01-01

    The Laramie Energy Technology Center, U.S. Department of Energy, is responsible for evaluating the resources of potential oil and the deposit characteristics of oil shales of the Green River Formation in Colorado, Utah, and Wyoming. While the total oil shale resource represents perhaps 2 trillion barrels of oil, only parts of this total are suitable for any particular development process. To evaluate the resource according to deposit characteristics, a computer system for making resource calculations and geological maps has been established. The system generates resource tables where the calculations have been performed over user-defined geological intervals. The system also has the capability of making area calculations and generating resource maps of geological quality. The graphics package that generates the maps uses corehole assay data and digitized map data. The generated maps may include the following features: selected drainages, towns, political boundaries, township and section surveys, and corehole locations. The maps are then generated according to user-defined scales.

  11. Categorization of Computing Education Resources into the ACM Computing Classification System

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yinlin [Virginia Polytechnic Institute and State University (Virginia Tech); Bogen, Paul Logasa [ORNL; Fox, Dr. Edward A. [Virginia Polytechnic Institute and State University (Virginia Tech); Hsieh, Dr. Haowei [University of Iowa; Cassel, Dr. Lillian N. [Villanova University

    2012-01-01

    The Ensemble Portal harvests resources from multiple heterogeneous federated collections. Managing these dynamically growing collections requires an automatic mechanism to categorize records into corresponding topics. We propose an approach that uses existing ACM DL metadata to build classifiers for harvested resources in the Ensemble project. We also present our experience of utilizing the Amazon Mechanical Turk platform to build ground-truth training data sets from Ensemble collections.

  12. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  13. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Goettingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2011-01-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and manpower resources.

  14. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  15. Analysis on the Application of Cloud Computing to the Teaching Resources Sharing Construction in Colleges and Universities

    Institute of Scientific and Technical Information of China (English)

    LIU Mi

    2015-01-01

    Cloud computing is a new computing model. Its application to the informatization of higher education has become very popular. In this paper, the concept and characteristics of cloud computing are introduced, the current situation of teaching resource sharing and construction in colleges and universities is analyzed, and finally the influence of cloud computing on the construction of teaching information resources is discussed.

  16. ADAPTIVE MULTI-TENANCY POLICY FOR ENHANCING SERVICE LEVEL AGREEMENT THROUGH RESOURCE ALLOCATION IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    MasnidaHussin

    2016-07-01

    Full Text Available The appearance of infinite computing resources that are available on demand and fast enough to adapt to load surges makes Cloud computing a favourable service infrastructure in the IT market. A core feature of Cloud service infrastructures is the Service Level Agreement (SLA), which delivers seamless service at a high quality of service to the client. One of the challenges in the Cloud is providing heterogeneous computing services for the clients. With the increasing number of clients/tenants in the Cloud, unsatisfied agreements are becoming a critical factor. In this paper, we present an adaptive resource allocation policy that attempts to improve accountability in Cloud SLAs while aiming to enhance system performance. Specifically, our allocation incorporates dynamic matching of SLA rules to deal with diverse processing requirements from tenants. Explicitly, it reduces processing overheads while achieving better service agreement. Simulation experiments prove the efficacy of our allocation policy in satisfying the tenants and helping to improve reliable computing

  17. A survey on resource allocation in high performance distributed computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul; Khan, Samee Ullah; Bickler, Gage; Min-Allah, Nasro; Qureshi, Muhammad Bilal; Zhang, Limin; Yongji, Wang; Ghani, Nasir; Kolodziej, Joanna; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal; Li, Hongxiang; Wang, Lizhe; Chen, Dan; Rayes, Ammar

    2013-11-01

    An efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects are dedicated to large-scale distributed computing systems that have designed and developed resource allocation mechanisms with a variety of architectures and services. In our study, through analysis, a comprehensive survey for describing resource allocation in various HPCs is reported. The aim of the work is to aggregate under a joint framework, the existing solutions for HPC to provide a thorough analysis and characteristics of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role towards the performance improvement of all the HPCs classifications. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environment is required, which is one of the motivations of this survey. Moreover, we have classified the HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.

  18. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis-computational, algorithmic, and implementation-have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis."

  19. Dynamic provisioning of local and remote compute resources with OpenStack

    Science.gov (United States)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut fur Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.

  20. Improving the Distribution of Resource in a Grid Computing Network Services

    Directory of Open Access Journals (Sweden)

    Najmeh fillolahe

    2016-03-01

    Full Text Available In this study, the computational grid environment and a queuing-theory-based algorithm have been examined for the distribution of resources in a computational grid in which the resources are connected to each other in a star topology. By using the concepts of a queue system and of how subtasks are distributed, this algorithm distributes the workload across the existing resources so that tasks are executed in the shortest time. In the first phase of the algorithm, computing the time consumed by tasks and subtasks shows that the grid system generally reduces the average response time. In the second phase, however, because of the lack of load balance between resources and the uneven distribution of subtasks among them, establishing workload balance increases the tasks' response time in the long term. In the third phase, in addition to establishing workload balance, the average response time is also reduced. Thus, by using this algorithm, two important factors, efficiency and load balance, are enhanced as far as possible. The distribution of subtasks in the grid environment and the allocation of resources to them are likewise implemented with these two factors in mind.
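
    As an illustration of the queue-based idea (not the paper's exact algorithm), the sketch below models each grid resource as an M/M/1 queue and dispatches a subtask to the resource with the smallest expected response time; the service and arrival rates are hypothetical.

      # Sketch: pick the grid resource with the lowest expected M/M/1 response time.
      def mm1_response_time(service_rate, arrival_rate):
          """Expected response time W = 1 / (mu - lambda) for a stable M/M/1 queue."""
          if arrival_rate >= service_rate:
              return float("inf")  # unstable queue: avoid this resource
          return 1.0 / (service_rate - arrival_rate)

      def pick_resource(resources):
          """resources: list of (name, mu, lambda); return the name with minimal W."""
          return min(resources, key=lambda r: mm1_response_time(r[1], r[2]))[0]

      if __name__ == "__main__":
          resources = [("node-A", 10.0, 6.0), ("node-B", 8.0, 2.0), ("node-C", 12.0, 11.5)]
          print(pick_resource(resources))  # node-B: W = 1/6 s versus 1/4 s and 2 s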

  1. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multiple channels in learning activities promises extended benefits, moving from traditional learning-centred approaches to collaborative learning-centred approaches that emphasise pervasive learning anywhere and anytime. While compiling big data, cloud computing, and the semantic web into OLR offers a broader spectrum of…

  2. Scheduling real-time indivisible loads with special resource allocation requirements on cluster computing

    Directory of Open Access Journals (Sweden)

    Abeer Hamdy

    2010-10-01

    Full Text Available The paper presents a heuristic algorithm to schedule real-time indivisible loads, represented as a directed sequential task graph, on a computing cluster. One of the cluster nodes has some special resources (denoted the special node) that may be needed by one of the indivisible loads

  3. Towards Self Configured Multi-Agent Resource Allocation Framework for Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    M.N.Faruk

    2014-05-01

    Full Text Available Virtualization and cloud computing environments are constructed to assure numerous features such as improved flexibility and stabilized energy efficiency with minimal operating costs for the IT industry. However, highly unpredictable workloads demand quality-of-service assurance while promising efficient resource utilization. To avoid breaching SLAs (Service-Level Agreements) or leaving resources underutilized, resource allocations in a virtual environment must be tailored continuously during execution to the dynamic application workloads. In this work, we describe a hybrid, self-configured resource allocation model for cloud environments based on dynamic application workload models. We also describe a comprehensive setup of a representative simulated enterprise application, the new Virtenterprise_Cloudapp benchmark, deployed on a dynamic virtualized cloud platform.

  4. Allocating Tactical High-Performance Computer (HPC) Resources to Offloaded Computation in Battlefield Scenarios

    Science.gov (United States)

    2013-12-01

    Offloading solutions such as Cuckoo (12), MAUI (13), COMET (14), and ThinkAir (15) offload applications via Wi-Fi or 3G networks to servers or...

  5. Current status and prospects of computational resources for natural product dereplication: a review.

    Science.gov (United States)

    Mohamed, Ahmed; Nguyen, Canh Hao; Mamitsuka, Hiroshi

    2016-03-01

    Research in natural products has always enhanced drug discovery by providing new and unique chemical compounds. Recently, however, drug discovery from natural products has been slowed down by the increasing chance of re-isolating known compounds. Rapid identification of previously isolated compounds in an automated manner, called dereplication, steers researchers toward novel findings, thereby reducing the time and effort for identifying new drug leads. Dereplication identifies compounds by comparing processed experimental data with those of known compounds, and so diverse computational resources such as databases and tools to process and compare compound data are necessary. Automating the dereplication process through the integration of computational resources has always been an aspired goal of natural product researchers. To increase the utilization of current computational resources for natural products, we first provide an overview of the dereplication process, and then list useful resources, categorizing them into databases, methods, and software tools, and further explaining them from a dereplication perspective. Finally, we discuss the current challenges to automating dereplication and proposed solutions.

  6. An Extensible Scientific Computing Resources Integration Framework Based on Grid Service

    Science.gov (United States)

    Cui, Binge; Chen, Xin; Song, Pingjian; Liu, Rongjie

    Scientific computing resources (e.g., components, dynamic link libraries, etc.) are very valuable assets for scientific research. However, due to historical reasons, most computing resources can't be shared with other people. The emergence of Grid computing provides a turning point for solving this problem. Legacy applications can be abstracted and encapsulated into Grid services, and they may be found and invoked on the Web using SOAP messages. The Grid service is loosely coupled with the external JAR or DLL, which builds a bridge from users to computing resources. We defined an XML schema to describe the functions and interfaces of the applications. This information can be acquired by users by invoking the "getCapabilities" operation of the Grid service. We also proposed the concept of a class pool to eliminate memory leaks when invoking the external JARs using reflection. The experiment shows that the class pool not only avoids PermGen space waste and Tomcat server exceptions, but also significantly improves application speed. The integration framework has been implemented successfully in a real project.

  7. Measuring the impact of computer resource quality on the software development process and product

    Science.gov (United States)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.

  8. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    Energy Technology Data Exchange (ETDEWEB)

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J.; Schuster, Heiko; Ternette, Nicola; Alpizar, Adan; Schittenhelm, Ralf B.; Ramarathinam, Sri Harsha; Lindestam-Arlehamn, Cecilia S.; Koh, Ching Chiek; Gillet, Ludovic; Rabsteyn, Armin; Navarro, Pedro; Kim, Sangtae; Lam, Henry; Sturm, Theo; Marcilla, Miguel; Sette, Alessandro; Campbell, David; Deutsch, Eric W.; Moritz, Robert L.; Purcell, Anthony; Rammensee, Hans-Georg; Stevanovic, Stevan; Aebersold, Ruedi

    2015-07-08

    We present a novel proteomics-based workflow and an open source data and computational resource for reproducibly identifying and quantifying HLA-associated peptides at high-throughput. The provided resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra and the analysis of quantitative digital maps of HLA peptidomes generated by SWATH mass spectrometry (MS). This is the first community-based study towards the development of a robust platform for the reproducible and quantitative measurement of HLA peptidomes, an essential step towards the design of efficient immunotherapies.

  9. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, if data copies are located close to the clients. Despite its advantages, replication is not a straightforward technique to apply, and

  10. A Combined Computational and Experimental Study on the Structure-Regulation Relationships of Putative Mammalian DNA Replication Initiator GINS

    Institute of Scientific and Technical Information of China (English)

    Reiko Hayashi; Takako Arauchi; Moe Tategu; Yuya Goto; Kenichi Yoshida

    2006-01-01

    GINS, a heterotetramer of SLD5, PSF1, PSF2, and PSF3 proteins, is an emerging chromatin factor recognized to be involved in the initiation and elongation steps of DNA replication. Although the yeast and Xenopus GINS genes are well documented, their orthologous genes in higher eukaryotes are not fully characterized. In this study, we report the genomic structure and transcriptional regulation of mammalian GINS genes. Serum stimulation increased the GINS mRNA levels in human cells. Reporter gene assays using putative GINS promoter sequences revealed that the expression of mammalian GINS is regulated by 17β-estradiol-stimulated estrogen receptor α, and human PSF3 acts as a gene responsive to the transcription factor E2F1. The goal of this study is to present the current data so as to encourage further work in the field of GINS gene regulation and functions in mammalian cells.

  11. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented as well as some of the specific material developed for some of the projects.

  12. Replication data collection highlights value in diversity of replication attempts

    Science.gov (United States)

    DeSoto, K. Andrew; Schweinsberg, Martin

    2017-01-01

    Researchers agree that replicability and reproducibility are key aspects of science. A collection of Data Descriptors published in Scientific Data presents data obtained in the process of attempting to replicate previously published research. These new replication data describe published and unpublished projects. The different papers in this collection highlight the many ways that scientific replications can be conducted, and they reveal the benefits and challenges of crucial replication research. The organizers of this collection encourage scientists to reuse the data contained in the collection for their own work, and also believe that these replication examples can serve as educational resources for students, early-career researchers, and experienced scientists alike who are interested in learning more about the process of replication. PMID:28291224

  13. Total knee arthroplasty with computer-assisted navigation more closely replicates normal knee biomechanics than conventional surgery.

    Science.gov (United States)

    McClelland, Jodie A; Webster, Kate E; Ramteke, Alankar A; Feller, Julian A

    2017-06-01

    Computer-assisted navigation in total knee arthroplasty (TKA) reduces variability and may improve accuracy in the postoperative static alignment. The effect of navigation on alignment and biomechanics during more dynamic movements has not been investigated. This study compared knee biomechanics during level walking of 121 participants: 39 with conventional TKA, 42 with computer-assisted navigation TKA and 40 unimpaired control participants. Standing lower-limb alignment was significantly closer to ideal in participants with navigation TKA. During gait, when differences in walking speed were accounted for, participants with conventional TKA had less knee flexion during stance and swing than controls (P < ...) and greater knee adduction moments than controls (P < ...). Overall, the gait biomechanics of computer-assisted navigation TKA patients were closer to those of controls than were those of patients with conventional TKA. Computer-assisted navigation TKA may restore biomechanics during walking that are closer to normal than conventional TKA. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
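
    The optimization step described above can be illustrated by the following conceptual sketch (not the authors' code): a Metropolis Monte Carlo that reassigns tasks to nodes, accepting moves that lower a latency-like overload cost and worse moves with probability exp(-dE/T); node capacities, task loads and the temperature values are hypothetical.

      # Sketch: Metropolis Monte Carlo assignment of tasks to nodes at temperature T.
      import math, random

      def cost(assignment, loads, capacity):
          """Sum of per-node overloads, used as a proxy for global latency."""
          usage = {}
          for task, node in enumerate(assignment):
              usage[node] = usage.get(node, 0.0) + loads[task]
          return sum(max(0.0, u - capacity) for u in usage.values())

      def metropolis(loads, n_nodes, capacity, T, steps=10000, seed=0):
          rng = random.Random(seed)
          assignment = [rng.randrange(n_nodes) for _ in loads]
          e = cost(assignment, loads, capacity)
          for _ in range(steps):
              task, new_node = rng.randrange(len(loads)), rng.randrange(n_nodes)
              old_node = assignment[task]
              assignment[task] = new_node
              e_new = cost(assignment, loads, capacity)
              if e_new <= e or rng.random() < math.exp(-(e_new - e) / T):
                  e = e_new                    # accept the move
              else:
                  assignment[task] = old_node  # reject and roll back
          return assignment, e

      if __name__ == "__main__":
          rng = random.Random(1)
          loads = [rng.uniform(0.1, 1.0) for _ in range(60)]
          for T in (0.01, 1.0):  # low T ~ near-optimal, high T ~ suboptimal assignment
              _, e = metropolis(loads, n_nodes=10, capacity=3.0, T=T)
              print(f"T={T}: residual overload {e:.3f}")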

  15. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    OpenAIRE

    Steponas Jonušauskas; Agota Giedrė Raišienė

    2011-01-01

    Purpose of the article is to theoretically and practically analyze the features of informal computer based communication in the context of organization’s technological resources. Methodology—meta analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work related information, coordination of team activities, spread of organizational culture and feeling of interdependence and affinity. Also, informal communication wid...

  16. On state-dependant sampling for nonlinear controlled systems sharing limited computational resources

    OpenAIRE

    Alamir, Mazen

    2007-01-01

    21 pages. Submitted to the journal "IEEE Transactions on Automatic Control"; international audience; In this paper, a framework for dynamic monitoring of sampling periods for nonlinear controlled systems is proposed. This framework is particularly adapted to the context of controlled systems sharing limited computational resources. The proposed scheme can be used in a cascaded structure with any feedback scheduling design. Illustrative examples are given to assess the efficiency of the proposed fram...

  17. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

    Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for the Japanese language, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing how important it is to learn collocational relations in a foreign language, we examine their coverage in the various learners' resources for the Japanese language. We particularly concentrate on a few collocations at the beginner's level, where we demonstrate their treatment across the various resources. Special attention is paid to what is referred to as unpredictable collocations, which carry a bigger foreign-language learning burden than the predictable ones.

  18. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    Energy Technology Data Exchange (ETDEWEB)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
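
    The idea of replicas with different resolutions can be illustrated as follows (this is a generic sketch, not the patented method): lower-resolution replicas of an array-valued file are produced by keeping a sub-set of data elements (striding) and fewer bits per value; the replica specification standing in for the "semantic information" is hypothetical.

      # Sketch: generate multiple replicas of array data at decreasing resolution.
      import numpy as np

      def make_replicas(data, specs):
          """specs: list of (stride, dtype) pairs, one replica per pair."""
          replicas = []
          for stride, dtype in specs:
              # Keep every stride-th element and store it with fewer bits per value.
              replicas.append(np.ascontiguousarray(data[::stride]).astype(dtype))
          return replicas

      if __name__ == "__main__":
          full = np.random.rand(1_000_000).astype(np.float64)   # the "complete file"
          specs = [(1, np.float32), (10, np.float32), (100, np.float16)]
          for r in make_replicas(full, specs):
              print(r.size, r.dtype, r.nbytes, "bytes")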

  19. A Dynamic Resource Allocation Method for Parallel Data Processing in Cloud Computing

    Directory of Open Access Journals (Sweden)

    V. V. Kumar

    2012-01-01

    Full Text Available Problem statement: One of the cloud services, Infrastructure as a Service (IaaS), provides compute resources on demand for various applications like parallel data processing. The compute resources offered in the cloud are extremely dynamic and probably heterogeneous. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during the job execution. However, the current algorithms do not consider resource overload or underutilization during the job execution. In this study, we have focused on increasing the efficacy of the scheduling algorithm for real-time cloud computing services. Approach: Our algorithm utilizes the turnaround time utility efficiently by differentiating it into a gain function and a loss function for a single task. The algorithm also assigns high priority to tasks that complete early and lower priority to aborted tasks or deadline issues of real-time tasks. Results: The algorithm has been implemented with both preemptive and non-preemptive methods. The experimental results show that it outperforms the existing utility-based scheduling algorithms, and we also compare its performance under both preemptive and non-preemptive scheduling methods. Conclusion: Hence, a novel turnaround time utility scheduling approach which focuses on both the high priority and the low priority tasks that arrive for scheduling is proposed.
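
    A hedged sketch of a turnaround-time utility of the kind described above is given below; the paper's exact gain and loss functions are not stated in the abstract, so the linear forms and all numeric parameters here are assumptions.

      # Sketch: utility = gain for early completion, loss for missing the deadline,
      # then order tasks by the utility of their estimated turnaround times.
      def utility(turnaround, deadline, gain_rate=1.0, loss_rate=2.0):
          """Positive gain for finishing early, negative loss for finishing late."""
          if turnaround <= deadline:
              return gain_rate * (deadline - turnaround)   # earlier completion -> larger gain
          return -loss_rate * (turnaround - deadline)      # late completion -> growing loss

      def schedule_order(tasks, estimate):
          """Order tasks by the utility their estimated turnaround would yield (highest first)."""
          return sorted(tasks, key=lambda t: utility(estimate(t), t["deadline"]), reverse=True)

      if __name__ == "__main__":
          tasks = [{"id": 1, "deadline": 10.0, "work": 4.0},
                   {"id": 2, "deadline": 5.0, "work": 6.0},
                   {"id": 3, "deadline": 8.0, "work": 2.0}]
          order = schedule_order(tasks, estimate=lambda t: t["work"] * 1.5)
          print([t["id"] for t in order])  # [3, 1, 2]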

  20. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv; Jayaraman, Prem Prakash; Kolodziej, Joanna; Balaji, Pavan; Zeadally, Sherali; Malluhi, Qutaibah Marwan; Tziritas, Nikos; Vishnu, Abhinav; Khan, Samee U.; Zomaya, Albert

    2014-06-06

    In a cloud computing paradigm, energy-efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy-efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaptation policy, objective function, allocation method, allocation operation, and interoperability.

  1. Computing a Synthetic Chronic Psychosocial Stress Measurement in Multiple Datasets and its Application in the Replication of G × E Interactions of the EBF1 Gene.

    Science.gov (United States)

    Singh, Abanish; Babyak, Michael A; Brummett, Beverly H; Jiang, Rong; Watkins, Lana L; Barefoot, John C; Kraus, William E; Shah, Svati H; Siegler, Ilene C; Hauser, Elizabeth R; Williams, Redford B

    2015-09-01

    Chronic psychosocial stress adversely affects health and is associated with the development of disease [Williams, 2008]. Systematic epidemiological and genetic studies are needed to uncover genetic variants that interact with stress to modify the metabolic responses across the life cycle that are the proximal contributors to the development of cardiovascular disease and the precipitation of acute clinical events. Among the central challenges in the field are to perform and replicate gene-by-environment (G × E) studies. The challenge of measuring individual experience of psychosocial stress is magnified in this context. Although many research datasets exist that contain genotyping and disease-related data, measures of psychosocial stress are often either absent or vary substantially across studies. In this paper, we provide an algorithm to create a synthetic measure of chronic psychosocial stress across multiple datasets, applying a consistent criterion that uses proxy indicators of stress components. We validated the computed scores of chronic psychosocial stress by observing moderately strong and significant correlations with self-rated chronic psychosocial stress in the Multi-Ethnic Study of Atherosclerosis Cohort (Rho = 0.23, P < ...). We further demonstrated the utility of the computed chronic psychosocial stress variable by providing three additional replications of our previous finding of gene-by-stress interaction with central obesity traits [Singh et al., 2015].
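
    For illustration only, the following generic sketch builds a synthetic score from proxy indicators in the spirit of the approach described above; the indicator names, cut-offs and scoring rule are hypothetical and not taken from the paper.

      # Sketch: a synthetic composite score counting hypothetical binary proxy indicators.
      def synthetic_stress_score(record, indicators):
          """Sum binary proxy indicators; missing fields simply do not contribute."""
          score = 0
          for field, is_stressed in indicators.items():
              value = record.get(field)
              if value is not None and is_stressed(value):
                  score += 1
          return score

      if __name__ == "__main__":
          # Hypothetical proxy indicators of chronic psychosocial stress components.
          indicators = {
              "household_income": lambda v: v < 20000,              # financial strain
              "social_support":   lambda v: v <= 2,                 # low support (1-5 scale)
              "marital_status":   lambda v: v == "divorced_or_widowed",
          }
          person = {"household_income": 15000, "social_support": 4, "marital_status": "married"}
          print(synthetic_stress_score(person, indicators))  # -> 1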

  2. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    Science.gov (United States)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-07-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi-Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources.
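
    As a purely conceptual illustration of why block diagonality helps (this is not the operator-space reduction procedure used in the paper): if a Hamiltonian is block diagonal in a conserved symmetry sector, each block can be diagonalized on its own, and a block of dimension d needs only ceil(log2(d)) qubits instead of the qubits required for the full matrix.

      # Sketch: a block-diagonal Hermitian matrix can be handled block by block.
      import numpy as np
      from math import ceil, log2

      def qubits_needed(dim):
          return ceil(log2(dim))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          # Two Hermitian blocks standing in for two symmetry sectors of a Hamiltonian.
          a = rng.normal(size=(4, 4)); block_a = (a + a.T) / 2
          b = rng.normal(size=(4, 4)); block_b = (b + b.T) / 2
          full = np.block([[block_a, np.zeros((4, 4))],
                           [np.zeros((4, 4)), block_b]])

          # Diagonalizing the blocks separately reproduces the full spectrum ...
          spec_blocks = np.sort(np.concatenate([np.linalg.eigvalsh(block_a),
                                                np.linalg.eigvalsh(block_b)]))
          spec_full = np.sort(np.linalg.eigvalsh(full))
          assert np.allclose(spec_blocks, spec_full)

          # ... while each block fits into fewer qubits than the full Hamiltonian.
          print("full:", qubits_needed(full.shape[0]), "qubits;",
                "per block:", qubits_needed(block_a.shape[0]), "qubits")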

  3. Provable Data Possession of Resource-constrained Mobile Devices in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jian Yang

    2011-07-01

    Full Text Available Benefiting from cloud storage services, users can save the cost of buying expensive storage and application servers, as well as of deploying and maintaining applications. Meanwhile, they lose physical control of their data. So effective methods are needed to verify the correctness of the data stored at cloud servers, which are the research issues that Provable Data Possession (PDP) faces. The most important features of PDP are: (1) support for public verification an unlimited number of times; (2) support for dynamic data updates; (3) efficiency of storage space and computing. In mobile cloud computing, mobile end-users also need the PDP service. However, the computing workloads and storage burden of the client in existing PDP schemes are too heavy to be directly borne by resource-constrained mobile devices. To solve this problem, with the integration of trusted computing technology, this paper proposes a novel public PDP scheme, in which a trusted third-party agent (TPA) takes over most of the calculations from the mobile end-users. By using bilinear signatures and a Merkle hash tree (MHT), the scheme aggregates the verification tokens of the data file into one small signature to reduce the communication and storage burden. The MHT is also helpful for supporting dynamic data updates. In our framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform module (TPM) chips, and the required computing workload and storage space fit mobile devices. Our scheme realizes provably secure storage service for resource-constrained mobile devices in mobile cloud computing.
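
    The Merkle hash tree used for dynamic-update support in PDP-style schemes can be sketched as follows; this is the generic construction only, not the paper's full protocol with bilinear signatures or TPM-assisted key generation.

      # Sketch: compute the Merkle hash tree root over a list of file blocks.
      import hashlib

      def _h(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def merkle_root(blocks):
          """Hash each block, then repeatedly hash pairs of nodes up to a single root."""
          level = [_h(b) for b in blocks]
          if not level:
              return _h(b"")
          while len(level) > 1:
              if len(level) % 2 == 1:            # duplicate the last node on odd levels
                  level.append(level[-1])
              level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
          return level[0]

      if __name__ == "__main__":
          blocks = [b"block-0", b"block-1", b"block-2"]
          root = merkle_root(blocks)
          # A verifier holding only the root can detect any modification of a block.
          assert merkle_root([b"block-0", b"tampered", b"block-2"]) != root
          print(root.hex())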

  4. Interactive computer training to teach discrete-trial instruction to undergraduates and special educators in Brazil: A replication and extension.

    Science.gov (United States)

    Higbee, Thomas S; Aporta, Ana Paula; Resende, Alice; Nogueira, Mateus; Goyos, Celso; Pollard, Joy S

    2016-12-01

    Discrete-trial instruction (DTI) is a behavioral method of teaching young children with autism spectrum disorders (ASD) that has received a significant amount of research support. Because of a lack of qualified trainers in many areas of the world, researchers have recently begun to investigate alternative methods of training professionals to implement behavioral teaching procedures. One promising training method is interactive computer training, in which slides with recorded narration, video modeling, and embedded evaluation of content knowledge are used to teach a skill. In the present study, the effectiveness of interactive computer training developed by Pollard, Higbee, Akers, and Brodhead (2014), translated into Brazilian Portuguese, was evaluated with 4 university students (Study 1) and 4 special education teachers (Study 2). We evaluated the effectiveness of training on DTI skills during role-plays with research assistants (Study 1) and during DTI sessions with young children with ASD (Studies 1 and 2) using a multiple baseline design. All participants acquired DTI skills after interactive computer training, although 5 of 8 participants required some form of feedback to reach proficiency. Responding generalized to untaught teaching programs for all participants. We evaluated maintenance with the teachers in Study 2, and DTI skills were maintained with 3 of 4 participants.

  5. A novel agent based autonomous and service composition framework for cost optimization of resource provisioning in cloud computing

    Directory of Open Access Journals (Sweden)

    Aarti Singh

    2017-01-01

    Full Text Available A cloud computing environment offers a simplified, centralized platform of resources for use when needed at a low cost. One of the key functionalities of this type of computing is to allocate the resources on individual demand. However, with the expanding requirements of cloud users, the need for efficient resource allocation is also emerging. The main role of the service provider is to effectively distribute and share the resources, which would otherwise result in resource wastage. In addition to the user getting the appropriate service according to the request, the cost of the respective resource is also optimized. In order to surmount the mentioned shortcomings and perform optimized resource allocation, this research proposes a new Agent based Automated Service Composition (A2SC) algorithm comprising request processing and automated service composition phases; it is not only responsible for searching comprehensive services but also considers reducing the cost of virtual machines which are consumed by on-demand services only.

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  7. LHCb experience with LFC replication

    CERN Document Server

    Bonifazi, F; Perez, E D; D'Apice, A; dell'Agnello, L; Düllmann, D; Girone, M; Re, G L; Martelli, B; Peco, G; Ricci, P P; Sapunenko, V; Vagnoni, V; Vitlacil, D

    2008-01-01

    Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.

  8. LHCb experience with LFC replication

    CERN Document Server

    Carbone, Angelo; Dafonte Perez, Eva; D'Apice, Antimo; dell'Agnello, Luca; Duellmann, Dirk; Girone, Maria; Lo Re, Giuseppe; Martelli, Barbara; Peco, Gianluca; Ricci, Pier Paolo; Sapunenko, Vladimir; Vagnoni, Vincenzo; Vitlacil, Dejan

    2007-01-01

    Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.

  9. Power-Aware Resource Reconfiguration Using Genetic Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Li Deng

    2016-01-01

    Full Text Available Cloud computing enables scalable computation based on virtualization technology. However, current resource reallocation solutions seldom consider the stability of the virtual machine (VM) placement pattern. Varied application workloads lead to frequent resource reconfiguration requirements due to the repeated appearance of hot nodes. In this paper, several algorithms for VM placement (a multiobjective genetic algorithm (MOGA), a power-aware multiobjective genetic algorithm (pMOGA), and an enhanced power-aware multiobjective genetic algorithm (EpMOGA)) are presented to improve the stability of the VM placement pattern with less migration overhead. Energy consumption is also considered. A type-matching controller is designed to improve the evolution process. Nondominated sorting genetic algorithm II (NSGA-II) is used to select new generations during the evolution process. Our simulation results demonstrate that these algorithms all provide resource reallocation solutions with long stabilization times of nodes. pMOGA and EpMOGA also better balance stabilization and energy efficiency by adding the number of active nodes as one of the optimization objectives. The type-matching controller makes EpMOGA superior to pMOGA.
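
    A hedged sketch of the two kinds of objectives mentioned above for a candidate VM placement follows: stability (few migrations from the current placement) and energy (few active nodes), compared with the Pareto-dominance test used by NSGA-II-style selection. The exact objective set and encodings of MOGA/pMOGA/EpMOGA are not given in the abstract, so this is a generic stand-in.

      # Sketch: evaluate a candidate VM placement on (migrations, active nodes), both minimized.
      def placement_objectives(current, candidate):
          """current/candidate: dict vm_id -> node_id."""
          migrations = sum(1 for vm, node in candidate.items() if current.get(vm) != node)
          active_nodes = len(set(candidate.values()))
          return migrations, active_nodes

      def dominates(a, b):
          """Pareto dominance check as used by NSGA-II style selection."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      if __name__ == "__main__":
          current   = {"vm1": "n1", "vm2": "n1", "vm3": "n2"}
          candidate = {"vm1": "n1", "vm2": "n2", "vm3": "n2"}
          packed    = {"vm1": "n1", "vm2": "n1", "vm3": "n1"}
          print(placement_objectives(current, candidate))  # (1 migration, 2 active nodes)
          print(placement_objectives(current, packed))     # (1 migration, 1 active node)
          print(dominates(placement_objectives(current, packed),
                          placement_objectives(current, candidate)))  # True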

  10. Exploring Graphics Processing Unit (GPU) Resource Sharing Efficiency for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Teng Li

    2013-11-01

    Full Text Available The increasing incorporation of Graphics Processing Units (GPUs) as accelerators has been one of the forefront High Performance Computing (HPC) trends and provides unprecedented performance; however, the prevalent adoption of the Single-Program Multiple-Data (SPMD) programming model brings with it challenges of resource underutilization. In other words, under SPMD, every CPU needs GPU capability available to it. However, since CPUs generally outnumber GPUs, the asymmetric resource distribution gives rise to overall computing resource underutilization. In this paper, we propose to efficiently share the GPU under SPMD and formally define a series of GPU sharing scenarios. We provide performance-modeling analysis for each sharing scenario with accurate experimental validation. With the modeling basis, we further conduct experimental studies to explore potential GPU sharing efficiency improvements from multiple perspectives. Both further theoretical and experimental GPU sharing performance analysis and results are presented. Our results not only demonstrate the significant performance gain for SPMD programs with the proposed efficient GPU sharing, but also the further improved sharing efficiency with the optimization techniques based on our accurate modeling.

  11. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present results on the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction, and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  12. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Full Text Available Despite the wide utilization of cloud computing (e.g., for services, applications, and resources), some services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in network resources, and (3) the high network latency to a centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work we propose an architecture for IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm, based on the decision rules of a linearized decision tree with three conditions (service size, completion time, and VM capacity), for managing and delegating user requests in order to balance the workload. Moreover, we propose an algorithm to allocate resources to meet service level agreements (SLAs) and quality of service (QoS) requirements, as well as to optimize big-data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance the workload, improve resource allocation, optimize big-data distribution, and show better performance than other existing methods.
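
    A sketch of linearized decision rules over the three conditions named above (service size, completion time, VM capacity) is shown below; the thresholds and rule order are assumptions for illustration, not the paper's trained decision tree.

      # Sketch: delegate a request to fog or cloud with simple threshold rules.
      def delegate(service_size_mb, deadline_s, fog_vm_capacity_mb):
          """Return 'fog' for small, delay-sensitive requests that fit the fog VMs,
          otherwise fall back to the cloud."""
          if service_size_mb > fog_vm_capacity_mb:
              return "cloud"        # request does not fit the edge resources
          if deadline_s < 1.0:
              return "fog"          # delay-sensitive: avoid the latency to the cloud
          if service_size_mb < 100:
              return "fog"          # small job: keep it at the edge
          return "cloud"

      if __name__ == "__main__":
          print(delegate(50, 0.2, 512))     # fog
          print(delegate(800, 5.0, 512))    # cloud
          print(delegate(300, 5.0, 512))    # cloud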

  13. A Safety Resource Allocation Mechanism against Connection Fault for Vehicular Cloud Computing

    Directory of Open Access Journals (Sweden)

    Tianpeng Ye

    2016-01-01

    Full Text Available The Intelligent Transportation System (ITS) is becoming an important component of the smart city, working toward safer roads, better traffic control, and on-demand services by utilizing and processing the information collected from the sensors of vehicles and roadside infrastructure. In ITS, Vehicular Cloud Computing (VCC) is a novel technology balancing the requirements of complex services and the limited capability of on-board computers. However, the behaviors of the vehicles in VCC are dynamic, random, and complex. Thus, one of the key safety issues is the frequent disconnection between a vehicle and the Vehicular Cloud (VC) while the vehicle is computing for a service. More importantly, connection faults seriously disturb the normal services of VCC and impact the safety functions of the transportation system. In this paper, a safety resource allocation mechanism against connection faults in VCC is proposed, using a modified workflow with prediction capability. We first propose a probability model for vehicle movement which satisfies the high-dynamics and real-time requirements of VCC. We then propose a Prediction-based Reliability Maximization Algorithm (PRMA) to realize safety resource allocation for VCC. The evaluation shows that our mechanism can improve the reliability and guarantee the real-time performance of the VCC.

  14. Context-aware computing-based reducing cost of service method in resource discovery and interaction

    Institute of Scientific and Technical Information of China (English)

    TANG Shan-cheng; HOU Yi-bin

    2004-01-01

    Reducing the cost of service is an important goal for resource discovery and interaction technologies. The shortcomings of the transhipment method and the hibernation method are that they increase the holistic cost of service and slow down resource discovery, respectively. To overcome these shortcomings, a context-aware computing-based method is developed. This method first analyzes how devices use resource discovery and interaction technologies in order to identify types of context related to reducing the cost of service, and then chooses effective measures such as stopping broadcast and hibernation to reduce the cost of service according to the information supplied by the context, rather than the transhipment method's simple hibernations. The results of experiments indicate that under the worst condition this method overcomes the shortcomings of the transhipment method, makes the "poor" devices hibernate longer than the hibernation method so as to reduce the cost of service more effectively, and discovers resources faster than the hibernation method; under the best condition it is far better than the hibernation method in all aspects.

  15. Resources and Approaches for Teaching Quantitative and Computational Skills in the Geosciences and Allied Fields

    Science.gov (United States)

    Orr, C. H.; Mcfadden, R. R.; Manduca, C. A.; Kempler, L. A.

    2016-12-01

    Teaching with data, simulations, and models in the geosciences can increase many facets of student success in the classroom and in the workforce. Teaching undergraduates about programming and improving students' quantitative and computational skills expands their perception of Geoscience beyond field-based studies. Processing data and developing quantitative models are critically important for Geoscience students. Students need to be able to perform calculations, analyze data, create numerical models and visualizations, and more deeply understand complex systems, all essential aspects of modern science. These skills require students to have comfort and skill with languages and tools such as MATLAB. To achieve comfort and skill, computational and quantitative thinking must build over a 4-year degree program across courses and disciplines. However, in courses focused on Geoscience content it can be challenging to get students comfortable with using computational methods to answer Geoscience questions. To help bridge this gap, we have partnered with MathWorks to develop two workshops focused on collecting and developing strategies and resources to help faculty teach students to incorporate data, simulations, and models into the curriculum at the course and program levels. We brought together faculty members from the sciences, including Geoscience and allied fields, who teach computation and quantitative thinking skills using MATLAB to build a resource collection for teaching. These materials and the outcomes of the workshops are freely available on our website. The workshop outcomes include a collection of teaching activities, essays, and course descriptions that can help faculty incorporate computational skills at the course or program level. The teaching activities include in-class assignments, problem sets, labs, projects, and toolboxes. These activities range from programming assignments to creating and using models. The outcomes also include workshop

  16. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

    The general potential of computer games for teaching and learning is becoming widely recognized. In particular, within the application contexts of primary and lower secondary education, the relevance and value of computer games seem more accepted, and the possibility and willingness to incorporate computer games as a possible resource at the level of other educational resources seem more frequent. For some reason, however, applying computer games in processes of teaching and learning at the high school level seems an almost non-existent event. This paper reports on a study of incorporating

  17. Method to Reduce the Computational Intensity of Offshore Wind Energy Resource Assessments Using Cokriging

    Science.gov (United States)

    Dvorak, M. J.; Boucher, A.; Jacobson, M. Z.

    2009-12-01

    Wind energy represents the fastest growing renewable energy resource, sustaining double-digit growth for the past 10 years with approximately 94,000 MW installed by the end of 2007. Although winds over the ocean are generally stronger and often located closer to large urban electric load centers, offshore wind turbines represent about 1% of installed capacity. In order to evaluate the economic potential of an offshore wind resource, wind resource assessments typically involve running large mesoscale model simulations, validated with sparse in-situ meteorological station data. These simulations are computationally expensive, limiting their temporal coverage. Although a wealth of other wind data does exist (e.g. QuikSCAT satellite, SAR satellite, radar/SODAR wind profilers, and radiosondes), these data are often ignored or interpolated trivially because of their widely varying spatial and temporal resolution. A spatio-temporal cokriging approach with non-parametric covariances was developed to interpolate these empirical data and compare them with previously validated surface winds output by the PSU/NCAR MM5 for coastal California. The spatio-temporal covariance model is assumed to be the product of a spatial and a temporal covariance component. The temporal covariance is derived from in-situ wind speed measurements at 10-minute intervals measured by offshore buoys, and variograms are calculated non-parametrically using an FFT. Spatial covariance tables are created using MM5 or QuikSCAT data with a similar 2D FFT method. The cokriging system was initially validated by predicting "missing" hours of PSU/NCAR MM5 data and has displayed reasonable skill. QuikSCAT satellite winds were also substituted for MM5 data when calculating the spatial covariance, with the goal of reducing the computer time needed to accurately predict a wind energy resource.
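
    The separable-covariance assumption described above, C(ds, dt) = C_s(ds) * C_t(dt), is illustrated below in a plain simple-kriging predictor; the exponential covariance forms, their ranges, and the toy observations are assumptions, whereas the paper derives its covariances non-parametrically from buoy and QuikSCAT data.

      # Sketch: simple kriging with a product spatial-temporal covariance.
      import numpy as np

      def cov(ds, dt, sill=1.0, range_s=50.0, range_t=6.0):
          """Product of an exponential spatial and an exponential temporal covariance."""
          return sill * np.exp(-ds / range_s) * np.exp(-dt / range_t)

      def simple_krige(obs_xy, obs_t, obs_val, tgt_xy, tgt_t, mean=0.0):
          n = len(obs_val)
          ds = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
          dt = np.abs(obs_t[:, None] - obs_t[None, :])
          K = cov(ds, dt) + 1e-9 * np.eye(n)                      # data-data covariance
          k0 = cov(np.linalg.norm(obs_xy - tgt_xy, axis=1),
                   np.abs(obs_t - tgt_t))                          # data-target covariance
          weights = np.linalg.solve(K, k0)
          return mean + weights @ (obs_val - mean)

      if __name__ == "__main__":
          obs_xy = np.array([[0.0, 0.0], [20.0, 5.0], [60.0, 40.0]])  # km
          obs_t = np.array([0.0, 1.0, 12.0])                          # hours
          obs_val = np.array([8.0, 7.5, 5.0])                         # wind speed (m/s)
          print(simple_krige(obs_xy, obs_t, obs_val, np.array([10.0, 2.0]), 0.5, mean=6.0))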

  18. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however create large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the "Cloud Bursting" of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of allocated resources can thus be elastically adjusted to cope with the needs of the CMS experiment and local users. Moreover, direct access/integration of OpenStack resources into the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.

  19. Public-Resource Computing: A New Paradigm for Computing and Science

    OpenAIRE

    2006-01-01

    This article explores the concept of Public-Resource Computing, an idea that has been developed with great success for several years in the scientific community and that consists of harnessing the computing resources available in the millions of PCs around the world connected to the Internet. The SETI@home project, the most successful representative of this concept, is discussed, and the BOINC platform (Ber...

  20. The model of localized business community economic development under limited financial resources: computer model and experiment

    Directory of Open Access Journals (Sweden)

    Berg Dmitry

    2016-01-01

    Full Text Available Globalization processes now affect and are affected by most organizations, many types of resources, and the natural environment. One of the main restrictions arising from these processes is the financial one: money turnover in global markets leads to its concentration in certain financial centers, while local business communities suffer from a lack of money. This work discusses the advantages of introducing a complementary currency into a local economy. Computer simulation with the engineered program model and a real economic experiment proved that the complementary currency does not compete with the traditional currency; furthermore, it acts in compliance with it, providing conditions for sustainable business community development.

  1. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2011-12-01

    Full Text Available The purpose of the article is to theoretically and practically analyze the features of informal computer based communication in the context of the organization’s technological resources. Methodology—meta analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work related information, coordination of team activities, spread of organizational culture and a feeling of interdependence and affinity. Also, informal communication widens the individuals’ recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to using the technological resources of their office for informal communication. Representatives of the courts administration choose friends for computer based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of the court administration are used to meeting their psycho-emotional needs outside the work place. The survey confirmed the conclusion of the theoretical analysis: computer based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  2. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Agota Giedrė Raišienė

    2013-08-01

    Full Text Available The purpose of the article is to theoretically and practically analyze the features of informal computer based communication in the context of the organization’s technological resources. Methodology—meta analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work related information, coordination of team activities, spread of organizational culture and a feeling of interdependence and affinity. Also, informal communication widens the individuals’ recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to using the technological resources of their office for informal communication. Representatives of the courts administration choose friends for computer based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of the court administration are used to meeting their psycho-emotional needs outside the work place. The survey confirmed the conclusion of the theoretical analysis: computer based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  3. Integration and Exposure of Large Scale Computational Resources Across the Earth System Grid Federation (ESGF)

    Science.gov (United States)

    Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.

    2015-12-01

    As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate changes, the combination of Earth observation data and global climate model projects is crucial to not only scientists but to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyber infrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data with limited capabilities for server-side processing. Since the amount of data for future AR is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capabilities. This presentation will highlight the WPS API, its capabilities, provide implementation details, and discuss future developments.
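
    As an illustration of the kind of server-side call described here, the sketch below issues an OGC WPS-style "Execute" request for a time average without downloading the underlying dataset. The endpoint URL, the operation identifier and the datainputs encoding are illustrative assumptions, not the documented ESGF CWT API.

        # Hedged sketch of a WPS Execute request for a server-side average.
        # The node URL, operation name and input encoding are assumptions made
        # for illustration; consult the ESGF CWT documentation for the real API.
        import requests

        ESGF_WPS_ENDPOINT = "https://esgf-node.example.org/wps"  # hypothetical node

        params = {
            "service": "WPS",
            "request": "Execute",
            "identifier": "CDAT.average",          # hypothetical operation name
            "datainputs": (
                '[variable=[{"uri": "https://esgf-node.example.org/data/tas.nc",'
                ' "id": "tas"}]; axes=time]'       # average over the time axis
            ),
        }

        response = requests.get(ESGF_WPS_ENDPOINT, params=params, timeout=60)
        print(response.status_code)
        print(response.text[:500])  # WPS responses are XML status/result documents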

  4. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    Directory of Open Access Journals (Sweden)

    Cesar Torres-Huitzil

    2013-01-01

    Full Text Available Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k×k kernel requires k²−1 comparisons per sample for a direct implementation; thus, the cost scales steeply with the kernel size k. Faster computation can be achieved by kernel decomposition and the use of constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture uses fewer computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters on 1024×1024 images with up to 255×255 kernels in around 8.4 milliseconds, or 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable with the kernel size and offers a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding.
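
    The one-dimensional constant-time recurrence behind this architecture can be sketched in a few lines of Python. This is a minimal software reference for the van Herk/Gil-Werman idea only; the row-parallel hardware decomposition described in the paper is not reproduced here.

        def hgw_running_max(signal, k):
            """1D van Herk/Gil-Werman running maximum with window size k.

            Uses roughly three comparisons per sample independent of k:
            a forward prefix maximum and a backward suffix maximum are
            computed over blocks of size k and then merged per window.
            Returns the maximum of each window signal[i:i+k].
            """
            n = len(signal)
            g = [0] * n  # running max from the start of each block
            h = [0] * n  # running max towards the end of each block

            for i in range(n):
                g[i] = signal[i] if i % k == 0 else max(g[i - 1], signal[i])
            for i in range(n - 1, -1, -1):
                last_in_block = (i % k == k - 1) or (i == n - 1)
                h[i] = signal[i] if last_in_block else max(h[i + 1], signal[i])

            return [max(h[i], g[i + k - 1]) for i in range(n - k + 1)]

        # Window of size 3 over a short signal.
        print(hgw_running_max([4, 1, 3, 5, 2, 6, 0], 3))  # [4, 5, 5, 6, 6]

    The minimum filter is obtained by replacing max with min; two-dimensional k×k filtering follows by applying the one-dimensional pass along rows and then along columns.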

  5. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  6. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods.

    Science.gov (United States)

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  8. Monitoring of Computing Resource Use of Active Software Releases at ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2017-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  9. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
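
    For reference, the recursive formulation that both versions of this paper build on can be sketched directly in Python: one pass builds the integral image, and any rectangular sum is then obtained from four lookups. This is a plain software illustration of the standard recurrences, not the row-parallel hardware decomposition proposed by the authors.

        def integral_image(img):
            """Integral image of a 2D list: ii[y][x] = sum of img[0..y][0..x].

            Built with the usual recurrences: a running sum along the current
            row plus the integral value of the row above.
            """
            h, w = len(img), len(img[0])
            ii = [[0] * w for _ in range(h)]
            for y in range(h):
                row_sum = 0
                for x in range(w):
                    row_sum += img[y][x]
                    ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
            return ii

        def box_sum(ii, x0, y0, x1, y1):
            """Sum over the inclusive rectangle [x0..x1] x [y0..y1] using
            four lookups, independent of the rectangle size."""
            a = ii[y1][x1]
            b = ii[y0 - 1][x1] if y0 > 0 else 0
            c = ii[y1][x0 - 1] if x0 > 0 else 0
            d = ii[y0 - 1][x0 - 1] if (x0 > 0 and y0 > 0) else 0
            return a - b - c + d

        img = [[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]]
        ii = integral_image(img)
        print(box_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28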

  10. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  11. Research on a cloud computing resource scheduling model

    Institute of Scientific and Technical Information of China (English)

    刘赛; 李绪蓉; 万麟瑞; 陈韬

    2013-01-01

    In the cloud computing environment, resource scheduling management is one of the key technologies. This paper describes a cloud computing resource scheduling model and explains the relationships between the entities involved in the resource scheduling process in cloud computing environments. Based on the resource properties of the physical servers, a scheduling model that comprehensively considers the load on cloud computing resources is established, and manual and automatic virtual machine migration is used to balance the load of the physical servers in the cloud computing environment. The experimental results show that this resource scheduling model not only balances the resource load well but also improves the degree of virtualization and the elasticity of the resource pool. Finally, future research directions are discussed.

  12. A computational platform for robotized fluorescence microscopy (II): DNA damage, replication, checkpoint activation, and cell cycle progression by high-content high-resolution multiparameter image-cytometry.

    Science.gov (United States)

    Furia, Laura; Pelicci, Pier Giuseppe; Faretta, Mario

    2013-04-01

    Dissection of complex molecular-networks in rare cell populations is limited by current technologies that do not allow simultaneous quantification, high-resolution localization, and statistically robust analysis of multiple parameters. We have developed a novel computational platform (Automated Microscopy for Image CytOmetry, A.M.I.CO) for quantitative image-analysis of data from confocal or widefield robotized microscopes. We have applied this image-cytometry technology to the study of checkpoint activation in response to spontaneous DNA damage in nontransformed mammary cells. Cell-cycle profile and active DNA-replication were correlated to (i) Ki67, to monitor proliferation; (ii) phosphorylated histone H2AX (γH2AX) and 53BP1, as markers of DNA-damage response (DDR); and (iii) p53 and p21, as checkpoint-activation markers. Our data suggest the existence of cell-cycle modulated mechanisms involving different functions of γH2AX and 53BP1 in DDR, and of p53 and p21 in checkpoint activation and quiescence regulation during the cell-cycle. Quantitative analysis, event selection, and physical relocalization have then been employed to correlate protein expression at the population level with interactions between molecules, measured with Proximity Ligation Analysis, with unprecedented statistical relevance. Copyright © 2013 International Society for Advancement of Cytometry.

  13. Computational replication of the patient-specific stenting procedure for coronary artery bifurcations: From OCT and CT imaging to structural and hemodynamics analyses.

    Science.gov (United States)

    Chiastra, Claudio; Wu, Wei; Dickerhoff, Benjamin; Aleiou, Ali; Dubini, Gabriele; Otake, Hiromasa; Migliavacca, Francesco; LaDisa, John F

    2016-07-26

    The optimal stenting technique for coronary artery bifurcations is still debated. With additional advances, computational simulations can soon be used to compare stent designs or strategies based on verified structural and hemodynamic results in order to identify the optimal solution for each individual's anatomy. In this study, patient-specific simulations of stent deployment were performed for 2 cases to replicate the complete procedure conducted by interventional cardiologists. Subsequent computational fluid dynamics (CFD) analyses were conducted to quantify hemodynamic quantities linked to restenosis. Patient-specific pre-operative models of coronary bifurcations were reconstructed from CT angiography and optical coherence tomography (OCT). Plaque location and composition were estimated from OCT and assigned to models, and structural simulations were performed in Abaqus. Artery geometries after virtual stent expansion of Xience Prime or Nobori stents created in SolidWorks were compared to post-operative geometry from OCT and CT before being extracted and used for CFD simulations in SimVascular. Inflow boundary conditions based on body surface area, and downstream vascular resistances and capacitances were applied at branches to mimic physiology. Artery geometries obtained after virtual expansion were in good agreement with those reconstructed from patient images. Quantitative comparison of the distance between reconstructed and post-stent geometries revealed a maximum difference in area of 20.4%. Adverse indices of wall shear stress were more pronounced for thicker Nobori stents in both patients. These findings verify structural analyses of stent expansion, introduce a workflow to combine software packages for solid and fluid mechanics analysis, and underscore important stent design features from prior idealized studies. The proposed approach may ultimately be useful in determining an optimal choice of stent and position for each patient.

  14. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    Science.gov (United States)

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.
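
    For readers unfamiliar with the monotone, the robustness of magic discussed here is the following minimization over affine decompositions of a state into pure stabilizer states (notation assumed for this note):

        \[
          \mathcal{R}(\rho) \;=\; \min\Big\{ \textstyle\sum_i |x_i| \;:\;
              \rho = \sum_i x_i\,\sigma_i,\;\; x_i \in \mathbb{R},\;\;
              \sigma_i \in \mathrm{STAB} \Big\}
        \]

    Loosely speaking, the larger R(rho) is, the more sampling overhead a Gottesman-Knill-type classical simulation incurs when the magic state rho is supplied as an ancilla.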

  15. Using multiple metaphors and multimodalities as a semiotic resource when teaching year 2 students computational strategies

    Science.gov (United States)

    Mildenhall, Paula; Sherriff, Barbara

    2017-06-01

    Recent research indicates that using multimodal learning experiences can be effective in teaching mathematics. Using a social semiotic lens within a participationist framework, this paper reports on a professional learning collaboration with a primary school teacher designed to explore the use of metaphors and modalities in mathematics instruction. This video case study was conducted in a year 2 classroom over two terms, with the focus on building children's understanding of computational strategies. The findings revealed that the teacher was able to successfully plan both multimodal and multiple metaphor learning experiences that acted as semiotic resources to support the children's understanding of abstract mathematics. The study also led to implications for teaching when using multiple metaphors and multimodalities.

  16. Development of a Computer-Based Resource for Inclusion Science Classrooms

    Science.gov (United States)

    Olsen, J. K.; Slater, T.

    2005-12-01

    Current instructional issues necessitate that educators start with the curriculum and determine how educational technology can assist students in achieving positive learning goals, functionally supplementing classroom instruction. Technology projects incorporating principles of situated learning have been shown to provide an effective framework for learning, and computer technology has been shown to facilitate learning among special needs students. Students with learning disabilities may benefit from assistive technology, but these resources are not always utilized during classroom instruction: technology is only effective if teachers view it as an integral part of the learning process. The materials currently under development are in the domain of earth and space science, part of the Arizona 5-8 Science Content Standards. The concern of this study is to determine a means of assisting inclusive education that is both feasible and effective in ensuring successful science learning outcomes for all students, whether in regular or special education.

  17. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started a collaboration in March 2015 to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end-users with seamless access to an integrated infrastructure offering both EGI and EUDAT services and thus to pair data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help assign the right priorities to each of them. In this way, the activity has been driven by the end users from the beginning. The identified user communities are

  18. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction.

    Science.gov (United States)

    Nezarat, Amin; Dastghaibifard, G H

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users also expect to have the best resources at their disposal considering budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is economic, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for the resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a solution in a shorter time, yields the fewest service level agreement violations, and provides the most utility to the provider.
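
    A toy version of the repeated-bidding loop can make the idea concrete. The sketch below uses a proportional-share (Kelly-style) allocation and simultaneous best responses, iterating until no user wants to change its bid; the utility form, the budgets and the allocation rule are illustrative assumptions rather than the paper's exact mechanism.

        import math

        def best_response(value, others_total, capacity, budget):
            # Best response for a proportional-share auction with utility
            #   value * capacity * b / (b + others_total) - b.
            if others_total <= 0:
                return min(budget, 1e-6)  # any tiny positive bid wins everything
            b = math.sqrt(value * capacity * others_total) - others_total
            return max(0.0, min(budget, b))

        def repeated_auction(values, budgets, capacity, rounds=200, tol=1e-6):
            bids = [1.0] * len(values)
            for _ in range(rounds):
                new_bids = [
                    best_response(v, sum(bids) - bids[i], capacity, budget)
                    for i, (v, budget) in enumerate(zip(values, budgets))
                ]
                converged = max(abs(a - b) for a, b in zip(bids, new_bids)) < tol
                bids = new_bids
                if converged:   # no player wants to deviate: Nash-style fixed point
                    break
            total = sum(bids)
            shares = [capacity * b / total for b in bids]
            return bids, shares

        bids, shares = repeated_auction(values=[4.0, 2.0, 1.0],
                                        budgets=[50.0, 50.0, 50.0],
                                        capacity=100.0)
        print([round(b, 2) for b in bids], [round(s, 1) for s in shares])

    Simultaneous best-response dynamics are not guaranteed to converge in general, which is why the loop is capped at a fixed number of rounds.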

  19. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as those performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  20. IMPROVING RESOURCE UTILIZATION USING QoS BASED LOAD BALANCING ALGORITHM FOR MULTIPLE WORKFLOWS IN IAAS CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    L. Shakkeera

    2013-06-01

    Full Text Available Cloud computing is the extension of parallel computing, distributed computing and grid computing. It provides secure, quick, convenient data storage and network computing services through the internet. The services are available to users in a pay-per-use, on-demand model. The main aim of using resources from the cloud is to reduce cost and to increase performance in terms of request response time. Thus, optimizing resource usage through an efficient load balancing strategy is crucial. The main aim of this paper is to develop and implement an optimized load balancing algorithm in an IaaS virtual cloud environment that utilizes the virtual cloud resources efficiently. It minimizes the cost of the applications by effectively using cloud resources and identifies the virtual cloud resources that are suitable for all the applications. The web application is created with many modules. These modules are considered as tasks and these tasks are submitted to the load balancing server. The server, which hosts our load balancing policies, redirects the tasks to the corresponding virtual machines created by the KVM virtual machine manager as per the load balancing algorithm. If the size of the database inside a machine exceeds its limit, the load balancing algorithm uses the other virtual machines for further incoming requests. The load balancing strategy is evaluated for various QoS performance metrics, such as cost, average execution time, throughput, CPU usage, disk space, memory usage, network transmission and reception rates, resource utilization rate, and scheduling success rate, across different numbers of virtual machines, and it improves scalability among resources using load balancing techniques.
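
    A minimal sketch of the spill-over behaviour described above (tasks go to the current virtual machine until an assumed capacity threshold would be exceeded, then the next machine takes over) might look as follows; the capacities, task sizes and the simple wrap-around policy are illustrative assumptions, not the implemented algorithm.

        class SpilloverBalancer:
            """Toy load balancer: keep dispatching to the active VM until its
            capacity threshold would be exceeded, then move to the next VM."""

            def __init__(self, vm_capacities):
                self.capacities = vm_capacities          # max load units per VM
                self.loads = [0] * len(vm_capacities)    # current load per VM
                self.active = 0

            def dispatch(self, task_size):
                start = self.active
                while True:
                    if self.loads[self.active] + task_size <= self.capacities[self.active]:
                        self.loads[self.active] += task_size
                        return self.active               # index of the chosen VM
                    self.active = (self.active + 1) % len(self.capacities)
                    if self.active == start:
                        raise RuntimeError("all VMs are at capacity")

        balancer = SpilloverBalancer([100, 100, 50])
        print([balancer.dispatch(40) for _ in range(5)])  # [0, 0, 1, 1, 2]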

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months, activities were focused on data operations, on testing and reinforcing shift and operational procedures for data production and transfer, on MC production, and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  2. A whole genome RNAi screen identifies replication stress response genes.

    Science.gov (United States)

    Kavanaugh, Gina; Ye, Fei; Mohni, Kareem N; Luzwick, Jessica W; Glick, Gloria; Cortez, David

    2015-11-01

    Proper DNA replication is critical to maintain genome stability. When the DNA replication machinery encounters obstacles to replication, replication forks stall and the replication stress response is activated. This response includes activation of cell cycle checkpoints, stabilization of the replication fork, and DNA damage repair and tolerance mechanisms. Defects in the replication stress response can result in alterations to the DNA sequence causing changes in protein function and expression, ultimately leading to disease states such as cancer. To identify additional genes that control the replication stress response, we performed a three-parameter, high content, whole genome siRNA screen measuring DNA replication before and after a challenge with replication stress as well as a marker of checkpoint kinase signalling. We identified over 200 replication stress response genes and subsequently analyzed how they influence cellular viability in response to replication stress. These data will serve as a useful resource for understanding the replication stress response.

  3. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    Science.gov (United States)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm that converges all base stations' computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRH). A precondition for centralized processing in the BBU pool is an interconnection fronthaul network with high capacity and low delay. However, the interaction between RRH and BBU and the resource scheduling among BBUs in the cloud have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. In order to overcome this complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP architecture with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  4. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

    The goal of this project was to build an application that calculates the computing resources needed by the LHCb experiment for data processing and analysis, and that predicts their evolution in future years. The source code was developed in the Python programming language and the application was built and developed in CERN GitLab. This application will facilitate the calculation of the resources required by LHCb in both qualitative and quantitative terms. The granularity of the computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.
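
    A minimal sketch of the kind of weekly computation such an application performs is shown below; every rate, size and time used here is a placeholder assumption, not an LHCb computing-model parameter.

        # Toy weekly resource estimate with made-up inputs.
        events_per_week = 2.0e9        # assumed events to process in one week
        cpu_sec_per_event = 10.0       # assumed processing time per event
        raw_kb_per_event = 60.0        # assumed raw event size
        reco_kb_per_event = 30.0       # assumed reconstructed event size

        seconds_per_week = 7 * 24 * 3600
        cores_busy = events_per_week * cpu_sec_per_event / seconds_per_week
        disk_tb = events_per_week * reco_kb_per_event / 1e9   # kB -> TB
        tape_tb = events_per_week * (raw_kb_per_event + reco_kb_per_event) / 1e9

        print(f"~{cores_busy:,.0f} cores kept busy for the week")
        print(f"~{disk_tb:,.0f} TB of disk, ~{tape_tb:,.0f} TB of tape")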

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  6. How to Make the Best Use of Limited Computer Resources in French Primary Schools.

    Science.gov (United States)

    Parmentier, Christophe

    1988-01-01

    Discusses computer science developments in French primary schools and describes strategies for using computers in the classroom most efficiently. Highlights include the use of computer networks; software; artificial intelligence and expert systems; computer-assisted learning (CAL) and intelligent CAL; computer peripherals; simulation; and teaching…

  7. A Practitioner Model of the Use of Computer-Based Tools and Resources to Support Mathematics Teaching and Learning.

    Science.gov (United States)

    Ruthven, Kenneth; Hennessy, Sara

    2002-01-01

    Analyzes the pedagogical ideas underpinning teachers' accounts of the successful use of computer-based tools and resources to support the teaching and learning of mathematics. Organizes central themes to form a pedagogical model capable of informing the use of such technologies in classroom teaching and generating theoretical conjectures for…

  8. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    Science.gov (United States)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States; based on Internet data centres, it provides a standard and open approach to shared network services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of actual teaching needs. Therefore, cloud computing, which uses Internet technology to provide shared resources, has become an important means of sharing digital education in current higher education. Based on the cloud computing environment, the paper analyzes the existing problems in the sharing of digital educational resources among the independent colleges of Jiangxi Province. According to the characteristics of cloud computing, namely mass storage, efficient operation and low cost, the author explores and studies the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model is put into practical application.

  9. Data Resources for the Computer-Guided Discovery of Bioactive Natural Products.

    Science.gov (United States)

    Chen, Ya; de Bruyn Kops, Christina; Kirchmair, Johannes

    2017-08-30

    Natural products from plants, animals, marine life, fungi, bacteria, and other organisms are an important resource for modern drug discovery. Their biological relevance and structural diversity make natural products good starting points for drug design. Natural product-based drug discovery can benefit greatly from computational approaches, which are a valuable precursor or supplementary method to in vitro testing. We present an overview of 25 virtual and 31 physical natural product libraries that are useful for applications in cheminformatics, in particular virtual screening. The overview includes detailed information about each library, the extent of its structural information, and the overlap between different sources of natural products. In terms of chemical structures, there is a large overlap between freely available and commercial virtual natural product libraries. Of particular interest for drug discovery is that at least ten percent of known natural products are readily purchasable and many more natural products and derivatives are available through on-demand sourcing, extraction and synthesis services. Many of the readily purchasable natural products are of small size and hence of relevance to fragment-based drug discovery. There are also an increasing number of macrocyclic natural products and derivatives becoming available for screening.

  10. Disposal of waste computer hard disk drive: data destruction and resources recycling.

    Science.gov (United States)

    Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming

    2013-06-01

    An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information that is closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study, the degaussing method was adopted to destroy the residual data on waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter parallel to the magnetic field direction; and (2) increasing the magnetic field intensity B and the action time t significantly improves the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.

  11. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    Directory of Open Access Journals (Sweden)

    Guohua Fang

    2016-09-01

    Full Text Available To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the National Economic Production Department. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing the values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP). However, waste water may be effectively controlled. Also, this study demonstrates that along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from a situation of heavy pollution to one of light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.

  12. Integrating GRID Tools to Build a Computing Resource Broker:Activities of DataGrid WP1

    Institute of Scientific and Technical Information of China (English)

    C. Anglano; S. Barale; et al.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, and have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on knowledge of the status and characteristics of these resources that is necessarily out of date (collected in a finite amount of time at a very loosely coupled site). We describe the DataGrid approach of integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy.

  13. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); Zwahlen, Daniel [Kantonsspital Graubuenden, Department of Radiotherapy, Chur (Switzerland); Bodis, Stephan [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); University Hospital Zurich, Department of Radiation Oncology, Zurich (Switzerland)

    2016-09-15

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRT units, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRT units, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for the calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist stakeholders and health planners in designing an appropriate strategy for meeting the future radiotherapy needs of Switzerland. (orig.)
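
    The core of a QUARTS-style projection is simple arithmetic: expected cancer incidence times the radiotherapy utilisation rate gives the number of patients needing treatment, which is then divided by an assumed annual throughput per machine or per staff member. The throughput figures below are placeholders for illustration only; the patient numbers are the 2020 figures quoted in the abstract.

        # Back-of-the-envelope staffing projection. Throughput coefficients are
        # illustrative assumptions, NOT the ESTRO-QUARTS or IAEA values.
        patients_needing_rt_2020 = 34_041   # from the abstract (of 50,427 cancer patients)

        patients_per_trt_unit = 450    # assumed annual throughput per teletherapy unit
        patients_per_ro = 200          # assumed annual load per radiation oncologist
        patients_per_mp = 450          # assumed annual load per medical physicist

        print("TRT units needed:", round(patients_needing_rt_2020 / patients_per_trt_unit))
        print("Radiation oncologists needed:", round(patients_needing_rt_2020 / patients_per_ro))
        print("Medical physicists needed:", round(patients_needing_rt_2020 / patients_per_mp))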

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data). In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  15. Computation of groundwater resources and recharge in Chithar River Basin, South India.

    Science.gov (United States)

    Subramani, T; Babu, Savithri; Elango, L

    2013-01-01

    Groundwater recharge and the available groundwater resources in the Chithar River basin, Tamil Nadu, India, spread over an area of 1,722 km², have been estimated by considering various hydrological, geological, and hydrogeological parameters, such as rainfall infiltration, drainage, geomorphic units, land use, rock types, depth of weathered and fractured zones, nature of soil, water level fluctuation, saturated thickness of aquifer, and groundwater abstraction. The digital ground elevation models indicate that the regional slope of the basin is towards the east. The Proterozoic (Post-Archaean) basement of the study area consists of quartzite, calc-granulite, crystalline limestone, charnockite, and biotite gneiss with or without garnet. Three major soil types were identified, namely black cotton, deep red, and red sandy soils. The rainfall intensity gradually decreases from west to east. Groundwater occurs under water table conditions in the weathered zone and fluctuates between 0 and 25 m. The water table reaches its maximum in January, after the northeast monsoon, and its minimum in October. Groundwater abstraction for domestic/stock and irrigation needs in the Chithar River basin has been estimated as 148.84 MCM (million m³). Groundwater recharge due to monsoon rainfall infiltration has been estimated as 170.05 MCM based on the water level rise during the monsoon period; it is estimated as 173.9 MCM using a rainfall infiltration factor. An amount of 53.8 MCM of water is contributed to groundwater from surface water bodies. Recharge of groundwater due to return flow from irrigation has been computed as 147.6 MCM. The static groundwater reserve in the Chithar River basin is estimated as 466.66 MCM and the dynamic reserve as about 187.7 MCM. In the present scenario, the aquifer is in a safe condition for the extraction of groundwater for domestic and irrigation purposes. If the existing water bodies are maintained properly, the extraction rate can be increased by about 10% to 15% in the future.
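
    The rainfall-infiltration-factor estimate mentioned above reduces to a one-line water balance. In the sketch below only the basin area comes from the abstract; the rainfall depth and the infiltration factor are placeholder values chosen to show the unit conversion, not the study's inputs.

        # Rainfall infiltration method, illustrative inputs only.
        basin_area_km2 = 1722          # Chithar River basin (from the abstract)
        annual_rainfall_mm = 1000      # assumed effective annual rainfall
        infiltration_factor = 0.10     # assumed fraction of rainfall reaching the aquifer

        # 1 km^2 x 1 mm of water = 1,000 m^3 = 0.001 million cubic metres (MCM)
        recharge_mcm = basin_area_km2 * annual_rainfall_mm * infiltration_factor / 1000
        print(f"Estimated recharge: {recharge_mcm:.1f} MCM")  # ~172 MCM with these inputs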

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  17. Archaeal DNA replication.

    Science.gov (United States)

    Kelman, Lori M; Kelman, Zvi

    2014-01-01

    DNA replication is essential for all life forms. Although the process is fundamentally conserved in the three domains of life, bioinformatic, biochemical, structural, and genetic studies have demonstrated that the process and the proteins involved in archaeal DNA replication are more similar to those in eukaryal DNA replication than in bacterial DNA replication, but have some archaeal-specific features. The archaeal replication system, however, is not monolithic, and there are some differences in the replication process between different species. In this review, the current knowledge of the mechanisms governing DNA replication in Archaea is summarized. The general features of the replication process as well as some of the differences are discussed.

  18. DLESE Teaching Box Pilot Project: Developing a Replicable Model for Collaboratively Creating Innovative Instructional Sequences Using Exemplary Resources in the Digital Library for Earth System Education (DLESE)

    Science.gov (United States)

    Weingroff, M.

    2004-12-01

    Before the advent of digital libraries, it was difficult for teachers to find suitable high-quality resources to use in their teaching. Digital libraries such as DLESE have eased the task by making high quality resources more easily accessible and providing search mechanisms that allow teachers to 'fine tune' the criteria over which they search. Searches tend to return lists of resources with some contextualizing information. However, teachers who are teaching 'out of discipline' or who have minimal training in science often need additional support to know how to use and sequence them. The Teaching Box Pilot Project was developed to address these concerns, bringing together educators, scientists, and instructional designers in a partnership to build an online framework to fully support innovative units of instruction about the Earth system. Each box integrates DLESE resources and activities, teaching tips, standards, concepts, teaching outcomes, reviews, and assessment information. Online templates and best practice guidelines are being developed that will enable teachers to create their own boxes or customize existing ones. Two boxes have been developed so far, one on weather for high school students, and one on the evidence for plate tectonics for middle schoolers. The project has met with significant enthusiasm and interest, and we hope to expand it by involving individual teachers, school systems, pre-service programs, and universities in the development and use of teaching boxes. A key ingredient in the project's success has been the close collaboration between the partners, each of whom has brought unique experiences, perspectives, knowledge, and skills to the project. This first effort involved teachers in the San Francisco Bay area, the University of California Museum of Paleontology, San Francisco State University, U.S. Geological Survey, and DLESE. This poster will allow participants to explore one of the teaching boxes. We will discuss how the boxes were

  19. Understanding how replication processes can maintain systems away from equilibrium using Algorithmic Information Theory.

    Science.gov (United States)

    Devine, Sean D

    2016-02-01

    Replication can be envisaged as a computational process that is able to generate and maintain order far-from-equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact on a system. The capability of replicated structures to access high quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements to maintain a system far-from-equilibrium. Using Landauer's principle, where destabilising processes, operating under the second law of thermodynamics, change the information content or the algorithmic entropy of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside interventions. Both diversity in replicated structures, and the coupling of different replicated systems, increase the ability of the system (or systems) to self-regulate in a changing environment as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
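
    The quantitative step in this argument is Landauer's bound: offsetting an algorithmic-entropy increase of ΔH bits requires access to high-quality energy of at least

        \[
          \Delta E_{\min} \;=\; \Delta H \, k_{B} T \ln 2 ,
        \]

    where k_B is Boltzmann's constant and T is the temperature of the environment into which the disorder is ejected (standard form of the bound; notation assumed here).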

  20. Editorial: Special issue on resources for the computer security and information assurance curriculum: Issue 1; Curriculum Editorial Comments, Volume 1 and Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Frincke, Deb; Ouderkirk, Steven J.; Popovsky, Barbara

    2006-12-28

    This is a pair of articles to be used as the cover editorials for a special edition of the Journal of Educational Resources in Computing (JERIC) Special Edition on Resources for the Computer Security and Information Assurance Curriculum, volumes 1 and 2.

  1. A New Approach for a Better Load Balancing and a Better Distribution of Resources in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Abdellah IDRISSI

    2015-10-01

    Full Text Available Cloud computing is a new paradigm where data and Information Technology services are provided via the Internet using remote servers. It represents a new way of delivering computing resources, allowing access to the network on demand. Cloud computing consists of several services, each of which can hold several tasks. As the problem of scheduling tasks is NP-complete, task management is an important element of cloud computing technology. To optimize the performance of the virtual machines hosted in cloud computing, several task scheduling algorithms have been proposed. In this paper, we present an approach that solves the problem optimally while taking into account the QoS constraints based on the different user requests. This technique, based on the Branch and Bound algorithm, assigns tasks to different virtual machines while ensuring load balance and a better distribution of resources. The experimental results show that our approach gives very promising results for effective task planning.
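
    As a rough illustration of how a Branch and Bound search can assign tasks to virtual machines, the sketch below minimises the makespan (the largest per-VM load) with depth-first branching and a simple incumbent-based bound. The cost model and the symmetry pruning are illustrative assumptions, not the QoS formulation used in the paper.

        def branch_and_bound_assign(task_times, n_vms):
            """Assign each task to a VM so that the makespan is minimal."""
            best = {"makespan": float("inf"), "assignment": None}
            loads = [0.0] * n_vms
            assignment = [None] * len(task_times)

            def dfs(i):
                if i == len(task_times):
                    makespan = max(loads)
                    if makespan < best["makespan"]:
                        best["makespan"] = makespan
                        best["assignment"] = assignment.copy()
                    return
                tried = set()
                for vm in range(n_vms):
                    if loads[vm] in tried:
                        continue          # symmetric branch already explored
                    tried.add(loads[vm])
                    # Bound: placing the task here cannot beat the incumbent.
                    if loads[vm] + task_times[i] >= best["makespan"]:
                        continue
                    loads[vm] += task_times[i]
                    assignment[i] = vm
                    dfs(i + 1)
                    loads[vm] -= task_times[i]

            dfs(0)
            return best["makespan"], best["assignment"]

        # Five tasks on two VMs; the optimal makespan here is 7.
        print(branch_and_bound_assign([4, 3, 3, 2, 2], n_vms=2))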

  2. Computer Resources for Schools: Notes for Teachers and Students. [Educational Activities Kit.

    Science.gov (United States)

    Computer Museum, Boston, MA.

    This kit features an introduction to the Computer Museum, a history of computer technology, and notes on how a computer works including hardware and software. A total of 20 exhibits are described with brief questions for use as a preview of the exhibit or as ideas for post-visit discussions. There are 24 classroom activities about the history and…

  3. Dynamic scheduling model of computing resource based on MAS cooperation mechanism

    Institute of Scientific and Technical Information of China (English)

    JIANG WeiJin; ZHANG LianMei; WANG Pu

    2009-01-01

    Allocation of grid resources aims at improving resource utility and grid application performance. Currently, the algorithms proposed for this purpose do not fit well with the autonomic, dynamic, distributed and heterogeneous nature of the grid environment. Based on the MAS (multi-agent system) cooperation mechanism and market bidding game rules, a model of grid resource allocation based on a market economy is introduced to reveal the relationship between supply and demand. This model makes good use of the learning and negotiating abilities of the consumers' agents and takes full account of consumer behavior, thus rendering the consumers' resource requests and allocations rational and valid. In addition, the consumer utility function is given; the existence and uniqueness of the Nash equilibrium point in the resource allocation game, as well as the Nash equilibrium solution, are discussed. A dynamic game algorithm for allocating grid resources is designed. Experimental results demonstrate that this algorithm effectively reduces unnecessary latency and significantly improves the smoothness of response time, throughput and resource utilization, thus keeping the supply and demand of grid resources reasonable and the overall grid load balanced.
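
    A toy market-style rule in the same spirit (an illustration only, not the game-theoretic model of the article) lets each consumer agent bid for capacity and receive a share proportional to its bid; proportional-share allocation is one of the simplest mechanisms with a well-studied Nash-equilibrium analysis.

```python
# Hypothetical proportional-share auction: each agent's share of grid capacity
# is its bid divided by the sum of all bids.
def proportional_share(bids, capacity):
    total = sum(bids.values())
    return {agent: capacity * bid / total for agent, bid in bids.items()}

bids = {"agentA": 10.0, "agentB": 30.0, "agentC": 60.0}
print(proportional_share(bids, capacity=100.0))  # agentC receives 60 units, etc.
```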

  4. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Bigeleisen, Jacob; Berne, Bruce J.; Coton, F. Albert; Scheraga, Harold A.; Simmons, Howard E.; Snyder, Lawrence C.; Wiberg, Kenneth B.; Wipke, W. Todd

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered.

  5. Resource discovery algorithm based on hierarchical model and Conscious search in Grid computing system

    Directory of Open Access Journals (Sweden)

    Nasim Nickbakhsh

    2017-03-01

    Full Text Available The Grid is a distributed system that shares non-homogeneous resources on a vast scale in a dynamic manner. The resource discovery method strongly influences the efficiency and quality of the system's functionality. The “Bitmap” model is based on a hierarchical, conscious search that produces less traffic and fewer messages than other methods. The proposed method, also based on hierarchical and conscious search, enhances the Bitmap method with the objectives of reducing traffic, reducing the load of resource management processing, reducing the number of messages generated by resource discovery, and increasing the speed of resource discovery. The proposed method and the Bitmap method are simulated with the Arena tool. This proposed model is abbreviated as RNTL.
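
    As a rough illustration of the bitmap idea only (the data layout and pruning rule here are assumptions, not details from the article), each node can advertise the resource types it offers as a bit vector, and a parent in the hierarchy can summarise its subtree and forward a query only where the bitmap can still match:

```python
# Hypothetical bitmap index: bit i set => the node offers resource type i.
class Node:
    def __init__(self, name, resource_bits=0):
        self.name, self.bits, self.children = name, resource_bits, []

    def add_child(self, child):
        self.children.append(child)
        self.bits |= child.bits          # parent summarises its subtree

    def discover(self, query_bits):
        """Return names of leaf nodes that offer all requested resource types."""
        if self.bits & query_bits != query_bits:
            return []                    # prune: this subtree cannot satisfy the query
        if not self.children:
            return [self.name]
        hits = []
        for child in self.children:
            hits.extend(child.discover(query_bits))
        return hits

root = Node("root")
root.add_child(Node("site1", 0b0011))    # offers types 0 and 1
root.add_child(Node("site2", 0b0110))    # offers types 1 and 2
print(root.discover(0b0010))             # -> ['site1', 'site2']
print(root.discover(0b0100))             # -> ['site2']
```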

  6. A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.

    Science.gov (United States)

    Moretti, Loris; Sartori, Luca

    2016-10-01

    Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects are covered, from general layout to technical details. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

    2012-07-14

    The UCoMS research cluster has spearheaded three research areas since August 2004: wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefront in the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made cooperatively by research investigators in their respective areas of expertise on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.

  8. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    CERN Document Server

    Buyya, Rajkumar; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., the hardware, power units, cooling and software), and holistically work to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of ...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  10. University Students and Ethics of Computer Technology Usage: Human Resource Development

    Science.gov (United States)

    Iyadat, Waleed; Iyadat, Yousef; Ashour, Rateb; Khasawneh, Samer

    2012-01-01

    The primary purpose of this study was to determine the level of students' awareness about computer technology ethics at the Hashemite University in Jordan. A total of 180 university students participated in the study by completing the questionnaire designed by the researchers, named the Computer Technology Ethics Questionnaire (CTEQ). Results…

  11. Using Free Computational Resources to Illustrate the Drug Design Process in an Undergraduate Medicinal Chemistry Course

    Science.gov (United States)

    Rodrigues, Ricardo P.; Andrade, Saulo F.; Mantoani, Susimaire P.; Eifler-Lima, Vera L.; Silva, Vinicius B.; Kawano, Daniel F.

    2015-01-01

    Advances in, and dissemination of, computer technologies in the field of drug research now enable the use of molecular modeling tools to teach important concepts of drug design to chemistry and pharmacy students. A series of computer laboratories is described to introduce undergraduate students to commonly adopted "in silico" drug design…

  13. A new fuzzy optimal data replication method for data grid

    Directory of Open Access Journals (Sweden)

    Zeinab Ghilavizadeh

    2013-03-01

    Full Text Available These days, there are many applications in which we face large data sets, and such data have become an important part of the common resources in different scientific areas. In fact, many applications handle huge amounts of information, measured in terabytes or even petabytes. Many scientists work with huge amounts of data distributed geographically around the world through advanced computing systems. The huge data volumes and calculations have created new problems in accessing, processing and distributing data. The challenges of the data management infrastructure have become severe under large amounts of data, dispersed geographical locations, and complicated calculations. The Data Grid is a remedy to all of these problems. In this paper, a new method of dynamic optimal data replication in the data grid is introduced; it reduces the total job execution time and increases access locality by detecting and acting on the factors that influence data replication. The proposed method is composed of two main phases. The first phase is the file request and replication phase. In this phase, we evaluate three factors influencing data replication and determine whether the requested file should be replicated or accessed remotely. In the second phase, the replacement phase, the proposed method investigates whether there is enough space in the destination to store the requested file. In this phase, the proposed method also chooses the replica with the lowest value for deletion, considering three replica factors, to increase the performance of the system. The simulation results also indicate the improved performance of our proposed method compared with other replication methods available in the simulator Optorsim.
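
    The replacement idea of the second phase can be pictured with a hedged sketch (the scoring factors and weights below are placeholders, not the fuzzy rules of the paper): each stored replica is scored from a few usage factors, and the lowest-valued replica is evicted when space is needed.

```python
# Hypothetical replica-value scoring for eviction; weights are illustrative only.
def replica_value(access_count, last_access_age_s, size_mb,
                  w_access=0.6, w_recency=0.3, w_size=0.1):
    recency = 1.0 / (1.0 + last_access_age_s)       # recently used replicas score higher
    return w_access * access_count + w_recency * recency - w_size * size_mb

def choose_victim(replicas):
    """replicas: dict name -> (access_count, last_access_age_s, size_mb)."""
    return min(replicas, key=lambda r: replica_value(*replicas[r]))

replicas = {"fileA": (120, 30.0, 500), "fileB": (3, 86400.0, 200)}
print(choose_victim(replicas))   # 'fileB' is the least valuable replica here
```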

  14. Replication in Overlay Networks: A Multi-objective Optimization Approach

    Science.gov (United States)

    Al-Haj Hassan, Osama; Ramaswamy, Lakshmish; Miller, John; Rasheed, Khaled; Canfield, E. Rodney

    Recently, overlay network-based collaborative applications such as instant messaging, content sharing, and Internet telephony are becoming increasingly popular. Many of these applications rely upon data-replication to achieve better performance, scalability, and reliability. However, replication entails various costs such as storage for holding replicas and communication overheads for ensuring replica consistency. While simple rule-of-thumb strategies are popular for managing the cost-benefit tradeoffs of replication, they cannot ensure optimal resource utilization. This paper explores a multi-objective optimization approach for replica management, which is unique in the sense that we view the various factors influencing replication decisions such as access latency, storage costs, and data availability as objectives, and not as constraints. This enables us to search for solutions that yield close to optimal values for these parameters. We propose two novel algorithms, namely multi-objective Evolutionary (MOE) algorithm and multi-objective Randomized Greedy (MORG) algorithm for deciding the number of replicas as well as their placement within the overlay. While MOE yields higher quality solutions, MORG is better in terms of computational efficiency. The paper reports a series of experiments that demonstrate the effectiveness of the proposed algorithms.
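
    A simple scalarised greedy placement (sketched below as a rough stand-in for MORG; the weights, latency matrix and stopping rule are assumptions, not the published algorithm) illustrates the trade-off being optimised: replicas are added to the nodes that most reduce a weighted combination of mean access latency and storage cost.

```python
# Hypothetical greedy replica placement: keep adding replicas while a weighted
# objective (mean client-to-nearest-replica latency + storage cost) improves.
def mean_latency(latency, placed):
    # each client is served by its closest replica
    return sum(min(row[n] for n in placed) for row in latency) / len(latency)

def objective(latency, storage_cost, placed, w_lat=1.0, w_cost=0.5):
    return (w_lat * mean_latency(latency, placed)
            + w_cost * sum(storage_cost[n] for n in placed))

def greedy_placement(latency, storage_cost):
    nodes = range(len(storage_cost))
    placed = [min(nodes, key=lambda n: sum(row[n] for row in latency))]
    while True:
        candidates = [n for n in nodes if n not in placed]
        if not candidates:
            return placed
        best = min(candidates, key=lambda n: objective(latency, storage_cost, placed + [n]))
        if objective(latency, storage_cost, placed + [best]) >= objective(latency, storage_cost, placed):
            return placed            # another replica no longer pays off
        placed.append(best)

latency = [[5, 8, 60], [7, 4, 45], [55, 42, 6]]     # client x node round-trip times
print(greedy_placement(latency, storage_cost=[10, 10, 10]))  # -> [1, 2]
```

    In this toy run the third replica is not placed because its storage cost outweighs the small latency gain, which is the kind of cost-benefit decision the paper formulates as a multi-objective search.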

  15. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to the TV audience of various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.

  16. Tracking the Flow of Resources in Electronic Waste - The Case of End-of-Life Computer Hard Disk Drives.

    Science.gov (United States)

    Habib, Komal; Parajuly, Keshav; Wenzel, Henrik

    2015-10-20

    Recovery of resources, in particular metals, from waste flows is widely seen as a prioritized option to reduce their potential supply constraints in the future. The current waste electrical and electronic equipment (WEEE) treatment system is more focused on bulk metals, and the recycling rate of specialty metals, such as rare earths, is negligible compared to their increasing use in modern products, such as electronics. This study investigates the challenges in recovering these resources in the existing WEEE treatment system. It is illustrated by following the material flows of resources in a conventional WEEE treatment plant in Denmark. Computer hard disk drives (HDDs) containing neodymium-iron-boron (NdFeB) magnets were selected as the case product for this experiment. The resulting output fractions were tracked until their final treatment in order to estimate the recovery potential of rare earth elements (REEs) and other resources contained in HDDs. The results show that, out of the 244 kg of HDDs treated, 212 kg, comprising mainly aluminum and steel, can ultimately be recovered from the metallurgical process. The results further demonstrate the complete loss of REEs in the existing shredding-based WEEE treatment processes. Dismantling and separate processing of NdFeB magnets from their end-use products can be preferable to shredding. However, it remains a technological and logistic challenge for the existing system.

  17. Methods of resource management and applications in computing systems based on cloud technology

    Directory of Open Access Journals (Sweden)

    Карина Андріївна Мацуєва

    2015-07-01

    Full Text Available This article describes the methods of managing resources and applications that are part of an information system for science research (ISSR). The control model of requests in the ISSR is given, and the results of operating a real cloud system using an additional load-distribution module programmed in Python are presented.
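
    As a minimal illustration of what such a Python load-distribution module might do (the dispatch rule below is an assumption for illustration; the ISSR module itself is not described in detail here), a dispatcher can route each incoming request to the currently least-loaded worker:

```python
# Hypothetical least-loaded dispatcher, illustrating one simple load-distribution rule.
import heapq

class Dispatcher:
    def __init__(self, workers):
        # heap of (current_load, worker_name)
        self.heap = [(0.0, w) for w in workers]
        heapq.heapify(self.heap)

    def submit(self, request_cost):
        load, worker = heapq.heappop(self.heap)    # pick the least-loaded worker
        heapq.heappush(self.heap, (load + request_cost, worker))
        return worker

d = Dispatcher(["vm1", "vm2", "vm3"])
print([d.submit(cost) for cost in (3, 1, 2, 4, 1)])
```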

  18. Recommended Computer End-User Skills for Business Students by Fortune 500 Human Resource Executives.

    Science.gov (United States)

    Zhao, Jensen J.

    1996-01-01

    Human resources executives (83 responses from 380) strongly recommended 11 and recommended 46 end-user skills for business graduates. Core skills included use of keyboard, mouse, microcomputer, and printer; Windows; Excel; and telecommunications functions (electronic mail, Internet, local area networks, downloading). Knowing one application of…

  19. Recommendations for protecting National Library of Medicine Computing and Networking Resources

    Energy Technology Data Exchange (ETDEWEB)

    Feingold, R.

    1994-11-01

    Protecting Information Technology (IT) involves a number of interrelated factors. These include mission, available resources, technologies, existing policies and procedures, internal culture, contemporary threats, and strategic enterprise direction. In the face of this formidable list, a structured approach provides cost-effective actions that allow the organization to manage its risks. We face fundamental challenges that will persist for at least the next several years. It is difficult if not impossible to precisely quantify risk. IT threats and vulnerabilities change rapidly and continually. Limited organizational resources, combined with mission restraints such as availability and connectivity requirements, will ensure that most systems will not be absolutely secure (if such security were even possible). In short, there is no technical (or administrative) "silver bullet." Protection means employing a stratified series of recommendations, matching protection levels against information sensitivities. Adaptive and flexible risk management is the key to effective protection of IT resources. The cost of the protection must be kept less than the expected loss, and one must take into account that an adversary will not expend more to attack a resource than the value of its compromise to that adversary. Notwithstanding the difficulty, if not impossibility, of precisely quantifying risk, the aforementioned allows us to avoid the trap of choosing a course of action simply because "it's safer" or ignoring an area because no one has explored its potential risk. Recommendations for protecting IT resources begin with discussing contemporary threats and vulnerabilities, and then proceed from general to specific preventive measures. From a risk management perspective, it is imperative to understand that today the vast majority of threats are against UNIX hosts connected to the Internet.

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  1. Aggregating Data for Computational Toxicology Applications: The U.S. Environmental Protection Agency (EPA) Aggregated Computational Toxicology Resource (ACToR) System

    Directory of Open Access Journals (Sweden)

    Elaine A. Cohen Hubal

    2012-02-01

    Full Text Available Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built using open source tools and is freely available to download. This review describes the organization of the data repository and provides selected examples of use cases.

  2. Replication Restart in Bacteria.

    Science.gov (United States)

    Michel, Bénédicte; Sandler, Steven J

    2017-07-01

    In bacteria, replication forks assembled at a replication origin travel to the terminus, often a few megabases away. They may encounter obstacles that trigger replisome disassembly, rendering replication restart from abandoned forks crucial for cell viability. During the past 25 years, the genes that encode replication restart proteins have been identified and genetically characterized. In parallel, the enzymes were purified and analyzed in vitro, where they can catalyze replication initiation in a sequence-independent manner from fork-like DNA structures. This work also revealed a close link between replication and homologous recombination, as replication restart from recombination intermediates is an essential step of DNA double-strand break repair in bacteria and, conversely, arrested replication forks can be acted upon by recombination proteins and converted into various recombination substrates. In this review, we summarize this intense period of research that led to the characterization of the ubiquitous replication restart protein PriA and its partners, to the definition of several replication restart pathways in vivo, and to the description of tight links between replication and homologous recombination, responsible for the importance of replication restart in the maintenance of genome stability. Copyright © 2017 American Society for Microbiology.

  3. Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key

    Science.gov (United States)

    Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.

    2017-01-01

    Like other common asymmetric encryption schemes, RSA can be cracked using a series of mathematical calculations. The private key used to decrypt the message can be computed from the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method for performing distributed computing to calculate RSA's private key. The proposed method uses multiple volunteered mobile devices to contribute during the calculation process. Our objective is to demonstrate how volunteer computing on mobile devices may be a feasible option for reducing the time required to break a weak RSA encryption, and to observe the behavior and running time of the application on mobile devices.
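
    A very small sketch of the idea (plain trial division over disjoint candidate ranges, standing in for whichever factorisation method the authors actually used) shows how the search for a factor of a weak RSA modulus can be split across volunteered devices:

```python
# Hypothetical range-partitioned trial division: each "device" scans its own
# slice of candidate divisors of the RSA modulus n.
def find_factor_in_range(n, start, stop):
    for d in range(max(start, 2), stop):
        if n % d == 0:
            return d
    return None

def split_work(n, n_devices):
    """Partition candidate divisors [2, sqrt(n)] into one slice per device."""
    limit = int(n ** 0.5) + 1
    step = max(1, (limit - 2) // n_devices + 1)
    return [(2 + i * step, min(2 + (i + 1) * step, limit + 1)) for i in range(n_devices)]

n = 1009 * 2003                       # toy modulus built from two small primes
slices = split_work(n, n_devices=4)
results = [find_factor_in_range(n, lo, hi) for lo, hi in slices]  # one call per device
print(results)                        # one slice finds the factor 1009; the rest return None
```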

  4. A mathematical model for a distributed attack on targeted resources in a computer network

    Science.gov (United States)

    Haldar, Kaushik; Mishra, Bimal Kumar

    2014-09-01

    A mathematical model has been developed to analyze the spread of a distributed attack on critical targeted resources in a network. The model provides an epidemic framework with two sub-frameworks to consider the difference between the overall behavior of the attacking hosts and the targeted resources. The analysis focuses on obtaining threshold conditions that determine the success or failure of such attacks. Considering the criticality of the systems involved and the strength of the defence mechanism involved, a measure has been suggested that highlights the level of success that has been achieved by the attacker. To understand the overall dynamics of the system in the long run, its equilibrium points have been obtained and their stability has been analyzed, and conditions for their stability have been outlined.
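
    To make the epidemic framing concrete, here is a hedged numerical sketch (the functional forms, parameters and threshold are illustrative stand-ins, not the authors' equations): attacking hosts spread among susceptible machines, compromise targeted resources, and a simple spread-to-recovery ratio decides whether the attack persists or dies out.

```python
# Illustrative two-population SI-style model: a = fraction of attacking hosts,
# t = fraction of compromised targeted resources. Simple Euler integration.
def simulate(beta_a=0.4, beta_t=0.3, recovery_a=0.2, recovery_t=0.25,
             a0=0.01, t0=0.0, dt=0.01, steps=5000):
    a, t = a0, t0
    for _ in range(steps):
        da = beta_a * a * (1 - a) - recovery_a * a        # spread among attacking hosts
        dtg = beta_t * a * (1 - t) - recovery_t * t       # compromise of targeted resources
        a, t = a + dt * da, t + dt * dtg
    return a, t

r0 = 0.4 / 0.2          # crude threshold: spread rate over recovery rate
print("threshold R0 =", r0, "-> attack persists" if r0 > 1 else "-> attack dies out")
print("long-run fractions (attackers, compromised targets):", simulate())
```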

  5. Sustainable supply chain management through enterprise resource planning (ERP): a model of sustainable computing

    OpenAIRE

    Broto Rauth Bhardwaj

    2015-01-01

    Green supply chain management (GSCM) is a driver of sustainable strategy. This topic is becoming increasingly important for both academia and industry. With the increasing demand for reducing carbon foot prints, there is a need to study the drivers of sustainable development. There is also need for developing the sustainability model. Using resource based theory (RBT) the present model for sustainable strategy has been developed. On the basis of data collected, the key drivers of sustainabili...

  6. Development of a Computational Framework for Stochastic Co-optimization of Water and Energy Resource Allocations under Climatic Uncertainty

    Science.gov (United States)

    Xuan, Y.; Mahinthakumar, K.; Arumugam, S.; DeCarolis, J.

    2015-12-01

    Owing to the lack of a consistent approach to assimilate probabilistic forecasts for water and energy systems, utilization of climate forecasts for conjunctive management of these two systems is very limited. Prognostic management of these two systems presents a stochastic co-optimization problem that seeks to determine reservoir releases and power allocation strategies while minimizing the expected operational costs subject to probabilistic climate forecast constraints. To address these issues, we propose a high performance computing (HPC) enabled computational framework for stochastic co-optimization of water and energy resource allocations under climate uncertainty. The computational framework embodies a new paradigm shift in which attributes of climate (e.g., precipitation, temperature) and its forecasted probability distribution are employed conjointly to inform seasonal water availability and electricity demand. The HPC enabled cyberinfrastructure framework is developed to perform detailed stochastic analyses, and to better quantify and reduce the uncertainties associated with water and power systems management by utilizing improved hydro-climatic forecasts. In this presentation, our stochastic multi-objective solver, extended from Optimus (Optimization Methods for Universal Simulators), is introduced. The solver uses a parallel cooperative multi-swarm method for the efficient solution of large-scale simulation-optimization problems on parallel supercomputers. The cyberinfrastructure harnesses HPC resources to perform intensive computations using ensemble forecast models of streamflow and power demand. The stochastic multi-objective particle swarm optimizer we developed is used to co-optimize water and power system models under constraints over a large number of ensembles. The framework sheds light on the application of climate forecasts and the cyber-innovation framework to improve management and promote the sustainability of water and energy systems.

  7. LHCb Data Replication During SC3

    CERN Multimedia

    Smith, A

    2006-01-01

    LHCb's participation in LCG's Service Challenge 3 involves testing the bulk data transfer infrastructure developed to allow high bandwidth distribution of data across the grid in accordance with the computing model. To enable reliable bulk replication of data, LHCb's DIRAC system has been integrated with gLite's File Transfer Service middleware component to make use of dedicated network links between LHCb computing centres. DIRAC's Data Management tools previously allowed the replication, registration and deletion of files on the grid. For SC3 supplementary functionality has been added to allow bulk replication of data (using FTS) and efficient mass registration to the LFC replica catalog. Provisional performance results have shown that the system developed can meet the expected data replication rate required by the computing model in 2007. This paper details the experience and results of integration and utilisation of DIRAC with the SC3 transfer machinery.

  8. Computer and Video Games in Family Life: The Digital Divide as a Resource in Intergenerational Interactions

    Science.gov (United States)

    Aarsand, Pal Andre

    2007-01-01

    In this ethnographic study of family life, intergenerational video and computer game activities were videotaped and analysed. Both children and adults invoked the notion of a digital divide, i.e. a generation gap between those who master and do not master digital technology. It is argued that the digital divide was exploited by the children to…

  9. Planning and Development of the Computer Resource at Baylor College of Medicine.

    Science.gov (United States)

    And Others; Ogilvie, W. Buckner, Jr.

    1979-01-01

    Describes the development and implementation of a plan at Baylor College of Medicine for providing computer support for both the administrative and scientific/ research needs of the Baylor community. The cost-effectiveness of this plan is also examined. (Author/CMV)

  10. Computers for All Students: A Strategy for Universal Access to Information Resources.

    Science.gov (United States)

    Resmer, Mark; And Others

    This report proposes a strategy of putting networked computing devices into the hands of all students at institutions of higher education. It outlines the rationale for such a strategy, the options for financing, the required institutional support structure needed, and various implementation approaches. The report concludes that the resultant…

  11. The Portability of Computer-Related Educational Resources: An Overview of Issues and Directions.

    Science.gov (United States)

    Collis, Betty A.; De Diana, Italo

    1990-01-01

    Provides an overview of the articles in this special issue, which deals with the portability, or transferability, of educational computer software. Motivations for portable software relating to cost, personnel, and time are discussed, and factors affecting portability are described, including technical factors, educational factors, social/cultural…

  12. Method and apparatus for offloading compute resources to a flash co-processing appliance

    Energy Technology Data Exchange (ETDEWEB)

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing -bung

    2015-10-13

    Solid-State Drive (SSD) burst buffer nodes are interposed into a parallel supercomputing cluster to enable fast burst checkpointing of cluster memory to or from nearby interconnected solid-state storage, with asynchronous migration between the burst buffer nodes and slower, more distant disk storage. The SSD nodes also perform tasks offloaded from the compute nodes or associated with the checkpoint data. For example, the data for the next job is preloaded into the SSD node and uploaded very quickly to the respective compute node just before the next job starts. During a job, the SSD nodes perform fast visualization and statistical analysis upon the checkpoint data. The SSD nodes can also perform data reduction and encryption of the checkpoint data.

  13. Resources and costs for microbial sequence analysis evaluated using virtual machines and cloud computing.

    Directory of Open Access Journals (Sweden)

    Samuel V Angiuoli

    Full Text Available BACKGROUND: The widespread popularity of genomic applications is threatened by the "bioinformatics bottleneck" resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. RESULTS: We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. CONCLUSIONS: Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer invested
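
    The dollar figures quoted above reduce to simple instance-hours arithmetic; a hedged back-of-envelope helper (the hourly and storage rates below are placeholders, not actual EC2 prices or figures from the paper) might look like:

```python
# Hypothetical cloud-cost estimate: instances * hours * hourly rate (+ storage).
def run_cost(n_instances, hours, hourly_rate_usd, storage_gb=0, gb_month_usd=0.10):
    compute = n_instances * hours * hourly_rate_usd
    storage = storage_gb * gb_month_usd          # assumes data is kept for one month
    return round(compute + storage, 2)

# e.g. a 120-CPU run modelled as 15 eight-core instances for 20 hours at a placeholder rate
print(run_cost(n_instances=15, hours=20, hourly_rate_usd=0.17, storage_gb=100))
```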

  14. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    Science.gov (United States)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which represents a major limitation in many ways, starting from limited processing and storage power, accessibility, availability, etc. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first one (VM1) is running on Amazon web services (AWS) and the second one (VM2) is running on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. It provides a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform, because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is always available, accessible from everywhere, scalable, works in a distributed computing environment, creates a real-time multiuser collaboration platform, uses interoperable programming languages and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with multiple concurrent users. The application is a state

  15. New resource for the computation of cartilage biphasic material properties with the interpolant response surface method.

    Science.gov (United States)

    Keenan, Kathryn E; Kourtis, Lampros C; Besier, Thor F; Lindsey, Derek P; Gold, Garry E; Delp, Scott L; Beaupre, Gary S

    2009-08-01

    Cartilage material properties are important for understanding joint function and diseases, but can be challenging to obtain. Three biphasic material properties (aggregate modulus, Poisson's ratio and permeability) can be determined using an analytical or finite element model combined with optimisation to find the material property values that best reproduce an experimental creep curve. The purpose of this study was to develop an easy-to-use resource to determine biphasic cartilage material properties. A Cartilage Interpolant Response Surface was generated from interpolation of finite element simulations of creep indentation tests. Creep indentation tests were performed on five sites across a tibial plateau. A least-squares residual search of the Cartilage Interpolant Response Surface resulted in a best-fit curve for each experimental condition with corresponding material properties. These sites provided a representative range of aggregate modulus (0.48-1.58 MPa), Poisson's ratio (0.00-0.05) and permeability (1.7 × 10^-15 to 5.4 × 10^-15 m^4/(N·s)) values found in human cartilage. The resource is freely available from https://simtk.org/home/va-squish.
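
    The best-fit step can be pictured as a least-squares search over a precomputed response surface. The sketch below is a generic stand-in (the creep model, parameter grid and noise level are assumptions, not the published biphasic surface): simulated curves are tabulated over a parameter grid, and the parameters whose curve is closest to a measured one are returned.

```python
# Generic least-squares search over a precomputed response surface (illustrative
# creep model u(t) = load/E * (1 - exp(-t * E * k)); not the biphasic theory itself).
import numpy as np

t = np.linspace(0, 100, 200)

def creep_curve(E, k, load=1.0):
    return load / E * (1 - np.exp(-t * E * k))

# "Response surface": creep curves precomputed on a grid of candidate properties.
E_grid = np.linspace(0.4, 1.6, 25)          # aggregate-modulus-like stiffness, MPa
k_grid = np.linspace(1e-3, 1e-1, 25)        # lumped permeability-like rate constant
surface = {(E, k): creep_curve(E, k) for E in E_grid for k in k_grid}

measured = creep_curve(1.1, 0.02) + np.random.normal(0, 1e-3, t.size)
best = min(surface, key=lambda p: np.sum((surface[p] - measured) ** 2))
print("best-fit (E, k):", best)
```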

  16. Determining host metabolic limitations on viral replication via integrated modeling and experimental perturbation.

    Directory of Open Access Journals (Sweden)

    Elsa W Birch

    Full Text Available Viral replication relies on host metabolic machinery and precursors to produce large numbers of progeny - often very rapidly. A fundamental example is the infection of Escherichia coli by bacteriophage T7. The resource draw imposed by viral replication represents a significant and complex perturbation to the extensive and interconnected network of host metabolic pathways. To better understand this system, we have integrated a set of structured ordinary differential equations quantifying T7 replication and an E. coli flux balance analysis metabolic model. Further, we present here an integrated simulation algorithm enforcing mutual constraint by the models across the entire duration of phage replication. This method enables quantitative dynamic prediction of virion production given only specification of host nutritional environment, and predictions compare favorably to experimental measurements of phage replication in multiple environments. The level of detail of our computational predictions facilitates exploration of the dynamic changes in host metabolic fluxes that result from viral resource consumption, as well as analysis of the limiting processes dictating maximum viral progeny production. For example, although it is commonly assumed that viral infection dynamics are predominantly limited by the amount of protein synthesis machinery in the host, our results suggest that in many cases metabolic limitation is at least as strict. Taken together, these results emphasize the importance of considering viral infections in the context of host metabolism.

  17. Studying the Earth's Environment from Space: Computer Laboratory Exercised and Instructor Resources

    Science.gov (United States)

    Smith, Elizabeth A.; Alfultis, Michael

    1998-01-01

    Studying the Earth's Environment From Space is a two-year project to develop a suite of CD-ROMs containing Earth System Science curriculum modules for introductory undergraduate science classes. Lecture notes, slides, and computer laboratory exercises, including actual satellite data and software, are being developed in close collaboration with Carla Evans of NASA GSFC Earth Sciences Directorate Scientific and Educational Endeavors (SEE) project. Smith and Alfultis are responsible for the Oceanography and Sea Ice Processes Modules. The GSFC SEE project is responsible for Ozone and Land Vegetation Modules. This document constitutes a report on the first year of activities of Smith and Alfultis' project.

  18. How frog embryos replicate their DNA reliably

    Science.gov (United States)

    Bechhoefer, John; Marshall, Brandon

    2007-03-01

    Frog embryos contain three billion base pairs of DNA. In early embryos (cycles 2-12), DNA replication is extremely rapid, about 20 min., and the entire cell cycle lasts only 25 min., meaning that mitosis (cell division) takes place in about 5 min. In this stripped-down cell cycle, there are no efficient checkpoints to prevent the cell from dividing before its DNA has finished replication - a disastrous scenario. Even worse, the many origins of replication are laid down stochastically and are also initiated stochastically throughout the replication process. Despite the very tight time constraints and despite the randomness introduced by origin stochasticity, replication is extremely reliable, with cell division failing no more than once in 10,000 tries. We discuss a recent model of DNA replication that is drawn from condensed-matter theories of 1d nucleation and growth. Using our model, we discuss different strategies of replication: should one initiate all origins as early as possible, or is it better to hold back and initiate some later on? Using concepts from extreme-value statistics, we derive the distribution of replication times given a particular scenario for the initiation of origins. We show that the experimentally observed initiation strategy for frog embryos meets the reliability constraint and is close to the one that requires the fewest resources of a cell.
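
    A toy Monte Carlo in the spirit of the 1d nucleation-and-growth picture described here (genome size, origin number, fork speed and firing-time law are illustrative choices, not the fitted embryo values) shows how one can estimate the distribution of total replication times under stochastic origin firing:

```python
# Toy 1d nucleation-and-growth model of DNA replication: origins fire at random
# times and positions, forks grow outward at constant speed, and we record the
# time at which the last position on the genome is replicated.
import random

def replication_time(genome_kb=10000, n_origins=200, fork_speed_kb_min=1.0,
                     mean_firing_min=10.0):
    origins = [(random.uniform(0, genome_kb), random.expovariate(1 / mean_firing_min))
               for _ in range(n_origins)]
    # position x is replicated once the earliest (firing time + fork travel time) reaches it
    def finish(x):
        return min(t + abs(x - pos) / fork_speed_kb_min for pos, t in origins)
    return max(finish(x) for x in range(0, genome_kb, 10))   # coarse scan of the genome

times = [replication_time() for _ in range(20)]
print("replication times (min):", round(min(times), 1), "-", round(max(times), 1))
```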

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  20. DNA replication and cancer

    DEFF Research Database (Denmark)

    Boyer, Anne-Sophie; Walter, David; Sørensen, Claus Storgaard

    2016-01-01

    A dividing cell has to duplicate its DNA precisely once during the cell cycle to preserve genome integrity avoiding the accumulation of genetic aberrations that promote diseases such as cancer. A large number of endogenous impacts can challenge DNA replication and cells harbor a battery of pathways...... causing DNA replication stress and genome instability. Further, we describe cellular and systemic responses to these insults with a focus on DNA replication restart pathways. Finally, we discuss the therapeutic potential of exploiting intrinsic replicative stress in cancer cells for targeted therapy....

  1. Attentional Resource Allocation and Cultural Modulation in a Computational Model of Ritualized Behavior

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Sørensen, Jesper

    2016-01-01

    How do cultural and religious rituals influence human perception and cognition, and what separates the highly patterned behaviors of communal ceremonies from perceptually similar precautionary and compulsive behaviors? These are some of the questions that recent theoretical models and empirical... Although ritualized behaviors are perceptually similar across a range of behavioral domains, symbolically mediated experience-dependent information (so-called cultural priors) modulates perception such that communal ceremonies appear coherent and culturally meaningful, while compulsive behaviors remain incoherent and, in some cases, pathological. In this study, we extend a qualitative model of human action perception and understanding to include ritualized behavior. Based on previous experimental and computational studies, the model was simulated using instrumental and ritualized representations of realistic motor...

  2. Attentional Resource Allocation and Cultural Modulation in a Computational Model of Ritualized Behavior

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Sørensen, Jesper

    2015-01-01

    How do cultural and religious rituals influence human perception and cognition, and what separates the highly patterned behaviors of communal ceremonies from perceptually similar precautionary and compulsive behaviors? These are some of the questions that recent theoretical models and empirical... Although ritualized behaviors are perceptually similar across a range of behavioral domains, symbolically mediated experience-dependent information (so-called cultural priors) modulates perception such that communal ceremonies appear coherent and culturally meaningful, while compulsive behaviors remain incoherent and, in some cases, pathological. In this study, we extend a qualitative model of human action perception and understanding to include ritualized behavior. Based on previous experimental and computational studies, the model was simulated using instrumental and ritualized representations of realistic motor...

  3. II - Detector simulation for the LHC and beyond : how to match computing resources and physics requirements

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  4. I - Detector Simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  5. Internet resources for dentistry: computer, Internet, reference, and sites for enhancing personal productivity of the dental professional.

    Science.gov (United States)

    Guest, G F

    2000-08-15

    At the onset of the new millennium the Internet has become the new standard means of distributing information. In the last two to three years there has been an explosion of e-commerce, with hundreds of new web sites being created every minute. For most corporate entities, a web site is as essential as the phone book listing used to be. Twenty years ago, technologists directed how computer-based systems were utilized. Now it is the end users of personal computers who have gained expertise and drive the functionality of software applications. The computer, initially invented for mathematical functions, has transitioned from this role to an integrated communications device that provides the portal to the digital world. The Web needs to be used by healthcare professionals, not only for professional activities, but also for instant access to information and services "just when they need it." This will facilitate the longitudinal use of information as society continues to gain better information access skills. With the demand for current "just in time" information and the standards established by Internet protocols, reference sources of information may be maintained in a dynamic fashion. News services have been available through the Internet for several years, but now reference materials such as online journals and digital textbooks have become available and have the potential to change the traditional publishing industry. The pace of change should make us consider Will Rogers' advice, "It isn't good enough to be moving in the right direction. If you are not moving fast enough, you can still get run over!" The intent of this article is to complement previous articles on Internet Resources published in this journal by presenting web sites that provide information on computer and Internet technologies, reference materials, news, and tools for improving personal productivity. Neither the author nor the Journal endorses any of the

  6. Adaptive TrimTree: Green Data Center Networks through Resource Consolidation, Selective Connectedness and Energy Proportional Computing

    Directory of Open Access Journals (Sweden)

    Saima Zafar

    2016-10-01

    Full Text Available A data center is a facility with a group of networked servers used by an organization for the storage, management and dissemination of its data. The increase in data center energy consumption over the past several years is staggering; therefore, efforts are being initiated to improve the energy efficiency of the various components of data centers. One of the main reasons data centers are so energy inefficient is that most organizations run their data centers at full capacity 24/7. This results in a number of servers and switches being underutilized or even unutilized, yet still powered on and consuming electricity around the clock. In this paper, we present Adaptive TrimTree, a mechanism that employs a combination of resource consolidation, selective connectedness and energy proportional computing for optimizing energy consumption in a Data Center Network (DCN). Adaptive TrimTree adopts a simple traffic-and-topology-based heuristic to find a minimum-power network subset called the ‘active network subset’ that satisfies the existing network traffic conditions, while switching off the residual unused network components. A ‘passive network subset’ is also identified for redundancy; it consists of links and switches that may be required in the future, and this subset is toggled to a sleep state. An energy proportional computing technique is applied to the active network subset to adapt link data rates to the workload, thus maximizing the energy savings. We have compared our proposed mechanism with the fat-tree topology and with ElasticTree, a scheme based on resource consolidation. Our simulation results show that our mechanism saves 50%–70% more energy compared to fat-tree and 19.6% compared to ElasticTree, with minimal impact on packet loss percentage and delay. Additionally, our mechanism copes better with traffic anomalies and surges thanks to the passive network provision.
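
    The ‘active network subset’ idea can be sketched with a hedged heuristic (this is not the published Adaptive TrimTree algorithm; the link list, capacities and covering rule are illustrative): activate links in order of offered load until the aggregate capacity covers the current demand, and put the rest to sleep.

```python
# Hypothetical minimum-power subset heuristic: activate links in order of offered
# load until total capacity covers current demand; the remaining links sleep.
def trim_links(links, demand_gbps):
    """links: dict name -> (capacity_gbps, offered_load_gbps)."""
    active, capacity = [], 0.0
    for name, (cap, load) in sorted(links.items(), key=lambda kv: -kv[1][1]):
        if capacity >= demand_gbps:
            break
        active.append(name)
        capacity += cap
    asleep = [n for n in links if n not in active]
    return active, asleep

links = {"agg1-core1": (10, 6.0), "agg1-core2": (10, 1.0),
         "agg2-core1": (10, 0.5), "agg2-core2": (10, 5.0)}
print(trim_links(links, demand_gbps=12.0))   # two busy links stay up, two sleep
```

    A real system would of course also have to respect topology and connectivity constraints, which is where the traffic-and-topology-based heuristic and the passive subset described above come in.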

  7. Water resources climate change projections using supervised nonlinear and multivariate soft computing techniques

    Science.gov (United States)

    Sarhadi, Ali; Burn, Donald H.; Johnson, Fiona; Mehrotra, Raj; Sharma, Ashish

    2016-05-01

    Accurate projection of global warming on the probabilistic behavior of hydro-climate variables is one of the main challenges in climate change impact assessment studies. Due to the complexity of climate-associated processes, different sources of uncertainty influence the projected behavior of hydro-climate variables in regression-based statistical downscaling procedures. The current study presents a comprehensive methodology to improve the predictive power of the procedure to provide improved projections. It does this by minimizing the uncertainty sources arising from the high-dimensionality of atmospheric predictors, the complex and nonlinear relationships between hydro-climate predictands and atmospheric predictors, as well as the biases that exist in climate model simulations. To address the impact of the high dimensional feature spaces, a supervised nonlinear dimensionality reduction algorithm is presented that is able to capture the nonlinear variability among projectors through extracting a sequence of principal components that have maximal dependency with the target hydro-climate variables. Two soft-computing nonlinear machine-learning methods, Support Vector Regression (SVR) and Relevance Vector Machine (RVM), are engaged to capture the nonlinear relationships between predictand and atmospheric predictors. To correct the spatial and temporal biases over multiple time scales in the GCM predictands, the Multivariate Recursive Nesting Bias Correction (MRNBC) approach is used. The results demonstrate that this combined approach significantly improves the downscaling procedure in terms of precipitation projection.
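
    As a hedged illustration of the regression step alone (synthetic data, with PCA standing in for the supervised nonlinear dimensionality reduction, and none of the MRNBC bias correction), a Support Vector Regression can be fitted on reduced atmospheric predictors with scikit-learn:

```python
# Illustrative downscaling regression: reduce synthetic "atmospheric predictors"
# and fit an SVR to a precipitation-like target (the data and PCA are stand-ins
# for the paper's predictors and supervised nonlinear reduction).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))                       # 30 gridded predictor fields
y = np.maximum(0, X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 500))

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR(C=10.0, epsilon=0.1))
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("RMSE on held-out samples:", round(float(np.sqrt(np.mean((pred - y[400:]) ** 2))), 3))
```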

  8. Replicating animal mitochondrial DNA

    Directory of Open Access Journals (Sweden)

    Emily A. McKinney

    2013-01-01

    Full Text Available The field of mitochondrial DNA (mtDNA) replication has been experiencing incredible progress in recent years, and yet little is certain about the mechanism(s) used by animal cells to replicate this plasmid-like genome. The long-standing strand-displacement model of mammalian mtDNA replication (for which single-stranded DNA intermediates are a hallmark) has been intensively challenged by a new set of data, which suggests that replication proceeds via coupled leading- and lagging-strand synthesis (resembling bacterial genome replication) and/or via long stretches of RNA intermediates laid on the mtDNA lagging strand (the so-called RITOLS). The set of proteins required for mtDNA replication is small and includes the catalytic and accessory subunits of DNA polymerase γ, the mtDNA helicase Twinkle, the mitochondrial single-stranded DNA-binding protein, and the mitochondrial RNA polymerase (which most likely functions as the mtDNA primase). Mutations in the genes coding for the first three proteins are associated with human diseases and premature aging, justifying the research interest in the genetic, biochemical and structural properties of the mtDNA replication machinery. Here we summarize these properties and discuss the current models of mtDNA replication in animal cells.

  9. The EGI-Engage EPOS Competence Center - Interoperating heterogeneous AAI mechanisms and Orchestrating distributed computational resources

    Science.gov (United States)

    Bailo, Daniele; Scardaci, Diego; Spinuso, Alessandro; Sterzel, Mariusz; Schwichtenberg, Horst; Gemuend, Andre

    2016-04-01

    manage the use of the subsurface of the Earth. EPOS started its Implementation Phase in October 2015 and is now actively working to integrate multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) - European-wide organizations and e-infrastructures providing community-specific data and data products - and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into the Integrated Core Services (ICS) system, which will ensure their interoperability and access to these services by the scientific community as well as other users within society. The goal of the EPOS Competence Center (EPOS CC) is to tackle two of the main challenges that the ICS are going to face in the near future, by taking advantage of the technical solutions provided by EGI. To this end, we will present the two pilot use cases the EGI-EPOS CC is developing: 1) The AAI pilot, dealing with the provision of transparent and homogeneous access to the ICS infrastructure for users owning different kinds of credentials (e.g. eduGain, OpenID Connect, X509 certificates etc.); here the focus is on the mechanisms which allow credential delegation. 2) The computational pilot, improving the back-end services of an existing application in the field of Computational Seismology, developed in the context of the EC-funded project VERCE. The application allows the processing and comparison of data resulting from the simulation of seismic wave propagation following a real earthquake and real measurements recorded by seismographs. While the simulation data is produced directly by the users and stored in a Data Management System, the observations need to be pre-staged from institutional data services, which are maintained by the community itself. This use case aims at exploiting the EGI FedCloud e-infrastructure for Data

  10. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    Science.gov (United States)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.

  11. The Replication Recipe: What makes for a convincing replication?

    NARCIS (Netherlands)

    Brandt, M.J.; IJzerman, H.; Dijksterhuis, A.J.; Farach, F.J.; Geller, J.; Giner-Sorolla, R.; Grange, J.A.; Perugini, M.; Spies, J.R.; Veer, A. van 't

    2014-01-01

    Psychological scientists have recently started to reconsider the importance of close replications in building a cumulative knowledge base; however, there is no consensus about what constitutes a convincing close replication study. To facilitate convincing close replication attempts we have developed

  12. Modeling DNA Replication.

    Science.gov (United States)

    Bennett, Joan

    1998-01-01

    Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)

  13. Eukaryotic DNA Replication Fork.

    Science.gov (United States)

    Burgers, Peter M J; Kunkel, Thomas A

    2017-06-20

    This review focuses on the biogenesis and composition of the eukaryotic DNA replication fork, with an emphasis on the enzymes that synthesize DNA and repair discontinuities on the lagging strand of the replication fork. Physical and genetic methodologies aimed at understanding these processes are discussed. The preponderance of evidence supports a model in which DNA polymerase ε (Pol ε) carries out the bulk of leading strand DNA synthesis at an undisturbed replication fork. DNA polymerases α and δ carry out the initiation of Okazaki fragment synthesis and its elongation and maturation, respectively. This review also discusses alternative proposals, including cellular processes during which alternative forks may be utilized, and new biochemical studies with purified proteins that are aimed at reconstituting leading and lagging strand DNA synthesis separately and as an integrated replication fork.

  14. Abiotic self-replication.

    Science.gov (United States)

    Meyer, Adam J; Ellefson, Jared W; Ellington, Andrew D

    2012-12-18

    The key to the origins of life is the replication of information. Linear polymers such as nucleic acids that both carry information and can be replicated are currently what we consider to be the basis of living systems. However, these two properties are not necessarily coupled. The ability to mutate in a discrete or quantized way, without frequent reversion, may be an additional requirement for Darwinian evolution, in which case the notion that Darwinian evolution defines life may be less of a tautology than previously thought. In this Account, we examine a variety of in vitro systems of increasing complexity, from simple chemical replicators up to complex systems based on in vitro transcription and translation. Comparing and contrasting these systems provides an interesting window onto the molecular origins of life. For nucleic acids, the story likely begins with simple chemical replication, perhaps of the form A + B → T, in which T serves as a template for the joining of A and B. Molecular variants capable of faster replication would come to dominate a population, and the development of cycles in which templates could foster one another's replication would have led to increasingly complex replicators and from thence to the initial genomes. The initial genomes may have been propagated by RNA replicases, ribozymes capable of joining oligonucleotides and eventually polymerizing mononucleotide substrates. As ribozymes were added to the genome to fill gaps in the chemistry necessary for replication, the backbone of a putative RNA world would have emerged. It is likely that such replicators would have been plagued by molecular parasites, which would have been passively replicated by the RNA world machinery without contributing to it. These molecular parasites would have been a major driver for the development of compartmentalization/cellularization, as more robust compartments could have outcompeted parasite-ridden compartments. The eventual outsourcing of metabolic

  15. Student use of computer tools designed to scaffold scientific problem-solving with hypermedia resources: A case study

    Science.gov (United States)

    Oliver, Kevin Matthew

    National science standards call for increasing student exposure to inquiry and real-world problem solving. Students can benefit from open-ended learning environments that stress the engagement of real problems and the development of thinking skills and processes. The Internet is an ideal resource for context-bound problems with its seemingly endless supply of resources. Problems may arise, however, since young students are cognitively ill-prepared to manage open-ended learning and may have difficulty processing hypermedia. Computer tools were used in a qualitative case study with 12 eighth graders to determine how such implements might support the process of solving open-ended problems. A preliminary study proposition suggested students would solve open-ended problems more appropriately if they used tools in a manner consistent with higher-order critical and creative thinking. Three research questions sought to identify: how students used tools, the nature of science learning in open-ended environments, and any personal or environmental barriers affecting problem solving. The findings were mixed. The participants did not typically use the tools and resources effectively. They successfully collected basic information, but infrequently organized, evaluated, generated, and justified their ideas. While the students understood how to use most tools procedurally, they lacked strategic understanding for why tool use was necessary. Students scored average to high on assessments of general content understanding, but developed artifacts suggesting their understanding of specific micro problems was naive and rife with misconceptions. Process understanding was also inconsistent, with some students describing basic problem solving processes, but most students unable to describe how tools could support open-ended inquiry. Barriers to effective problem solving were identified in the study. Personal barriers included naive epistemologies, while environmental barriers included a

  16. Adenovirus DNA Replication

    OpenAIRE

    Hoeben, Rob C.; Uil, Taco G.

    2013-01-01

    Adenoviruses have attracted much attention as probes to study biological processes such as DNA replication, transcription, splicing, and cellular transformation. More recently these viruses have been used as gene-transfer vectors and oncolytic agents. On the other hand, adenoviruses are notorious pathogens in people with compromised immune functions. This article will briefly summarize the basic replication strategy of adenoviruses and the key proteins involved and will deal with the new deve...

  17. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    Science.gov (United States)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.

  18. Communication, Control, and Computer Access for Disabled and Elderly Individuals. ResourceBook 4: Update to Books 1, 2, and 3.

    Science.gov (United States)

    Borden, Peter A., Ed.; Vanderheiden, Gregg C., Ed.

    This update to the three-volume first edition of the "Rehab/Education ResourceBook Series" describes special software and products pertaining to communication, control, and computer access, designed specifically for the needs of disabled and elderly people. The 22 chapters cover: speech aids; pointing and typing aids; training and communication…

  19. Computational Resources for GTL

    Energy Technology Data Exchange (ETDEWEB)

    Herbert M. Sauro

    2007-12-18

    This final report summarizes the work conducted under our three year DOE GTL grant ($459,402). The work involved a number of areas, including standardization, the Systems Biology Workbench, Visual Editors, collaboration with other groups and the development of new theory and algorithms. Our work has played a key part in helping to further develop SBML, the de facto standard for System Biology Model exchange and SBGN, the developing standard for visual representation for biochemical models. Our work has also made significant contributions to developing SBW, the systems biology workbench which is now very widely used in the community (roughly 30 downloads per day for the last three years, which equates to about 30,000 downloads in total). We have also used the DOE funding to collaborate extensively with nine different groups around the world. Finally we have developed new methods to reduce model size which are now used by all the major simulation packages, including Matlab. All in all, we consider the last three years to be highly productive and influential in the systems biology community. The project resulted in 16 peer review publications.

  20. Minichromosome replication in vitro: inhibition of re-replication by replicatively assembled nucleosomes.

    Science.gov (United States)

    Krude, T; Knippers, R

    1994-08-19

    Single-stranded circular DNA, containing the SV40 origin sequence, was used as a template for complementary DNA strand synthesis in cytosolic extracts from HeLa cells. In the presence of the replication-dependent chromatin assembly factor CAF-1, defined numbers of nucleosomes were assembled during complementary DNA strand synthesis. These minichromosomes were then induced to semiconservatively replicate by the addition of the SV40 initiator protein T antigen (re-replication). The results indicate that re-replication of minichromosomes appears to be inhibited by two independent mechanisms. One acts at the initiation of minichromosome re-replication, and the other affects replicative chain elongation. To directly demonstrate the inhibitory effect of replicatively assembled nucleosomes, two types of minichromosomes were prepared: (i) post-replicative minichromosomes were assembled in a reaction coupled to replication as above; (ii) pre-replicative minichromosomes were assembled independently of replication on double-stranded DNA. Both types of minichromosomes were used as templates for DNA replication under identical conditions. Replicative fork movement was found to be impeded only on post-replicative minichromosome templates. In contrast, pre-replicative minichromosomes allowed one unconstrained replication cycle, but re-replication was inhibited due to a block in fork movement. Thus, replicatively assembled chromatin may have a profound influence on the re-replication of DNA.

  1. Investigating variation in replicability: A "Many Labs" replication project

    NARCIS (Netherlands)

    Klein, R.A.; Ratliff, K.A.; Vianello, M.; Adams, R.B.; Bahnik, S.; Bernstein, M.J.; Bocian, K.; Brandt, M.J.; Brooks, B.; Brumbaugh, C.C.; Cemalcilar, Z.; Chandler, J.; Cheong, W.; Davis, W.E.; Devos, T.; Eisner, M.; Frankowska, N.; Furrow, D.; Galliani, E.M.; Hasselman, F.W.; Hicks, J.A.; Hovermale, J.F.; Hunt, S.J.; Huntsinger, J.R.; IJzerman, H.; John, M.S.; Joy-Gaba, J.A.; Kappes, H.B.; Krueger, L.E.; Kurtz, J.; Levitan, C.A.; Mallett, R.K.; Morris, W.L.; Nelson, A.J.; Nier, J.A.; Packard, G.; Pilati, R.; Rutchick, A.M.; Schmidt, K.; Skorinko, J.L.M.; Smith, R.; Steiner, T.G.; Storbeck, J.; Van Swol, L.M.; Thompson, D.; Veer, A.E. van 't; Vaughn, L.A.; Vranka, M.; Wichman, A.L.; Woodzicka, J.A.; Nosek, B.A.

    2014-01-01

    Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, 10 effects replicated consistently.

  2. Hepatitis B virus replication

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Hepadnaviruses, including human hepatitis B virus (HBV), replicate through reverse transcription of an RNA intermediate, the pregenomic RNA (pgRNA). Despite this kinship to retroviruses, there are fundamental differences beyond the fact that hepadnavirions contain DNA instead of RNA. Most peculiar is the initiation of reverse transcription: it occurs by protein-priming, is strictly committed to using an RNA hairpin on the pgRNA, ε, as template, and depends on cellular chaperones; moreover, proper replication can apparently occur only in the specialized environment of intact nucleocapsids. This complexity has hampered an in-depth mechanistic understanding. The recent successful reconstitution in the test tube of active replication initiation complexes from purified components, for duck HBV (DHBV), now allows for the analysis of the biochemistry of hepadnaviral replication at the molecular level. Here we review the current state of knowledge at all steps of the hepadnaviral genome replication cycle, with emphasis on new insights that turned up by the use of such cell-free systems. At this time, they can, unfortunately, not be complemented by three-dimensional structural information on the involved components. However, at least for the ε RNA element such information is emerging, raising expectations that combining biophysics with biochemistry and genetics will soon provide a powerful integrated approach for solving the many outstanding questions. The ultimate, though most challenging goal, will be to visualize the hepadnaviral reverse transcriptase in the act of synthesizing DNA, which will also have strong implications for drug development.

  3. Solving Two Deadlock Cycles through Neighbor Replication on Grid Deadlock Detection Model

    Directory of Open Access Journals (Sweden)

    Ahmed N. Abdalla

    2012-01-01

    Full Text Available A data grid is composed of hundreds of geographically distributed computers and storage resources, usually located at different sites, and enables users to share data and other resources. Problem statement: Data replication is one of the mechanisms for managing data grid architecture that has received particular attention, since it can provide efficient access to data, fault tolerance and reduced access latency, and can also enhance the performance of the system. However, deadlock may occur during transactions, which reduces throughput by tying up the available resources, so it has become an important resource management problem in distributed systems. Approach: The Neighbor Replication on Grid Deadlock Detection (NRGDD) transaction model has been developed to handle two-deadlock-cycle problems on the grid. In this method, transactions communicate with each other by passing probe messages. A victim message is used to detect the deadlock: the transaction on which the largest number of other transactions is waiting is identified as the cause of the deadlock, and this transaction must be aborted to solve the problem. Results: The NRGDD transaction model is able to detect and resolve more than one deadlock cycle. Conclusion: NRGDD resolves the deadlock problem by sending the minimum number of probe messages to detect the deadlock, and it can resolve the deadlock to ensure that transactions complete smoothly.
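
    A generic probe-based (edge-chasing) sketch of the detection idea described above, not the actual NRGDD protocol; the wait-for structure, message passing and victim policy are simplified assumptions (each transaction is assumed to wait on at most one other):

```python
# Probe-based deadlock detection over a simplified wait-for graph, with a
# victim chosen as the transaction blocking the most others. Illustrative only;
# the real NRGDD message format and victim policy are not reproduced here.
from collections import defaultdict

def detect_cycle(wait_for, start):
    """Send a 'probe' along wait-for edges; report the cycle it runs into, if any."""
    path, node = [start], wait_for.get(start)
    while node is not None:
        if node == start:
            return path                      # probe returned to its origin: deadlock
        if node in path:
            return path[path.index(node):]   # cycle not involving the origin
        path.append(node)
        node = wait_for.get(node)
    return None

def choose_victim(wait_for, cycle):
    blocked_count = defaultdict(int)
    for waiter, holder in wait_for.items():
        blocked_count[holder] += 1
    return max(cycle, key=lambda t: blocked_count[t])   # abort the worst blocker

if __name__ == "__main__":
    wait_for = {"T1": "T2", "T2": "T3", "T3": "T1", "T4": "T1"}  # T_i waits on T_j
    cycle = detect_cycle(wait_for, "T1")
    print("cycle:", cycle, "victim to abort:", choose_victim(wait_for, cycle))
```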

  4. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  5. Psychology, replication & beyond.

    Science.gov (United States)

    Laws, Keith R

    2016-06-01

    Modern psychology is apparently in crisis and the prevailing view is that this partly reflects an inability to replicate past findings. If a crisis does exist, then it is some kind of 'chronic' crisis, as psychologists have been censuring themselves over replicability for decades. While the debate in psychology is not new, the lack of progress across the decades is disappointing. Recently though, we have seen a veritable surfeit of debate alongside multiple orchestrated and well-publicised replication initiatives. The spotlight is being shone on certain areas and although not everyone agrees on how we should interpret the outcomes, the debate is happening and impassioned. The issue of reproducibility occupies a central place in our Whig history of psychology.

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  7. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  8. DNA replication origins in archaea

    OpenAIRE

    Zhenfang eWu; Jingfang eLiu; Haibo eYang; Hua eXiang

    2014-01-01

    DNA replication initiation, which starts at specific chromosomal sites (known as replication origins), is the key regulatory stage of chromosome replication. Archaea, the third domain of life, use a single or multiple origin(s) to initiate replication of their circular chromosomes. The basic structure of replication origins is conserved among archaea, typically including an AT-rich unwinding region flanked by several conserved repeats (origin recognition box, ORB) that are located adjacent to ...

  9. Replication studies in longevity

    DEFF Research Database (Denmark)

    Varcasia, O; Garasto, S; Rizza, T

    2001-01-01

    In Danes we replicated the 3'APOB-VNTR gene/longevity association study previously carried out in Italians, by which the Small alleles (less than 35 repeats) had been identified as frailty alleles for longevity. In Danes, neither genotype nor allele frequencies differed between centenarians and 20...

  10. Replication-Fork Dynamics

    NARCIS (Netherlands)

    Duderstadt, Karl E.; Reyes-Lamothe, Rodrigo; van Oijen, Antoine M.; Sherratt, David J.

    2014-01-01

    The proliferation of all organisms depends on the coordination of enzymatic events within large multiprotein replisomes that duplicate chromosomes. Whereas the structure and function of many core replisome components have been clarified, the timing and order of molecular events during replication re

  11. Coronavirus Attachment and Replication

    Science.gov (United States)

    1988-03-28

    Receptor assays on intestinal brush border membranes (BBMs) from normal host species were developed for canine (CCV), feline (FIPV), porcine (TGEV) and human (HCV) coronaviruses, including the transmissible gastroenteritis virus receptor on pig BBMs and the feline infectious peritonitis virus receptor on cat BBMs. (Only these fragments of the report's abstract are recoverable from the source record.)

  12. Network Resources Optimization Scheduling Model in Cloud Computing%云计算中网络资源配比优化调度模型仿真

    Institute of Scientific and Technical Information of China (English)

    孟湘来; 马小雨

    2015-01-01

    Cloud computing server environments differ, and once network resources become congested, different regions use different forms of network resource scheduling; a single scheduling approach can hardly meet the complexity requirements of cloud computing networks. This paper proposes a cloud computing network planning model based on a supply-and-demand equilibrium mechanism. A quadratic weighted-average method is used to construct a time-constrained network planning model for cloud computing, in which the model continually adjusts the amount of network resources assigned to each link, and AGV control is used for the congestion evaluation problem. The determination of network equipment demand and the supply-demand balancing mechanism are analyzed, and three factors - number of nodes, cost and congestion degree - define the evaluation index system for cloud computing network congestion intensity, from which the time limits and urgency of resource delivery are determined. Experimental results show that congestion relief efficiency, cost and utility under this model are superior to those of the traditional model, giving it high application value.

  13. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television as many long feared, but the computer-the ubiquitous portal of work and personal lives. At this point, the computer is almost so common we don't notice it in our view. It's difficult to envision that not that long ago it was a gigantic, room-sized structure only to be accessed by a few inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little-more noted than a toaster. These dramati

  14. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    Energy Technology Data Exchange (ETDEWEB)

    Clouse, C. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Edwards, M. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCoy, M. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, M. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Verdon, C. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition the ASC effort provides high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help insure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  15. Performance analysis of data intensive cloud systems based on data management and replication: a survey

    Energy Technology Data Exchange (ETDEWEB)

    Malik, Saif Ur Rehman; Khan, Samee U.; Ewen, Sam J.; Tziritas, Nikos; Kolodziej, Joanna; Zomaya, Albert Y.; Madani, Sajjad A.; Min-Allah, Nasro; Wang, Lizhe; Xu, Cheng-Zhong; Malluhi, Qutaibah Marwan; Pecero, Johnatan E.; Balaji, Pavan; Vishnu, Abhinav; Ranjan, Rajiv; Zeadally, Sherali; Li, Hongxiang

    2015-03-14

    As we delve deeper into the 'Digital Age', we witness an explosive growth in the volume, velocity, and variety of the data available on the Internet. For example, in 2012 about 2.5 quintillion bytes of data were created on a daily basis, originating from a myriad of sources and applications including mobile devices, sensors, individual archives, social networks, the Internet of Things, enterprises, cameras, software logs, etc. Such 'data explosions' have led to one of the most challenging research issues of the current Information and Communication Technology era: how to optimally manage (e.g., store, replicate, filter, and the like) such large amounts of data and identify new ways to analyze large amounts of data for unlocking information. It is clear that such large data streams cannot be managed by setting up on-premises enterprise database systems, as this leads to a large up-front cost in buying and administering the hardware and software systems. Therefore, next generation data management systems must be deployed on the cloud. The cloud computing paradigm provides scalable and elastic resources, such as data and services accessible over the Internet. Every Cloud Service Provider must assure that data is efficiently processed and distributed in a way that does not compromise end-users' Quality of Service (QoS) in terms of data availability, data search delay, data analysis delay, and the like. In the aforementioned perspective, data replication is used in the cloud for improving the performance (e.g., read and write delay) of applications that access data. Through replication a data intensive application or system can achieve high availability, better fault tolerance, and data recovery. In this paper, we survey data management and replication approaches (from 2007 to 2011) that are developed by both industrial and research communities. The focus of the survey is to discuss and characterize the existing approaches of data replication and management that tackle the

  16. Information Resources Construction of Digital Library Based on Cloud Computing%基于云计算的数字图书馆信息资源建设

    Institute of Scientific and Technical Information of China (English)

    欧裕美

    2014-01-01

    This paper introduces the current status of information resource construction in digital libraries, describes the massive information storage technology of cloud computing, discusses the changes that cloud computing brings to the construction of digital library information resources, and examines the problems facing cloud-computing-based information resource construction in digital libraries.

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  19. ReplicationDomain: a visualization tool and comparative database for genome-wide replication timing data

    Directory of Open Access Journals (Sweden)

    Yokochi Tomoki

    2008-12-01

    Full Text Available Abstract Background Eukaryotic DNA replication is regulated at the level of large chromosomal domains (0.5–5 megabases in mammals) within which replicons are activated relatively synchronously. These domains replicate in a specific temporal order during S-phase and our genome-wide analyses of replication timing have demonstrated that this temporal order of domain replication is a stable property of specific cell types. Results We have developed ReplicationDomain http://www.replicationdomain.org as a web-based database for analysis of genome-wide replication timing maps (replication profiles) from various cell lines and species. This database also provides comparative information of transcriptional expression and is configured to display any genome-wide property (for instance, ChIP-Chip or ChIP-Seq data) via an interactive web interface. Our published microarray data sets are publicly available. Users may graphically display these data sets for a selected genomic region and download the data displayed as text files, or alternatively, download complete genome-wide data sets. Furthermore, we have implemented a user registration system that allows registered users to upload their own data sets. Upon uploading, registered users may choose to: (1) view their data sets privately without sharing; (2) share with other registered users; or (3) make their published or "in press" data sets publicly available, which can fulfill journal and funding agencies' requirements for data sharing. Conclusion ReplicationDomain is a novel and powerful tool to facilitate the comparative visualization of replication timing in various cell types as well as other genome-wide chromatin features and is considerably faster and more convenient than existing browsers when viewing multi-megabase segments of chromosomes. Furthermore, the data upload function with the option of private viewing or sharing of data sets between registered users should be a valuable resource for the

  20. 云计算异构资源整合的分析与应用%Analysis and Application of Cloud Computing Integration of Heterogeneous Resources

    Institute of Scientific and Technical Information of China (English)

    吴金龙

    2012-01-01

    Cloud computing is an Internet-based computing model built on broad public participation. In view of the hardware and software status of the Information Disaster Recovery Shanghai Center of the State Grid Corporation, and the practical problems faced in the disaster recovery business, this paper proposes an application scheme that uses architecture as the technical means and standards as the management principle to comprehensively address the integration of heterogeneous resources. After introducing the construction of a heterogeneous resource integration layer for cloud computing, the paper describes the key issues of the resource model, the resource access specification and the interface to the operation and maintenance management system, and presents the design and construction of a cloud computing resource management platform that now comprehensively manages the minicomputers, servers and storage devices currently in use, yielding significant economic and management benefits. Practice also shows that optimizing hardware and software resources is only one part of information integration: hardware and software integration alone has limited effect, and the greatest benefit is obtained only when it is organically combined with application software integration and dynamic deployment of resources, so that they support each other.

  1. Public Library Training Program for Older Adults Addresses Their Computer and Health Literacy Needs. A Review of: Xie, B. (2011). Improving older adults’ e-health literacy through computer training using NIH online resources. Library & Information Science Research, 34, 63-71. doi:10.1016/j.lisr.2011.07.006

    Directory of Open Access Journals (Sweden)

    Cari Merkley

    2012-12-01

    – Participants showed significant decreases in their levels of computer anxiety, and significant increases in their interest in computers, at the end of the program (p < 0.01). Computer and web knowledge also increased among those completing the knowledge tests. Most participants (78%) indicated that something they had learned in the program impacted their health decision making, and just over half of respondents (55%) changed how they took medication as a result of the program. Participants were also very satisfied with the program’s delivery and format, with 97% indicating that they had learned a lot from the course. Most (68%) participants said that they wished the class had been longer, and there was full support for similar programming to be offered at public libraries. Participants also reported that they found the NIHSeniorHealth website more useful, but not significantly more usable, than MedlinePlus. Conclusion – The intervention as designed successfully addressed issues of computer and health literacy with older adult participants. By using existing resources, such as public library computer facilities and curricula developed by the National Institutes of Health, the intervention also provides a model that could be easily replicated in other locations without the need for significant financial resources.

  2. Resource Scheduling Strategy of SLA and QoS Under Cloud Computing%云环境下顾及SLA及QoS的资源调度策略

    Institute of Scientific and Technical Information of China (English)

    朱倩

    2016-01-01

    Cloud computing is a currently popular, service-based form of distributed computing that does not require users to pay attention to the underlying system implementation. Effective resource allocation can reduce the excessive waste of resources and increase user satisfaction by lowering cost, thereby improving system performance. Using virtualization technology on a cloud computing platform, this paper achieves accurate prediction of system performance requirements and, from a service point of view, discusses a Virtual Machine (VM)-based resource scheduling strategy that takes Service Level Agreements (SLA) and Quality of Service (QoS) into account in a cloud environment. Simulation results show that the scheduling strategy is an effective means of improving the utilization of system resources and has practical value.

  3. 人力资源规划计算机辅助预测模型的设计%Computer Aided Prediction Model Design of Human Resources Planning

    Institute of Scientific and Technical Information of China (English)

    俞明; 余浩洋

    2013-01-01

    Starting from the current situation of human resource planning and combining the content and steps of human resource planning, this paper designs a computer-aided prediction model and explains its basic structure and mathematical model. An application example of human resource planning is designed, in which total and classified planning are carried out, the planning results are analyzed, and solution strategies are proposed.

  4. 云计算环境下的DPSO资源负载均衡算法%DPSO resource load balancing in cloud computing

    Institute of Scientific and Technical Information of China (English)

    冯小靖; 潘郁

    2013-01-01

    Load balancing is one of the hot issues in cloud computing research. A discrete particle swarm optimization (DPSO) algorithm is used to study load balancing in a cloud computing environment. Given that resource demand in cloud computing changes dynamically and that the requirements placed on resource node servers are low, each resource management node is treated as a node of the network topology, an appropriate resource-task allocation model is established, and the DPSO algorithm is applied to achieve resource load balancing. Verification shows that the algorithm improves resource utilization and the load balance of cloud computing resources.
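
    A toy sketch of a discrete PSO mapping tasks to resource nodes so that the load of the busiest node (the makespan) is minimized; the encoding, parameters and fitness function are illustrative assumptions rather than the algorithm from the paper:

```python
# Discrete-PSO-style search over task-to-node assignments. Each particle is an
# assignment vector; the "velocity" is approximated by probabilistically copying
# positions from the personal or global best. Illustrative parameters only.
import random

TASKS = [4, 7, 2, 9, 5, 3, 6]      # task costs (placeholder workload)
NODES = 3                          # number of resource nodes

def fitness(assign):               # lower is better: load of the busiest node
    loads = [0.0] * NODES
    for cost, node in zip(TASKS, assign):
        loads[node] += cost
    return max(loads)

def move(assign, pbest, gbest, w=0.4, c1=0.3, c2=0.3):
    """Per task: explore randomly, or copy from the personal/global best."""
    new = []
    for i in range(len(assign)):
        r = random.random()
        if r < w:
            new.append(random.randrange(NODES))   # exploration
        elif r < w + c1:
            new.append(pbest[i])                  # attraction to personal best
        else:
            new.append(gbest[i])                  # attraction to global best
    return new

random.seed(1)
swarm = [[random.randrange(NODES) for _ in TASKS] for _ in range(20)]
pbest = list(swarm)
gbest = min(swarm, key=fitness)
for _ in range(100):
    for k, particle in enumerate(swarm):
        swarm[k] = move(particle, pbest[k], gbest)
        if fitness(swarm[k]) < fitness(pbest[k]):
            pbest[k] = swarm[k]
    gbest = min(pbest, key=fitness)
print("best makespan:", fitness(gbest), "assignment:", gbest)
```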

  5. Cloud Computing Technology Applied in the Human Resource Management System%云计算技术在人力资源管理系统中的应用

    Institute of Scientific and Technical Information of China (English)

    王燕

    2013-01-01

    With the development of technology and the arrival of the knowledge economy era, enterprise managers are gradually realizing that information-based human resource management will become an inevitable trend. As a new generation of resource sharing and utilization, cloud computing technology features self-service, measurable on-demand services. Introducing cloud computing technology into the human resource management system can have a significant impact on talent recruitment, performance management and compensation management, making human resource management more process-oriented, standardized and transparent.

  6. Elastic resource adjustment method for cloud computing data center%面向云计算数据中心的弹性资源调整方法

    Institute of Scientific and Technical Information of China (English)

    申京; 吴晨光; 郝洋; 殷波; 蔺艳斐

    2015-01-01

    To formulate resource purchase plans for services with a variety of quality-of-service requirements, this paper proposes an application-performance-oriented elastic resource adjustment method for cloud computing, in which the platform-as-a-service provider and the infrastructure-as-a-service provider sign a resource allocation agreement based on a Service-Level Agreement. Using an automatic scaling algorithm, the method adjusts virtual machine resources at the vertical level in response to fluctuations in load demand, so that the allocated resources are adjusted dynamically to meet the service-level needs of the application and the utilization of cloud computing resources is optimized. Simulation results demonstrate the effectiveness of the proposed method.
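
    A simplified sketch of the vertical auto-scaling idea: periodically compare observed utilisation against SLA-derived thresholds and grow or shrink a VM's CPU share. The thresholds, step sizes and load trace are assumptions, not the paper's algorithm:

```python
# Threshold-based vertical scaler for a single VM's CPU allocation.
# All numbers below are illustrative placeholders.

def autoscale(cpu_share, utilisation, low=0.3, high=0.8, step=0.5,
              min_share=0.5, max_share=8.0):
    """cpu_share: allocated vCPUs; utilisation: fraction of the share in use."""
    if utilisation > high:                       # SLA at risk: scale up
        return min(cpu_share + step, max_share)
    if utilisation < low:                        # over-provisioned: scale down
        return max(cpu_share - step, min_share)
    return cpu_share                             # within the target band

share = 2.0
for util in [0.45, 0.82, 0.91, 0.60, 0.25, 0.22]:   # synthetic load trace
    share = autoscale(share, util)
    print(f"util={util:.2f} -> allocated vCPUs={share}")
```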

  7. Final report for %22High performance computing for advanced national electric power grid modeling and integration of solar generation resources%22, LDRD Project No. 149016.

    Energy Technology Data Exchange (ETDEWEB)

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to these three tasks: (1) High-fidelity, large-scale modeling of power system dynamics; (2) Statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) Development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

  8. Content replication and placement in mobile networks

    CERN Document Server

    La, Chi-Anh; Casetti, Claudio; Chiasserini, Carla-Fabiana; Fiore, Marco

    2011-01-01

    Performance and reliability of content access in mobile networks is conditioned by the number and location of content replicas deployed at the network nodes. Location theory has been the traditional, centralized approach to study content replication: computing the number and placement of replicas in a static network can be cast as a facility location problem. The endeavor of this work is to design a practical solution to the above joint optimization problem that is suitable for mobile wireless environments. We thus seek a replication algorithm that is lightweight, distributed, and reactive to network dynamics. We devise a solution that lets nodes (i) share the burden of storing and providing content, so as to achieve load balancing, and (ii) autonomously decide whether to replicate or drop the information, so as to adapt the content availability to dynamic demands and time-varying network topologies. We evaluate our mechanism through simulation, by exploring a wide range of settings, including different node ...
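
    A greedy sketch of the centralized facility-location baseline mentioned above (choosing k replica locations that minimize total access cost on a static topology); the distributed, mobility-adaptive mechanism that is the paper's contribution is not reproduced, and the distance matrix is illustrative:

```python
# Greedy k-median style replica placement on a static topology (toy distances).
D = [[0, 1, 2, 3, 2],
     [1, 0, 1, 2, 2],
     [2, 1, 0, 1, 3],
     [3, 2, 1, 0, 2],
     [2, 2, 3, 2, 0]]          # symmetric hop counts between 5 nodes (placeholder)

def access_cost(replicas):
    """Total cost when every node fetches content from its closest replica."""
    return sum(min(D[v][r] for r in replicas) for v in range(len(D)))

def greedy_placement(k):
    replicas = []
    for _ in range(k):
        candidates = [c for c in range(len(D)) if c not in replicas]
        replicas.append(min(candidates, key=lambda c: access_cost(replicas + [c])))
    return replicas

placement = greedy_placement(2)
print("replica nodes:", placement, "total access cost:", access_cost(placement))
```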

  9. Replicating Cardiovascular Condition-Birth Month Associations

    Science.gov (United States)

    Li, Li; Boland, Mary Regina; Miotto, Riccardo; Tatonetti, Nicholas P.; Dudley, Joel T.

    2016-01-01

    Independent replication is vital for study findings drawn from Electronic Health Records (EHR). This replication study evaluates the relationship between seasonal effects at birth and lifetime cardiovascular condition risk. We performed a Season-wide Association Study on 1,169,599 patients from Mount Sinai Hospital (MSH) to compute phenome-wide associations between birth month and CVD. We then evaluated if seasonal patterns found at MSH matched those reported at Columbia University Medical Center. Coronary arteriosclerosis, essential hypertension, angina, and pre-infarction syndrome passed phenome-wide significance and their seasonal patterns matched those previously reported. Atrial fibrillation, cardiomyopathy, and chronic myocardial ischemia had consistent patterns but were not phenome-wide significant. We confirm that CVD risk peaks for those born in the late winter/early spring among the evaluated patient populations. The replication findings bolster evidence for a seasonal birth month effect in CVD. Further study is required to identify the environmental and developmental mechanisms. PMID:27624541
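
    A hedged sketch of the core test behind a season-wide association study: comparing the birth-month distribution of patients carrying a given condition code against the cohort's baseline birth-month distribution with a chi-square test. The counts are synthetic and the phenome-wide correction procedure is omitted:

```python
# Chi-square comparison of case birth months against the cohort baseline
# (synthetic counts; a hypothetical example, not the MSH/CUMC data).
from scipy.stats import chisquare

baseline = [80, 76, 85, 82, 84, 83, 88, 87, 82, 84, 79, 90]  # all patients, Jan..Dec
cases    = [14, 15, 19, 16, 11,  9,  8,  7,  9, 10, 12, 13]  # patients with one CVD code

total_cases = sum(cases)
expected = [total_cases * b / sum(baseline) for b in baseline]
stat, p = chisquare(cases, f_exp=expected)
print(f"chi2={stat:.2f}, p={p:.4f}  (compare against a phenome-wide threshold)")
```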

  10. Reversible Switching of Cooperating Replicators

    Science.gov (United States)

    Urtel, Georg C.; Rind, Thomas; Braun, Dieter

    2017-02-01

    How can molecules with short lifetimes preserve their information over millions of years? For evolution to occur, information-carrying molecules have to replicate before they degrade. Our experiments reveal a robust, reversible cooperation mechanism in oligonucleotide replication. Two inherently slow replicating hairpin molecules can transfer their information to fast crossbreed replicators that outgrow the hairpins. The reverse is also possible. When one replication initiation site is missing, single hairpins reemerge from the crossbreed. With this mechanism, interacting replicators can switch between the hairpin and crossbreed mode, revealing a flexible adaptation to different boundary conditions.

  11. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  15. Self-replication with magnetic dipolar colloids.

    Science.gov (United States)

    Dempster, Joshua M; Zhang, Rui; Olvera de la Cruz, Monica

    2015-10-01

    Colloidal self-replication represents an exciting research frontier in soft matter physics. Currently, all reported self-replication schemes involve coating colloidal particles with stimuli-responsive molecules to allow switchable interactions. In this paper, we introduce a scheme using ferromagnetic dipolar colloids and preprogrammed external magnetic fields to create an autonomous self-replication system. Interparticle dipole-dipole forces and periodically varying weak-strong magnetic fields cooperate to drive colloid monomers from the solute onto templates, bind them into replicas, and dissolve template complexes. We present three general design principles for autonomous linear replicators, derived from a focused study of a minimalist sphere-dimer magnetic system in which single binding sites allow formation of dimeric templates. We show via statistical models and computer simulations that our system exhibits nonlinear growth of templates and produces nearly exponential growth (low error rate) upon adding an optimized competing electrostatic potential. We devise experimental strategies for constructing the required magnetic colloids based on documented laboratory techniques. We also present qualitative ideas about building more complex self-replicating structures utilizing magnetic colloids.

  16. Extremal dynamics in random replicator ecosystems

    Energy Technology Data Exchange (ETDEWEB)

    Kärenlampi, Petri P., E-mail: petri.karenlampi@uef.fi

    2015-10-02

    The seminal numerical experiment by Bak and Sneppen (BS) is repeated, along with computations with replicator models, including a greater number of features. Both types of models do self-organize, and do obey power-law scaling for the size distribution of activity cycles. However, species extinction within the replicator models interferes with the BS self-organized critical (SOC) activity. Speciation–extinction dynamics ruins any stationary state which might contain a steady size distribution of activity cycles. The BS-type activity appears as a dissimilar phenomenon in comparison to speciation–extinction dynamics in the replicator system. No criticality is found from the speciation–extinction dynamics. Neither are speciations and extinctions in real biological macroevolution known to contain any diverging distributions, or self-organization towards any critical state. Consequently, biological macroevolution probably is not a self-organized critical phenomenon. - Highlights: • Extremal Dynamics organizes random replicator ecosystems to two phases in fitness space. • Replicator systems show power-law scaling of activity. • Species extinction interferes with Bak–Sneppen type mutation activity. • Speciation–extinction dynamics does not show any critical phase transition. • Biological macroevolution probably is not a self-organized critical phenomenon.

  17. The Influence Of Quality Services And The Human Resources Development To User Satisfaction For Accounting Computer Study At Local Government Officials Depok West Java

    Directory of Open Access Journals (Sweden)

    Asyari

    2015-08-01

    Full Text Available The benefit felt directly by customers using a computer accounting program becomes an expectation that users hold of any product generated by an accounting information system. An accounting system provides convenience in processing accounting data into financial statements as output, which investors and the public can read easily thanks to the use of accounting software. This study seeks to clarify the influence of service quality and human resource development on user satisfaction with the accounting computer study among local government officials in Depok, West Java. The results show that service quality affects user satisfaction, and that employee development has a significant effect on user satisfaction.

  18. Chromatin replication and epigenome maintenance

    DEFF Research Database (Denmark)

    Alabert, Constance; Groth, Anja

    2012-01-01

    initiates, whereas the replication process itself disrupts chromatin and challenges established patterns of genome regulation. Specialized replication-coupled mechanisms assemble new DNA into chromatin, but epigenome maintenance is a continuous process taking place throughout the cell cycle. If DNA...

  20. Initiation of adenovirus DNA replication.

    OpenAIRE

    Reiter, T; Fütterer, J; Weingärtner, B; Winnacker, E L

    1980-01-01

    In an attempt to study the mechanism of initiation of adenovirus DNA replication, an assay was developed to investigate the pattern of DNA synthesis in early replicative intermediates of adenovirus DNA. By using wild-type virus-infected cells, it was possible to place the origin of adenovirus type 2 DNA replication within the terminal 350 to 500 base pairs from either of the two molecular termini. In addition, a variety of parameters characteristic of adenovirus DNA replication were compared ...

  1. Chromatin replication and epigenome maintenance

    DEFF Research Database (Denmark)

    Alabert, Constance; Groth, Anja

    2012-01-01

    Stability and function of eukaryotic genomes are closely linked to chromatin structure and organization. During cell division the entire genome must be accurately replicated and the chromatin landscape reproduced on new DNA. Chromatin and nuclear structure influence where and when DNA replication...... initiates, whereas the replication process itself disrupts chromatin and challenges established patterns of genome regulation. Specialized replication-coupled mechanisms assemble new DNA into chromatin, but epigenome maintenance is a continuous process taking place throughout the cell cycle. If DNA...

  2. Multi-source information resources management in cloud computing environment

    Institute of Scientific and Technical Information of China (English)

    徐达宇; 杨善林; 罗贺

    2012-01-01

    To achieve effective management of multi-source information resources in a dynamic cloud computing environment, and to ensure efficient system operation, high-quality resource sharing and real-time service provision by the cloud computing system, the key problems and challenges were identified on the basis of a survey of research results on multi-source information resource cataloguing formats and description languages, discovery and matching mechanisms, dynamic organization and allocation methods, and real-time monitoring. Research prospects for multi-source information resource management in cloud computing were given, a multi-source information management framework for cloud computing was constructed, and its application in manufacturing was discussed.

  3. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  5. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  6. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office (Figure 6: Transfers from all sites in the last 90 days) For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, and we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office (Figure 2: Number of events per month for 2012) Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in the data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  8. COMPUTING

    CERN Document Server

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  9. Replication Research and Special Education

    Science.gov (United States)

    Travers, Jason C.; Cook, Bryan G.; Therrien, William J.; Coyne, Michael D.

    2016-01-01

    Replicating previously reported empirical research is a necessary aspect of an evidence-based field of special education, but little formal investigation into the prevalence of replication research in the special education research literature has been conducted. Various factors may explain the lack of attention to replication of special education…

  11. Physically Embedded Minimal Self-Replicating Systems

    DEFF Research Database (Denmark)

    Fellermann, Harold

    Self-replication is a fundamental property of all living organisms, yet has only been accomplished to a limited extent in man-made systems. This thesis is part of the ongoing research endeavor to bridge the two sides of this gap. In particular, we present simulation results of a minimal life......-like, artificial, molecular aggregate (i.e. protocell) that has been proposed by Steen Rasmussen and coworkers and is currently pursued both experimentally and computationally in interdisciplinary international research projects. We develop a space-time continuous physically motivated simulation framework based...... computational models. This allows us to address key issues of the replicating subsystems – container, genome, and metabolism – both individually and in mutual coupling. We analyze each step in the life-cycle of the molecular aggregate, and a final integrated simulation of the entire life-cycle is prepared. Our...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  13. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    Science.gov (United States)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient
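
    As a rough illustration of how such a big-O crossover comparison works (this is not the paper's circuit-level model), one can evaluate the commonly quoted asymptotic costs of a classical conjugate-gradient solver, O(N s kappa log(1/eps)), and of an HHL-type quantum solver, O(log(N) s^2 kappa^2 / eps), at the quoted problem size; the sparsity s and condition number kappa below are made-up values and all prefactors are set to 1.

        import math

        # Constant-free cost models; purely illustrative.
        def classical_cost(N, s, kappa, eps):
            return N * s * kappa * math.log(1.0 / eps)      # conjugate-gradient scaling

        def quantum_cost(N, s, kappa, eps):
            return math.log(N) * s**2 * kappa**2 / eps      # HHL-type scaling

        N, s, kappa, eps = 332_020_680, 10, 100, 0.01        # s and kappa are assumptions
        print(f"classical ~ {classical_cost(N, s, kappa, eps):.2e}")   # ~1.5e+12
        print(f"quantum   ~ {quantum_cost(N, s, kappa, eps):.2e}")     # ~2.0e+09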

  14. Applied Research on Computers in Human Resource Management Systems

    Institute of Scientific and Technical Information of China (English)

    葛航

    2014-01-01

    As information technology continues to develop, businesses increasingly use computer technology to manage their operations, and human resource management, the foundational management module for enterprise development, has seen great changes in management efficiency. Based on an analysis of the current status of human resource management in China, this paper describes the application of computer technology in human resource management, in the hope of contributing to the development of enterprises.

  15. A Scheme for Collecting and Accounting Cloud Computing Resource Usage

    Institute of Scientific and Technical Information of China (English)

    苏宇; 沈苏彬

    2015-01-01

    With the widespread adoption and application of cloud computing technology, billing is gradually becoming an important function in the commercialization of cloud computing. Infrastructure as a Service (IaaS) is the most basic cloud service, providing users with infrastructure resource services. Billing for IaaS clouds first requires solving how cloud resource usage is collected and accounted for, and conventional collection approaches are not applicable to cloud computing environments; studying techniques and implementation methods for collecting and accounting resource usage in cloud computing environments therefore has practical value. By studying resource collection methods for cloud computing environments, the state of research on cloud resource accounting at home and abroad and the technical requirements of cloud billing are analyzed. An inter-module communication scheme based on asynchronous message passing is selected, and a mechanism is designed and implemented that decouples the modules, improves system scalability and stability, and reduces system performance overhead. Tests of a prototype system for collecting and accounting resource usage, built on the OpenStack platform, show that asynchronous message passing correctly collects and transfers data between modules, with low system overhead, good stability and good scalability.
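
    A minimal sketch of the decoupling idea (illustrative only, not the OpenStack-based prototype): the collector and the accounting module communicate through an asynchronous message queue, so neither blocks the other.

        import queue, threading, collections

        usage_bus = queue.Queue()                      # asynchronous message channel

        def collector(samples):
            for sample in samples:                     # e.g. usage polled from hypervisors
                usage_bus.put(sample)
            usage_bus.put(None)                        # end-of-stream marker

        def accountant(totals, rate_per_cpu_second=0.0001):   # illustrative tariff
            while (msg := usage_bus.get()) is not None:
                totals[msg["tenant"]] += msg["cpu_seconds"] * rate_per_cpu_second

        samples = [{"tenant": "t1", "cpu_seconds": 3600},
                   {"tenant": "t2", "cpu_seconds": 1800},
                   {"tenant": "t1", "cpu_seconds": 900}]
        totals = collections.defaultdict(float)
        threads = [threading.Thread(target=collector, args=(samples,)),
                   threading.Thread(target=accountant, args=(totals,))]
        for t in threads: t.start()
        for t in threads: t.join()
        print(dict(totals))                            # {'t1': 0.45, 't2': 0.18}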

  16. Anatomy of Mammalian Replication Domains

    Science.gov (United States)

    Takebayashi, Shin-ichiro; Ogata, Masato; Okumura, Katsuzumi

    2017-01-01

    Genetic information is faithfully copied by DNA replication through many rounds of cell division. In mammals, DNA is replicated in Mb-sized chromosomal units called “replication domains.” While genome-wide maps in multiple cell types and disease states have uncovered both dynamic and static properties of replication domains, we are still in the process of understanding the mechanisms that give rise to these properties. A better understanding of the molecular basis of replication domain regulation will bring new insights into chromosome structure and function. PMID:28350365

  17. Greedy scheduling of cellular self-replication leads to optimal doubling times with a log-Frechet distribution.

    Science.gov (United States)

    Pugatch, Rami

    2015-02-24

    Bacterial self-replication is a complex process composed of many de novo synthesis steps catalyzed by a myriad of molecular processing units, e.g., the transcription-translation machinery, metabolic enzymes, and the replisome. Successful completion of all production tasks requires a schedule, a temporal assignment of each of the production tasks to its respective processing units that respects ordering and resource constraints. Most intracellular growth processes are well characterized. However, the manner in which they are coordinated under the control of a scheduling policy is not well understood. When fast replication is favored, a schedule that minimizes the completion time is desirable. However, if resources are scarce, it is typically computationally hard to find such a schedule, in the worst case. Here, we show that optimal scheduling naturally emerges in cellular self-replication. Optimal doubling time is obtained by maintaining a sufficiently large inventory of intermediate metabolites and processing units required for self-replication and additionally requiring that these processing units be "greedy," i.e., not idle if they can perform a production task. We calculate the distribution of doubling times of such optimally scheduled self-replicating factories, and find it has a universal form, log-Frechet, which is not sensitive to many microscopic details. Analyzing two recent datasets of Escherichia coli growing in a stationary medium, we find excellent agreement between the observed doubling-time distribution and the predicted universal distribution, suggesting E. coli is optimally scheduling its replication. Greedy scheduling appears as a simple generic route to optimal scheduling when speed is the optimization criterion. Other criteria such as efficiency require more elaborate scheduling policies and tighter regulation.
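
    For reference, the Frechet law referred to above has the textbook cumulative distribution function (the parameterization below is standard and is not taken from the paper):

        F(x;\alpha,s,m) = \exp\!\left[-\left(\frac{x-m}{s}\right)^{-\alpha}\right], \qquad x > m,

    with shape \alpha > 0, scale s > 0 and location m.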

  18. SLA-Based Cloud Computing Resource Scheduling Mechanism

    Institute of Scientific and Technical Information of China (English)

    雷洁; 鄂雪妮; 桂雁军

    2014-01-01

    To address the deficiencies of existing task scheduling and resource load balancing algorithms at the IaaS layer, an SLA-based cloud computing resource scheduling mechanism is discussed on the basis of the theory of SLA management and resource scheduling, building on existing research results. An SLA-based resource scheduling framework is proposed, the SLA management mechanism oriented to IaaS resource service providers and its contents are discussed, and a QoS assurance mechanism for SLA-based management is designed. The SLA of a service is upheld through the interaction of the load balancing module and the task scheduling module, so that the IaaS resource service provider maximizes its gain while also maximizing resource utilization under the QoS constraints of the tasks.
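
    A minimal sketch of the QoS-constrained selection idea described above (the candidate placements, the SLA bound, and the gain figures are illustrative assumptions, not the paper's mechanism):

        def pick_placement(candidates, sla_response_ms):
            # keep placements that satisfy the SLA bound, then maximise provider gain
            feasible = [c for c in candidates if c["response_ms"] <= sla_response_ms]
            return max(feasible, key=lambda c: c["revenue"] - c["cost"], default=None)

        candidates = [
            {"name": "small", "response_ms": 180, "revenue": 1.0, "cost": 0.3},
            {"name": "medium", "response_ms": 90, "revenue": 1.0, "cost": 0.5},
            {"name": "large", "response_ms": 40, "revenue": 1.0, "cost": 0.9},
        ]
        print(pick_placement(candidates, sla_response_ms=100))   # the "medium" placement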

  19. Graduate Enrollment Increases in Science and Engineering Fields, Especially in Engineering and Computer Sciences. InfoBrief: Science Resources Statistics.

    Science.gov (United States)

    Burrelli, Joan S.

    This brief describes graduate enrollment increases in the science and engineering fields, especially in engineering and computer sciences. Graduate student enrollment is summarized by enrollment status, citizenship, race/ethnicity, and fields. (KHR)

  20. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems

    Science.gov (United States)

    Li, Ying

    2016-09-01

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  1. Replication Attack Mitigations for Static and Mobile WSN

    CERN Document Server

    Manjula, V; 10.5121/ijnsa.2011.3210

    2011-01-01

    Security is important for many sensor network applications. Wireless Sensor Networks (WSNs) are often deployed in hostile environments, as static or mobile networks, where an adversary can physically capture some of the nodes. Once a node is captured, the adversary collects all of its credentials, such as keys and identity; the attacker can then re-program it and replicate the node in order to eavesdrop on the transmitted messages or compromise the functionality of the network. Identity theft leads to two types of attack: clone and Sybil. A particularly harmful attack against sensor networks, in which one or more nodes illegitimately claim an identity as replicas, is known as the node replication attack. The replication attack can be exceedingly injurious to many important functions of the sensor network such as routing, resource allocation, misbehavior detection, etc. This paper analyzes the threat posed by the replication attack and several novel techniques to detect and defend against the replication attack, and analyzes their effect...

  2. FAULT TOLERANT SCHEDULING STRATEGY FOR COMPUTATIONAL GRID ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    MALARVIZHI NANDAGOPAL,

    2010-09-01

    Full Text Available Computational grids have the potential for solving large-scale scientific applications using heterogeneous and geographically distributed resources. In addition to the challenges of managing and scheduling these applications, reliability challenges arise because of the unreliable nature of grid infrastructure. Two major problems that are critical to the effective utilization of computational resources are efficient scheduling of jobs and providing fault tolerance in a reliable manner. This paper addresses these problems by combining a checkpoint-replication-based fault tolerance mechanism with the Minimum Total Time to Release (MTTR) job scheduling algorithm. TTR includes the service time of the job, the waiting time in the queue, and the transfer of input and output data to and from the resource. The MTTR algorithm minimizes the TTR by selecting a computational resource based on job requirements, job characteristics and hardware features of the resources. The fault tolerance mechanism used here sets the job checkpoints based on the resource failure rate. If a resource failure occurs, the job is restarted from its last successful state using a checkpoint file from another grid resource. A critical aspect of automatic recovery is the availability of checkpoint files; a strategy to increase the availability of checkpoints is replication. A Replica Resource Selection Algorithm (RRSA) is proposed to provide a Checkpoint Replication Service (CRS). The Globus Toolkit is used as the grid middleware to set up a grid environment and evaluate the performance of the proposed approach. The monitoring tools Ganglia and NWS (Network Weather Service) are used to gather hardware and network details, respectively. The experimental results demonstrate that the proposed approach effectively schedules grid jobs in a fault-tolerant way, thereby reducing the TTR of the jobs submitted to the grid. It also increases the percentage of jobs completed within a specified deadline and making the grid
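
    A minimal Python sketch of the TTR computation and the resulting resource choice described above (the field names and numbers are illustrative assumptions, not taken from the paper):

        def ttr(resource, job):
            # TTR = queue waiting time + service time + input/output data transfer time
            service = job["length_mi"] / resource["mips"]
            transfer = (job["in_mb"] + job["out_mb"]) / resource["bandwidth_mbps"]
            return resource["queue_wait_s"] + service + transfer

        def select_resource(resources, job):
            # MTTR-style selection: pick the resource with the smallest TTR
            return min(resources, key=lambda r: ttr(r, job))

        resources = [
            {"name": "gridA", "mips": 2000, "bandwidth_mbps": 100, "queue_wait_s": 30},
            {"name": "gridB", "mips": 3500, "bandwidth_mbps": 20, "queue_wait_s": 5},
        ]
        job = {"length_mi": 1.0e6, "in_mb": 500, "out_mb": 200}
        best = select_resource(resources, job)
        print(best["name"], round(ttr(best, job), 1))   # gridB 325.7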

  3. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

    the learning game “Global Conflicts: Latin America” as a resource into the teaching and learning of a course involving the two subjects “English language learning” and “Social studies” at the final year in a Danish high school. The study adapts an explorative research design approach and investigates...

  4. Modeling inhomogeneous DNA replication kinetics.

    Directory of Open Access Journals (Sweden)

    Michel G Gauthier

    Full Text Available In eukaryotic organisms, DNA replication is initiated at a series of chromosomal locations called origins, where replication forks are assembled proceeding bidirectionally to replicate the genome. The distribution and firing rate of these origins, in conjunction with the velocity at which forks progress, dictate the program of the replication process. Previous attempts at modeling DNA replication in eukaryotes have focused on cases where the firing rate and the velocity of replication forks are homogeneous, or uniform, across the genome. However, it is now known that there are large variations in origin activity along the genome and variations in fork velocities can also take place. Here, we generalize previous approaches to modeling replication, to allow for arbitrary spatial variation of initiation rates and fork velocities. We derive rate equations for left- and right-moving forks and for replication probability over time that can be solved numerically to obtain the mean-field replication program. This method accurately reproduces the results of DNA replication simulation. We also successfully adapted our approach to the inverse problem of fitting measurements of DNA replication performed on single DNA molecules. Since such measurements are performed on specified portion of the genome, the examined DNA molecules may be replicated by forks that originate either within the studied molecule or outside of it. This problem was solved by using an effective flux of incoming replication forks at the model boundaries to represent the origin activity outside the studied region. Using this approach, we show that reliable inferences can be made about the replication of specific portions of the genome even if the amount of data that can be obtained from single-molecule experiments is generally limited.
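
    The mean-field rate equations themselves are given in the paper; as a complementary illustration only (an assumption-laden toy, not the authors' model), here is a minimal stochastic simulation of a 1D genome with a position-dependent initiation probability per site and forks advancing one site per step:

        import random

        def simulate(init_prob, steps):
            """Return the replicated fraction after each step for a lattice genome."""
            n = len(init_prob)
            replicated = [False] * n
            forks = []                                  # (position, direction)
            history = []
            for _ in range(steps):
                # stochastic origin firing on still-unreplicated sites
                for x in range(n):
                    if not replicated[x] and random.random() < init_prob[x]:
                        replicated[x] = True
                        forks += [(x, -1), (x, +1)]
                # bidirectional fork progression; forks die on meeting replicated DNA
                surviving = []
                for x, d in forks:
                    nxt = x + d
                    if 0 <= nxt < n and not replicated[nxt]:
                        replicated[nxt] = True
                        surviving.append((nxt, d))
                forks = surviving
                history.append(sum(replicated) / n)
            return history

        # initiation ten times more likely in the middle third of the genome
        init_prob = [0.001] * 300 + [0.01] * 300 + [0.001] * 300
        print(simulate(init_prob, steps=200)[-1])       # replicated fraction at the end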

  5. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    Kearney David

    2007-01-01

    Full Text Available We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.

  6. A Gain-Computation Enhancements Resource Allocation for Heterogeneous Service Flows in IEEE 802.16 m Mobile Networks

    Directory of Open Access Journals (Sweden)

    Wafa Ben Hassen

    2012-01-01

    an access method. In IEEE 802.16 m standard, a contiguous method for subchannel construction is adopted in order to reduce OFDMA system complexity. In this context, we propose a new subchannel gain computation method depending on frequency responses dispersion. This method has a crucial role in the resource management and optimization. In a single service access, we propose a dynamic resource allocation algorithm at the physical layer aiming to maximize the cell data rate while ensuring fairness among users. In heterogeneous data traffics, we study scheduling in order to provide delay guaranties to real-time services, maximize throughput of non-real-time services while ensuring fairness to users. We compare performances to recent existing algorithms in OFDMA systems showing that proposed schemes provide lower complexity, higher total system capacity, and fairness among users.
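
    A hedged sketch of what a dispersion-aware subchannel gain could look like (the penalty form below is an illustrative assumption, not the formula proposed in the paper): the effective gain of a contiguous subchannel is its mean subcarrier gain reduced in proportion to the spread of the per-subcarrier frequency responses.

        import statistics

        def effective_gain(subcarrier_gains, alpha=1.0):
            # mean gain penalised by dispersion across the contiguous subcarriers
            mu = statistics.mean(subcarrier_gains)
            sigma = statistics.pstdev(subcarrier_gains)
            return mu - alpha * sigma

        flat = [0.90, 0.92, 0.91, 0.90]           # low dispersion
        selective = [1.50, 0.20, 1.40, 0.30]      # high dispersion, similar mean
        print(round(effective_gain(flat), 3), round(effective_gain(selective), 3))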

  7. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    David Kearney

    2007-02-01

    Full Text Available We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.

  8. Computer Virus Protection

    Science.gov (United States)

    Rajala, Judith B.

    2004-01-01

    A computer virus is a program--a piece of executable code--that has the unique ability to replicate. Like biological viruses, computer viruses can spread quickly and are often difficult to eradicate. They can attach themselves to just about any type of file, and are spread by replicating and being sent from one individual to another. Simply having…

  9. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    Science.gov (United States)

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    In the life-science and health-care sectors in particular, IT requirements are immense because of the large and complex systems that must be analysed and simulated. Grid infrastructures play a rapidly increasing role for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for heavy number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence within grid infrastructures of large grid clusters with very fast network interconnects now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated super-computing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be addressed in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but more importantly also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a real opportunity for sectors such as life science and health care, as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  10. CMS computing upgrade and evolution

    CERN Document Server

    Hernandez Calama, Jose

    2013-01-01

    The distributed Grid computing infrastructure has been instrumental in the successful exploitation of the LHC data leading to the discovery of the Higgs boson. The computing system will need to face new challenges from 2015 on when LHC restarts with an anticipated higher detector output rate and event complexity, but with only a limited increase in the computing resources. A more efficient use of the available resources will be mandatory. CMS is improving the data storage, distribution and access as well as the processing efficiency. Remote access to the data through the WAN, dynamic data replication and deletion based on the data access patterns, and separation of disk and tape storage are some of the areas being actively developed. Multi-core processing and scheduling is being pursued in order to make a better use of the multi-core nodes available at the sites. In addition, CMS is exploring new computing techniques, such as Cloud Computing, to get access to opportunistic resources or as a means of using wit...

  11. Offloading Method for Efficient Use of Local Computational Resources in Mobile Location-Based Services Using Clouds

    Directory of Open Access Journals (Sweden)

    Yunsik Son

    2017-01-01

    Full Text Available With the development of mobile computing, location-based services (LBSs have been developed to provide services based on location information through communication networks or the global positioning system. In recent years, LBSs have evolved into smart LBSs, which provide many services using only location information. These include basic services such as traffic, logistic, and entertainment services. However, a smart LBS may require relatively complicated operations, which may not be effectively performed by the mobile computing system. To overcome this problem, a computation offloading technique can be used to perform certain tasks on mobile devices in cloud and fog environments. Furthermore, mobile platforms exist that provide smart LBSs. The smart cross-platform is a solution based on a virtual machine (VM that enables compatibility of content in various mobile and smart device environments. However, owing to the nature of the VM-based execution method, the execution performance is degraded compared to that of the native execution method. In this paper, we introduce a computation offloading technique that utilizes fog computing to improve the performance of VMs running on mobile devices. We applied the proposed method to smart devices with a smart VM (SVM and HTML5 SVM to compare their performances.
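
    A minimal sketch of the underlying offloading decision (illustrative only; the parameter names and numbers are assumptions, not taken from the paper): a task is pushed to a fog or cloud node only when remote execution time plus data transfer time beats local execution on the device's VM.

        def should_offload(instructions, data_bytes, local_ips, remote_ips, bandwidth_bps):
            local_time = instructions / local_ips
            remote_time = instructions / remote_ips + data_bytes * 8 / bandwidth_bps
            return remote_time < local_time

        # heavy task, modest payload, fast fog node over a 20 Mbit/s link
        print(should_offload(instructions=5e9, data_bytes=2e6,
                             local_ips=1e9, remote_ips=8e9, bandwidth_bps=20e6))   # True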

  12. Linear equations and rap battles: how students in a wired classroom utilized the computer as a resource to coordinate personal and mathematical positional identities in hybrid spaces

    Science.gov (United States)

    Langer-Osuna, Jennifer

    2015-03-01

    This paper draws on the constructs of hybridity, figured worlds, and cultural capital to examine how a group of African-American students in a technology-driven, project-based algebra classroom utilized the computer as a resource to coordinate personal and mathematical positional identities during group work. Analyses of several vignettes of small group dynamics highlight how hybridity was established as the students engaged in multiple on-task and off-task computer-based activities, each of which drew on different lived experiences and forms of cultural capital. The paper ends with a discussion on how classrooms that make use of student-led collaborative work, and where students are afforded autonomy, have the potential to support the academic engagement of students from historically marginalized communities.

  13. Competition and cooperation in dynamic replication networks.

    Science.gov (United States)

    Dadon, Zehavit; Wagner, Nathaniel; Alasibi, Samaa; Samiappan, Manickasundaram; Mukherjee, Rakesh; Ashkenasy, Gonen

    2015-01-07

    The simultaneous replication of six coiled-coil peptide mutants by reversible thiol-thioester exchange reactions is described. Experimental analysis of the time dependent evolution of networks formed by the peptides under different conditions reveals a complex web of molecular interactions and consequent mutant replication, governed by competition for resources and by autocatalytic and/or cross-catalytic template-assisted reactions. A kinetic model, first of its kind, is then introduced, allowing simulation of varied network behaviour as a consequence of changing competition and cooperation scenarios. We suggest that by clarifying the kinetic description of these relatively complex dynamic networks, both at early stages of the reaction far from equilibrium and at later stages approaching equilibrium, one lays the foundation for studying dynamic networks out-of-equilibrium in the near future.
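
    A minimal numerical sketch of a template-assisted replication network of this kind (the rate constants and the two-species reduction are illustrative assumptions, not the authors' fitted six-mutant model): two replicators grow autocatalytically and cross-catalytically while competing for a shared feedstock, integrated with a plain Euler step.

        def simulate(k_auto=(1.0, 0.6), k_cross=((0.0, 0.3), (0.2, 0.0)),
                     feedstock=1.0, templates=(0.01, 0.01), dt=0.01, steps=5000):
            F, T = feedstock, list(templates)
            for _ in range(steps):
                rates = []
                for i in range(2):
                    r = k_auto[i] * T[i] * F                                        # autocatalysis
                    r += sum(k_cross[i][j] * T[j] * F for j in range(2) if j != i)  # cross-catalysis
                    rates.append(r)
                F = max(F - dt * sum(rates), 0.0)    # shared resource is consumed
                for i in range(2):
                    T[i] += dt * rates[i]
            return F, T

        print(simulate())    # feedstock nearly exhausted; the first replicator outgrows the second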

  14. Replicated Spectrographs in Astronomy

    CERN Document Server

    Hill, Gary J

    2014-01-01

    As telescope apertures increase, the challenge of scaling spectrographic astronomical instruments becomes acute. The next generation of extremely large telescopes (ELTs) strain the availability of glass blanks for optics and engineering to provide sufficient mechanical stability. While breaking the relationship between telescope diameter and instrument pupil size by adaptive optics is a clear path for small fields of view, survey instruments exploiting multiplex advantages will be pressed to find cost-effective solutions. In this review we argue that exploiting the full potential of ELTs will require the barrier of the cost and engineering difficulty of monolithic instruments to be broken by the use of large-scale replication of spectrographs. The first steps in this direction have already been taken with the soon to be commissioned MUSE and VIRUS instruments for the Very Large Telescope and the Hobby-Eberly Telescope, respectively. MUSE employs 24 spectrograph channels, while VIRUS has 150 channels. We compa...

  15. Identification and Mapping of Soils, Vegetation, and Water Resources of Lynn County, Texas, by Computer Analysis of ERTS MSS Data

    Science.gov (United States)

    Baumgardner, M. F.; Kristof, S. J.; Henderson, J. A., Jr.

    1973-01-01

    Results of the analysis and interpretation of ERTS multispectral data obtained over Lynn County, Texas, are presented. The test site was chosen because it embodies a variety of problems associated with the development and management of agricultural resources in the Southern Great Plains. Lynn County is one of ten counties in a larger test site centering around Lubbock, Texas. The purpose of this study is to examine the utility of ERTS data in identifying, characterizing, and mapping soils, vegetation, and water resources in this semiarid region. Successful application of multispectral remote sensing and machine-processing techniques to arid and semiarid land-management problems will provide valuable new tools for the more than one-third of the world's lands lying in arid-semiarid regions.

  16. Grid Computing

    Indian Academy of Sciences (India)

    2016-05-01

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers on demand. In this article, we describe the grid computing model and enumerate the major differences between grid and cloud computing.

  17. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    OpenAIRE

    Williams, Samuel; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Irvine, CA

    2009-01-01

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to con...

  18. Chromatin Dynamics During DNA Replication and Uncharacterized Replication Factors determined by Nascent Chromatin Capture (NCC) Proteomics

    Science.gov (United States)

    Alabert, Constance; Bukowski-Wills, Jimi-Carlo; Lee, Sung-Bau; Kustatscher, Georg; Nakamura, Kyosuke; de Lima Alves, Flavia; Menard, Patrice; Mejlvang, Jakob; Rappsilber, Juri; Groth, Anja

    2014-01-01

    SUMMARY To maintain genome function and stability, DNA sequence and its organization into chromatin must be duplicated during cell division. Understanding how entire chromosomes are copied remains a major challenge. Here, we use Nascent Chromatin Capture (NCC) to profile chromatin proteome dynamics during replication in human cells. NCC relies on biotin-dUTP labelling of replicating DNA, affinity-purification and quantitative proteomics. Comparing nascent chromatin with mature post-replicative chromatin, we provide association dynamics for 3995 proteins. The replication machinery and 485 chromatin factors like CAF-1, DNMT1, SUV39h1 are enriched in nascent chromatin, whereas 170 factors including histone H1, DNMT3, MBD1-3 and PRC1 show delayed association. This correlates with H4K5K12diAc removal and H3K9me1 accumulation, while H3K27me3 and H3K9me3 remain unchanged. Finally, we combine NCC enrichment with experimentally derived chromatin probabilities to predict a function in nascent chromatin for 93 uncharacterized proteins and identify FAM111A as a replication factor required for PCNA loading. Together, this provides an extensive resource to understand genome and epigenome maintenance. PMID:24561620

  19. SUMO and KSHV Replication

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Pei-Ching [Institute of Microbiology and Immunology, National Yang-Ming University, Taipei 112, Taiwan (China); Kung, Hsing-Jien, E-mail: hkung@nhri.org.tw [Institute for Translational Medicine, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan (China); Department of Biochemistry and Molecular Medicine, University of California, Davis, CA 95616 (United States); UC Davis Cancer Center, University of California, Davis, CA 95616 (United States); Division of Molecular and Genomic Medicine, National Health Research Institutes, 35 Keyan Road, Zhunan, Miaoli County 35053, Taiwan (China)

    2014-09-29

    Small Ubiquitin-related MOdifier (SUMO) modification was initially identified as a reversible post-translational modification that affects the regulation of diverse cellular processes, including signal transduction, protein trafficking, chromosome segregation, and DNA repair. Increasing evidence suggests that the SUMO system also plays an important role in regulating chromatin organization and transcription. It is thus not surprising that double-stranded DNA viruses, such as Kaposi’s sarcoma-associated herpesvirus (KSHV), have exploited SUMO modification as a means of modulating viral chromatin remodeling during the latent-lytic switch. In addition, SUMO regulation allows the disassembly and assembly of promyelocytic leukemia protein-nuclear bodies (PML-NBs), an intrinsic antiviral host defense, during the viral replication cycle. Overcoming PML-NB-mediated cellular intrinsic immunity is essential to allow the initial transcription and replication of the herpesvirus genome after de novo infection. As a consequence, KSHV has evolved a way as to produce multiple SUMO regulatory viral proteins to modulate the cellular SUMO environment in a dynamic way during its life cycle. Remarkably, KSHV encodes one gene product (K-bZIP) with SUMO-ligase activities and one gene product (K-Rta) that exhibits SUMO-targeting ubiquitin ligase (STUbL) activity. In addition, at least two viral products are sumoylated that have functional importance. Furthermore, sumoylation can be modulated by other viral gene products, such as the viral protein kinase Orf36. Interference with the sumoylation of specific viral targets represents a potential therapeutic strategy when treating KSHV, as well as other oncogenic herpesviruses. Here, we summarize the different ways KSHV exploits and manipulates the cellular SUMO system and explore the multi-faceted functions of SUMO during KSHV’s life cycle and pathogenesis.

  20. Capacity Analysis of a Family Care Clinic Using Computer Simulation to Determine Optimal Enrollment Under Capitated Resource Allocation Constraints

    Science.gov (United States)

    1998-04-01

    The capacity analysis models locations, entities, and resources within the simulation using the MedModel healthcare simulation software (PROMODEL Corporation, 1996).

  1. A SURVEY ON STATE MONITORING OF COMPUTATIONAL RESOURCES IN CLOUD

    Institute of Scientific and Technical Information of China (English)

    洪斌; 彭甫阳; 邓波; 王东霞

    2016-01-01

    Cloud computing achieves efficient use of computational resources through sharing over the Internet. The dynamic, random and open nature of cloud resource allocation makes it increasingly difficult to assure quality of service (QoS). Monitoring technologies for resource state in cloud environments mine and analyse monitoring data in depth to detect abnormal operating states of computational resources in a timely manner, and predict future resource usage from historical operation data, so that potential performance bottlenecks and security threats can be discovered early and reliable, stable cloud services can be provided to users. Using concrete examples, this paper introduces representative research approaches to resource state monitoring, including probability analysis, equation fitting and clustering analysis, and compares the performance characteristics and limitations of the different methods. Finally, it discusses the technical challenges that cloud resource state monitoring faces in terms of data complexity and scale, and points out future development trends such as redundancy removal and dimensionality reduction of raw data, an emphasis on unsupervised algorithm design and analysis, pushing computational tasks to the terminals, and the synergy of analysis results.
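
    A minimal sketch of the simplest flavour of such monitoring, statistical anomaly detection on a resource metric (the threshold and the metric are illustrative assumptions, not taken from the surveyed systems):

        import statistics

        def anomalies(history, recent, k=3.0):
            # flag samples that deviate from the historical mean by more than k sigma
            mu = statistics.mean(history)
            sigma = statistics.stdev(history)
            return [x for x in recent if abs(x - mu) > k * sigma]

        cpu_history = [42, 45, 47, 44, 46, 43, 45, 44, 46, 45]   # % utilisation
        print(anomalies(cpu_history, [44, 47, 91]))              # -> [91]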

  2. Winning the Popularity Contest: Researcher Preference When Selecting Resources for Civil Engineering, Computer Science, Mathematics and Physics Dissertations

    Science.gov (United States)

    Dotson, Daniel S.; Franks, Tina P.

    2015-01-01

    More than 53,000 citations from 609 dissertations published at The Ohio State University between 1998-2012 representing four science disciplines--civil engineering, computer science, mathematics and physics--were examined to determine what, if any, preferences or trends exist. This case study seeks to identify whether or not researcher preferences…

  3. A Framework for Safe Composition of Heterogeneous SOA Services in a Pervasive Computing Environment with Resource Constraints

    Science.gov (United States)

    Reyes Alamo, Jose M.

    2010-01-01

    The Service Oriented Computing (SOC) paradigm defines services as software artifacts whose implementations are separated from their specifications. Application developers rely on services to simplify design and reduce development time and cost. Within the SOC paradigm, different Service Oriented Architectures (SOAs) have been developed.…

  4. Efficient usage of Adabas replication

    CERN Document Server

    Storr, Dieter W

    2011-01-01

    In today's IT organization replication becomes more and more an essential technology. This makes Software AG's Event Replicator for Adabas an important part of your data processing. Setting the right parameters and establishing the best network communication, as well as selecting efficient target components, is essential for successfully implementing replication. This book provides comprehensive information and unique best-practice experience in the field of Event Replicator for Adabas. It also includes sample codes and configurations making your start very easy. It describes all components ne

  5. Solving the Telomere Replication Problem

    Science.gov (United States)

    Maestroni, Laetitia; Matmati, Samah; Coulon, Stéphane

    2017-01-01

    Telomeres are complex nucleoprotein structures that protect the extremities of linear chromosomes. Telomere replication is a major challenge because many obstacles to the progression of the replication fork are concentrated at the ends of the chromosomes. This is known as the telomere replication problem. In this article, different and new aspects of telomere replication, that can threaten the integrity of telomeres, will be reviewed. In particular, we will focus on the functions of shelterin and the replisome for the preservation of telomere integrity. PMID:28146113

  6. Cloud computing resource scheduling strategy based on the MABC algorithm

    Institute of Scientific and Technical Information of China (English)

    卢荣锐; 彭志平

    2013-01-01

    To improve the optimization of resource scheduling and task allocation in cloud computing service clusters, a cloud computing resource scheduling strategy based on a modified artificial bee colony (MABC) algorithm is proposed. To address the slow late-stage convergence of the ABC algorithm and its tendency to fall into local optima, a control-factor scheduling strategy is introduced: by adaptively adjusting the search space and dynamically tuning the amount of information shared among bees, the search continually exchanges information and escapes local optima so as to reach the global optimum. Experiments on the cloud simulation platform CloudSim show that the method shortens the average task run time in the cloud environment and effectively improves resource utilization.
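
    A compact Python sketch in the spirit of such a bee-colony scheduler (a simplified toy: the neighbourhood-shrinking "control factor" and the merged employed/onlooker phase are assumptions for illustration, not the paper's exact MABC update rules), assigning task lengths (MI) to VM speeds (MIPS) so as to minimise makespan:

        import random

        def makespan(assign, tasks, vms):
            load = [0.0] * len(vms)
            for length, v in zip(tasks, assign):
                load[v] += length / vms[v]
            return max(load)

        def mabc_schedule(tasks, vms, colony=20, iters=200, limit=15):
            rand_sol = lambda: [random.randrange(len(vms)) for _ in tasks]
            sols = [rand_sol() for _ in range(colony)]
            trials = [0] * colony
            best = min(sols, key=lambda s: makespan(s, tasks, vms))
            for it in range(iters):
                shrink = 1.0 - it / iters                     # control factor: smaller moves later
                moves = max(1, int(shrink * 0.2 * len(tasks)))
                for i in range(colony):
                    cand = sols[i][:]
                    for _ in range(moves):                    # perturb a food source
                        cand[random.randrange(len(tasks))] = random.randrange(len(vms))
                    if makespan(cand, tasks, vms) < makespan(sols[i], tasks, vms):
                        sols[i], trials[i] = cand, 0
                    else:
                        trials[i] += 1
                    if trials[i] > limit:                     # scout: abandon stagnant source
                        sols[i], trials[i] = rand_sol(), 0
                best = min([best] + sols, key=lambda s: makespan(s, tasks, vms))
            return best, makespan(best, tasks, vms)

        tasks = [random.randint(1000, 20000) for _ in range(30)]
        vms = [500, 1000, 2000, 2500]
        print(mabc_schedule(tasks, vms)[1])                   # approximate minimal makespan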

  7. Mutation at Expanding Front of Self-Replicating Colloidal Clusters

    CERN Document Server

    Tanaka, Hidenori; Brenner, Michael P

    2016-01-01

    We construct a scheme for self-replicating square clusters of particles in two spatial dimensions, and validate it with computer simulations in a finite-temperature heat bath. We find that the self-replication reactions propagate through the bath in the form of Fisher waves. Our model reflects existing colloidal systems, but is simple enough to allow simulation of many generations and thereby the first study of evolutionary dynamics in an artificial system. By introducing spatially localized mutations in the replication rules, we show that the mutated cluster population can survive and spread with the expanding front in circular sectors of the colony.

  8. Mutation at Expanding Front of Self-Replicating Colloidal Clusters

    Science.gov (United States)

    Tanaka, Hidenori; Zeravcic, Zorana; Brenner, Michael P.

    2016-12-01

    We construct a scheme for self-replicating square clusters of particles in two spatial dimensions, and validate it with computer simulations in a finite-temperature heat bath. We find that the self-replication reactions propagate through the bath in the form of Fisher waves. Our model reflects existing colloidal systems, but is simple enough to allow simulation of many generations and thereby the first study of evolutionary dynamics in an artificial system. By introducing spatially localized mutations in the replication rules, we show that the mutated cluster population can survive and spread with the expanding front in circular sectors of the colony.

  9. Entropy involved in fidelity of DNA replication

    CERN Document Server

    Arias-Gonzalez, J Ricardo

    2012-01-01

    Information has an entropic character which can be analyzed within the Statistical Theory in molecular systems. R. Landauer and C.H. Bennett showed that a logical copy can be carried out in the limit of no dissipation if the computation is performed sufficiently slowly. Structural and recent single-molecule assays have provided dynamic details of polymerase machinery with insight into information processing. We introduce a rigorous characterization of Shannon Information in biomolecular systems and apply it to DNA replication in the limit of no dissipation. Specifically, we devise an equilibrium pathway in DNA replication to determine the entropy generated in copying the information from a DNA template in the absence of friction. Both the initial state, the free nucleotides randomly distributed in certain concentrations, and the final state, a polymerized strand, are mesoscopic equilibrium states for the nucleotide distribution. We use empirical stacking free energies to calculate the probabilities of incorpo...
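
    As a toy illustration of the Shannon-entropy bookkeeping the abstract alludes to (and only that: the equilibrium pathway and empirical stacking free energies of the paper are not reproduced here), the sketch below compares the entropy of an assumed equimolar free-nucleotide pool with that of a near-deterministic templated incorporation distribution; the error rate is a made-up number.

from math import log2

def shannon_entropy(probs):
    # H = -sum p*log2(p), in bits per incorporated nucleotide
    return -sum(p * log2(p) for p in probs if p > 0)

# free-nucleotide pool, e.g. equimolar A, C, G, T (assumed, for illustration)
pool = [0.25, 0.25, 0.25, 0.25]

# templated incorporation: correct base with prob 1 - e, errors split evenly
error_rate = 1e-4
templated = [1 - error_rate] + [error_rate / 3] * 3

print("pool entropy      :", shannon_entropy(pool), "bits")
print("templated entropy :", shannon_entropy(templated), "bits")
# the difference is the information gained per copied base in this toy picture
print("difference        :", shannon_entropy(pool) - shannon_entropy(templated))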

  10. Computing resource trading models in a hybrid cloud market (Computer Engineering and Applications, 2014, 50(18):25-32)

    Institute of Scientific and Technical Information of China (English)

    孙英华; 吴哲辉; 郭振波; 顾卫东

    2014-01-01

    A computing resource trading model named HCRM (Hybrid Cloud Resource Market) is proposed for hybrid cloud environments. The market structure, the management layers, and the quality models of supply and demand are discussed. A quality-aware double auction algorithm named QaDA (Quality-aware Double Auction) is designed and simulated. Compared with the traditional continuous double auction (CDA), the simulation results show that QaDA not only guides reasonable pricing but also achieves a higher matching ratio and a larger total deal amount.
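
    The abstract does not spell out QaDA's matching rules, so the following Python sketch is only an assumed illustration of a quality-aware double auction: buyers and sellers are matched greedily when the seller's quality meets the buyer's requirement and the ask does not exceed the bid, with deals cleared at the midpoint price. Function and parameter names are hypothetical.

def quality_aware_double_auction(buyers, sellers):
    """buyers: list of (bid_price, min_quality); sellers: list of (ask_price, quality).
    Greedy matching: highest bids meet lowest asks, subject to the quality constraint.
    Illustrative stand-in for QaDA, whose exact rules are not given in the abstract."""
    buyers = sorted(buyers, key=lambda b: -b[0])
    sellers = sorted(sellers, key=lambda s: s[0])
    deals, used = [], set()
    for bid, min_q in buyers:
        for i, (ask, quality) in enumerate(sellers):
            if i in used or quality < min_q or ask > bid:
                continue
            deals.append((bid, ask, (bid + ask) / 2))   # clear at the midpoint price
            used.add(i)
            break
    return deals

trades = quality_aware_double_auction(
    buyers=[(10, 0.9), (8, 0.5), (6, 0.7)],
    sellers=[(5, 0.95), (7, 0.6), (9, 0.8)])
print(len(trades), "deals, total cleared price", sum(price for _, _, price in trades))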

  11. Fundamentals of grid computing theory, algorithms and technologies

    CERN Document Server

    2010-01-01

    This volume discusses how the novel technologies of semantic web and workflow have been integrated into the grid and grid services. It focuses on sharing resources, data replication, data management, fault tolerance, scheduling, broadcasting, and load balancing algorithms. The book discusses emerging developments in grid computing, including cloud computing, and explores large-scale computing in high energy physics, weather forecasting, and more. The contributors often use simulations to evaluate the performance of models and algorithms. In the appendices, they present two types of easy-to-use open source software written in Java

  12. Charter School Replication. Policy Guide

    Science.gov (United States)

    Rhim, Lauren Morando

    2009-01-01

    "Replication" is the practice of a single charter school board or management organization opening several more schools that are each based on the same school model. The most rapid strategy to increase the number of new high-quality charter schools available to children is to encourage the replication of existing quality schools. This policy guide…

  13. Computer resource model of the Matra-Bukkalja, Hungary, lignite deposit for production planning, inspection and control

    Energy Technology Data Exchange (ETDEWEB)

    Fust, A.; Zergi, I.

    1985-01-01

    For the planning of lignite surface mining, a reliable geologic model is needed which can be updated by new survey data and used for the operative control of production. A computer model is proposed to analyze control, planning and inspection of production. The model is composed of two components, one from the geologic survey data, and the other refined by the production data. The half variograms of the Matra-Bukkalja lignite deposits are presented. The model can be used for the checking of forecast data.

  14. A Community-Based Approach to Monitoring Resources for Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    祁鑫; 李振

    2012-01-01

    Cloud computing is an emerging commercial computing model, and resource performance and load monitoring is an important research topic within it. This paper analyzes the monitoring strategies of traditional distributed systems and, for the cloud computing environment, introduces a community model to design hierarchical community monitoring, and proposes a monitoring approach based on sensitivity factors to address the data redundancy and invalid-data problems that global monitoring can introduce. Simulation results show that the model and method are theoretically sound and somewhat more efficient than traditional monitoring systems.

  15. Design of a Training Resource Management System Based on Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    楼桦

    2011-01-01

    Based on cloud computing technology, this paper designs a college training resource management system that meets requirements for openness, scalability, and on-demand deployment, and builds a practical cloud computing architecture. The architecture reflects the value of the three cloud computing service models, IaaS, PaaS, and SaaS, in a training resource management system, and, drawing on a working implementation, the paper presents the technologies and methods used to realize it.

  16. DATABASE REPLICATION IN HETEROGENEOUS PLATFORMS

    Directory of Open Access Journals (Sweden)

    Hendro Nindito

    2014-01-01

    Full Text Available The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology capable of replicating databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MSSQL database server running on Windows. The data will be replicated to MySQL running on Linux as the destination. The method applied in this research is prototyping, in which the processes of development and testing can be done interactively and repeatedly. The key result of this research is that the replication technology applied, Oracle GoldenGate, successfully replicates data in real time across heterogeneous platforms.

  17. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    Energy Technology Data Exchange (ETDEWEB)

    Computational Research Division, Lawrence Berkeley National Laboratory; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Berkeley; Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-05-04

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications.
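
    The study's actual auto-tuner and the LBMHD kernel are not reproduced here; the Python sketch below only illustrates the outer search the abstract describes, sweeping the MPI-task/thread balance at a fixed core budget and keeping the best-performing configuration. The benchmark function is a placeholder cost model, an assumption standing in for a real kernel run.

def candidate_decompositions(total_cores):
    # all ways to split a fixed core budget into MPI tasks x threads per task
    return [(t, total_cores // t) for t in range(1, total_cores + 1)
            if total_cores % t == 0]

def benchmark(mpi_tasks, threads):
    # placeholder cost model (assumed): a real auto-tuner would time the LBMHD kernel
    comm = 0.02 * mpi_tasks           # message-passing overhead grows with task count
    imbalance = 0.05 / threads        # threading hides some local load imbalance
    contention = 0.004 * threads      # but too many threads contend for shared caches
    return 1.0 + comm + imbalance + contention   # lower is better

def tune(total_cores=16):
    configs = candidate_decompositions(total_cores)
    return min(configs, key=lambda c: benchmark(*c))

print("best (MPI tasks, threads per task) on 16 cores:", tune(16))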

  18. Construction of a teaching resources management system based on cloud computing

    Institute of Scientific and Technical Information of China (English)

    黄瑞; 刘剑桥

    2014-01-01

    Teaching resources management based on cloud computing is the future direction of teaching resources management. This paper studies the system's cloud computing foundation, function modules, basic architecture, programming model, and computing model, providing technical support for building a teaching resources management system based on cloud computing.

  19. Biological computation

    CERN Document Server

    Lamm, Ehud

    2011-01-01

    Introduction and Biological Background; Biological Computation; The Influence of Biology on Mathematics - Historical Examples; Biological Introduction; Models and Simulations; Cellular Automata; Biological Background; The Game of Life; General Definition of Cellular Automata; One-Dimensional Automata; Examples of Cellular Automata; Comparison with a Continuous Mathematical Model; Computational Universality; Self-Replication; Pseudo Code; Evolutionary Computation; Evolutionary Biology and Evolutionary Computation; Genetic Algorithms; Example Applications; Analysis of the Behavior of Genetic Algorithms; Lamarckian Evolution; Genet

  20. Systematic determination of replication activity type highlights interconnections between replication, chromatin structure and nuclear localization.

    Directory of Open Access Journals (Sweden)

    Shlomit Farkash-Amar

    Full Text Available DNA replication is a highly regulated process, with each genomic locus replicating at a distinct time of replication (ToR). Advances in ToR measurement technology enabled several genome-wide profiling studies that revealed tight associations between ToR and general genomic features and a remarkable ToR conservation in mammals. Genome-wide studies further showed that at the hundreds-of-kilobases to megabase scale the genome can be divided into constant ToR regions (CTRs), in which the replication process propagates at a faster pace due to the activation of multiple origins, and temporal transition regions (TTRs), in which the replication process propagates at a slower pace. We developed a computational tool that assigns a ToR to every measured locus and determines its replication activity type (CTR versus TTR). Our algorithm, ARTO (Analysis of Replication Timing and Organization), uses signal processing methods to fit a constant piece-wise linear curve to the measured raw data. We tested our algorithm and provide performance and usability results. A Matlab implementation of ARTO is available at http://bioinfo.cs.technion.ac.il/people/zohar/ARTO/. Applying our algorithm to ToR data measured in multiple mouse and human samples allowed precise genome-wide ToR determination and replication activity type characterization. Analysis of the results highlighted the plasticity of the replication program. For example, we observed significant ToR differences in 10-25% of the genome when comparing different tissue types. Our analyses also provide evidence for activity type differences in up to 30% of the probes. Integration of the ToR data with multiple aspects of chromosome organization characteristics suggests that ToR plays a role in shaping the regional chromatin structure. Namely, repressive chromatin marks are associated with late ToR both in TTRs and CTRs. Finally, characterization of the differences between TTRs and CTRs, with matching ToR, revealed that TTRs are
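
    ARTO itself fits a piecewise-linear curve with signal-processing methods; the Python sketch below is a much cruder stand-in that only conveys the CTR/TTR distinction by thresholding the local ToR slope between neighbouring loci. The threshold and the toy profile are invented for illustration.

def classify_regions(positions, tor, slope_threshold=0.01):
    """Label each interval between measured loci as CTR (flat ToR) or TTR (sloped ToR).
    A crude stand-in for ARTO's piecewise-linear fit, using a local-slope threshold."""
    labels = []
    for i in range(len(tor) - 1):
        slope = abs(tor[i + 1] - tor[i]) / (positions[i + 1] - positions[i])
        labels.append("CTR" if slope < slope_threshold else "TTR")
    return labels

# toy profile: an early-replicating plateau, a transition region, then a late plateau
pos = [0, 100, 200, 300, 400, 500, 600]
tor = [1.0, 1.0, 1.1, 3.0, 5.0, 5.1, 5.0]
print(classify_regions(pos, tor))   # -> ['CTR', 'CTR', 'TTR', 'TTR', 'CTR', 'CTR']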

  1. Optorsim: A Grid Simulator for Studying Dynamic Data Replication Strategies

    CERN Document Server

    Bell, William H; Millar, A Paul; Capozza, Luigi; Stockinger, Kurt; Zini, Floriano

    2003-01-01

    Computational grids process large, computationally intensive problems on small data sets. In contrast, data grids process large computational problems that in turn require evaluating, mining and producing large amounts of data. Replication, creating geographically disparate identical copies of data, is regarded as one of the major optimization techniques for reducing data access costs. In this paper, several replication algorithms are discussed. These algorithms were studied using the Grid simulator: OptorSim. OptorSim provides a modular framework within which optimization strategies can be studied under different Grid configurations. The goal is to explore the stability and transient behaviour of selected optimization techniques. We detail the design and implementation of OptorSim and analyze various replication algorithms based on different Grid workloads.
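
    OptorSim's own optimisation strategies are not detailed in the abstract, so the sketch below illustrates just one simple member of the family of algorithms such a simulator can compare: replicate a file to the local site on access and evict the least-recently-used replica when storage runs out. The SiteCache class and its parameters are hypothetical, not OptorSim code.

from collections import OrderedDict

class SiteCache:
    """Least-recently-used replica store for one grid site (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.replicas = OrderedDict()   # filename -> size

    def access(self, filename, size):
        if filename in self.replicas:             # local replica already present
            self.replicas.move_to_end(filename)
            return "hit"
        while sum(self.replicas.values()) + size > self.capacity and self.replicas:
            self.replicas.popitem(last=False)     # evict least recently used replica
        self.replicas[filename] = size            # replicate the file to this site
        return "miss"

site = SiteCache(capacity=10)
for f in ["a", "b", "a", "c", "d", "a"]:
    print(f, site.access(f, size=4))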

  2. Methodology of problem-based learning engineering and technology and of its implementation with modern computer resources

    Science.gov (United States)

    Lebedev, A. A.; Ivanova, E. G.; Komleva, V. A.; Klokov, N. M.; Komlev, A. A.

    2017-01-01

    The considered method for learning the basics of microelectronic circuits and amplifier systems enables students to understand electrical processes more deeply, to grasp the relationship between static and dynamic characteristics and, ultimately, turns learning into a genuinely cognitive process. The scheme of problem-based learning can be represented by the following sequence of procedures: a contradiction is perceived and revealed; cognitive motivation is provided by creating a problematic situation (a mental state of the student) that stirs the desire to solve the problem and to ask the question "why?"; a hypothesis is made; searches for solutions are carried out; and an answer is sought. Because of the complexity of the architectural schemes involved, modern methods of computer-aided analysis and synthesis are also considered. Examples are given of analog circuits with improved performance engineered by students, within the framework of student scientific and research work, using both standard software and software developed at the Department of Microelectronics at MEPhI.

  3. NACSA Charter School Replication Guide: The Spectrum of Replication Options. Authorizing Matters. Replication Brief 1

    Science.gov (United States)

    O'Neill, Paul

    2010-01-01

    One of the most important and high-profile issues in public education reform today is the replication of successful public charter school programs. With more than 5,000 failing public schools in the United States, there is a tremendous need for strong alternatives for parents and students. Replicating successful charter school models is an…

  4. Assessment Planning and Evaluation of Renewable Energy Resources: an Interactive Computer Assisted Procedure. [hydroelectricity, biomass, and windpower in the Pittsfield metropolitan region, Massachusetts

    Science.gov (United States)

    Aston, T. W.; Fabos, J. G.; Macdougall, E. B.

    1982-01-01

    Adaptation and derivation were used to develop a procedure for assessing the availability of renewable energy resources on the landscape while simultaneously accounting for the economic, legal, social, and environmental issues required. Done in a step-by-step fashion, the procedure can be used interactively at computer terminals. Its application in determining the hydroelectricity, biomass, and windpower in a 40,000 acre study area of Western Massachusetts shows that: (1) three existing dam sites are physically capable of being retrofitted for hydropower; (2) each of three general areas has a mean annual windspeed exceeding 14 mph and is conducive to windpower; and (3) 20% of the total land area consists of prime agricultural biomass land while 30% of the area is prime forest biomass land.

  6. Distributed Computing.

    Science.gov (United States)

    Ryland, Jane N.

    1988-01-01

    The microcomputer revolution, in which small and large computers have gained tremendously in capability, has created a distributed computing environment. This circumstance presents administrators with the opportunities and the dilemmas of choosing appropriate computing resources for each situation. (Author/MSE)

  7. International Expansion through Flexible Replication

    DEFF Research Database (Denmark)

    Jonsson, Anna; Foss, Nicolai Juul

    2011-01-01

    to local environments and under the impact of new learning. To illuminate these issues, we draw on a longitudinal in-depth study of Swedish home furnishing giant IKEA, involving more than 70 interviews. We find that IKEA has developed organizational mechanisms that support an ongoing learning process aimed......, etc.) are replicated in a uniform manner across stores, and change only very slowly (if at all) in response to learning (“flexible replication”). We conclude by discussing the factors that influence the approach to replication adopted by an international replicator....

  8. The Psychology of Replication and Replication in Psychology.

    Science.gov (United States)

    Francis, Gregory

    2012-11-01

    Like other scientists, psychologists believe experimental replication to be the final arbiter for determining the validity of an empirical finding. Reports in psychology journals often attempt to prove the validity of a hypothesis or theory with multiple experiments that replicate a finding. Unfortunately, these efforts are sometimes misguided because in a field like experimental psychology, ever more successful replication does not necessarily ensure the validity of an empirical finding. When psychological experiments are analyzed with statistics, the rules of probability dictate that random samples should sometimes be selected that do not reject the null hypothesis, even if an effect is real. As a result, it is possible for a set of experiments to have too many successful replications. When there are too many successful replications for a given set of experiments, a skeptical scientist should be suspicious that null or negative findings have been suppressed, the experiments were run improperly, or the experiments were analyzed improperly. This article describes the implications of this observation and demonstrates how to test for too much successful replication by using a set of experiments from a recent research paper.

  9. Regulation of Replication Recovery and Genome Integrity

    DEFF Research Database (Denmark)

    Colding, Camilla Skettrup

    Preserving genome integrity is essential for cell survival. To this end, mechanisms that supervise DNA replication and respond to replication perturbations have evolved. One such mechanism is the replication checkpoint, which responds to DNA replication stress and acts to ensure replication pausing...

  10. Exploring Tradeoffs in Demand-side and Supply-side Management of Urban Water Resources using Agent-based Modeling and Evolutionary Computation

    Science.gov (United States)

    Kanta, L.; Berglund, E. Z.

    2015-12-01

    Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
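
    As a toy illustration of the two trigger decisions described above (not the Arlington model or the evolutionary search), the following Python sketch steps a single reservoir through monthly inflows, pumping an inter-basin transfer when storage falls below one level and restricting outdoor use below another. All parameter values are invented.

def simulate(inflows, demand, capacity=100.0, transfer_trigger=60.0,
             restriction_trigger=40.0, transfer_volume=5.0, restriction_factor=0.7):
    """Toy monthly reservoir balance with the two trigger levels described above.
    All parameter values are illustrative assumptions, not Arlington data."""
    storage, pumped, restricted_months = capacity, 0.0, 0
    for inflow in inflows:
        use = demand
        if storage < restriction_trigger:        # drought stage: restrict outdoor use
            use *= restriction_factor
            restricted_months += 1
        if storage < transfer_trigger:           # trigger an inter-basin transfer
            storage += transfer_volume
            pumped += transfer_volume
        storage = min(capacity, max(0.0, storage + inflow - use))
    return storage, pumped, restricted_months

print(simulate(inflows=[2, 1, 0, 0, 3, 8, 12, 6, 2, 1, 0, 4], demand=6.0))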

  11. Research and practice on Web-based computing resource publishing

    Institute of Scientific and Technical Information of China (English)

    吴志刚; 方滨兴; 马涛

    2001-01-01

    The fast-developing World Wide Web (Web) provides an open, consistent platform for publishing Web computing resources. We put forward a Web computing resource agent publishing model. To improve the availability and reliability of the agents in this model, we designed a two-level tree agent structure and a primary-slave agent structure, and on this basis implemented a prototype system, WCRPS.

  12. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    Science.gov (United States)

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system.
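
    The features and regression network of the published systems are not given in the abstract, so the sketch below only illustrates the generic pattern of guiding a parameter search with a trained regressor: candidate segmentations are summarised as feature vectors, a regressor predicts their quality, and the best-scoring candidate is kept. scikit-learn's MLPRegressor and the synthetic data are stand-in assumptions, not the authors' RNN.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed training data: feature vectors computed from candidate segmentations,
# paired with their known overlap score against a reference segmentation.
rng = np.random.default_rng(0)
X_train = rng.random((200, 5))                              # 5 made-up features
y_train = X_train @ np.array([0.4, 0.3, 0.1, 0.1, 0.1])     # synthetic target score

scorer = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scorer.fit(X_train, y_train)

def pick_best(candidate_features):
    # choose the candidate segmentation that the trained regressor scores highest
    scores = scorer.predict(np.asarray(candidate_features))
    return int(np.argmax(scores)), float(scores.max())

candidates = rng.random((10, 5))   # features of 10 candidate segmentations
print(pick_best(candidates))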

  13. Biomarkers of replicative senescence revisited

    DEFF Research Database (Denmark)

    Nehlin, Jan

    2016-01-01

    Biomarkers of replicative senescence can be defined as those ultrastructural and physiological variations as well as molecules whose changes in expression, activity or function correlate with aging, as a result of the gradual exhaustion of replicative potential and a state of permanent cell cycle...... with their chronological age and present health status, help define their current rate of aging and contribute to establish personalized therapy plans to reduce, counteract or even avoid the appearance of aging biomarkers....

  14. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. A recently developed digital tool, digital core (dCORE), and image microarray maker (iMAM) enable the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis of the carrying out of a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  15. Using Computer Resources (Spreadsheets) to Comprehend Rational Numbers

    Directory of Open Access Journals (Sweden)

    Rosane Ratzlaff da Rosa

    2008-12-01

    Full Text Available This article reports on an investigation which sought to determine if the use of spreadsheets in the teaching of rational numbers in elementary education contributes to learning and improved learning retention. The study was carried out with a sample of students from two sixth-grade classes in a public school in Porto Alegre. Results indicated that the use of spreadsheets favored learning and made the classes more participatory for the students, who were able to visualize the processes they were working with. A second test applied five months after the first test showed that students who used the spreadsheets had greater learning retention of the contents. The results also show that the students felt comfortable with the technology, and almost all reported that they were more motivated by the use of computers in the classroom, despite less-than-ideal laboratory conditions. Keywords: Rational Numbers. Teaching with Spreadsheets. Teaching Rational Numbers using Spreadsheets.

  16. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development.

    Science.gov (United States)

    Hipp, Jennifer A; Hipp, Jason D; Lim, Megan; Sharma, Gaurav; Smith, Lauren B; Hewitt, Stephen M; Balis, Ulysses G J

    2012-01-01

    Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. A recently developed digital tool, digital core (dCORE), and image microarray maker (iMAM) enable the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis of the carrying out of a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic bodies, was subsequently carried out on the

  17. Nucleotide Metabolism and DNA Replication.

    Science.gov (United States)

    Warner, Digby F; Evans, Joanna C; Mizrahi, Valerie

    2014-10-01

    The development and application of a highly versatile suite of tools for mycobacterial genetics, coupled with widespread use of "omics" approaches to elucidate the structure, function, and regulation of mycobacterial proteins, has led to spectacular advances in our understanding of the metabolism and physiology of mycobacteria. In this article, we provide an update on nucleotide metabolism and DNA replication in mycobacteria, highlighting key findings from the past 10 to 15 years. In the first section, we focus on nucleotide metabolism, ranging from the biosynthesis, salvage, and interconversion of purine and pyrimidine ribonucleotides to the formation of deoxyribonucleotides. The second part of the article is devoted to DNA replication, with a focus on replication initiation and elongation, as well as DNA unwinding. We provide an overview of replication fidelity and mutation rates in mycobacteria and summarize evidence suggesting that DNA replication occurs during states of low metabolic activity, and conclude by suggesting directions for future research to address key outstanding questions. Although this article focuses primarily on observations from Mycobacterium tuberculosis, it is interspersed, where appropriate, with insights from, and comparisons with, other mycobacterial species as well as better characterized bacterial models such as Escherichia coli. Finally, a common theme underlying almost all studies of mycobacterial metabolism is the potential to identify and validate functions or pathways that can be exploited for tuberculosis drug discovery. In this context, we have specifically highlighted those processes in mycobacterial DNA replication that might satisfy this critical requirement.

  18. Plasmid Rolling-Circle Replication.

    Science.gov (United States)

    Ruiz-Masó, J A; MachóN, C; Bordanaba-Ruiseco, L; Espinosa, M; Coll, M; Del Solar, G

    2015-02-01

    Plasmids are DNA entities that undergo controlled replication independent of the chromosomal DNA, a crucial step that guarantees the prevalence of the plasmid in its host. DNA replication has to cope with the incapacity of the DNA polymerases to start de novo DNA synthesis, and different replication mechanisms offer diverse solutions to this problem. Rolling-circle replication (RCR) is a mechanism adopted by certain plasmids, among other genetic elements, that represents one of the simplest initiation strategies, that is, the nicking by a replication initiator protein on one parental strand to generate the primer for leading-strand initiation and a single priming site for lagging-strand synthesis. All RCR plasmid genomes consist of a number of basic elements: leading strand initiation and control, lagging strand origin, phenotypic determinants, and mobilization, generally in that order of frequency. RCR has been mainly characterized in Gram-positive bacterial plasmids, although it has also been described in Gram-negative bacterial or archaeal plasmids. Here we aim to provide an overview of the RCR plasmids' lifestyle, with emphasis on their characteristic traits, promiscuity, stability, utility as vectors, etc. While RCR is one of the best-characterized plasmid replication mechanisms, there are still many questions left unanswered, which will be pointed out along the way in this review.

  19. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  20. Enhanced Three Tier Security Architecture for WSN Against Mobile Sink Replication Attacks Using Mutual Authentication Scheme

    Directory of Open Access Journals (Sweden)

    Linciya.T

    2013-05-01

    Full Text Available Recent developments in Wireless Sensor Networks have led to their application in a wide range of areas such as military sensing and tracking, health monitoring, traffic monitoring, video surveillance and so on. Wireless sensor nodes are restricted in computational resources and are often deployed in harsh, unattended or unfriendly environments. Network security therefore becomes a tough task, and it involves the authorization of access to data in the network. The problem of authentication and pairwise key establishment in sensor networks with mobile sinks is still not solved in the presence of mobile sink replication attacks. In the q-composite key pre-distribution scheme, a large number of keys are compromised by capturing a small fraction of sensor nodes, and the attacker can easily take control of the entire network by deploying replicated mobile sinks: mobile sinks preloaded with compromised keys are used to authenticate and initiate data communication with sensor nodes. To address this problem, the system adduces a three-tier security framework for authentication and pairwise key establishment between mobile sinks and sensor nodes. The previous system used a polynomial key pre-distribution scheme for sensor networks, which handles sink mobility and continuous data delivery to the neighbouring nodes and sinks, but this scheme incurs high computational cost and reduces the lifetime of sensors. To overcome this problem, a random pairwise key pre-distribution scheme is suggested, which further helps to improve network resilience. In addition, Identity Based Encryption is used to encrypt the data, and a mutual authentication scheme is proposed for the identification and isolation of replicated mobile sinks from the network.
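
    As an illustration of the random pairwise key pre-distribution idea the abstract favours over q-composite pools (the mutual-authentication protocol and the IBE layer are not sketched), the following Python fragment assigns an independent 128-bit key to a random subset of node pairs, so capturing one node exposes only that node's own keys. Node IDs and the pairing probability are made up.

import os
import random

def predistribute_pairwise_keys(node_ids, pair_probability=0.3, seed=None):
    """Assign an independent random key to a random subset of node pairs.
    Capturing one node reveals only that node's keys, unlike q-composite pools."""
    rng = random.Random(seed)
    keyrings = {n: {} for n in node_ids}
    for i, a in enumerate(node_ids):
        for b in node_ids[i + 1:]:
            if rng.random() < pair_probability:
                k = os.urandom(16)               # 128-bit pairwise key
                keyrings[a][b] = k
                keyrings[b][a] = k
    return keyrings

rings = predistribute_pairwise_keys(["n1", "n2", "n3", "n4", "sink"], seed=7)
print({node: sorted(peers) for node, peers in rings.items()})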

  1. The Interstellar Ethics of Self-Replicating Probes

    Science.gov (United States)

    Cooper, K.

    Robotic spacecraft have been our primary means of exploring the Universe for over 50 years. Should interstellar travel become reality it seems unlikely that humankind will stop using robotic probes. These probes will be able to replicate themselves ad infinitum by extracting raw materials from the space resources around them and reconfiguring them into replicas of themselves, using technology such as 3D printing. This will create a colonising wave of probes across the Galaxy. However, such probes could have negative as well as positive consequences and it is incumbent upon us to factor self-replicating probes into our interstellar philosophies and to take responsibility for their actions.

  2. Research on digital education resources management in colleges and universities based on cloud computing

    Institute of Scientific and Technical Information of China (English)

    王凤领

    2016-01-01

    The research aims to realize integrated management of digital educational resources, reduce the cost of constructing and maintaining them, and improve the overall efficiency of digital education resource management. Combining cloud computing technology with the current state of digital education resource management in colleges and universities, the paper analyzes the problems existing in the management of these resources, presents a feasibility analysis of cloud-based digital education management, and proposes a cloud computing resource management scheme. The approach promotes the unified management of digital educational resources, improves their utilization rate, enables resource sharing, and thereby raises the overall level of digital education resource management.

  3. Exploring Tradeoffs in Demand-Side and Supply-Side Management of Urban Water Resources Using Agent-Based Modeling and Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Lufthansa Kanta

    2015-11-01

    Full Text Available Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger: (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir; and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.

  4. Identification of Proteins at Active, Stalled, and Collapsed Replication Forks Using Isolation of Proteins on Nascent DNA (iPOND) Coupled with Mass Spectrometry*

    Science.gov (United States)

    Sirbu, Bianca M.; McDonald, W. Hayes; Dungrawala, Huzefa; Badu-Nkansah, Akosua; Kavanaugh, Gina M.; Chen, Yaoyi; Tabb, David L.; Cortez, David

    2013-01-01

    Both DNA and chromatin need to be duplicated during each cell division cycle. Replication happens in the context of defects in the DNA template and other forms of replication stress that present challenges to both genetic and epigenetic inheritance. The replication machinery is highly regulated by replication stress responses to accomplish this goal. To identify important replication and stress response proteins, we combined isolation of proteins on nascent DNA (iPOND) with quantitative mass spectrometry. We identified 290 proteins enriched on newly replicated DNA at active, stalled, and collapsed replication forks. Approximately 16% of these proteins are known replication or DNA damage response proteins. Genetic analysis indicates that several of the newly identified proteins are needed to facilitate DNA replication, especially under stressed conditions. Our data provide a useful resource for investigators studying DNA replication and the replication stress response and validate the use of iPOND combined with mass spectrometry as a discovery tool. PMID:24047897

  5. Task scheduling for workflows in a computing resource sharing platform

    Institute of Scientific and Technical Information of China (English)

    周智刚

    2011-01-01

    A new scalable scheduler for workflow tasks with time constraints in a computing resource sharing platform is proposed and described. It is built upon a tree-based P2P overlay that supports efficient and fast aggregation of resource availability information. A two-layered architecture with local and global schedulers is also presented: local scheduling defines policies at the execution-node level, while global scheduling matches workflow tasks with suitable execution nodes. The local scheduler in each node provides its available time intervals to the distributed global scheduler, which summarizes them in the aggregation process. Deadline constraints and the correct timing of tasks in workflows are guaranteed by a suitable distributed management of the availability time intervals of resources. Simulation results show that fast response times and low overhead are obtained in a system with hundreds of nodes, for fork-join and equation-solver-like applications.
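
    The aggregation step described above, with each node forwarding availability intervals up a tree overlay, can be illustrated with the following minimal Python sketch; the dictionary-based tree and the [start, end) interval convention are assumptions for illustration, not the paper's data structures.

def merge_intervals(intervals):
    """Union of [start, end) availability intervals reported by child nodes."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

def aggregate(node):
    """Post-order aggregation over a tree overlay: each node reports its own free
    intervals plus everything its children reported (illustrative structure)."""
    intervals = list(node.get("free", []))
    for child in node.get("children", []):
        intervals.extend(aggregate(child))
    return merge_intervals(intervals)

overlay = {"free": [(0, 4)], "children": [
    {"free": [(3, 8)], "children": []},
    {"free": [(10, 12)], "children": []}]}
print(aggregate(overlay))   # -> [[0, 8], [10, 12]]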

  6. Defects of mitochondrial DNA replication.

    Science.gov (United States)

    Copeland, William C

    2014-09-01

    Mitochondrial DNA is replicated by DNA polymerase γ in concert with accessory proteins such as the mitochondrial DNA helicase, single-stranded DNA binding protein, topoisomerase, and initiating factors. Defects in mitochondrial DNA replication or nucleotide metabolism can cause mitochondrial genetic diseases due to mitochondrial DNA deletions, point mutations, or depletion, which ultimately cause loss of oxidative phosphorylation. These genetic diseases include mitochondrial DNA depletion syndromes such as Alpers or early infantile hepatocerebral syndromes, and mitochondrial DNA deletion disorders, such as progressive external ophthalmoplegia, ataxia-neuropathy, or mitochondrial neurogastrointestinal encephalomyopathy. This review focuses on our current knowledge of genetic defects of mitochondrial DNA replication (POLG, POLG2, C10orf2, and MGME1) that cause instability of mitochondrial DNA and mitochondrial disease.

  7. Regulation of beta cell replication

    DEFF Research Database (Denmark)

    Lee, Ying C; Nielsen, Jens Høiriis

    2008-01-01

    Beta cell mass, at any given time, is governed by cell differentiation, neogenesis, increased or decreased cell size (cell hypertrophy or atrophy), cell death (apoptosis), and beta cell proliferation. Nutrients, hormones and growth factors coupled with their signalling intermediates have been...... suggested to play a role in beta cell mass regulation. In addition, genetic mouse model studies have indicated that cyclins and cyclin-dependent kinases that determine cell cycle progression are involved in beta cell replication, and more recently, menin in association with cyclin-dependent kinase...... inhibitors has been demonstrated to be important in beta cell growth. In this review, we consider and highlight some aspects of cell cycle regulation in relation to beta cell replication. The role of cell cycle regulation in beta cell replication is mostly from studies in rodent models, but whether...

  8. Shell Separation for Mirror Replication

    Science.gov (United States)

    1999-01-01

    NASA's Space Optics Manufacturing Center has been working to expand our view of the universe via sophisticated new telescopes. The Optics Center's goal is to develop low-cost, advanced space optics technologies for the NASA program in the 21st century - including the long-term goal of imaging Earth-like planets in distant solar systems. To reduce the cost of mirror fabrication, Marshall Space Flight Center (MSFC) has developed replication techniques, the machinery, and materials to replicate electro-formed nickel mirrors. Optics replication uses reusable forms, called mandrels, to make telescope mirrors ready for final finishing. MSFC optical physicist Bill Jones monitors a device used to chill a mandrel, causing it to shrink and separate from the telescope mirror without deforming the mirror's precisely curved surface.

  9. GRID COMPUTING AND CHECKPOINT APPROACH

    Directory of Open Access Journals (Sweden)

    Pankaj gupta

    2011-05-01

    Full Text Available Grid computing is a means of allocating the computational power of a large number of computers to complex, difficult computations or problems. Grid computing is a distributed computing paradigm that differs from traditional distributed computing in that it is aimed toward large-scale systems that may even span organizational boundaries. In this paper we investigate the different fault tolerance techniques used in many real-time distributed systems. The main focus is on the types of fault occurring in the system, fault detection techniques, and the recovery techniques used. A fault, whether caused by link failure, resource failure or any other reason, has to be tolerated for the system to keep working smoothly and accurately. These faults can be detected and recovered by many techniques applied accordingly. An appropriate fault detector can avoid losses due to system crashes, and a reliable fault tolerance technique can save the system from failure. This paper describes how these methods are applied to detect and tolerate faults in various real-time distributed systems. The advantages of utilizing the checkpointing functionality are obvious; however, so far the Grid community has not developed a widely accepted standard that would allow the Grid environment to consciously utilize low-level checkpointing packages. Therefore, such a standard, named the Grid Checkpointing Architecture, is being designed. The fault tolerance mechanism used here sets the job checkpoints based on the resource failure rate. If a resource failure occurs, the job is restarted from its last successful state using a checkpoint file from another grid resource. A critical aspect for automatic recovery is the availability of checkpoint files; a strategy to increase the availability of checkpoints is replication. A grid is a form of distributed computing used mainly to virtualize and utilize geographically distributed idle resources. A grid is a distributed computational and storage environment often composed of
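
    The Python sketch below illustrates the mechanism described above in toy form: the checkpoint interval is derived from the resource failure rate (here using Young's classic first-order approximation), and on a simulated failure the job rolls back to its last checkpoint instead of restarting from scratch. Time units, rates and costs are invented, and this is not the Grid Checkpointing Architecture itself.

import math
import random

def checkpoint_interval(failure_rate, checkpoint_cost):
    # Young's first-order approximation for a near-optimal checkpoint interval
    return math.sqrt(2 * checkpoint_cost / failure_rate)

def run_job(work_units, failure_rate, checkpoint_cost, seed=3):
    rng = random.Random(seed)
    interval = checkpoint_interval(failure_rate, checkpoint_cost)
    done, since_ckpt, elapsed = 0.0, 0.0, 0.0
    while done < work_units:
        if rng.random() < failure_rate:        # resource failure this time step
            done -= since_ckpt                 # roll back to the last checkpoint
            since_ckpt = 0.0
        else:
            done += 1.0
            since_ckpt += 1.0
            if since_ckpt >= interval:         # write a checkpoint
                elapsed += checkpoint_cost
                since_ckpt = 0.0
        elapsed += 1.0
    return elapsed

print("time with checkpointing:", run_job(500, failure_rate=0.01, checkpoint_cost=5))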

  10. Multiprocessor Real-Time Locking Protocols for Replicated Resources

    Science.gov (United States)

    2016-07-01

    Unnecessary s-blocking can be reduced by employing a cutting-ahead mechanism [8] that sometimes allows a newly issued request to be ordered before...of replicas requested and released, respectively, up to the current time. These counters are updated atomically using fetch&add instructions. As...than blocking, provided we are able to reap the benefits of rare blocking analytically. In particular, if worst-case s-blocking is pessimistically

  11. Personality and Academic Motivation: Replication, Extension, and Replication

    Science.gov (United States)

    Jones, Martin H.; McMichael, Stephanie N.

    2015-01-01

    Previous work examines the relationships between personality traits and intrinsic/extrinsic motivation. We replicate and extend previous work to examine how personality may relate to achievement goals, efficacious beliefs, and mindset about intelligence. Approximately 200 undergraduates responded to the survey, with 150 participants replicating…

  12. Replication of proto-RNAs sustained by ligase-helicase cycle in oligomer world

    OpenAIRE

    Sato, Daisuke; Narikiyo, Osamu

    2013-01-01

    A mechanism for the replication of proto-RNAs in an oligomer world is proposed. The replication is carried out by a minimal cycle which is sustained by a ligase and a helicase. We expect that such a cycle actually worked in the primordial soup and can be constructed in vitro. In computer simulations, the products of the replication acquire diversity and complexity. Such diversity and complexity are the basis of evolution.

  13. Infrastructure Design of a Water Resources Management System Based on Private Cloud Services

    Institute of Scientific and Technical Information of China (English)

    杨汝洁; 杨云江

    2013-01-01

    Traditional water resources management systems do not make full use of water conservancy and reservoir data, and the underlying water resources databases are scattered and non-standardized, which makes centralized management and scheduling difficult. Cloud computing undoubtedly offers a better solution to these problems. This paper proposes an infrastructure for a water resources management system based on private cloud services and, following this architecture, builds an experimental platform for the system on a private cloud.

  14. Regulation of Replication Recovery and Genome Integrity

    DEFF Research Database (Denmark)

    Colding, Camilla Skettrup

    facilitate replication recovery after MMS-induced replication stress. Our data reveal that control of Mrc1 turnover through the interplay between posttranslational modifications and INQ localization adds another layer of regulation to the replication checkpoint. We also add replication recovery to the list...... is mediated by Mrc1, which ensures Mec1 presence at the stalled replication fork thus facilitating Rad53 phosphorylation. When replication can be resumed safely, the replication checkpoint is deactivated and replication forks restart. One mechanism for checkpoint deactivation is the ubiquitin......-targeted proteasomal degradation of Mrc1. In this study, we describe a novel nuclear structure, the intranuclear quality control compartment (INQ), which regulates protein turnover and is important for recovery after replication stress. We find that upon methyl methanesulfonate (MMS)-induced replication stress, INQ...

  15. Verifying likelihoods for low template DNA profiles using multiple replicates

    Science.gov (United States)

    Steele, Christopher D.; Greenhalgh, Matthew; Balding, David J.

    2014-01-01

    To date there is no generally accepted method to test the validity of algorithms used to compute likelihood ratios (LR) evaluating forensic DNA profiles from low-template and/or degraded samples. An upper bound on the LR is provided by the inverse of the match probability, which is the usual measure of weight of evidence for standard DNA profiles not subject to the stochastic effects that are the hallmark of low-template profiles. However, even for low-template profiles the LR in favour of a true prosecution hypothesis should approach this bound as the number of profiling replicates increases, provided that the queried contributor is the major contributor. Moreover, for sufficiently many replicates the standard LR for mixtures is often surpassed by the low-template LR. It follows that multiple LTDNA replicates can provide stronger evidence for a contributor to a mixture than a standard analysis of a good-quality profile. Here, we examine the performance of the likeLTD software for up to eight replicate profiling runs. We consider simulated and laboratory-generated replicates as well as resampling replicates from a real crime case. We show that LRs generated by likeLTD usually do exceed the mixture LR given sufficient replicates, are bounded above by the inverse match probability and do approach this bound closely when this is expected. We also show good performance of likeLTD even when a large majority of alleles are designated as uncertain, and suggest that there can be advantages to using different profiling sensitivities for different replicates. Overall, our results support both the validity of the underlying mathematical model and its correct implementation in the likeLTD software. PMID:25082140
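
    The following Python sketch is not the likeLTD model; under a strong simplifying assumption (replicates treated as independent given each hypothesis) it only illustrates the two checks discussed above: per-replicate likelihood ratios combine multiplicatively, and the combined LR can be compared against its upper bound, the inverse match probability. The numbers are invented.

def combined_lr(replicate_lrs, match_probability):
    """Multiply per-replicate likelihood ratios (assuming replicates are independent
    given each hypothesis -- an illustrative simplification, not the likeLTD model)
    and report how close the result gets to the 1 / match-probability upper bound."""
    lr = 1.0
    for r in replicate_lrs:
        lr *= r
    bound = 1.0 / match_probability
    return lr, bound, lr / bound

lr, bound, fraction = combined_lr([30.0, 45.0, 28.0, 60.0], match_probability=1e-9)
print(f"combined LR {lr:.3g}, upper bound {bound:.3g}, fraction of bound {fraction:.2%}")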

  16. Research on a digital teaching resource library model for distance education based on cloud computing

    Institute of Scientific and Technical Information of China (English)

    刘晓丹; 蒋漪涟

    2015-01-01

    To realize the sharing of learning resources in distance education and meet the needs of different learners, this paper studies the digital teaching resource library model for distance education. It discusses the construction of the resource library model from three aspects (cloud computing technology, the resource library model, and key technologies), describes the logical architecture and platform architecture of the model, and points out the broad prospects for applying cloud computing technology to the construction of distance education resources.

  17. Hyperthermia stimulates HIV-1 replication.

    Directory of Open Access Journals (Sweden)

    Ferdinand Roesch

    Full Text Available HIV-infected individuals may experience fever episodes. Fever is an elevation of the body temperature accompanied by inflammation. It is usually beneficial for the host through enhancement of immunological defenses. In cultures, transient non-physiological heat shock (42-45°C) and Heat Shock Proteins (HSPs) modulate HIV-1 replication, through poorly defined mechanisms. The effect of physiological hyperthermia (38-40°C) on HIV-1 infection has not been extensively investigated. Here, we show that culturing primary CD4+ T lymphocytes and cell lines at a fever-like temperature (39.5°C) increased the efficiency of HIV-1 replication by 2 to 7 fold. Hyperthermia did not facilitate viral entry nor reverse transcription, but increased Tat transactivation of the LTR viral promoter. Hyperthermia also boosted HIV-1 reactivation in a model of latently-infected cells. By imaging HIV-1 transcription, we further show that Hsp90 co-localized with actively transcribing provirus, and this phenomenon was enhanced at 39.5°C. The Hsp90 inhibitor 17-AAG abrogated the increase of HIV-1 replication in hyperthermic cells. Altogether, our results indicate that fever may directly stimulate HIV-1 replication, in a process involving Hsp90 and facilitation of Tat-mediated LTR activity.

  18. Hyperthermia stimulates HIV-1 replication.

    Science.gov (United States)

    Roesch, Ferdinand; Meziane, Oussama; Kula, Anna; Nisole, Sébastien; Porrot, Françoise; Anderson, Ian; Mammano, Fabrizio; Fassati, Ariberto; Marcello, Alessandro; Benkirane, Monsef; Schwartz, Olivier

    2012-01-01

    HIV-infected individuals may experience fever episodes. Fever is an elevation of the body temperature accompanied by inflammation. It is usually beneficial for the host through enhancement of immunological defenses. In cultures, transient non-physiological heat shock (42-45°C) and Heat Shock Proteins (HSPs) modulate HIV-1 replication, through poorly defined mechanisms. The effect of physiological hyperthermia (38-40°C) on HIV-1 infection has not been extensively investigated. Here, we show that culturing primary CD4+ T lymphocytes and cell lines at a fever-like temperature (39.5°C) increased the efficiency of HIV-1 replication by 2 to 7 fold. Hyperthermia did not facilitate viral entry nor reverse transcription, but increased Tat transactivation of the LTR viral promoter. Hyperthermia also boosted HIV-1 reactivation in a model of latently-infected cells. By imaging HIV-1 transcription, we further show that Hsp90 co-localized with actively transcribing provirus, and this phenomenon was enhanced at 39.5°C. The Hsp90 inhibitor 17-AAG abrogated the increase of HIV-1 replication in hyperthermic cells. Altogether, our results indicate that fever may directly stimulate HIV-1 replication, in a process involving Hsp90 and facilitation of Tat-mediated LTR activity.

  19. Cellular Responses to Replication Problems

    NARCIS (Netherlands)

    M. Budzowska (Magdalena)

    2008-01-01

    During every S-phase, cells need to duplicate their genomes so that both daughter cells inherit complete copies of the genetic information. It is a tremendous task, given the large sizes of mammalian genomes and the required precision of DNA replication. A major threat to the accuracy and efficiency …

  20. Covert Reinforcement: A Partial Replication.

    Science.gov (United States)

    Ripstra, Constance C.; And Others

    A partial replication of an investigation of the effect of covert reinforcement on a perceptual estimation task is described. The study was extended to include an extinction phase. There were five treatment groups: covert reinforcement, neutral scene reinforcement, noncontingent covert reinforcement, and two control groups. Each subject estimated…

  1. A Review on Modern Distributed Computing Paradigms: Cloud Computing, Jungle Computing and Fog Computing

    OpenAIRE

    Hajibaba, Majid; Gorgin, Saeid

    2014-01-01

    Distributed computing attempts to improve performance in large-scale computing problems by resource sharing. Moreover, rising low-cost computing power, coupled with advances in communications/networking and the advent of big data, now enables new distributed computing paradigms such as Cloud, Jungle and Fog computing. Cloud computing brings a number of advantages to consumers in terms of accessibility and elasticity. It is based on centralization of resources that possess huge processing po...

  2. Crinivirus replication and host interactions

    Directory of Open Access Journals (Sweden)

    Zsofia A Kiss

    2013-05-01

    Full Text Available Criniviruses comprise one of the genera within the family Closteroviridae. Members in this family are restricted to the phloem and rely on whitefly vectors of the genera Bemisia and/or Trialeurodes for plant-to-plant transmission. All criniviruses have bipartite, positive-sense ssRNA genomes, although there is an unconfirmed report of one having a tripartite genome. Lettuce infectious yellows virus (LIYV) is the type species of the genus, the best studied so far of the criniviruses and the first for which a reverse genetics system was available. LIYV RNA 1 encodes proteins predicted to be involved in replication, and alone is competent for replication in protoplasts. Replication results in accumulation of cytoplasmic vesiculated membranous structures which are characteristic of most studied members of the Closteroviridae. These membranous structures, often referred to as BYV-type vesicles, are likely sites of RNA replication. LIYV RNA 2 is replicated in trans when co-infecting cells with RNA 1, but is temporally delayed relative to RNA 1. Efficient RNA 2 replication also is dependent on the RNA 1-encoded RNA binding protein, P34. No LIYV RNA 2-encoded proteins have been shown to affect RNA replication, but at least four, CP, CPm, Hsp70h, and p59, are virion structural components, and CPm is a determinant of whitefly transmissibility. Roles of other LIYV RNA 2-encoded proteins are largely as yet unknown, but P26 is a non-virion protein that accumulates in cells as characteristic plasmalemma deposits which in plants are localized within phloem parenchyma and companion cells over plasmodesmata connections to sieve elements. The two remaining crinivirus-conserved RNA 2-encoded proteins are P5 and P9. P5 is a 39 amino acid protein encoded at the 5' end of RNA 2 as ORF1 and is part of the hallmark closterovirus gene array. The orthologous gene in BYV has been shown to play a role in cell-to-cell movement and indicated to be localized to the …

  3. Genome-wide alterations of the DNA replication program during tumor progression

    Science.gov (United States)

    Arneodo, A.; Goldar, A.; Argoul, F.; Hyrien, O.; Audit, B.

    2016-08-01

    Oncogenic stress is a major driving force in the early stages of cancer development. Recent experimental findings reveal that, in precancerous lesions and cancers, activated oncogenes may induce stalling and dissociation of DNA replication forks, resulting in DNA damage. Replication timing is emerging as an important epigenetic feature that recapitulates several genomic, epigenetic and functional specificities of even closely related cell types. There is increasing evidence that chromosome rearrangements, the hallmark of many cancer genomes, are intimately associated with the DNA replication program and that epigenetic replication timing changes often precede chromosomal rearrangements. The recent development of a novel methodology to map replication fork polarity using deep sequencing of Okazaki fragments has provided new and complementary genome-wide replication profiling data. We review the results of a wavelet-based multi-scale analysis of genomic and epigenetic data including replication profiles along human chromosomes. These results provide new insight into the spatio-temporal replication program and its dynamics during differentiation. Here our goal is to bring to cancer research the experimental protocols and computational methodologies for replication program profiling, as well as the modeling of the spatio-temporal replication program. To illustrate our purpose, we report very preliminary results obtained for chronic myelogenous leukemia, the archetype model of cancer. Finally, we discuss promising perspectives on using genome-wide DNA replication profiling as a novel efficient tool for cancer diagnosis, prognosis and personalized treatment.
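
    As background for the Okazaki-fragment sequencing approach mentioned above, fork polarity is usually summarized as a replication fork directionality (RFD) score computed from strand-specific read counts; the definition below follows the common OK-seq convention and is given here for illustration only (the symbols $C$ and $W$ are not taken from this abstract):

```latex
% Replication fork directionality (RFD), as commonly defined in OK-seq
% analyses (illustrative; not quoted from this abstract). C and W are the
% Okazaki-fragment read counts mapping to the Crick and Watson strands in a
% genomic window:
\[
  \mathrm{RFD} \;=\; \frac{C - W}{C + W}, \qquad -1 \;\le\; \mathrm{RFD} \;\le\; 1 .
\]
% Values near +1 indicate predominantly rightward-moving forks, values near
% -1 predominantly leftward-moving forks, and values near 0 balanced fork
% polarity (e.g. in termination regions).
```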

  4. Replication-Uncoupled Histone Deposition during Adenovirus DNA Replication

    OpenAIRE

    Komatsu, Tetsuro; Nagata, Kyosuke

    2012-01-01

    In infected cells, the chromatin structure of the adenovirus genome DNA plays critical roles in its genome functions. Previously, we reported that in early phases of infection, incoming viral DNA is associated with both viral core protein VII and cellular histones. Here we show that in late phases of infection, newly synthesized viral DNA is also associated with histones. We also found that the knockdown of CAF-1, a histone chaperone that functions in the replication-coupled deposition of his...

  5. REPLICATION TOOL AND METHOD OF PROVIDING A REPLICATION TOOL

    DEFF Research Database (Denmark)

    2016-01-01

    The invention relates to a replication tool (1, 1a, 1b) for producing a part (4) with a microscale textured replica surface (5a, 5b, 5c, 5d). The replication tool (1, 1a, 1b) comprises a tool surface (2a, 2b) defining a general shape of the item. The tool surface (2a, 2b) comprises a microscale structured master surface (3a, 3b, 3c, 3d) having a lateral master pattern and a vertical master profile. The microscale structured master surface (3a, 3b, 3c, 3d) has been provided by localized pulsed laser treatment to generate microscale phase explosions. A method for producing a part with microscale energy directors on flange portions thereof uses the replication tool (1, 1a, 1b) to form an item (4) with a general shape as defined by the tool surface (2a, 2b). The formed item (4) comprises a microscale textured replica surface (5a, 5b, 5c, 5d) with a lateral arrangement of polydisperse microscale …

  6. Security in a Replicated Metadata Catalogue

    CERN Document Server

    Koblitz, B

    2007-01-01

    The gLite-AMGA metadata catalogue has been developed by NA4 to provide simple relational metadata access for the EGEE user community. As advanced features, which will be the focus of this presentation, AMGA provides very fine-grained security also in connection with the built-in support for replication and federation of metadata. AMGA is extensively used by the biomedical community to store medical image metadata and digital libraries, in HEP for logging and bookkeeping data, and in the climate community. The biomedical community intends to deploy a distributed metadata system for medical images consisting of various sites, which range from hospitals to computing centres. Only safe sharing of the highly sensitive metadata as provided in AMGA makes such a scenario possible. Other scenarios are digital libraries, which federate copyright-protected (meta)data into a common catalogue. The biomedical and digital library catalogues have been deployed using a centralized structure already for some time. They now intend to decentralize ...

  7. Construction of a micro-lecture teaching resource platform based on mobile cloud computing

    Institute of Scientific and Technical Information of China (English)

    朱静宜

    2015-01-01

    Mobile cloud computing is a delivery and usage model for information resource services in which mobile terminals obtain the required infrastructure, platforms, software or applications through the mobile network in an on-demand, scalable way. With its efficient data storage and computing power, it has a positive effect on the construction of micro-lecture teaching resource platforms. Against the background of current micro-lecture teaching resource platform construction, and considering the characteristics of mobile cloud computing and micro-lectures, this paper analyzes the overall architecture of the teaching resource platform and describes how it is constructed.

  8. Replication of urban innovations - prioritization of strategies for the replication of Dhaka's community-based decentralized composting model.

    Science.gov (United States)

    Yedla, Sudhakar

    2012-01-01

    Dhaka's community-based decentralized composting (DCDC) is a successful demonstration of solid waste management that adopts low-cost technology, local resources, community participation and partnerships among the various actors involved. This paper attempts to understand the model, the necessary conditions, and the strategies and their priorities for replicating DCDC in other developing cities of Asia. Thirteen strategies required for its replication are identified and assessed against various criteria, namely transferability, longevity, economic viability, adaptation and overall replication. Priority setting through multi-criteria analysis using the analytic hierarchy process revealed that immediate transferability without consideration of longevity and economic viability is not advisable, as this would result in unsustainable replication of DCDC. Based on the analysis, measures to ensure product quality control; partnership among stakeholders (public-private-community); strategies to achieve better involvement of the private sector in solid waste management (an entrepreneurial approach); simple and low-cost technology; and strategies to provide an effective interface among the complementary sectors are identified as important strategies for its replication.

  9. Delay Scheduling Based Replication Scheme for Hadoop Distributed File System

    Directory of Open Access Journals (Sweden)

    S. Suresh

    2015-03-01

    Full Text Available The data generated and processed by modern computing systems burgeon rapidly. MapReduce is an important programming model for large-scale data-intensive applications. Hadoop is a popular open source implementation of MapReduce and the Google File System (GFS). The scalability and fault-tolerance features of Hadoop make it a standard for BigData processing. Hadoop uses the Hadoop Distributed File System (HDFS) for storing data. Data reliability and fault-tolerance are achieved through replication in HDFS. In this paper, a new technique called Delay Scheduling Based Replication Algorithm (DSBRA) is proposed to identify and replicate (de-replicate) the popular (unpopular) files/blocks in HDFS based on the information collected from the scheduler. Experimental results show that the proposed method achieves 13% and 7% improvements in response time and locality, respectively, over existing algorithms.
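
    The abstract does not spell out DSBRA's decision rule, but the core idea of adjusting a block's replication factor from scheduler-observed popularity can be sketched as follows; this is a minimal illustration in which the `BlockStats` structure, the thresholds and the function names are assumptions, not part of the published algorithm:

```python
from dataclasses import dataclass

@dataclass
class BlockStats:
    """Per-block access information as it might be collected from the scheduler."""
    block_id: str
    accesses_per_hour: float   # observed popularity
    replicas: int              # current replication factor

# Illustrative thresholds; the actual DSBRA policy is not described in the abstract.
HOT_THRESHOLD = 50.0     # accesses/hour above which a block counts as "popular"
COLD_THRESHOLD = 1.0     # accesses/hour below which a block counts as "unpopular"
MIN_REPLICAS = 2         # never drop below an HDFS-style minimum redundancy
MAX_REPLICAS = 6         # cap to bound storage overhead

def target_replicas(stats: BlockStats) -> int:
    """Decide a new replication factor from observed popularity."""
    if stats.accesses_per_hour >= HOT_THRESHOLD:
        return min(stats.replicas + 1, MAX_REPLICAS)   # replicate popular blocks
    if stats.accesses_per_hour <= COLD_THRESHOLD:
        return max(stats.replicas - 1, MIN_REPLICAS)   # de-replicate unpopular blocks
    return stats.replicas                              # leave warm blocks unchanged

if __name__ == "__main__":
    for b in [BlockStats("blk_001", 120.0, 3), BlockStats("blk_002", 0.2, 3)]:
        print(b.block_id, "->", target_replicas(b), "replicas")
```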

  10. LHCb: Self managing experiment resources

    CERN Multimedia

    Stagni, F

    2013-01-01

    Within this paper we present an autonomic computing resources management system used by LHCb for assessing the status of its Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites generated the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resource metadata and a Status System (Resource Status System) delivering real time informatio...

  11. Psychological Analysis of Public Library Readers in the Environment of Computer Network Resources

    Institute of Scientific and Technical Information of China (English)

    梁佳

    2012-01-01

    As nonprofit cultural and educational institutions, public libraries are open to the general public: every member of society able to make use of library resources is part of the library's service audience. This nature of the public library means that its readership is broadly social and mass-based. Understanding reader psychology in public libraries in the environment of computer network resources, and grasping the reading tendencies of readers of different ages, provides a basis for public library reader services. Readers differ in composition, reading motivation and purpose, and reading psychology; matching collections of printed literature and networked information resources to readers' psychology and needs is of great significance for improving the utilization of both document collections and network resources across age groups. This paper analyzes reader psychology in public libraries in the environment of computer network resources and further explores how public libraries should better carry out reader services in this environment.

  12. Replicator dynamics in value chains

    DEFF Research Database (Denmark)

    Cantner, Uwe; Savin, Ivan; Vannuccini, Simone

    2016-01-01

    The pure model of replicator dynamics, though providing important insights into the evolution of markets, has not found much empirical support. This paper extends the model to the case of firms vertically integrated in value chains. We show that i) by taking value chains into account, the replicator dynamics may revert its effect. In these regressive developments of market selection, firms with low fitness expand because of being integrated with highly fit partners, and the other way around; ii) allowing partner switching within a value chain illustrates that periods of instability in the early stage of the industry life-cycle may be the result of an 'optimization' of partners within a value chain, providing a novel and simple explanation for the evidence discussed by Mazzucato (1998); iii) there are distinct differences in the contribution to market selection between the layers of a value chain …
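
    For reference, the 'pure' replicator dynamics that the paper extends is conventionally written as below; this is the textbook form, given only as background, and the symbols are introduced here for illustration rather than taken from the paper:

```latex
% Standard replicator dynamics (textbook form, background only):
% x_i is the market share of firm/strategy i, f_i its fitness
% (e.g. competitiveness), and \bar{f} the share-weighted mean fitness.
\[
  \dot{x}_i \;=\; x_i \left( f_i - \bar{f} \right),
  \qquad
  \bar{f} \;=\; \sum_j x_j f_j .
\]
% Shares of above-average-fitness firms grow while below-average ones
% shrink; the value-chain extension in the paper modifies this selection
% effect through vertical integration.
```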

  13. Research on Cloud Computing Solutions for the Integration of Teaching Resources in Senior Vocational Colleges

    Institute of Scientific and Technical Information of China (English)

    乔晓刚

    2011-01-01

    Cloud computing is a core technology of the next-generation network computing platform, characterized by strong computing capability, safe and reliable data storage, fast and convenient cloud services, and the sharing of heterogeneous data. By applying cloud computing, senior vocational colleges can address several current problems with their educational resources, such as uneven distribution of resources, resources that cannot be updated in time, a low degree of sharing, and ineffective operation of teaching resources.

  14. Therapeutic targeting of replicative immortality

    OpenAIRE

    Yaswen, Paul; MacKenzie, Karen L.; Keith, W. Nicol; Hentosh, Patricia; Rodier, Francis; Zhu, Jiyue; Firestone, Gary L.; Matheu, Ander; Carnero, Amancio; Bilsland, Alan; Sundin, Tabetha; Honoki, Kanya; Fujii, Hiromasa; Georgakilas, Alexandros G.; Amedei, Amedeo

    2015-01-01

    One of the hallmarks of malignant cell populations is the ability to undergo continuous proliferation. This property allows clonal lineages to acquire sequential aberrations that can fuel increasingly autonomous growth, invasiveness, and therapeutic resistance. Innate cellular mechanisms have evolved to regulate replicative potential as a hedge against malignant progression. When activated in the absence of normal terminal differentiation cues, these mechanisms can result in a state of persis...

  15. Alphavirus polymerase and RNA replication.

    Science.gov (United States)

    Pietilä, Maija K; Hellström, Kirsi; Ahola, Tero

    2017-01-16

    Alphaviruses are typically arthropod-borne, and many are important pathogens such as chikungunya virus. Alphaviruses encode four nonstructural proteins (nsP1-4), initially produced as a polyprotein P1234. nsP4 is the core RNA-dependent RNA polymerase but all four nsPs are required for RNA synthesis. The early replication complex (RC) formed by the polyprotein P123 and nsP4 synthesizes minus RNA strands, and the late RC composed of fully processed nsP1-nsP4 is responsible for the production of genomic and subgenomic plus strands. Different parts of nsP4 recognize the promoters for minus and plus strands but the binding also requires the other nsPs. The alphavirus polymerase has been purified and is capable of de novo RNA synthesis only in the presence of the other nsPs. The purified nsP4 also has terminal adenylyltransferase activity, which may generate the poly(A) tail at the 3' end of the genome. Membrane association of the nsPs is vital for replication, and alphaviruses induce membrane invaginations called spherules, which form a microenvironment for RNA synthesis by concentrating replication components and protecting double-stranded RNA intermediates. The RCs isolated as crude membrane preparations are active in RNA synthesis in vitro, but high-resolution structure of the RC has not been achieved, and thus the arrangement of viral and possible host components remains unknown. For some alphaviruses, Ras-GTPase-activating protein (Src-homology 3 (SH3) domain)-binding proteins (G3BPs) and amphiphysins have been shown to be essential for RNA replication and are present in the RCs. Host factors offer an additional target for antivirals, as only few alphavirus polymerase inhibitors have been described.

  16. Dynamic replication of Web contents

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The phenomenal growth of the World Wide Web has brought a huge increase in traffic to popular web sites. Long delays and denial of service experienced by end-users, especially during peak hours, continue to be a common problem when accessing popular sites. Replicating some of the objects at multiple sites in a distributed web-server environment is one possible solution to improve response time/latency. The decision of what and where to replicate requires solving a constraint optimization problem, which is NP-complete in general. In this paper, we consider the problem of placing copies of objects in a distributed web server system to minimize the cost of serving read and write requests when the web servers have limited storage capacity. We formulate the problem as a 0-1 optimization problem and present a polynomial time greedy algorithm with backtracking to dynamically replicate objects at the appropriate sites to minimize a cost function. To reduce the solution search space, we present necessary conditions for a site to have a replica of an object in order to minimize the cost function. We present simulation results for a variety of problems to illustrate the accuracy and efficiency of the proposed algorithms and compare them with those of some well-known algorithms. The simulation results demonstrate the superiority of the proposed algorithms.
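
    The abstract does not reproduce the paper's cost model or backtracking step, but the general shape of a greedy 0-1 replica placement under per-site storage limits can be sketched as follows; the cost model, the read/write rates and the capacity handling here are illustrative assumptions, not the published algorithm:

```python
# Greedy 0-1 replica placement sketch (illustrative assumptions, see lead-in).
# Placing a replica of object o at site s saves remote-read cost for local
# reads but adds write-propagation cost and consumes storage; we greedily pick
# the placement with the largest positive net benefit until nothing helpful fits.

from itertools import product

def greedy_placement(sites, objects, size, capacity, read_rate, write_rate,
                     remote_read_cost=1.0, write_cost=0.5):
    """Return a set of (site, object) pairs chosen greedily by net benefit."""
    placement = set()
    used = {s: 0 for s in sites}

    def benefit(s, o):
        gain = read_rate[s][o] * remote_read_cost            # reads become local
        loss = sum(write_rate[t][o] for t in sites) * write_cost  # extra write traffic
        return gain - loss

    while True:
        candidates = [(benefit(s, o), s, o)
                      for s, o in product(sites, objects)
                      if (s, o) not in placement
                      and used[s] + size[o] <= capacity[s]]
        if not candidates:
            break
        best, s, o = max(candidates)
        if best <= 0:          # no remaining placement reduces total cost
            break
        placement.add((s, o))
        used[s] += size[o]
    return placement

if __name__ == "__main__":
    sites, objects = ["s1", "s2"], ["a", "b"]
    size = {"a": 2, "b": 1}
    capacity = {"s1": 2, "s2": 3}
    read_rate = {"s1": {"a": 10, "b": 1}, "s2": {"a": 2, "b": 8}}
    write_rate = {"s1": {"a": 1, "b": 0}, "s2": {"a": 0, "b": 1}}
    print(greedy_placement(sites, objects, size, capacity, read_rate, write_rate))
```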

  17. A study on the construction of a teaching resources warehouse for computer application technology

    Institute of Scientific and Technical Information of China (English)

    黄力明

    2014-01-01

    Building a teaching resources warehouse for the computer application technology specialty helps make full use of excellent teaching resources, improves the quality of specialty teaching, and provides a good environment for independent learning. With the goals of creating a good, convenient learning environment for students and a good course-preparation and resource-sharing environment for teachers, we built the specialty teaching resources warehouse around a teaching resource library and a supporting teaching system platform. Practice shows that the teaching resources warehouse for computer application technology has improved the utilization of teaching resources and better serves teaching.

  18. Evolutionary dynamics of RNA-like replicator systems: A bioinformatic approach to the origin of life.

    Science.gov (United States)

    Takeuchi, Nobuto; Hogeweg, Paulien

    2012-09-01

    We review computational studies on prebiotic evolution, focusing on informatic processes in RNA-like replicator systems. In particular, we consider the following processes: the maintenance of information by replicators with and without interactions, the acquisition of information by replicators having a complex genotype-phenotype map, the generation of information by replicators having a complex genotype-phenotype-interaction map, and the storage of information by replicators serving as dedicated templates. Focusing on these informatic aspects, we review studies on quasi-species, error threshold, RNA-folding genotype-phenotype map, hypercycle, multilevel selection (including spatial self-organization, classical group selection, and compartmentalization), and the origin of DNA-like replicators. In conclusion, we pose a future question for theoretical studies on the origin of life.

  19. RESOURCE SCHEDULING STRATEGY BASED ON OPTIMIZED GENETIC ALGORITHM IN CLOUD COMPUTING ENVIRONMENT

    Institute of Scientific and Technical Information of China (English)

    刘愉; 赵志文; 李小兰; 孔令荣; 于淑环; 于妍芳

    2012-01-01

    Cloud computing is an emerging distributed computing approach that uses virtualization to integrate heterogeneous, distributed resources on the Internet into a supercomputer that provides services to users. Its basic scheme is to divide complex, large computing tasks into smaller sub-tasks, execute them on cloud resources, and return the results to users, so resource scheduling is the core problem in a cloud computing environment. The traditional genetic algorithm (GA) and the Sufferage algorithm can both be used for resource scheduling in this setting, but the traditional genetic algorithm converges slowly and is prone to premature convergence, while Sufferage performs poorly for data-intensive applications in multi-cluster environments. Taking into account the dynamic, heterogeneous nature of the cloud environment and the large-scale tasks it must process, we propose an improved genetic algorithm (IGA) based on a revised chromosome encoding and fitness function, and simulate the three algorithms on CloudSim. The simulation results show that the improved algorithm outperforms the traditional GA and Sufferage in both performance and quality of service (QoS), making it better suited to resource scheduling in a cloud computing environment.
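
    The abstract names chromosome encoding and the fitness function as the points of improvement without defining them; a generic genetic-algorithm skeleton for task-to-resource scheduling of the kind being compared might look like the sketch below, which uses a common textbook encoding and a makespan-based fitness and is not the IGA described in the paper:

```python
import random

# Generic GA skeleton for scheduling n_tasks onto n_vms (illustrative only;
# the encoding and fitness below are common textbook choices, not the paper's IGA).
# Chromosome: list of length n_tasks, gene i = index of the VM running task i.

def make_chromosome(n_tasks, n_vms):
    return [random.randrange(n_vms) for _ in range(n_tasks)]

def fitness(chrom, task_len, vm_speed):
    """Higher is better: inverse of makespan (finish time of the busiest VM)."""
    finish = [0.0] * len(vm_speed)
    for task, vm in enumerate(chrom):
        finish[vm] += task_len[task] / vm_speed[vm]
    return 1.0 / max(finish)

def evolve(task_len, vm_speed, pop_size=40, generations=200,
           crossover_p=0.8, mutation_p=0.02):
    n_tasks, n_vms = len(task_len), len(vm_speed)
    pop = [make_chromosome(n_tasks, n_vms) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: fitness(c, task_len, vm_speed),
                        reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            if random.random() < crossover_p:        # single-point crossover
                cut = random.randrange(1, n_tasks)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [random.randrange(n_vms) if random.random() < mutation_p
                     else gene for gene in child]    # uniform mutation
            children.append(child)
        pop = children
    return max(pop, key=lambda c: fitness(c, task_len, vm_speed))

if __name__ == "__main__":
    best = evolve(task_len=[4, 8, 2, 6, 5], vm_speed=[1.0, 2.0])
    print("best assignment:", best)
```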

  20. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication, in NetLogo and by a different author, of an ABM of fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.