WorldWideScience

Sample records for huge computing resources

  1. Huge cystic craniopharyngioma. Changes of cyst density on computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Takamura, Seishi; Fukumura, Akinobu; Ito, Yoshihiro; Itoyama, Yoichi; Matsukado, Yasuhiko

    1986-06-01

    The findings of computed tomography (CT) of a huge cystic craniopharyngioma in a 57-year-old woman are described. Cyst density varied from low to high levels in a short duration. Follow-up CT scans were regarded as important to diagnose craniopharyngioma. The mechanism of increment of cyst density was discussed.

  2. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important to education, security monitoring, and so on. However, their huge volumes, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS that provides a uniform framework and a five-layer model for standardizing the various current algorithms and applications. The architecture, basic model, and key algorithms are designed for moving video resource management into a cloud computing environment. The design was tested by building a simulation system prototype.
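
    As a concrete illustration of the storage layer such an architecture could rest on, the sketch below stores and retrieves a video file through the standard Hadoop command-line client. It is not the paper's five-layer implementation; the cluster setup, directory names and file names are placeholder assumptions.

```python
# Minimal sketch of HDFS-backed video storage (not the paper's five-layer system).
# Assumes a Hadoop client is installed and configured for the target cluster;
# all paths below are placeholders.
import subprocess

def hdfs(*args):
    """Run an 'hdfs dfs' command and raise if it fails."""
    subprocess.run(["hdfs", "dfs", *args], check=True)

def store_video(local_path: str, archive_dir: str = "/videos/raw"):
    hdfs("-mkdir", "-p", archive_dir)            # ensure the archive directory exists
    hdfs("-put", "-f", local_path, archive_dir)  # upload, overwriting if present

def fetch_video(hdfs_path: str, local_dir: str = "."):
    hdfs("-get", hdfs_path, local_dir)           # download for local processing

if __name__ == "__main__":
    store_video("lecture_001.mp4")
    fetch_video("/videos/raw/lecture_001.mp4")
```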

  3. A huge cystic craniopharyngioma

    International Nuclear Information System (INIS)

    Takamura, Seishi; Fukumura, Akinobu; Ito, Yoshihiro; Itoyama, Yoichi; Matsukado, Yasuhiko.

    1986-01-01

    The findings of computed tomography (CT) of a huge cystic craniopharyngioma in a 57-year-old woman are described. Cyst density varied from low to high levels in a short duration. Follow-up CT scans were regarded as important to diagnose craniopharyngioma. The mechanism of increment of cyst density was discussed. (author)

  4. Aggregated Computational Toxicology Resource (ACTOR)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aggregated Computational Toxicology Resource (ACTOR) is a database on environmental chemicals that is searchable by chemical name and other identifiers, and by...

  5. Aggregated Computational Toxicology Online Resource

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...

  6. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  7. Computer Resources | College of Engineering & Applied Science

    Science.gov (United States)

  8. A tale of two countries : blessed with huge heavy oil resources, Canada and Venezuela pursue different paths

    International Nuclear Information System (INIS)

    Ball, C.

    2005-01-01

    Both Canada and Venezuela are rich in heavy oil resources. This article presented an overview of current development activities in both countries. International interest in the oil sands region has been highlighted by the French oil company Total's acquisition of Deer Creek Energy Ltd in Alberta for $1.35 billion. The acquisition supports the company's strategy of expanding heavy oil operations in the Athabasca region. With 47 per cent participation in the Sincor project, Total is already a major player in Venezuela. Although the Sincor project is one of the world's largest developments, future investment is in jeopardy due to an unpredictable government and shifts in policy by the state-run oil company Petroleos de Venezuela S.A. (PDVSA). The country's energy minister has recently announced that all existing agreements will be terminated as of December 31, 2005. The government has allowed 6 months for companies to enter into new agreements with new terms. Under revised rules, foreign companies will be required to pay income tax at a rate of 50 per cent. The rate will be applied retroactively to profits made over the last 5 years. Under the new law, agreements could be established under the terms of mixed companies, where Venezuela will have majority equity in the company that exploits the oil. In addition, the government has accused companies of not paying the required income tax levels on contracts, and some companies have been fined as much as $100 million. It was suggested that current difficulties are the result of an incoherent energy policy and an unstable regime. The international oil and gas community is watching developments, and it was anticipated that parties previously considering Venezuela as an investment opportunity will now reconsider. By contrast, Alberta has been praised by oil companies for its stable regulatory regime and its reasonable royalty structure. Thanks to a purge of 18,000 employees from PDVSA by Venezuelan president, Alberta is now

  9. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  10. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. In particular, each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.
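
    The kind of trade-off described here can be illustrated with a minimal allocation simulation. The sketch below assigns session requests to clusters with a simple first-fit policy and reports the resulting resource occupation; the cluster capacities and per-session demands are invented numbers, and the paper's own strategies and tools are not reproduced.

```python
# Illustrative sketch (not the paper's tool): first-fit allocation of SDR
# transceiver chains to computing clusters. Cluster sizes and per-session
# demands are made-up numbers.
from dataclasses import dataclass

@dataclass
class Cluster:
    capacity: float          # available processing capacity (arbitrary units)
    used: float = 0.0

    def can_host(self, demand: float) -> bool:
        return self.used + demand <= self.capacity

def first_fit(clusters, demand):
    """Return the index of the first cluster that can host the session, or None."""
    for i, c in enumerate(clusters):
        if c.can_host(demand):
            c.used += demand
            return i
    return None   # session blocked: no cluster has enough spare capacity

clusters = [Cluster(100.0) for _ in range(4)]     # 4 clusters of equal size
sessions = [12.0, 30.0, 45.0, 25.0, 60.0, 8.0]    # processing demand per session
placement = [first_fit(clusters, d) for d in sessions]
occupation = sum(c.used for c in clusters) / sum(c.capacity for c in clusters)
print(placement, f"occupation={occupation:.0%}")
```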

  11. Three pillars for achieving quantum mechanical molecular dynamics simulations of huge systems: Divide-and-conquer, density-functional tight-binding, and massively parallel computation.

    Science.gov (United States)

    Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi

    2016-08-05

    The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
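
    The Fermi-level determination mentioned above amounts to finding the chemical potential at which the Fermi-weighted occupations of the subsystem orbital energies sum to the target electron count. The sketch below solves that condition by plain bisection as a stand-in for the paper's interpolation-based parallel algorithm; the eigenvalues, electronic temperature and electron count are invented.

```python
# Bisection stand-in for the Fermi-level condition used in divide-and-conquer
# methods. The paper's parallel interpolation-based algorithm is more elaborate;
# eigenvalues, electronic temperature and electron count below are invented.
import numpy as np
from scipy.special import expit   # numerically stable logistic function

def electron_count(mu, eigvals, beta):
    """Electrons accommodated for a trial Fermi level mu (spin factor 2)."""
    return 2.0 * np.sum(expit(-beta * (eigvals - mu)))

def fermi_level(eigvals, n_elec, beta=100.0, tol=1e-10):
    """Bisect on mu until the electron count matches n_elec."""
    lo, hi = eigvals.min() - 1.0, eigvals.max() + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if electron_count(mid, eigvals, beta) < n_elec:
            lo = mid          # too few electrons: raise the Fermi level
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
eigvals = np.sort(rng.uniform(-1.0, 1.0, 200))   # fake orbital energies
mu = fermi_level(eigvals, n_elec=180)
print(f"mu = {mu:.6f}  N(mu) = {electron_count(mu, eigvals, 100.0):.3f}")
```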

  12. Resource Management in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Andrei IONESCU

    2015-01-01

    Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications for how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task, as it involves vastly different architectures and cannot be left to human users. Using these resources from applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  13. Statistics Online Computational Resource for Education

    Science.gov (United States)

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  14. Resource allocation in grid computing

    NARCIS (Netherlands)

    Koole, Ger; Righter, Rhonda

    2007-01-01

    Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of

  15. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  16. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is one of the widely used technologies for providing cloud services to users, who are charged for the services they receive. When a large number of resources is involved, the performance of Cloud resource management policies is difficult to evaluate and optimize efficiently. Different simulation toolkits are available for simulating and modelling the Cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In proposed Efficient Resource Manage...

  17. Safety of huge systems

    International Nuclear Information System (INIS)

    Kondo, Jiro.

    1995-01-01

    Recently, accompanying the development of engineering technology, huge systems tend to be constructed. Disaster countermeasures for huge cities are becoming a major problem as the concentration of population in cities grows conspicuous. To keep the expected value of loss small, the knowledge of reliability engineering is applied. In reliability engineering, even if a part of a structure fails, the safety of the whole system must be ensured; therefore, designs with margin are adopted. The degree of margin is called redundancy. However, such a design concept makes the structure of a system complex, and the more complex the structure, the higher the possibility of human error. In designing huge systems, the concept of fail-safe is effective, but simple design must be kept in mind. The accident at the Mihama No. 2 plant of Kansai Electric Power Co., the accident at the Chernobyl nuclear power station, and the fatigue-related breakdown of a Boeing B737 airliner are described. The importance of safety culture is emphasized as a method of preventing human errors. Man-system interfaces and management systems are discussed. (K.I.)

  18. Exploitation of heterogeneous resources for ATLAS Computing

    CERN Document Server

    Chudoba, Jiri; The ATLAS collaboration

    2018-01-01

    LHC experiments require significant computational resources for Monte Carlo simulations and real data processing, and the ATLAS experiment is no exception. In 2017, ATLAS steadily used almost 3M HS06 units, which corresponds to about 300 000 standard CPU cores. The total disk and tape capacity managed by the Rucio data management system exceeded 350 PB. Resources are provided mostly by Grid computing centers distributed across geographically separated locations and connected by the Grid middleware. The ATLAS collaboration developed several systems to manage computational jobs, data files and network transfers. The ATLAS solutions for job and data management (PanDA and Rucio) were generalized and are now also used by other collaborations. More components are needed to include new resources such as private and public clouds, volunteers' desktop computers and, primarily, supercomputers in major HPC centers. Workflows and data flows significantly differ for these less traditional resources and extensive software re...

  19. SOCR: Statistics Online Computational Resource

    OpenAIRE

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis...

  20. Huge interparietal posterior fontanel meningohydroencephalocele

    Directory of Open Access Journals (Sweden)

    Jorge Félix Companioni Rosildo

    2015-03-01

    Congenital encephalocele is a neural tube defect characterized by a sac-like protrusion of the brain, meninges, and other intracranial structures through the skull, caused by an abnormality of embryonic development. The most common location is at the occipital bone, and its incidence varies across world regions. We report a case of a 1-month and 7-day-old male child with a huge interparietal-posterior fontanel meningohydroencephalocele, a rare occurrence. Physical examination and volumetric computed tomography were diagnostic. The encephalocele was surgically resected. Intradural and extradural approaches were performed; the bone defect was not primarily closed. Two days after surgery, the patient developed hydrocephaly requiring ventriculoperitoneal shunting. The surgical treatment of a meningohydroencephalocele of the interparietal-posterior fontanel may be accompanied by technical challenges and followed by complications due to the presence of large blood vessels under the overlying skin. In these cases, huge sacs herniate through large bone defects including meninges, brain, and blood vessels, the latter communicating with the superior sagittal sinus and ventricular system. A favorable surgical outcome generally follows an accurate strategy that takes into account the individual features of the lesion.

  1. Framework of Resource Management for Intercloud Computing

    Directory of Open Access Journals (Sweden)

    Mohammad Aazam

    2014-01-01

    There has been a very rapid increase in digital media content, due to which the media cloud is gaining importance. The cloud computing paradigm provides management of resources and helps create an extended portfolio of services. Through cloud computing, not only are services managed more efficiently, but service discovery is also made possible. To handle the rapid increase in content, the media cloud plays a very vital role. But it is not possible for standalone clouds to handle everything with increasing user demands. For scalability and better service provisioning, clouds at times have to communicate with other clouds and share their resources. This scenario is called Intercloud computing or cloud federation. Research on Intercloud computing is still in its early stages. Resource management is one of the key concerns to be addressed in Intercloud computing, and existing studies discuss this issue only in a trivial and simplistic way. In this study, we present a resource management model that takes into account different types of services, different customer types, customer characteristics, pricing, and refunding. The presented framework was implemented using Java and NetBeans 8.0 and evaluated using the CloudSim 3.0.3 toolkit. The presented results and their discussion validate our model and its efficiency.

  2. Optimised resource construction for verifiable quantum computation

    International Nuclear Information System (INIS)

    Kashefi, Elham; Wallden, Petros

    2017-01-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. (paper)

  3. Big Data in Cloud Computing: A Resource Management Perspective

    Directory of Open Access Journals (Sweden)

    Saeed Ullah

    2018-01-01

    Modern-day advances are increasingly digitizing our lives, which has led to a rapid growth of data. Such multidimensional datasets are precious due to the potential of unearthing new knowledge and developing decision-making insights from them. Analyzing this huge amount of data from multiple sources can help organizations plan for the future and anticipate changing market trends and customer requirements. While the Hadoop framework is a popular platform for processing large datasets, there are a number of other computing infrastructures available for use in various application domains. The primary focus of the study is how to classify major big data resource management systems in the context of a cloud computing environment. We identify some key features which characterize big data frameworks, as well as their associated challenges and issues. We use various evaluation metrics from different aspects to identify usage scenarios of these platforms. The study came up with some interesting findings which are in contradiction with the literature available on the Internet.

  4. VECTR: Virtual Environment Computational Training Resource

    Science.gov (United States)

    Little, William L.

    2018-01-01

    The Westridge Middle School Curriculum and Community Night is an annual event designed to introduce students and parents to potential employers in the Central Florida area. NASA participated in the event in 2017, and has been asked to come back for the 2018 event on January 25. We will be demonstrating our Microsoft Hololens Virtual Rovers project, and the Virtual Environment Computational Training Resource (VECTR) virtual reality tool.

  5. LHCb Computing Resource usage in 2017

    CERN Document Server

    Bozzi, Concezio

    2018-01-01

    This document reports the usage of computing resources by the LHCb collaboration during the period January 1st – December 31st 2017. The data in the following sections have been compiled from the EGI Accounting portal: https://accounting.egi.eu. For LHCb specific information, the data is taken from the DIRAC Accounting at the LHCb DIRAC Web portal: http://lhcb-portal-dirac.cern.ch.

  6. Function Package for Computing Quantum Resource Measures

    Science.gov (United States)

    Huang, Zhiming

    2018-05-01

    In this paper, we present a function package for calculating quantum resource measures and the dynamics of open systems. Our package includes common operators and operator lists, and frequently used functions for computing quantum entanglement, quantum correlation, quantum coherence, quantum Fisher information and dynamics in noisy environments. We briefly explain the functions of the package and illustrate how to use it with several typical examples. We expect this package to be a useful tool for future research and education.
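
    As a standalone illustration of one measure such a package computes, the sketch below evaluates the entanglement entropy of a two-qubit pure state with plain numpy. It is not the authors' package and makes no assumptions about its interface.

```python
# Small numpy illustration of one resource measure mentioned above: the
# entanglement entropy of a bipartite pure state. Not the authors' package.
import numpy as np

def entanglement_entropy(psi, dims=(2, 2)):
    """Von Neumann entropy (in bits) of the reduced state of subsystem A."""
    psi = np.asarray(psi, dtype=complex).reshape(dims)  # |psi> as a dA x dB matrix
    rho_a = psi @ psi.conj().T                           # partial trace over B
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]                         # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
product = np.kron([1, 0], [0, 1])               # |0>|1>
print(entanglement_entropy(bell))               # ~1.0 (maximally entangled)
print(entanglement_entropy(product))            # ~0.0 (no entanglement)
```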

  7. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools has been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  8. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  9. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  10. Huge cystic craniopharyngioma with unusual extensions

    Energy Technology Data Exchange (ETDEWEB)

    Kitano, I.; Yoneda, K.; Yamakawa, Y.; Fukui, M.; Kinoshita, K.

    1981-09-01

    The findings on computed tomography (CT) of a huge cystic craniopharyngioma in a 3-year-old girl are described. The cyst occupied both anterior cranial fossae and a part of it extended to the region of the third ventricle which was displaced posteriorly. The tumor showed no contrast enhancement after the intravenous administration of contrast medium.

  11. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Tupputi, S A; Girolamo, A Di; Kouba, T; Schovancová, J

    2014-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up on problems, where and when needed. In this work we show SAAB working principles and features. We also present the decrease in human interventions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
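
    A toy version of the decision rule such a blacklisting tool applies is sketched below: a storage area's status is derived from the recent history of monitoring test outcomes. The window length, thresholds and status names are invented; SAAB's actual inference algorithm is not reproduced.

```python
# Toy decision rule in the spirit of automatic storage blacklisting: derive a
# status from the recent history of monitoring test results. Window length and
# thresholds are invented; SAAB's actual inference algorithm differs.
from collections import deque

def storage_status(history, window=12, blacklist_frac=0.75, recover_streak=3):
    """history: iterable of booleans, True = test passed (oldest first)."""
    recent = deque(history, maxlen=window)
    if not recent:
        return "unknown"
    failure_frac = 1.0 - sum(recent) / len(recent)
    passed_streak = 0
    for ok in reversed(recent):
        if not ok:
            break
        passed_streak += 1
    if failure_frac >= blacklist_frac:
        return "blacklisted"
    if passed_streak >= recover_streak:
        return "online"
    return "degraded"

print(storage_status([True] * 10 + [False] * 10))   # -> blacklisted
print(storage_status([False] * 5 + [True] * 4))     # -> online
```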

  12. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs WLCG Tier-2 center for the ALICE and the ATLAS experiments; the same group of services is used by astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). OSG stack is installed for the NOvA experiment. Other groups of users use directly local batch system. Storage capacity is distributed to several locations. DPM servers used by the ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for the ATLAS and the PAO is extended by resources of the CESNET - the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources using the standard ATLAS tools in the same way as the local storage without noticing this geographical distribution. Computing clusters LUNA and EXMAG dedicated to users mostly from the Solid State Physics departments offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum with distributed batch system based on torque with a custom scheduler. Clusters are installed remotely by the MetaCentrum team and a local contact helps only when needed. Users from IoP have exclusive access only to a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of the MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.

  13. NMRbox: A Resource for Biomolecular NMR Computation.

    Science.gov (United States)

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds of software packages. Discovery, acquisition, installation, and maintenance of all these packages is a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users. Copyright © 2017 Biophysical Society. All rights reserved.

  14. ACToR - Aggregated Computational Toxicology Resource

    International Nuclear Information System (INIS)

    Judson, Richard; Richard, Ann; Dix, David; Houck, Keith; Elloumi, Fathi; Martin, Matthew; Cathey, Tommy; Transue, Thomas R.; Spencer, Richard; Wolf, Maritja

    2008-01-01

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast™.

  15. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    Science.gov (United States)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. In recent HEP experiments, grid middleware has been used to organize the services and the resources; however, it relies heavily on X.509 authentication, which is contradictory to the untrusted nature of volunteer computing resources. One big challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, which is commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge to this paradox, as its pilot is more closely coupled with operations requiring X.509 authentication compared to the pilot implementations in its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach which detaches the payload execution from the Belle II DIRAC pilot (a customized pilot pulling and processing jobs from the Belle II distributed computing platform), so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server which handles all the operations requiring X.509 authentication. So far, we have developed and deployed the prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. This approach can also be applied to HPC systems whose worker nodes do not have outbound connectivity to interact with the DIRAC system in general.

  16. Contract on using computer resources of another

    Directory of Open Access Journals (Sweden)

    Cvetković Mihajlo

    2016-01-01

    Contractual relations involving the use of another's property are quite common. Yet, the use of the computer resources of others over the Internet, and the legal transactions arising thereof, certainly diverge from the traditional framework embodied in the special part of contract law dealing with this issue. Modern performance concepts (such as infrastructure, software or platform as high-tech services) are highly unlikely to be described by terminology derived from Roman law. The overwhelming novelty of high-tech services obscures the disadvantageous position of the contracting parties. In most cases, service providers are global multinational companies which tend to secure their own unjustified privileges and gain by providing lengthy and intricate contracts, often comprising a number of legal documents. General terms and conditions in these service provision contracts are further complicated by the 'service level agreement', rules of conduct and (non)confidentiality guarantees. Without giving the issue a second thought, users easily accept the pre-fabricated offer without reservations, unaware that such a pseudo-gratuitous contract actually conceals a highly lucrative and mutually binding agreement. The author examines the extent to which the legal provisions governing the sale of goods and services, lease, loan and commodatum may apply to 'cloud computing' contracts, and analyses the scope and advantages of contractual consumer protection, as a relatively new area in contract law. The termination of a service contract between the provider and the user features specific post-contractual obligations which are inherent to an online environment.

  17. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  18. LHCb Computing Resources: 2019 requests and reassessment of 2018 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents the computing resources needed by LHCb in 2019 and a reassessment of the 2018 requests, as resulting from the current experience of Run2 data taking and minor changes in the LHCb computing model parameters.

  19. Some issues of creation of belarusian language computer resources

    OpenAIRE

    Rubashko, N.; Nevmerjitskaia, G.

    2003-01-01

    The main reason for creating computer resources for a natural language is the need to bring the means of language normalization into accord with the form of the language's existence: the computer form of language usage should correspond to the computer form in which language standards are fixed. This paper discusses various aspects of the creation of Belarusian language computer resources. It also briefly gives an overview of the objectives of the project involved.

  20. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  1. Resource management in utility and cloud computing

    CERN Document Server

    Zhao, Han

    2013-01-01

    This SpringerBrief reviews the existing market-oriented strategies for economically managing resource allocation in distributed systems. It describes three new schemes that address cost-efficiency, user incentives, and allocation fairness with regard to different scheduling contexts. The first scheme, taking the Amazon EC2 market as a case study, investigates optimal resource rental planning models based on linear integer programming and stochastic optimization techniques. This model is useful to explore the interaction between the cloud infrastructure provider and the cloud resource c
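
    The flavour of the rental-planning problem in the first scheme can be shown with a small linear program: choose a constant number of reserved instances plus per-period on-demand instances so that demand is covered at minimum cost. The prices and demand profile below are invented, and the book's models are richer (integer and stochastic).

```python
# Toy rental-planning linear program (illustrative only): decide how many
# reserved instances to keep versus on-demand instances to rent per period so
# that demand is met at minimum cost. Prices and demand are invented.
import numpy as np
from scipy.optimize import linprog

demand = np.array([10, 25, 40, 15])    # instances needed in each period
c_reserved, c_ondemand = 3.0, 5.0      # cost per instance per period

T = len(demand)
# Variables: x[0] = reserved instances (constant over the horizon),
#            x[1..T] = on-demand instances rented in each period.
cost = np.concatenate(([c_reserved * T], np.full(T, c_ondemand)))
# Coverage per period: reserved + on-demand_t >= demand_t, written as
# -(x0 + x_t) <= -demand_t for linprog's A_ub @ x <= b_ub form.
A_ub = np.hstack([-np.ones((T, 1)), -np.eye(T)])
b_ub = -demand
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (T + 1))
print(res.x.round(2), "total cost:", round(res.fun, 2))
```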

  2. Improving ATLAS computing resource utilization with HammerCloud

    CERN Document Server

    Schovancova, Jaroslava; The ATLAS collaboration

    2018-01-01

    HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) Operations and automation efforts, providing automated resource exclusion and recovery tools that help re-focus operational manpower to areas which have yet to be automated, and improve utilization of available computing resources. We present the recent evolution of the auto-exclusion/recovery tools: faster inclusion of new resources in the testing machinery, machine learning algorithms for anomaly detection, categorization of resources as master vs. slave for the purpose of blacklisting, and a tool for auto-exclusion/recovery of resources triggered by Event Service job failures that is being extended to other workflows besides the Event Service. We describe how HammerCloud helped commissioning various concepts and components of distributed systems: simplified configuration of qu...

  3. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Today cloud computing has become a key technology for the online allotment of computing resources and the online storage of user data at a lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there has been a growing need for resource management strategies in cloud computing environments that encompass both end-user satisfaction and a high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines (matchmaking). The main objective of this paper is to propose a matchmaking strategy between incoming requests and the various resources in the cloud environment, to satisfy the requirements of users and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a dynamic weight active monitor (DWAM) load balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which proves the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results demonstrate that the proposed algorithm dramatically improves response time and data processing time and makes better use of resources compared with the Active Monitor and VM-assign algorithms.
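
    The dispatching pattern behind such weighted load balancing can be sketched as follows: each incoming request goes to the virtual machine with the lowest load relative to its capacity weight. The VM weights and request sizes are invented, and this is not the paper's DWAM implementation.

```python
# Illustrative weighted dispatch in the spirit of a dynamic-weight load
# balancer (not the paper's DWAM algorithm): each request goes to the VM
# with the lowest current load relative to its capacity weight.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    weight: float                 # relative capacity of this VM
    load: float = 0.0             # currently assigned work
    served: list = field(default_factory=list)

def dispatch(vms, request_size):
    target = min(vms, key=lambda vm: vm.load / vm.weight)
    target.load += request_size
    target.served.append(request_size)
    return target.name

vms = [VM("vm-small", 1.0), VM("vm-medium", 2.0), VM("vm-large", 4.0)]
requests = [5, 3, 8, 2, 7, 4, 6, 1]
for r in requests:
    dispatch(vms, r)
for vm in vms:
    print(vm.name, vm.load, vm.served)
```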

  4. Huge music archives on mobile devices

    DEFF Research Database (Denmark)

    Blume, H.; Bischl, B.; Botteck, M.

    2011-01-01

    The availability of huge nonvolatile storage capacities such as flash memory allows large music archives to be maintained even in mobile devices. With the increase in size, manual organization of these archives and manual search for specific music becomes very inconvenient. Automated dynamic...... organization enables an attractive new class of applications for managing ever-increasing music databases. For these types of applications, extraction of music features as well as subsequent feature processing and music classification have to be performed. However, these are computationally intensive tasks...... and difficult to tackle on mobile platforms. Against this background, we provided an overview of algorithms for music classification as well as their computation times and other hardware-related aspects, such as power consumption on various hardware architectures. For mobile platforms such as smartphones...
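
    The feature-extraction-plus-classification pipeline the overview refers to can be illustrated with desktop libraries; the sketch below computes MFCC statistics with librosa and trains a nearest-neighbour genre classifier with scikit-learn. The file names and labels are placeholders, and the mobile-hardware aspects (power consumption, per-architecture computation times) that are the article's focus are ignored here.

```python
# Sketch of the feature-extraction + classification step discussed above,
# using desktop libraries (librosa, scikit-learn). Paths and labels are
# placeholders; mobile-specific constraints are not modeled.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path, n_mfcc=13):
    """Mean and std of MFCCs over the whole track as a fixed-length vector."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training set: (file, genre) pairs.
tracks = [("rock_01.wav", "rock"), ("rock_02.wav", "rock"),
          ("jazz_01.wav", "jazz"), ("jazz_02.wav", "jazz")]
X = np.array([mfcc_features(f) for f, _ in tracks])
y = [label for _, label in tracks]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([mfcc_features("unknown_track.wav")]))
```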

  5. Decentralized Resource Management in Distributed Computer Systems.

    Science.gov (United States)

    1982-02-01

    ... directly exchanging user state information. Eventcounts and sequencers correspond to semaphores in the sense that synchronization primitives are used to ... and techniques are required to achieve synchronization in distributed computers without reliance on any centralized entity such as a semaphore [12]. The importance of the semaphore is that it correctly addresses the ... known solutions to the access synchronization problem was Dijkstra's semaphore [12].

  6. Physical-resource requirements and the power of quantum computation

    International Nuclear Information System (INIS)

    Caves, Carlton M; Deutsch, Ivan H; Blume-Kohout, Robin

    2004-01-01

    The primary resource for quantum computation is the Hilbert-space dimension. Whereas Hilbert space itself is an abstract construction, the number of dimensions available to a system is a physical quantity that requires physical resources. Avoiding a demand for an exponential amount of these resources places a fundamental constraint on the systems that are suitable for scalable quantum computation. To be scalable, the number of degrees of freedom in the computer must grow nearly linearly with the number of qubits in an equivalent qubit-based quantum computer. These considerations rule out quantum computers based on a single particle, a single atom, or a single molecule consisting of a fixed number of atoms or on classical waves manipulated using the transformations of linear optics
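
    A back-of-the-envelope form of the scaling argument, in generic notation that is not taken from the paper:

```latex
% Dimension of an n-qubit register versus the physical resources R(n) needed
% to supply that dimension (generic notation, not the paper's):
\[
  \dim \mathcal{H}_n = 2^{\,n},
  \qquad
  R_{\text{qubit register}}(n) = O(n),
  \qquad
  R_{\text{single particle}}(n) = O\!\left(2^{\,n}\right),
\]
% i.e. encoding the same Hilbert-space dimension in the levels of a single
% particle requires exponentially many physical resources, which is why such
% schemes are ruled out for scalable quantum computation.
```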

  7. Development of Computer-Based Resources for Textile Education.

    Science.gov (United States)

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  8. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  9. ResourceGate: A New Solution for Cloud Computing Resource Allocation

    OpenAIRE

    Abdullah A. Sheikh

    2012-01-01

    Cloud computing has come to be a focus of educational and business communities. Their concerns include the need to improve the Quality of Service (QoS) provided, as well as aspects such as reliability, performance and cost reduction. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring these benefits is considered to be the major factor in the cloud computing environment. This paper surveys recent research related to cloud computing resource al...

  10. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  11. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  12. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

    Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh ... Intermediary Resource: Intelligent Executive Computer Communication, John Lyman and Carla J. Conaway, University of California at Los Angeles ... Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent

  13. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  14. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  15. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    Full Text Available Wide application of the Internet of Things (IoT) system has been increasingly demanding more hardware facilities for processing various resources including data, information, and knowledge. With the rapid growth in the quantity of generated resources, it is difficult to adapt to this situation by using traditional cloud computing models. Fog computing enables storage and computing services to perform at the edge of the network to extend cloud computing. However, there are some problems such as restricted computation, limited storage, and expensive network bandwidth in Fog computing applications. It is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism of typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of Data Graph, Information Graph, and Knowledge Graph. The proposed mechanism aims to minimize processing cost over network, computation, and storage while maximizing the performance of processing in a business-value-driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment. Meanwhile, conversions between resource types deliver support for dynamically allocating network resources.
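
    The tier-placement idea can be illustrated with a minimal sketch that places a typed resource on whichever tier minimizes a weighted cost over computation, storage and bandwidth; the tier figures, weights and workload values are invented and are not taken from the paper:

      # Illustrative placement of a typed resource onto the tier with the
      # lowest weighted cost. All figures are assumptions for illustration.
      TIERS = {
          "edge":  {"compute": 5.0, "storage": 3.0, "bandwidth": 0.5},
          "cloud": {"compute": 1.0, "storage": 0.5, "bandwidth": 4.0},
      }

      def place(workload, weights={"compute": 1.0, "storage": 1.0, "bandwidth": 1.0}):
          """Pick the tier minimizing the weighted cost of the workload demands."""
          def cost(tier):
              return sum(weights[k] * TIERS[tier][k] * workload[k] for k in workload)
          return min(TIERS, key=cost)

      # A bandwidth-heavy data stream tends to stay at the edge; a compute-heavy
      # knowledge-extraction job tends to go to the cloud.
      print(place({"compute": 1, "storage": 2, "bandwidth": 10}))   # -> 'edge'
      print(place({"compute": 10, "storage": 1, "bandwidth": 1}))   # -> 'cloud'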

  16. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
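
    A hedged sketch of the "bursting" pattern described above, using the boto3 run_instances call but with a placeholder AMI ID, instance type, slot count and cap; the actual CMS glidein/pilot machinery is not shown:

      # Rough sketch of bursting extra worker nodes into EC2 when the local
      # queue is saturated. AMI ID, instance type and threshold are placeholders.
      import boto3

      def burst_if_needed(pending_jobs, local_slots, max_cloud_nodes=20,
                          ami="ami-0123456789abcdef0", itype="m5.large"):
          backlog = pending_jobs - local_slots
          if backlog <= 0:
              return []                                # dedicated resources cover the load
          n = min(backlog // 8 + 1, max_cloud_nodes)   # ~8 job slots per node (assumed)
          ec2 = boto3.client("ec2")
          resp = ec2.run_instances(ImageId=ami, InstanceType=itype,
                                   MinCount=n, MaxCount=n)
          return [i["InstanceId"] for i in resp["Instances"]]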

  17. Shared-resource computing for small research labs.

    Science.gov (United States)

    Ackerman, M J

    1982-04-01

    A real time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11-M multi-user real-time operating system. The cost-effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  18. Using OSG Computing Resources with (iLC)Dirac

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Petric, Marko

    2017-01-01

    CPU cycles for small experiments and projects can be scarce, thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which directly submit to the local batch system. This in turn requires additional dedicated effort for small experiments on the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC the required wrapper classes were develo...

  19. Integration of Openstack cloud resources in BES III computing cluster

    Science.gov (United States)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and their usage is static. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  20. Computer-aided resource planning and scheduling for radiological services

    Science.gov (United States)

    Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.

    1996-05-01

    There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study is conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing the turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information system efficiency and human intelligence to improve radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.

  1. A Semi-Preemptive Computational Service System with Limited Resources and Dynamic Resource Ranking

    Directory of Open Access Journals (Sweden)

    Fang-Yie Leu

    2012-03-01

    Full Text Available In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current length of waiting queue, CPU utilization and bandwidth. With the scores, resource nodes are classified into three levels. User requests, based on their time constraints, are also classified into three types. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for a request of a higher type, it preempts a resource that is currently executing a lower-type request so that the higher-type request can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), users of which can then exploit the convenient mobile network services and the wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
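
    The node scoring and level classification can be illustrated with a small sketch; the weights and cut-offs below are assumptions and do not reproduce the paper's actual formula:

      # Illustrative scoring and level classification of resource nodes in the
      # spirit of the SePCS description above. Weights and cut-offs are assumed.
      def node_score(cpu_level, free_mem_gb, queue_len, cpu_util, bandwidth_mbps):
          # Higher CPU level, memory and bandwidth raise the score; a long
          # waiting queue and high utilization lower it.
          return (2.0 * cpu_level + 0.5 * free_mem_gb + 0.01 * bandwidth_mbps
                  - 1.0 * queue_len - 3.0 * cpu_util)

      def node_level(score):
          if score >= 8.0:
              return 1        # highest level, reserved for tightly constrained requests
          if score >= 4.0:
              return 2
          return 3

      s = node_score(cpu_level=4, free_mem_gb=8, queue_len=2, cpu_util=0.3, bandwidth_mbps=100)
      print(s, node_level(s))   # -> 10.1, level 1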

  2. Active resources concept of computation for enterprise software

    Directory of Open Access Journals (Sweden)

    Koryl Maciej

    2017-06-01

    Full Text Available Traditional computational models for enterprise software are still to a great extent centralized. However, the rapid growth of modern computation techniques and frameworks means that contemporary software is becoming more and more distributed. Towards the development of a new, complete and coherent solution for distributed enterprise software construction, a synthesis of three well-grounded concepts is proposed: the Domain-Driven Design technique of software engineering, the REST architectural style and the actor model of computation. As a result a new resource-based framework arises, which, after the first cases of use, seems to be useful and worthy of further research.

  3. GridFactory - Distributed computing on ephemeral resources

    DEFF Research Database (Denmark)

    Orellana, Frederik; Niinimaki, Marko

    2011-01-01

    A novel batch system for high throughput computing is presented. The system is specifically designed to leverage virtualization and web technology to facilitate deployment on cloud and other ephemeral resources. In particular, it implements a security model suited for forming collaborations...

  4. Can the Teachers' Creativity Overcome Limited Computer Resources?

    Science.gov (United States)

    Nikolov, Rumen; Sendova, Evgenia

    1988-01-01

    Describes experiences of the Research Group on Education (RGE) at the Bulgarian Academy of Sciences and the Ministry of Education in using limited computer resources when teaching informatics. Topics discussed include group projects; the use of Logo; ability grouping; and out-of-class activities, including publishing a pupils' magazine. (13…

  5. Recent development of computational resources for new antibiotics discovery

    DEFF Research Database (Denmark)

    Kim, Hyun Uk; Blin, Kai; Lee, Sang Yup

    2017-01-01

    Understanding a complex working mechanism of biosynthetic gene clusters (BGCs) encoding secondary metabolites is a key to discovery of new antibiotics. Computational resources continue to be developed in order to better process increasing volumes of genome and chemistry data, and thereby better...

  6. Computing Resource And Work Allocations Using Social Profiles

    Directory of Open Access Journals (Sweden)

    Peter Lavin

    2013-01-01

    Full Text Available If several distributed and disparate computer resources exist, many of which have been created for different and diverse reasons, and several large scale computing challenges also exist with similar diversity in their backgrounds, then one problem which arises in trying to assemble enough of these resources to address such challenges is the need to align and accommodate the different motivations and objectives which may lie behind the existence of both the resources and the challenges. Software agents are offered as a mainstream technology for modelling the types of collaborations and relationships needed to do this. As an initial step towards forming such relationships, agents need a mechanism to consider social and economic backgrounds. This paper explores addressing social and economic differences using a combination of textual descriptions known as social profiles and search engine technology, both of which are integrated into an agent technology.

  7. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    OpenAIRE

    Cirasella, Jill

    2009-01-01

    This article is an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news.

  8. Towards minimal resources of measurement-based quantum computation

    International Nuclear Information System (INIS)

    Perdrix, Simon

    2007-01-01

    We improve the upper bound on the minimal resources required for measurement-only quantum computation (M A Nielsen 2003 Phys. Rev. A 308 96-100; D W Leung 2004 Int. J. Quantum Inform. 2 33; S Perdrix 2005 Int. J. Quantum Inform. 3 219-23). Minimizing the resources required for this model is a key issue for experimental realization of a quantum computer based on projective measurements. This new upper bound also allows one to reply in the negative to the open question presented by Perdrix (2004 Proc. Quantum Communication Measurement and Computing) about the existence of a trade-off between observable and ancillary qubits in measurement-only QC

  9. Computing Bounds on Resource Levels for Flexible Plans

    Science.gov (United States)

    Muscvettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan. Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · O(maxflow(N))), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x, and O(maxflow(N)) is the measure of complexity (and thus of cost) of a maximum-flow
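
    The envelope algorithm itself is beyond a short sketch, but its maximum-flow building block can be illustrated with networkx on a tiny made-up network; the node names and capacities are arbitrary assumptions:

      # Only the maximum-flow primitive the algorithm builds on is shown here,
      # on an invented network of activity contributions (capacities arbitrary).
      import networkx as nx

      G = nx.DiGraph()
      G.add_edge("source", "a1", capacity=3)   # producer activity a1
      G.add_edge("source", "a2", capacity=2)   # producer activity a2
      G.add_edge("a1", "a3", capacity=2)       # temporal precedence a1 -> a3
      G.add_edge("a2", "a3", capacity=2)
      G.add_edge("a3", "sink", capacity=4)     # consumer activity a3

      flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
      print(flow_value)   # -> 4, an upper bound on the resource exchanged along these paths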

  10. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    International Nuclear Information System (INIS)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. The new cloud technologies also come with new challenges, and one such challenge is the contextualization of computing resources with regard to the requirements of the user and his experiment. In particular, on Google's new cloud platform, Google Compute Engine (GCE), upload of users' virtual machine images is not possible. This precludes application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  11. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally, dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers that provide regular cloud services to users, since they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost-efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).
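
    A toy sketch of the kind of demand-driven decision such an on-demand cloud manager makes (how many virtual worker nodes to request given the batch-system backlog); the slot count and cap are assumed values, not ROCED's:

      # Toy demand calculation: decide how many virtual worker nodes to request
      # from a remote cloud given the idle-job backlog. Values are assumptions.
      def requested_vms(idle_jobs, running_vms, slots_per_vm=4, max_vms=50):
          needed = -(-idle_jobs // slots_per_vm)        # ceiling division
          target = min(needed, max_vms)
          return max(target - running_vms, 0)           # only request the difference

      print(requested_vms(idle_jobs=37, running_vms=5))  # -> 5 additional VMs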

  12. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  13. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    Science.gov (United States)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  14. An Overview of Cloud Computing in Distributed Systems

    Science.gov (United States)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. The cloud plays an important role in huge organizations by maintaining huge volumes of data with limited resources. The cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of the cloud organization and some of the basic security issues pertaining to the cloud.

  15. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    Science.gov (United States)

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience (approaching subjective behavior as the result of mental computations instantiated in the brain) to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.

  16. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

    Full Text Available By leveraging cloud computing such as Infrastructure as a Service (IaaS), the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet applications. With this increasing trend, resource allocation that both assures QoS via a Service Level Agreement (SLA) and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming services. On the other hand, providers of IaaS are also concerned about their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource allocation problem as a Multicriteria Decision Making problem and propose an effective trade-off approach based on a goal programming model. To validate its effectiveness, a cloud architecture for a streaming application is addressed and extensive analysis is performed for the related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.
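
    A tiny goal-programming trade-off in the spirit of the record can be solved as a linear program with scipy; the goals, weights and coefficients below are invented for illustration and are not taken from the paper:

      # Invented two-goal example: a capacity goal of 800 streams and a budget
      # goal of 300 cost units, with penalties on shortfall and overspending.
      from scipy.optimize import linprog

      # Variables: [servers, under_capacity, over_capacity, under_budget, over_budget]
      c = [0, 2, 0, 0, 1]                     # penalize capacity shortfall and overspending
      A_eq = [[100, 1, -1, 0, 0],             # 100 streams per server + deviations = 800
              [50, 0, 0, 1, -1]]              # 50 cost units per server + deviations = 300
      b_eq = [800, 300]

      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5, method="highs")
      print(res.x[0], res.fun)                # -> 8 servers, weighted deviation 100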

  17. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
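
    The stratified-sampling idea (more calibration points where the fit error is large) can be sketched in a few lines; the error figures and point budget are made up, and this is not the report's actual procedure:

      # Allocate more calibration points to strata with larger current fit error.
      import numpy as np

      def allocate_samples(strata_errors, total_points):
          errors = np.asarray(strata_errors, dtype=float)
          shares = errors / errors.sum()                 # proportional to stratum error
          counts = np.floor(shares * total_points).astype(int)
          counts[np.argmax(shares)] += total_points - counts.sum()   # fix rounding
          return counts

      # Four geographic strata with unequal calibration error get unequal budgets.
      print(allocate_samples([0.9, 0.4, 0.1, 0.1], total_points=200))  # -> [121 53 13 13]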

  18. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  19. Mobile devices and computing cloud resources allocation for interactive applications

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2017-06-01

    Full Text Available Using mobile devices such as smartphones or iPads for various interactive applications is currently very common. In the case of complex applications, e.g. chess games, the capabilities of these devices are insufficient to run the application in real time. One of the solutions is to use cloud computing. However, there is an optimization problem of mobile device and cloud resources allocation. An iterative heuristic algorithm for application distribution is proposed. The algorithm minimizes the energy cost of application execution with constrained execution time.

  20. Negative quasi-probability as a resource for quantum computation

    International Nuclear Information System (INIS)

    Veitch, Victor; Ferrie, Christopher; Emerson, Joseph; Gross, David

    2012-01-01

    A central problem in quantum information is to determine the minimal physical resources that are required for quantum computational speed-up and, in particular, for fault-tolerant quantum computation. We establish a remarkable connection between the potential for quantum speed-up and the onset of negative values in a distinguished quasi-probability representation, a discrete analogue of the Wigner function for quantum systems of odd dimension. This connection allows us to resolve an open question on the existence of bound states for magic state distillation: we prove that there exist mixed states outside the convex hull of stabilizer states that cannot be distilled to non-stabilizer target states using stabilizer operations. We also provide an efficient simulation protocol for Clifford circuits that extends to a large class of mixed states, including bound universal states. (paper)

  1. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  2. Next Generation Computer Resources: Reference Model for Project Support Environments (Version 2.0)

    National Research Council Canada - National Science Library

    Brown, Alan

    1993-01-01

    The objective of the Next Generation Computer Resources (NGCR) program is to restructure the Navy's approach to acquisition of standard computing resources to take better advantage of commercial advances and investments...

  3. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    Science.gov (United States)

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  4. Huge Thornwaldt's Cyst: A Case Report

    Directory of Open Access Journals (Sweden)

    Jia-Hau Lin

    2006-10-01

    Full Text Available Thornwaldt's bursa, also known as nasopharyngeal bursa, is a recess in the midline of the nasopharynx that is produced by persistent notochord remnants. If its opening becomes obstructed, possibly due to infection or a complication from adenoidectomy, a Thornwaldt's cyst might develop. Here, we present a 53-year-old man who complained of nasal obstruction that had progressed for 1 year. Nasopharyngoscopy showed a huge nasopharyngeal mass. Thornwaldt's cyst was suspected. Magnetic resonance imaging showed a lesion measuring 3.6 × 3.4 cm, with intermediate signal intensity on T1-weighted and high signal intensity on T2-weighted imaging, and neither bony destruction nor connection to the brain. The patient underwent endoscopic surgery for this huge mass. Afterwards, his symptoms improved significantly. We present the treatment and differential diagnosis of a nasopharyngeal cyst.

  5. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  6. From tiny microalgae to huge biorefineries

    OpenAIRE

    Gouveia, L.

    2014-01-01

    Microalgae are an emerging research field due to their high potential as a source of several biofuels in addition to the fact that they have a high-nutritional value and contain compounds that have health benefits. They are also highly used for water stream bioremediation and carbon dioxide mitigation. Therefore, the tiny microalgae could lead to a huge source of compounds and products, giving a good example of a real biorefinery approach. This work shows and presents examples of experimental...

  7. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  8. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  9. A huge renal capsular leiomyoma mimicking retroperitoneal sarcoma

    Directory of Open Access Journals (Sweden)

    Lal Anupam

    2009-01-01

    Full Text Available A huge left renal capsular leiomyoma mimicking retroperitoneal sarcoma presented in a patient as an abdominal mass. Computed tomography displayed a large heterogeneous retro-peritoneal mass in the left side of the abdomen with inferior and medial displacement as well as loss of fat plane with the left kidney. Surgical exploration revealed a capsulated mass that was tightly adherent to the left kidney; therefore, total tumor resection with radical left nephrectomy was performed. Histopathology ultimately confirmed the benign nature of the mass. This is the largest leiomyoma reported in literature to the best of our knowledge.

  10. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. Recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models. Previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented and will include a discussion of generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for transferring/translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs
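
    A minimal modern analogue of the client/server model, using Python's standard-library XML-RPC rather than the Sun RPC/XDR tooling discussed in the record; the service name is a placeholder:

      # Server side: expose a function as a remote procedure.
      from xmlrpc.server import SimpleXMLRPCServer

      def translate_results(run_id):
          # Placeholder for a translation service such as the NPA result transfer.
          return {"run": run_id, "status": "translated"}

      server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
      server.register_function(translate_results)
      # server.serve_forever()   # blocks; run in a separate process

      # Client side: call the remote procedure as if it were local.
      # import xmlrpc.client
      # proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
      # print(proxy.translate_results(42))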

  11. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    Modern day continued demand for resource hungry services and applications in IT sector has led to development of Cloud computing. Cloud computing environment involves high cost infrastructure on one hand and need high scale computational resources on the other hand. These resources need to be provisioned (allocation and scheduling) to the end users in most efficient manner so that the tremendous capabilities of cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  12. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has become widely used. In particular, the success of object-oriented and multimedia technology has promoted the development of graphics technology in computer software systems, so computer graphics theory and its applications have become an important topic in computing, with graphics technology spreading across many fields of application. In recent years, with economic development and especially the rapid growth of information technology, the traditional way of managing communication resources can no longer meet management needs. Communication resource management still relies on the original tools and methods for equipment management and maintenance, which has caused many problems: it is very difficult for non-professionals to understand the equipment and the overall situation, resource utilization is relatively low, and managers cannot quickly and accurately grasp resource conditions. To address these problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces its cost and improves work efficiency.

  13. Research on elastic resource management for multi-queue under cloud computing environment

    Science.gov (United States)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on Openstack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for the cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the HTCondor job queue, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper will present several use cases of the elastic resource management system in IHEPCloud. In practical runs, virtual computing resources dynamically expanded or shrank as computing requirements changed. Additionally, the CPU utilization ratio of the computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple condor schedulers and multiple job queues.
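
    The dual-threshold expand/shrink logic with a quota can be sketched as follows; the thresholds, step size and quota are assumed values, not those used at IHEPCloud:

      # Expand the virtual node pool under queue pressure, shrink it when
      # utilization drops, and respect the experiment's quota. Values assumed.
      def adjust_pool(queued_jobs, running_nodes, node_util, quota,
                      high=0.85, low=0.30, step=5):
          if queued_jobs > 0 and node_util > high and running_nodes < quota:
              return min(running_nodes + step, quota)    # expand
          if node_util < low and running_nodes > 0:
              return max(running_nodes - step, 0)        # shrink
          return running_nodes                           # steady state

      print(adjust_pool(queued_jobs=120, running_nodes=40, node_util=0.92, quota=60))  # -> 45
      print(adjust_pool(queued_jobs=0,   running_nodes=40, node_util=0.10, quota=60))  # -> 35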

  14. SYSTEMATIC LITERATURE REVIEW ON RESOURCE ALLOCATION AND RESOURCE SCHEDULING IN CLOUD COMPUTING

    OpenAIRE

    B. Muni Lavanya; C. Shoba Bindu

    2016-01-01

    The objective of the work is to highlight the key features and offer the finest future directions to the research community of Resource Allocation, Resource Scheduling and Resource Management from 2009 to 2016. It exemplifies how research on Resource Allocation, Resource Scheduling and Resource Management has progressively increased in the past decade by inspecting articles and papers from scientific and standard publications. The survey materialized in a three-fold process. Firstly, investigate on t...

  15. Huge Tongue Lipoma: A Case Report

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Damghani

    2015-03-01

    Full Text Available Introduction: Lipomas are among the most common tumors of the human body. However, they are uncommon in the oral cavity and are observed as slow growing, painless, and asymptomatic yellowish submucosal masses. Surgical excision is the treatment of choice and recurrence is not expected.    Case Report: The case of a 30-year-old woman with a huge lipoma of 3 years' duration on the tip of her tongue is presented. She had difficulty with speech and mastication because the tongue tumor was filling the oral cavity. Clinical examination revealed a yellowish lesion, measuring 8 cm in maximum diameter, protruding from the lingual surface. The tumor was surgically excised with restoration of normal tongue function, and histopathological examination of the tumor confirmed that it was a lipoma.   Conclusion:  Tongue lipoma is rarely seen and can be a cause of macroglossia. Surgical excision for lipoma is indicated for symptomatic relief and exclusion of associated malignancy.

  16. Nanocellulose, a tiny fiber with huge applications.

    Science.gov (United States)

    Abitbol, Tiffany; Rivkin, Amit; Cao, Yifeng; Nevo, Yuval; Abraham, Eldho; Ben-Shalom, Tal; Lapidot, Shaul; Shoseyov, Oded

    2016-06-01

    Nanocellulose is of increasing interest for a range of applications relevant to the fields of material science and biomedical engineering due to its renewable nature, anisotropic shape, excellent mechanical properties, good biocompatibility, tailorable surface chemistry, and interesting optical properties. We discuss the main areas of nanocellulose research: photonics, films and foams, surface modifications, nanocomposites, and medical devices. These tiny nanocellulose fibers have huge potential in many applications, from flexible optoelectronics to scaffolds for tissue regeneration. We hope to impart the readers with some of the excitement that currently surrounds nanocellulose research, which arises from the green nature of the particles, their fascinating physical and chemical properties, and the diversity of applications that can be impacted by this material. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    This paper analyzes the decision-making problem confronting SMEs considering the adoption of cloud computing as an alternative to in-house computing services provision. The economics of choosing between in-house computing and a cloud alternative is analyzed by comparing the total economic costs...... in determining the relative value of cloud computing....

  18. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children, with the intention to engage children and increase interest, rather than to formally teach concepts and skills. What is the educational quality of existing Computer Science resources and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study analysis. The findings reveal a predominance of quality resources, however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  19. Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony Optimization Algorithm

    OpenAIRE

    Lingna He; Qingshui Li; Linan Zhu

    2012-01-01

    In order to replace the traditional Internet software usage patterns and enterprise management mode, this paper proposes a new business calculation mode: cloud computing. Resource scheduling strategy is the key technology in cloud computing. Based on a study of the cloud computing system structure and its mode of operation, the key research addresses the work scheduling process and resource allocation problems in cloud computing based on the ant colony algorithm. Detailed analysis and design of the...
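
    A compact ant-colony sketch for mapping tasks to VMs (pheromone-biased assignment, evaporation, reinforcement of the best solution, with no heuristic visibility term); the task lengths, VM speeds and parameters are invented and this is not the paper's exact algorithm:

      import random

      tasks = [40, 25, 60, 10, 35]        # task lengths (arbitrary units)
      speeds = [1.0, 2.0, 1.5]            # VM processing speeds
      ALPHA, RHO, ANTS, ITERS = 1.0, 0.5, 10, 50
      pher = [[1.0] * len(speeds) for _ in tasks]   # pheromone[task][vm]

      def makespan(assign):
          load = [0.0] * len(speeds)
          for t, v in enumerate(assign):
              load[v] += tasks[t] / speeds[v]
          return max(load)

      best, best_cost = None, float("inf")
      for _ in range(ITERS):
          for _ in range(ANTS):
              # each ant assigns every task to a VM with pheromone-biased probability
              assign = [random.choices(range(len(speeds)),
                                       weights=[p ** ALPHA for p in pher[t]])[0]
                        for t in range(len(tasks))]
              cost = makespan(assign)
              if cost < best_cost:
                  best, best_cost = assign, cost
          # evaporate, then reinforce the best assignment found so far
          pher = [[(1 - RHO) * p for p in row] for row in pher]
          for t, v in enumerate(best):
              pher[t][v] += 1.0 / best_cost

      print(best, round(best_cost, 1))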

  20. Discovery of resources using MADM approaches for parallel and distributed computing

    Directory of Open Access Journals (Sweden)

    Mandeep Kaur

    2017-06-01

    Full Text Available Grid, a form of parallel and distributed computing, allows the sharing of data and computational resources among its users from various geographical locations. The grid resources are diverse in terms of their underlying attributes. The majority of the state-of-the-art resource discovery techniques rely on the static resource attributes during resource selection. However, the matching resources based on the static resource attributes may not be the most appropriate resources for the execution of user applications because they may have heavy job loads, less storage space or less working memory (RAM). Hence, there is a need to consider the current state of the resources in order to find the most suitable resources. In this paper, we have proposed a two-phased multi-attribute decision making (MADM) approach for discovery of grid resources by using P2P formalism. The proposed approach considers multiple resource attributes for decision making of resource selection and provides the best suitable resource(s) to grid users. The first phase describes a mechanism to discover all matching resources and applies the SAW method to shortlist the top-ranked resources, which are communicated to the requesting super-peer. The second phase of our proposed methodology applies an integrated MADM approach (AHP-enriched PROMETHEE-II) to the list of selected resources received from different super-peers. The pairwise comparison of the resources with respect to their attributes is made and the rank of each resource is determined. The top-ranked resource is then communicated to the grid user by the grid scheduler. Our proposed methodology enables the grid scheduler to allocate the most suitable resource to the user application and also reduces the search complexity by filtering out the less suitable resources during resource discovery.
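
    The first-phase Simple Additive Weighting (SAW) step can be sketched on made-up resources and weights; CPU, RAM and bandwidth are treated as benefit attributes and current load as a cost attribute:

      # SAW ranking on invented resources; weights and attribute values assumed.
      resources = {
          "r1": {"cpu": 16, "ram": 64, "bw": 100, "load": 0.7},
          "r2": {"cpu": 8,  "ram": 32, "bw": 200, "load": 0.2},
          "r3": {"cpu": 32, "ram": 16, "bw": 150, "load": 0.5},
      }
      weights = {"cpu": 0.3, "ram": 0.2, "bw": 0.2, "load": 0.3}
      benefit = {"cpu", "ram", "bw"}

      def saw_score(name):
          score = 0.0
          for attr, w in weights.items():
              col = [r[attr] for r in resources.values()]
              x = resources[name][attr]
              # benefit attributes normalized by the column max, cost attributes by min/x
              norm = x / max(col) if attr in benefit else min(col) / x
              score += w * norm
          return score

      ranking = sorted(resources, key=saw_score, reverse=True)
      print(ranking)          # highest-scoring resources are reported to the super-peer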

  1. Huge hydrocephalus: definition, management, and complications.

    Science.gov (United States)

    Faghih Jouibari, Morteza; Baradaran, Nazanin; Shams Amiri, Rouzbeh; Nejat, Farideh; El Khashab, Mostafa

    2011-01-01

    Lack of comprehensive knowledge and numerous socioeconomic problems may make parents leave hydrocephalic children untreated, leading to progressive hydrocephalus and eventually an unusually large head. Management of huge hydrocephalus (HH) differs from that of common hydrocephalus. We present our experience in the management of these children. HH is defined as a head circumference larger than the height of the infant. Nine infants with HH were shunted at Children's Hospital Medical Center and followed up for 0.5 to 7 years. The most common cause of hydrocephalus was aqueductal stenosis. The mean age of patients at shunting was 3 months. The head circumference ranged from 56 to 94 cm with an average of 67 cm. Cognitive status was appropriate for age in five patients. Motor development was normal in only one patient. Complications were found in most cases and included subdural effusion (six patients), shunt infection (four patients), skin injury (three patients), the proximal catheter migrating out of the ventricle into the subdural space (two patients), and shunt exposure (one patient). Three patients died due to shunt infection and sepsis. Numerous complications may occur in patients with HH after shunt operation, such as subdural effusion, ventricular collapse, electrolyte disturbance, skull deformity, scalp injury, and shunt infection. Mental and motor disabilities are very common in patients with HH. Many of these complications can be related to overdrainage; therefore, drainage control using programmable shunts is advisable.

  2. An interactive computer approach to performing resource analysis for a multi-resource/multi-project problem. [Spacelab inventory procurement planning

    Science.gov (United States)

    Schlagheck, R. A.

    1977-01-01

    New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.

  3. Resource-Aware Load Balancing Scheme using Multi-objective Optimization in Cloud Computing

    OpenAIRE

    Kavita Rana; Vikas Zandu

    2016-01-01

    Cloud computing is a service-based, on-demand, pay-per-use model consisting of interconnected and virtualized resources delivered over the internet. In cloud computing, there are usually a number of jobs that need to be executed with the available resources to achieve optimal performance, the least possible total completion time, the shortest response time, efficient utilization of resources, etc. Hence, job scheduling is the most important concern; it aims to ensure that users' requirements are ...

  4. Impact of changing computer technology on hydrologic and water resource modeling

    OpenAIRE

    Loucks, D.P.; Fedra, K.

    1987-01-01

    The increasing availability of substantial computer power at relatively low costs and the increasing ease of using computer graphics, of communicating with other computers and data bases, and of programming using high-level problem-oriented computer languages, is providing new opportunities and challenges for those developing and using hydrologic and water resources models. This paper reviews some of the progress made towards the development and application of computer support systems designe...

  5. Process control upgrades yield huge operational improvements

    International Nuclear Information System (INIS)

    Fitzgerald, W.V.

    2001-01-01

    Most nuclear plants in North America were designed and built in the late '60s and '70s. The regulatory nature of this industry over the years has made design changes at the plant level difficult, if not impossible, to implement. As a result, many plants in this world region have been getting by on technology that is over 40 years behind the times. What this translates into is that the plants have not been able to take advantage of the huge technology gains that have been made in process control during this period. As a result, most of these plants are much less efficient and productive than they could be. One particular area of the plant that is receiving a lot of attention is the feedwater heaters. These systems were put in place to improve efficiency, but most are not operating correctly. This paper will present a case study where one progressive mid-western utility decided that enough was enough and implemented a process control audit of their heater systems. The audit clearly pointed out the existing problems with the current process control system. It resulted in a proposal for the implementation of a state-of-the-art, digital distributed process control system for the heaters along with a complete upgrade of the level controls and field devices that will stabilize heater levels, resulting in significant efficiency gains and lower maintenance bills. Overall the payback period for this investment should be less than 6 months and the plant is now looking for more opportunities that can provide even bigger gains. (author)

  6. LHCb Computing Resources: 2011 re-assessment, 2012 request and 2013 forecast

    CERN Document Server

    Graciani, R

    2011-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2011 data-taking period, request of computing resource needs for the 2012 data-taking period, and a first forecast of the 2013 needs, when no data taking is foreseen. Estimates are based on 2010 experience and the latest updates to the LHC schedule, as well as on a new implementation of the computing model simulation tool. Differences in the model and deviations of the estimates from previously presented results are stressed.

  7. LHCb Computing Resources: 2012 re-assessment, 2013 request and 2014 forecast

    CERN Document Server

    Graciani Diaz, Ricardo

    2012-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2012 data-taking period, request of computing resource needs for 2013, and a first forecast of the 2014 needs, when the restart of data-taking is foreseen. Estimates are based on 2011 experience, as well as on the results of a simulation of the computing model described in the document. Differences in the model and deviations of the estimates from previously presented results are stressed.

  8. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, clients, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  9. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, clients, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future

  10. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, clients, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  11. Science and Technology Resources on the Internet: Computer Security.

    Science.gov (United States)

    Kinkus, Jane F.

    2002-01-01

    Discusses issues related to computer security, including confidentiality, integrity, and authentication or availability; and presents a selected list of Web sites that cover the basic issues of computer security under subject headings that include ethics, privacy, kids, antivirus, policies, cryptography, operating system security, and biometrics.…

  12. Computer Simulation and Digital Resources for Plastic Surgery Psychomotor Education.

    Science.gov (United States)

    Diaz-Siso, J Rodrigo; Plana, Natalie M; Stranix, John T; Cutting, Court B; McCarthy, Joseph G; Flores, Roberto L

    2016-10-01

    Contemporary plastic surgery residents are increasingly challenged to learn a greater number of complex surgical techniques within a limited period. Surgical simulation and digital education resources have the potential to address some limitations of the traditional training model, and have been shown to accelerate knowledge and skills acquisition. Although animal, cadaver, and bench models are widely used for skills and procedure-specific training, digital simulation has not been fully embraced within plastic surgery. Digital educational resources may play a future role in a multistage strategy for skills and procedures training. The authors present two virtual surgical simulators addressing procedural cognition for cleft repair and craniofacial surgery. Furthermore, the authors describe how partnerships among surgical educators, industry, and philanthropy can be a successful strategy for the development and maintenance of digital simulators and educational resources relevant to plastic surgery training. It is our responsibility as surgical educators not only to create these resources, but to demonstrate their utility for enhanced trainee knowledge and technical skills development. Currently available digital resources should be evaluated in partnership with plastic surgery educational societies to guide trainees and practitioners toward effective digital content.

  13. "Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis

    Science.gov (United States)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.

  14. Quantum computing with incoherent resources and quantum jumps.

    Science.gov (United States)

    Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R

    2012-04-27

    Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.

  15. A parallel solver for huge dense linear systems

    Science.gov (United States)

    Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.

    2011-11-01

    HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that makes the parallel solution of very large dense systems available to scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage secondary memory in order to solve huge linear systems, on the order of 100,000 equations. The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200,000 equations and more than 10,000 right-hand side vectors. New version program summary: Program title: Huge Dense System Solver (HDSS). Catalogue identifier: AEHU_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 87 062. No. of bytes in distributed program, including test data, etc.: 1 069 110. Distribution format: tar.gz. Programming language: Fortran90, C. Computer: parallel architectures: multiprocessors, computer clusters. Operating system
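
    The record above describes a Fortran/PLAPACK solver; purely as a toy analogue of its out-of-core idea (factorize once, then stream right-hand sides from disk in blocks), the NumPy/SciPy sketch below shows the pattern. The file names, block size and the solve_blocked signature are assumptions for the example, not part of the HDSS API.

      # Toy out-of-core pattern: one in-memory LU factorization, right-hand
      # sides streamed from disk in column blocks. Not the HDSS/PLAPACK API.
      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      def solve_blocked(A, rhs_path, n_rhs, block=1024):
          n = A.shape[0]
          lu_piv = lu_factor(A)                       # single O(n^3) factorization
          rhs = np.memmap(rhs_path, dtype=np.float64, mode="r", shape=(n, n_rhs))
          out = np.memmap("x.dat", dtype=np.float64, mode="w+", shape=(n, n_rhs))
          for start in range(0, n_rhs, block):        # stream the RHS in blocks
              stop = min(start + block, n_rhs)
              out[:, start:stop] = lu_solve(lu_piv, np.asarray(rhs[:, start:stop]))
          out.flush()
          return out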

  16. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Nan Zhang

    Full Text Available Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.

  17. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Science.gov (United States)

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
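
    As a small, self-contained illustration of the Shapley-value revenue split mentioned in the two records above, the Python sketch below computes Shapley values for a toy coalition of resource providers; the characteristic function (revenue earned per pooled block of four cores) is invented for the example and is not the papers' model.

      # Shapley-value sketch for a toy crowd-funded resource coalition.
      # The value function v() is hypothetical.
      from itertools import combinations
      from math import factorial

      def shapley(players, v):
          n = len(players)
          phi = {p: 0.0 for p in players}
          for p in players:
              others = [q for q in players if q != p]
              for r in range(len(others) + 1):
                  for S in combinations(others, r):
                      w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                      phi[p] += w * (v(frozenset(S) | {p}) - v(frozenset(S)))
          return phi

      if __name__ == "__main__":
          spare_cores = {"A": 2, "B": 3, "C": 5}
          # Revenue: 10 units per supported application; each application needs 4 cores.
          revenue = lambda S: 10 * (sum(spare_cores[p] for p in S) // 4)
          print(shapley(list(spare_cores), revenue))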

  18. Surgical resource utilization in urban terrorist bombing: a computer simulation.

    Science.gov (United States)

    Hirshberg, A; Stein, M; Walden, R

    1999-09-01

    The objective of this study was to analyze the utilization of surgical staff and facilities during an urban terrorist bombing incident. A discrete-event computer model of the emergency room and related hospital facilities was constructed and implemented, based on cumulated data from 12 urban terrorist bombing incidents in Israel. The simulation predicts that the admitting capacity of the hospital depends primarily on the number of available surgeons and defines an optimal staff profile for surgeons, residents, and trauma nurses. The major bottlenecks in the flow of critical casualties are the shock rooms and the computed tomographic scanner but not the operating rooms. The simulation also defines the number of reinforcement staff needed to treat noncritical casualties and shows that radiology is the major obstacle to the flow of these patients. Computer simulation is an important new tool for the optimization of surgical service elements for a multiple-casualty situation.
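
    To make the capacity question in the record above concrete, here is a minimal discrete-event sketch (not the authors' validated model) that estimates how mean casualty waiting time changes with the number of available surgeons; the arrival spacing and treatment time are invented parameters.

      # Minimal event-driven sketch: each casualty waits for the earliest-free surgeon.
      # Parameters are illustrative only.
      import heapq

      def mean_wait(arrivals, treat_time, n_surgeons):
          free_at = [0.0] * n_surgeons        # times at which each surgeon frees up
          heapq.heapify(free_at)
          waits = []
          for t_arrive in arrivals:           # arrivals must be sorted ascending
              t_free = heapq.heappop(free_at)
              start = max(t_arrive, t_free)
              waits.append(start - t_arrive)
              heapq.heappush(free_at, start + treat_time)
          return sum(waits) / len(waits)

      if __name__ == "__main__":
          arrivals = [i * 3.0 for i in range(30)]     # one casualty every 3 minutes
          for s in (2, 4, 6):
              print(s, "surgeons -> mean wait", round(mean_wait(arrivals, 20.0, s), 1), "min")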

  19. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    Full Text Available The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  20. Optimal Computing Resource Management Based on Utility Maximization in Mobile Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Haoyu Meng

    2017-01-01

    Full Text Available Mobile crowdsourcing, as an emerging service paradigm, enables the computing resource requestor (CRR) to outsource computation tasks to each computing resource provider (CRP). Considering the importance of pricing as an essential incentive to coordinate the real-time interaction among the CRR and CRPs, in this paper we propose an optimal real-time pricing strategy for computing resource management in mobile crowdsourcing. Firstly, we analytically model the behaviors of the CRR and CRPs in the form of carefully selected utility and cost functions, based on concepts from microeconomics. Secondly, we propose a distributed algorithm based on the exchange of control messages, which contain information on computing resource demand/supply and real-time prices. We show that there exist real-time prices that can align individual optimality with system-wide optimality. Finally, we also take account of the interaction among CRPs and formulate computing resource management as a game with a Nash equilibrium achievable via best response. Simulation results demonstrate that the proposed distributed algorithm can potentially benefit both the CRR and CRPs. The coordinator in mobile crowdsourcing can thus use the optimal real-time pricing strategy to manage computing resources towards the benefit of the overall system.
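
    The following Python sketch illustrates the general price-coordination idea summarized above: a coordinator nudges the price until resource demand and aggregate supply balance. The linear demand curve and quadratic provider costs are assumptions made for the example; the paper's actual utility and cost functions may differ.

      # Tatonnement-style price adjustment toward a market-clearing price.
      # Demand/supply forms are hypothetical.
      def demand(price, d_max=100.0, b=4.0):
          return max(d_max - b * price, 0.0)          # the requestor buys less as price rises

      def supply(price, costs=(0.5, 0.8, 1.2)):
          # A provider with cost a/2 * s^2 best-responds by offering s = price / a.
          return sum(price / a for a in costs)

      def clearing_price(step=0.01, tol=1e-3, max_iter=10000):
          p = 1.0
          for _ in range(max_iter):
              gap = demand(p) - supply(p)
              if abs(gap) < tol:
                  break
              p += step * gap                         # raise price when demand exceeds supply
          return p

      if __name__ == "__main__":
          p = clearing_price()
          print(f"clearing price {p:.2f}, traded amount {supply(p):.1f}")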

  1. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    Science.gov (United States)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is twofold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16-week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise, students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent T-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in

  2. Energy-efficient cloud computing : autonomic resource provisioning for datacenters

    OpenAIRE

    Tesfatsion, Selome Kostentinos

    2018-01-01

    Energy efficiency has become an increasingly important concern in data centers because of issues associated with energy consumption, such as capital costs, operating expenses, and environmental impact. While energy loss due to suboptimal use of facilities and non-IT equipment has largely been reduced through the use of best-practice technologies, addressing energy wastage in IT equipment still requires the design and implementation of energy-aware resource management systems. This thesis focu...

  3. TOWARDS NEW COMPUTATIONAL ARCHITECTURES FOR MASS-COLLABORATIVE OPEN EDUCATIONAL RESOURCES

    OpenAIRE

    Ismar Frango Silveira; Xavier Ochoa; Antonio Silva Sprock; Pollyana Notargiacomo Mustaro; Yosly C. Hernandez Bieluskas

    2011-01-01

    Open Educational Resources offer several benefits, mostly in education and training. Being potentially reusable, their use can reduce the time and cost of developing educational programs, so that these savings could be transferred directly to students through the production of a large range of open, freely available content, which varies from hypermedia to digital textbooks. This paper discusses this issue and presents a project and a research network that, in spite of being directed to Latin America'...

  4. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  5. Computer System Resource Requirements of Novice Programming Students.

    Science.gov (United States)

    Nutt, Gary J.

    The characteristics of jobs that constitute the mix for lower division FORTRAN classes in a university were investigated. Samples of these programs were also benchmarked on a larger central site computer and two minicomputer systems. It was concluded that a carefully chosen minicomputer system could offer service at least the equivalent of the…

  6. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    Directory of Open Access Journals (Sweden)

    Yonghua Xiong

    2014-01-01

    Full Text Available This paper presents a framework for mobile transparent computing. It extends PC transparent computing to mobile terminals. Since the resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  7. A novel resource management method of providing operating system as a service for mobile transparent computing.

    Science.gov (United States)

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends PC transparent computing to mobile terminals. Since the resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agents for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a management layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the management layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  8. Logical and physical resource management in the common node of a distributed function laboratory computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-01-01

    A scheme for managing resources required for transaction processing in the common node of a distributed function computer system has been given. The scheme has been found to be satisfactory for all common node services provided so far

  9. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    OpenAIRE

    Buyya, Rajkumar; Beloglazov, Anton; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational cos...

  10. Regional research exploitation of the LHC a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

    A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yardstick. Other input parameters were: assumptions for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

  11. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are complex undertakings. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include commodity markets, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
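
    Of the market mechanisms listed in the record above, an auction is the easiest to sketch in a few lines. The hypothetical example below runs a sealed-bid second-price (Vickrey) auction among resource brokers bidding for a block of CPU hours; the broker names and bids are invented for illustration.

      # Sealed-bid second-price auction: highest bidder wins, pays the second price.
      def vickrey_auction(bids):
          if len(bids) < 2:
              raise ValueError("need at least two bids")
          ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
          winner = ranked[0][0]
          price_paid = ranked[1][1]
          return winner, price_paid

      if __name__ == "__main__":
          bids = {"broker-asia": 12.0, "broker-eu": 15.5, "broker-na": 14.0}
          winner, price = vickrey_auction(bids)
          print(f"{winner} wins the CPU-hour block and pays {price}")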

  12. MCPLOTS: a particle physics resource based on volunteer computing

    CERN Document Server

    Karneyeu, A; Prestel, S; Skands, P Z

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME platform.

  13. MCPLOTS. A particle physics resource based on volunteer computing

    Energy Technology Data Exchange (ETDEWEB)

    Karneyeu, A. [Joint Inst. for Nuclear Research, Moscow (Russian Federation); Mijovic, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Irfu/SPP, CEA-Saclay, Gif-sur-Yvette (France); Prestel, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Lund Univ. (Sweden). Dept. of Astronomy and Theoretical Physics; Skands, P.Z. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2013-07-15

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform.

  14. MCPLOTS: a particle physics resource based on volunteer computing

    International Nuclear Information System (INIS)

    Karneyeu, A.; Mijovic, L.; Prestel, S.; Skands, P.Z.

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform. (orig.)

  15. MCPLOTS. A particle physics resource based on volunteer computing

    International Nuclear Information System (INIS)

    Karneyeu, A.; Mijovic, L.; Prestel, S.

    2013-07-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform.

  16. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  17. Campus Grids: Bringing Additional Computational Resources to HEP Researchers

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Bockelman, Brian; Swanson, David

    2012-01-01

    It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently underutilized while researchers on campus could greatly benefit from these unused capabilities. By leveraging principles from the Open Science Grid it is now possible to utilize these resources by forming a lightweight campus grid. The campus grids framework enables jobs that are submitted to one cluster to overflow, when necessary, to other clusters within the campus using whatever authentication mechanisms are available on campus. This framework is currently being used on several campuses to run HEP and other science jobs. Further, the framework has in some cases been expanded beyond the campus boundary by bridging campus grids into a regional grid, and can even be used to integrate resources from a national cyberinfrastructure such as the Open Science Grid. This paper will highlight 18 months of operational experience creating campus grids in the US, and the different campus configurations that have successfully utilized the campus grid infrastructure.

  18. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  19. Decision making in water resource planning: Models and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Fedra, K; Carlsen, A J [ed.

    1987-01-01

    This paper describes some basic concepts of simulation-based decision support systems for water resources management and the role of symbolic, graphics-based user interfaces. Designed to allow direct and easy access to advanced methods of analysis and decision support for a broad and heterogeneous group of users, these systems combine data base management, system simulation, operations research techniques such as optimization, interactive data analysis, elements of advanced decision technology, and artificial intelligence, with a friendly and conversational, symbolic display oriented user interface. Important features of the interface are the use of several parallel or alternative styles of interaction and display, including colour graphics and natural language. Combining quantitative numerical methods with qualitative and heuristic approaches, and giving the user direct and interactive control over the system's functions, human knowledge, experience and judgement are integrated with formal approaches into a tightly coupled man-machine system through an intelligent and easily accessible user interface. 4 drawings, 42 references.

  20. Monitoring of computing resource utilization of the ATLAS experiment

    International Nuclear Information System (INIS)

    Rousseau, David; Vukotic, Ilija; Schaffer, RD; Dimitrov, Gancho; Aidel, Osman; Albrand, Solveig

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  1. Mobile Cloud Computing: Resource Discovery, Session Connectivity and Other Open Issues

    NARCIS (Netherlands)

    Schüring, Markus; Karagiannis, Georgios

    2011-01-01

    Cloud computing can be considered as a model that provides network access to a shared pool of resources, such as storage and computing power, which can be rapidly provisioned and released with minimal management effort. This paper describes a research activity in the area of mobile cloud

  2. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  3. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research workhorse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, Jazz is a system researchers can count on to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  4. Analysis On Security Of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Muhammad Zunnurain Hussain

    2017-01-01

    Full Text Available In this paper the author discusses the security issues and challenges faced by the industry in securing cloud computing and how these problems can be tackled. Cloud computing is a modern technique for sharing resources, such as data and files, without launching one's own infrastructure, instead using third-party resources to avoid huge investment. It is very challenging these days to secure the communication between two users, although people use different encryption techniques.

  5. Decentralized vs. centralized economic coordination of resource allocation in grids

    OpenAIRE

    Eymann, Torsten; Reinicke, Michael; Ardáiz Villanueva, Óscar; Artigas Vidal, Pau; Díaz de Cerio Ripalda, Luis Manuel; Freitag, Fèlix; Meseguer Pallarès, Roc; Navarro Moldes, Leandro; Royo Vallés, María Dolores; Sanjeevan, Kanapathipillai

    2003-01-01

    Application layer networks are software architectures that allow the provisioning of services requiring a huge amount of resources by connecting large numbers of individual computers, like in Grid or Peer-to-Peer computing. Controlling the resource allocation in those networks is nearly impossible using a centralized arbitrator. The network simulation project CATNET will evaluate a decentralized mechanism for resource allocation, which is based on the economic paradigm of th...

  6. Sensor and computing resource management for a small satellite

    Science.gov (United States)

    Bhatia, Abhilasha; Goehner, Kyle; Sand, John; Straub, Jeremy; Mohammad, Atif; Korvald, Christoffer; Nervold, Anders Kose

    A small satellite in a low-Earth orbit (e.g., approximately a 300 to 400 km altitude) has an orbital velocity in the range of 8.5 km/s and completes an orbit approximately every 90 minutes. For a satellite with minimal attitude control, this presents a significant challenge in obtaining multiple images of a target region. Presuming an inclination in the range of 50 to 65 degrees, a limited number of opportunities to image a given target or communicate with a given ground station are available, over the course of a 24-hour period. For imaging needs (where solar illumination is required), the number of opportunities is further reduced. Given these short windows of opportunity for imaging, data transfer, and sending commands, scheduling must be optimized. In addition to the high-level scheduling performed for spacecraft operations, payload-level scheduling is also required. The mission requires that images be post-processed to maximize spatial resolution and minimize data transfer (through removing overlapping regions). The payload unit includes GPS and inertial measurement unit (IMU) hardware to aid in image alignment for the aforementioned. The payload scheduler must, thus, split its energy and computing-cycle budgets between determining an imaging sequence (required to capture the highly-overlapping data required for super-resolution and adjacent areas required for mosaicking), processing the imagery (to perform the super-resolution and mosaicking) and preparing the data for transmission (compressing it, etc.). This paper presents an approach for satellite control, scheduling and operations that allows the cameras, GPS and IMU to be used in conjunction to acquire higher-resolution imagery of a target region.

  7. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

    Full Text Available Mobile cloud computing (MCC), which combines mobile computing and the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When offloading a complicated multitask application to the MCC environment, each task executes individually in terms of its own computation, storage, and bandwidth requirements. Due to the user's mobility, the provided resources have different performance metrics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, how to choose the appropriate offload destination and orchestrate the resources for a multitask application is a challenging problem. This paper realizes programmable resource provisioning for heterogeneous, energy-constrained computing environments, where a software-defined controller is responsible for resource orchestration, offloading, and migration. The resource orchestration is formulated as a multiobjective optimization problem over the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain approximate optimal solutions. Simulation results show that the solutions for all of the studied cases can nearly reach the Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.
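
    As a compact illustration of the final optimization step described above, the sketch below runs a plain particle swarm over a three-dimensional plan (energy, cost, unavailability) and minimizes a weighted sum; the objective, weights and bounds are invented stand-ins, far simpler than the paper's multiobjective model.

      # Minimal particle swarm optimizer over a hypothetical weighted objective.
      import random

      def objective(x):
          energy, cost, unavail = x
          return 0.5 * energy + 0.3 * cost + 0.2 * unavail

      def pso(dim=3, n_particles=20, iters=200, lo=0.0, hi=10.0):
          rnd = random.Random(42)
          pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
          vel = [[0.0] * dim for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          gbest = min(pbest, key=objective)[:]
          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(dim):
                      r1, r2 = rnd.random(), rnd.random()
                      vel[i][d] = (0.7 * vel[i][d]
                                   + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                   + 1.5 * r2 * (gbest[d] - pos[i][d]))
                      pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                  if objective(pos[i]) < objective(pbest[i]):
                      pbest[i] = pos[i][:]
              gbest = min(pbest, key=objective)[:]
          return gbest, objective(gbest)

      if __name__ == "__main__":
          best, val = pso()
          print("best plan", [round(v, 2) for v in best], "objective", round(val, 3))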

  8. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters, through institute clusters, to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  9. [A Case of Huge Colon Cancer Accompanied with Severe Hypoproteinemia].

    Science.gov (United States)

    Hiraki, Sakurao; Kanesada, Kou; Harada, Toshio; Tada, Kousuke; Fukuda, Shintaro

    2017-11-01

    We report a case of a huge colon cancer accompanied by severe hypoproteinemia. A 74-year-old woman was referred to our hospital because of abdominal fullness. Blood examinations revealed anemia (hemoglobin 8.8 g/dL) and severe hypoproteinemia (total protein 4.5 g/dL, albumin 1.1 g/dL). Computed tomography examination of the abdomen revealed ascites and a large tumor (12.5×10.5 cm) at the right side of the colon. Further examinations led to a diagnosis of ascending colon cancer without distant metastasis, and we performed right hemicolectomy and primary intestinal anastomosis by open surgery. A huge type 1 tumor (18×12 cm) was observed in the excised specimen, which invaded the terminal ileum directly. The tumor was diagnosed as moderately differentiated adenocarcinoma without lymph node metastasis (pT3N0M0, fStage II). The postoperative course was uneventful and the serum protein concentration gradually recovered to the normal range. Protein leakage from the tumor could not be proved in this case, so protein-losing enteropathy cannot be diagnosed definitively, but we strongly suspect this etiology based on the postoperative course.

  10. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  11. Huge maternal hydronephrosis: a rare complication in pregnancy.

    Science.gov (United States)

    Peng, Hsiu-Huei; Wang, Chin-Jung; Yen, Chih-Feng; Chou, Chien-Chung; Lee, Chyi-Long

    2003-06-10

    A huge maternal hydronephrosis is uncommon in pregnancy and might be mistaken for a pelvic mass. A 21-year-old primigravida was noted at the 25th week of gestation to have a visible bulging mass on her left flank. The mass was originally mistaken for a large ovarian cyst but later proved to be a huge hydronephrosis. Retrograde insertion of a ureteroscope and a ureteric stent failed, so we performed repeated ultrasound-guided needle aspiration to decompress the huge hydronephrosis, which enabled the patient to proceed to a successful term vaginal delivery. Nephrectomy was performed after delivery and confirmed the diagnosis of congenital ureteropelvic junction obstruction.

  12. Computational resources for ribosome profiling: from database to Web server and software.

    Science.gov (United States)

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits from not only the awesome power of ribosome profiling but also an extensive range of computational resources available for ribosome profiling. At present, however, a comprehensive review on these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the Worldwide LHC Computing Grid (WLCG). The status and performance of the Tier-2 center are presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and personpower resources.

  14. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  15. Open Educational Resources: The Role of OCW, Blogs and Videos in Computer Networks Classroom

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2012-09-01

    Full Text Available This paper analyzes the learning experiences and opinions obtained from a group of undergraduate students in their interaction with several on-line multimedia resources included in a free on-line course about Computer Networks. These new educational resources are based on the Web 2.0 approach, such as blogs, videos and virtual labs, which have been added to a website for distance self-learning.

  16. AN ENHANCED METHOD FOR EXTENDING COMPUTATION AND RESOURCES BY MINIMIZING SERVICE DELAY IN EDGE CLOUD COMPUTING

    OpenAIRE

    Bavishna, B.; Agalya, M.; Kavitha, G.

    2018-01-01

    A great deal of research has been done in the field of cloud computing. A variety of algorithms has been proposed for its effective performance. The role of virtualization is significant, and its performance depends on VM migration and allocation. Because much energy is consumed in the cloud, suitable algorithms are required for saving energy and enhancing efficiency. In the proposed work, a green algorithm has been considered with ...

  17. Load/resource matching for period-of-record computer simulation

    International Nuclear Information System (INIS)

    Lindsey, E.D. Jr.; Robbins, G.E. III

    1991-01-01

    The Southwestern Power Administration (Southwestern), an agency of the Department of Energy, is responsible for marketing the power and energy produced at Federal hydroelectric power projects developed by the U.S. Army Corps of Engineers in the southwestern United States. This paper reports that in order to maximize benefits from limited resources, to evaluate proposed changes in the operation of existing projects, and to determine the feasibility and marketability of proposed new projects, Southwestern utilizes a period-of-record computer simulation model created in the 1960s. Southwestern is constructing a new computer simulation model to take advantage of changes in computers, policy, and procedures. Within all hydroelectric power reservoir systems, the ability of the resources to match the load demand is critical and presents complex problems. Therefore, the method used to compare available energy resources to energy load demands is a very important aspect of the new model. Southwestern has developed an innovative method which compares a resource duration curve with a load duration curve, adjusting the resource duration curve to make the most efficient use of the available resources.
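
    To make the curve-matching idea above concrete, the following minimal Python sketch sorts hypothetical hourly load and hydro series into duration curves and sums the shortfall where load exceeds the available resource; the function names and data are illustrative assumptions, not part of Southwestern's model.

        # Illustrative sketch (not Southwestern's model): compare a resource duration
        # curve with a load duration curve and measure the energy shortfall.
        def duration_curve(values):
            """Return the series sorted from highest to lowest (a duration curve)."""
            return sorted(values, reverse=True)

        def shortfall(load_mw, resource_mw):
            """Sum the hourly deficit between the two duration curves."""
            load_dc = duration_curve(load_mw)
            res_dc = duration_curve(resource_mw)
            return sum(max(l - r, 0.0) for l, r in zip(load_dc, res_dc))

        # Hypothetical hourly data (MW) for a short period of record.
        load = [120, 95, 80, 150, 60, 110, 130, 70]
        hydro = [100, 100, 90, 90, 80, 80, 120, 120]
        print("Unserved energy (MWh):", shortfall(load, hydro))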

  18. An integrated system for land resources supervision based on the IoT and cloud computing

    Science.gov (United States)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  19. Aggressive angiomyxoma presenting with huge abdominal lump: A case report

    Science.gov (United States)

    Kumar, Sanjeev; Agrawal, Nikhil; Khanna, Rahul; Khanna, AK

    2008-01-01

    Aggressive angiomyxoma is a rare mesenchymal neoplasm. It mainly presents in females. We here present a case of angiomyxoma presenting as a huge abdominal lump along with a gluteal swelling. The case note is described along with a brief review of the literature. PMID:18755035

  20. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis."

  1. Huge magnetoresistance effect of highly oriented pyrolytic graphite

    International Nuclear Information System (INIS)

    Du Youwei; Wang Zhiming; Ni Gang; Xing Dingyu; Xu Qingyu

    2004-01-01

    Graphite is a quasi-two-dimensional semimetal. However, in ordinary graphite the magnetoresistance is not very high because of the small crystal size and the lack of preferred orientation. A huge positive magnetoresistance, up to 85300% at 4.2 K and 4950% at 300 K under an 8.15 T magnetic field, was found in highly oriented pyrolytic graphite. The huge positive magnetoresistance is attributed not only to ordinary magnetoresistance but also to a magnetic-field-driven semimetal-insulator transition.

  2. Dynamic provisioning of local and remote compute resources with OpenStack

    Science.gov (United States)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
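
    As a hedged illustration of the kind of on-demand provisioning described above, the sketch below uses the openstacksdk Python client to boot a single virtual worker node; the cloud, image, flavor and network names are placeholder assumptions, and the actual EKP integration with the batch system and HEP software environment is not reproduced here.

        # Sketch of on-demand worker provisioning with openstacksdk; names are placeholders.
        import openstack

        conn = openstack.connect(cloud="institute-cloud")     # entry in clouds.yaml (assumed)
        image = conn.compute.find_image("worker-node-image")  # pre-built worker image (assumed)
        flavor = conn.compute.find_flavor("m1.large")
        network = conn.network.find_network("private")

        server = conn.compute.create_server(
            name="dynamic-worker-01",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)  # block until the node is ACTIVE
        print(server.name, server.status)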

  3. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  4. Resource-constrained project scheduling: computing lower bounds by solving minimum cut problems

    NARCIS (Netherlands)

    Möhring, R.H.; Nesetril, J.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen

    1999-01-01

    We present a novel approach to compute Lagrangian lower bounds on the objective function value of a wide class of resource-constrained project scheduling problems. The basis is a polynomial-time algorithm to solve the following scheduling problem: Given a set of activities with start-time dependent

  5. Selecting, Evaluating and Creating Policies for Computer-Based Resources in the Behavioral Sciences and Education.

    Science.gov (United States)

    Richardson, Linda B., Comp.; And Others

    This collection includes four handouts: (1) "Selection Critria Considerations for Computer-Based Resources" (Linda B. Richardson); (2) "Software Collection Policies in Academic Libraries" (a 24-item bibliography, Jane W. Johnson); (3) "Circulation and Security of Software" (a 19-item bibliography, Sara Elizabeth Williams); and (4) "Bibliography of…

  6. The Usage of informal computer based communication in the context of organization’s technological resources

    OpenAIRE

    Raišienė, Agota Giedrė; Jonušauskas, Steponas

    2011-01-01

    The purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology - meta-analysis, survey and descriptive analysis. According to scientists, the functions of informal communication cover the sharing of work-related information, coordination of team activities, the spread of organizational culture and a feeling of interdependence and affinity. Also, informal communication widens the ...

  7. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multiple channels in learning activities promises extended benefits, moving from traditional learning-centred approaches to collaborative learning-centred ones that emphasise pervasive learning anywhere and anytime. While compiling big data, cloud computing, and the semantic web into OLR offers a broader spectrum of…

  8. Computer Processing 10-20-30. Teacher's Manual. Senior High School Teacher Resource Manual.

    Science.gov (United States)

    Fisher, Mel; Lautt, Ray

    Designed to help teachers meet the program objectives for the computer processing curriculum for senior high schools in the province of Alberta, Canada, this resource manual includes the following sections: (1) program objectives; (2) a flowchart of curriculum modules; (3) suggestions for short- and long-range planning; (4) sample lesson plans;…

  9. Photonic entanglement as a resource in quantum computation and quantum communication

    OpenAIRE

    Prevedel, Robert; Aspelmeyer, Markus; Brukner, Caslav; Jennewein, Thomas; Zeilinger, Anton

    2008-01-01

    Entanglement is an essential resource in current experimental implementations for quantum information processing. We review a class of experiments exploiting photonic entanglement, ranging from one-way quantum computing over quantum communication complexity to long-distance quantum communication. We then propose a set of feasible experiments that will underline the advantages of photonic entanglement for quantum information processing.

  10. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food, and health. We focus on computer systems, with attention paid to cache memory, and propose to use an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The analytical results obtained are related to a practical experiment, showing interesting and valuable results.
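
    For readers unfamiliar with the functional form, the sketch below fits a classic stretched-exponential (Kohlrausch) curve to synthetic data with scipy; it is purely illustrative and does not reproduce the modified model or the cache-memory measurements of the paper.

        # Classic stretched exponential f(t) = a * exp(-(t/tau)**beta), fitted to toy data.
        import numpy as np
        from scipy.optimize import curve_fit

        def stretched_exp(t, a, tau, beta):
            return a * np.exp(-(t / tau) ** beta)

        t = np.linspace(1, 100, 50)
        observed = stretched_exp(t, 1.0, 20.0, 0.6) + np.random.normal(0, 0.01, t.size)

        params, _ = curve_fit(stretched_exp, t, observed, p0=(1.0, 10.0, 0.5))
        print("a=%.3f tau=%.3f beta=%.3f" % tuple(params))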

  11. Universal resources for approximate and stochastic measurement-based quantum computation

    International Nuclear Information System (INIS)

    Mora, Caterina E.; Piani, Marco; Miyake, Akimasa; Van den Nest, Maarten; Duer, Wolfgang; Briegel, Hans J.

    2010-01-01

    We investigate which quantum states can serve as universal resources for approximate and stochastic measurement-based quantum computation in the sense that any quantum state can be generated from a given resource by means of single-qubit (local) operations assisted by classical communication. More precisely, we consider the approximate and stochastic generation of states, resulting, for example, from a restriction to finite measurement settings or from possible imperfections in the resources or local operations. We show that entanglement-based criteria for universality obtained in M. Van den Nest et al. [New J. Phys. 9, 204 (2007)] for the exact, deterministic case can be lifted to the much more general approximate, stochastic case. This allows us to move from the idealized situation (exact, deterministic universality) considered in previous works to the practically relevant context of nonperfect state preparation. We find that any entanglement measure fulfilling some basic requirements needs to reach its maximum value on some element of an approximate, stochastic universal family of resource states, as the resource size grows. This allows us to rule out various families of states as being approximate, stochastic universal. We prove that approximate, stochastic universality is in general a weaker requirement than deterministic, exact universality and provide resources that are efficient approximate universal, but not exact deterministic universal. We also study the robustness of universal resources for measurement-based quantum computation under realistic assumptions about the (imperfect) generation and manipulation of entangled states, giving an explicit expression for the impact that errors made in the preparation of the resource have on the possibility to use it for universal approximate and stochastic state preparation. Finally, we discuss the relation between our entanglement-based criteria and recent results regarding the uselessness of states with a high

  12. Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research.

    Science.gov (United States)

    Erdemir, Ahmet; Hunter, Peter J; Holzapfel, Gerhard A; Loew, Leslie M; Middleton, John; Jacobs, Christopher R; Nithiarasu, Perumal; Löhner, Rainald; Wei, Guowei; Winkelstein, Beth A; Barocas, Victor H; Guilak, Farshid; Ku, Joy P; Hicks, Jennifer L; Delp, Scott L; Sacks, Michael; Weiss, Jeffrey A; Ateshian, Gerard A; Maas, Steve A; McCulloch, Andrew D; Peng, Grace C Y

    2018-02-01

    The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models and related data and simulation software has first become a utilitarian interest, and now, it is a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains, also motivates sharing of modeling resources as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as the enhancement of their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate

  13. Integrating GRID tools to build a computing resource broker: activities of DataGrid WP1

    International Nuclear Information System (INIS)

    Anglano, C.; Barale, S.; Gaido, L.; Guarise, A.; Lusso, S.; Werbrouck, A.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on a knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). The authors describe the DataGrid approach in integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy

  14. Measuring the impact of computer resource quality on the software development process and product

    Science.gov (United States)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.

  15. Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.

    Science.gov (United States)

    Li, Min; Zhang, John Zenghui; Xia, Fei

    2016-04-12

    Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.

  16. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented as well as some of the specific material developed for some of the projects.

  17. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has made the entry barrier for data centre participation significantly lower, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from file-level to event-level and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  18. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo procedure, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from a peaked to a spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found over Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
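
    The following toy Metropolis loop illustrates the kind of temperature-controlled assignment explored above: tasks are moved between nodes and a move is accepted with the usual exp(-delta/T) rule. The latency function is a stand-in of our own, not the model used in the paper.

        import math, random

        def latency(assignment, load_cost=1.0):
            # Toy "global latency": the sum of squared node loads penalises overload.
            loads = {}
            for node in assignment:
                loads[node] = loads.get(node, 0) + 1
            return sum(load_cost * n * n for n in loads.values())

        def metropolis(num_tasks=50, num_nodes=5, temperature=1.0, steps=5000):
            assignment = [random.randrange(num_nodes) for _ in range(num_tasks)]
            energy = latency(assignment)
            for _ in range(steps):
                task = random.randrange(num_tasks)
                old = assignment[task]
                assignment[task] = random.randrange(num_nodes)
                new_energy = latency(assignment)
                delta = new_energy - energy
                if delta <= 0 or random.random() < math.exp(-delta / temperature):
                    energy = new_energy          # accept the move
                else:
                    assignment[task] = old       # reject and restore
            return energy

        print("final latency proxy at T=1.0:", metropolis())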

  19. The NILE system architecture: fault-tolerant, wide-area access to computing and data resources

    International Nuclear Information System (INIS)

    Ricciardi, Aleta; Ogg, Michael; Rothfus, Eric

    1996-01-01

    NILE is a multi-disciplinary project building a distributed computing environment for HEP. It provides wide-area, fault-tolerant, integrated access to processing and data resources for collaborators of the CLEO experiment, though the goals and principles are applicable to many domains. NILE has three main objectives: a realistic distributed system architecture design, the design of a robust data model, and a Fast-Track implementation providing a prototype design environment which will also be used by CLEO physicists. This paper focuses on the software and wide-area system architecture design and the computing issues involved in making NILE services highly-available. (author)

  20. Huge Mesenteric Lymphangioma – A Rare Cause of Acute Abdomen

    African Journals Online (AJOL)

    Lymphangiomas are benign congenital masses which occur most commonly in the head and neck of children, and the incidence of mesenteric lymphangiomas is very rare. We report such a case of a huge mesenteric lymphangioma in a 20-year-old male who presented to us with an acute abdomen. Pre-operative diagnosis is difficult ...

  1. 61 HUGE BENIGN GRANULOSA CELL TUMOUR IN A 61 YEAR ...

    African Journals Online (AJOL)

    Dr. E. P. Gharoro

    peritoneal cavity, huge right ovarian cyst measuring 37cm/29cm as in figure 1a, weighing 8.3 kg with a thick smooth wall without excrescences on surface. ... is released in the blood during pregnancy and is produced in other conditions such as endometriosis, fibroids and diverticulitis. It is useful in monitoring therapy.

  2. Umbilicoplasty in children with huge umbilical hernia | Komlatsè ...

    African Journals Online (AJOL)

    With a mean follow-up of 10 months, we had 10 excellent results and two fair results according to our criteria. Conclusion: Our two lateral flaps umbilicoplasty is well-adapted to HUH in children. It is simple and assures a satisfactory anatomical and cosmetic result. Key words: Children, huge umbilical hernia, Togo, umbilical ...

  3. A Huge Ovarian Dermoid Cyst: Successful Laparoscopic Total Excision

    OpenAIRE

    Uyanikoglu, Hacer; Dusak, Abdurrahim

    2017-01-01

    Giant ovarian cysts, ≥15 cm in diameter, are quite rare in women of reproductive age. Here, we present a case of an ovarian cyst with an unusual presentation treated by laparoscopic surgery. On histology, the mass was found to be a mature cystic teratoma. The diagnostic and management challenges posed by this huge ovarian cyst are discussed in the light of the literature.

  4. Huge mucinous cystadenoma of the pancreas mistaken for a ...

    African Journals Online (AJOL)

    Cystic tumors of the pancreas are rare and can be confused with pseudocysts. We present a 50-year-old woman with a huge mucinous cystadenoma of the pancreas initially diagnosed and managed with a cystojejunostomy and cyst wall biopsy. She required another laparotomy and tumor excision after histological ...

  5. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    OpenAIRE

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  6. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time and optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented and the performance of the proposed method is evaluated in a simulated environment.
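
    A minimal sketch of the event-driven idea, under our own assumptions (a scalar toy plant and a stand-in optimiser): the optimisation is re-run only when the measured state drifts from the last prediction by more than a threshold, otherwise the stored input sequence is reused and computation is saved.

        def solve_mpc(x, horizon=5, u_max=1.0):
            """Stand-in 'optimiser': saturated proportional inputs over the horizon."""
            plan, state = [], x
            for _ in range(horizon):
                u = max(-u_max, min(u_max, -0.8 * state))
                plan.append(u)
                state = 0.9 * state + u          # same toy model as the plant below
            return plan

        def simulate(x0=5.0, steps=30, threshold=0.2):
            x, plan, predicted, solves = x0, [], x0, 0
            for k in range(steps):
                if not plan or abs(x - predicted) > threshold:
                    plan = solve_mpc(x)          # event: re-optimise
                    solves += 1
                u = plan.pop(0)
                predicted = 0.9 * x + u          # model prediction for the next step
                x = 0.9 * x + u + 0.01 * ((-1) ** k)   # plant with a small disturbance
            print("optimisations used:", solves, "of", steps, "steps; final state %.3f" % x)

        simulate()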

  7. Piping data bank and erection system of Angra 2: structure, computational resources and systems

    International Nuclear Information System (INIS)

    Abud, P.R.; Court, E.G.; Rosette, A.C.

    1992-01-01

    The Piping Data Bank of Angra 2, called the Erection Management System, was developed to manage the piping erection of the Nuclear Power Plant of Angra 2. Beyond the erection follow-up of piping and supports, it manages the piping design, material procurement, the flow of fabrication documents, weld testing and material stocks at the warehouse. The work carried out to define the structure of the Data Bank, the computational resources and the systems is described here. (author)

  8. Blockchain-Empowered Fair Computational Resource Sharing System in the D2D Network

    Directory of Open Access Journals (Sweden)

    Zhen Hong

    2017-11-01

    Full Text Available Device-to-device (D2D) communication is becoming an increasingly important technology in future networks with the climbing demand for local services. For instance, resource sharing in the D2D network features ubiquitous availability, flexibility, low latency and low cost. However, these features also bring along challenges when building a satisfactory resource sharing system in the D2D network. Specifically, user mobility is one of the top concerns for designing a cooperative D2D computational resource sharing system since mutual communication may not be stably available due to user mobility. A previous endeavour has demonstrated and proven how connectivity can be incorporated into cooperative task scheduling among users in the D2D network to effectively lower average task execution time. There are doubts about whether this type of task scheduling scheme, though effective, is fair to users. In other words, it can be unfair for users who contribute many computational resources while receiving little when in need. In this paper, we propose a novel blockchain-based credit system that can be incorporated into the connectivity-aware task scheduling scheme to enforce fairness among users in the D2D network. Users' computational task cooperation will be recorded on the public blockchain ledger in the system as transactions, and each user's credit balance can be easily accessed from the ledger. A supernode at the base station is responsible for scheduling cooperative computational tasks based on user mobility and user credit balance. We investigated the performance of the credit system, and simulation results showed that with a minor sacrifice of average task execution time, the level of fairness can obtain a major enhancement.
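
    A toy hash-chained credit ledger, sketched below under our own assumptions, shows the bookkeeping idea only: each cooperative task is recorded as a transaction that moves credits from the requester to the helper, and balances are derived from the ledger. It is not the consensus protocol or the scheduling scheme of the paper.

        import hashlib, json

        ledger = []

        def add_transaction(requester, helper, credits):
            prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
            record = {"from": requester, "to": helper, "credits": credits, "prev": prev_hash}
            record["hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            ledger.append(record)

        def balance(user, initial=10):
            earned = sum(t["credits"] for t in ledger if t["to"] == user)
            spent = sum(t["credits"] for t in ledger if t["from"] == user)
            return initial + earned - spent

        add_transaction("alice", "bob", 3)   # bob executed a task offloaded by alice
        add_transaction("bob", "carol", 1)
        print("bob:", balance("bob"), "alice:", balance("alice"))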

  9. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

    Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for the Japanese language, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing how important it is to learn collocational relations in a foreign language, we examine their coverage in the various learners' resources for the Japanese language. We particularly concentrate on a few collocations at the beginner's level, where we demonstrate their treatment across various resources. Special attention is paid to what are referred to as unpredictable collocations, which carry a bigger foreign-language learning burden than predictable ones.

  10. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
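
    As a rough illustration of utility-driven allocation, the sketch below greedily packs tasks by utility per unit of CPU under a single capacity constraint; the paper formulates the exact 0-1 multichoice multidimensional knapsack, which this simplified heuristic does not solve optimally, and the task data are invented.

        def allocate(tasks, cpu_capacity):
            """tasks: list of (name, utility, cpu_demand); pick greedily by utility density."""
            chosen, used = [], 0
            for name, utility, cpu in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
                if used + cpu <= cpu_capacity:
                    chosen.append(name)
                    used += cpu
            return chosen, used

        tasks = [("urgent-analysis", 9.0, 4), ("mc-production", 6.0, 8), ("low-priority", 1.0, 2)]
        print(allocate(tasks, cpu_capacity=10))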

  11. The Trope Tank: A Laboratory with Material Resources for Creative Computing

    Directory of Open Access Journals (Sweden)

    Nick Montfort

    2014-12-01

    Full Text Available http://dx.doi.org/10.5007/1807-9288.2014v10n2p53 Principles for organizing and making use of a laboratory with material computing resources are articulated. This laboratory, the Trope Tank, is a facility for teaching, research, and creative collaboration and offers hardware (in working condition and set up for use) from the 1970s, 1980s, and 1990s, including videogame systems, home computers, and an arcade cabinet. To aid in investigating the material history of texts, the lab has a small 19th century letterpress, a typewriter, a print terminal, and dot-matrix printers. Other resources include controllers, peripherals, manuals, books, and software on physical media. These resources are used for teaching, loaned for local exhibitions and presentations, and accessed by researchers and artists. The space is primarily a laboratory (rather than a library, studio, or museum), so materials are organized by platform and intended use. Textual information about the historical contexts of the available systems, and resources are set up to allow easy operation, and even casual use, by researchers, teachers, students, and artists.

  12. Testing a computer-based ostomy care training resource for staff nurses.

    Science.gov (United States)

    Bales, Isabel

    2010-05-01

    Fragmented teaching and ostomy care provided by nonspecialized clinicians unfamiliar with state-of-the-art care and products have been identified as problems in teaching ostomy care to the new ostomate. After conducting a literature review of theories and concepts related to the impact of nurse behaviors and confidence on ostomy care, the author developed a computer-based learning resource and assessed its effect on staff nurse confidence. Of 189 staff nurses with a minimum of 1 year acute-care experience employed in the acute care, emergency, and rehabilitation departments of an acute care facility in the Midwestern US, 103 agreed to participate and returned completed pre- and post-tests, each comprising the same eight statements about providing ostomy care. F and P values were computed for differences between pre- and post test scores. Based on a scale where 1 = totally disagree and 5 = totally agree with the statement, baseline confidence and perceived mean knowledge scores averaged 3.8 and after viewing the resource program post-test mean scores averaged 4.51, a statistically significant improvement (P = 0.000). The largest difference between pre- and post test scores involved feeling confident in having the resources to learn ostomy skills independently. The availability of an electronic ostomy care resource was rated highly in both pre- and post testing. Studies to assess the effects of increased confidence and knowledge on the quality and provision of care are warranted.

  13. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    International Nuclear Information System (INIS)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-01-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi–Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources. (paper)
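
    The numerical toy below, built on assumptions of our own (a random symmetric stand-in for one symmetry block), only illustrates the counting argument: if the Hamiltonian is block diagonal, encoding a single block needs ceil(log2(block dimension)) qubits rather than one qubit per spin orbital. It does not reproduce the operator-space reduction scheme of the paper.

        import math
        import numpy as np

        n_spin_orbitals = 4                 # e.g. a two-site model; the full space has 2**4 = 16 states
        full_dim = 2 ** n_spin_orbitals

        # Hypothetical half-filling sector: choose(4, 2) = 6 basis states.
        sector_dim = math.comb(n_spin_orbitals, n_spin_orbitals // 2)
        H_block = np.random.rand(sector_dim, sector_dim)
        H_block = 0.5 * (H_block + H_block.T)   # symmetric stand-in for the block

        qubits_full = n_spin_orbitals
        qubits_block = math.ceil(math.log2(sector_dim))
        print("block Hamiltonian shape:", H_block.shape)
        print(f"full space: {full_dim} states -> {qubits_full} qubits")
        print(f"symmetry block: {sector_dim} states -> {qubits_block} qubits")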

  14. Churn prediction on huge telecom data using hybrid firefly based classification

    Directory of Open Access Journals (Sweden)

    Ammar A.Q. Ahmed

    2017-11-01

    Full Text Available Churn prediction in telecom has become a major requirement due to the increase in the number of telecom providers. However, due to the hugeness, sparsity and imbalanced nature of the data, churn prediction in telecom has always been a complex task. This paper presents a metaheuristic-based churn prediction technique that performs churn prediction on huge telecom data. A hybridized form of the Firefly algorithm is used as the classifier. It has been identified that the compute-intensive component of the Firefly algorithm is the comparison block, where every firefly is compared with every other firefly to identify the one with the highest light intensity. This component is replaced by Simulated Annealing and the classification process is carried out. Experiments were conducted on the Orange dataset. It was observed that the Firefly algorithm works best on churn data and the hybridized Firefly algorithm provides effective and faster results.
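
    The sketch below only captures the hybrid search idea described above: the all-pairs brightness comparison is replaced by a simulated-annealing acceptance test against a single random peer. The churn-classifier fitness is replaced by a toy objective of our own, so this is not the full method or its data pipeline.

        import math, random

        def fitness(x):                      # stand-in for classifier accuracy
            return -sum(v * v for v in x)    # maximum at the origin

        def hybrid_firefly_sa(n_fireflies=20, dims=5, iters=200, t0=1.0, alpha=0.1):
            swarm = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(n_fireflies)]
            temp = t0
            for _ in range(iters):
                for i in range(n_fireflies):
                    j = random.randrange(n_fireflies)        # one random peer, not all pairs
                    diff = fitness(swarm[j]) - fitness(swarm[i])
                    if diff > 0 or random.random() < math.exp(diff / temp):
                        # move firefly i toward (possibly worse) firefly j, plus noise
                        swarm[i] = [a + 0.5 * (b - a) + alpha * random.gauss(0, 1)
                                    for a, b in zip(swarm[i], swarm[j])]
                temp *= 0.98                                  # cooling schedule
            return max(swarm, key=fitness)

        best = hybrid_firefly_sa()
        print("best fitness found: %.4f" % fitness(best))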

  15. Huge uterine-cervical diverticulum mimicking as a cyst

    Directory of Open Access Journals (Sweden)

    S Chufal

    2012-01-01

    Full Text Available Here we report an incidental huge uterine-cervical diverticulum from a total abdominal hysterectomy specimen in a perimenopausal woman who presented with acute abdominal pain. The diverticulum mimicked various cysts occurring on the lateral side of the female genital tract. Histopathological examination confirmed it to be a cervical diverticulum with communication to the uterine cavity through two different openings. Such diverticula can attain a huge size if ignored for a long duration and, because of their extreme rarity, present a diagnostic challenge to clinicians, radiologists and pathologists. Therefore, diverticula should also be included in the differential diagnosis. The histopathological confirmation also highlights that a diverticulum can present as an acute abdomen, requiring early diagnosis with appropriate timely intervention. Immunohistochemistry for CD10 was also used to differentiate it from a mesonephric cyst.

  16. It was huge! Nursing students' first experience at AORN Congress.

    Science.gov (United States)

    Byrne, Michelle; Cantrell, Kelly; Fletcher, Daphne; McRaney, David; Morris, Kelly

    2004-01-01

    AN EXPERIENTIAL KNOWLEDGE of mentoring through nursing students' perspectives may enhance AORN's ability to recruit students to perioperative nursing and aid future planning for student involvement in the Association. IN 2003, four first-year nursing students attended the AORN Congress in Chicago with their nursing instructor and mentor. The students' experiences were captured using a thematic analysis to analyze their journals. THE FIVE COMMON THEMES identified were "it was huge," "exhibits," "student program," "exploring the city," and "suggestions for future planning."

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  18. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results of the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  19. A Safety Resource Allocation Mechanism against Connection Fault for Vehicular Cloud Computing

    Directory of Open Access Journals (Sweden)

    Tianpeng Ye

    2016-01-01

    Full Text Available The Intelligent Transportation System (ITS) is becoming an important component of the smart city toward safer roads, better traffic control, and on-demand service by utilizing and processing the information collected from sensors of vehicles and road side infrastructure. In ITS, Vehicular Cloud Computing (VCC) is a novel technology balancing the requirement of complex services and the limited capability of on-board computers. However, the behaviors of the vehicles in VCC are dynamic, random, and complex. Thus, one of the key safety issues is the frequent disconnections between the vehicle and the Vehicular Cloud (VC) when this vehicle is computing for a service. More importantly, a connection fault will seriously disturb the normal services of VCC and impact the safety work of the transportation system. In this paper, a safety resource allocation mechanism is proposed against connection faults in VCC by using a modified workflow with prediction capability. We firstly propose a probability model for vehicle movement which satisfies the high dynamics and real-time requirements of VCC. We then propose a Prediction-based Reliability Maximization Algorithm (PRMA) to realize the safety resource allocation for VCC. The evaluation shows that our mechanism can improve the reliability and guarantee the real-time performance of the VCC.

  20. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Full Text Available Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some of the services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in their capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in their network resources, and (3) the high network latency to the centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work, we propose an architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm, consisting of the decision rules of a linearized decision tree based on three conditions (service size, completion time, and VM capacity), for managing and delegating user requests in order to balance the workload. Moreover, we propose an algorithm to allocate resources to meet service level agreement (SLA) and quality of service (QoS) requirements, as well as optimizing big data distribution in fog and cloud computing. Our simulation result shows that our proposed approach can efficiently balance the workload, improve resource allocation efficiency, optimize big data distribution, and show better performance than other existing methods.
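
    A hedged sketch of linearised decision rules over the three conditions named above (service size, completion time, VM capacity); the thresholds, rule order and example values are assumptions for illustration and are not taken from the paper.

        def delegate(service_size_mb, deadline_ms, fog_vm_free_cores,
                     size_limit=50, delay_limit=100, min_cores=2):
            if (deadline_ms <= delay_limit and service_size_mb <= size_limit
                    and fog_vm_free_cores >= min_cores):
                return "fog"        # small, delay-sensitive request with local capacity
            if fog_vm_free_cores < min_cores or service_size_mb > size_limit:
                return "cloud"      # offload bulky work or relieve a saturated fog node
            return "cloud" if deadline_ms > delay_limit else "fog"

        print(delegate(service_size_mb=20, deadline_ms=80, fog_vm_free_cores=4))    # fog
        print(delegate(service_size_mb=200, deadline_ms=500, fog_vm_free_cores=4))  # cloud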

  1. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS, along the lines followed by other LHC experiments, is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for local usage. The amount of resources allocated can thus be elastically modeled to cope with the needs of the CMS experiment and local users. Moreover, direct access to and integration of OpenStack resources with the CMS workload management system is explored. In this paper we present this approach, we report on the performance of the on-demand allocated resources, and we discuss the lessons learned and the next steps.

  2. DrugSig: A resource for computational drug repositioning utilizing gene expression signatures.

    Directory of Open Access Journals (Sweden)

    Hongyu Wu

    Full Text Available Computational drug repositioning has been proven to be an effective approach to developing new drug uses. However, currently existing strategies rely strongly on drug response gene signatures that are scattered across separate or individual experimental data sets, which results in inefficient outputs. A comprehensive database of drug response gene signatures would therefore be very helpful to these methods. We collected drug response microarray data and annotated the related drug and target information from public databases and the scientific literature. By selecting the top 500 up-regulated and down-regulated genes as drug signatures, we manually established the DrugSig database. Currently DrugSig contains more than 1300 drugs, 7000 microarrays and 800 targets. Moreover, we developed signature-based and target-based functions to aid drug repositioning. The constructed database can serve as a resource to accelerate computational drug repositioning. Database URL: http://biotechlab.fudan.edu.cn/database/drugsig/.
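
    To illustrate what signature-based repositioning can look like, the sketch below ranks drugs by how strongly their signatures reverse a query disease signature (drug down-genes overlapping disease up-genes and vice versa); the gene sets are invented and the scoring is a simple overlap count, not necessarily the function used by DrugSig.

        def reversal_score(disease_up, disease_down, drug_up, drug_down):
            return (len(set(disease_up) & set(drug_down))
                    + len(set(disease_down) & set(drug_up)))

        # Hypothetical (up, down) gene signatures per drug.
        drug_signatures = {
            "drugA": ({"TP53", "EGFR"}, {"MYC", "VEGFA"}),
            "drugB": ({"MYC"}, {"EGFR"}),
        }
        disease_up, disease_down = {"MYC", "VEGFA"}, {"TP53"}

        ranked = sorted(drug_signatures.items(),
                        key=lambda kv: reversal_score(disease_up, disease_down, *kv[1]),
                        reverse=True)
        for name, (up, down) in ranked:
            print(name, reversal_score(disease_up, disease_down, up, down))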

  3. Exploiting short-term memory in soft body dynamics as a computational resource.

    Science.gov (United States)

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems.
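
    The reservoir-computing flavour of this result can be sketched numerically: below, the physical silicone arm is replaced by a random nonlinear dynamical system (an assumption of ours), and only a linear ridge-regression readout is trained to recall a delayed input, i.e. a short-term-memory task.

        import numpy as np

        rng = np.random.default_rng(0)
        T, n = 2000, 50
        u = rng.uniform(-1, 1, T)                      # input stream driving the "body"
        W = rng.normal(0, 1, (n, n)) * 0.1             # stand-in for soft-body coupling
        w_in = rng.normal(0, 1, n)

        x = np.zeros(n)
        states = np.zeros((T, n))
        for t in range(T):                             # drive the nonlinear "body"
            x = np.tanh(W @ x + w_in * u[t])
            states[t] = x

        delay = 3                                      # memory task: recall u(t - delay)
        X, y = states[delay:], u[:-delay]
        ridge = 1e-6
        w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ y)   # linear readout only
        pred = X @ w_out
        print("memory-task correlation:", round(float(np.corrcoef(pred, y)[0, 1]), 3))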

  4. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2011-12-01

    Full Text Available The purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology—meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover the sharing of work-related information, coordination of team activities, the spread of organizational culture and a feeling of interdependence and affinity. Also, informal communication widens individuals' recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in an informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63% respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of the court administration are used to meeting their psycho-emotional needs outside the work place. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  5. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Agota Giedrė Raišienė

    2013-08-01

    Full Text Available The purpose of the article is to analyze, theoretically and practically, the features of informal computer-based communication in the context of an organization's technological resources. Methodology: meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work-related information, coordination of team activities, spread of organizational culture and a feeling of interdependence and affinity. Also, informal communication widens individuals' recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of the court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the organization's technological resources to be used effectively, staff

  6. Computer modelling of the UK wind energy resource. Phase 2. Application of the methodology

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Makari, M; Newton, K; Ravenscroft, F; Whittaker, J

    1993-12-31

    This report presents the results of the second phase of a programme to estimate the UK wind energy resource. The overall objective of the programme is to provide quantitative resource estimates using a mesoscale (resolution about 1km) numerical model for the prediction of wind flow over complex terrain, in conjunction with digitised terrain data and wind data from surface meteorological stations. A network of suitable meteorological stations has been established and long term wind data obtained. Digitised terrain data for the whole UK were obtained, and wind flow modelling using the NOABL computer program has been performed. Maps of extractable wind power have been derived for various assumptions about wind turbine characteristics. Validation of the methodology indicates that the results are internally consistent, and in good agreement with available comparison data. Existing isovent maps, based on standard meteorological data which take no account of terrain effects, indicate that 10m annual mean wind speeds vary between about 4.5 and 7 m/s over the UK with only a few coastal areas over 6 m/s. The present study indicates that 28% of the UK land area had speeds over 6 m/s, with many hill sites having 10m speeds over 10 m/s. It is concluded that these `first order` resource estimates represent a substantial improvement over the presently available `zero order` estimates. The results will be useful for broad resource studies and initial site screening. Detailed resource evaluation for local sites will require more detailed local modelling or ideally long term field measurements. (12 figures, 14 tables, 21 references). (Author)
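
    As a hedged illustration of how a mean wind speed read off such a map might be turned into a rough power figure, the sketch below assumes a Rayleigh wind-speed distribution and invented turbine parameters; it is not the NOABL methodology or the report's actual resource calculation.

```python
# Rough extractable-power estimate from an annual mean wind speed, as might be
# read off a 1 km resolution wind map. Assumes a Rayleigh speed distribution and
# illustrative turbine parameters; this is NOT the NOABL methodology itself.
import math

def mean_power_density(v_mean, rho=1.225):
    """Mean wind power density (W/m^2) for a Rayleigh-distributed wind speed."""
    # For a Rayleigh distribution, E[v^3] = (6/pi) * v_mean^3
    return 0.5 * rho * (6.0 / math.pi) * v_mean ** 3

def annual_energy(v_mean, rotor_diameter=45.0, cp=0.35, availability=0.95):
    """Very rough annual energy yield (MWh) of one hypothetical turbine."""
    area = math.pi * (rotor_diameter / 2) ** 2
    mean_power = mean_power_density(v_mean) * area * cp * availability   # W
    return mean_power * 8760 / 1e6                                       # MWh/year

for v in (4.5, 6.0, 7.0, 10.0):   # span of 10 m mean speeds quoted in the study
    print(f"{v:4.1f} m/s -> {mean_power_density(v):6.0f} W/m^2, "
          f"~{annual_energy(v):6.0f} MWh/yr")
```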

  7. Partial ureterectomy for a huge primary leiomyoma of the ureter

    International Nuclear Information System (INIS)

    Nouralizadeh, A.; Tabib, A.; Taheri, M.; Torbati, P.M.

    2010-01-01

    A case of a huge primary leiomyoma of the ureter in which only partial ureterectomy was performed is presented. The benign nature of the mass was primarily confirmed with frozen section at the time of surgery and then with immunohistochemistry (IHC). To the best of our knowledge, this case is a unique form of leiomyoma of the ureter due to its large size. There have been only ten cases of primary leiomyoma of the ureter reported since 1955 and all of them were very small in size. Our case is considered to be the eleventh. (author)

  8. How a huge HEP experiment is designed course

    CERN Multimedia

    CERN. Geneva HR-FAS

    2007-01-01

    More than twenty years after the idea of building the LHC machine was discussed in a workshop in Lausanne in 1984 for the first time, it is instructive to look back on the historical process which has led the community to where we are today with four huge detectors being commissioned and eagerly awaiting first beam collisions in 2008. The main design principles, detector features and performance characteristics of the ATLAS and CMS detectors will be briefly covered in these two lectures with, as an interlude, a wonderful DVD from ATLAS outreach depicting how particles interact and are detected in the various components of the experiments.

  9. A young woman with a huge paratubal cyst

    Directory of Open Access Journals (Sweden)

    Ceren Golbasi

    2016-09-01

    Full Text Available Paratubal cysts are asymptomatic embryological remnants. These cysts are usually diagnosed during adolescence and reproductive age. In general, they are small but can be complicated by rupture, torsion, or hemorrhage. Paratubal cysts are often discovered fortuitously on routine ultrasound examination. We report a 19-year-old female patient who presented with irregular menses and abdominal pain. Ultrasound examination revealed a huge cystic mass in the right adnexal area. The diagnosis was confirmed as a paratubal cyst during laparotomy and, hence, cystectomy and right salpingectomy were performed. [Cukurova Med J 2016; 41(3): 573-576]

  10. A Resource Service Model in the Industrial IoT System Based on Transparent Computing.

    Science.gov (United States)

    Li, Weimin; Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-03-26

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which actually constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for the industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including operating system (OS), programs and data on the server-side for the IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer and interface and management layer. We also present a detailed analysis on the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experiment prototype system.

  11. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  12. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  13. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  14. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...

  15. A Comparative Study of Load Balancing Algorithms in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    Cloud Computing is a new trend emerging in the IT environment with huge requirements of infrastructure and resources. Load balancing is an important aspect of the cloud computing environment. An efficient load balancing scheme ensures efficient resource utilization by provisioning resources to cloud users on demand in a pay-as-you-go manner. Load balancing may even support prioritizing users by applying appropriate scheduling criteria. This paper presents various load balancing schemes in differ...
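
    To make the idea of a load-balancing scheme concrete, here is a small, self-contained comparison of two classic policies (round-robin and least-loaded) on synthetic task sizes; the workload and VM count are hypothetical and the policies are generic textbook ones rather than any specific scheme surveyed in the paper.

```python
# Toy comparison of two classic load-balancing policies on simulated task loads.
# Task sizes and VM counts are hypothetical; real cloud schedulers also account
# for priorities, data locality and SLA constraints discussed in the survey.
import itertools
import random

def round_robin(tasks, n_vms):
    loads = [0.0] * n_vms
    for task, vm in zip(tasks, itertools.cycle(range(n_vms))):
        loads[vm] += task
    return loads

def least_loaded(tasks, n_vms):
    loads = [0.0] * n_vms
    for task in tasks:
        vm = min(range(n_vms), key=loads.__getitem__)   # pick the idlest VM
        loads[vm] += task
    return loads

random.seed(1)
tasks = [random.expovariate(1.0) for _ in range(1000)]   # heterogeneous task sizes
for name, policy in (("round-robin", round_robin), ("least-loaded", least_loaded)):
    loads = policy(tasks, n_vms=8)
    print(f"{name:12s} makespan={max(loads):7.2f} imbalance={max(loads)-min(loads):6.2f}")
```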

  16. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC

  17. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
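
    For reference, the sketch below shows the standard serial recursion for building an integral image and the four-lookup constant-time box sum it enables; it is the baseline the paper starts from, not the row-parallel hardware decomposition it proposes.

```python
# Reference (serial) integral image computation and constant-time box sums.
# This is the textbook recursion the paper starts from, not its row-parallel
# hardware decomposition.
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[0:y+1, 0:x+1], computed with the standard recursion."""
    h, w = img.shape
    ii = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += int(img[y, x])                  # running sum of current row
            ii[y, x] = row_sum + (ii[y - 1, x] if y > 0 else 0)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from 4 lookups (as used by SURF)."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 2) == img[1:4, 1:3].sum()    # O(1) regardless of box size
assert np.array_equal(ii, img.cumsum(0).cumsum(1))       # matches the direct definition
```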

  18. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  19. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

    The general potential of computer games for teaching and learning is becoming widely recognized. In particular, within the application contexts of primary and lower secondary education, the relevance and value of computer games seem more accepted, and the possibility and willingness to incorporate computer games as a resource on a par with other educational resources seem more frequent. For some reason, however, applying computer games in processes of teaching and learning at the high school level seems an almost non-existent event. This paper reports on a study of incorporating the learning game “Global Conflicts: Latin America” as a resource into the teaching and learning of a course involving the two subjects “English language learning” and “Social studies” in the final year of a Danish high school. The study adopts an explorative research design approach and investigates...

  20. Distributed and parallel approach for handle and perform huge datasets

    Science.gov (United States)

    Konopko, Joanna

    2015-12-01

    Big Data refers to the dynamic, large and disparate volumes of data created by many different sources (tools, machines, sensors, mobile devices) that are uncorrelated with each other. It requires new, innovative and scalable technology to collect, host and analytically process the vast amount of data. A proper architecture for a system that processes huge data sets is needed. In this paper, distributed and parallel system architectures are compared using the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). This paper also analyzes the problem of extracting valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed that could be used to solve the problem of storing and processing Big Data.
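
    To illustrate the MapReduce paradigm discussed above, the following toy word count runs the map, shuffle and reduce phases inside a single Python process; it only mimics the data flow that Hadoop distributes across a cluster and is not Hadoop code.

```python
# Minimal in-process illustration of the map -> shuffle -> reduce pattern that
# Hadoop distributes across a cluster; here everything runs in one Python
# process purely to show the data flow.
from collections import defaultdict
from multiprocessing import Pool

def map_phase(document):
    """Map: emit (key, value) pairs, here (word, 1)."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(item):
    """Reduce: aggregate all values for one key."""
    word, counts = item
    return word, sum(counts)

documents = [
    "big data requires scalable architecture",
    "mapreduce and parallel dbms both process big data",
]

if __name__ == "__main__":
    with Pool(2) as pool:
        mapped = pool.map(map_phase, documents)      # map tasks run in parallel
    shuffled = defaultdict(list)                     # shuffle: group values by key
    for pairs in mapped:
        for key, value in pairs:
            shuffled[key].append(value)
    counts = dict(map(reduce_phase, shuffled.items()))
    print(counts["big"], counts["data"])             # -> 2 2
```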

  1. Huge Intracanal lumbar Disc Herniation: a Review of Four Cases

    Directory of Open Access Journals (Sweden)

    Farzad Omidi-Kashani

    2016-01-01

    Full Text Available Lumbar disc herniation (LDH) is the most common cause of sciatica, and surgical intervention is necessary in only about 10% of affected patients. The side of the patient (the side with the most prominent clinical complaints) is usually consistent with the side of imaging (the side with the most prominent disc herniation on imaging scans). In this case series, we present our experience with four cases of huge intracanal LDH in which there was a mismatch between the patient's side and the imaging side. In such cases, when deciding whether to operate, the physician needs to rely more on clinical findings, but when deciding the side of discectomy, the imaging characteristics (the imaging side) may be the more important criterion.

  2. Airway management of a rare huge-size supraglottic mass

    International Nuclear Information System (INIS)

    Abou-Zeid, Haitham A.; Al-Ghamdi, Abdel Mohsin A.; Al-Qurain, Abdel-Aziz A.; Mokhazy, Khalid M.

    2006-01-01

    Laser excision of a huge-sized supraglottic mass nearly obstructing the airway passage is a real challenge to anesthesiologists. Upper airway obstruction due to a neoplasm in the supraglottic region is traditionally managed by preoperative tracheostomy; however, such a common procedure can potentially have an impact on long-term outcome. A 26-year-old patient presented with dysphagia caused by a left cystic vallecular synovial sarcoma. The airway was successfully secured via fiberoptic bronchoscopy, followed by excision of the supraglottic tumor with CO2 laser surgery. Tracheostomy was not required. The patient was discharged from the hospital on the 4th day after surgery. This case highlights the possibility of securing the airway passage without performing preoperative tracheostomy, resulting in a good outcome and a short hospital stay. (author)

  3. Huge endometrioma mimicking mucinous cystadenoma on MR : A case report

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Im Kyung; Kim, Bong Soo; Nam, Kung Sook; Kim, Heung Cheol; Yoo, Yun Sik; Lee, Mee Ran; Hwang, Woo Chul [Hallym University, Chunchon (Korea, Republic of)

    2001-12-01

    Endometriosis is a relatively common gynecologic disease affecting women during their reproductive years. For its diagnosis, magnetic resonance imaging has been shown to have greater specificity than other modalities. Although lesions may show variable signal intensity due to the numerous stages of bleeding, the characteristic finding of endometrioma that distinguishes it from other ovarian cystic masses is relatively high signal intensity on T1-weighted images and heterogeneous signal intensity with prominent shading on T2-weighted images. We report an atypical case involving a huge endometrioma. Because of varying signal intensity on T1- and T2-weighted images and scanty shading on T2-weighted images, the findings were misinterpreted and mucinous cystadenoma was diagnosed.

  4. Using multiple metaphors and multimodalities as a semiotic resource when teaching year 2 students computational strategies

    Science.gov (United States)

    Mildenhall, Paula; Sherriff, Barbara

    2017-06-01

    Recent research indicates that using multimodal learning experiences can be effective in teaching mathematics. Using a social semiotic lens within a participationist framework, this paper reports on a professional learning collaboration with a primary school teacher designed to explore the use of metaphors and modalities in mathematics instruction. This video case study was conducted in a year 2 classroom over two terms, with the focus on building children's understanding of computational strategies. The findings revealed that the teacher was able to successfully plan both multimodal and multiple metaphor learning experiences that acted as semiotic resources to support the children's understanding of abstract mathematics. The study also led to implications for teaching when using multiple metaphors and multimodalities.

  5. Adaptive resource allocation scheme using sliding window subchannel gain computation: context of OFDMA wireless mobiles systems

    International Nuclear Information System (INIS)

    Khelifa, F.; Samet, A.; Ben Hassen, W.; Afif, M.

    2011-01-01

    Multiuser diversity combined with Orthogonal Frequency Division Multiple Access (OFDMA) is a promising technique for achieving high downlink capacities in the new generation of cellular and wireless network systems. The total capacity of an OFDMA-based system is maximized when each subchannel is assigned to the mobile station with the best channel-to-noise ratio for that subchannel, with power uniformly distributed among all subchannels. A contiguous method for subchannel construction is adopted in the IEEE 802.16m standard in order to reduce OFDMA system complexity. In this context, a new subchannel gain computation method can contribute, jointly with optimal subchannel assignment, to maximizing total system capacity. In this paper, two new methods are proposed in order to achieve a better trade-off between fairness and efficient use of resources. Numerical results show that the proposed algorithms provide low complexity, higher total system capacity and better fairness among users compared to other recent methods.
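
    The sketch below illustrates one plausible reading of a sliding-window subchannel gain computation: per-subcarrier channel-to-noise ratios are averaged with a moving window to form contiguous subchannel gains, which are then assigned greedily to the best user. The window size, channel model and assignment rule are assumptions for illustration, not the algorithms proposed in the paper.

```python
# Sketch of contiguous-subchannel gain computation with a sliding window over
# per-subcarrier channel-to-noise ratios, followed by a simple best-user
# assignment. Window size, user count and channel values are hypothetical and
# this is not the exact algorithm proposed in the paper.
import numpy as np

rng = np.random.default_rng(42)
n_users, n_subcarriers, window = 4, 64, 8          # window = subcarriers per subchannel

# Rayleigh-fading channel-to-noise ratios per (user, subcarrier)
cnr = rng.exponential(scale=1.0, size=(n_users, n_subcarriers))

# Sliding-window (moving average) gain of each contiguous subchannel
kernel = np.ones(window) / window
subchannel_gain = np.array(
    [np.convolve(cnr[u], kernel, mode="valid")[::window] for u in range(n_users)]
)                                                   # shape: (n_users, n_subchannels)

# Greedy assignment: each subchannel goes to the user with the best windowed gain
assignment = subchannel_gain.argmax(axis=0)
# Per-subchannel capacity with uniform power (bits/s/Hz), Shannon formula
capacity = np.log2(1.0 + subchannel_gain[assignment, np.arange(assignment.size)])
print("subchannel -> user:", assignment)
print("total capacity: %.2f bit/s/Hz" % capacity.sum())
```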

  6. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    Science.gov (United States)

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.

  7. Radiotherapy infrastructure and human resources in Switzerland : Present status and projected computations for 2020.

    Science.gov (United States)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-09-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland.
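
    The following back-of-the-envelope sketch mimics the kind of arithmetic behind such projections: multiply cancer incidence by a radiotherapy utilization rate, then divide by per-machine and per-staff throughputs. The utilization rate and all throughput ratios are illustrative assumptions, not the exact ESTRO-QUARTS or IAEA figures used in the study.

```python
# Illustrative back-of-the-envelope version of the kind of arithmetic used for
# such projections. The utilization rate and staffing/throughput ratios below
# are assumptions for demonstration only, not official ESTRO-QUARTS/IAEA figures.
import math

def radiotherapy_requirements(cancer_incidence, rtu_rate=0.675,
                              patients_per_unit=450,
                              patients_per_ro=250,
                              patients_per_mp=500,
                              rtts_per_unit=4):
    """Return rough equipment and staff requirements for one year."""
    patients = cancer_incidence * rtu_rate            # patients needing radiotherapy
    units = math.ceil(patients / patients_per_unit)
    return {
        "rt_patients": round(patients),
        "teletherapy_units": units,
        "radiation_oncologists": math.ceil(patients / patients_per_ro),
        "medical_physicists": math.ceil(patients / patients_per_mp),
        "rtts": units * rtts_per_unit,
    }

for year, incidence in (("2015", 45903), ("2020", 50427)):
    print(year, radiotherapy_requirements(incidence))
```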

  8. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    International Nuclear Information System (INIS)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-01-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland. (orig.)

  9. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. In the context of their flagship projects, EGI-Engage and EUDAT2020, EGI and EUDAT started a collaboration in March 2015 to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end users with seamless access to an integrated infrastructure offering both EGI and EUDAT services, thus pairing data and high-throughput computing resources. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help assign the right priorities to each of them. In this way, the activity has been driven by the end users from the beginning. The identified user communities are

  10. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction.

    Science.gov (United States)

    Nezarat, Amin; Dastghaibifard, G H

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the highest profitability and, on the other hand, users expect to have the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is economic, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game-theoretic mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bids for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the lowest service level agreement violations and provides the most utility to the provider.
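
    As a toy illustration of repeated bidding converging to a Nash equilibrium, the sketch below uses a proportional-share (Kelly-style) allocation with best-response updates; the valuations and the simple utility form are hypothetical and much simpler than the paper's auction model and utility functions.

```python
# Toy version of repeated bidding converging to a Nash equilibrium. Users bid
# for a share of one divisible resource allocated proportionally to bids
# (a Kelly-style mechanism); each round every user best-responds to the others'
# current bids. Valuations and the utility form are hypothetical and simpler
# than the paper's model.
import math

valuations = [10.0, 6.0, 4.0]            # private value of the whole resource per user
bids = [1.0] * len(valuations)           # initial bids

def best_response(v_i, others_total):
    """Maximize u_i(b) = v_i * b / (b + others_total) - b over b >= 0."""
    b = math.sqrt(v_i * others_total) - others_total
    return max(b, 0.0)

for round_ in range(100):
    new_bids = []
    for i, v in enumerate(valuations):
        others = sum(bids) - bids[i]
        new_bids.append(best_response(v, others))
    if all(abs(a - b) < 1e-9 for a, b in zip(new_bids, bids)):
        break                            # no user wants to change its bid: Nash equilibrium
    bids = new_bids

shares = [b / sum(bids) for b in bids]
print("equilibrium bids:", [round(b, 3) for b in bids])
print("resource shares: ", [round(s, 3) for s in shares])
```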

  11. Computer modelling of the UK wind energy resource: UK wind speed data package and user manual

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Ravenscroft, F

    1993-12-31

    A software package has been developed for IBM-PC or true compatibles. It is designed to provide easy access to the results of a programme of work to estimate the UK wind energy resource. Mean wind speed maps and quantitative resource estimates were obtained using the NOABL mesoscale (1 km resolution) numerical model for the prediction of wind flow over complex terrain. NOABL was used in conjunction with digitised terrain data and wind data from surface meteorological stations for a ten year period (1975-1984) to provide digital UK maps of mean wind speed at 10m, 25m and 45m above ground level. Also included in the derivation of these maps was the use of the Engineering Science Data Unit (ESDU) method to model the effect on wind speed of the abrupt change in surface roughness that occurs at the coast. With the wind speed software package, the user is able to obtain a display of the modelled wind speed at 10m, 25m and 45m above ground level for any location in the UK. The required co-ordinates are simply supplied by the user, and the package displays the selected wind speed. This user manual summarises the methodology used in the generation of these UK maps and shows computer generated plots of the 25m wind speeds in 200 x 200 km regions covering the whole UK. The uncertainties inherent in the derivation of these maps are also described, and notes given on their practical usage. The present study indicated that 23% of the UK land area had speeds over 6 m/s, with many hill sites having 10m speeds over 10 m/s. It is concluded that these `first order` resource estimates represent a substantial improvement over the presently available `zero order` estimates. (18 figures, 3 tables, 6 references). (author)

  12. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is however preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.

  13. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.

  14. A comprehensive overview of computational resources to aid in precision genome editing with engineered nucleases.

    Science.gov (United States)

    Periwal, Vinita

    2017-07-01

    Genome editing with engineered nucleases (zinc finger nucleases, TAL effector nucleases and clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated systems) has recently been shown to have great promise in a variety of therapeutic and biotechnological applications. However, their exploitation in genetic analysis and clinical settings largely depends on their specificity for the intended genomic target. Large and complex genomes often contain highly homologous/repetitive sequences, which limits the specificity of genome editing tools and could result in off-target activity. Over the past few years, various computational approaches have been developed to assist the design process and predict/reduce the off-target activity of these nucleases. These tools could be efficiently used to guide the design of constructs for engineered nucleases and evaluate results after genome editing. This review provides a comprehensive overview of various databases, tools, web servers and resources for genome editing and compares their features and functionalities. Additionally, it also describes tools that have been developed to analyse post-genome editing results. The article also discusses important design parameters that could be considered while designing these nucleases. This review is intended to be a quick reference guide for experimentalists as well as computational biologists working in the field of genome editing with engineered nucleases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  16. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use, it allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging-in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  18. Black hole firewalls require huge energy of measurement

    Science.gov (United States)

    Hotta, Masahiro; Matsumoto, Jiro; Funo, Ken

    2014-06-01

    The unitary moving mirror model is one of the best quantum systems for checking the reasoning of the original firewall paradox of Almheiri et al. [J. High Energy Phys. 02 (2013) 062] in quantum black holes. Though the late-time part of the radiation emitted from the mirror is fully entangled with the early part, no firewall with a deadly, huge average energy flux exists in this model. This is because the high-energy entanglement structure of the discretized systems in almost maximally entangled states is modified so as to yield the correct description of low-energy effective field theory. Furthermore, the strong subadditivity paradox of firewalls is resolved using nonlocality of general one-particle states and zero-point fluctuation entanglement. Due to the Reeh-Schlieder theorem in quantum field theory, another firewall paradox is inevitably raised with quantum remote measurements in the model. We resolve this paradox from the viewpoint of the energy cost of measurements. No firewall appears, as long as the energy for the measurement is much smaller than the ultraviolet cutoff scale.

  19. Progressive skin necrosis of a huge occipital encephalocele

    Science.gov (United States)

    Andarabi, Yasir; Nejat, Farideh; El-Khashab, Mostafa

    2008-01-01

    Objects: Progressive skin necrosis of giant occipital encephalocoele is an extremely rare complication found in neonates. Infection and ulceration of the necrosed skin may lead to meningitis or sepsis. We present here a neonate with giant occipital encephalocoele showing progressive necrosis during the first day of his life. Methods: A newborn baby was found to have a huge mass in the occipital region, which was covered by normal pink-purplish skin. During the last hours of the first day of his life, the sac started becoming ulcerated accompanied with a rapid color change in the skin, gradually turning darker and then black. The neonate was taken up for urgent excision and repair of the encephalocele. Two years after the operation, he appears to be well-developed without any neurological problems. Conclusion: Necrosis may have resulted from arterial or venous compromise caused by torsion of the pedicle during delivery or after birth. The high pressure inside the sac associated with the thin skin of the encephalocoele may be another predisposing factor. In view of the risk of ulceration and subsequent infection, urgent surgery of the necrotizing encephalocele is suggested. PMID:19753210

  20. Progressive skin necrosis of a huge occipital encephalocele

    Directory of Open Access Journals (Sweden)

    Andarabi Yasir

    2008-01-01

    Full Text Available Objects: Progressive skin necrosis of giant occipital encephalocoele is an extremely rare complication found in neonates. Infection and ulceration of the necrosed skin may lead to meningitis or sepsis. We present here a neonate with giant occipital encephalocoele showing progressive necrosis during the first day of his life. Methods: A newborn baby was found to have a huge mass in the occipital region, which was covered by normal pink-purplish skin. During the last hours of the first day of his life, the sac started becoming ulcerated accompanied with a rapid color change in the skin, gradually turning darker and then black. The neonate was taken up for urgent excision and repair of the encephalocele. Two years after the operation, he appears to be well-developed without any neurological problems. Conclusion: Necrosis may have resulted from arterial or venous compromise caused by torsion of the pedicle during delivery or after birth. The high pressure inside the sac associated with the thin skin of the encephalocoele may be another predisposing factor. In view of the risk of ulceration and subsequent infection, urgent surgery of the necrotizing encephalocele is suggested.

  1. A huge bladder calculus causing acute renal failure.

    Science.gov (United States)

    Komeya, Mitsuru; Sahoda, Tamami; Sugiura, Shinpei; Sawada, Takuto; Kitami, Kazuo

    2013-02-01

    An 81-year-old male was referred to our emergency outpatient unit due to acute renal failure. The level of serum creatinine was 276 μmol/l. A CT scan showed bilateral hydronephroureter, a large bladder stone (7 cm × 6 cm × 6 cm) and bladder wall thickening. He was diagnosed with postrenal failure due to bilateral hydronephroureter. The large bladder stone was thought to be the cause of the bilateral hydronephroureter and renal failure. To relieve the renal failure, we performed open cystolithotomy and urethral catheterization. Three days after the surgery, the level of serum creatinine had decreased to 224 μmol/l. He was discharged from our hospital with an uneventful course. A bladder calculus is a rare cause of renal failure. We summarize the characteristics of bladder calculi causing renal failure. We should keep in mind that long-term pyuria, urinary symptoms and repeated urinary tract infections can cause a huge bladder calculus and renal failure.

  2. Modeling of Groundwater Resources Heavy Metals Concentration Using Soft Computing Methods: Application of Different Types of Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Meysam Alizamir

    2017-09-01

    Full Text Available Nowadays, groundwater resources play a vital role as a source of drinking water in arid and semiarid regions, and forecasting pollutant content in these resources is very important. Therefore, this study aimed to compare two soft computing methods for modeling Cd, Pb and Zn concentrations in the groundwater resources of the Asadabad Plain, Western Iran. The relative accuracy of two soft computing models, namely the multi-layer perceptron (MLP) and the radial basis function (RBF) network, for forecasting heavy metal concentrations has been investigated. In addition, Levenberg-Marquardt, gradient descent and conjugate gradient training algorithms were utilized for the MLP models. The ANN models for this study were developed using the MATLAB R2014 software. The MLP performs better than the other models for heavy metal concentration estimation. The simulation results revealed that the MLP model was able to model heavy metal concentrations in groundwater resources favorably, and it can be effectively utilized in environmental applications and in water quality estimation. In addition, out of the three training algorithms, Levenberg-Marquardt was better than the others. This study proposed soft computing modeling techniques for the prediction and estimation of heavy metal concentrations in the groundwater resources of the Asadabad Plain. Based on the data collected from the plain, MLP and RBF models were developed for each heavy metal. The MLP can be utilized effectively for predicting heavy metal concentrations in the groundwater resources of the Asadabad Plain.
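
    As a minimal stand-in for the modeling workflow described above, the sketch below fits an MLP regressor on synthetic data with scikit-learn; the input features, the synthetic target and the use of L-BFGS (scikit-learn has no Levenberg-Marquardt solver) are all assumptions, not the study's MATLAB setup or Asadabad Plain data.

```python
# Minimal stand-in for the study's MLP modelling, using scikit-learn instead of
# MATLAB and synthetic data in place of the Asadabad Plain measurements. The
# input features (pH, EC, depth to water table) are hypothetical illustrations.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([
    rng.uniform(6.5, 8.5, n),      # pH
    rng.uniform(200, 2000, n),     # electrical conductivity (uS/cm)
    rng.uniform(5, 80, n),         # depth to water table (m)
])
# Synthetic "Pb concentration" with noise, only to exercise the pipeline
y = 0.002 * X[:, 1] - 0.3 * X[:, 0] + 0.01 * X[:, 2] + rng.normal(0, 0.2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",   # quasi-Newton solver,
                 max_iter=5000, random_state=0),             # standing in for L-M
)
model.fit(X_train, y_train)
print("R^2 on held-out data: %.3f" % model.score(X_test, y_test))
```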

  3. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

    The goal of this project was building an application to calculate the computing resources needed by the LHCb experiment for data processing and analysis, and to predict their evolution in future years. The source code was developed in the Python programming language and the application built and developed in CERN GitLab. This application will facilitate the calculation of resources required by LHCb in both qualitative and quantitative aspects. The granularity of computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.
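
    A hypothetical sketch of what a weekly-granularity resource estimate could look like is given below; every rate, event size and CPU cost is an invented placeholder, and the structure is only meant to illustrate the kind of calculation such an application performs, not its actual code or the LHCb computing model parameters.

```python
# Hypothetical sketch of a weekly-granularity resource estimate of the kind the
# application computes. All rates, event sizes and CPU costs below are invented
# placeholders, not LHCb parameters.
from dataclasses import dataclass

@dataclass
class WeekPlan:
    week: int
    live_seconds: float         # seconds of data taking in the week
    trigger_rate_hz: float      # events written per second
    event_size_kb: float        # raw event size
    cpu_per_event_hs06s: float  # reconstruction cost per event (HS06 x seconds)

def weekly_requirements(plan, tape_copies=2, disk_fraction=0.1):
    events = plan.live_seconds * plan.trigger_rate_hz
    raw_tb = events * plan.event_size_kb / 1e9             # kB -> TB
    return {
        "week": plan.week,
        "cpu_hs06_years": events * plan.cpu_per_event_hs06s / (3600 * 24 * 365),
        "tape_tb": raw_tb * tape_copies,                    # archival copies
        "disk_tb": raw_tb * disk_fraction,                  # fraction kept on disk
    }

plans = [WeekPlan(week=w, live_seconds=3.5e5, trigger_rate_hz=10e3,
                  event_size_kb=60, cpu_per_event_hs06s=50) for w in range(1, 5)]
for p in plans:
    print(weekly_requirements(p))
```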

  4. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    Science.gov (United States)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the Internet of Things (IoT) and 5G, the cloud radio access network is a paradigm that converges all base stations' computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRHs). A precondition for centralized processing in the BBU pool is an interconnection fronthaul network with high capacity and low delay. However, the interaction between RRHs and BBUs and the resource scheduling among BBUs in the cloud have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. In order to overcome this complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  5. MRI Verification of a Case of Huge Infantile Rhabdomyoma.

    Science.gov (United States)

    Ramadani, Naser; Kreshnike, Kreshnike Dedushi; Muçaj, Sefedin; Kabashi, Serbeze; Hoxhaj, Astrit; Jerliu, Naim; Bejiçi, Ramush

    2016-04-01

    Cardiac rhabdomyoma is a type of benign myocardial tumor and the most common fetal cardiac tumor. Cardiac rhabdomyomas are usually detected before birth or during the first year of life, and they account for over 60% of all primary cardiac tumors. A 6-month-old child with coughing and obstructed breathing was hospitalized in the Pediatric Clinic of UCCK, Pristina. The breathing difficulty and an abnormal heart sound were noticed by the pediatrician. Echocardiography showed, at the posterior and apico-lateral part of the left ventricle, a tumoral mass of 56 × 54 mm that moved with the contractions of the left ventricle; the mass also involved the left ventricular wall and was not vascularized. The right ventricle was deformed by the shift of the interventricular septum to the right, but contractility was preserved. The aorta, the left arch and the pulmonary artery were normal with laminar flow, and the pericardium was free. Chest radiography showed cardiomegaly and a prominent bronchovascular pattern. The work-up was completed with MRI, which showed cardiomegaly due to a large tumoral mass lesion (60 × 34 mm) involving the lateral wall of the left ventricle. The lesion was isointense to muscle on T1W images and markedly hyperintense on T2W images, with a few septal or band-like hypointensities within it. On the postcontrast study it showed avid enhancement. The left ventricular volume was decreased, and mild pericardial effusion was also noted. Surgical intervention was performed and histopathology confirmed a huge infantile rhabdomyoma. In most cases no treatment is required and these lesions regress spontaneously. Patients with left ventricular outflow tract obstruction or refractory arrhythmias respond well to surgical excision. Rhabdomyomas are frequently diagnosed by means of fetal echocardiography during the prenatal period.

  6. MRI Verification of a Case of Huge Infantile Rhabdomyoma

    Science.gov (United States)

    Ramadani, Naser; Kreshnike, Kreshnike Dedushi; Muçaj, Sefedin; Kabashi, Serbeze; Hoxhaj, Astrit; Jerliu, Naim; Bejiçi, Ramush

    2016-01-01

    Introduction: Cardiac rhabdomyoma is a type of benign myocardial tumor and the most common fetal cardiac tumor. Cardiac rhabdomyomas are usually detected before birth or during the first year of life, and they account for over 60% of all primary cardiac tumors. Case report: A 6-month-old child with coughing and obstructed breathing was hospitalized in the Pediatric Clinic of UCCK, Pristina. The breathing difficulty and an abnormal heart sound were noticed by the pediatrician. Echocardiography showed, at the posterior and apico-lateral part of the left ventricle, a tumoral mass of 56 × 54 mm that moved with the contractions of the left ventricle; the mass also involved the left ventricular wall and was not vascularized. The right ventricle was deformed by the shift of the interventricular septum to the right, but contractility was preserved. The aorta, the left arch and the pulmonary artery were normal with laminar flow, and the pericardium was free. Chest radiography showed cardiomegaly and a prominent bronchovascular pattern. The work-up was completed with MRI, which showed cardiomegaly due to a large tumoral mass lesion (60 × 34 mm) involving the lateral wall of the left ventricle. The lesion was isointense to muscle on T1W images and markedly hyperintense on T2W images, with a few septal or band-like hypointensities within it. On the postcontrast study it showed avid enhancement. The left ventricular volume was decreased, and mild pericardial effusion was also noted. Surgical intervention was performed and histopathology confirmed a huge infantile rhabdomyoma. Conclusion: In most cases no treatment is required and these lesions regress spontaneously. Patients with left ventricular outflow tract obstruction or refractory arrhythmias respond well to surgical excision. Rhabdomyomas are frequently diagnosed by means of fetal echocardiography during the prenatal period. PMID:27147810

  7. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    Directory of Open Access Journals (Sweden)

    Guohua Fang

    2016-09-01

    Full Text Available To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the national economic production departments. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing the values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and on each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP); however, waste water may be effectively controlled. This study also demonstrates that, along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from heavy pollution to light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.

  8. Disposal of waste computer hard disk drive: data destruction and resources recycling.

    Science.gov (United States)

    Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming

    2013-06-01

    An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information that is closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on the waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter parallel to the magnetic field direction; and (2) increasing the magnetic field intensity B and the action time t significantly improves the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.

  9. Huge Left Ventricular Thrombus and Apical Ballooning associated with Recurrent Massive Strokes in a Septic Shock Patient

    Directory of Open Access Journals (Sweden)

    Hyun-Jung Lee

    2016-02-01

    Full Text Available The most feared complication of left ventricular thrombus (LVT) is the occurrence of systemic thromboembolic events, especially in the brain. Herein, we report a patient with severe sepsis who suffered recurrent devastating embolic strokes. Transthoracic echocardiography revealed apical ballooning of the left ventricle with a huge LVT, which had not been observed in chest computed tomography before the stroke. This case emphasizes the importance of serial cardiac evaluation in patients with stroke and severe medical illness.

  10. Increasing efficiency of job execution with resource co-allocation in distributed computer systems

    OpenAIRE

    Cankar, Matija

    2014-01-01

    The field of distributed computer systems, while not new in computer science, is still the subject of a lot of interest in both industry and academia. More powerful computers, faster and more ubiquitous networks, and complex distributed applications are accelerating the growth of distributed computing. Large numbers of computers interconnected in a single network provide additional computing power to users whenever required. Such systems are, however, expensive and complex to manage, which ca...

  11. The Development of an Individualized Instructional Program in Beginning College Mathematics Utilizing Computer Based Resource Units. Final Report.

    Science.gov (United States)

    Rockhill, Theron D.

    Reported is an attempt to develop and evaluate an individualized instructional program in pre-calculus college mathematics. Four computer based resource units were developed in the areas of set theory, relations and function, algebra, trigonometry, and analytic geometry. Objectives were determined by experienced calculus teachers, and…

  12. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    Science.gov (United States)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States; it is an Internet-centred approach that provides standard, open network sharing services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of actual teaching needs. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education resources in current higher education. Based on a cloud computing environment, this paper analyzes the existing problems in the sharing of digital educational resources in the independent colleges of Jiangxi Province. Drawing on the sharing characteristics of cloud computing (mass storage, efficient operation and low cost), the author explores and studies the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model is put into practical application.

  13. Huge increases in bacterivores on freshly killed barley roots

    DEFF Research Database (Denmark)

    Christensen, S.; Griffiths, B.; Ekelund, Flemming

    1992-01-01

    Adding fresh roots to intact soil cores resulted in marked increases in microbial and microfaunal activity at the resource islands. Microbial activity increased in two phases following root addition. Respiratory activity and concentration of respiratory enzyme (dehydrogenase) in soil adhering to ...

  14. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng

    2018-02-06

    Experimental determination of membrane protein (MP) structures is challenging as they are often too large for nuclear magnetic resonance (NMR) experiments and difficult to crystallize. Currently there are only about 510 non-redundant MPs with solved structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology and secondary structure, two-dimensional (2D) prediction of the contact/distance map, together with three-dimensional (3D) modeling of the MP structure in the lipid bilayer, for each MP target from a given model organism. The precision of the computationally constructed MP structures is leveraged by state-of-the-art deep learning methods as well as cutting-edge modeling strategies. In particular, (i) we annotate 1D property via DeepCNF (Deep Convolutional Neural Fields) that not only models complex sequence-structure relationship but also interdependency between adjacent property labels; (ii) we predict 2D contact/distance map through Deep Transfer Learning which learns the patterns as well as the complex relationship between contacts/distances and protein features from non-membrane proteins; and (iii) we model 3D structure by feeding its predicted contacts and secondary structure to the Crystallography & NMR System (CNS) suite combined with a membrane burial potential that is residue-specific and depth-dependent. PredMP currently contains more than 2,200 multi-pass transmembrane proteins (length<700 residues) from Human. These transmembrane proteins are classified according to IUPHAR/BPS Guide, which provides a hierarchical organization of receptors, channels, transporters, enzymes and other drug targets according to their molecular relationships and physiological functions. Among these MPs, we estimated that our approach could predict correct folds for 1

  15. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); Zwahlen, Daniel [Kantonsspital Graubuenden, Department of Radiotherapy, Chur (Switzerland); Bodis, Stephan [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); University Hospital Zurich, Department of Radiation Oncology, Zurich (Switzerland)

    2016-09-15

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey, and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRT units, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRT units, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculating staff requirements due to anticipated changes in future radiotherapy practice is proposed; this model could be tailored to any individual radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist stakeholders and health planners in designing an appropriate strategy for meeting Switzerland's future radiotherapy needs. (orig.)
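
    As a rough illustration of how such projections are computed, the sketch below divides the number of patients needing radiotherapy by workload ratios; the default ratios are assumed round numbers for illustration, not the actual ESTRO-QUARTS or IAEA figures, so the output will not reproduce the study's numbers exactly.

    # Back-of-the-envelope staffing estimate; all ratios are illustrative assumptions.
    def radiotherapy_needs(cancer_incidence, rtu_rate,
                           patients_per_trt=450, patients_per_ro=250,
                           patients_per_mp=500, patients_per_rtt=150):
        courses = cancer_incidence * rtu_rate            # patients needing radiotherapy
        ceil = lambda x, d: int(-(-x // d))              # ceiling division
        return {"courses": round(courses),
                "TRT_units": ceil(courses, patients_per_trt),
                "radiation_oncologists": ceil(courses, patients_per_ro),
                "medical_physicists": ceil(courses, patients_per_mp),
                "RTTs": ceil(courses, patients_per_rtt)}

    # 2020 figures quoted in the abstract: 50,427 incident cancers, 34,041 needing RT.
    print(radiotherapy_needs(50427, rtu_rate=34041 / 50427))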

  16. Computation of groundwater resources and recharge in Chithar River Basin, South India.

    Science.gov (United States)

    Subramani, T; Babu, Savithri; Elango, L

    2013-01-01

    Groundwater recharge and available groundwater resources in the Chithar River basin, Tamil Nadu, India, spread over an area of 1,722 km(2), have been estimated by considering various hydrological, geological, and hydrogeological parameters, such as rainfall infiltration, drainage, geomorphic units, land use, rock types, depth of weathered and fractured zones, nature of soil, water level fluctuation, saturated thickness of aquifer, and groundwater abstraction. The digital ground elevation models indicate that the regional slope of the basin is towards the east. The Proterozoic (Post-Archaean) basement of the study area consists of quartzite, calc-granulite, crystalline limestone, charnockite, and biotite gneiss with or without garnet. Three major soil types were identified, namely black cotton, deep red, and red sandy soils. The rainfall intensity gradually decreases from west to east. Groundwater occurs under water table conditions in the weathered zone and fluctuates between 0 and 25 m. The water table reaches its maximum during January, after the northeast monsoon, and its minimum during October. Groundwater abstraction for domestic/stock and irrigational needs in the Chithar River basin has been estimated as 148.84 MCM (million m(3)). Groundwater recharge due to monsoon rainfall infiltration has been estimated as 170.05 MCM based on the water level rise during the monsoon period; it is also estimated as 173.9 MCM using the rainfall infiltration factor. An amount of 53.8 MCM of water is contributed to groundwater from surface water bodies. Recharge of groundwater due to return flow from irrigation has been computed as 147.6 MCM. The static groundwater reserve in the Chithar River basin is estimated as 466.66 MCM and the dynamic reserve is about 187.7 MCM. In the present scenario, the aquifer is under safe conditions for extraction of groundwater for domestic and irrigation purposes. If the existing water bodies are maintained properly, the extraction rate can be increased by about 10% to 15% in future.
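
    The monsoon recharge figure quoted above is the kind of number the water-table fluctuation method produces (recharge = specific yield × water-level rise × area). The sketch below applies that formula with an assumed specific yield and average rise chosen only for illustration; they are not values reported by the study.

    # Water-table fluctuation method, illustrative inputs only.
    def recharge_mcm(area_km2, water_level_rise_m, specific_yield):
        """Recharge in million cubic metres (MCM)."""
        area_m2 = area_km2 * 1e6
        return specific_yield * water_level_rise_m * area_m2 / 1e6   # m^3 -> MCM

    # e.g. the 1,722 km2 basin with an assumed 3.3 m average monsoon rise and Sy = 0.03
    print(round(recharge_mcm(1722, 3.3, 0.03), 1), "MCM")   # ~170 MCM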

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  19. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    International Nuclear Information System (INIS)

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered

  20. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Bigeleisen, Jacob; Berne, Bruce J.; Coton, F. Albert; Scheraga, Harold A.; Simmons, Howard E.; Snyder, Lawrence C.; Wiberg, Kenneth B.; Wipke, W. Todd

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered.

  1. National Uranium Resource Evaluation Program. Hydrogeochemical and Stream Sediment Reconnaissance Basic Data Reports Computer Program Requests Manual

    International Nuclear Information System (INIS)

    1980-01-01

    This manual is intended to aid those who are unfamiliar with ordering computer output for verification and preparation of Uranium Resource Evaluation (URE) Project reconnaissance basic data reports. The manual is also intended to help standardize the procedures for preparing the reports. Each section describes a program or group of related programs. The sections are divided into three parts: Purpose, Request Forms, and Requested Information

  2. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products; rather, it comprises a set of capabilities available virtually within any kind of software, used to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active field of grid computing applications is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  3. A REVIEW ON SECURITY ISSUES AND CHALLENGES IN CLOUD COMPUTING MODEL OF RESOURCE MANAGEMENT

    OpenAIRE

    T. Vaikunth Pai; Dr. P. S. Aithal

    2017-01-01

    Cloud computing services refer to a set of IT-enabled services delivered to a customer over the Internet on a leased basis, with the capability to scale service requirements up or down according to need. Usually, cloud computing services are delivered by third-party vendors who own the infrastructure. It has several advantages, including scalability, elasticity, flexibility, efficiency and the outsourcing of non-core activities of an organization. Cloud computing offers an innovative busines...

  4. Huge-scale molecular dynamics simulation of multibubble nuclei

    KAUST Repository

    Watanabe, Hiroshi

    2013-12-01

    We have developed molecular dynamics codes for a short-range interaction potential that adopt both the flat-MPI and MPI/OpenMP hybrid parallelizations on the basis of a full domain decomposition strategy. Benchmark simulations involving up to 38.4 billion Lennard-Jones particles were performed on Fujitsu PRIMEHPC FX10, consisting of 4800 SPARC64 IXfx 1.848 GHz processors, at the Information Technology Center of the University of Tokyo, and a performance of 193 teraflops was achieved, which corresponds to a 17.0% execution efficiency. Cavitation processes were also simulated on PRIMEHPC FX10 and SGI Altix ICE 8400EX at the Institute of Solid State Physics of the University of Tokyo, which involved 1.45 billion and 22.9 million particles, respectively. Ostwald-like ripening was observed after multibubble nucleation. Our results demonstrate that direct simulations of multiscale phenomena involving phase transitions from the atomic scale are possible and that the molecular dynamics method is a promising method that can be applied to petascale computers. © 2013 Elsevier B.V. All rights reserved.
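
    The quoted 17.0% efficiency can be sanity-checked against the machine's theoretical peak; the per-processor figures below (16 cores per SPARC64 IXfx and 8 double-precision flops per core per cycle) are assumptions made for illustration.

    # Rough peak-performance check for the FX10 benchmark quoted above.
    processors, cores_per_proc, ghz, flops_per_cycle = 4800, 16, 1.848, 8
    peak_tflops = processors * cores_per_proc * ghz * flops_per_cycle / 1000   # ~1135 TF
    print(f"execution efficiency = {193 / peak_tflops:.1%}")                   # ~17.0%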

  5. Using Free Computational Resources to Illustrate the Drug Design Process in an Undergraduate Medicinal Chemistry Course

    Science.gov (United States)

    Rodrigues, Ricardo P.; Andrade, Saulo F.; Mantoani, Susimaire P.; Eifler-Lima, Vera L.; Silva, Vinicius B.; Kawano, Daniel F.

    2015-01-01

    Advances in, and dissemination of, computer technologies in the field of drug research now enable the use of molecular modeling tools to teach important concepts of drug design to chemistry and pharmacy students. A series of computer laboratories is described to introduce undergraduate students to commonly adopted "in silico" drug design…

  6. University Students and Ethics of Computer Technology Usage: Human Resource Development

    Science.gov (United States)

    Iyadat, Waleed; Iyadat, Yousef; Ashour, Rateb; Khasawneh, Samer

    2012-01-01

    The primary purpose of this study was to determine the level of students' awareness about computer technology ethics at the Hashemite University in Jordan. A total of 180 university students participated in the study by completing the questionnaire designed by the researchers, named the Computer Technology Ethics Questionnaire (CTEQ). Results…

  7. Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

    2012-07-14

    The UCoMS research cluster has spearheaded three research areas since August 2004, including wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts of pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by the research investigators, working cooperatively in their respective areas of expertise on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  9. Virtual partitioning for robust resource sharing: computational techniques for heterogeneous traffic

    NARCIS (Netherlands)

    Borst, S.C.; Mitra, D.

    1998-01-01

    We consider virtual partitioning (VP), which is a scheme for sharing a resource among several traffic classes in an efficient, fair, and robust manner. In the preliminary design stage, each traffic class is allocated a nominal capacity, which is based on expected offered traffic and required quality

  10. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.
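
    For readers unfamiliar with how such tonnages are obtained, the sketch below applies the usual area × thickness × tons-per-acre-foot formula; the 1,770 short tons per acre-foot factor is a commonly cited value for subbituminous coal, and the area and average thickness are assumed round numbers chosen only to land near the reported order of magnitude, not figures taken from the report.

    # Illustrative coal-tonnage calculation; inputs are assumptions, not report data.
    def coal_short_tons(area_acres, avg_thickness_ft, tons_per_acre_ft=1770):
        return area_acres * avg_thickness_ft * tons_per_acre_ft

    print(f"{coal_short_tons(147_000, 8.9) / 1e6:.0f} million short tons")   # ~2,316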

  11. Recommendations for protecting National Library of Medicine Computing and Networking Resources

    Energy Technology Data Exchange (ETDEWEB)

    Feingold, R.

    1994-11-01

    Protecting Information Technology (IT) involves a number of interrelated factors. These include mission, available resources, technologies, existing policies and procedures, internal culture, contemporary threats, and strategic enterprise direction. In the face of this formidable list, a structured approach provides cost-effective actions that allow the organization to manage its risks. We face fundamental challenges that will persist for at least the next several years. It is difficult if not impossible to precisely quantify risk. IT threats and vulnerabilities change rapidly and continually. Limited organizational resources combined with mission constraints, such as availability and connectivity requirements, will ensure that most systems will not be absolutely secure (if such security were even possible). In short, there is no technical (or administrative) "silver bullet." Protection means employing a stratified series of recommendations, matching protection levels against information sensitivities. Adaptive and flexible risk management is the key to effective protection of IT resources. The cost of the protection must be kept less than the expected loss, and one must take into account that an adversary will not expend more to attack a resource than the value of its compromise to that adversary. Notwithstanding the difficulty if not impossibility of precisely quantifying risk, the aforementioned allows us to avoid the trap of choosing a course of action simply because "it's safer" or ignoring an area because no one has explored its potential risk. Recommendations for protecting IT resources begin with contemporary threats and vulnerabilities, and then proceed from general to specific preventive measures. From a risk management perspective, it is imperative to understand that today the vast majority of threats are against UNIX hosts connected to the Internet.

  12. Tracking the Flow of Resources in Electronic Waste - The Case of End-of-Life Computer Hard Disk Drives.

    Science.gov (United States)

    Habib, Komal; Parajuly, Keshav; Wenzel, Henrik

    2015-10-20

    Recovery of resources, in particular metals, from waste flows is widely seen as a prioritized option to reduce their potential supply constraints in the future. The current waste electrical and electronic equipment (WEEE) treatment system is more focused on bulk metals, where the recycling rate of specialty metals, such as rare earths, is negligible compared to their increasing use in modern products, such as electronics. This study investigates the challenges in recovering these resources in the existing WEEE treatment system, illustrated by following the material flows of resources in a conventional WEEE treatment plant in Denmark. Computer hard disk drives (HDDs) containing neodymium-iron-boron (NdFeB) magnets were selected as the case product for this experiment. The resulting output fractions were tracked until their final treatment in order to estimate the recovery potential of rare earth elements (REEs) and other resources contained in HDDs. The results show that out of the 244 kg of HDDs treated, 212 kg, consisting mainly of aluminum and steel, can finally be recovered from the metallurgical process. The results further demonstrate the complete loss of REEs in the existing shredding-based WEEE treatment processes. Dismantling and separate processing of NdFeB magnets from their end-use products can be a preferable option over shredding; however, it remains a technological and logistic challenge for the existing system.

  13. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to a TV audience with various video-format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
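
    A toy sketch of the benchmark-lookup style of allocation described above; the resource figures in the table are invented placeholders, not the paper's measured benchmarks, and the real engine also accounts for co-location and real-time deadlines.

    # Hypothetical benchmark table: (input_format, output_format) -> (cpu_cores, ram_gb)
    BENCHMARKS = {
        ("h264_1080p30", "h264_720p30"): (2.0, 1.5),
        ("h264_1080p30", "vp8_480p30"):  (3.0, 2.0),
    }

    def allocate(job, available_cores, available_ram_gb):
        cores, ram = BENCHMARKS[job]
        if cores <= available_cores and ram <= available_ram_gb:
            return {"cores": cores, "ram_gb": ram}
        return None   # defer the job or scale out another transcoding VM

    print(allocate(("h264_1080p30", "h264_720p30"), available_cores=8, available_ram_gb=16))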

  14. Mobile clusters of single board computers: an option for providing resources to student projects and researchers.

    Science.gov (United States)

    Baun, Christian

    2016-01-01

    Clusters usually consist of servers, workstations or personal computers as nodes. But especially for academic purposes like student projects or scientific projects, the cost of purchase and operation can be a challenge. Single board computers cannot compete with the performance or energy efficiency of higher-value systems, but they are an option for building inexpensive cluster systems. Because of their compact design and modest energy consumption, it is possible to build clusters of single board computers in a way that they are mobile and can be easily transported by the users. This paper describes the construction of such a cluster, useful applications and the performance of the single nodes. Furthermore, the cluster's performance and energy efficiency are analyzed by executing the High Performance Linpack benchmark with different numbers of nodes and different proportions of the systems' total main memory utilized.
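
    Energy efficiency in such studies is usually reported as the HPL result divided by the measured power draw; the numbers below are made-up placeholders, not measurements from the paper.

    # GFLOPS per watt for a hypothetical single-board-computer cluster run.
    def gflops_per_watt(rmax_gflops, power_watts):
        return rmax_gflops / power_watts

    print(f"{gflops_per_watt(3.2, 30.0):.3f} GFLOPS/W")   # e.g. 8 nodes, 3.2 GFLOPS at 30 W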

  15. Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key

    Science.gov (United States)

    Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.

    2017-01-01

    Like other asymmetric encryption schemes, RSA can be cracked using a series of mathematical calculations: the private key used to decrypt the message can be computed from the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method to perform distributed computing to calculate RSA's private key. The proposed method uses multiple volunteered mobile devices to contribute during the calculation process. Our objective is to demonstrate how the use of volunteered computing on mobile devices may be a feasible option to reduce the time required to break a weak RSA encryption, and to observe the behavior and running time of the application on mobile devices.
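
    The paper's own protocol is not reproduced here; the sketch below only illustrates the general idea of splitting a brute-force factor search across workers (processes standing in for volunteer devices). Real attacks on weak RSA keys would use far better algorithms (Pollard rho, the quadratic sieve, etc.), and the modulus is a toy value.

    # Illustrative distributed trial division; not the authors' method.
    from concurrent.futures import ProcessPoolExecutor
    from math import isqrt

    def search_range(args):
        # Trial division over [start, stop), odd candidates only.
        n, start, stop = args
        for d in range(start | 1, stop, 2):
            if n % d == 0:
                return d
        return None

    def factor(n, workers=4):
        # Split the search space [3, sqrt(n)] into one chunk per worker.
        if n % 2 == 0:
            return 2
        limit = isqrt(n) + 1
        chunk = max(1, (limit - 3) // workers + 1)
        tasks = [(n, 3 + i * chunk, min(limit, 3 + (i + 1) * chunk)) for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for result in pool.map(search_range, tasks):
                if result:
                    return result
        return None   # no factor below sqrt(n): n is prime

    if __name__ == "__main__":
        n = 10007 * 10009          # toy semiprime; a real RSA modulus is vastly larger
        p = factor(n)
        print(p, n // p)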

  16. Computational modeling as a tool for water resources management: an alternative approach to problems of multiple uses

    Directory of Open Access Journals (Sweden)

    Haydda Manolla Chaves da Hora

    2012-04-01

    Full Text Available Today in Brazil there are many cases of incompatibility between water use and water availability. Due to the increase in the required variety and volume, the concept of multiple uses was created, as stated by Pinheiro et al. (2007). The use of the same resource to satisfy different needs under several restrictions (qualitative and quantitative) creates conflicts. Aiming to minimize these conflicts, this work was applied to the particular cases of Hydrographic Regions VI and VIII of Rio de Janeiro State, using computational modeling techniques (based on the MOHID software, Water Modeling System) as a tool for water resources management.

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  18. Attentional Resource Allocation and Cultural Modulation in a Computational Model of Ritualized Behavior

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Sørensen, Jesper

    2016-01-01

    How do cultural and religious rituals influence human perception and cognition, and what separates the highly patterned behaviors of communal ceremonies from perceptually similar precautionary and compulsive behaviors? These are some of the questions that recent theoretical models and empirical studies have tried to answer by focusing on ritualized behavior instead of ritual. Ritualized behavior (i.e., a set of behavioral features embedded in rituals) increases attention to detail and induces cognitive resource depletion, which together support distinct modes of action categorization. While ... patterns and the simulation data were subjected to linear and non-linear analysis. The results are used to exemplify how action perception of ritualized behavior a) might influence allocation of attentional resources; and b) can be modulated by cultural priors. Further explorations of the model show why...

  19. Computer and Video Games in Family Life: The Digital Divide as a Resource in Intergenerational Interactions

    Science.gov (United States)

    Aarsand, Pal Andre

    2007-01-01

    In this ethnographic study of family life, intergenerational video and computer game activities were videotaped and analysed. Both children and adults invoked the notion of a digital divide, i.e. a generation gap between those who master and do not master digital technology. It is argued that the digital divide was exploited by the children to…

  20. Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.

    Science.gov (United States)

    Beltrametti, Monica; English, Will

    1994-01-01

    Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…

  1. Computer modelling of the UK wind energy resource: final overview report

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Ravenscroft, F

    1993-12-31

    This report describes the results of a programme of work to estimate the UK wind energy resource. Mean wind speed maps and quantitative resource estimates were obtained using the NOABL mesoscale (1 km resolution) numerical model for the prediction of wind flow over complex terrain. NOABL was used in conjunction with digitised terrain data and wind data from surface meteorological stations for a ten year period (1975-1984) to provide digital UK maps of mean wind speed at 10m, 25m and 45m above ground level. Also included in the derivation of these maps was the use of the Engineering Science Data Unit (ESDU) method to model the effect on wind speed of the abrupt change in surface roughness that occurs at the coast. Existing isovent maps, based on standard meteorological data which take no account of terrain effects, indicate that 10m annual mean wind speeds vary between about 4.5 and 7 m/s over the UK with only a few coastal areas over 6 m/s. The present study indicated that 23% of the UK land area had speeds over 6 m/s, with many hill sites having 10m speeds over 10 m/s. It is concluded that these 'first order' resource estimates represent a substantial improvement over the presently available 'zero order' estimates. (20 figures, 7 tables, 10 references). (author)

  2. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    International Nuclear Information System (INIS)

    Kirk, Bernadette Lugue

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries. An important activity of RSICC is its participation in international efforts on computational and experimental benchmarks. An example is the Shielding Integral Benchmarks Archival Database (SINBAD), which includes shielding benchmarks for fission, fusion and accelerators. RSICC is funded by the United States Department of Energy, Department of Homeland Security and Nuclear Regulatory Commission.

  3. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries.

  4. A resource letter CSSMD-1: computer simulation studies by the method of molecular dynamics

    International Nuclear Information System (INIS)

    Goel, S.P.; Hockney, R.W.

    1974-01-01

    A comprehensive bibliography on computer simulation studies by the method of Molecular Dynamics is presented. The bibliography includes references to relevant literature published up to mid 1973, starting from the first paper of Alder and Wainwright, published in 1957. The procedure of the method of Molecular Dynamics, the main fields of study in which it has been used, its limitations and how these have been overcome in some cases are also discussed [pt

  5. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    Science.gov (United States)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which represents a major limitation in many ways, from limited processing and storage power to accessibility and availability. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first one (VM1) is running on Amazon Web Services (AWS) and the second one (VM2) is running on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, accessible from everywhere, scalable, works in a distributed computing environment, creates a real-time multiuser collaboration platform, uses interoperable programming languages and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), 3) user management. The web services are running on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state

  6. Resource utilization and costs during the initial years of lung cancer screening with computed tomography in Canada.

    Science.gov (United States)

    Cressman, Sonya; Lam, Stephen; Tammemagi, Martin C; Evans, William K; Leighl, Natasha B; Regier, Dean A; Bolbocean, Corneliu; Shepherd, Frances A; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R; Mayo, John R; McWilliams, Annette; Couture, Christian; English, John C; Goffin, John; Hwang, David M; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J; Goss, Glenwood D; Nicholas, Garth; Seely, Jean M; Sekhon, Harmanjatinder S; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D; Tan, Wan C; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J

    2014-10-01

    It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer's perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400-$505) for the initial 18-months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553-$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone, ($47,792; 95% CI, $43,254-$52,200; p = 0.061). In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative intent treatment were lower than the average per-person cost of treating advanced stage lung cancer which infrequently results in a cure.

  7. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    CERN Document Server

    INSPIRE-00416173; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, by a Machin...

  8. Recent advances in computational optimization

    CERN Document Server

    2013-01-01

    Optimization is part of our everyday life. We try to organize our work in a better way, and optimization occurs in minimizing time and cost or maximizing profit, quality and efficiency. Many real-world problems arising in engineering, economics, medicine and other domains can also be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization. The book presents recent advances in computational optimization. The volume includes important real-world problems such as parameter settings for controlling processes in a bioreactor, robot skin wiring, strip packing, project scheduling, tuning of a PID controller and so on. Some of them can be solved by applying traditional numerical methods, but others need a huge amount of computational resources. For these it is shown that it is appropriate to develop algorithms based on metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming, etc.

  9. Democratizing Computer Science

    Science.gov (United States)

    Margolis, Jane; Goode, Joanna; Ryoo, Jean J.

    2015-01-01

    Computer science programs are too often identified with a narrow stratum of the student population, often white or Asian boys who have access to computers at home. But because computers play such a huge role in our world today, all students can benefit from the study of computer science and the opportunity to build skills related to computing. The…

  10. I - Detector Simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  11. II - Detector simulation for the LHC and beyond : how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  12. Menu-driven cloud computing and resource sharing for R and Bioconductor.

    Science.gov (United States)

    Bolouri, Hamid; Dulepet, Rajiv; Angerman, Michael

    2011-08-15

    We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. hbolouri@fhcrc.org.
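
    The record above mentions launching private Amazon EC2 nodes and storing private data and scripts on S3 with controlled access. As a minimal, hypothetical sketch of the underlying AWS operations (not CRdata's own point-and-click interface or Ruby code), the Python snippet below uses boto3 to upload a script to S3 and start a compute node; the bucket name, key, and AMI ID are placeholders.

    # Hypothetical sketch of the AWS primitives behind a CRdata-style workflow
    # (bucket, key, and AMI ID are placeholders, not values from the paper).
    import boto3

    s3 = boto3.client("s3")
    # Store a private R script on S3; access is governed by the bucket policy/ACL.
    s3.upload_file("analysis.R", "my-private-bucket", "scripts/analysis.R")

    ec2 = boto3.resource("ec2")
    # Launch a single private worker node to run the analysis.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", instances[0].id)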

  13. Energy efficiency models and optimization algorithm to enhance on-demand resource delivery in a cloud computing environment / Thusoyaone Joseph Moemi

    OpenAIRE

    Moemi, Thusoyaone Joseph

    2013-01-01

    Online hosted services are what is referred to as cloud computing. Access to these services is via the internet. It shifts the traditional IT resource ownership model to renting. Thus, the high cost of infrastructure cannot limit the less privileged from experiencing the benefits that this new paradigm brings. Therefore, cloud computing provides flexible services to cloud users in the form of software, platform and infrastructure as services. The goal behind cloud computing is to provi...

  14. Computational resources to filter gravitational wave data with P-approximant templates

    International Nuclear Information System (INIS)

    Porter, Edward K

    2002-01-01

    The prior knowledge of the gravitational waveform from compact binary systems makes matched filtering an attractive detection strategy. This detection method involves the filtering of the detector output with a set of theoretical waveforms or templates. One of the most important factors in this strategy is knowing how many templates are needed in order to reduce the loss of possible signals. In this study, we calculate the number of templates and computational power needed for a one-step search for gravitational waves from inspiralling binary systems. We build on previous works by first expanding the post-Newtonian waveforms to 2.5-PN order and second, for the first time, calculating the number of templates needed when using P-approximant waveforms. The analysis is carried out for the four main first-generation interferometers, LIGO, GEO600, VIRGO and TAMA. As well as template number, we also calculate the computational cost of generating banks of templates for filtering GW data. We carry out the calculations for two initial conditions. In the first case we assume a minimum individual mass of 1 M☉ and in the second, we assume a minimum individual mass of 5 M☉. We find that, in general, we need more P-approximant templates to carry out a search than if we use standard PN templates. This increase varies according to the order of PN-approximation, but can be as high as a factor of 3 and is explained by the smaller span of the P-approximant templates as we go to higher masses. The promising outcome is that for 2-PN templates, the increase is small and is outweighed by the known robustness of the 2-PN P-approximant templates.
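
    Since the record hinges on matched filtering, correlating detector output against a bank of templates, the following minimal numpy sketch illustrates a frequency-domain matched filter for a single template; the toy signal, white-noise weighting, and sampling rate are illustrative placeholders, not the paper's PN or P-approximant waveforms or detector noise curves.

    # Minimal matched-filter sketch (illustrative only; white noise assumed,
    # not the detector PSDs or P-approximant templates used in the paper).
    import numpy as np

    fs = 4096                      # sampling rate [Hz], placeholder
    t = np.arange(0, 4, 1 / fs)    # 4 s of data
    template = np.sin(2 * np.pi * 100 * t) * np.exp(-t)   # toy "chirp-like" template
    data = 0.5 * np.roll(template, 2000) + np.random.normal(0, 1, t.size)

    # Frequency-domain cross-correlation of data with the template.
    D = np.fft.rfft(data)
    H = np.fft.rfft(template)
    stat = np.fft.irfft(D * np.conj(H), n=t.size)
    stat /= np.sqrt(np.sum(np.abs(template) ** 2))   # crude normalisation

    print("peak statistic at sample", int(np.argmax(np.abs(stat))))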

  15. SuperB R&D computing program: HTTP direct access to distributed resources

    Science.gov (United States)

    Fella, A.; Bianchi, F.; Ciaschini, V.; Corvo, M.; Delprete, D.; Diacono, D.; Di Simone, A.; Franchini, P.; Donvito, G.; Giacomini, F.; Gianoli, A.; Longo, S.; Luitz, S.; Luppi, E.; Manzali, M.; Pardi, S.; Perez, A.; Rama, M.; Russo, G.; Santeramo, B.; Stroili, R.; Tomassetti, L.

    2012-12-01

    The SuperB asymmetric energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a luminosity target of 10^36 cm^-2 s^-1. The increasing network performance, also in the Wide Area Network environment, and the capability to read data remotely with good efficiency are providing new possibilities and opening new scenarios in the data access field. Subjects like data access and data availability in a distributed environment are key points in the definition of the computing model for an HEP experiment like SuperB. R&D efforts in this field have been carried out during the last year in order to release the Computing Technical Design Report within 2013. WAN direct access to data has been identified as one of the more interesting viable options; robust and reliable protocols such as HTTP/WebDAV and xrootd are the subjects of a specific R&D line in a mid-term scenario. In this work we present the R&D results obtained in the study of new data access technologies for typical HEP use cases, focusing on specific protocols such as HTTP and WebDAV in Wide Area Network scenarios. Efficiency, performance and reliability tests performed in a data analysis context are reported. Future R&D plans include HTTP and xrootd protocol comparison tests, in terms of performance, efficiency, security and available features.
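
    As a small illustration of the kind of WAN direct access evaluated here, the sketch below performs an HTTP partial (range) read with the Python requests library, the basic primitive that lets an analysis job fetch only the bytes it needs from a remote file; the URL and byte range are placeholders and this is not the SuperB framework itself.

    # Sketch of an HTTP partial read over WAN (URL and byte range are placeholders).
    import requests

    url = "https://example.org/data/events.root"      # placeholder remote file
    headers = {"Range": "bytes=0-1048575"}            # request the first 1 MiB only

    resp = requests.get(url, headers=headers, timeout=30)
    if resp.status_code == 206:                       # 206 = Partial Content
        chunk = resp.content
        print(f"fetched {len(chunk)} bytes without downloading the whole file")
    else:
        print("server did not honour the range request:", resp.status_code)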

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  17. Internet resources for dentistry: computer, Internet, reference, and sites for enhancing personal productivity of the dental professional.

    Science.gov (United States)

    Guest, G F

    2000-08-15

    At the onset of the new millennium the Internet has become the new standard means of distributing information. In the last two to three years there has been an explosion of e-commerce, with hundreds of new web sites being created every minute. For most corporate entities, a web site is as essential as the phone book listing used to be. Twenty years ago technologists directed how computer-based systems were utilized. Now it is the end users of personal computers who have gained expertise and drive the functionality of software applications. The computer, initially invented for mathematical functions, has transitioned from this role to an integrated communications device that provides the portal to the digital world. The Web needs to be used by healthcare professionals, not only for professional activities, but also for instant access to information and services "just when they need it." This will facilitate the longitudinal use of information as society continues to gain better information access skills. With the demand for current "just in time" information and the standards established by Internet protocols, reference sources of information may be maintained in dynamic fashion. News services have been available through the Internet for several years, but now reference materials such as online journals and digital textbooks have become available and have the potential to change the traditional publishing industry. The pace of change should make us consider Will Rogers' advice, "It isn't good enough to be moving in the right direction. If you are not moving fast enough, you can still get run over!" The intent of this article is to complement previous articles on Internet Resources published in this journal, by presenting information about web sites that present information on computer and Internet technologies, reference materials, news information, and information that lets us improve personal productivity. Neither the author nor the Journal endorses any of the

  18. A Two-Tier Energy-Aware Resource Management for Virtualized Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2016-01-01

    Full Text Available The economic cost of electric power takes the most significant part of the total cost of a data center; thus energy conservation is an important issue in cloud computing systems. One well-known technique to reduce energy consumption is the consolidation of Virtual Machines (VMs). However, it may lose some performance on energy saving and Quality of Service (QoS) for dynamic workloads. Fortunately, Dynamic Frequency and Voltage Scaling (DVFS) is an efficient technique to save energy in a dynamic environment. In this paper, combined with the DVFS technology, we propose a cooperative two-tier energy-aware management method including local DVFS control and global VM deployment. The DVFS controller adjusts the frequencies of the homogeneous processors in each server at run time based on a practical energy prediction. On the other hand, the Global Scheduler assigns VMs onto the designated servers in cooperation with the local DVFS controller. The final evaluation results demonstrate the effectiveness of our two-tier method in energy saving.
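
    To make the two-tier idea concrete, the sketch below separates a global placement step (assign each VM to the least-loaded server) from a local DVFS step (scale each server's frequency to its utilisation); the frequency levels, capacities, greedy placement rule, and utilisation model are illustrative assumptions, not the authors' controller.

    # Illustrative two-tier sketch: global VM placement + local DVFS control.
    # Frequency levels, capacities and the utilisation model are assumptions.
    FREQ_LEVELS = [1.2, 1.8, 2.4, 3.0]   # GHz steps the local controller may pick

    class Server:
        def __init__(self, name, capacity):
            self.name, self.capacity, self.load = name, capacity, 0.0

        def utilisation(self):
            return self.load / self.capacity

        def pick_frequency(self):
            # Local tier: lowest frequency whose relative speed covers utilisation.
            for f in FREQ_LEVELS:
                if self.utilisation() <= f / FREQ_LEVELS[-1]:
                    return f
            return FREQ_LEVELS[-1]

    def place_vms(vms, servers):
        # Global tier: greedy assignment of each VM to the least-loaded server.
        plan = {}
        for name, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
            target = min(servers, key=lambda s: s.utilisation())
            target.load += demand
            plan[name] = target.name
        return plan

    servers = [Server("s1", 10.0), Server("s2", 10.0)]
    vms = {"vm1": 4.0, "vm2": 3.0, "vm3": 2.0}
    print(place_vms(vms, servers))
    for s in servers:
        print(s.name, "runs at", s.pick_frequency(), "GHz")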

  19. Reconfiguration of Computation and Communication Resources in Multi-Core Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pezzarossa, Luca

    This thesis investigates the use of reconfiguration in the context of multi-core real-time systems targeting embedded applications. We address the reconfiguration of both the computation and the communication resources of a multi-core platform. Our approach is to associate reconfiguration with operational mode changes where the system, during normal operation, changes a subset of the executing tasks to adapt its behaviour to new conditions. Reconfiguration is therefore used during a mode change to modify the real-time guaranteed services of the communication channels between the tasks that are affected by the reconfiguration ... by the communication fabric between the cores of the platform. To support this, we present a new network-on-chip architecture, named Argo 2, that allows instantaneous and time-predictable reconfiguration of the communication channels. Our reconfiguration-capable architecture is prototyped using the existing time...

  20. Newtonian self-gravitating system in a relativistic huge void universe model

    Energy Technology Data Exchange (ETDEWEB)

    Nishikawa, Ryusuke; Nakao, Ken-ichi [Department of Mathematics and Physics, Graduate School of Science, Osaka City University, 3-3-138 Sugimoto, Sumiyoshi, Osaka 558-8585 (Japan); Yoo, Chul-Moon, E-mail: ryusuke@sci.osaka-cu.ac.jp, E-mail: knakao@sci.osaka-cu.ac.jp, E-mail: yoo@gravity.phys.nagoya-u.ac.jp [Division of Particle and Astrophysical Science, Graduate School of Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602 (Japan)

    2016-12-01

    We consider a test of the Copernican Principle through observations of the large-scale structures, and for this purpose we study the self-gravitating system in a relativistic huge void universe model which does not invoke the Copernican Principle. If we focus on the weakly self-gravitating and slowly evolving system whose spatial extent is much smaller than the scale of the cosmological horizon in the homogeneous and isotropic background universe model, the cosmological Newtonian approximation is available. Also in the huge void universe model, the same kind of approximation as the cosmological Newtonian approximation is available for the analysis of the perturbations contained in a region whose spatial size is much smaller than the scale of the huge void: the effects of the huge void are taken into account in a perturbative manner by using the Fermi-normal coordinates. By using this approximation, we derive the equations of motion for the weakly self-gravitating perturbations whose elements have relative velocities much smaller than the speed of light, and show that the derived equations can be significantly different from those in the homogeneous and isotropic universe model, due to the anisotropic volume expansion in the huge void. We linearize the derived equations of motion and solve them. The solutions show that the behaviors of linear density perturbations are very different from those in the homogeneous and isotropic universe model.

  1. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    Science.gov (United States)

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With the improvement of remote sensing image capture, featuring hyperspectral, high spatial resolution and high temporal resolution data, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hot spot of current image processing technology. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in processing remote sensing images. CUFFT is a GPU-based FFT function library, while FFTW is an FFT algorithm library developed for the CPU on the PC platform and is currently the fastest CPU-based FFT library. However, both methods share a common problem: once the available GPU memory or main memory is smaller than the image, an out-of-memory error or memory overflow occurs when using them to compute the image FFT. To address this problem, a GPU and partitioning technology based Huge Remote Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT function library, the problem of out-of-memory errors and memory overflow is solved. Moreover, this method is proved rational by experiments combined with the CCD image of the HJ-1A satellite. When applied to practical image processing, it improves the effect of the image processing and speeds up the processing, which saves computation time and achieves sound results.
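
    The partitioning idea, processing a huge image in tiles so that each FFT fits in limited memory, can be sketched with numpy as below; the tile size, the low-pass filter applied, and the use of numpy in place of CUFFT are illustrative assumptions rather than the paper's HRFFT implementation.

    # Tile-wise FFT filtering sketch (numpy stands in for CUFFT; tile size and
    # the low-pass filter are illustrative, not the HRFFT algorithm itself).
    import numpy as np

    def filter_tile(tile, keep_fraction=0.2):
        """FFT a single tile, zero out high frequencies, and transform back."""
        spec = np.fft.fft2(tile)
        h, w = spec.shape
        mask = np.zeros_like(spec)
        kh, kw = int(h * keep_fraction), int(w * keep_fraction)
        mask[:kh, :kw] = mask[:kh, -kw:] = mask[-kh:, :kw] = mask[-kh:, -kw:] = 1
        return np.real(np.fft.ifft2(spec * mask))

    def process_in_tiles(image, tile=1024):
        """Process an image too large for a single in-memory FFT, tile by tile."""
        out = np.empty_like(image, dtype=float)
        for r in range(0, image.shape[0], tile):
            for c in range(0, image.shape[1], tile):
                block = image[r:r + tile, c:c + tile]
                out[r:r + tile, c:c + tile] = filter_tile(block)
        return out

    demo = np.random.rand(2048, 2048)          # stand-in for a huge scene
    result = process_in_tiles(demo, tile=1024)
    print(result.shape)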

  2. Implementation of DFT application on ternary optical computer

    Science.gov (United States)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Owing to its characteristics of a huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which need a lot of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of the ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
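
    The "full parallel" formulation rests on the fact that each DFT output bin X[k] = sum_n x[n]*exp(-2*pi*i*k*n/N) is an independent inner product, so all bins can be computed concurrently. The process-pool sketch below illustrates that decomposition in ordinary Python/numpy; it is of course unrelated to the optical hardware of the TOC, and the signal length and pool size are arbitrary choices.

    # Each DFT bin X[k] is an independent dot product, so bins can be computed
    # in parallel; a process pool stands in for the parallel (optical) hardware.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def dft_bin(args):
        x, k = args
        n = np.arange(x.size)
        return np.dot(x, np.exp(-2j * np.pi * k * n / x.size))

    if __name__ == "__main__":
        x = np.random.rand(256)
        with ProcessPoolExecutor() as pool:
            X = np.array(list(pool.map(dft_bin, [(x, k) for k in range(x.size)])))
        print(np.allclose(X, np.fft.fft(x)))   # True: matches the library FFT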

  3. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    Science.gov (United States)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method used was a mixed method with a sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities in each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects in each indicator was also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty in solving the problem with a computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject was unable to meet almost all of the problem-solving indicators and could not precisely construct the initial completion table, so the completion phase with Polya's steps was constrained.

  4. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    Science.gov (United States)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet and provides application resources and data to users based on their demand. The basis of cloud computing is the consumer-provider model. The cloud provider provides resources which consumers can access using the cloud computing model in order to build their applications based on their demand. A cloud data center is a bulk of resources on a shared-pool architecture for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines as per application-specific configuration, and those applications are free to choose their own configuration. On the one hand, there is a huge number of resources, and on the other hand, a huge number of requests has to be served effectively. Therefore, the resource allocation policy and scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy with a monitor component. The monitor component helps to increase cloud resource utilization by managing the Hungarian algorithm, monitoring its state and altering its state based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
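
    As a minimal illustration of the Hungarian-algorithm step (outside CloudSim, which is a Java toolkit), the sketch below uses scipy's linear_sum_assignment to map tasks to virtual machines so that the total estimated execution cost is minimised; the cost-matrix values are made-up placeholders.

    # Hungarian-algorithm task-to-VM assignment sketch (cost values are
    # placeholders; CloudSim itself is not used here).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j] = estimated completion time of task i on VM j (made-up numbers)
    cost = np.array([
        [4.0, 2.0, 8.0],
        [4.0, 3.0, 7.0],
        [3.0, 1.0, 6.0],
    ])

    task_idx, vm_idx = linear_sum_assignment(cost)
    for t, v in zip(task_idx, vm_idx):
        print(f"task {t} -> VM {v} (cost {cost[t, v]})")
    print("total cost:", cost[task_idx, vm_idx].sum())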

  5. A case report of surgical debulking for a huge mass of elephantiasis neuromatosa

    Science.gov (United States)

    Hoshi, Manabu; Ieguchi, Makoto; Taguchi, Susumu; Yamasaki, Shinya

    2009-01-01

    Achievement of a safe outcome for an extensive mass with hypervascularity in the extremities requires a surgical team skilled in musculoskeletal oncology. We report debulking surgery for a huge mass of elephantiasis neuromatosa in the right leg of a 56-year-old man using the novel Ligasure® vessel sealing system. PMID:21139882

  6. A case report of surgical debulking for a huge mass of elephantiasis neuromatosa

    Directory of Open Access Journals (Sweden)

    Shinya Yamasaki

    2009-07-01

    Full Text Available Achievement of a safe outcome for an extensive mass with hypervascularity in the extremities requires a surgical team skilled in musculoskeletal oncology. We report debulking surgery for a huge mass of elephantiasis neuromatosa in the right leg of a 56-year-old man using the novel Ligasure® vessel sealing system.

  7. A Huge Ovarian Cyst in a Middle-Aged Iranian Female

    Directory of Open Access Journals (Sweden)

    Mohammad Kazem Moslemi

    2010-05-01

    Full Text Available A 38-year-old Iranian woman was found to have a huge ovarian cystic mass. Her presenting symptom was vague abdominal pain and severe abdominal distention. She underwent laparotomy and after surgical removal, the mass was found to be mucinous cystadenoma on histology.

  8. Multimedia messages in genetics: design, development, and evaluation of a computer-based instructional resource for secondary school students in a Tay Sachs disease carrier screening program.

    Science.gov (United States)

    Gason, Alexandra A; Aitken, MaryAnne; Delatycki, Martin B; Sheffield, Edith; Metcalfe, Sylvia A

    2004-01-01

    Tay Sachs disease is a recessively inherited neurodegenerative disorder, for which carrier screening programs exist worldwide. Education for those offered a screening test is essential in facilitating informed decision-making. In Melbourne, Australia, we have designed, developed, and evaluated a computer-based instructional resource for use in the Tay Sachs disease carrier screening program for secondary school students attending Jewish schools. The resource entitled "Genetics in the Community: Tay Sachs disease" was designed on a platform of educational learning theory. The development of the resource included formative evaluation using qualitative data analysis supported by descriptive quantitative data. The final resource was evaluated within the screening program and compared with the standard oral presentation using a questionnaire. Knowledge outcomes were measured both before and after either of the educational formats. Data from the formative evaluation were used to refine the content and functionality of the final resource. The questionnaire evaluation of 302 students over two years showed the multimedia resource to be equally effective as an oral educational presentation in facilitating participants' knowledge construction. The resource offers a large number of potential benefits, which are not limited to the Tay Sachs disease carrier screening program setting, such as delivery of a consistent educational message, short delivery time, and minimum financial and resource commitment. This article outlines the value of considering educational theory and describes the process of multimedia development providing a framework that may be of value when designing genetics multimedia resources in general.

  9. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    Science.gov (United States)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.

  10. Huge gastric diospyrobezoars successfully treated by oral intake and endoscopic injection of Coca-Cola.

    Science.gov (United States)

    Chung, Y W; Han, D S; Park, Y K; Son, B K; Paik, C H; Jeon, Y C; Sohn, J H

    2006-07-01

    A diospyrobezoar is a type of phytobezoar that is considered to be harder than other types of phytobezoars. Here, we describe a new treatment modality, which effectively and easily disrupted huge gastric diospyrobezoars. A 41-year-old man with a history of diabetes mellitus was admitted with lower abdominal pain and vomiting. Upper gastrointestinal endoscopy revealed three huge, round diospyrobezoars in the stomach. He was made to drink two cans of Coca-Cola every 6 h. At endoscopy the next day, the bezoars were partially dissolved and had softened. We performed direct endoscopic injection of Coca-Cola into each bezoar. At repeat endoscopy the next day, the bezoars were completely dissolved.

  11. Successful Vaginal Delivery despite a Huge Ovarian Mucinous Cystadenoma Complicating Pregnancy: A Case Report

    Directory of Open Access Journals (Sweden)

    Dipak Mandi

    2013-12-01

    Full Text Available A 22-year-old patient with 9 months of amenorrhea and a huge abdominal swelling was admitted to our institution with an ultrasonography report of a multiloculated cystic space-occupying lesion, almost taking up the whole abdomen (probably of ovarian origin), along with a single live intrauterine fetus. She delivered a baby boy vaginally within 4 hours of admission without any maternal complication, but the baby had features of intrauterine growth restriction along with low birth weight. On the 8th postpartum day, the multiloculated cystic mass, which arose from the right ovary and weighed about 11 kg, was removed via laparotomy. A mucinous cystadenoma with no malignant cells in peritoneal washing was detected on histopathology examination. This report describes a rare case of a successful vaginal delivery despite a huge cystadenoma of the right ovary complicating the pregnancy.

  12. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    Full Text Available Invoice printing uses only two-color printing, so the invoice font image can be seen as a binary image. To embed watermarks into the invoice image, pixels need to be flipped. The larger the watermark is, the more pixels need to be flipped. We propose a new pixels flipping method in the invoice image for huge watermarking capacity. The pixels flipping method includes a novel interpolation method for binary images, a flippable pixels evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable pixels evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and fitter for human vision. Experiments show that the proposed flipping method not only keeps the invoice font structure well but also improves watermarking capacity.

  13. A new pixels flipping method for huge watermarking capacity of the invoice font image.

    Science.gov (United States)

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Xu, Qishuai; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so the invoice font image can be seen as a binary image. To embed watermarks into the invoice image, pixels need to be flipped. The larger the watermark is, the more pixels need to be flipped. We propose a new pixels flipping method in the invoice image for huge watermarking capacity. The pixels flipping method includes a novel interpolation method for binary images, a flippable pixels evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable pixels evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and fitter for human vision. Experiments show that the proposed flipping method not only keeps the invoice font structure well but also improves watermarking capacity.
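
    To give a concrete flavour of a flippability test for binary images (a generic neighbourhood criterion, not the authors' specific evaluation mechanism, which also uses gravity centre and chaos degree), the sketch below scores a pixel by how many black-to-white transitions occur around its 8-neighbourhood: fewer transitions suggests that flipping it disturbs connectivity and smoothness less.

    # Generic flippability sketch for a binary image: count 0->1 transitions
    # around the 8-neighbourhood (not the paper's exact mechanism).
    import numpy as np

    def transition_count(img, r, c):
        """Number of 0->1 transitions walking clockwise around pixel (r, c)."""
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        ring = [img[r + dr, c + dc] for dr, dc in offsets]
        return sum(ring[i] == 0 and ring[(i + 1) % 8] == 1 for i in range(8))

    def flippable(img, r, c, max_transitions=1):
        """A pixel is a flip candidate if its neighbourhood stays simple."""
        return transition_count(img, r, c) <= max_transitions

    glyph = np.array([
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
    ])
    candidates = [(r, c) for r in range(1, 4) for c in range(1, 4)
                  if flippable(glyph, r, c)]
    print("flip candidates:", candidates)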

  14. The therapy for huge goiter together with hyperthyroidism through 131I case studies

    International Nuclear Information System (INIS)

    He Jianhua; Yu Wencai; Zeng Qingwen; Wu Congjun

    2001-01-01

    Objective: 214 cases of the treatment of huge goiter with hyperthyroidism are reviewed to collect clinical material for improving the indications for 131 I therapy of hyperthyroidism. Methods: In all of these cases, patients took a single full dose of 131 I based on MC Garack's formula. Results: Among them, 154 cases resolved, accounting for 72%; in 139 of the cases the goiter was reduced to normal size, accounting for 64.9% of the patients. Only 114 patients had side effects, and within one year 12.1% of them had symptoms of hypothyroidism. Conclusion: The statistics show that 131 I therapy is convenient, safe and effective, and reduces suffering in the treatment of huge goiter with hyperthyroidism

  15. Huge mucinous cystadenoma of ovary, describing a young patient: case report

    Directory of Open Access Journals (Sweden)

    Soheila Aminimoghaddam

    2017-08-01

    Conclusion: Ovarian cysts in young women that are associated with elevated levels of tumor markers and ascites require careful evaluation. Management of ovarian cysts depends on the patient's age, the size of the cyst, and its histopathological nature. Conservative surgery such as ovarian cystectomy or salpingo-oophorectomy is adequate in mucinous tumors of the ovary. Multiple frozen sections are very important to detect malignant variation of this tumor and help accurate patient management. Surgical expertise is required to prevent complications when a huge tumor has distorted the anatomy, so the gynecologic oncologist plays a prominent role in management. In this case, despite the huge tumor and massive ascites, the uterus and ovaries were preserved by the gynecologic oncologist, and the patient is well up to now.

  16. Transcatheter Closure of Bilateral Multiple Huge Pulmonary Arteriovenous Malformations with Homemade Double-Umbrella Occluders

    International Nuclear Information System (INIS)

    Zhong Hongshan; Xu Ke; Shao Haibo

    2008-01-01

    A 28-year-old man underwent successful transcatheter occlusion of three huge pulmonary arteriovenous malformations (PAVMs) using homemade double-umbrella occluders and stainless steel coils. Thoracic CT with three-dimensional reconstruction and pulmonary angiography were used for treatment planning and follow-up. The diameters of the feeding vessels were 11 mm, 13 mm, and 14 mm, respectively. This report demonstrates the novel design and utility of the double-umbrella occluder, an alternative tool for treatment of large PAVMs.

  17. On the huge Lie superalgebra of pseudo superdifferential operators and super KP-hierarchies

    International Nuclear Information System (INIS)

    Sedra, M.B.

    1995-08-01

    Lie superalgebraic methods are used to establish a connection between the huge Lie superalgebra Ξ of super (pseudo) differential operators and various super KP-hierarchies. We show in particular that Ξ splits into 5 = 2 x 2 + 1 graded algebras expected to correspond to five classes of super KP-hierarchies generalizing the well-known Manin-Radul and Figueroa O'Farrill-Ramos supersymmetric KP-hierarchies. (author). 10 refs

  18. Propranolol in Treatment of Huge and Complicated Infantile Hemangiomas in Egyptian Children

    OpenAIRE

    Hassan, Basheir A.; Shreef, Khalid S.

    2014-01-01

    Background. Infantile hemangiomas (IHs) are the most common benign tumours of infancy. Propranolol has recently been reported to be a highly effective treatment for IHs. This study aimed to evaluate the efficacy and side effects of propranolol for treatment of complicated cases of IHs. Patients and Methods. This prospective clinical study included 30 children with huge or complicated IHs; their ages ranged from 2 months to 1 year. They were treated by oral propranolol. Treatment outcomes were...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  20. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  1. Disaster Characteristics and Mitigation Measures of Huge Glacial Debris Flows along the Sichuan-Tibet Railway

    Science.gov (United States)

    Liu, Jinfeng; You, Yong; Zhang, Guangze; Wang, Dong; Chen, Jiangang; Chen, Huayong

    2017-04-01

    The Ranwu-Tongmai section of the Sichuan-Tibet Railway passes through the Palongzangbu River basin, which is located in the southeast Qinghai-Tibetan Plateau. Due to the widely distributed maritime glaciers in this area, huge glacier debris flows are well developed. Consequently, disastrous glacier debris flows of huge scale (10^6-10^8 m^3 for one debris flow event) and damage become one of the key influencing factors for the route alignment of the Sichuan-Tibet Railway. Research on the disaster characteristics and mitigation measures of huge glacial debris flows in the study area was conducted by remote sensing interpretation, field investigation, parameter calculation and numerical simulation. Firstly, the distribution of the glaciers, glacier lakes and glacier debris flows was identified and classified, and the disaster characteristics of the huge glacier debris flows were analyzed and summarized. Secondly, the dynamic parameters including the flood peak discharge, debris flow peak discharge, velocity, and total volume of a single debris flow event were calculated. Based on the disaster characteristics and the spatial relation with the railway, some mitigation principles and measures were proposed. Finally, the Guxiang Gully, where a huge glacier debris flow with a volume of 2×10^8 m^3 occurred in 1953, was selected as a typical case to analyze its disaster characteristics and mitigation measures. The interpretation results show that the glacier area is about 970 km^2, which accounts for 19% of the total study area. 130 glacier lakes and 102 glacier debris flows were identified and classified. The Sichuan-Tibet Railway passes through 43 glacier debris flows in the study area. The specific disaster characteristics were analyzed and corresponding mitigation measures were proposed for the route selection of the railway. For the Guxiang Gully, a numerical simulation of the deposition at the alluvial fan was conducted. The simulation results show that the

  2. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  3. The actual status of uranium ore resources at Eko Remaja Sector: the need of verification of resources computation and geometrical form of mineralization zone by mining test

    International Nuclear Information System (INIS)

    Johan Baratha; Muljono, D.S.; Agus Sumaryanto; Handoko Supalal

    1996-01-01

    Uranium ore resources calculation was done after completing all geological work steps. The estimation process of ore resources started from evaluation drilling and continued with borehole logging. From logging, the results were presented in anomaly graphs and then processed to determine the thickness and grade values of the ore. Those mineralization points were correlated with one another to form mineralization zones which have a direction of N 270 degree to N 285 degree with a 70 degree dip to the North. From grouping the mineralization distribution, 19 mineralization planes were constructed which contain 553 tons of U3O8 measured. It is suggested that before expanding the measured ore deposit area, a mining test should be done first at certain mineralization planes to prove the method applied to calculate the reserve. Results from the mining test could be very useful to reevaluate all the work steps done. (author); 4 refs; 2 tabs; 8 figs
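
    As a rough, hypothetical illustration of how a measured tonnage such as the 553 t U3O8 figure is typically built up (the actual computation being what the mining test is meant to verify), the sketch below sums area × thickness × bulk density × grade over mineralization planes; all numbers are invented placeholders, not the Eko Remaja data.

    # Hypothetical ore-resource roll-up: tonnage = area * thickness * density * grade,
    # summed over mineralization planes. All numbers are invented placeholders.
    planes = [
        # (area_m2, thickness_m, bulk_density_t_per_m3, grade_fraction_U3O8)
        (12000.0, 1.5, 2.6, 0.0021),
        (8000.0, 2.0, 2.6, 0.0015),
        (15000.0, 1.2, 2.6, 0.0018),
    ]

    total_u3o8_t = 0.0
    for area, thickness, density, grade in planes:
        ore_tonnage = area * thickness * density       # tonnes of ore in the plane
        total_u3o8_t += ore_tonnage * grade            # contained U3O8 in tonnes

    print(f"contained U3O8: {total_u3o8_t:.1f} t")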

  4. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Full Text Available Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
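
    Although the paper's protocol is its own optimized design, the classic marker-based (Chandy-Lamport-style) snapshot idea it builds on can be sketched as below: a node records its local state when it first sees a marker and logs messages arriving on channels whose marker is still pending; the in-memory two-node "network" and the message model are illustrative assumptions.

    # Marker-based snapshot sketch in the Chandy-Lamport style (an illustration
    # of the classic idea, not the paper's protocol; channels are in-memory).
    MARKER = object()

    class Node:
        def __init__(self, name, peers):
            self.name, self.peers = name, peers
            self.state = 0                       # local application state
            self.snapshot = None
            self.pending = set()                 # peers whose marker we still await
            self.channel_log = {p: [] for p in peers}

        def start_snapshot(self, network):
            self._record(network)

        def receive(self, sender, msg, network):
            if msg is MARKER:
                if self.snapshot is None:
                    self._record(network)
                self.pending.discard(sender)     # channel from sender is now closed
            else:
                self.state += msg                # normal application message
                if self.snapshot is not None and sender in self.pending:
                    self.channel_log[sender].append(msg)   # in-flight message

        def _record(self, network):
            self.snapshot = self.state           # freeze local state
            self.pending = set(self.peers)
            for p in self.peers:                 # flood markers on all channels
                network[p].receive(self.name, MARKER, network)

    # Tiny two-node demo
    network = {}
    a = Node("a", ["b"]); b = Node("b", ["a"])
    network.update({"a": a, "b": b})
    a.state, b.state = 5, 7
    a.start_snapshot(network)
    print(a.snapshot, b.snapshot, a.channel_log, b.channel_log)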

  5. A computer software system for integration and analysis of grid-based remote sensing data with other natural resource data. Remote Sensing Project

    Science.gov (United States)

    Tilmann, S. E.; Enslin, W. R.; Hill-Rowley, R.

    1977-01-01

    A computer-based information system is described that is designed to assist in the integration of commonly available spatial data for regional planning and resource analysis. The Resource Analysis Program (RAP) provides a variety of analytical and mapping phases for single-factor or multi-factor analyses. The unique analytical and graphic capabilities of RAP are demonstrated with a study conducted in Windsor Township, Eaton County, Michigan. Soil, land cover/use, topographic and geological maps were used as a database to develop an eleven-map portfolio. The major themes of the portfolio are land cover/use, non-point water pollution, waste disposal, and ground water recharge.
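
    The multi-factor analysis that RAP performs on gridded data can be illustrated with a simple weighted raster overlay, as in the numpy sketch below; the layers, weights, and class breaks are invented for illustration and are not taken from the Windsor Township study.

    # Weighted grid-overlay sketch: combine co-registered factor rasters into a
    # single suitability surface (layers and weights are invented placeholders).
    import numpy as np

    rows, cols = 100, 100
    # Factor rasters scored 0-1 (e.g. soil suitability, slope, land-cover score).
    soil = np.random.rand(rows, cols)
    slope = np.random.rand(rows, cols)
    landcover = np.random.rand(rows, cols)

    weights = {"soil": 0.5, "slope": 0.3, "landcover": 0.2}
    suitability = (weights["soil"] * soil
                   + weights["slope"] * slope
                   + weights["landcover"] * landcover)

    # Classify into three planning classes for mapping.
    classes = np.digitize(suitability, bins=[0.33, 0.66])
    print("cells per class:", np.bincount(classes.ravel()))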

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  8. The efficacy of stereotactic body radiation therapy on huge hepatocellular carcinoma unsuitable for other local modalities

    International Nuclear Information System (INIS)

    Que, Jenny Y; Lin, Li-Ching; Lin, Kuei-Li; Lin, Chia-Hui; Lin, Yu-Wei; Yang, Ching-Chieh

    2014-01-01

    To evaluate the safety and efficacy of CyberKnife stereotactic body radiation therapy (SBRT) and its effect on survival in patients with unresectable huge hepatocellular carcinoma (HCC) unsuitable for other standard treatment options. Between 2009 and 2011, 22 patients with unresectable huge HCC (≧10 cm) were treated with SBRT. The dose ranged from 26 Gy to 40 Gy in five fractions. Overall survival (OS) and disease-progression-free survival (DPFS) were determined by Kaplan-Meier analysis. Tumor response and toxicities were also assessed. After a median follow-up of 11.5 months (range 2-46 months), an objective response was achieved in 86.3% (complete response (CR): 22.7% and partial response (PR): 63.6%). The 1-year local control rate was 55.56%. The 1-year OS was 50% and median survival was 11 months (range 2-46 months). In univariate analysis, Child-Pugh stage (p = 0.0056) and SBRT dose (p = 0.0017) were significant factors for survival. However, in multivariate analysis, SBRT dose (p = 0.0072) was the most significant factor, while Child-Pugh stage was of borderline significance (p = 0.0514). Acute toxicities were mild and well tolerated. This study showed that SBRT can be delivered safely to huge HCC and achieved substantial tumor regression and survival. The results suggest this technique should be considered a salvage treatment. However, local and regional recurrence remains the major cause of failure. Further studies of the combination of SBRT and other treatment modalities may be reasonable

  9. A case of huge neurofibroma expanding extra- and intracranially through the enlarged jugular foramen

    International Nuclear Information System (INIS)

    Hanakita, Junya; Imataka, Kiyoharu; Handa, Hajime

    1984-01-01

    The surgical approach to the jugular foramen has been considered very difficult and troublesome because of its location, in which important structures, such as the internal jugular vein, internal carotid artery and lower cranial nerves, converge in a narrow deep space. A case of a huge neurofibroma, which extended from the tentorium cerebelli through the dilated jugular foramen to the level of the vertebral body of C3, is presented. A 12-year-old girl was admitted with complaints of visual disturbance and palsy of the V-XII cranial nerves on the left side. Plain skull films showed prominent widening of the cranial sutures and enlargement of the sella turcica. Horizontal CT scan with contrast showed symmetrical ventricular dilatation and a heterogeneously enhanced mass, which was situated mainly in the left CP angle. Coronal CT scan with contrast revealed a huge mass and an enlarged jugular foramen, through which the tumor extended to the level of the vertebral body of C3. Occlusion of the sigmoid sinus and the internal jugular vein on the left side was noticed on vertebral angiography. A two-stage approach, the first stage for removal of the intracranial tumor and the second for the extracranial tumor, was performed for this huge tumor. Several authors have reported excellent surgical approaches for tumors situated in the jugular foramen. By our approach, modifying Gardner's original one, a wide operative field was obtained and the tumor around the jugular foramen was removed with success. Our approach to the jugular foramen is described with illustrations. (author)

  10. En bloc resection of huge cemento-ossifying fibroma of mandible: avoiding lower lip split incision.

    Science.gov (United States)

    Ayub, Tahera; Katpar, Shahjahan; Shafique, Salman; Mirza, Talat

    2011-05-01

    Cemento-ossifying fibroma (COF) is an osteogenic benign neoplasm affecting the jaws and other craniofacial bones. It commonly presents as a progressively slow-growing pathology, which can sometimes attain an enormous size, causing facial deformity. A case of a huge cemento-ossifying fibroma, appearing as a mandibular dumbbell tumour in a male patient, is documented, which caused massive bone destruction and deformity. It was surgically removed by performing en bloc resection of the mandible while avoiding the lower lip split incision technique, thereby maintaining his normal facial appearance.

  11. Huge residual resistivity in the quantum critical region of CeAgSb2

    International Nuclear Information System (INIS)

    Nakashima, Miho; Kirita, Shingo; Asai, Rihito; Kobayashi, Tatsuo C; Okubo, Tomoyuki; Yamada, Mineko; Thamizhavel, Arumugam; Inada, Yoshihiko; Settai, Rikio; Galatanu, Andre; Yamamoto, Etsuji; Ebihara, Takao; Onuki, Yoshichika

    2003-01-01

    We have studied the effect of pressure on the electrical resistivity of a high-quality single crystal of CeAgSb2 which has a small net ferromagnetic moment of 0.4 μB/Ce. The magnetic ordering temperature Tord = 9.7 K decreases with increasing pressure p and disappears at a critical pressure pc ≅ 3.3 GPa. The residual resistivity, which is close to zero up to 3 GPa, increases steeply above 3 GPa, reaching 55 μΩ cm at pc. A huge residual resistivity is found to appear when the magnetic order disappears. (letter to the editor)

  12. Acute abdomen in early pregnancy caused by torsion of bilateral huge multiloculated ovarian cysts

    OpenAIRE

    Sathiyakala Rajendran; Suthanthira Devi

    2015-01-01

    The association of pregnancy and torsion of bilateral huge benign ovarian cysts is rare. We report a case of a multigravida at 13 weeks of pregnancy presenting with acute onset of lower abdominal pain. Ultrasound revealed bilateral multiloculated ovarian cysts of size 10×10 cm on the right side and 15×10 cm on the left side with evidence of torsion, and a single live intrauterine fetus of gestational age 13 weeks 4 days. Emergency laparotomy was done with vaginal susten 200 mg as perioperative tocolysis. ...

  13. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  18. DPSO resource load balancing algorithm in a cloud computing environment

    Institute of Scientific and Technical Information of China (English)

    冯小靖; 潘郁

    2013-01-01

    Load balancing is one of the hot issues in cloud computing research. A discrete particle swarm optimization (DPSO) algorithm is used to study load balancing in a cloud computing environment. According to the dynamic change of resource demand in the cloud environment, and because the approach places low requirements on resource node servers, each resource management node is treated as a node of the topological structure, a corresponding resource-task allocation model is established, and the model is solved by DPSO to achieve resource load balancing. Verification shows that the algorithm improves resource utilization and the load balance of cloud computing resources.
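
    A compact discrete-PSO sketch for the resource-task assignment described above is given below; each particle encodes a task-to-node mapping, fitness is the load of the busiest node (to be minimised), and the update rule, swarm size, and task lengths are generic illustrative choices rather than the authors' exact formulation.

    # Discrete-PSO sketch for task-to-node assignment; fitness is the load of the
    # busiest node. Update rule and parameters are generic illustrative choices.
    import random

    TASKS = [4, 2, 7, 3, 5, 1, 6]      # task lengths (placeholders)
    NODES = 3                          # number of resource nodes
    SWARM, ITERS = 20, 100

    def fitness(assign):               # load of the most loaded node (lower is better)
        loads = [0] * NODES
        for t, n in zip(TASKS, assign):
            loads[n] += t
        return max(loads)

    def move(particle, pbest, gbest, p_pbest=0.3, p_gbest=0.3, p_rand=0.05):
        # Discrete "velocity": each position copies pbest/gbest or mutates randomly.
        new = []
        for i, cur in enumerate(particle):
            r = random.random()
            if r < p_pbest:
                new.append(pbest[i])
            elif r < p_pbest + p_gbest:
                new.append(gbest[i])
            elif r < p_pbest + p_gbest + p_rand:
                new.append(random.randrange(NODES))
            else:
                new.append(cur)
        return new

    swarm = [[random.randrange(NODES) for _ in TASKS] for _ in range(SWARM)]
    pbests = list(swarm)
    gbest = min(swarm, key=fitness)
    for _ in range(ITERS):
        for i, particle in enumerate(swarm):
            swarm[i] = move(particle, pbests[i], gbest)
            if fitness(swarm[i]) < fitness(pbests[i]):
                pbests[i] = swarm[i]
        gbest = min(pbests, key=fitness)

    print("best assignment:", gbest, "max node load:", fitness(gbest))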

  19. Surgical resection of a huge cemento-ossifying fibroma in skull base by intraoral approach.

    Science.gov (United States)

    Cheng, Xiao-Bing; Li, Yun-Peng; Lei, De-Lin; Li, Xiao-Dong; Tian, Lei

    2011-03-01

    Cemento-ossifying fibroma, also known as ossifying fibroma, usually occurs in the mandible and less commonly in the maxilla. The huge example in the skull base is even rare. We present a case of a huge cemento-ossifying fibroma arising below the skull base of a 30-year-old woman patient. Radiologic investigations showed a giant, lobulated, heterogeneous calcified hard tissue mass, which is well circumscribed and is a mixture of radiolucent and radiopaque, situated at the rear of the right maxilla to the middle skull base. The tumor expands into the right maxillary sinus and the orbital cavity, fusing with the right maxilla at the maxillary tuberosity and blocking the bilateral choanas, which caused marked proptosis and blurred vision. The tumor was resected successfully by intraoral approach, and pathologic examination confirmed the lesion to be a cemento-ossifying fibroma. This case demonstrates that cemento-ossifying fibroma in the maxilla, not like in the mandible, may appear more aggressive because the extensive growth is unimpeded by anatomic obstacles and that the intraoral approach can be used to excise the tumor in the skull base.

  20. Huge natural gas reserves central to capacity work, construction plans in Iran

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    Questions about oil production capacity in Iran tend to mask the country's huge potential as a producer of natural gas. Iran is second only to Russia in gas reserves, which National Iranian Gas Co. estimates at 20.7 trillion cu m. Among hurdles to Iran's making greater use of its rich endowment of natural gas are where and how to sell gas not used inside the country. The marketing logistics problem is common to other Middle East holders of gas reserves and a reason behind the recent proliferation of proposals for pipeline and liquefied natural gas schemes targeting Europe and India. But Iran's challenges are greater than most in the region. Political uncertainties and Islamic rules complicate long-term financing of transportation projects and raise questions about security of supply. As a result, Iran has remained mostly in the background of discussions about international trade of Middle Eastern gas. The country's huge gas reserves, strategic location, and existing transport infrastructure nevertheless give it the potential to be a major gas trader if the other issues can be resolved. The paper discusses oil capacity plans, gas development, gas injection for enhanced oil recovery, proposals for exports of gas, and gas pipeline plans

  1. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  2. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  4. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  5. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  7. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    Science.gov (United States)

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in sequencing techniques have greatly accelerated the production of huge sequence datasets. This presents immediate challenges for database maintenance at datacenters and additional computational challenges in data mining and sequence analysis. Together these place a significant burden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, virtualization of resources and computation on a pay-as-you-go basis (together termed "cloud computing") has recently emerged. The collective resources of the datacenter, both hardware and software, can be made publicly available (a public cloud), with the resources provided in a virtual mode to clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment corresponding to the computational and data-storage needs of the user is created in the cloud via the internet, the task is performed, the results are transmitted to the user, and the environment is finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection, are discussed with reference to traditional workflows.
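
    As a rough illustration of the provision-run-retrieve-delete cycle described above, the sketch below uses a stand-in CloudProvider class; the class, its methods, and the resource sizes are hypothetical placeholders rather than any real vendor SDK.

```python
class CloudProvider:
    """Hypothetical stand-in for a public-cloud SDK (e.g. what Amazon, Google or
    Joyent clients expose); none of these method names are real API calls."""

    def create_environment(self, cpus, memory_gb, storage_gb):
        print(f"provisioning environment: {cpus} vCPU, {memory_gb} GB RAM, {storage_gb} GB disk")
        return {"id": "env-001"}                    # pretend environment handle

    def run(self, env, command, inputs):
        print(f"running '{command}' on {env['id']} with {len(inputs)} input files")
        return ["alignment.sam"]                    # pretend result artifacts

    def download(self, env, artifacts):
        return {name: b"..." for name in artifacts}

    def delete_environment(self, env):
        print(f"deleting {env['id']} (pay-as-you-go billing stops here)")

def analyse_reads(read_files):
    """Provision a sized environment, run the task, fetch results, then tear down."""
    cloud = CloudProvider()
    env = cloud.create_environment(cpus=32, memory_gb=128, storage_gb=500)
    try:
        artifacts = cloud.run(env, "align-reads --ref hg19", read_files)
        return cloud.download(env, artifacts)       # results go back to the user
    finally:
        cloud.delete_environment(env)               # environment removed afterwards

if __name__ == "__main__":
    analyse_reads(["sample1.fastq", "sample2.fastq"])
```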

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  9. Dynamic allocation of computing resources for business-oriented objects

    Institute of Scientific and Technical Information of China (English)

    尚海鹰

    2017-01-01

    This paper summarizes development trends in computer system infrastructure. For the business scenarios of Internet-era transaction-processing systems, we analyze mainstream methods for computing-resource allocation and load balancing. To further improve transaction-processing efficiency and meet flexible service-level-agreement requirements, we introduce a method for dynamically allocating computing resources to business objects. Based on benchmark values for the processing performance of the actual application platform, a computing-resource allocation plan and a dynamic adjustment strategy are derived for each business object. Tests with large data volumes from an actual city-card clearing business achieved the expected results.
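
    The abstract does not disclose the allocation algorithm itself, so the sketch below only illustrates the general idea: derive an initial plan from per-object performance benchmarks, then shift capacity toward business objects whose backlog grows. The object names, benchmark values, and thresholds are all hypothetical.

```python
def initial_plan(benchmarks, total_units):
    """benchmarks: business object -> transactions/sec that one compute unit handles.
    Slower objects are given proportionally more of the `total_units` capacity."""
    weight = {obj: 1.0 / tps for obj, tps in benchmarks.items()}
    scale = total_units / sum(weight.values())
    return {obj: max(1, round(w * scale)) for obj, w in weight.items()}

def adjust(plan, backlog, step=1, threshold=1000):
    """Naive dynamic adjustment: every object whose backlog exceeds `threshold`
    receives `step` extra units, taken from the object with the smallest backlog."""
    plan = dict(plan)
    hot = [obj for obj, b in backlog.items() if b > threshold]
    donor = min(plan, key=lambda obj: backlog.get(obj, 0))
    for obj in hot:
        if obj != donor and plan[donor] > 1:
            plan[donor] -= step
            plan[obj] += step
    return plan

if __name__ == "__main__":
    benchmarks = {"clearing": 800.0, "settlement": 1200.0, "reporting": 400.0}
    plan = initial_plan(benchmarks, total_units=24)
    print("initial plan:", plan)
    plan = adjust(plan, backlog={"clearing": 5200, "settlement": 300, "reporting": 700})
    print("after adjustment:", plan)
```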

  10. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    Science.gov (United States)

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  11. Graduate Enrollment Increases in Science and Engineering Fields, Especially in Engineering and Computer Sciences. InfoBrief: Science Resources Statistics.

    Science.gov (United States)

    Burrelli, Joan S.

    This brief describes graduate enrollment increases in the science and engineering fields, especially in engineering and computer sciences. Graduate student enrollment is summarized by enrollment status, citizenship, race/ethnicity, and fields. (KHR)

  12. Sleeping money: investigating the huge surpluses of social health insurance in China.

    Science.gov (United States)

    Liu, JunQiang; Chen, Tao

    2013-12-01

    The spread of social health insurance (SHI) worldwide poses challenges for fledgling public administrators. Inefficiency, misuse and even corruption threaten the stewardship of those newly established health funds. This article examines a tricky situation faced by China's largest SHI program: the basic health insurance (BHI) scheme for urban employees. BHI accumulated a 406 billion yuan surplus by 2009, although the reimbursement level was still low. Using a provincial-level panel database, we find that the huge BHI surpluses are related to the (temporarily) decreasing dependency ratio, the steady growth of average wages, the extension of BHI coverage, and progress in social insurance agency building. The financial situation of local governments and the level of risk pooling also matter. In addition, medical savings accounts account for about one third of the BHI surpluses. Although these findings are not causal, lessons drawn from this study can help to improve the governance and performance of SHI programs in developing countries.

  13. Subcortical heterotopia appearing as huge midline mass in the newborn brain.

    Science.gov (United States)

    Fukumura, Shinobu; Watanabe, Toshihide; Kimura, Sachiko; Ochi, Satoko; Yoshifuji, Kazuhisa; Tsutsumi, Hiroyuki

    2016-02-01

    We report the case of a 2-year-old boy who showed a huge midline mass in the brain at prenatal assessment. After birth, magnetic resonance imaging (MRI) revealed a conglomerate mass with an infolded microgyrus at the midline, which was suspected as a midline brain-in-brain malformation. MRI also showed incomplete cleavage of his frontal cortex and thalamus, consistent with lobar holoprosencephaly. The patient underwent an incisional biopsy of the mass on the second day of life. The mass consisted of normal central nervous tissue with gray and white matter, representing a heterotopic brain. The malformation was considered to be a subcortical heterotopia. With maturity, focal signal changes and decreased cerebral perfusion became clear on brain imaging, suggesting secondary glial degeneration. Coincident with these MRI abnormalities, the child developed psychomotor retardation and severe epilepsy focused on the side of the intracranial mass.

  14. Huge pelvic parachordoma: fine needle aspiration cytology and histological differential diagnosis

    Directory of Open Access Journals (Sweden)

    Mona A. Kandil

    2012-10-01

    Parachordoma is an extremely rare soft tissue tumor of unknown lineage that develops most often on the extremities. Only 2 cases have been reported as pelvic parachordoma. A 46-year-old Egyptian woman with a huge painful pelvic mass was found to have a parachordoma with an ectopic pelvic right kidney. There is only one report in the literature of fine needle aspiration cytology in this setting. The microscopic picture of parachordoma is not new to pathologists, but the gross picture of this rare tumor has not previously been published, not even in the World Health Organization classification of soft tissue tumors. Diagnosis was confirmed by immunohistochemistry. The patient is in good clinical condition without any evidence of recurrence or metastasis after 84 months of follow-up.

  15. Tiny Grains Give Huge Gains: Nanocrystal–Based Signal Amplification for Biomolecule Detection

    Science.gov (United States)

    Tong, Sheng; Ren, Binbin; Zheng, Zhilan; Shen, Han; Bao, Gang

    2013-01-01

    Nanocrystals, despite their tiny sizes, contain thousands to millions of atoms. Here we show that the large number of atoms packed in each metallic nanocrystal can provide a huge gain in signal amplification for biomolecule detection. We have devised a highly sensitive, linear amplification scheme by integrating the dissolution of bound nanocrystals and metal-induced stoichiometric chromogenesis, and demonstrated that signal amplification is fully defined by the size and atom density of nanocrystals, which can be optimized through well-controlled nanocrystal synthesis. Further, the rich library of chromogenic reactions allows implementation of this scheme in various assay formats, as demonstrated by the iron oxide nanoparticle linked immunosorbent assay (ILISA) and blotting assay developed in this study. Our results indicate that, owing to the inherent simplicity, high sensitivity and repeatability, the nanocrystal based amplification scheme can significantly improve biomolecule quantification in both laboratory research and clinical diagnostics. This novel method adds a new dimension to current nanoparticle-based bioassays. PMID:23659350

  16. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    Science.gov (United States)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient

  17. PRS: PERSONNEL RECOMMENDATION SYSTEM FOR HUGE DATA ANALYSIS USING PORTER STEMMER

    Directory of Open Access Journals (Sweden)

    T N Chiranjeevi

    2016-04-01

    Personal recommendation systems give better, preference-aware recommendations that satisfy users' personalized requirements; practical applications such as webpage preferences, sports video preferences, stock selection based on price, TV preferences, hotel and book choices, mobile phones, CDs and various other products now use recommender systems. The existing Pearson Correlation Coefficient (PCC) based user algorithm and the item-based algorithm using PCC are called UPCC and IPCC, respectively. These systems are based only on rating services and do not consider users' personal preferences; they simply give results based on ratings. As the size of the data increases, they return recommendations drawn from the top-rated services and miss most of the user's preferences. These are the main drawbacks of the existing systems: by giving the same results to every user based on rankings or rating services alone, they neglect user preferences and necessities. To address this problem we propose a new approach, a Personnel Recommendation System (PRS) for huge data analysis using the Porter stemmer. The proposed system provides a personalized service recommendation list and recommends the most useful services to the user, which increases the accuracy and efficiency of searching for better services. In particular, a set of suggestions or keywords is provided to indicate user preferences, and collaborative filtering together with the Porter stemmer algorithm is used to give suitable recommendations to the users. Broad experiments were conducted on a huge real-world database, and the outcome shows that the proposed personal recommender method considerably improves the precision and efficiency of the service recommender system over the KASR method. Because our approach mainly considers user preferences, it does not miss any of the data.
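
    As a small illustration of the stemming step, the sketch below uses NLTK's PorterStemmer to match stemmed user preference keywords against stemmed service descriptions. It is not the PRS or KASR implementation, the collaborative-filtering stage is omitted, and the service catalogue and keywords are invented for the example.

```python
# Requires: pip install nltk
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_set(text):
    """Lower-case, split on whitespace and stem, so 'Hotels' matches 'hotel'."""
    return {stemmer.stem(token) for token in text.lower().split()}

def recommend(user_keywords, services, top_k=3):
    """Rank services by the overlap between stemmed keywords and descriptions."""
    prefs = stem_set(" ".join(user_keywords))
    scored = [(len(prefs & stem_set(desc)), name) for name, desc in services.items()]
    return [name for score, name in sorted(scored, reverse=True)[:top_k] if score > 0]

if __name__ == "__main__":
    services = {                                  # hypothetical service catalogue
        "CityView Hotel": "affordable hotels near the city center with free parking",
        "SportsMax": "streaming of sports videos and live matches",
        "BookBarn": "discounted books and CDs shipped worldwide",
    }
    print(recommend(["hotel", "parking"], services))
```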

  18. Hydrogen-terminated mesoporous silicon monoliths with huge surface area as alternative Si-based visible light-active photocatalysts

    KAUST Repository

    Li, Ting; Li, Jun; Zhang, Qiang; Blazeby, Emma; Shang, Congxiao; Xu, Hualong; Zhang, Xixiang; Chao, Yimin

    2016-01-01

    Silicon-based nanostructures and their related composites have drawn tremendous research interest in solar energy storage and conversion. Mesoporous silicon with a huge surface area of 400-900 m² g⁻¹ developed by electrochemical etching exhibits

  19. The new technologies and the use of telematics resources in Scientific Education: a computational simulation in Physics Teaching

    Directory of Open Access Journals (Sweden)

    Antonio Jorge Sena dos Anjos

    2009-01-01

    This study presents a brief and panoramic critical view of the use of Information and Communication Technologies in Education, specifically in Science Education. The focus is centred on technological resources, emphasizing the use of programs suited to Physics teaching.

  20. Offloading Method for Efficient Use of Local Computational Resources in Mobile Location-Based Services Using Clouds

    Directory of Open Access Journals (Sweden)

    Yunsik Son

    2017-01-01

    With the development of mobile computing, location-based services (LBSs) have been developed to provide services based on location information through communication networks or the global positioning system. In recent years, LBSs have evolved into smart LBSs, which provide many services using only location information. These include basic services such as traffic, logistics, and entertainment services. However, a smart LBS may require relatively complicated operations, which may not be effectively performed by the mobile computing system. To overcome this problem, a computation offloading technique can be used to perform certain tasks of mobile devices in cloud and fog environments. Furthermore, mobile platforms exist that provide smart LBSs. The smart cross-platform is a solution based on a virtual machine (VM) that enables compatibility of content in various mobile and smart device environments. However, owing to the nature of the VM-based execution method, the execution performance is degraded compared to that of the native execution method. In this paper, we introduce a computation offloading technique that utilizes fog computing to improve the performance of VMs running on mobile devices. We applied the proposed method to smart devices with a smart VM (SVM) and an HTML5 SVM to compare their performances.
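
    An offloading decision of this kind often reduces to comparing the estimated local execution time with the transfer-plus-remote execution time. The sketch below illustrates only that comparison; the cost model, parameter names, and numbers are hypothetical and are not taken from the paper.

```python
def should_offload(cycles, input_bytes, local_mips, uplink_bps, fog_mips, rtt_s=0.02):
    """Offload when estimated remote time (round trip + upload + fog execution)
    beats estimated local execution time. All units are purely illustrative."""
    local_time = cycles / (local_mips * 1e6)
    remote_time = rtt_s + (input_bytes * 8) / uplink_bps + cycles / (fog_mips * 1e6)
    return remote_time < local_time

if __name__ == "__main__":
    # e.g. a route-ranking task: 2e9 instructions over 200 kB of location data
    print(should_offload(cycles=2e9, input_bytes=200_000,
                         local_mips=1_500, uplink_bps=20e6, fog_mips=20_000))
```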

  1. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    Kearney David

    2007-01-01

    We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.

  2. Research on uranium resource models. Part IV. Logic: a computer graphics program to construct integrated logic circuits for genetic-geologic models. Progress report

    International Nuclear Information System (INIS)

    Scott, W.A.; Turner, R.M.; McCammon, R.B.

    1981-01-01

    Integrated logic circuits were described as a means of formally representing genetic-geologic models for estimating undiscovered uranium resources. The logic circuits are logical combinations of selected geologic characteristics judged to be associated with particular types of uranium deposits. Each combination takes on a value which corresponds to the combined presence, absence, or don't-know states of the selected characteristics within a specified geographic cell. Within each cell, the output of the logic circuit is taken as a measure of the favorability of occurrence of an undiscovered deposit of the type being considered. In this way, geological, geochemical, and geophysical data are incorporated explicitly into potential uranium resource estimates. The present report describes how integrated logic circuits are constructed by use of a computer graphics program. A user's guide is also included.
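
    The report combines presence, absence, and don't-know states of geologic characteristics into a per-cell favorability value. A minimal sketch of such three-valued logic is shown below; the example circuit and the characteristic names are hypothetical, and the graphics program itself is not reproduced.

```python
def and3(a, b):
    """Three-valued AND over True / False / None (None = don't know)."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def or3(a, b):
    """Three-valued OR over True / False / None."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def favorability(cell):
    """Hypothetical circuit: host rock AND (reductant OR uranium source)."""
    return and3(cell["host_rock"], or3(cell["reductant"], cell["uranium_source"]))

if __name__ == "__main__":
    cells = [
        {"host_rock": True,  "reductant": None,  "uranium_source": True},
        {"host_rock": True,  "reductant": False, "uranium_source": None},
        {"host_rock": False, "reductant": True,  "uranium_source": True},
    ]
    for i, cell in enumerate(cells):
        print(f"cell {i}: favorability = {favorability(cell)}")
```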

  3. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  4. Winning the Popularity Contest: Researcher Preference When Selecting Resources for Civil Engineering, Computer Science, Mathematics and Physics Dissertations

    Science.gov (United States)

    Dotson, Daniel S.; Franks, Tina P.

    2015-01-01

    More than 53,000 citations from 609 dissertations published at The Ohio State University between 1998-2012 representing four science disciplines--civil engineering, computer science, mathematics and physics--were examined to determine what, if any, preferences or trends exist. This case study seeks to identify whether or not researcher preferences…

  5. A Framework for Safe Composition of Heterogeneous SOA Services in a Pervasive Computing Environment with Resource Constraints

    Science.gov (United States)

    Reyes Alamo, Jose M.

    2010-01-01

    The Service Oriented Computing (SOC) paradigm defines services as software artifacts whose implementations are separated from their specifications. Application developers rely on services to simplify the design, reduce the development time and cost. Within the SOC paradigm, different Service Oriented Architectures (SOAs) have been developed.…

  6. Becoming Technosocial Change Agents: Intersectionality and Culturally Responsive Pedagogies as Vital Resources for Increasing Girls' Participation in Computing

    Science.gov (United States)

    Ashcraft, Catherine; Eger, Elizabeth K.; Scott, Kimberly A.

    2017-01-01

    Drawing from our two-year ethnography, we juxtapose the experiences of two cohorts in one culturally responsive computing program, examining how the program fostered girls' emerging identities as technosocial change agents. In presenting this in-depth and up-close exploration, we simultaneously identify conditions that both facilitated and limited…

  7. Linear equations and rap battles: how students in a wired classroom utilized the computer as a resource to coordinate personal and mathematical positional identities in hybrid spaces

    Science.gov (United States)

    Langer-Osuna, Jennifer

    2015-03-01

    This paper draws on the constructs of hybridity, figured worlds, and cultural capital to examine how a group of African-American students in a technology-driven, project-based algebra classroom utilized the computer as a resource to coordinate personal and mathematical positional identities during group work. Analyses of several vignettes of small group dynamics highlight how hybridity was established as the students engaged in multiple on-task and off-task computer-based activities, each of which drew on different lived experiences and forms of cultural capital. The paper ends with a discussion on how classrooms that make use of student-led collaborative work, and where students are afforded autonomy, have the potential to support the academic engagement of students from historically marginalized communities.

  8. A Huge Capital Drop with Compression of Femoral Vessels Associated with Hip Osteoarthritis

    Directory of Open Access Journals (Sweden)

    Tomoya Takasago

    2015-01-01

    A capital drop is a type of osteophyte at the inferomedial portion of the femoral head commonly observed in hip osteoarthritis (OA), secondary to developmental dysplasia. A capital drop itself is typically asymptomatic; however, symptoms can appear secondary to impingement against the acetabulum or to irritation of the surrounding tissues, such as nerves, vessels, and tendons. We present here a case of unilateral leg edema in a patient with hip OA, caused by a huge bone mass occurring at the inferomedial portion of the femoral head that compressed the femoral vessels. We diagnosed this bone mass as a capital drop secondary to hip OA after confirming, on the basis of a previous X-ray, that the mass had developed only after the age of 63 years. We performed early resection and total hip arthroplasty, since the patient's hip pain was due to both advanced hip OA and compression of the femoral vessels; moreover, we aimed to prevent venous thrombosis secondary to vascular compression, considering the patient's advanced age and potential risk of thrombosis. A large capital drop should be considered as a cause of vascular compression in cases of unilateral leg edema in OA patients.

  9. Crystal structure of Clostridium botulinum whole hemagglutinin reveals a huge triskelion-shaped molecular complex.

    Science.gov (United States)

    Amatsu, Sho; Sugawara, Yo; Matsumura, Takuhiro; Kitadokoro, Kengo; Fujinaga, Yukako

    2013-12-06

    Clostridium botulinum HA is a component of the large botulinum neurotoxin complex and is critical for its oral toxicity. HA plays multiple roles in toxin penetration in the gastrointestinal tract, including protection from the digestive environment, binding to the intestinal mucosal surface, and disruption of the epithelial barrier. At least two properties of HA contribute to these roles: the sugar-binding activity and the barrier-disrupting activity that depends on E-cadherin binding of HA. HA consists of three different proteins, HA1, HA2, and HA3, whose structures have been partially solved and are made up mainly of β-strands. Here, we demonstrate structural and functional reconstitution of whole HA and present the complete structure of HA of serotype B determined by x-ray crystallography at 3.5 Å resolution. This structure reveals whole HA to be a huge triskelion-shaped molecule. Our results suggest that whole HA is functionally and structurally separable into two parts: HA1, involved in recognition of cell-surface carbohydrates, and HA2-HA3, involved in paracellular barrier disruption by E-cadherin binding.

  10. Propranolol in Treatment of Huge and Complicated Infantile Hemangiomas in Egyptian Children

    Directory of Open Access Journals (Sweden)

    Basheir A. Hassan

    2014-01-01

    Background. Infantile hemangiomas (IHs) are the most common benign tumours of infancy. Propranolol has recently been reported to be a highly effective treatment for IHs. This study aimed to evaluate the efficacy and side effects of propranolol for treatment of complicated cases of IHs. Patients and Methods. This prospective clinical study included 30 children with huge or complicated IHs; their ages ranged from 2 months to 1 year. They were treated by oral propranolol. Treatment outcomes were clinically evaluated. Results. Superficial cutaneous hemangiomas began to respond to propranolol therapy within one to two weeks after the onset of treatment. The mean treatment period that was needed for the occurrence of complete resolution was 9.4 months. Treatment with propranolol was well tolerated and had few side effects. No rebound growth of the tumors was noted when propranolol dosing stopped except in one case. Conclusion. Propranolol is a promising treatment for IHs without obvious side effects. However, further studies with longer follow-up periods are needed.

  11. Propranolol in treatment of huge and complicated infantile hemangiomas in Egyptian children.

    Science.gov (United States)

    Hassan, Basheir A; Shreef, Khalid S

    2014-01-01

    Background. Infantile hemangiomas (IHs) are the most common benign tumours of infancy. Propranolol has recently been reported to be a highly effective treatment for IHs. This study aimed to evaluate the efficacy and side effects of propranolol for treatment of complicated cases of IHs. Patients and Methods. This prospective clinical study included 30 children with huge or complicated IHs; their ages ranged from 2 months to 1 year. They were treated by oral propranolol. Treatment outcomes were clinically evaluated. Results. Superficial cutaneous hemangiomas began to respond to propranolol therapy within one to two weeks after the onset of treatment. The mean treatment period that was needed for the occurrence of complete resolution was 9.4 months. Treatment with propranolol was well tolerated and had few side effects. No rebound growth of the tumors was noted when propranolol dosing stopped except in one case. Conclusion. Propranolol is a promising treatment for IHs without obvious side effects. However, further studies with longer follow-up periods are needed.

  12. Huge thermal conductivity enhancement in boron nitride – ethylene glycol nanofluids

    International Nuclear Information System (INIS)

    Żyła, Gaweł; Fal, Jacek; Traciak, Julian; Gizowska, Magdalena; Perkowski, Krzysztof

    2016-01-01

    This paper presents the results of experimental studies on the thermophysical properties of boron nitride (BN) plate-like particles in ethylene glycol (EG). Essentially, the studies were focused on the thermal conductivity of suspensions of these particles. The nanofluids were obtained with a two-step method (by dispersing BN particles in ethylene glycol) and their thermal conductivity was analyzed at various mass concentrations, up to 20 wt. %. Thermal conductivity was measured in the temperature range from 293.15 K to 338.15 K in 15 K steps. The measurements of the thermal conductivity of the nanofluids were performed in a system based on a device using the transient line heat source method. The studies have shown that the nanofluids' thermal conductivity increases with increasing fraction of nanoparticles. The results also showed that the thermal conductivity of the nanofluids changes very slightly with increasing temperature. - Highlights: • Huge thermal conductivity enhancement in BN-EG nanofluid was reported. • Thermal conductivity increases very slightly with increasing temperature. • Thermal conductivity increases linearly with the volume concentration of particles.

  13. A rare life-threatening disease: unilateral kidney compressed by huge chronic spontaneous retroperitoneal hemorrhage

    Directory of Open Access Journals (Sweden)

    Lu HY

    2018-03-01

    Hao-Yuan Lu,1,* Wei Wei,2,* Qi-Wei Chen,1,* Qing-Gui Meng,1 Gao-Hua Hu,1 Xian-Lin Yi,1,3 Xian-Zhong Bai1 1Department of Urology, Tumor Hospital of Guangxi Medical University and Guangxi Cancer Research Institute, Nanning 530021, China; 2Department of Radiology, Tumor Hospital of Guangxi Medical University and Guangxi Cancer Research Institute, Nanning 530021, China; 3Hubei Engineering Laboratory for Synthetic Microbiology, Wuhan Institute of Biotechnology, Wuhan 430075, China *These authors contributed equally to this work Objectives: To study an uncommon life-threatening disease, spontaneous retroperitoneal and perirenal hemorrhage. Case descriptions: A 69-year-old male presented with pain in the left waist and back of 1 month duration. A renal abscess was suspected on magnetic resonance imaging before the operation. The perirenal hematoma was evacuated surgically. In another case, the patient had a functional solitary left kidney compressed by a huge retroperitoneal mass, and uropenia appeared. Results: The first patient died of adult respiratory distress syndrome after surgery. The second patient died of cardiac insufficiency and pulmonary embolism on the second day after evacuation of the retroperitoneal hematoma. Conclusion: Conservative surgery, such as selective arterial embolization, is a reasonable approach in patients with chronic spontaneous retroperitoneal and perirenal space hemorrhage and with poor general condition. We strongly recommend drainage or interventional therapy, not major surgery, in patients in poor condition. Keywords: kidney, spontaneous, retroperitoneal, hemorrhage, surgery

  14. Huge thermal conductivity enhancement in boron nitride – ethylene glycol nanofluids

    Energy Technology Data Exchange (ETDEWEB)

    Żyła, Gaweł, E-mail: gzyla@prz.edu.pl [Department of Physics and Medical Engineering, Rzeszow University of Technology, Rzeszow, 35-905 (Poland)]; Fal, Jacek; Traciak, Julian [Department of Physics and Medical Engineering, Rzeszow University of Technology, Rzeszow, 35-905 (Poland)]; Gizowska, Magdalena; Perkowski, Krzysztof [Department of Nanotechnology, Institute of Ceramics and Building Materials, Warsaw, 02-676 (Poland)]

    2016-09-01

    This paper presents the results of experimental studies on the thermophysical properties of boron nitride (BN) plate-like particles in ethylene glycol (EG). Essentially, the studies were focused on the thermal conductivity of suspensions of these particles. The nanofluids were obtained with a two-step method (by dispersing BN particles in ethylene glycol) and their thermal conductivity was analyzed at various mass concentrations, up to 20 wt. %. Thermal conductivity was measured in the temperature range from 293.15 K to 338.15 K in 15 K steps. The measurements of the thermal conductivity of the nanofluids were performed in a system based on a device using the transient line heat source method. The studies have shown that the nanofluids' thermal conductivity increases with increasing fraction of nanoparticles. The results also showed that the thermal conductivity of the nanofluids changes very slightly with increasing temperature. - Highlights: • Huge thermal conductivity enhancement in BN-EG nanofluid was reported. • Thermal conductivity increases very slightly with increasing temperature. • Thermal conductivity increases linearly with the volume concentration of particles.

  15. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    International Nuclear Information System (INIS)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-01-01

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications
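
    The tuning loop itself can be pictured as a search over configurations at a fixed total core count. The toy sketch below shows only that structure; the configuration space, the stand-in run_benchmark function, and every number are assumptions rather than the authors' tuner.

```python
import random

TOTAL_CORES = 64

def run_benchmark(mpi_tasks, threads, aspect_ratio):
    """Pretend measurement: a real tuner would launch and time a short LBMHD-like run."""
    rng = random.Random(hash((mpi_tasks, threads, aspect_ratio)))
    return rng.uniform(1.0, 2.0) * (1 + abs(aspect_ratio - 1) * 0.1)

def autotune():
    best = None
    for mpi_tasks in (1, 2, 4, 8, 16, 32, 64):
        threads = TOTAL_CORES // mpi_tasks      # hold total hardware usage constant
        for aspect_ratio in (0.5, 1.0, 2.0, 4.0):
            t = run_benchmark(mpi_tasks, threads, aspect_ratio)
            if best is None or t < best[0]:
                best = (t, mpi_tasks, threads, aspect_ratio)
    return best

if __name__ == "__main__":
    t, mpi_tasks, threads, ar = autotune()
    print(f"fastest config: {mpi_tasks} MPI tasks x {threads} threads, aspect ratio {ar} ({t:.2f} s)")
```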

  16. New computational methodology for large 3D neutron transport problems

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    We present a new computational methodology, based on 3D characteristics method, dedicated to solve very large 3D problems without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we set up a new computing scheme that requires more CPU resources than the usual one, based on sweeps over large tracking files. The huge capacity of storage needed in some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (authors)

  17. Parallel high-performance grid computing: Capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency

    NARCIS (Netherlands)

    F.N. Kepper (Nick); R. Ettig (Ramona); F. Dickmann (Frank); R. Stehr (Rene); F.G. Grosveld (Frank); G. Wedemann (Gero); T.A. Knoch (Tobias)

    2010-01-01

    Especially in the life-science and the health-care sectors the huge IT requirements are imminent due to the large and complex systems to be analysed and simulated. Grid infrastructures play here a rapidly increasing role for research, diagnostics, and treatment, since they provide the

  18. Computational models can predict response to HIV therapy without a genotype and may reduce treatment failure in different resource-limited settings.

    Science.gov (United States)

    Revell, A D; Wang, D; Wood, R; Morrow, C; Tempelman, H; Hamers, R L; Alvarez-Uria, G; Streinu-Cercel, A; Ene, L; Wensing, A M J; DeWolf, F; Nelson, M; Montaner, J S; Lane, H C; Larder, B A

    2013-06-01

    Genotypic HIV drug-resistance testing is typically 60%-65% predictive of response to combination antiretroviral therapy (ART) and is valuable for guiding treatment changes. Genotyping is unavailable in many resource-limited settings (RLSs). We aimed to develop models that can predict response to ART without a genotype and evaluated their potential as a treatment support tool in RLSs. Random forest models were trained to predict the probability of response to ART (≤400 copies HIV RNA/mL) using the following data from 14 891 treatment change episodes (TCEs) after virological failure, from well-resourced countries: viral load and CD4 count prior to treatment change, treatment history, drugs in the new regimen, time to follow-up and follow-up viral load. Models were assessed by cross-validation during development, with an independent set of 800 cases from well-resourced countries, plus 231 cases from Southern Africa, 206 from India and 375 from Romania. The area under the receiver operating characteristic curve (AUC) was the main outcome measure. The models achieved an AUC of 0.74-0.81 during cross-validation and 0.76-0.77 with the 800 test TCEs. They achieved AUCs of 0.58-0.65 (Southern Africa), 0.63 (India) and 0.70 (Romania). Models were more accurate for data from the well-resourced countries than for cases from Southern Africa and India (P < 0.001), but not Romania. The models identified alternative, available drug regimens predicted to result in virological response for 94% of virological failures in Southern Africa, 99% of those in India and 93% of those in Romania. We developed computational models that predict virological response to ART without a genotype with comparable accuracy to genotyping with rule-based interpretation. These models have the potential to help optimize antiretroviral therapy for patients in RLSs where genotyping is not generally available.
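
    A minimal sketch of this kind of model is shown below, using scikit-learn's RandomForestClassifier and roc_auc_score on synthetic treatment-change episodes. The features are simplified stand-ins for the variables listed above and the data are generated purely to make the example runnable, so the printed AUC has no clinical meaning.

```python
# Requires: numpy, scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(4.5, 1.0, n),            # log10 viral load before the switch
    rng.normal(250, 120, n),            # CD4 count before the switch
    rng.integers(0, 2, (n, 6)).sum(1),  # crude count of drug classes used previously
    rng.integers(2, 5, n),              # number of drugs in the new regimen
    rng.normal(24, 8, n),               # weeks to follow-up viral load
])
# Synthetic outcome: response (<=400 copies/mL) is made more likely by a lower
# viral load, a higher CD4 count and less prior drug exposure.
logit = -0.8 * X[:, 0] + 0.004 * X[:, 1] - 0.3 * X[:, 2] + 0.2 * X[:, 3] + 3.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC on synthetic data: {auc:.2f}")
```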

  19. Development and simulation of the air-jack for emergencies such as a huge disaster; Kyujoyo eajakki no kaihatsu to sono simyureshon

    Energy Technology Data Exchange (ETDEWEB)

    Katsuyama, Kunihisa; Ogata, Yuji; Wada, Yuji [National Institute for Resources and Environment, Tsukuba (Japan)]; Hashizume, Kiyoshi; Nishida, Kenjiro [Nippon Kayaku Corp., Tokyo (Japan)]

    1999-02-28

    In a disaster as huge as the Kobe earthquake, every energy supply line is cut. Even if we want to help the victims, there is no energy available to move rescue machinery. Because collapsed houses are very heavy, machines are needed to remove the debris. Explosives store a great deal of energy in themselves, so an air-jack containing explosives was developed to lift collapsed material off trapped people. A simple air-jack was made and tested: it lifted a concrete block of 50 cm x 50 cm x 50 cm. A simulation of lifting the concrete block was carried out with the ANSYS program on a supercomputer. (author)

  20. Taming the big data tidal wave finding opportunities in huge data streams with advanced analytics

    CERN Document Server

    Franks, Bill

    2012-01-01

    You receive an e-mail. It contains an offer for a complete personal computer system. It seems like the retailer read your mind since you were exploring computers on their web site just a few hours prior…. As you drive to the store to buy the computer bundle, you get an offer for a discounted coffee from the coffee shop you are getting ready to drive past. It says that since you're in the area, you can get 10% off if you stop by in the next 20 minutes…. As you drink your coffee, you receive an apology from the manufacturer of a product that you complained about yesterday on your Facebook pa

  1. Computed Tomography (CT) -- Sinuses

    Medline Plus


  2. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. A recently developed digital tool, digital core (dCORE), and image microarray maker (iMAM) enable the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  3. Climate change adaptation accounting for huge uncertainties in future projections - the case of urban drainage

    Science.gov (United States)

    Willems, Patrick

    2015-04-01

    Hydrological design parameters, which are currently used in the guidelines for the design of urban drainage systems (Willems et al., 2013) have been revised, taking the Flanders region of Belgium as case study. The revision involved extrapolation of the design rainfall statistics, taking into account the current knowledge on future climate change trends till 2100. Uncertainties in these trend projections have been assessed after statistically analysing and downscaling by a quantile perturbation tool based on a broad ensemble set of climate model simulation results (44 regional + 69 global control-scenario climate model run combinations for different greenhouse gas scenarios). The impact results of the climate scenarios were investigated as changes to rainfall intensity-duration-frequency (IDF) curves. Thereafter, the climate scenarios and related changes in rainfall statistics were transferred to changes in flood frequencies of sewer systems and overflow frequencies of storage facilities. This has been done based on conceptual urban drainage models. Also the change in storage capacity required to exceed a given overflow return period, has been calculated for a range of return periods and infiltration or throughflow rates. These results were used on the basis of the revision of the hydraulic design rules of urban drainage systems. One of the major challenges while formulating these policy guidelines was the consideration of the huge uncertainties in the future climate change projections and impact assessments; see also the difficulties and pitfalls reported by the IWA/IAHR Joint Committee on Urban Drainage - Working group on urban rainfall (Willems et al., 2012). We made use of the risk concept, and found it a very useful approach to deal with the high uncertainties. It involves an impact study of the different climate projections, or - for practical reasons - a reduced set of climate scenarios tailored for the specific type of impact considered (urban floods in our

  4. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space

    OpenAIRE

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-01-01

    Motivation: UPGMA (average linking) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. Application: We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without ex...
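
    For contrast with the memory-constrained variant, the snippet below runs ordinary in-memory UPGMA via SciPy's average-linkage clustering; the full condensed distance matrix it builds is exactly the memory bottleneck that MC-UPGMA is designed to avoid. The data are random stand-ins, not protein features.

```python
# Requires: numpy, scipy
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
points = rng.normal(size=(100, 16))             # stand-in feature vectors

distances = pdist(points, metric="euclidean")   # O(n^2) memory: the UPGMA bottleneck
tree = linkage(distances, method="average")     # method="average" is UPGMA
labels = fcluster(tree, t=5, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```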

  5. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    Science.gov (United States)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework to allow execution of job payloads in a sandboxed context. It also allows process behavior monitoring, based on a machine learning approach, to detect intrusions even when new attack methods or zero-day vulnerabilities are exploited. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.

  6. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    International Nuclear Information System (INIS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework to allow execution of job payloads in a sandboxed context. It also allows process behavior monitoring, based on a machine learning approach, to detect intrusions even when new attack methods or zero-day vulnerabilities are exploited. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware. (paper)
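
    One way to realize the machine-learning part of such monitoring is one-class anomaly detection over per-job behavioural counters. The sketch below uses scikit-learn's IsolationForest on synthetic counters; the features, values, and thresholds are hypothetical, and the actual ALICE framework is not reproduced.

```python
# Requires: numpy, scikit-learn
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: syscalls/sec, outbound connections, child processes, files written
normal_jobs = np.column_stack([
    rng.normal(800, 100, 500),
    rng.poisson(2, 500),
    rng.poisson(3, 500),
    rng.poisson(20, 500),
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_jobs)

suspicious = np.array([[5000, 60, 40, 300]])    # e.g. a payload opening many connections
verdict = detector.predict(suspicious)[0]       # -1 flags an outlier
print("intrusion suspected" if verdict == -1 else "looks like a normal job")
```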

  7. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and computer parts have been espousing the green cause to help protect the environment from computer and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  8. Development of Resource Sharing System Components for AliEn Grid Infrastructure

    CERN Document Server

    Harutyunyan, Artem

    2010-01-01

    The problem of the resource provision, sharing, accounting and use represents a principal issue in the contemporary scientific cyberinfrastructures. For example, collaborations in physics, astrophysics, Earth science, biology and medicine need to store huge amounts of data (of the order of several petabytes) as well as to conduct highly intensive computations. The appropriate computing and storage capacities cannot be ensured by one (even very large) research center. The modern approach to the solution of this problem suggests exploitation of computational and data storage facilities of the centers participating in collaborations. The most advanced implementation of this approach is based on Grid technologies, which enable effective work of the members of collaborations regardless of their geographical location. Currently there are several tens of Grid infrastructures deployed all over the world. The Grid infrastructures of CERN Large Hadron Collider experiments - ALICE, ATLAS, CMS, and LHCb which are exploi...

  9. RELIGIOUS DIMENSION OF COMPUTER GAMES

    OpenAIRE

    Sukhov, Anton

    2017-01-01

    Modern computer games are huge virtual worlds that raise sophisticated social and even religious issues. The “external” aspect of the religious dimension of computer games focuses on the problem of the polysemantic relation of world religions (Judaism, Christianity, Islam, Buddhism) to computer games. The “inner” aspect represents the transformation of monotheistic and polytheistic religions within the virtual worlds, in view of the heterogeneity and genre differentiation of computer games (arcades, acti...

  10. Exploring Tradeoffs in Demand-Side and Supply-Side Management of Urban Water Resources Using Agent-Based Modeling and Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Lufthansa Kanta

    2015-11-01

    Full Text Available Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn, influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger: (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir; and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for the Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
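
    To make the decision variables concrete, the following toy simulation (not the study's model; all thresholds, demands and inflows are invented) shows how reservoir storage levels can trigger both the supply-side response (extra inter-basin pumping) and the demand-side response (drought-stage restrictions on outdoor use) described above.

```python
# Toy reservoir balance with the two triggers described in the abstract:
# low storage triggers inter-basin pumping; lower storage triggers a drought
# stage that restricts residential outdoor use. All numbers are hypothetical.
def simulate(days, storage, capacity, base_demand, inflow,
             pump_trigger=0.6, drought_trigger=0.4,
             pump_volume=50.0, outdoor_share=0.3):
    pumping_events = 0
    restricted_days = 0
    for _ in range(days):
        demand = base_demand
        if storage < drought_trigger * capacity:
            demand -= outdoor_share * base_demand   # drought stage in force
            restricted_days += 1
        if storage < pump_trigger * capacity:
            storage += pump_volume                  # inter-basin transfer
            pumping_events += 1
        storage = min(capacity, storage + inflow - demand)
    return storage, pumping_events, restricted_days

print(simulate(days=365, storage=800.0, capacity=1000.0,
               base_demand=10.0, inflow=7.0))
```

    A multi-objective search over the two trigger levels would then expose the tradeoff the study explores: raising the pumping trigger lowers the number of restricted days but increases pumping cost, and vice versa.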

  11. Huge increase in gas phase nanoparticle generation by pulsed direct current sputtering in a reactive gas admixture

    Science.gov (United States)

    Polonskyi, Oleksandr; Peter, Tilo; Mohammad Ahadi, Amir; Hinz, Alexander; Strunskus, Thomas; Zaporojtchenko, Vladimir; Biederman, Hynek; Faupel, Franz

    2013-07-01

    Using reactive DC sputtering in a gas aggregation cluster source, we show that pulsed discharge gives rise to a huge increase in deposition rate of nanoparticles by more than one order of magnitude compared to continuous operation. We suggest that this effect is caused by an equilibrium between slight target oxidation (during "time-off") and subsequent sputtering of Ti oxides (sub-oxides) at "time-on" with high power impulse.

  12. Reconstruction of juxta-articular huge defects of distal femur with vascularized fibular bone graft and Ilizarov's distraction osteogenesis.

    Science.gov (United States)

    Lai, Davy; Chen, Chuan-Mu; Chiu, Fang-Yao; Chang, Ming-Chau; Chen, Tain-Hsiung

    2007-01-01

    We evaluate the effect of reconstructing huge defects (mean, 15.8 cm) of the distal femur with Ilizarov's distraction osteogenesis and free twin-barreled vascularized fibular bone graft (TVFG). We retrospectively reviewed a consecutive series of five patients with distal femoral fractures with huge defects and infection that were treated by Ilizarov's distraction osteogenesis. After radical debridement, two of the five cases had free TVFG and monolocal distraction osteogenesis, and another two cases had multilocal distraction osteogenesis with knee fusion because of loss of the joint congruity. The other case, with a floating knee injury, had bilocal distraction osteogenesis and a preserved knee joint. The mean defect of the distal femur was 15.8 cm (range, 14-18 cm) in length. The mean length of distraction osteogenesis by Ilizarov's apparatus was 8.2 cm. The mean length of TVFG was 8 cm. The average duration from application of Ilizarov's apparatus to achievement of bony union was 10.2 months (range, 8-13 months). At the end of the follow-up, ranges of motion of three knees were 0 to 45 degrees, 0 to 60 degrees, and 0 to 90 degrees. Two cases had knee arthrodesis with bony fusion because of loss of the joint congruity. There were no leg-length discrepancies in any of the five patients. In addition, three patients had pin tract infections and one case had a 10 degree varus deformity of the femur. Juxta-articular huge defects (>10 cm) of the distal femur remain a challenge to orthopedic surgeons. Ilizarov's technique provides the capability to maintain stability, eradicate infection, restore leg length, and to perform adjuvant reconstructive procedures easily. In this study, we found that combining Ilizarov's distraction osteogenesis with TVFG results in improved outcomes for patients with injuries such as supracondylar or intercondylar infected fractures or nonunion of the distal femur with huge bone defects.

  13. Hypointensity on postcontrast MR imaging from compression of the sacral promontory in enlarged uterus with huge leiomyoma and adenomyosis

    International Nuclear Information System (INIS)

    Uotani, Kensuke; Monzawa, Shuichi; Adachi, Shuji; Takemori, Masayuki; Kaji, Yasushi; Sugimura, Kazuro

    2007-01-01

    In patients with huge leiomyoma and with adenomyosis of the uterus, a peculiar area of hypointensity was occasionally observed on postcontrast magnetic resonance (MR) imaging in the dorsal portion of the enlarged uterus near the sacral promontory. We describe the imaging characteristics of these MR findings and correlate them with histopathological findings to examine whether the areas represent specific pathological changes. Ten patients with huge leiomyomas and two with huge adenomyotic lesions whose imaging revealed the hypointensity were enrolled. All had enlarged uteri that extended beyond the sacral promontory. MR findings of the hypointense areas were evaluated and correlated with histopathological findings in five patients with leiomyoma and two with adenomyosis who had hysterectomy. The ten patients with leiomyoma showed flare-shaped hypointensity arising from the dorsal surface of the uterine body that extended deep into the tumor. The base of the hypointense areas was narrow in five patients with intramural leiomyoma and broad in five with subserosal leiomyoma. Two patients with adenomyosis showed nodular-shaped areas of hypointensity in front of the sacral promontory. Precontrast T1- and T2-weighted MR images showed no signal abnormalities in the portions corresponding to the hypointensity in any of the 12 patients. Pathological examinations showed no specific findings in the portions corresponding to the hypointensity in the seven patients who had hysterectomy. The areas of hypointensity may represent functional changes, such as decreased localized blood flow caused by compression of the sacral promontory. (author)

  14. The causes and the nursing interventions of the complications due to repeated embolization therapy for huge cerebral arteriovenous malformations

    International Nuclear Information System (INIS)

    Sun Lingfang; Sun Ge

    2010-01-01

    Objective: To investigate the causes of the complications that occurred after repeated embolization therapy for huge cerebral arteriovenous malformations and to discuss their nursing interventions. Methods: A total of 54 embolization procedures were performed in 17 patients with huge cerebral arteriovenous malformations. The clinical data were retrospectively analyzed. The causes of complications were carefully examined and the preventive measures were discussed. The prompt and necessary nursing interventions were formulated in order to prevent the complications or serious consequences. Results: Among the total 17 patients, one patient gave up the treatment because of a cerebral hemorrhage which occurred two months after receiving three embolization treatments. One patient experienced cerebral vascular spasm during the procedure, which was relieved after antispasmodic medication and no neurological deficit was left behind. Two patients developed transient dizziness and headache, which were alleviated spontaneously. One patient presented with nervousness, fear and irritability, which made it hard for him to cooperate with the operation, so basal intravenous anesthesia was employed. No complications occurred in the remaining cases. Conclusion: Predictive nursing interventions for the prevention of complications are very important for successful repeated embolization therapy for huge cerebral arteriovenous malformations, which will ensure that the patients can get the best treatment and the complications can be avoided. (authors)

  15. [Radical Resection of a Huge Gastrointestinal Stromal Tumor of the Stomach Following Neoadjuvant Chemotherapy with Imatinib - A Case Report].

    Science.gov (United States)

    Hiraki, Yoko; Kato, Hiroaki; Shiraishi, Osamu; Tanaka, Yumiko; Iwama, Mitsuru; Yasuda, Atsushi; Shinkai, Masayuki; Kimura, Yutaka; Imano, Motohiro; Imamoto, Haruhiko; Yasuda, Takushi

    2017-11-01

    The usefulness and safety of imatinib for neoadjuvant chemotherapy for resectable gastrointestinal stromal tumor (GIST) has not been established. We report a case of a huge GIST of the stomach that was safely resected following preoperative imatinib therapy. A 69-year-old man was hospitalized with abdominal fullness that had increased rapidly over the preceding month. A CT scan showed a huge tumor containing solid and cystic components, accompanied by an extra-wall nodule. The tumor was strongly suspected to originate from the stomach, and EUS-FNA revealed GIST. We diagnosed GIST of the stomach and initiated preoperative adjuvant chemotherapy with imatinib because, without prior chemotherapy, there was a risk of rupture of the tumor capsule and of composite resection of other organs. After administration of imatinib 400 mg/day for 6 months, the solid component had decreased in size and its activity on PET-CT had declined, but the size of the cystic component was unchanged and the patient's complaint of fullness was not reduced. Then, after a one-week cessation of imatinib, we performed surgical removal of the tumor with partial gastrectomy, without surgical complications during or after the operation. Imatinib was resumed 2 weeks postoperatively, and 1 year and 8 months have passed since the operation without recurrence. Neoadjuvant chemotherapy with imatinib has the potential to become an important therapeutic option for the treatment of huge GISTs.

  16. Effects of Huge Earthquakes on Earth Rotation and the Length of Day

    Directory of Open Access Journals (Sweden)

    Changyi Xu

    2013-01-01

    Full Text Available We calculated the co-seismic Earth rotation changes for several typical great earthquakes since 1960 based on Dahlen's analytical expression of the Earth inertia moment change, the excitation functions of polar motion, and the variation in the length of day (ΔLOD). Then, we derived a mathematical relation between polar motion and earthquake parameters, to prove that the amplitude of polar motion is independent of longitude. Because the analytical expression of Dahlen's theory is useful to theoretically estimate rotation changes by earthquakes having different seismic parameters, we show results for polar motion and ΔLOD for various types of earthquakes in a comprehensive manner. The modeled results show that the seismic effect on the Earth's rotation decreases gradually with increased latitude if other parameters are unchanged. The Earth's rotational change is symmetrical for a 45° dip angle and the maximum changes appear at the equator and poles. Earthquakes at a medium dip angle and low latitudes produce large rotation changes. As an example, we calculate the polar motion and ΔLOD caused by the 2011 Tohoku-Oki Earthquake using two different fault models. Results show that a fine slip fault model is useful to compute co-seismic Earth rotation change. The obtained results indicate that Dahlen's method gives good approximations for computation of co-seismic rotation changes, but there are some differences if one considers detailed fault slip distributions. Finally, we analyze and discuss the co-seismic Earth rotation change signal using GRACE data, showing that such a signal is hard to detect at present, but it might be detected under some conditions. Numerical results of this study will serve as a good indicator to check whether satellite observations such as GRACE can detect a seismic rotation change when a great earthquake occurs.
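
    For orientation, the angular-momentum bookkeeping behind such ΔLOD estimates can be written compactly; this is a textbook-level sketch, not Dahlen's full analytical expression, and the polar-motion relation is only schematic.

```latex
% Axial angular momentum of the solid Earth is conserved to first order,
% C\,\Omega = \mathrm{const}, so a co-seismic change \Delta C_{33} of the
% axial moment of inertia maps into a length-of-day change:
\[
  \frac{\Delta \mathrm{LOD}}{\mathrm{LOD}} = \frac{\Delta C_{33}}{C},
  \qquad
  \Delta \mathrm{LOD} \approx \mathrm{LOD}\,\frac{\Delta C_{33}}{C}.
\]
% Polar motion, by contrast, is excited by the off-diagonal inertia changes,
% schematically
\[
  \chi \;\propto\; \frac{\Delta I_{13} + i\,\Delta I_{23}}{C - A},
\]
% whose magnitude does not depend on the source longitude (longitude enters
% only as a phase), consistent with the statement in the abstract.
```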

  17. Continental Margins and the Law of the Sea - an `Arranged Marriage' with Huge Research Potential

    Science.gov (United States)

    Parson, L.

    2005-12-01

    The United Nations Convention on the Law of the Sea (UNCLOS) requires coastal states intending to secure sovereignty over continental shelf territory extending beyond 200 nautical miles to submit geological/geophysical data, along with their analysis and synthesis of the relevant continental margin in support of their claim. These submissions are scrutinised and assessed by a UN Commission of experts who decide if the claim is justified, thereby ultimately allowing the exploitation of non-living resources in this extended maritime space. The amount of data required to support the case will vary from margin to margin, depending on the local geological evolution, but typically will involve the running of new, dedicated marine surveys, mostly bathymetric and seismic. Key geological/geophysical issues revolve around proof of `naturalness' of the prolongation of the land mass (cue - wide-angle seismics, deep drilling and sampling programmes) and shelf and slope morphology and sediment section thickness (cue - swath bathymetry and multichannel seismics programmes). These surveys, probably primarily funded by government agencies anxious not to lose out on the `land grab', will generate datasets which will inevitably not only boost the research effort leading to increased understanding of margin evolution in academic terms, but also contribute to wider applied aspects of the work, such as those leading to refinement of deepwater hydrocarbon resource potential. It is conservatively estimated that in the region of fifty coastal states world-wide have a significant potential for claiming continental shelf beyond 200 nautical miles, and that the total area available as extended shelf could easily exceed 7 million square kilometres. However, while for the vast majority of these states a UNCLOS deadline of 2009 exists for submitting a claim, to date only four have done so (Russia, Brazil, Australia and Ireland). It is therefore predictable, if not inevitable, that within the

  18. Storage of intermittent energies. From self-consumption to huge photovoltaic power plants

    International Nuclear Information System (INIS)

    Perrin, Marion; Martin, Nicolas

    2013-01-01

    Power grids are evolving rapidly due to an increased use of decentralized power units, mostly based on intermittent renewable energy resources, and also due to new ways of consuming energy (e.g. electric vehicles). At the same time, the increasing performance of new technologies such as telecommunications and storage systems could provide solutions for optimizing the electrical system. In this context, the 'smart-grid concept' is increasingly discussed because, in parallel to the power interconnection, communication networks are being created that allow the status of the power grid to be known in real time, so that power flows can be controlled in an optimal way. In this article, we investigate challenges and opportunities for managing intermittent energy sources by using energy storage systems, from the consumer level to the grid operator. First, we describe how the feed-in tariff could evolve in order to improve the grid integration of large solar plants. We show that, beyond the constraints due to coupling power plants with a storage system, there are many opportunities to diversify the business model. Then, we evaluate medium-size PV installations with storage at the community level. For this purpose, we describe the local problems induced by PV integration before proposing new ways to manage these systems. Finally, the self-consumption business model is investigated in terms of performance for the consumer and for the grid operator. (authors)

  19. Self managing experiment resources

    International Nuclear Information System (INIS)

    Stagni, F; Ubeda, M; Charpentier, P; Tsaregorodtsev, A; Romanovskiy, V; Roiser, S; Graciani, R

    2014-01-01

    Within this paper we present an autonomic Computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites generated the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the Computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques to all available information sources and assesses status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehavior, a battery of tests has been developed to certify the correctness of its assessments. We will demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.
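
    The core idea of such a Resource Status System (aggregating many independent monitoring sources into one per-resource status that drives the topology) can be illustrated with a tiny policy combiner. This is not the DIRAC RSS API; the status values, site names and check results below are invented.

```python
# Toy status combiner: each monitoring source votes a status for a resource,
# and the resource gets the most pessimistic applicable verdict. Hypothetical
# site names and check results; not the DIRAC Resource Status System itself.
from enum import IntEnum

class Status(IntEnum):
    BANNED = 0      # do not send jobs
    DEGRADED = 1    # send jobs with caution / reduced share
    ACTIVE = 2      # fully usable

def combine(policy_results):
    """Most pessimistic verdict wins; no information means ACTIVE by default."""
    return min(policy_results, default=Status.ACTIVE)

site_checks = {
    "SITE-A": [Status.ACTIVE, Status.ACTIVE, Status.ACTIVE],
    "SITE-B": [Status.ACTIVE, Status.DEGRADED],   # e.g. low job efficiency
    "SITE-C": [Status.BANNED, Status.ACTIVE],     # e.g. open downtime ticket
}

for site, checks in site_checks.items():
    print(f"{site}: {combine(checks).name}")
```

    In the real system the individual verdicts would come from data-mining policies over tickets, monitoring time series and accounting records, and the automated assessments are themselves covered by certification tests, as the abstract stresses.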

  20. The importance of CFD methods to the design of huge scrubber systems

    International Nuclear Information System (INIS)

    Maier, H.

    2005-01-01

    Due to the influence of the multiphase flow on the scrubber removal performance, Austrian Energy and Environment started research and development, in co-operation with universities, on the simulation of wet scrubber systems using CFD (Computational Fluid Dynamics) methods. In November 2001 the spray banks were reconstructed with a minimum of requirements according to the concept of AE and E. The first operational experiences already showed a significant improvement. In July 2002 measurements of the SO2 profile confirmed the experience of the client. The high SO2 peaks at the absorber wall nearly disappeared. Furthermore, the changes resulted in a more homogeneous SO2 distribution in the clean gas, which was also confirmed by measurements in the outlet duct. According to the client, the L/G ratio could be reduced. Nearly every load case can now be handled with one active spray bank less. With respect to the energy consumption of the plant, this means a remarkable reduction of operational costs. Compared to that, the scrubbers of the FGD system in Neurath will have a flue gas capacity nearly twice as large as that of the FGD plant in Heyden. The start-up will take place in 2008.

  1. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    Science.gov (United States)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capturing, as implemented on, for example, airborne or mobile laser scanning systems, is able to efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.

  2. THE IQMULUS URBAN SHOWCASE: AUTOMATIC TREE CLASSIFICATION AND IDENTIFICATION IN HUGE MOBILE MAPPING POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    J. Böhm

    2016-06-01

    Full Text Available Current 3D data capturing, as implemented on, for example, airborne or mobile laser scanning systems, is able to efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
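
    The PCA-driven local dimensionality feature at the heart of this workflow can be sketched on a single node as follows; the real pipeline runs it as a Spark job over HDFS tiles, and the neighbourhood size, scattering measure and threshold here are illustrative assumptions only.

```python
# Per-point "scattering" feature from local PCA, one common variant:
# eigen-decompose the covariance of each point's k nearest neighbours and
# measure how similar the three eigenvalues are. `points` is an (N, 3) array.
import numpy as np
from scipy.spatial import cKDTree

def scattering_feature(points, k=20):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                 # k nearest neighbours
    feats = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                 # 3x3 local covariance
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        feats[i] = w[2] / w.sum()                    # high when locally isotropic
    return feats

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))                          # placeholder point cloud
is_tree_like = scattering_feature(pts) > 0.2         # threshold is illustrative
print(is_tree_like.sum(), "points classified as scattering (tree-like)")
```

    Points whose three local eigenvalues are of comparable size scatter in all directions (foliage-like geometry), which is the cue used to collect tree candidates before splitting them into individual trees.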

  3. Proceedings Papers of the AFSC (Air Force Systems Command) Avionics Standardization Conference (2nd) Held at Dayton, Ohio on 30 November-2 December 1982. Volume 3. Embedded Computer Resources Governing Documents.

    Science.gov (United States)

    1982-11-01

    1. Validation of computer resource requirements, including software, risk analyses, planning, preliminary design, security where applicable (DoD... Technology Base Program for software basic research, exploratory development, advanced development, and technology demonstrations addressing critical... Management Procedures (O/S CMP). The ... configuration management approach contained in the CRISP will be

  4. Efficient algorithms for accurate hierarchical clustering of huge datasets: tackling the entire protein space.

    Science.gov (United States)

    Loewenstein, Yaniv; Portugaly, Elon; Fromer, Menachem; Linial, Michal

    2008-07-01

    UPGMA (average linkage) is probably the most popular algorithm for hierarchical data clustering, especially in computational biology. However, UPGMA requires the entire dissimilarity matrix in memory. Due to this prohibitive requirement, UPGMA is not scalable to very large datasets. We present a novel class of memory-constrained UPGMA (MC-UPGMA) algorithms. Given any practical memory size constraint, this framework guarantees the correct clustering solution without explicitly requiring all dissimilarities in memory. The algorithms are general and are applicable to any dataset. We present a data-dependent characterization of hardness and clustering efficiency. The presented concepts are applicable to any agglomerative clustering formulation. We apply our algorithm to the entire collection of protein sequences, to automatically build a comprehensive evolutionary-driven hierarchy of proteins from sequence alone. The newly created tree captures protein families better than state-of-the-art large-scale methods such as CluSTr, ProtoNet4 or single-linkage clustering. We demonstrate that leveraging the entire mass embodied in all sequence similarities makes it possible to significantly improve on current protein family clusterings, which are unable to directly tackle the sheer mass of this data. Furthermore, we argue that non-metric constraints are an inherent complexity of the sequence space and should not be overlooked. The robustness of UPGMA allows significant improvement, especially for multidomain proteins, and for large or divergent families. A comprehensive tree built from all UniProt sequence similarities, together with navigation and classification tools, will be made available as part of the ProtoNet service. A C++ implementation of the algorithm is available on request.
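
    For reference, the computation that MC-UPGMA reproduces under a memory cap is ordinary in-memory UPGMA (average linkage), which for small inputs is a few lines with SciPy; the random "profiles" below are placeholders for real pairwise sequence dissimilarities.

```python
# Standard in-memory UPGMA (average linkage) on a toy dataset. MC-UPGMA's
# contribution is obtaining the same merge tree without holding the full
# dissimilarity matrix in memory; this snippet only fixes the target result.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
profiles = rng.random((50, 8))                 # 50 hypothetical items

dists = pdist(profiles, metric="euclidean")    # condensed dissimilarity matrix
tree = linkage(dists, method="average")        # UPGMA merge tree (49 merges)
families = fcluster(tree, t=4, criterion="maxclust")
print(families[:10])                           # cluster label per item
```

    The memory problem is already visible here: the condensed matrix grows as n(n-1)/2 entries, which is what rules out the naive approach for millions of protein sequences.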

  5. Genome network medicine: innovation to overcome huge challenges in cancer therapy.

    Science.gov (United States)

    Roukos, Dimitrios H

    2014-01-01

    The post-ENCODE era now shapes a new biomedical research direction for understanding transcriptional and signaling networks driving gene expression and core cellular processes such as cell fate, survival, and apoptosis. Over the past half century, the Francis Crick 'central dogma' of a single gene/protein-phenotype (trait/disease) relationship has defined biology, human physiology, disease, diagnostics, and drug discovery. However, the ENCODE project and several other genomic studies, using high-throughput sequencing technologies, computational strategies, and imaging techniques to visualize regulatory networks, provide evidence that the transcriptional process and gene expression are regulated by highly complex, dynamic molecular and signaling networks. This Focus article describes the limitations of linear, experimentation-based diagnostics and therapeutics to cure advanced cancer and the need to move on from reductionist to network-based approaches. Given the evident wide genomic heterogeneity, the power and challenges of next-generation sequencing (NGS) technologies to identify a patient's personal mutational landscape for tailoring the best target drugs to the individual patient are discussed. However, the available drugs are not capable of targeting aberrant signaling networks, and functional transcriptional heterogeneity and functional genome organization remain poorly understood. Therefore, future clinical genome network medicine, aiming at overcoming multiple problems in the new fields of regulatory DNA mapping, noncoding RNA, enhancer RNAs, and the dynamic complexity of transcriptional circuitry, is also discussed, in expectation of new innovative technology and a strong appreciation of clinical data and evidence-based medicine. The problems and potential solutions in the discovery of next-generation, molecular, and signaling circuitry-based biomarkers and drugs are explored. © 2013 Wiley Periodicals, Inc.

  6. Online Resources

    Indian Academy of Sciences (India)

    Journal of Genetics, Online Resources: Volume 97 (2018); Volume 96 (2017); Volume 95 (2016); Volume 94 (2015); Volume 93 (2014); Volume 92 (2013); ...

  7. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has changed is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  8. Huge Varicose Inferior Mesenteric Vein: an Unanticipated 99mTc-labeled Red Blood Cell Scintigraphy Finding

    International Nuclear Information System (INIS)

    Hoseinzadeh, Samaneh; Shafiei, Babak; Salehian, Mohamadtaghi; Neshandar Asli, Isa; Ghodoosi, Iraj

    2010-01-01

    Ectopic varices (EcV) are enlarged portosystemic venous collaterals, which usually develop secondary to portal hypertension (PHT). Mesocaval collateral vessels are unusual pathways to decompress the portal system. Here we report the case of a huge varicose inferior mesenteric vein (IMV) that drained into perirectal collateral veins, demonstrated by 99mTc-labeled red blood cell (RBC) scintigraphy performed for lower gastrointestinal (GI) bleeding in a 14-year-old girl. This case illustrates the crucial role of 99mTc-labeled RBC scintigraphy for the diagnosis of rare ectopic lower GI varices.

  9. Huge Varicose Inferior Mesenteric Vein: an Unanticipated {sup 99m}Tc-labeled Red Blood Cell Scintigraphy Finding

    Energy Technology Data Exchange (ETDEWEB)

    Hoseinzadeh, Samaneh; Shafiei, Babak; Salehian, Mohamadtaghi; Neshandar Asli, Isa; Ghodoosi, Iraj [Shaheed Beheshti Medical University, Tehran (Iran, Islamic Republic of)

    2010-09-15

    Ectopic varices (EcV) are enlarged portosystemic venous collaterals, which usually develop secondary to portal hypertension (PHT). Mesocaval collateral vessels are unusual pathways to decompress the portal system. Here we report the case of a huge varicose inferior mesenteric vein (IMV) that drained into perirectal collateral veins, demonstrated by {sup 99m}Tc-labeled red blood cell (RBC) scintigraphy performed for lower gastrointestinal (GI) bleeding in a 14-year-old girl. This case illustrates the crucial role of {sup 99m}Tc-labeled RBC scintigraphy for the diagnosis of rare ectopic lower GI varices.

  10. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, test and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centers, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  11. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    Science.gov (United States)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs represents a vast amount of video to be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data becomes a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e. position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework we execute a use case in which the surveillance of critical infrastructure and the detection of suspicious activities are performed. Transcoding of the collected video is a subject of this evaluation as well.

  12. Computer Labs | College of Engineering & Applied Science

    Science.gov (United States)


  13. Computer Science | Classification | College of Engineering & Applied

    Science.gov (United States)


  14. Herpes - resources

    Science.gov (United States)

    Genital herpes - resources; Resources - genital herpes ... following organizations are good resources for information on genital herpes: March of Dimes -- www.marchofdimes.org/complications/sexually- ...

  15. LHCb: Self managing experiment resources

    CERN Multimedia

    Stagni, F

    2013-01-01

    Within this paper we present an autonomic Computing resources management system used by LHCb for assessing the status of their Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites generated the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for running the Computing systems of HEP experiments as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time informatio...

  16. Elective hemi transurethral resection of prostate: a safe and effective method of treating huge benign prostatic hyperplasia

    International Nuclear Information System (INIS)

    Abidi, S.S.; Feroz, I.; Aslam, M.; Fawad, A.

    2012-01-01

    To evaluate the safety and efficacy of elective hemi-resection of the prostate in patients with a huge gland, weighing more than 120 grams. Study Design: Multicentric, analytical comparative study. Place and Duration of Study: Department of Urology, Karachi Medical and Dental College, Abbasi Shaheed Hospital and Dr. Ziauddin Hospital, Karachi, from August 2006 to July 2009. Methodology: All benign cases were included in this study and divided into two groups. Group A comprised patients with huge prostates (>120 grams), in whom hemi TURP was performed. Group B comprised patients with prostates of 60 to 100 grams, in whom conventional Blandy's TURP was performed. Results of both groups were compared in terms of duration of surgery, amount of tissue resected, operative bleeding, postoperative complications, duration of postoperative catheterization, re-admission and re-operations. Effectiveness of the procedure was assessed by a simple questionnaire filled in by the patients at the first month, first year and second year. Patient satisfaction in terms of ability to void, control of urination, frequency, urgency, urge incontinence, haematuria, recurrent UTI, re-admission and re-operations was also assessed. Fisher's exact test was applied to compare the safety and efficacy variables. Results: In groups A and B, the average age was 72 and 69 years, the average weight of the prostate was 148 and 70 grams, and the average duration of surgery was 102 and 50 minutes, respectively. The average weight of resected tissue was 84 and 54 grams and haemoglobin loss was two grams and one gram, respectively. Total hospital stay was 5 and 4 days. Total duration of indwelling Foley's catheter (postoperative) was 5 days and 2 days. Patient satisfaction in terms of urine flow, urinary control, and improvement in frequency and nocturia was comparable in both groups. UTI and re-admission were more frequent in the hemi-resection group. At the end of the 2-year follow-up, there was no statistical difference between the safety and efficacy

  17. Genome-Wide Study of Percent Emphysema on Computed Tomography in the General Population. The Multi-Ethnic Study of Atherosclerosis Lung/SNP Health Association Resource Study

    Science.gov (United States)

    Manichaikul, Ani; Hoffman, Eric A.; Smolonska, Joanna; Gao, Wei; Cho, Michael H.; Baumhauer, Heather; Budoff, Matthew; Austin, John H. M.; Washko, George R.; Carr, J. Jeffrey; Kaufman, Joel D.; Pottinger, Tess; Powell, Charles A.; Wijmenga, Cisca; Zanen, Pieter; Groen, Harry J. M.; Postma, Dirkje S.; Wanner, Adam; Rouhani, Farshid N.; Brantly, Mark L.; Powell, Rhea; Smith, Benjamin M.; Rabinowitz, Dan; Raffel, Leslie J.; Hinckley Stukovsky, Karen D.; Crapo, James D.; Beaty, Terri H.; Hokanson, John E.; Silverman, Edwin K.; Dupuis, Josée; O’Connor, George T.; Boezen, H. Marike; Rich, Stephen S.

    2014-01-01

    Rationale: Pulmonary emphysema overlaps partially with spirometrically defined chronic obstructive pulmonary disease and is heritable, with moderately high familial clustering. Objectives: To complete a genome-wide association study (GWAS) for the percentage of emphysema-like lung on computed tomography in the Multi-Ethnic Study of Atherosclerosis (MESA) Lung/SNP Health Association Resource (SHARe) Study, a large, population-based cohort in the United States. Methods: We determined percent emphysema and upper-lower lobe ratio in emphysema defined by lung regions less than −950 HU on cardiac scans. Genetic analyses were reported combined across four race/ethnic groups: non-Hispanic white (n = 2,587), African American (n = 2,510), Hispanic (n = 2,113), and Chinese (n = 704) and stratified by race and ethnicity. Measurements and Main Results: Among 7,914 participants, we identified regions at genome-wide significance for percent emphysema in or near SNRPF (rs7957346; P = 2.2 × 10−8) and PPT2 (rs10947233; P = 3.2 × 10−8), both of which replicated in an additional 6,023 individuals of European ancestry. Both single-nucleotide polymorphisms were previously implicated as genes influencing lung function, and analyses including lung function revealed independent associations for percent emphysema. Among Hispanics, we identified a genetic locus for upper-lower lobe ratio near the α-mannosidase–related gene MAN2B1 (rs10411619; P = 1.1 × 10−9; minor allele frequency [MAF], 4.4%). Among Chinese, we identified single-nucleotide polymorphisms associated with upper-lower lobe ratio near DHX15 (rs7698250; P = 1.8 × 10−10; MAF, 2.7%) and MGAT5B (rs7221059; P = 2.7 × 10−8; MAF, 2.6%), which acts on α-linked mannose. Among African Americans, a locus near a third α-mannosidase–related gene, MAN1C1 (rs12130495; P = 9.9 × 10−6; MAF, 13.3%) was associated with percent emphysema. Conclusions: Our results suggest that some genes previously identified as

  18. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is to distribute the computation across a great number of distributed computers, rather than local computer ...

  19. Hydrogen-terminated mesoporous silicon monoliths with huge surface area as alternative Si-based visible light-active photocatalysts

    KAUST Repository

    Li, Ting

    2016-07-21

    Silicon-based nanostructures and their related composites have drawn tremendous research interest in solar energy storage and conversion. Mesoporous silicon with a huge surface area of 400-900 m2 g-1 developed by electrochemical etching exhibits excellent photocatalytic ability and stability after 10 cycles in degrading methyl orange under visible light irradiation, owing to its unique mesoporous network, abundant surface hydrides and efficient light harvesting. This work showcases the profound effects of surface area, crystallinity, pore topology on charge migration/recombination and mass transportation. Therein the ordered 1D channel array has outperformed the interconnected 3D porous network by greatly accelerating the mass diffusion and enhancing the accessibility of the active sites on the extensive surfaces. © 2016 The Royal Society of Chemistry.

  20. Analysis of the Huge Immigration of Sogatella furcifera (Hemiptera: Delphacidae) to Southern China in the Spring of 2012.

    Science.gov (United States)

    Sun, Si-Si; Bao, Yun-Xuan; Wu, Yan; Lu, Min-Hong; Tuan, Hoang-Anh

    2018-02-08

    Sogatella furcifera (Horváth) is a migratory rice pest that periodically erupts across Asia, and early immigration is an important cause of its outbreaks. The early immigration of S. furcifera into southern China shows evident annual fluctuations. In the spring of 2012, the huge size of the immigrant population and the large number of immigration peaks were at levels rarely seen prior to that year. However, little research has been done on the entire process of round-trip migration to clarify the development of the population, the long-distance migration and the final eruption. In this study, the light-trap data for S. furcifera in southern China and Vietnam in 2011-2016 were collected, and the trajectory modeling showed that the early immigrants to southern China came from northern and central Vietnam, Laos, and northeastern Thailand. Analysis of the development of the population, the migration process and meteorological factors revealed the reasons for the huge size of the early immigration: 1) the expansion of the source area could be seen as a precondition; 2) the large size of the returned population in the previous autumn and the warm temperature of southern Vietnam and Laos in the previous winter increased the initial populations; 3) the sustained strong southwest winds were conducive to the northward migration of the population during the major immigration period in early May. Therefore, the large-scale immigration of S. furcifera to southern China in the spring of 2012 resulted from the combined effects of several factors involved in the process of round-trip migration. © The Author(s) 2017. Published by Oxford University Press on behalf of the Entomological Society of America. All rights reserved.

  1. Static Load Balancing Algorithms in Cloud Computing: Challenges & Solutions

    Directory of Open Access Journals (Sweden)

    Nadeem Shah

    2015-08-01

    Full Text Available Cloud computing provides on-demand hosted computing resources and services over the Internet on a pay-per-use basis. It is currently becoming the favored method of communication and computation over scalable networks due to numerous attractive attributes such as high availability, scalability, fault tolerance, simplicity of management and low cost of ownership. Due to the huge demand for cloud computing, efficient load balancing becomes critical to ensure that computational tasks are evenly distributed across servers to prevent bottlenecks. The aim of this review paper is to understand the current challenges in cloud computing, primarily in cloud load balancing using static algorithms, and to find gaps to bridge for more efficient static cloud load balancing in the future. We believe the ideas suggested as new solutions will allow researchers to redesign better algorithms for better functionalities and improved user experiences in simple cloud systems. This could assist small businesses that cannot afford infrastructure that supports complex and dynamic load balancing algorithms.
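
    As a concrete reference point for the static algorithms surveyed here, the snippet below sketches weighted round-robin assignment: the mapping from requests to servers is fixed up front from the servers' relative capacities, with no runtime load feedback. Server names and weights are made up for illustration.

```python
# Weighted round-robin: a classic static load-balancing scheme. Each server
# appears in the rotation in proportion to its (hypothetical) capacity weight.
from itertools import cycle

def weighted_round_robin(servers):
    """servers: dict of server name -> integer weight (relative capacity)."""
    rotation = [name for name, weight in servers.items() for _ in range(weight)]
    return cycle(rotation)

servers = {"vm-small": 1, "vm-medium": 2, "vm-large": 4}
assigner = weighted_round_robin(servers)

for i in range(10):
    print(f"request-{i} -> {next(assigner)}")
```

    The weakness the review points at is visible even in this sketch: because the rotation never looks at actual server load, a burst of heavy requests can still pile up on one machine, which is what dynamic schemes try to avoid.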

  2. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  3. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  4. Now And Next Generation Sequencing Techniques: Future of Sequence Analysis using Cloud Computing

    Directory of Open Access Journals (Sweden)

    Radhe Shyam Thakur

    2012-12-01

    Full Text Available Advancements in the field of sequencing techniques have resulted in huge amounts of sequence data being produced at an ever faster rate. It is becoming cumbersome for data centers to maintain the databases. Data mining and sequence analysis approaches need to analyze the databases several times to reach any efficient conclusion. To cope with such an overburden on computer resources and to reach efficient and effective conclusions quickly, virtualization of resources and computation on a pay-as-you-go basis were introduced and termed cloud computing. The data center's hardware and software are collectively known as the cloud, which, when available publicly, is termed a public cloud. The data center's resources are provided in a virtual mode to clients via a service provider, such as Amazon, Google or Joyent, which charges in a pay-as-you-go manner. The workload is shifted to the provider, which maintains the required hardware and software upgrades. Basically, a virtual environment is created according to the needs of the user by obtaining permission from the data center via the Internet, the task is performed, and the environment is deleted after the task is over. In this discussion, we focus on the basics of cloud computing, the prerequisites and the overall working of clouds. Furthermore, the applications of cloud computing in biological systems, especially in comparative genomics, genome informatics and SNP detection, are briefly discussed with reference to traditional workflows.

  5. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope are hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third-party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  6. Evaluation of Selected Resource Allocation and Scheduling Methods in Heterogeneous Many-Core Processors and Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ciznicki Milosz

    2014-12-01

    Full Text Available Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared-memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance opportunities to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently assigned to the different types of processor units in time, taking into account their specific resource requirements. Additionally, one should note that available heterogeneous resources have been designed as general-purpose units, however with many built-in features accelerating specific application operations. In other words, the same algorithm or application functionality can be implemented as a different task for CPU or GPU. Nevertheless, from the perspective of various evaluation criteria, e.g. the total execution time or energy consumption, we may observe completely different results. Therefore, as tasks can be scheduled and managed in many alternative ways on both many-core CPUs and GPUs, and consequently have a huge impact on the overall performance of the computing resources, there is a need for new and improved resource management techniques. In this paper we discuss results achieved during experimental performance studies of selected task scheduling methods in heterogeneous computing systems. Additionally, we present a new architecture for a resource allocation and task scheduling library, which provides a generic application programming interface at the operating system level for improving scheduling policies, taking into account the diversity of tasks and the characteristics of heterogeneous computing resources.
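
    A minimal way to make the CPU-versus-GPU assignment question concrete is the greedy "minimum completion time" heuristic sketched below; the per-task runtime estimates and unit names are hypothetical, and a real scheduler would also weigh energy use and data-transfer cost, as the paper's criteria suggest.

```python
# Greedy minimum-completion-time assignment of tasks to heterogeneous units.
# Illustrative only: runtimes are invented and energy/transfer costs ignored.
def schedule(tasks, units):
    """tasks: list of {'name': str, 'runtime': {unit_type: seconds}}.
    units: list of (unit_type, unit_name). Returns {task_name: unit_name}."""
    state = [{"name": n, "type": t, "free_at": 0.0} for t, n in units]
    plan = {}
    for task in tasks:
        # choose the unit on which this task would finish earliest
        best = min(state, key=lambda u: u["free_at"] + task["runtime"][u["type"]])
        best["free_at"] += task["runtime"][best["type"]]
        plan[task["name"]] = best["name"]
    return plan

tasks = [
    {"name": "fft",    "runtime": {"cpu": 8.0, "gpu": 1.0}},
    {"name": "parse",  "runtime": {"cpu": 2.0, "gpu": 3.0}},
    {"name": "matmul", "runtime": {"cpu": 9.0, "gpu": 1.5}},
]
units = [("cpu", "cpu0"), ("cpu", "cpu1"), ("gpu", "gpu0")]
print(schedule(tasks, units))
```

    Even this tiny example shows why the mapping is not obvious: the FFT-like task prefers the GPU by a wide margin, while the parsing task is better kept on a CPU even though a GPU exists in the pool.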

  7. Genome-wide study of percent emphysema on computed tomography in the general population. The Multi-Ethnic Study of Atherosclerosis Lung/SNP Health Association Resource Study

    NARCIS (Netherlands)

    Manichaikul, Ani; Hoffman, Eric A.; Smolonska, Joanna; Gao, Wei; Cho, Michael H.; Baumhauer, Heather; Budoff, Matthew; Austin, John H. M.; Washko, George R.; Carr, J. Jeffrey; Kaufman, Joel D.; Pottinger, Tess; Powell, Charles A.; Wijmenga, Cisca; Zanen, Pieter; Groen, Harry J.M.; Postma, Dirkje S.; Wanner, Adam; Rouhani, Farshid N.; Brantly, Mark L.; Powell, Rhea; Smith, Benjamin M.; Rabinowitz, Dan; Raffel, Leslie J.; Stukovsky, Karen D. Hinckley; Crapo, James D.; Beaty, Terri H.; Hokanson, John E.; Silverman, Edwin K.; Dupuis, Josee; O'Connor, George T.; Boezen, Hendrika; Rich, Stephen S.; Barr, R. Graham

    2014-01-01

    Rationale: Pulmonary emphysema overlaps partially with spirometrically defined chronic obstructive pulmonary disease and is heritable, with moderately high familial clustering. Objectives: To complete a genome-wide association study (GWAS) for the percentage of emphysema-like lung on computed

  8. NASA Water Resources Program

    Science.gov (United States)

    Toll, David L.

    2011-01-01

    With increasing population pressure and water usage, coupled with climate variability and change, water issues are being reported by numerous groups as the most critical environmental problems facing us in the 21st century. Competitive uses and the prevalence of river basins and aquifers that extend across boundaries engender political tensions between communities, stakeholders and countries. In addition to the numerous water availability issues, water quality related problems are seriously affecting human health and our environment. Potential crises and conflicts especially arise when multiple uses compete for water. For example, urban areas, environmental and recreational uses, agriculture, and energy production compete for scarce resources, not only in the Western U.S. but throughout much of the U.S. and also in numerous parts of the world. Mitigating these conflicts and meeting water demands and needs requires using existing water resources more efficiently. The NASA Water Resources Program Element works to use NASA products and technology to address these critical water issues. The primary goal of the Water Resources Program is to facilitate application of NASA Earth science products as a routine use in integrated water resources management for the sustainable use of water. This also includes the extreme events of droughts and floods and adaptation to the impacts of climate change. NASA satellite and Earth system observations of water and related data provide a huge volume of valuable data, in both near-real-time and records extending back nearly 50 years, about the Earth's land surface conditions such as precipitation, snow, soil moisture, water levels, land cover type, vegetation type, and vegetation health. The NASA Water Resources Program works closely with other U.S. government agencies, universities, and non-profit and private sector organizations, both domestically and internationally, to use NASA Earth science data. The NASA Water Resources Program organizes its

  9. The software developing method for multichannel computer-aided system for physical experiments control, realized by resources of national instruments LabVIEW instrumental package

    International Nuclear Information System (INIS)

    Gorskaya, E.A.; Samojlov, V.N.

    1999-01-01

    This work describes a method for developing a computer-aided control system in the integrated environment of LabVIEW. Using object-oriented design of complex systems, a hypothetical model for methods of developing software for a computer-aided system for the control of physical experiments was constructed. Within the framework of that model, architectural solutions and implementations of the suggested method are described. (author)

  10. Hypercalcemia and huge splenomegaly presenting in an elderly patient with B-cell non-Hodgkin's lymphoma: a case report

    Directory of Open Access Journals (Sweden)

    Tirgari Farrokh

    2010-10-01

    Full Text Available Abstract Introduction Hypercalcemia is the major electrolyte abnormality in patients with malignant tumors. It can be due to localized osteolytic hypercalcemia or elaboration of humoral substances such as parathyroid hormone-related protein from tumoral cells. In hematological malignancies, a third mechanism of uncontrolled synthesis and secretion of 1,25(OH)2D3 from tumoral cells or neighboring macrophages may contribute to the problem. However, hypercalcemia is quite unusual in patients with B-cell non-Hodgkin's lymphoma. Case presentation An 85-year-old Caucasian woman presented with low grade fever, anorexia, abdominal discomfort and fullness in her left abdomen for the last six months. She was mildly anemic and complained of fatigability. She had huge splenomegaly and was hypercalcemic. After correction of her hypercalcemia, she had a splenectomy. Microscopic evaluation revealed a malignant lymphoma. Her immunohistochemistry was positive for leukocyte common antigen, CD20 and parathyroid hormone-related peptide. Conclusion Immunopositivity for parathyroid hormone-related peptide clearly demonstrates that hypersecretion of a parathyroid hormone-like substance from the tumor had led to hypercalcemia in this case. High serum calcium is seen in only seven to eight percent of patients with B-cell non-Hodgkin's lymphoma, apparently due to different mechanisms. Evaluation of serum parathyroid hormone-related protein and 1,25(OH)2D3 can be helpful in diagnosis and management. It should be noted that presentation with hypercalcemia has a serious impact on prognosis and survival.

  11. A huge ovarian mucinous cystadenoma associated with contralateral teratoma and polycystic ovary syndrome in an obese adolescent girl.

    Science.gov (United States)

    Thaweekul, Patcharapa; Thaweekul, Yuthadej; Mairiang, Karicha

    2016-12-01

    A 13-year-old obese girl presented with acute abdominal pain, with abdominal distension for a year. The physical examination revealed marked abdominal distension with a large, well-circumscribed mass sized 13×20 cm. Her body mass index (BMI) was 37.8 kg/m2. An abdominal CT scan revealed a huge multiloculated cystic mass and a left adnexal mass. She had an abnormal fasting plasma glucose and low HDL-C. Laparotomy, right salpingo-oophorectomy, left cystectomy, lymph node biopsies and partial omentectomy were performed. The left ovary demonstrated multiple cystic follicles over the cortex. The histologic diagnosis was a mucinous cystadenoma of the right ovary and a mature cystic teratoma of the left ovary. Both obesity and polycystic ovary syndrome (PCOS) are associated with a greater risk of ovarian tumours, where PCOS could be either the cause of or a consequence of an ovarian tumour. We report an obese, perimenarchal girl with bilateral ovarian tumours coexistent with a polycystic ovary and the metabolic syndrome.

  12. Huge Inverse Magnetization Generated by Faraday Induction in Nano-Sized Au@Ni Core@Shell Nanoparticles.

    Science.gov (United States)

    Kuo, Chen-Chen; Li, Chi-Yen; Lee, Chi-Hung; Li, Hsiao-Chi; Li, Wen-Hsien

    2015-08-25

    We report on the design and observation of huge inverse magnetizations pointing in the direction opposite to the applied magnetic field, induced in nano-sized amorphous Ni shells deposited on crystalline Au nanoparticles by turning the applied magnetic field off. The magnitude of the induced inverse magnetization is very sensitive to the field reduction rate as well as to the thermal and field processes before turning the magnetic field off, and can be as high as 54% of the magnetization prior to cutting off the applied magnetic field. Memory effect of the induced inverse magnetization is clearly revealed in the relaxation measurements. The relaxation of the inverse magnetization can be described by an exponential decay profile, with a critical exponent that can be effectively tuned by the wait time right after reaching the designated temperature and before the applied magnetic field is turned off. The key to these effects is to have the induced eddy current running beneath the amorphous Ni shells through Faraday induction.

  13. Huge Inverse Magnetization Generated by Faraday Induction in Nano-Sized Au@Ni Core@Shell Nanoparticles

    Directory of Open Access Journals (Sweden)

    Chen-Chen Kuo

    2015-08-01

    Full Text Available We report on the design and observation of huge inverse magnetizations pointing in the direction opposite to the applied magnetic field, induced in nano-sized amorphous Ni shells deposited on crystalline Au nanoparticles by turning the applied magnetic field off. The magnitude of the induced inverse magnetization is very sensitive to the field reduction rate as well as to the thermal and field processes before turning the magnetic field off, and can be as high as 54% of the magnetization prior to cutting off the applied magnetic field. Memory effect of the induced inverse magnetization is clearly revealed in the relaxation measurements. The relaxation of the inverse magnetization can be described by an exponential decay profile, with a critical exponent that can be effectively tuned by the wait time right after reaching the designated temperature and before the applied magnetic field is turned off. The key to these effects is to have the induced eddy current running beneath the amorphous Ni shells through Faraday induction.

  14. Huge Inverse Magnetization Generated by Faraday Induction in Nano-Sized Au@Ni Core@Shell Nanoparticles

    Science.gov (United States)

    Kuo, Chen-Chen; Li, Chi-Yen; Lee, Chi-Hung; Li, Hsiao-Chi; Li, Wen-Hsien

    2015-01-01

    We report on the design and observation of huge inverse magnetizations pointing in the direction opposite to the applied magnetic field, induced in nano-sized amorphous Ni shells deposited on crystalline Au nanoparticles by turning the applied magnetic field off. The magnitude of the induced inverse magnetization is very sensitive to the field reduction rate as well as to the thermal and field processes before turning the magnetic field off, and can be as high as 54% of the magnetization prior to cutting off the applied magnetic field. Memory effect of the induced inverse magnetization is clearly revealed in the relaxation measurements. The relaxation of the inverse magnetization can be described by an exponential decay profile, with a critical exponent that can be effectively tuned by the wait time right after reaching the designated temperature and before the applied magnetic field is turned off. The key to these effects is to have the induced eddy current running beneath the amorphous Ni shells through Faraday induction. PMID:26307983

  15. Computational models can predict response to HIV therapy without a genotype and may reduce treatment failure in different resource-limited settings

    NARCIS (Netherlands)

    Revell, A. D.; Wang, D.; Wood, R.; Morrow, C.; Tempelman, H.; Hamers, R. L.; Alvarez-Uria, G.; Streinu-Cercel, A.; Ene, L.; Wensing, A. M. J.; DeWolf, F.; Nelson, M.; Montaner, J. S.; Lane, H. C.; Larder, B. A.

    2013-01-01

    Genotypic HIV drug-resistance testing is typically 60-65% predictive of response to combination antiretroviral therapy (ART) and is valuable for guiding treatment changes. Genotyping is unavailable in many resource-limited settings (RLSs). We aimed to develop models that can predict response to ART

  16. Computability theory

    CERN Document Server

    Weber, Rebecca

    2012-01-01

    What can we compute--even with unlimited resources? Is everything within reach? Or are computations necessarily drastically limited, not just in practice, but theoretically? These questions are at the heart of computability theory. The goal of this book is to give the reader a firm grounding in the fundamentals of computability theory and an overview of currently active areas of research, such as reverse mathematics and algorithmic randomness. Turing machines and partial recursive functions are explored in detail, and vital tools and concepts including coding, uniformity, and diagonalization are described explicitly. From there the material continues with universal machines, the halting problem, parametrization and the recursion theorem, and thence to computability for sets, enumerability, and Turing reduction and degrees. A few more advanced topics round out the book before the chapter on areas of research. The text is designed to be self-contained, with an entire chapter of preliminary material including re...

  17. Cryptogenic transient ischemic attack after nose blowing: association of huge atrial septal aneurysm with patent foramen ovale as potential cause

    Directory of Open Access Journals (Sweden)

    Lotze U

    2013-07-01

    Full Text Available Ulrich Lotze,1 Uwe Kirsch,1 Marc-Alexander Ohlow,2 Thorsten Scholle,3 Jochen Leonhardi,3 Bernward Lauer,2 Gerhard Oltmanns,4 Hendrik Schmidt5,6 1Department of Internal Medicine, DRK Krankenhaus Sondershausen, Sondershausen, Germany; 2Department of Cardiology, Zentralklinik Bad Berka, Bad Berka, Germany; 3Institute of Diagnostic and Interventional Radiology, Zentralklinik Bad Berka, Germany; 4Department of Internal Medicine, DRK Krankenhaus Sömmerda, Sömmerda, Germany; 5Department of Cardiology and Diabetology, Klinikum Magdeburg, Magdeburg, Germany; 6Department of Internal Medicine III, Martin-Luther-University Halle-Wittenberg, Halle, Germany Abstract: The association of an atrial septal aneurysm (ASA) with a patent foramen ovale (PFO) is considered an important risk factor for cardioembolism, frequently facilitating paradoxical embolism in patients with cryptogenic or unexplained cerebral ischemic events. We herein describe the case of a 69-year-old male patient reporting uncontrolled movements of the right arm due to muscle weakness, slurred speech, and paresthesia in the oral region some seconds after he had blown his nose. These neurological symptoms had improved dramatically within a few minutes and were completely regressive at admission to our hospital about two hours later. On transesophageal echocardiography (TEE) a huge ASA associated with a PFO was detected. The diagnosis of the large-sized ASA was also confirmed by cardiac magnetic resonance imaging. Due to the early complete recovery from his neurological symptoms, the patient was diagnosed with a transient ischemic attack (TIA). After nine days he was discharged in good clinical condition under treatment with oral anticoagulation. It is concluded that in cryptogenic or unexplained stroke or TIA, TEE should always be performed to rule out ASA and PFO as potential sources of paradoxical embolism in those inconclusive clinical situations. Keywords: congenital cardiac abnormality, atrial septal

  18. ESR1 Gene Polymorphisms and Prostate Cancer Risk: A HuGE Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Yu-Mei Wang

    Full Text Available Many published data on the association between single nucleotide polymorphisms (SNPs) in the ESR1 gene and prostate cancer susceptibility are inconclusive. The aim of this Human Genome Epidemiology (HuGE) review and meta-analysis is to derive a more precise estimation of this relationship. A literature search of PubMed, Embase, Web of Science and Chinese Biomedical (CBM) databases was conducted from their inception through July 1st, 2012. Crude odds ratios (ORs) with 95% confidence intervals (CIs) were calculated to assess the strength of association. Twelve case-control studies were included with a total of 2,165 prostate cancer cases and 3,361 healthy controls. When all the eligible studies were pooled into the meta-analysis, the ESR1 PvuII (C>T) and XbaI (A>G) polymorphisms showed no association with the risk of prostate cancer. However, in the stratified analyses based on ethnicity and country, the results indicated that the ESR1 PvuII (C>T) polymorphism was significantly associated with increased risk of prostate cancer among Asian populations, especially among the Indian population, while the ESR1 XbaI (A>G) polymorphism may significantly increase the risk of prostate cancer among the American population. Furthermore, we also performed a pooled analysis for all eligible case-control studies to explore the role of the codon 10 (T>C), codon 325 (C>G), codon 594 (G>A) and +261G>C polymorphisms in prostate cancer risk. Nevertheless, no significant associations between these polymorphisms and the risk of prostate cancer were observed. Results from the current meta-analysis indicate that the ESR1 PvuII (C>T) polymorphism may be a risk factor for prostate cancer among Asian populations, especially among the Indian population, while the ESR1 XbaI (A>G) polymorphism may increase the risk of prostate cancer among the American population.

  19. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available Pediatric computed tomography (CT): What is Children's CT? What are the limitations of Children's CT?

  20. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    Science.gov (United States)

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

    The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, Cloud Computing arises as an extensive, low-cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store digital images.
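
    As a hedged illustration of the workflow the poster describes (read from PACS, anonymize, store metadata in a NoSQL store, keep pixel data for cloud storage), the sketch below uses pydicom and pymongo; the file path, database names and the list of identifying tags are assumptions, not details from the poster.

    # Minimal sketch of the general idea, not the poster's actual implementation.
    # Requires pydicom and pymongo; all names below are illustrative assumptions.
    from pydicom import dcmread
    from pydicom.dataset import Dataset
    from pymongo import MongoClient

    IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]


    def anonymize(ds: Dataset) -> Dataset:
        """Blank direct identifiers before the image leaves the hospital network."""
        for tag in IDENTIFYING_TAGS:
            if tag in ds:
                setattr(ds, tag, "")
        return ds


    def backup_image(dicom_path: str, mongo_uri: str = "mongodb://localhost:27017") -> None:
        ds = anonymize(dcmread(dicom_path))
        document = {
            "sop_instance_uid": str(ds.get("SOPInstanceUID", "")),
            "modality": str(ds.get("Modality", "")),
            "study_date": str(ds.get("StudyDate", "")),
            # The raw pixel bytes would normally go to a cloud object store;
            # here we only record their size to keep the sketch self-contained.
            "pixel_data_bytes": len(ds.PixelData) if "PixelData" in ds else 0,
        }
        MongoClient(mongo_uri)["pacs_backup"]["images"].insert_one(document)


    if __name__ == "__main__":
        backup_image("example.dcm")  # hypothetical file exported from the PACS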

  1. Frontline diagnostic evaluation of patients suspected of angina by coronary computed tomography reduces downstream resource utilization when compared to conventional ischemia testing

    DEFF Research Database (Denmark)

    Nielsen, L. H.; Markenvard, John; Jensen, Jesper Møller

    2011-01-01

    It has been proposed that the increasing use of coronary computed tomographic angiography (CTA) may introduce additional unnecessary diagnostic procedures. However, no previous study has assessed the impact on downstream test utilization of conventional diagnostic testing relative to CTA in patients... Prospective trials are needed in order to define the most cost-effective diagnostic use of CTA relative to conventional ischemia testing.

  2. Computational tools and resources for metabolism-related property predictions. 1. Overview of publicly available (free and commercial) databases and software.

    Science.gov (United States)

    Peach, Megan L; Zakharov, Alexey V; Liu, Ruifeng; Pugliese, Angelo; Tawa, Gregory; Wallqvist, Anders; Nicklaus, Marc C

    2012-10-01

    Metabolism has been identified as a defining factor in drug development success or failure because of its impact on many aspects of drug pharmacology, including bioavailability, half-life and toxicity. In this article, we provide an outline and descriptions of the resources for metabolism-related property predictions that are currently either freely or commercially available to the public. These resources include databases with data on, and software for prediction of, several end points: metabolite formation, sites of metabolic transformation, binding to metabolizing enzymes and metabolic stability. We attempt to place each tool in historical context and describe, wherever possible, the data it was based on. For predictions of interactions with metabolizing enzymes, we show a typical set of results for a small test set of compounds. Our aim is to give a clear overview of the areas and aspects of metabolism prediction in which the currently available resources are useful and accurate, and the areas in which they are inadequate or missing entirely.

  3. Golden Jubilee photos: Computers for physics

    CERN Multimedia

    2004-01-01

    CERN's first computer, a huge vacuum-tube Ferranti Mercury, was installed in building 2 in 1958. With its 60 microsecond clock cycle, it was a million times slower than today's big computers. The Mercury took 3 months to install and filled a huge room; even so, its computational ability didn't quite match that of a modern pocket calculator. "Mass" storage was provided by four magnetic drums, each holding 32K x 20 bits - not enough to hold the data from a single proton-proton collision in the LHC. It was replaced in 1960 by the IBM 709 computer, seen here being unloaded at Cointrin airport. Although the Ferranti Mercury was so quickly superseded by transistor-equipped machines, a small part of it remains: the computer's engineers installed a warning bell to signal computing errors, and it can still be found mounted on the wall in a corridor of building 2.

  4. Science-Driven Computing: NERSC's Plan for 2006-2010

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.; Banda, Michael J.; Bethel, E. Wes; Craw, James M.; Fortney, William J.; Hules, John A.; Meyer, Nancy L.; Meza, Juan C.; Ng, Esmond G.; Rippe, Lynn E.; Saphir, William C.; Verdier, Francesca; Walter, Howard A.; Yelick, Katherine A.

    2005-05-16

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  5. A data management system to enable urgent natural disaster computing

    Science.gov (United States)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton

    2014-05-01

    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision making process. Getting the data to the required resources is a critical requirement for enabling the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers will enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline will render the computation less useful, resulting in a cost that can have severe
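
    A minimal sketch, under invented names and numbers, of the scheduler manager's core decision as described above: among candidate resources/transfer options, pick the cheapest one whose estimated staging time still meets the deadline; if none qualifies, the urgent activity cannot be scheduled.

    # Illustrative sketch only (the platform described in the abstract is far richer).
    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class TransferOption:
        name: str
        bandwidth_mb_s: float   # sustained throughput estimate
        setup_s: float          # protocol/connection overhead
        cost: float             # arbitrary cost units (e.g. allocation charge)

        def eta(self, data_mb: float) -> float:
            return self.setup_s + data_mb / self.bandwidth_mb_s


    def choose_option(options: List[TransferOption], data_mb: float,
                      deadline_s: float) -> Optional[TransferOption]:
        """Return the cheapest option meeting the deadline, or None if none can."""
        feasible = [o for o in options if o.eta(data_mb) <= deadline_s]
        return min(feasible, key=lambda o: o.cost) if feasible else None


    if __name__ == "__main__":
        options = [
            TransferOption("gridftp_site_a", bandwidth_mb_s=400.0, setup_s=20.0, cost=5.0),
            TransferOption("https_site_b", bandwidth_mb_s=80.0, setup_s=2.0, cost=1.0),
        ]
        choice = choose_option(options, data_mb=50_000, deadline_s=600)
        print("selected:", choice.name if choice else "no option meets the deadline")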

  6. The Quality Resources

    Directory of Open Access Journals (Sweden)

    Anca Gabriela TURTUREANU

    2005-10-01

    Full Text Available A significant element characterizing lasting development in the Braila Plain region, as in any other similar area, is an environmental factor: water, whose importance increases when considered against the background of high dryness. Generally speaking, both the water consumed and the consumption structure reflect the quality and quantity of water resources and, implicitly, the economic potential of the region at issue. As for the Braila Plain, there is a paradox to be considered: even though the region is bordered by highly significant rivers (the Danube and the Siret) with huge flows - not to mention the salt-water lakes or underground streams with more or less drinkable water - the need for drinking water becomes obvious, mostly in summer and autumn. The climatic, morphometric and lithological conditions confer certain peculiarities upon the water resources of the North-Eastern Romanian Plain. One can say of the Braila Plain hydrographical network that it is poor, and this is due to the discharge, situated below the value of 1 l/sq km, and also to the very low relief energy. The allochthonous rivers - the Danube, Siret, Buzau and Calmatui - are affected by the relief and climate conditions and also by the size and the geographic position of their hydrographical basins.

  7. The Osceola Mudflow from Mount Rainier: Sedimentology and hazard implications of a huge clay-rich debris flow

    Science.gov (United States)

    Vallance, J.W.; Scott, K.M.

    1997-01-01

    altered rock in the preavalanche mass determines whether a debris avalanche will transform into a cohesive debris flow or remain a largely unsaturated debris avalanche. The distinction among cohesive lahar, noncohesive lahar, and debris avalanche is important in hazard assessment because cohesive lahars spread much more widely than noncohesive lahars that travel similar distances, and travel farther and spread more widely than debris avalanches of similar volume. The Osceola Mudflow is documented here as an example of a cohesive debris flow of huge size that can be used as a model for hazard analysis of similar flows.

  8. Dynamic Evaluation of Water Quality Improvement Based on Effective Utilization of Stockbreeding Biomass Resource

    Directory of Open Access Journals (Sweden)

    Jingjing Yan

    2014-11-01

    Full Text Available The stockbreeding industry is growing rapidly in rural regions of China, carrying a high risk to the water environment due to the emission of huge amounts of pollutants, in terms of COD, T-N and T-P, into rivers. On the other hand, as a typical biomass resource, stockbreeding waste can be used as a clean energy source through biomass utilization technologies. In this paper, we constructed a dynamic linear optimization model to simulate synthetic water environment management policies, covering both the water environment system and socio-economic changes over 10 years. Based on the simulation, the model can precisely estimate trends of water quality, production of stockbreeding biomass energy and economic development under certain restrictions on the water environment. We examined seven towns of the Shunyi district of Beijing as the target area to analyse synthetic water environment management policies by computer simulation, based on the effective utilization of stockbreeding biomass resources, so as to improve water quality and realize sustainable development. The purpose of our research is to establish an effective utilization method for biomass resources that incorporates water environment preservation, resource reutilization and economic development, and finally to realize the sustainable development of the society.
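
    The abstract does not give the model's equations, so the following is only a hedged illustration of a single-period linear-programming step of the kind described, solved with scipy.optimize.linprog: split a daily amount of stockbreeding waste between biogas treatment and direct discharge so as to maximize energy recovery, subject to a COD emission limit and plant capacity. All coefficients are invented for the example.

    # Hedged illustration only; all coefficients are made up for the sketch.
    from scipy.optimize import linprog

    TOTAL_WASTE_T = 100.0     # tonnes of stockbreeding waste per day
    COD_TREATED = 2.0         # kg COD emitted per tonne sent to the biogas plant
    COD_UNTREATED = 20.0      # kg COD emitted per tonne discharged untreated
    COD_LIMIT = 600.0         # daily COD emission limit, kg
    PLANT_CAPACITY_T = 80.0   # biogas plant capacity, tonnes per day
    ENERGY_KWH_PER_T = 50.0   # energy recovered per treated tonne

    # Decision variables: x = [treated_tonnes, untreated_tonnes].
    c = [-ENERGY_KWH_PER_T, 0.0]                 # maximize energy = minimize -energy
    A_ub = [[COD_TREATED, COD_UNTREATED],        # COD emission limit
            [1.0, 0.0]]                          # plant capacity
    b_ub = [COD_LIMIT, PLANT_CAPACITY_T]
    A_eq = [[1.0, 1.0]]                          # all waste must go somewhere
    b_eq = [TOTAL_WASTE_T]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None), (0, None)], method="highs")
    treated, untreated = res.x
    print(f"treat {treated:.1f} t/day, discharge {untreated:.1f} t/day, "
          f"energy {-res.fun:.0f} kWh/day")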

  9. Exploring the Key Risk Factors for Application of Cloud Computing in Auditing

    Directory of Open Access Journals (Sweden)

    Kuang-Hua Hu

    2016-08-01

    Full Text Available In the cloud computing information technology environment, cloud computing has advantages such as lower cost, immediate access to hardware resources, lower IT barriers to innovation, higher scalability, etc. However, for the financial audit information flow and its processing in the cloud system, CPA (Certified Public Accountant) firms need special considerations, for example system problems, information security and other related issues. Auditing cloud computing applications is a future trend for CPA firms; given that this issue is important to them and that very few studies have investigated it, this study seeks to explore the key risk factors for cloud computing and the related audit considerations. The dimensions/perspectives of audit considerations for cloud computing applications are broad and cover many criteria/factors, and these risk factors are becoming increasingly complex and interdependent. If the dimensions can be established, the mutually influential relations of the dimensions and criteria determined, and the current execution performance established, a prioritized improvement strategy can be constructed to serve as a reference for CPA firm management decision making, as well as to provide CPA firms with a reference for building cloud computing auditing systems. Empirical results show that the key risk factors to consider when using cloud computing in auditing are, in order of priority for improvement: Operations (D), Automating user provisioning (C), Technology Risk (B) and Protection system (A).

  10. Can cloud computing benefit health services? - a SWOT analysis.

    Science.gov (United States)

    Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth

    2011-01-01

    In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.

  11. Toward Cloud Computing Evolution

    OpenAIRE

    Susanto, Heru; Almunawar, Mohammad Nabil; Kang, Chen Chin

    2012-01-01

    Information Technology (IT) has shaped the success of organizations, giving them a solid foundation that increases both their level of efficiency and their productivity. The computing industry is witnessing a paradigm shift in the way computing is performed worldwide. There is a growing awareness among consumers and enterprises of accessing their IT resources extensively through a "utility" model known as "cloud computing." Cloud computing was initially rooted in distributed grid-based computing. ...

  12. Resource-adaptive cognitive processes

    CERN Document Server

    Crocker, Matthew W

    2010-01-01

    This book investigates the adaptation of cognitive processes to limited resources. The central topics of this book are heuristics considered as results of the adaptation to resource limitations, through natural evolution in the case of humans, or through artificial construction in the case of computational systems; the construction and analysis of resource control in cognitive processes; and an analysis of resource-adaptivity within the paradigm of concurrent computation. The editors integrated the results of a collaborative 5-year research project that involved over 50 scientists. After a mot

  13. A study on coupling and coordinating development mechanism of China's low-carbon development and environmental resources system

    NARCIS (Netherlands)

    Cong, H.; Zou, D.; Wu, F.; Zhang, Qiufang

    2015-01-01

    With the rapid development of China's modern industry, human beings have consumed enormous amounts of high-carbon energy resources. This has caused huge damage to environmental resource systems. Low-carbon development is the best solution to the irrational demand for natural resources,

  14. Computer security

    CERN Document Server

    Gollmann, Dieter

    2011-01-01

    A completely up-to-date resource on computer security Assuming no previous experience in the field of computer security, this must-have book walks you through the many essential aspects of this vast topic, from the newest advances in software and technology to the most recent information on Web applications security. This new edition includes sections on Windows NT, CORBA, and Java and discusses cross-site scripting and JavaScript hacking as well as SQL injection. Serving as a helpful introduction, this self-study guide is a wonderful starting point for examining the variety of competing sec

  15. Resources for GCSE.

    Science.gov (United States)

    Anderton, Alain

    1987-01-01

    Argues that new resources are needed to help teachers prepare students for the new General Certificate in Secondary Education (GCSE) examination. Compares previous examinations with new examinations to illustrate the problem. Presents textbooks, workbooks, computer programs, and other curriculum materials to demonstrate the gap between resources…

  16. Hydropower and Environmental Resource Assessment (HERA): a computational tool for the assessment of the hydropower potential of watersheds considering engineering and socio-environmental aspects.

    Science.gov (United States)

    Martins, T. M.; Kelman, R.; Metello, M.; Ciarlini, A.; Granville, A. C.; Hespanhol, P.; Castro, T. L.; Gottin, V. M.; Pereira, M. V. F.

    2015-12-01

    The hydroelectric potential of a river is proportional to its head and water flows. Selecting the best development alternative for greenfield projects in watersheds is a difficult task, since it must balance demands for infrastructure, especially in the developing world where a large potential remains unexplored, with environmental conservation. Discussions usually diverge into antagonistic views, as in recent projects in the Amazon forest, for example. This motivates the construction of a computational tool that will support a more qualified debate regarding development/conservation options. HERA provides the optimal head-division partition of a river considering technical, economic and environmental aspects. HERA has three main components: (i) pre-processing GIS of topographic and hydrologic data; (ii) automatic engineering and equipment design and budget estimation for candidate projects; (iii) translation of the division-partition problem into a mathematical programming model. By integrating automatic calculation with geoprocessing tools, cloud computation and optimization techniques, HERA makes it possible for countless head-partition alternatives to be intrinsically compared - a great advantage with respect to traditional field surveys followed by engineering design methods. Based on optimization techniques, HERA determines which hydro plants should be built, including location, design, technical data (e.g. water head, reservoir area and volume), engineering design (dam, spillways, etc.) and costs. The results can be visualized in the HERA interface, exported to GIS software, Google Earth or CAD systems. HERA has a global scope of application since the main input data are a Digital Terrain Model and water inflows at gauging stations. The objective is to contribute to increased rationality of decisions by presenting to the stakeholders a clear and quantitative view of the alternatives, their opportunities and threats.
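
    HERA's actual formulation is not given in the record, so the following toy Python dynamic program only illustrates the head-partition idea: candidate dam sites along a river are ordered downstream, each possible stretch ending in a dam earns a head-times-flow value minus a fixed plant cost, and the best partition of the reach is selected. All site data and coefficients are invented.

    # Toy sketch of a head-partition optimization; not HERA's real model.
    from functools import lru_cache

    # site index:        0      1      2      3      4   (upstream -> downstream)
    ELEV_M   = [500.0, 470.0, 430.0, 410.0, 360.0]   # river bed elevation
    FLOW_M3S = [ 80.0,  95.0, 120.0, 130.0, 160.0]   # mean inflow at each site
    PLANT_COST = 2500.0        # fixed cost of one plant, in "value" units
    K = 1.0                    # converts head*flow into the same value units


    def stretch_value(j: int, i: int) -> float:
        """Value of one plant at site i with its reservoir reaching back to site j."""
        head = ELEV_M[j] - ELEV_M[i]
        return K * head * FLOW_M3S[i] - PLANT_COST


    @lru_cache(maxsize=None)
    def best(i: int) -> float:
        """Best total value for the reach from site 0 down to site i."""
        if i == 0:
            return 0.0
        return max(best(j) + stretch_value(j, i) for j in range(i))


    def best_partition(n: int):
        """Recover the chosen dam sites by walking the DP backwards."""
        dams, i = [], n - 1
        while i > 0:
            j = max(range(i), key=lambda j: best(j) + stretch_value(j, i))
            dams.append(i)
            i = j
        return list(reversed(dams))


    if __name__ == "__main__":
        n = len(ELEV_M)
        print("total value:", round(best(n - 1), 1))
        print("dams at sites:", best_partition(n))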

  17. Computing for Heavy Ion Physics

    International Nuclear Information System (INIS)

    Martinez, G.; Schiff, D.; Hristov, P.; Menaud, J.M.; Hrivnacova, I.; Poizat, P.; Chabratova, G.; Albin-Amiot, H.; Carminati, F.; Peters, A.; Schutz, Y.; Safarik, K.; Ollitrault, J.Y.; Hrivnacova, I.; Morsch, A.; Gheata, A.; Morsch, A.; Vande Vyvre, P.; Lauret, J.; Nief, J.Y.; Pereira, H.; Kaczmarek, O.; Conesa Del Valle, Z.; Guernane, R.; Stocco, D.; Gruwe, M.; Betev, L.; Baldisseri, A.; Vilakazi, Z.; Rapp, B.; Masoni, A.; Stoicea, G.; Brun, R.

    2005-01-01

    This workshop was devoted to the computational technologies needed for the study of heavy quarkonia and open flavor production at LHC (Large Hadron Collider) experiments. These requirements are huge: petabytes of data will be generated each year. Analysing them will require the equivalent of a few thousand of today's fastest PC processors. The new developments in terms of dedicated software have been addressed. This document gathers the transparencies that were presented at the workshop

  18. Computing for Heavy Ion Physics

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, G.; Schiff, D.; Hristov, P.; Menaud, J.M.; Hrivnacova, I.; Poizat, P.; Chabratova, G.; Albin-Amiot, H.; Carminati, F.; Peters, A.; Schutz, Y.; Safarik, K.; Ollitrault, J.Y.; Hrivnacova, I.; Morsch, A.; Gheata, A.; Morsch, A.; Vande Vyvre, P.; Lauret, J.; Nief, J.Y.; Pereira, H.; Kaczmarek, O.; Conesa Del Valle, Z.; Guernane, R.; Stocco, D.; Gruwe, M.; Betev, L.; Baldisseri, A.; Vilakazi, Z.; Rapp, B.; Masoni, A.; Stoicea, G.; Brun, R

    2005-07-01

    This workshop was devoted to the computational technologies needed for the study of heavy quarkonia and open flavor production at LHC (Large Hadron Collider) experiments. These requirements are huge: petabytes of data will be generated each year. Analysing them will require the equivalent of a few thousand of today's fastest PC processors. The new developments in terms of dedicated software have been addressed. This document gathers the transparencies that were presented at the workshop.

  19. Computing for Heavy Ion Physics

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, G; Schiff, D; Hristov, P; Menaud, J M; Hrivnacova, I; Poizat, P; Chabratova, G; Albin-Amiot, H; Carminati, F; Peters, A; Schutz, Y; Safarik, K; Ollitrault, J Y; Hrivnacova, I; Morsch, A; Gheata, A; Morsch, A; Vande Vyvre, P; Lauret, J; Nief, J Y; Pereira, H; Kaczmarek, O; Conesa Del Valle, Z; Guernane, R; Stocco, D; Gruwe, M; Betev, L; Baldisseri, A; Vilakazi, Z; Rapp, B; Masoni, A; Stoicea, G; Brun, R

    2005-07-01

    This workshop was devoted to the computational technologies needed for the study of heavy quarkonia and open flavor production at LHC (Large Hadron Collider) experiments. These requirements are huge: petabytes of data will be generated each year. Analysing them will require the equivalent of a few thousand of today's fastest PC processors. The new developments in terms of dedicated software have been addressed. This document gathers the transparencies that were presented at the workshop.

  20. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available Computed tomography (CT) of the head uses special x-ray equipment ... What is CT Scanning of the Head?

  1. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available Computed tomography (CT) of the sinuses uses special x-ray equipment ...

  2. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available Computed tomography (CT) of the head uses special x-ray equipment ...

  3. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product, and also to introduce successful applications of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  4. Security of fixed and wireless computer networks

    NARCIS (Netherlands)

    Verschuren, J.; Degen, A.J.G.; Veugen, P.J.M.

    2003-01-01

    A few decades ago, most computers were stand-alone machines: they were able to process information using their own resources. Later, computer systems were connected to each other enabling a computer system to exchange data with another computer and to use resources of another computer. With the

  5. COMPARATIVE STUDY OF CLOUD COMPUTING AND MOBILE CLOUD COMPUTING

    OpenAIRE

    Nidhi Rajak*, Diwakar Shukla

    2018-01-01

    The present era is that of Information and Communication Technology (ICT), and a number of research efforts are ongoing in Cloud Computing and Mobile Cloud Computing, covering security issues, data management, load balancing and so on. Cloud computing provides services to the end user over the Internet, and the primary objectives of this computing model are resource sharing and pooling among end users. Mobile Cloud Computing is a combination of Cloud Computing and Mobile Computing. Here, data is stored in...

  6. Water Resources

    International Nuclear Information System (INIS)

    Abira, M.A.

    1997-01-01

    Water is essential for life and ecological sustenance; its availability is an essential component of national welfare and productivity. The country's socio-economic activities are largely dependent on the natural endowment of water resources. Kenya's water resources comprise surface waters (rivers, lakes and wetlands) and ground water. Surface water forms 86% of the total water resources while the rest is ground water. Geological, topographical and climatic factors influence the natural availability and distribution of water, with the rainfall distribution having the major influence. Water resources in Kenya are continuously under threat of depletion and quality degradation owing to rising population, industrialization, changing land use and settlement activities, as well as natural changes. However, the anticipated climate change is likely to exacerbate the situation, resulting in increased conflict over water use rights in particular, and natural resource utilisation in general. The impacts of climate change on the water resources would lead to further impacts on environmental and socio-economic systems

  7. FY 1994 Blue Book: High Performance Computing and Communications: Toward a National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — government and industry that advanced computer and telecommunications technologies could provide huge benefits throughout the research community and the entire U.S....

  8. Reconstruction and identification of electrons in the Atlas experiment. Setup of a Tier 2 of the computing grid

    International Nuclear Information System (INIS)

    Derue, F.

    2008-03-01

    The origin of the mass of elementary particles is linked to the electroweak symmetry breaking mechanism. Its study will be one of the main efforts of the Atlas experiment at the Large Hadron Collider of CERN, starting in 2008. In most cases, studies will be limited by our knowledge of the detector performance, such as the precision of the energy reconstruction or the efficiency of particle identification. This manuscript presents work dedicated to the reconstruction of electrons in the Atlas experiment with simulated data and with data taken during the combined test beam of 2004. The analysis of the Atlas data requires a huge amount of computing and storage resources, which led to the development of a worldwide computing grid. (author)

  9. Cost and resource utilization associated with use of computed tomography to evaluate chest pain in the emergency department: the Rule Out Myocardial Infarction using Computer Assisted Tomography (ROMICAT) study.

    Science.gov (United States)

    Hulten, Edward; Goehler, Alexander; Bittencourt, Marcio Sommer; Bamberg, Fabian; Schlett, Christopher L; Truong, Quynh A; Nichols, John; Nasir, Khurram; Rogers, Ian S; Gazelle, Scott G; Nagurney, John T; Hoffmann, Udo; Blankstein, Ron

    2013-09-01

    Coronary computed tomographic angiography (cCTA) allows rapid, noninvasive exclusion of obstructive coronary artery disease (CAD). However, concern exists whether implementation of cCTA in the assessment of patients presenting to the emergency department with acute chest pain will lead to increased downstream testing and costs compared with alternative strategies. Our aim was to compare observed actual costs of usual care (UC) with projected costs of a strategy including early cCTA in the evaluation of patients with acute chest pain in the Rule Out Myocardial Infarction Using Computer Assisted Tomography I (ROMICAT I) study. We compared cost and hospital length of stay of UC observed among 368 patients enrolled in the ROMICAT I study with projected costs of management based on cCTA. Costs of UC were determined by an electronic cost accounting system. Notably, UC was not influenced by cCTA results because patients and caregivers were blinded to the cCTA results. Costs after early implementation of cCTA were estimated assuming changes in management based on cCTA findings of the presence and severity of CAD. Sensitivity analysis was used to test the influence of key variables on both outcomes and costs. We determined that in comparison with UC, cCTA-guided triage, whereby patients with no CAD are discharged, could reduce total hospital costs by 23% (P < ...). However, cost increases such that when the prevalence of ≥50% stenosis is >28% to 33%, the use of cCTA becomes more costly than UC. cCTA may be a cost-saving tool in acute chest pain populations with a lower prevalence of potentially obstructive CAD, whereas a cost increase would be anticipated in populations with a higher prevalence of disease.

  10. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
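
    GiGA itself distributes the assembly graph over Hadoop and Giraph across many compute nodes; as a hedged, single-machine illustration of only the underlying data structure, the sketch below builds a tiny de Bruijn graph from a couple of toy reads.

    # Toy illustration of the de Bruijn graph idea used by assemblers like GiGA;
    # real inputs are terabytes and the graph is distributed, not built in memory.
    from collections import defaultdict


    def de_bruijn(reads, k=4):
        """Map each (k-1)-mer prefix to the set of (k-1)-mer suffixes that follow it."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])
        return graph


    if __name__ == "__main__":
        reads = ["ACGTACGT", "CGTACGTT"]     # toy reads
        g = de_bruijn(reads, k=4)
        for node, succs in sorted(g.items()):
            print(node, "->", ", ".join(sorted(succs)))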

  11. Research on cloud computing solutions

    OpenAIRE

    Liudvikas Kaklauskas; Vaida Zdanytė

    2015-01-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang adapted six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, ...

  12. Multiple huge epiphrenic esophageal diverticula with motility disease treated with video-assisted thoracoscopic and hand-assisted laparoscopic esophagectomy: a case report

    OpenAIRE

    Taniguchi, Yoshiki; Takahashi, Tsuyoshi; Nakajima, Kiyokazu; Higashi, Shigeyoshi; Tanaka, Koji; Miyazaki, Yasuhiro; Makino, Tomoki; Kurokawa, Yukinori; Yamasaki, Makoto; Takiguchi, Shuji; Mori, Masaki; Doki, Yuichiro

    2017-01-01

    Background Epiphrenic esophageal diverticulum is a rare condition that is often associated with a concomitant esophageal motor disorder. Some patients have the chief complaints of swallowing difficulty and gastroesophageal reflux; traditionally, such diverticula have been resected via right thoracotomy. Here, we describe a case with huge multiple epiphrenic diverticula with motility disorder, which were successfully resected using a video-assisted thoracic and laparoscopic procedure. Case pre...

  13. Computational chemistry

    Science.gov (United States)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  14. On the prediction of hydroelastic behaviors of a huge floating structure in waves. 2nd Report; Choogata futai no harochu dansei kyodo no suiteiho ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Murai, M.; Kagemoto, H.; Fujino, M. [The University of Tokyo, Tokyo (Japan)

    1997-08-01

    For the hydroelastic behavior of a huge floating structure, a mutual interaction theory based on the area division method is used for the analysis of the fluid problem, and a mode analysis method is used for the analysis of deformation. To treat the continuous deformation of a floating structure, the structure is considered as a set of partial structures obtained by dividing the plane shape into squares, and the deformation is handled discretely as a series of rigid motions of the smaller partial structures obtained by dividing the partial structures more finely. The deformation of the elastic floating structure estimated by calculation based on this formulation was compared with the experimental result in a water tank and with the singular point distribution method. The result showed that the estimation method for the hydroelastic problem proposed in this paper is valid. Regarding the prediction of hydroelastic behaviors of a huge floating structure, various calculation examples indicate that the hydroelastic behavior depends not only on the relation between the structure length and the wavelength, but also on the bending rigidity of the structure, which is a very important factor. For a huge floating structure in the 5,000 m class, wavelengths as short as about λ/L = 1/100 must be investigated. 6 refs., 14 figs., 5 tabs.

  15. Research Computing and Data for Geoscience

    OpenAIRE

    Smith, Preston

    2015-01-01

    This presentation will discuss the data storage and computational resources available for GIS researchers at Purdue.

  16. Controlling user access to electronic resources without password

    Science.gov (United States)

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource proximal environmental information. In at least some embodiments, the process further includes comparing user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
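
    The patent abstract gives no concrete similarity measure, so the sketch below is a loose illustration of the comparison step only: environmental readings taken near the user (here, assumed Wi-Fi signal strengths) are compared against readings pre-associated with the protected resource, and access is granted only when they are sufficiently similar. The reading format, tolerance and threshold are arbitrary assumptions.

    # Loose, illustrative sketch of comparing user-proximal and resource-proximal
    # environmental information; not the patented method itself.
    from typing import Dict


    def similarity(a: Dict[str, float], b: Dict[str, float]) -> float:
        """Fraction of shared keys whose values agree within a tolerance."""
        shared = set(a) & set(b)
        if not shared:
            return 0.0
        close = sum(1 for k in shared if abs(a[k] - b[k]) <= 3.0)
        return close / len(shared)


    def grant_access(resource_env: Dict[str, float],
                     user_env: Dict[str, float],
                     threshold: float = 0.8) -> bool:
        return similarity(resource_env, user_env) >= threshold


    if __name__ == "__main__":
        # e.g. received signal strengths (dBm) of nearby access points
        resource_env = {"ap_lab": -42.0, "ap_hall": -60.0, "ap_roof": -75.0}
        user_env = {"ap_lab": -44.0, "ap_hall": -58.0, "ap_roof": -77.0}
        print("access granted:", grant_access(resource_env, user_env))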

  17. Public Library Training Program for Older Adults Addresses Their Computer and Health Literacy Needs. A Review of: Xie, B. (2011). Improving older adults’ e-health literacy through computer training using NIH online resources. Library & Information Science Research, 34, 63-71. doi:10.1016/j.lisr.2011.07.006

    Directory of Open Access Journals (Sweden)

    Cari Merkley

    2012-12-01

    – Participants showed significant decreases in their levels of computer anxiety, and significant increases in their interest in computers, at the end of the program (p < 0.01). Computer and web knowledge also increased among those completing the knowledge tests. Most participants (78%) indicated that something they had learned in the program impacted their health decision making, and just over half of respondents (55%) changed how they took medication as a result of the program. Participants were also very satisfied with the program’s delivery and format, with 97% indicating that they had learned a lot from the course. Most (68%) participants said that they wished the class had been longer, and there was full support for similar programming to be offered at public libraries. Participants also reported that they found the NIHSeniorHealth website more useful, but not significantly more usable, than MedlinePlus. Conclusion – The intervention as designed successfully addressed issues of computer and health literacy with older adult participants. By using existing resources, such as public library computer facilities and curricula developed by the National Institutes of Health, the intervention also provides a model that could be easily replicated in other locations without the need for significant financial resources.

  18. COMPUTER GAMES AND EDUCATION

    OpenAIRE

    Sukhov, Anton

    2018-01-01

    This paper is devoted to research on the educational resources and possibilities of modern computer games. The “internal” educational aspects of computer games include the educational mechanism (a separate or integrated “tutorial”) and the representation of a real or even fantastic educational process within virtual worlds. The “external” dimension represents the educational opportunities of computer games for personal and professional development in different genres of computer games (various transport, so...

  19. UT-CT: A National Resource for Applications of High-Resolution X-ray Computed Tomography in the Geological Sciences

    Science.gov (United States)

    Carlson, W. D.; Ketcham, R. A.; Rowe, T. B.

    2002-12-01

    An NSF-sponsored (EAR-IF) shared multi-user facility dedicated to research applications of high-resolution X-ray computed tomography (CT) in the geological sciences has been in operation since 1997 at the University of Texas at Austin. The centerpiece of the facility is an industrial CT scanner custom-designed for geological applications. Because the instrument can optimize trade-offs among penetrating ability, spatial resolution, density discrimination, imaging modes, and scan times, it can image a very broad range of geological specimens and materials, and thus offers significant advantages over medical scanners and desktop microtomographs. Two tungsten-target X-ray sources (200-kV microfocal and 420-kV) and three X-ray detectors (image-intensifier, high-sensitivity cadmium tungstate linear array, and high-resolution gadolinium-oxysulfide radiographic line scanner) can be used in various combinations to meet specific imaging goals. Further flexibility is provided by multiple imaging modes: second-generation (translate-rotate), third-generation (rotate-only; centered and variably offset), and cone-beam (volume CT). The instrument can accommodate specimens as small as about 1 mm on a side, and as large as 0.5 m in diameter and 1.5 m tall. Applications in petrology and structural geology include measuring crystal sizes and locations to identify mechanisms governing the kinetics of metamorphic reactions; visualizing relationships between alteration zones and abundant macrodiamonds in Siberian eclogites to elucidate metasomatic processes in the mantle; characterizing morphologies of spiral inclusion trails in garnet to test hypotheses of porphyroblast rotation during growth; measuring vesicle size distributions in basaltic flows for determination of elevation at the time of eruption to constrain timing and rates of continental uplift; analysis of the geometry, connectivity, and tortuosity of migmatite leucosomes to define the topology of melt flow paths, for numerical

  20. MR-based field-of-view extension in MR/PET: B0 homogenization using gradient enhancement (HUGE).

    Science.gov (United States)

    Blumhagen, Jan O; Ladebeck, Ralf; Fenchel, Matthias; Scheffler, Klaus

    2013-10-01

    In whole-body MR/PET, the human attenuation correction can be based on the MR data. However, an MR-based field-of-view (FoV) is limited due to physical restrictions such as B0 inhomogeneities and gradient nonlinearities. Therefore, for large patients, the MR image and the attenuation map might be truncated and the attenuation correction might be biased. The aim of this work is to explore extending the MR FoV through B0 homogenization using gradient enhancement in which an optimal readout gradient field is determined to locally compensate B0 inhomogeneities and gradient nonlinearities. A spin-echo-based sequence was developed that computes an optimal gradient for certain regions of interest, for example, the patient's arms. A significant distortion reduction was achieved outside the normal MR-based FoV. This FoV extension was achieved without any hardware modifications. In-plane distortions in a transaxially extended FoV of up to 600 mm were analyzed in phantom studies. In vivo measurements of the patient's arms lying outside the normal specified FoV were compared with and without the use of B0 homogenization using gradient enhancement. In summary, we designed a sequence that provides data for reducing the image distortions due to B0 inhomogeneities and gradient nonlinearities and used the data to extend the MR FoV. Copyright © 2011 Wiley Periodicals, Inc.
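
    The selection step the abstract describes (computing an optimal readout gradient for a region of interest so that B0 inhomogeneity and gradient nonlinearity largely cancel) can be illustrated with a simple one-dimensional search. The sketch below is not the published sequence: the field maps, the ROI geometry, and the candidate gradient range are invented for illustration.

        import numpy as np

        def residual_displacement(db0_T, gradcoil_dev_m, G_T_per_m):
            """Readout-direction displacement: coil-field deviation plus off-resonance shift."""
            return gradcoil_dev_m + db0_T / G_T_per_m

        def optimal_readout_gradient(db0_T, gradcoil_dev_m, candidates_T_per_m):
            """Pick the candidate readout gradient with the smallest RMS displacement in the ROI."""
            rms = [np.sqrt(np.mean(residual_displacement(db0_T, gradcoil_dev_m, G) ** 2))
                   for G in candidates_T_per_m]
            best = int(np.argmin(rms))
            return candidates_T_per_m[best], rms[best]

        # Illustrative ROI 250-300 mm off-isocentre (e.g. an arm outside the guaranteed FoV)
        x = np.linspace(0.25, 0.30, 51)               # readout positions, m
        db0 = -3e-6 * (x / 0.30) ** 3                 # assumed off-resonance field map, T
        grad_dev = 2e-3 * (x / 0.30) ** 5             # assumed gradient-coil deviation, m
        candidates = np.linspace(1e-3, 10e-3, 181)    # candidate readout gradients, T/m
        G_opt, err = optimal_readout_gradient(db0, grad_dev, candidates)
        print(f"optimal readout gradient {G_opt * 1e3:.2f} mT/m, RMS displacement {err * 1e3:.2f} mm")

    With these made-up field maps the search favours a gradient for which the off-resonance shift roughly offsets the coil deviation over the arm region, which is the qualitative behaviour the method exploits.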

  1. Uranium resources

    International Nuclear Information System (INIS)

    Gangloff, A.

    1978-01-01

    It is first indicated how to evaluate mining resources as a function of the cost of production and the degree of certainty in the knowledge of the deposit. A table is given of world resources (at the beginning of 1977), and resources and reserves are compared. There is a concordance between requirements and possible production until 1990. The case of France is examined: known reserves, present and future prospection, present production (in 1978, 2200 t of U metal will be produced by 3 French processing plants), and production coming from Cogema. A total production of 2000 t in 1980 and 10,000 t in 1985 is expected [fr]

  2. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available Pediatric computed tomography (CT) is ... a CT scan. Some imaging tests and treatments have special ...

  3. Database of Information technology resources

    OpenAIRE

    Barzda, Erlandas

    2005-01-01

    The subject of this master’s thesis is an internet information resource database. The work also addresses the problems of legacy information systems that no longer meet contemporary requirements. The aim is to create an internet information system based on object-oriented technologies and tailored to computer users’ needs. The internet information database system helps computer administrators obtain all the needed information about computer network elements and easily register all changes int...

  4. Seaweed resources

    Digital Repository Service at National Institute of Oceanography (India)

    Deshmukhe, G.V.; Dhargalkar, V.K.; Untawale, A.G.

    The chapter summarizes our present knowledge of the seaweed resources of the Indian Ocean region with regard to the phytogeographical distribution, composition, biomass, utilization, cultivation, conservation and management. The voluminous data...

  5. Arthritis - resources

    Science.gov (United States)

    Resources - arthritis ... The following organizations provide more information on arthritis : American Academy of Orthopaedic Surgeons -- orthoinfo.aaos.org/menus/arthritis.cfm Arthritis Foundation -- www.arthritis.org Centers for Disease Control and Prevention -- www. ...

  6. Mineral resources

    Digital Repository Service at National Institute of Oceanography (India)

    Valsangkar, A.B.

    (placers), biogenous (ooze, limestone) or chemogenous (phosphorites and polymetallic nodules) type. In recent years, hydrothermal deposits, cobalt crust and methane gas hydrates are considered as frontier resources. Their distribution depends upon proximity...

  7. Depression - resources

    Science.gov (United States)

    Resources - depression ... Depression is a medical condition. If you think you may be depressed, see a health care provider. ... following organizations are good sources of information on depression : American Psychological Association -- www.apa.org/topics/depression/ ...

  8. Hemophilia - resources

    Science.gov (United States)

    Resources - hemophilia ... The following organizations provide further information on hemophilia : Centers for Disease Control and Prevention -- www.cdc.gov/ncbddd/hemophilia/index.html National Heart, Lung, and Blood Institute -- www.nhlbi.nih.gov/ ...

  9. Diabetes - resources

    Science.gov (United States)

    Resources - diabetes ... The following sites provide further information on diabetes: American Diabetes Association -- www.diabetes.org Juvenile Diabetes Research Foundation International -- www.jdrf.org National Center for Chronic Disease Prevention and Health Promotion -- ...

  10. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available

  11. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available

  12. Forest Resources

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-06-01

    Forest biomass is an abundant biomass feedstock that complements the conventional forest use of wood for paper and wood materials. It may be utilized for bioenergy production, such as heat and electricity, as well as for biofuels and a variety of bioproducts, such as industrial chemicals, textiles, and other renewable materials. The resources within the 2016 Billion-Ton Report include primary forest resources, which are taken directly from timberland-only forests, removed from the land, and taken to the roadside.

  13. Dynamic Placement of Virtual Machines with Both Deterministic and Stochastic Demands for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wenying Yue

    2014-01-01

    Full Text Available Cloud computing has become a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emissions. Therefore, green cloud computing solutions are needed not only to achieve high-level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs) with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve energy efficiency, a two-phase optimization strategy is proposed, in which VMs are deployed at runtime and consolidated onto servers periodically. Based on an improved multidimensional space partition model, a modified energy-efficient algorithm with balanced resource utilization (MEAGLE) and a live migration algorithm based on the basic set (LMABBS) are developed for the two phases, respectively. Experimental results have shown that, under different variations of the VMs’ stochastic demands, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and the Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.
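
    As a rough illustration of the runtime-placement phase (not the authors' MEAGLE algorithm), the sketch below packs VMs onto servers while honouring a probabilistic capacity constraint: each server must hold the deterministic demands, the mean stochastic demands, and a safety margin sized so that normally distributed, independent stochastic demands stay within capacity with a chosen probability. The capacity figures, the normality and independence assumptions, and the first-fit heuristic are all assumptions made here.

        import math
        from dataclasses import dataclass, field
        from statistics import NormalDist
        from typing import List

        @dataclass
        class VM:
            det: float      # deterministic demand (e.g. CPU share)
            mu: float       # mean of the stochastic demand
            sigma: float    # standard deviation of the stochastic demand

        @dataclass
        class Server:
            capacity: float
            vms: List[VM] = field(default_factory=list)

            def fits(self, vm: VM, z: float) -> bool:
                """Probabilistic constraint if `vm` were added:
                sum(det) + sum(mu) + z * sqrt(sum(sigma^2)) <= capacity."""
                vms = self.vms + [vm]
                det = sum(v.det for v in vms)
                mu = sum(v.mu for v in vms)
                var = sum(v.sigma ** 2 for v in vms)
                return det + mu + z * math.sqrt(var) <= self.capacity

        def place(vms: List[VM], capacity: float, availability: float = 0.95) -> List[Server]:
            """First-fit decreasing placement honouring the stochastic guarantee."""
            z = NormalDist().inv_cdf(availability)      # safety factor for the target probability
            servers: List[Server] = []
            for vm in sorted(vms, key=lambda v: v.det + v.mu, reverse=True):
                target = next((s for s in servers if s.fits(vm, z)), None)
                if target is None:
                    target = Server(capacity)
                    servers.append(target)
                target.vms.append(vm)
            return servers

        workload = [VM(det=0.2, mu=0.1, sigma=0.05) for _ in range(10)]
        print(f"{len(place(workload, capacity=1.0))} servers used for {len(workload)} VMs")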

  14. Resources and Operations Section

    International Nuclear Information System (INIS)

    Burgess, R.L.

    1978-01-01

    Progress is reported on the data resources group with regard to numeric information support, the IBP data center, and the geoecology project. Systems ecology studies consisted of nonlinear analysis (time delays in a host-parasite model); dispersal of seeds by animals; three-dimensional computer graphics in ecology; spatial heterogeneity in ecosystems; and analysis of forest structure. Progress is also reported on the national inventory of biological monitoring programs, the ecological sciences information center, and educational activities.

  15. A Huge Morel-Lavallée Lesion Treated Using a Quilting Suture Method: A Case Report and Review of the Literature.

    Science.gov (United States)

    Seo, Bommie F; Kang, In Sook; Jeong, Yeon Jin; Moon, Suk Ho

    2014-06-01

    The Morel-Lavallée lesion is a collection of serous fluid that develops after closed degloving injuries and after surgical procedures, particularly in the pelvis and abdomen. It is a persistent seroma and is usually resistant to conservative methods of treatment such as percutaneous drainage and compression. Various methods of curative treatment have been reported in the literature, such as application of fibrin sealant, doxycycline, or alcohol sclerodesis. We present a case of a huge recurrent Morel-Lavallée lesion in the lower back and buttock region that was treated with quilting sutures, fibrin sealant, and compression, with a review of the literature. © The Author(s) 2014.

  16. Scientific Discovery through Advanced Computing in Plasma Science

    Science.gov (United States)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st century. For example, the Department of Energy's “Scientific Discovery through Advanced Computing” (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by combining rapid advances in supercomputing technology with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high-temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations, with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range of time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations

  17. Cloud Computing : Research Issues and Implications

    OpenAIRE

    Marupaka Rajenda Prasad; R. Lakshman Naik; V. Bapuji

    2013-01-01

    Cloud computing is a rapidly developing and highly promising technology that has attracted the attention of the computing community worldwide. Cloud computing is Internet-based computing, whereby shared information, resources, and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing is the product of the combination of grid computing, distributed computing, parallel computing, and ubiquitous computing. It aims to build and forecast sophisti...

  18. COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...

    Science.gov (United States)

    This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).

  19. A resource management architecture for metacomputing systems.

    Energy Technology Data Exchange (ETDEWEB)

    Czajkowski, K.; Foster, I.; Karonis, N.; Kesselman, C.; Martin, S.; Smith, W.; Tuecke, S.

    1999-08-24

    Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.
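
    To make the division of labour concrete, the sketch below mimics the three roles the abstract names: local managers that retain control over their own resources, a broker that matches an abstract requirement specification against them, and a co-allocator that acquires a set of resources as a unit and rolls back if any component is refused. This is a toy model, not the Globus toolkit; the class names, the dictionary-based specification format, and the CPU-only resource model are invented for illustration.

        from dataclasses import dataclass
        from typing import Dict, List, Optional, Tuple

        @dataclass
        class LocalManager:
            """Wraps one site's scheduler; the site keeps full autonomy over admission."""
            site: str
            free_cpus: int

            def can_satisfy(self, spec: Dict[str, int]) -> bool:
                return self.free_cpus >= spec.get("cpus", 1)

            def allocate(self, spec: Dict[str, int]) -> str:
                self.free_cpus -= spec.get("cpus", 1)
                return f"job@{self.site}"

            def release(self, spec: Dict[str, int]) -> None:
                self.free_cpus += spec.get("cpus", 1)

        class ResourceBroker:
            """Maps an abstract requirement specification onto a concrete site."""
            def __init__(self, managers: List[LocalManager]):
                self.managers = managers

            def find(self, spec: Dict[str, int]) -> Optional[LocalManager]:
                return next((m for m in self.managers if m.can_satisfy(spec)), None)

        class CoAllocator:
            """Acquires several resources as a unit, releasing everything on failure."""
            def __init__(self, broker: ResourceBroker):
                self.broker = broker

            def allocate_all(self, specs: List[Dict[str, int]]) -> List[str]:
                handles: List[str] = []
                granted: List[Tuple[LocalManager, Dict[str, int]]] = []
                for spec in specs:
                    manager = self.broker.find(spec)
                    if manager is None:              # one component refused: roll everything back
                        for m, s in granted:
                            m.release(s)
                        return []
                    handles.append(manager.allocate(spec))
                    granted.append((manager, spec))
                return handles

        sites = [LocalManager("anl", 64), LocalManager("isi", 32)]
        co_allocator = CoAllocator(ResourceBroker(sites))
        print(co_allocator.allocate_all([{"cpus": 48}, {"cpus": 24}]))  # ['job@anl', 'job@isi']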

  20. Power Consumption Evaluation of Distributed Computing Network Considering Traffic Locality

    Science.gov (United States)

    Ogawa, Yukio; Hasegawa, Go; Murata, Masayuki

    When computing resources are consolidated in a few huge data centers, a massive amount of data is transferred to each data center over a wide area network (WAN). This results in increased power consumption in the WAN. A distributed computing network (DCN), such as a content delivery network, can reduce the traffic from/to the data center, thereby decreasing the power consumed in the WAN. In this paper, we focus on the energy-saving aspect of the DCN and evaluate its effectiveness, especially considering traffic locality, i.e., the share of traffic exchanged within a geographical vicinity. We first formulate the problem of optimizing the DCN power consumption and describe the DCN in detail. Numerical evaluations then show that, when there is strong traffic locality and the routers have ideal energy proportionality, the system’s power consumption is reduced to about 50% of the power consumed when a DCN is not used; moreover, the reduction becomes even larger (down to about 30%) when the data center is located farthest from the center of the network topology.
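
    The trade-off evaluated above can be caricatured with a back-of-the-envelope model: a locality fraction of the traffic is served by nearby distributed servers over a short path, the rest still crosses the WAN to the central data center, and router power is proportional to carried traffic (ideal energy proportionality). The sketch below uses made-up coefficients and hop counts and is not the paper's formulation.

        def wan_power(traffic_gbps: float, hops: int, watts_per_gbps_per_hop: float) -> float:
            """Power of ideally energy-proportional routers carrying the traffic over `hops` hops."""
            return traffic_gbps * hops * watts_per_gbps_per_hop

        def centralized_vs_dcn(traffic_gbps: float, locality: float, hops_to_dc: int,
                               hops_to_edge: int, watts_per_gbps_per_hop: float = 10.0,
                               site_overhead_w: float = 1000.0) -> tuple:
            """Total power without a DCN and with one, under a given traffic-locality fraction."""
            centralized = wan_power(traffic_gbps, hops_to_dc, watts_per_gbps_per_hop) + site_overhead_w
            local = wan_power(traffic_gbps * locality, hops_to_edge, watts_per_gbps_per_hop)
            remote = wan_power(traffic_gbps * (1.0 - locality), hops_to_dc, watts_per_gbps_per_hop)
            dcn = local + remote + 2 * site_overhead_w        # extra overhead for the edge servers
            return centralized, dcn

        for locality in (0.2, 0.5, 0.8):
            c, d = centralized_vs_dcn(traffic_gbps=100.0, locality=locality,
                                      hops_to_dc=8, hops_to_edge=1)
            print(f"locality {locality:.0%}: DCN uses {d / c:.0%} of the centralized power")

    With these illustrative numbers the distributed arrangement only pays off once locality is high, which mirrors the qualitative conclusion of the abstract.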