WorldWideScience

Sample records for huge computing resources

  1. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important to education, security monitoring, and so on. However, their huge volumes, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework that can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the various current algorithms and applications. The architecture, basic model, and key algorithms are designed for moving video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.
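
    As an illustration of the storage layer that such an HDFS-based architecture relies on, the minimal sketch below pushes a video file into HDFS and reads it back using the standard "hdfs dfs" command-line client from Python. It assumes a reachable Hadoop cluster with the hdfs CLI on the PATH; the file names and HDFS paths are hypothetical examples, and the paper's five-layer model is not reproduced here.

        # Sketch: ingest a video into HDFS and fetch it back. Assumes a working
        # Hadoop installation; paths are illustrative only.
        import subprocess

        def hdfs_put(local_path: str, hdfs_path: str) -> None:
            # -f overwrites an existing destination file.
            subprocess.run(["hdfs", "dfs", "-put", "-f", local_path, hdfs_path], check=True)

        def hdfs_get(hdfs_path: str, local_path: str) -> None:
            subprocess.run(["hdfs", "dfs", "-get", hdfs_path, local_path], check=True)

        if __name__ == "__main__":
            hdfs_put("lecture01.mp4", "/video-store/raw/lecture01.mp4")       # ingestion
            hdfs_get("/video-store/raw/lecture01.mp4", "copy_lecture01.mp4")  # access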

  2. Resource Optimization Based on Demand in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ramakrishnan Ramanathan

    2014-10-01

    Cloud computing gives the opportunity to dynamically scale the computing resources for an application. Cloud computing consists of a large number of resources, collectively called a resource pool. These resources are shared among cloud consumers using virtualization technology, which in a cloud environment provides resource consolidation and management. A cloud consists of physical and virtual resources. From the cloud provider's perspective, performance depends on predicting the dynamic nature of users, user demands and application demands. From the cloud consumer's perspective, the job should be completed on time with minimum cost and limited resources. Finding an optimum resource allocation is difficult in huge systems like clusters, data centres and grids. In this study we present two types of resource allocation schemes, Commitment Allocation (CA) and Over Commitment Allocation (OCA), at the physical and virtual resource levels. These resource allocation schemes help to identify virtual resource utilization and physical resource availability.
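
    The difference between the two schemes can be pictured with a toy admission check: commitment allocation never promises more virtual capacity than the physical host provides, while over-commitment admits reservations up to a chosen multiple of physical capacity, betting that VMs rarely use their full reservation at once. The capacities and over-commitment ratio below are illustrative and not taken from the paper.

        # Toy admission control contrasting Commitment Allocation (CA) with
        # Over-Commitment Allocation (OCA). All numbers are illustrative.
        PHYSICAL_CPUS = 32

        def admit_ca(reserved: list[int], request: int) -> bool:
            # CA: total reservations may never exceed physical capacity.
            return sum(reserved) + request <= PHYSICAL_CPUS

        def admit_oca(reserved: list[int], request: int, ratio: float = 1.5) -> bool:
            # OCA: reservations may reach ratio * physical capacity, assuming
            # average utilisation stays well below 100%.
            return sum(reserved) + request <= ratio * PHYSICAL_CPUS

        if __name__ == "__main__":
            reservations = [8, 8, 8]            # vCPUs already promised to running VMs
            print(admit_ca(reservations, 16))   # False: 40 > 32 physical cores
            print(admit_oca(reservations, 16))  # True:  40 <= 48 over-committed cores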

  3. Computational AstroStatistics: Fast and Efficient Tools for Analysing Huge Astronomical Data Sources

    CERN Document Server

    Nichol, R C; Connolly, A J; Davies, S; Genovese, C; Hopkins, A M; Miller, C J; Moore, A W; Pelleg, D; Richards, G T; Schneider, J; Szapudi, I; Wasserman, L H

    2001-01-01

    I present here a review of past and present multi-disciplinary research of the Pittsburgh Computational AstroStatistics (PiCA) group. This group is dedicated to developing fast and efficient statistical algorithms for analysing huge astronomical data sources. I begin with a short review of multi-resolutional kd-trees, which are the building blocks for many of our algorithms, for example quick range queries and fast n-point correlation functions. I will present new results from the use of Mixture Models (Connolly et al. 2000) in density estimation of multi-color data from the Sloan Digital Sky Survey (SDSS), specifically the selection of quasars and the automated identification of X-ray sources. I will also present a brief overview of the False Discovery Rate (FDR) procedure (Miller et al. 2001a) and show how it has been used in the detection of "Baryon Wiggles" in the local galaxy power spectrum and in source identification in radio data. Finally, I will look forward to new research on an automated Bayes Netw...
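
    As an illustration of the kd-tree building block mentioned in the abstract, the sketch below counts pairs of points within a set of radii using scipy's cKDTree, the kind of fast range query that underlies n-point correlation estimators. The random catalogue stands in for real survey data and is not related to the SDSS results described above.

        # Pair counting with a kd-tree, the raw ingredient of two-point
        # correlation estimators. Synthetic points stand in for a galaxy catalogue.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(42)
        data = rng.uniform(0.0, 100.0, size=(10_000, 2))   # mock 2-D positions

        tree = cKDTree(data)
        radii = np.array([1.0, 2.0, 5.0, 10.0])
        # Cumulative pair counts DD(<r); far cheaper than the naive O(N^2) loop.
        dd = tree.count_neighbors(tree, radii)
        for r, c in zip(radii, dd):
            print(f"pairs within r = {r:5.1f}: {c}")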

  4. LHCb Computing Resources: 2017 requests

    CERN Document Server

    Bozzi, Concezio

    2016-01-01

    This document presents an assessment of computing resources needed by LHCb in 2017, as resulting from the accumulated experience in Run2 data taking and recent changes in the LHCb computing model parameters.

  5. A Computational Method for Enabling Teaching-Learning Process in Huge Online Courses and Communities

    Science.gov (United States)

    Mora, Higinio; Ferrández, Antonio; Gil, David; Peral, Jesús

    2017-01-01

    Massive Open Online Courses and e-learning represent the future of the teaching-learning processes through the development of Information and Communication Technologies. They are the response to the new education needs of society. However, this future also presents many challenges such as the processing of online forums when a huge number of…

  6. Quantifying Resource Use in Computations

    CERN Document Server

    van Son, R J J H

    2009-01-01

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance in cryptanalysis, and in neuroscience, for instance in comparative neuro-anatomy. A System versus Environment game formalism is proposed, based on Computability Logic, that allows one to define a computational work function describing the theoretical and physical resources needed to perform any purely algorithmic computation. Within this formalism, the cost of a computation is defined as the sum of information storage over the steps of the computation. The size of the computational device, e.g., the action table of a Universal Turing Machine, the number of transistors in silicon, or the number and complexity of synapses in a neural net, is explicitly included in the computational cost. The proposed cost function leads in a na...
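
    A toy version of such a cost accounting, for a purely classical computation, is sketched below: at every step it charges the bits held in working storage plus a fixed term standing in for the size of the device (here, crudely, the length of the function's compiled bytecode). This only illustrates the "sum of storage over steps" idea; it is not the work function defined in the paper.

        # Toy cost accounting: sum information storage over the steps of a
        # computation and add a fixed per-step charge for the device itself.
        def gcd_with_cost(a: int, b: int) -> tuple[int, int]:
            # Stand-in for the "action table" size of the computing device.
            device_bits = 8 * len(gcd_with_cost.__code__.co_code)
            cost = 0
            while b:
                # Storage carried between steps: the two integers being reduced.
                cost += a.bit_length() + b.bit_length() + device_bits
                a, b = b, a % b
            return a, cost

        if __name__ == "__main__":
            gcd, cost = gcd_with_cost(252, 105)
            print(f"gcd = {gcd}, accumulated cost = {cost} bit-steps")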

  7. Quantifying resource use in computations

    NARCIS (Netherlands)

    van Son, R.J.J.H.

    2009-01-01

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance, in

  8. Aggregated Computational Toxicology Online Resource

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...

  9. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  10. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  11. Web-Based Computing Resource Agent Publishing

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Web-based computing resource publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources in the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework are proposed and discussed in detail.

  12. Resource Management in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Andrei IONESCU

    2015-01-01

    Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications on how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task, as it involves vastly different architectures and lies outside the scope of human users. Using these resources from applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  13. Adaptive computational resource allocation for sensor networks

    Institute of Scientific and Technical Information of China (English)

    WANG Dian-hong; FEI E; YAN Yu-jie

    2008-01-01

    To efficiently utilize the limited computational resources in real-time sensor networks, this paper focuses on the challenge of computational resource allocation in sensor networks and provides a solution based on economic methods. It designs a microeconomic system in which the applications distribute their computational resource consumption across sensor networks by means of mobile agents. Further, it proposes a market-based computational resource allocation policy named MCRA, which satisfies the uniform consumption of computational energy in the network and the optimal division of a single computational capacity among multiple tasks. Simulation in a target-tracing scenario demonstrates that MCRA realizes an efficient allocation of computational resources according to the priority of tasks, achieves superior allocation and equilibrium performance compared to traditional allocation policies, and ultimately prolongs the system lifetime.
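
    The market-based idea of dividing a node's single computational capacity among tasks can be illustrated by a proportional-share split driven by task priorities, as in the sketch below; the capacity figure and priority weights are invented, and the sketch does not reproduce the MCRA policy itself.

        # Sketch: split one node's computational capacity among tasks in
        # proportion to their priority "bids". Numbers are illustrative.
        def allocate(capacity_mips: float, bids: dict[str, float]) -> dict[str, float]:
            total = sum(bids.values())
            return {task: capacity_mips * bid / total for task, bid in bids.items()}

        if __name__ == "__main__":
            shares = allocate(1000.0, {"tracking": 5.0, "fusion": 3.0, "housekeeping": 1.0})
            for task, mips in shares.items():
                print(f"{task:>12}: {mips:7.1f} MIPS")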

  14. A Survey on Resource Allocation Strategies in Cloud Computing

    Directory of Open Access Journals (Sweden)

    V.Vinothina

    2012-06-01

    Cloud computing has become a new-age technology with huge potential in enterprises and markets. Clouds make it possible to access applications and associated data from anywhere. Companies are able to rent resources from the cloud for storage and other computational purposes so that their infrastructure cost can be reduced significantly. Further, they can make use of company-wide access to applications on a pay-as-you-go model, so there is no need to obtain licenses for individual products. However, one of the major pitfalls in cloud computing is optimizing the resources being allocated. Because of the uniqueness of the model, resource allocation is performed with the objective of minimizing the associated costs. Other challenges of resource allocation are meeting customer demands and application requirements. In this paper, various resource allocation strategies and their challenges are discussed in detail. It is believed that this paper will benefit both cloud users and researchers in overcoming the challenges faced.

  15. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  16. Resourceful Computing in Unstructured Environments

    Science.gov (United States)

    1991-07-31

  17. Courtyard planting of mushroom fungi: a huge protein resource

    Institute of Scientific and Technical Information of China (English)

    王宝义

    2000-01-01

    Edible fungi have nutritional and health-care functions, and they represent a huge protein resource. The development status, prospects and production techniques of edible fungi in China are described, and it is pointed out that courtyard mushroom cultivation has good economic benefits.

  18. Resource management in mobile computing environments

    CERN Document Server

    Mavromoustakis, Constandinos X; Mastorakis, George

    2014-01-01

    This book reports the latest advances on the design and development of mobile computing systems, describing their applications in the context of modeling, analysis and efficient resource management. It explores the challenges on mobile computing and resource management paradigms, including research efforts and approaches recently carried out in response to them to address future open-ended issues. The book includes 26 rigorously refereed chapters written by leading international researchers, providing the readers with technical and scientific information about various aspects of mobile computing, from basic concepts to advanced findings, reporting the state-of-the-art on resource management in such environments. It is mainly intended as a reference guide for researchers and practitioners involved in the design, development and applications of mobile computing systems, seeking solutions to related issues. It also represents a useful textbook for advanced undergraduate and graduate courses, addressing special t...

  1. Efficient Resource Management in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rushikesh Shingade

    2015-12-01

    Cloud computing is one of the widely used technologies for providing cloud services to users, who are charged for the services they receive. Given the large number of resources involved, it is difficult to evaluate and optimize the performance of cloud resource management policies directly. Several simulation toolkits are available for modelling the cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud and CloudAuction. In the proposed Efficient Resource Management in Cloud Computing (EFRE) model, CloudSim is used as the simulation toolkit, allowing simulation of a DataCenter in a cloud computing system. The CloudSim toolkit also supports the creation of multiple virtual machines (VMs) on a node of a DataCenter, where cloudlets (user requests) are assigned to virtual machines by scheduling policies. In this paper, the Time-Shared and Space-Shared allocation policies are used for scheduling the cloudlets and are compared on metrics such as total execution time, number of resources and the resource allocation algorithm. CloudSim has been used for the simulations, and the results demonstrate that the resource management is effective.
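
    CloudSim itself is a Java toolkit; the short Python sketch below only mimics the difference between the two policies compared in the paper on a single-core VM: space-shared runs cloudlets to completion one after another, while time-shared lets them share the core concurrently (here under the simplifying assumption that each keeps a fixed share until all finish). Cloudlet lengths and the VM speed are made up.

        # Finish times of cloudlets on one single-core VM under space-shared and
        # time-shared allocation. Lengths in millions of instructions, speed in MIPS.
        def space_shared(lengths_mi, vm_mips):
            t, finish = 0.0, []
            for mi in lengths_mi:          # one cloudlet at a time, run to completion
                t += mi / vm_mips
                finish.append(t)
            return finish

        def time_shared(lengths_mi, vm_mips):
            # Simplification: every cloudlet keeps an equal 1/n share of the core
            # for its whole lifetime (ignores speed-up as others finish).
            n = len(lengths_mi)
            return [mi / (vm_mips / n) for mi in lengths_mi]

        if __name__ == "__main__":
            lengths = [4000, 4000, 8000]
            print("space-shared finish times:", space_shared(lengths, vm_mips=1000))
            print("time-shared finish times: ", time_shared(lengths, vm_mips=1000))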

  2. Framework of Resource Management for Intercloud Computing

    Directory of Open Access Journals (Sweden)

    Mohammad Aazam

    2014-01-01

    There has been a very rapid increase in digital media content, due to which the media cloud is gaining importance. The cloud computing paradigm provides management of resources and helps create an extended portfolio of services. Through cloud computing, not only are services managed more efficiently, but service discovery is also made possible. To handle the rapid increase in content, the media cloud plays a very vital role, but it is not possible for standalone clouds to handle everything as user demands increase. For scalability and better service provisioning, clouds at times have to communicate with other clouds and share their resources. This scenario is called Intercloud computing or cloud federation. The study of Intercloud computing is still in its infancy, and resource management is one of its key concerns. Existing studies discuss this issue only in a trivial and simplistic way. In this study, we present a resource management model that takes into account different types of services, different customer types, customer characteristics, pricing, and refunding. The presented framework was implemented using Java and NetBeans 8.0 and evaluated using the CloudSim 3.0.3 toolkit. The presented results and their discussion validate our model and its efficiency.

  3. COMPUTATIONAL RESOURCES FOR BIOFUEL FEEDSTOCK SPECIES

    Energy Technology Data Exchange (ETDEWEB)

    Buell, Carol Robin [Michigan State University; Childs, Kevin L [Michigan State University

    2013-05-07

    While current production of ethanol as a biofuel relies on starch and sugar inputs, it is anticipated that sustainable production of ethanol for biofuel use will utilize lignocellulosic feedstocks. Candidate plant species to be used for lignocellulosic ethanol production include a large number of species within the Grass, Pine and Birch plant families. For these biofuel feedstock species, there are variable amounts of genome sequence resources available, ranging from complete genome sequences (e.g. sorghum, poplar) to transcriptome data sets (e.g. switchgrass, pine). These data sets are not only dispersed in location but also disparate in content. It will be essential to leverage and improve these genomic data sets for the improvement of biofuel feedstock production. The objectives of this project were to provide computational tools and resources for data-mining genome sequence/annotation and large-scale functional genomic datasets available for biofuel feedstock species. We have created a Bioenergy Feedstock Genomics Resource that provides a web-based portal or clearing house for genomic data for plant species relevant to biofuel feedstock production. Sequence data from a total of 54 plant species are included in the Bioenergy Feedstock Genomics Resource including model plant species that permit leveraging of knowledge across taxa to biofuel feedstock species. We have generated additional computational analyses of these data, including uniform annotation, to facilitate genomic approaches to improved biofuel feedstock production. These data have been centralized in the publicly available Bioenergy Feedstock Genomics Resource (http://bfgr.plantbiology.msu.edu/).

  4. Optimised resource construction for verifiable quantum computation

    Science.gov (United States)

    Kashefi, Elham; Wallden, Petros

    2017-04-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph.

  5. Limitation of computational resource as physical principle

    CERN Document Server

    Ozhigov, Y I

    2003-01-01

    Limitation of computational resources is considered as a universal principle that is as fundamental for simulation as physical laws are. It claims that all experimentally verifiable implications of physical laws can be simulated by effective classical algorithms. This is demonstrated through a completely deterministic approach proposed for the simulation of biopolymer assembly. The state of a molecule during its assembly is described in terms of a reduced density matrix permitting only limited tunneling. An assembly is treated as a sequence of elementary scatterings of simple molecules from the environment on the point of assembly. Decoherence is treated as a forced measurement of the quantum state resulting from the shortage of computational resources. All results of measurements are determined by a choice from a limited number of special options of a nonphysical nature which stay unchanged until the completion of assembly; we do not use random number generators. Observations of equal states during the ...

  6. Automating usability of ATLAS Distributed Computing resources

    CERN Document Server

    "Tupputi, S A; The ATLAS collaboration

    2013-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic exclusion/recovery of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources that feature non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the site-by-site outcomes of SAM (Site Availability Test) SRM tests. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites.

  7. Architecturing Conflict Handling of Pervasive Computing Resources

    OpenAIRE

    Jakob, Henner; Consel, Charles; Loriant, Nicolas

    2011-01-01

    Pervasive computing environments are created to support human activities in different domains (e.g., home automation and healthcare). To do so, applications orchestrate deployed services and devices. In a realistic setting, applications are bound to conflict in their usage of shared resources, e.g., controlling doors for security and fire evacuation purposes. These conflicts can have critical effects on the physical world, putting people and assets at risk. This paper ...

  8. LHCb Computing Resource usage in 2015 (II)

    CERN Document Server

    Bozzi, Concezio

    2016-01-01

    This document reports the usage of computing resources by the LHCb collaboration during the period January 1st – December 31st 2015. The data in the following sections have been compiled from the EGI Accounting portal: https://accounting.egi.eu. For LHCb-specific information, the data are taken from the DIRAC Accounting at the LHCb DIRAC Web portal: http://lhcb-portal-dirac.cern.ch.

  9. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  10. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  11. Automating usability of ATLAS Distributed Computing resources

    Science.gov (United States)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
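
    The automatic blacklisting and recovery decision can be pictured as a simple rule over the recent history of monitoring test outcomes for a storage area: blacklist when the failure fraction in a sliding window crosses an upper threshold, whitelist again only when it falls below a lower one, so that the status does not flap. The window length and thresholds below are invented; this is not the actual SAAB inference algorithm.

        # Toy automatic exclusion/recovery rule over a history of storage test
        # outcomes (True = test passed). Thresholds and window are illustrative.
        from collections import deque

        class StorageAreaStatus:
            def __init__(self, window=20, blacklist_above=0.5, whitelist_below=0.2):
                self.history = deque(maxlen=window)
                self.blacklist_above = blacklist_above
                self.whitelist_below = whitelist_below
                self.blacklisted = False

            def record(self, passed: bool) -> bool:
                self.history.append(passed)
                failure_frac = 1.0 - sum(self.history) / len(self.history)
                if not self.blacklisted and failure_frac > self.blacklist_above:
                    self.blacklisted = True       # automatic exclusion
                elif self.blacklisted and failure_frac < self.whitelist_below:
                    self.blacklisted = False      # automatic recovery
                return self.blacklisted

        if __name__ == "__main__":
            area = StorageAreaStatus()
            for outcome in [True] * 10 + [False] * 12 + [True] * 25:
                area.record(outcome)
            print("blacklisted after the run:", area.blacklisted)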

  12. Multi-Programmatic and Institutional Computing Capacity Resource Attachment 2 Statement of Work

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M

    2002-04-15

    Lawrence Livermore National Laboratory (LLNL) has identified high-performance computing as a critical competency necessary to meet the goals of LLNL's scientific and engineering programs. Leadership in scientific computing demands the availability of a stable, powerful, well-balanced computational infrastructure, and it requires research directed at advanced architectures, enabling numerical methods and computer science. To encourage all programs to benefit from the huge investment being made by the Advanced Simulation and Computing Program (ASCI) at LLNL, and to provide a mechanism to facilitate multi-programmatic leveraging of resources and access to high-performance equipment by researchers, M&IC was created. The Livermore Computing (LC) Center, a part of the Computations Directorate Integrated Computing and Communications (ICC) Department, can be viewed as composed of two facilities, one open and one secure. This acquisition is focused on the M&IC resources in the Open Computing Facility (OCF). For the M&IC program, recent efforts and expenditures have focused on enhancing capacity and stabilizing the TeraCluster 2000 (TC2K) resource. Capacity is a measure of the ability to process a varied workload from many scientists simultaneously. Capability represents the ability to deliver a very large system to run scientific calculations at large scale. In this procurement action, we intend to significantly increase the capability of the M&IC resource to address multiple teraFLOP/s problems, as well as increasing the capacity to do many 100 gigaFLOP/s calculations.

  13. Optimal Joint Multiple Resource Allocation Method for Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2011-01-01

    Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources. To provide cloud computing services economically, it is important to optimize resource allocation under the assumption that the required resource can be taken from a shared resource pool. In addition, to be able to provide processing ability and storage capacity, it is necessary to allocate bandwidth to access them at the same time. This paper proposes an optimal resource allocation method for cloud computing environments. First, this paper develops a resource allocation model of cloud computing environments, assuming both processing ability and bandwidth are allocated simultaneously to each service request and rented out on an hourly basis. The allocated resources are dedicated to each service request. Next, this paper proposes an optimal joint multiple resource allocation method, based on the above resource allocation model. It is demonstrated by simulation evaluation that the p...

  14. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  15. Contract on using computer resources of another

    Directory of Open Access Journals (Sweden)

    Cvetković Mihajlo

    2016-01-01

    Contractual relations involving the use of another's property are quite common. Yet the use of another's computer resources over the Internet, and the legal transactions arising from it, certainly diverge from the traditional framework embodied in the special part of contract law dealing with this issue. Modern performance concepts (such as infrastructure, software or platform offered as high-tech services) are highly unlikely to be described by terminology derived from Roman law. The overwhelming novelty of high-tech services obscures the disadvantageous position of the contracting parties. In most cases, service providers are global multinational companies which tend to secure their own unjustified privileges and gains by providing lengthy and intricate contracts, often comprising a number of legal documents. General terms and conditions in these service provision contracts are further complicated by the 'service level agreement', rules of conduct and (non)confidentiality guarantees. Without giving the issue a second thought, users easily accept the pre-fabricated offer without reservations, unaware that such a pseudo-gratuitous contract actually conceals a highly lucrative and mutually binding agreement. The author examines the extent to which the legal provisions governing the sale of goods and services, lease, loan and commodatum may apply to 'cloud computing' contracts, and analyses the scope and advantages of contractual consumer protection, as a relatively new area in contract law. The termination of a service contract between the provider and the user features specific post-contractual obligations which are inherent to an online environment.

  16. THE STRATEGY OF RESOURCE MANAGEMENT BASED ON GRID COMPUTING

    Institute of Scientific and Technical Information of China (English)

    Wang Ruchuan; Han Guangfa; Wang Haiyan

    2006-01-01

    This paper analyzes the shortcomings of the traditional resource management method of grid computing based on virtual organizations. It supports the idea of improving resource management with mobile agents and gives the improved resource management model. The methodology for improving resource management and the way to realize it in practice are also pointed out.

  17. MANAGEMENT OF HUGE ENCEPHALOCELE

    Directory of Open Access Journals (Sweden)

    Rajeev

    2015-11-01

    Among all neural tube defects, the incidence of encephalocele is 1 in 5000 live births (1). A newborn with encephalocele may have other associated congenital malformations. The management of encephalocele patients poses many challenges: to the neurosurgeon, because of associated anomalies that may be present, such as ventriculocele, Dandy-Walker and Arnold-Chiari malformations; and to the anaesthesiologist, because of difficult positioning and airway management. We discuss a case of huge encephalocele and its management.

  18. Dynamic Resource Management and Job Scheduling for High Performance Computing

    OpenAIRE

    2016-01-01

    Job scheduling and resource management play an essential role in high-performance computing. Supercomputing resources are usually managed by a batch system, which is responsible for the effective mapping of jobs onto resources (i.e., compute nodes). From the system perspective, a batch system must ensure high system utilization and throughput, while from the user perspective it must ensure fast response times and fairness when allocating resources across jobs. Parallel jobs can be divide...

  19. LHCb Computing Resources: 2019 requests and reassessment of 2018 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents the computing resources needed by LHCb in 2019 and a reassessment of the 2018 requests, as resulting from the current experience of Run2 data taking and minor changes in the LHCb computing model parameters.

  20. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  1. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing; so far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack): it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  2. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
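
    One simple way to build such an estimator is to regress past CPU time and peak memory against a feature of the input (for example, voxel count) and pad the prediction with a safety margin before filling in the batch submission request. The historical numbers and the 30% margin below are fabricated for illustration and are not from the paper.

        # Predict runtime and memory for a new job from past executions by linear
        # regression on input size, then add a safety margin for the scheduler.
        import numpy as np

        voxels  = np.array([1e6, 2e6, 4e6, 8e6, 16e6])    # past input sizes
        cpu_min = np.array([3.0, 5.5, 11.0, 21.0, 44.0])  # observed CPU minutes
        mem_gb  = np.array([0.8, 1.3, 2.4, 4.5, 8.9])     # observed peak memory (GB)

        cpu_fit = np.polyfit(voxels, cpu_min, deg=1)       # slope, intercept
        mem_fit = np.polyfit(voxels, mem_gb, deg=1)

        def request(new_voxels: float, margin: float = 1.3):
            cpu = np.polyval(cpu_fit, new_voxels) * margin
            mem = np.polyval(mem_fit, new_voxels) * margin
            return cpu, mem

        if __name__ == "__main__":
            cpu, mem = request(10e6)
            print(f"request about {cpu:.0f} CPU minutes and {mem:.1f} GB for a 10M-voxel image")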

  3. PERFORMANCE IMPROVEMENT IN CLOUD COMPUTING USING RESOURCE CLUSTERING

    Directory of Open Access Journals (Sweden)

    G. Malathy

    2013-01-01

    Cloud computing is a computing paradigm in which various tasks are assigned to a combination of connections, software and services that can be accessed over the network. The computing resources and services can be efficiently delivered and utilized, making the vision of computing as a utility realizable. In various applications, the execution of services with a large number of tasks has to be performed with minimum inter-task communication. The applications are likely to exhibit different patterns and levels, and the distributed resources organize into various topologies for information and query dissemination. In a distributed system, resource discovery is a significant process for finding appropriate nodes, and earlier resource discovery mechanisms in cloud systems rely only on recent observations. In this study, resource usage distributions for groups of nodes with identical resource usage patterns are identified and kept as clusters, an approach named resource clustering. The resource clustering approach is modeled using CloudSim, a toolkit for modeling and simulating cloud computing environments, and the evaluation improves the performance of the system in the usage of the resources. Results show that resource clusters are able to provide high accuracy for resource discovery.
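
    Grouping nodes by usage pattern can be done, for instance, by k-means clustering of their recent CPU/memory/network utilisation vectors, as in the sketch below; the synthetic utilisation data and the choice of three clusters are illustrative and not the clustering procedure of the paper.

        # Group nodes with similar resource-usage patterns by k-means on their
        # (cpu, memory, network) utilisation vectors. Data are synthetic.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        idle    = rng.normal([0.1, 0.2, 0.05], 0.03, size=(20, 3))
        cpu_hog = rng.normal([0.9, 0.4, 0.10], 0.03, size=(20, 3))
        io_hog  = rng.normal([0.3, 0.5, 0.80], 0.03, size=(20, 3))
        usage = np.vstack([idle, cpu_hog, io_hog]).clip(0.0, 1.0)

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(usage)
        for k in range(3):
            members = usage[labels == k]
            print(f"cluster {k}: {len(members)} nodes, mean usage {members.mean(axis=0).round(2)}")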

  4. Resource Centered Computing delivering high parallel performance

    OpenAIRE

    2014-01-01

    Modern parallel programming requires a combination of different paradigms, expertise and tuning, that correspond to the different levels in today's hierarchical architectures. To cope with the inherent difficulty, ORWL (ordered read-write locks) presents a new paradigm and toolbox centered around local or remote resources, such as data, processors or accelerators. ORWL programmers describe their computation in terms of access to these resources during critical sections. Exclu...
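
    ORWL itself is a C toolbox; the sketch below only illustrates the underlying idea of ordered access to shared resources: every task acquires the locks of the resources it needs in one fixed global order, which rules out deadlock between critical sections. The resource names, plain mutexes and worker bodies are invented for the illustration.

        # Deadlock-free critical sections by taking resource locks in a fixed
        # global order (the ordering idea behind ORWL, shown with plain mutexes).
        import threading

        RESOURCES = {name: threading.Lock() for name in ("matrix_A", "matrix_B", "gpu0")}

        def critical_section(task: str, needed: list[str]) -> None:
            ordered = sorted(needed)        # same global order for every task
            for name in ordered:
                RESOURCES[name].acquire()
            try:
                print(f"{task}: working with {ordered}")
            finally:
                for name in reversed(ordered):
                    RESOURCES[name].release()

        if __name__ == "__main__":
            t1 = threading.Thread(target=critical_section, args=("task1", ["gpu0", "matrix_A"]))
            t2 = threading.Thread(target=critical_section, args=("task2", ["matrix_A", "gpu0"]))
            t1.start(); t2.start(); t1.join(); t2.join()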

  5. Resource management in utility and cloud computing

    CERN Document Server

    Zhao, Han

    2013-01-01

    This SpringerBrief reviews the existing market-oriented strategies for economically managing resource allocation in distributed systems. It describes three new schemes that address cost-efficiency, user incentives, and allocation fairness with regard to different scheduling contexts. The first scheme, taking the Amazon EC2 market as a case study, investigates the optimal resource rental planning models based on linear integer programming and stochastic optimization techniques. This model is useful to explore the interaction between the cloud infrastructure provider and the cloud resource c

  6. LHCb Computing Resources: 2018 requests and preview of 2019 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents a reassessment of computing resources needed by LHCb in 2018 and a preview of computing requests for 2019, as resulting from the current experience of Run2 data taking and recent changes in the LHCb computing model parameters.

  7. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Today cloud computing has become a key technology for online allotment of computing resources and online storage of user data at a lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there is a growing need for resource management strategies in a cloud computing environment that encompass both end-user satisfaction and a high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is the matchmaking that allocates incoming tasks to suitable virtual machines. The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment, in order to satisfy the requirements of users and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment, so this paper proposes a dynamic weight active monitor (DWAM) load balancing algorithm, which allocates the incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which demonstrates the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results show that the proposed algorithm dramatically improves response time and data processing time and makes better use of resources compared with the Active Monitor and VM-assign algorithms.
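
    The matchmaking step can be pictured as sending each incoming request to the virtual machine with the lowest current load relative to its capacity weight, as in the toy dispatcher below; the VM capacities and request sizes are invented and this is not the DWAM algorithm itself.

        # Toy weighted dispatcher: each request goes to the VM with the smallest
        # load-to-capacity ratio. Capacities and request sizes are illustrative.
        def dispatch(requests, capacities):
            load = {vm: 0.0 for vm in capacities}
            placement = []
            for size in requests:
                vm = min(load, key=lambda v: load[v] / capacities[v])
                load[vm] += size
                placement.append(vm)
            return placement, load

        if __name__ == "__main__":
            vms = {"vm-small": 1.0, "vm-medium": 2.0, "vm-large": 4.0}
            placement, load = dispatch([5, 3, 8, 2, 7, 4, 6], vms)
            print("placement:", placement)
            print("final load:", load)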

  8. Research on Cloud Computing Resources Provisioning Based on Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Zhiping Peng

    2015-01-01

    As one of the core issues in cloud computing, resource management adopts virtualization technology to shield the underlying resource heterogeneity and complexity, so that the massive distributed resources form a unified giant resource pool. Efficient resource provisioning can be achieved through rational resource management methods and techniques; therefore, how to manage cloud computing resources effectively becomes a challenging research topic. By analyzing the execution of a user job in the cloud computing environment, we propose a novel resource provisioning scheme based on reinforcement learning and queuing theory. With the introduction of the concepts of Segmentation Service Level Agreement (SSLA) and Utilization Unit Time Cost (UUTC), we view the resource provisioning problem in cloud computing as a sequential decision issue, design a novel optimization objective function and employ reinforcement learning to solve it. Experimental results not only demonstrate the effectiveness of the proposed scheme, but also show that it outperforms common methods in resource utilization rate, SLA collision avoidance and user costs.
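
    A minimal flavour of the approach is a tabular Q-learning loop for a scaling decision (remove a VM, keep, or add one) against a toy reward that trades an SLA penalty off against the cost of the provisioned machines. The environment, reward shape and parameters below are invented and far simpler than the SSLA/UUTC formulation of the paper.

        # Minimal tabular Q-learning for a toy auto-scaling problem:
        # state = number of provisioned VMs, actions = remove / keep / add.
        import random

        ACTIONS = (-1, 0, 1)
        MAX_VMS = 10

        def reward(vms: int, load: int) -> float:
            sla_penalty = 5.0 * max(0, load - vms)   # unserved load violates the SLA
            cost = 1.0 * vms                         # unit cost per provisioned VM
            return -(sla_penalty + cost)

        q = {(s, a): 0.0 for s in range(MAX_VMS + 1) for a in ACTIONS}
        alpha, gamma, eps = 0.1, 0.9, 0.2
        state = 1
        random.seed(0)

        for _ in range(20_000):
            load = random.randint(2, 6)              # fluctuating demand
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt = min(MAX_VMS, max(0, state + action))
            target = reward(nxt, load) + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt

        for s in range(8):
            best = max(ACTIONS, key=lambda a: q[(s, a)])
            print(f"with {s} VMs the learned policy says:",
                  {-1: "remove one", 0: "keep", 1: "add one"}[best])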

  9. Resource Provisioning in SLA-Based Cluster Computing

    Science.gov (United States)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of service (QoS) targets and a fee agreed between a customer and an application service provider; it plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and the fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
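
    The provisioning question, how many identical nodes are needed so that (say) the 95th-percentile response time stays under a target, can be answered with a small FCFS multi-server simulation and a linear search over the node count, as sketched below; the arrival rate, service time and target are invented, and this is not the authors' model.

        # Smallest number of identical servers whose simulated 95th-percentile
        # response time meets a target (Poisson arrivals, exponential service).
        import heapq
        import random

        def p95_response(servers, arrival_rate, mean_service, n_jobs=50_000):
            random.seed(1)
            free_at = [0.0] * servers              # when each server becomes free
            heapq.heapify(free_at)
            t, responses = 0.0, []
            for _ in range(n_jobs):
                t += random.expovariate(arrival_rate)          # next arrival
                start = max(t, heapq.heappop(free_at))         # wait if all busy
                finish = start + random.expovariate(1.0 / mean_service)
                heapq.heappush(free_at, finish)
                responses.append(finish - t)
            responses.sort()
            return responses[int(0.95 * len(responses))]

        if __name__ == "__main__":
            target = 5.0                                        # seconds
            for c in range(1, 16):
                p95 = p95_response(c, arrival_rate=4.0, mean_service=1.0)
                if p95 <= target:
                    print(f"{c} servers suffice: simulated p95 = {p95:.2f} s")
                    break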

  10. Resource-efficient linear optical quantum computation.

    Science.gov (United States)

    Browne, Daniel E; Rudolph, Terry

    2005-07-01

    We introduce a scheme for linear optics quantum computation that makes no use of teleported gates and requires stable interferometry over only the coherence length of the photons. We achieve a much greater degree of efficiency and a simpler implementation than previous proposals. We follow the "cluster state" measurement-based quantum computational approach, and show how cluster states may be efficiently generated from pairs of maximally polarization-entangled photons using linear optical elements. We demonstrate the universality and usefulness of generic parity measurements, as well as introducing the use of redundant encoding of qubits to enable utilization of destructive measurements, both features of use in a more general context.

  11. Resource requirements for digital computations on electrooptical systems.

    Science.gov (United States)

    Eshaghian, M M; Panda, D K; Kumar, V K

    1991-03-10

    In this paper we study the resource requirements of electrooptical organizations in performing digital computing tasks. We define a generic model of parallel computation using optical interconnects, called the optical model of computation (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Using this model we derive relationships between information transfer and computational resources in solving a given problem. To illustrate our results, we concentrate on a computationally intensive operation, 2-D digital image convolution. Irrespective of the input/output scheme and the order of computation, we show a lower bound of Ω(nw) on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  12. Resource requirements for digital computations on electrooptical systems

    Science.gov (United States)

    Eshaghian, Mary M.; Panda, Dhabaleswar K.; Kumar, V. K. Prasanna

    1991-03-01

    The resource requirements of electrooptical organizations in performing digital computing tasks are studied via a generic model of parallel computation using optical interconnects, called the 'optical model of computation' (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Relationships between information transfer and computational resources in solving a given problem are derived. A computationally intensive operation, two-dimensional digital image convolution is undertaken. Irrespective of the input/output scheme and the order of computation, a lower bound of Omega(nw) is obtained on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  13. Data-centric computing on distributed resources

    NARCIS (Netherlands)

    Cushing, R.S.

    2015-01-01

    Distributed computing has always been a challenge due to the NP-completeness of finding optimal underlying management routines. The advent of big data increases the dimensionality of the problem whereby data partitionability, processing complexity and locality play a crucial role in the effectiveness

  14. Allocation Strategies of Virtual Resources in Cloud-Computing Networks

    Directory of Open Access Journals (Sweden)

    D.Giridhar Kumar

    2014-11-01

    In distributed computing, cloud computing facilitates a pay-per-use model that scales with user demand and requirements. A collection of virtual machines, including both computational and storage resources, forms the cloud. In cloud computing, the main objective is to provide efficient access to remote and geographically distributed resources. The cloud faces many challenges, one of them being the scheduling/allocation problem. Scheduling refers to a set of policies to control the order of work to be performed by a computer system; a good scheduler adapts its allocation strategy according to the changing environment and the type of task. In this paper we examine FCFS and Round Robin scheduling, in addition to a linear integer programming approach to resource allocation.

  15. A global resource for computational chemistry

    OpenAIRE

    2004-01-01

    Describes the creation and curation of the ca. 200,000 molecules and calculations deposited in this collection (WWMM). A modular, distributable system has been built for high-throughput computation of molecular structures and properties. It has been used to process 250K compounds from the NCI database and to make the results searchabl...

  16. Cloud Scheduler: a resource manager for distributed compute clouds

    CERN Document Server

    Armstrong, P; Bishop, A; Charbonneau, A; Desmarais, R; Fransham, K; Hill, N; Gable, I; Gaudet, S; Goliath, S; Impey, R; Leavett-Brown, C; Ouellete, J; Paterson, M; Pritchet, C; Penfold-Brown, D; Podaima, W; Schade, D; Sobie, R J

    2010-01-01

    The availability of Infrastructure-as-a-Service (IaaS) computing clouds gives researchers access to a large set of new resources for running complex scientific applications. However, exploiting cloud resources for large numbers of jobs requires significant effort and expertise. In order to make it simple and transparent for researchers to deploy their applications, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. Cloud Scheduler boots and manages the user-customized virtual machines in response to a user's job submission. We describe the motivation and design of the Cloud Scheduler and present results on its use on both science and commercial clouds.
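
    The described behaviour (booting user-customized VMs in response to queued jobs) can be approximated by a simple polling loop. The sketch below is a toy illustration only, not Cloud Scheduler's code; `boot_vm`, the queue structures and the limits are hypothetical.

```python
import time

# Toy sketch of a job-driven VM manager loop (assumptions: in-memory queues and
# a hypothetical boot_vm() callable standing in for an IaaS API).
def manage(job_queue, running_vms, boot_vm, max_vms=10, poll_s=0.01, cycles=3):
    for _ in range(cycles):                     # a real manager would loop until shut down
        capacity = max_vms - len(running_vms)
        for _ in range(min(len(job_queue), capacity)):
            job = job_queue.pop(0)              # one user-customised VM per waiting job
            running_vms.append((job, boot_vm()))
        time.sleep(poll_s)                      # wait before polling the queue again

if __name__ == "__main__":
    counter = iter(range(100))
    boot = lambda: f"vm-{next(counter)}"        # stand-in for a cloud "create instance" call
    vms = []
    manage(job_queue=["job-a", "job-b"], running_vms=vms, boot_vm=boot)
    print(vms)                                  # [('job-a', 'vm-0'), ('job-b', 'vm-1')]
```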

  17. Computer Usage as Instructional Resources for Vocational Training in Nigeria

    Science.gov (United States)

    Oguzor, Nkasiobi Silas

    2011-01-01

    The use of computers has become the driving force in the delivery of instruction of today's vocational education and training (VET) in Nigeria. Though computers have become an increasingly accessible resource for educators to use in their teaching activities, most teachers are still unable to integrate it in their teaching and learning processes.…

  18. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  19. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  20. Shared resource control between human and computer

    Science.gov (United States)

    Hendler, James; Wilson, Reid

    1989-01-01

    The advantages of an AI system of actively monitoring human control of a shared resource (such as a telerobotic manipulator) are presented. A system is described in which a simple AI planning program gains efficiency by monitoring human actions and recognizing when the actions cause a change in the system's assumed state of the world. This enables the planner to recognize when an interaction occurs between human actions and system goals, and allows maintenance of an up-to-date knowledge of the state of the world and thus informs the operator when human action would undo a goal achieved by the system, when an action would render a system goal unachievable, and efficiently replans the establishment of goals after human intervention.

  1. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  2. Computer Resources Handbook for Flight Critical Systems.

    Science.gov (United States)

    1985-01-01


  3. CPT White Paper on Tier-1 Computing Resource Needs

    CERN Document Server

    CERN. Geneva. CPT Project

    2006-01-01

    In the summer of 2005, CMS like the other LHC experiments published a Computing Technical Design Report (C-TDR) for the LHCC, which describes the CMS computing models as a distributed system of Tier-0, Tier-1, and Tier-2 regional computing centers, and the CERN analysis facility, the CMS-CAF. The C-TDR contains information on resource needs for the different computing tiers that are derived from a set of input assumptions and desiderata on how to achieve high-throughput and a robust computing environment. At the CERN Computing Resources Review Board meeting in October 2005, the funding agencies agreed on a Memorandum of Understanding (MoU) describing the worldwide collaboration on LHC computing (WLCG). In preparation for this meeting the LCG project had put together information from countries regarding their pledges for computing resources at Tier-1 and Tier-2 centers. These pledges include the amount of CPU power, disk storage, tape storage library space, and network connectivity for each of the LHC experime...

  4. Dynamic computing resource allocation in online flood monitoring and prediction

    Science.gov (United States)

    Kuchar, S.; Podhoranyi, M.; Vavrik, R.; Portero, A.

    2016-08-01

    This paper presents tools and methodologies for dynamic allocation of high performance computing resources during operation of the Floreon+ online flood monitoring and prediction system. The resource allocation is done throughout the execution of supported simulations to meet the required service quality levels for system operation. It also ensures flexible reactions to changing weather and flood situations, as it is not economically feasible to operate online flood monitoring systems in the full performance mode during non-flood seasons. Different service quality levels are therefore described for different flooding scenarios, and the runtime manager controls them by allocating only minimal resources currently expected to meet the deadlines. Finally, an experiment covering all presented aspects of computing resource allocation in rainfall-runoff and Monte Carlo uncertainty simulation is performed for the area of the Moravian-Silesian region in the Czech Republic.

  5. Application-adaptive resource scheduling in a computational grid

    Institute of Scientific and Technical Information of China (English)

    LUAN Cui-ju; SONG Guang-hua; ZHENG Yao

    2006-01-01

    Selecting appropriate resources for running a job efficiently is one of the common objectives in a computational grid. Resource scheduling should consider the specific characteristics of the application, and decide the metrics to be used accordingly. This paper presents a distributed resource scheduling framework mainly consisting of a job scheduler and a local scheduler. In order to meet the requirements of different applications, we adopt HGSA, a Heuristic-based Greedy Scheduling Algorithm, to schedule jobs in the grid, where the heuristic knowledge is the metric weights of the computing resources and the metric workload impact factors. The metric weight is used to control the effect of the metric on the application. For different applications, only metric weights and the metric workload impact factors need to be changed, while the scheduling algorithm remains the same. Experimental results are presented to demonstrate the adaptability of the HGSA.
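
    As a rough illustration of such heuristic knowledge (not the HGSA itself), the sketch below greedily picks the resource with the best weighted score; the metric names, weights and workload impact factors are invented.

```python
# Minimal sketch: greedy selection of the resource with the best weighted score.
# Metric names, weights and values are hypothetical, not those of the HGSA paper.
def score(resource, weights, impact):
    # Higher CPU/bandwidth is better; load carries a negative weight.
    return sum(weights[m] * impact.get(m, 1.0) * resource[m] for m in weights)

def greedy_schedule(resources, weights, impact):
    return max(resources, key=lambda r: score(r, weights, impact))

if __name__ == "__main__":
    resources = [
        {"name": "siteA", "cpu": 0.8, "bandwidth": 0.6, "load": 0.7},
        {"name": "siteB", "cpu": 0.5, "bandwidth": 0.9, "load": 0.2},
    ]
    weights = {"cpu": 0.5, "bandwidth": 0.3, "load": -0.2}   # application-specific weights
    impact = {"load": 1.5}                                   # workload impact factor
    print(greedy_schedule(resources, weights, impact)["name"])
```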

  6. EST analysis pipeline: use of distributed computing resources.

    Science.gov (United States)

    González, Francisco Javier; Vizcaíno, Juan Antonio

    2011-01-01

    This chapter describes how a pipeline for the analysis of expressed sequence tag (EST) data can be implemented, based on our previous experience generating ESTs from Trichoderma spp. We focus on key steps in the workflow, such as the processing of raw data from the sequencers, the clustering of ESTs, and the functional annotation of the sequences using BLAST, InterProScan, and BLAST2GO. Some of the steps require the use of intensive computing power. Since these resources are not available for small research groups or institutes without bioinformatics support, an alternative will be described: the use of distributed computing resources (local grids and Amazon EC2).

  7. Using OSG Computing Resources with (iLC)Dirac

    CERN Document Server

    Sailer, Andre

    2017-01-01

    CPU cycles for small experiments and projects can be scarce, thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which directly submit to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were develo...

  8. Load Balancing in Local Computational Grids within Resource Allocation Process

    Directory of Open Access Journals (Sweden)

    Rouhollah Golmohammadi

    2012-11-01

    Full Text Available A suitable resource allocation method in computational grids should schedule resources in a way that meets the requirements of both the users and the resource providers; i.e., the maximum number of tasks should be completed within their time and budget constraints, and the received load should be distributed equally between resources. This is a decision-making problem in which the scheduler must select one resource among all available ones. It is a multi-criteria decision-making problem, because different properties of the resources affect the decision. The goal of this decision-making process is balancing the load and completing the tasks within their defined constraints. The proposed algorithm is an analytic hierarchy process based Resource Allocation (ARA) method. This method estimates a preference value for each resource and then selects the appropriate resource based on the allocated values. The simulations show that the ARA method decreases the task failure rate by at least 48% and increases the balance factor by more than 3.4%.

  9. A Semi-Preemptive Computational Service System with Limited Resources and Dynamic Resource Ranking

    Directory of Open Access Journals (Sweden)

    Fang-Yie Leu

    2012-03-01

    Full Text Available In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current length of waiting queue, CPU utilization and bandwidth. With the scores, resource nodes are classified into three levels. User requests, based on their time constraints, are also classified into three types. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for a request of a higher type, it preempts the resource that is executing a lower type request so that the request of the higher type can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), the users of which can then exploit the convenient mobile network services and the wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
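
    A toy sketch of the two ideas summarized above, node scoring with level classification and semi-preemption of a lower-type request, follows; the scoring weights, thresholds and node fields are assumptions, not the paper's values.

```python
# Toy sketch of SePCS-style node levelling and semi-preemption.
# Weights, thresholds and node fields are assumptions for illustration only.
def node_score(n):
    return (0.3 * n["cpu"] + 0.2 * n["mem"] + 0.2 * n["bandwidth"]
            - 0.2 * n["queue"] - 0.1 * n["utilisation"])

def level(score):
    return 1 if score > 0.5 else 2 if score > 0.2 else 3     # level 1 = best nodes

def assign(request_type, nodes):
    """request_type 1 (tightest deadline) .. 3; in this toy a level-L node may
    serve any request of type >= L, and lower-type work can be preempted."""
    for n in sorted(nodes, key=node_score, reverse=True):
        if level(node_score(n)) <= request_type and n["running"] is None:
            n["running"] = request_type
            return n, None
    # No free node: preempt the node running the least urgent (largest type) request.
    victim = max((n for n in nodes if n["running"]), key=lambda n: n["running"], default=None)
    if victim and victim["running"] > request_type:
        preempted, victim["running"] = victim["running"], request_type
        return victim, preempted
    return None, None

if __name__ == "__main__":
    nodes = [{"cpu": .9, "mem": .8, "bandwidth": .7, "queue": .1, "utilisation": .2, "running": None},
             {"cpu": .4, "mem": .5, "bandwidth": .5, "queue": .6, "utilisation": .7, "running": 3}]
    print(assign(1, nodes))   # the high-score idle node takes the type-1 request
```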

  10. A Review on Modern Distributed Computing Paradigms: Cloud Computing, Jungle Computing and Fog Computing

    OpenAIRE

    Hajibaba, Majid; Gorgin, Saeid

    2014-01-01

    Distributed computing attempts to improve performance in large-scale computing problems by resource sharing. Moreover, rising low-cost computing power coupled with advances in communications/networking and the advent of big data now enables new distributed computing paradigms such as Cloud, Jungle and Fog computing. Cloud computing brings a number of advantages to consumers in terms of accessibility and elasticity. It is based on centralization of resources that possess huge processing po...

  11. GridFactory - Distributed computing on ephemeral resources

    DEFF Research Database (Denmark)

    Orellana, Frederik; Niinimaki, Marko

    2011-01-01

    A novel batch system for high throughput computing is presented. The system is specifically designed to leverage virtualization and web technology to facilitate deployment on cloud and other ephemeral resources. In particular, it implements a security model suited for forming collaborations...

  12. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    OpenAIRE

    Cirasella, Jill

    2009-01-01

    This article is an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news.

  13. Active resources concept of computation for enterprise software

    Directory of Open Access Journals (Sweden)

    Koryl Maciej

    2017-06-01

    Full Text Available Traditional computational models for enterprise software are still to a great extent centralized. However, the rapid growth of modern computation techniques and frameworks means that contemporary software is becoming more and more distributed. Towards the development of a new complete and coherent solution for distributed enterprise software construction, a synthesis of three well-grounded concepts is proposed: the Domain-Driven Design technique of software engineering, the REST architectural style and the actor model of computation. As a result a new resources-based framework arises, which after the first cases of use seems to be useful and worthy of further research.

  14. Computing Resource And Work Allocations Using Social Profiles

    Directory of Open Access Journals (Sweden)

    Peter Lavin

    2013-01-01

    Full Text Available If several distributed and disparate computer resources exist, many of which have been created for different and diverse reasons, and several large-scale computing challenges also exist with similar diversity in their backgrounds, then one problem which arises in trying to assemble enough of these resources to address such challenges is the need to align and accommodate the different motivations and objectives which may lie behind the existence of both the resources and the challenges. Software agents are offered as a mainstream technology for modelling the types of collaborations and relationships needed to do this. As an initial step towards forming such relationships, agents need a mechanism to consider social and economic backgrounds. This paper explores addressing social and economic differences using a combination of textual descriptions known as social profiles and search engine technology, both of which are integrated into an agent technology.

  15. Exploiting multicore compute resources in the CMS experiment

    Science.gov (United States)

    Ramírez, J. E.; Pérez-Calero Yzquierdo, A.; Hernández, J. M.; CMS Collaboration

    2016-10-01

    CMS has developed a strategy to efficiently exploit the multicore architecture of the compute resources accessible to the experiment. A coherent use of the multiple cores available in a compute node yields substantial gains in terms of resource utilization. The implemented approach makes use of the multithreading support of the event processing framework and the multicore scheduling capabilities of the resource provisioning system. Multicore slots are acquired and provisioned by means of multicore pilot agents which internally schedule and execute single and multicore payloads. Multicore scheduling and multithreaded processing are currently used in production for online event selection and prompt data reconstruction. More workflows are being adapted to run in multicore mode. This paper presents a review of the experience gained in the deployment and operation of the multicore scheduling and processing system, the current status and future plans.

  16. The Grid Resource Broker, A Ubiquitous Grid Computing Framework

    Directory of Open Access Journals (Sweden)

    Giovanni Aloisio

    2002-01-01

    Full Text Available Portals to computational/data grids provide the scientific community with a friendly environment in order to solve large-scale computational problems. The Grid Resource Broker (GRB is a grid portal that allows trusted users to create and handle computational/data grids on the fly exploiting a simple and friendly web-based GUI. GRB provides location-transparent secure access to Globus services, automatic discovery of resources matching the user's criteria, selection and scheduling on behalf of the user. Moreover, users are not required to learn Globus and they do not need to write specialized code or to rewrite their existing legacy codes. We describe GRB architecture, its components and current GRB features addressing the main differences between our approach and related work in the area.

  17. Computing Bounds on Resource Levels for Flexible Plans

    Science.gov (United States)

    Muscvettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations, one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · O(maxflow(N))), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and O(maxflow(N)) is the measure of complexity (and thus of cost) of a maximum-flow…

  18. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    CERN Document Server

    Buyya, Rajkumar; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., the hardware, power units, cooling and software), and holistically work to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of ...

  19. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    Science.gov (United States)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie; Atlas Collaboration

    2014-06-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. The new cloud technologies also come with new challenges, and one such is the contextualization of computing resources with regard to requirements of the user and his experiment. In particular on Google's new cloud platform Google Compute Engine (GCE) upload of user's virtual machine images is not possible. This precludes application of ready to use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  20. A huge presacral Tarlov cyst. Case report.

    Science.gov (United States)

    Ishii, Kazuhiko; Yuzurihara, Masahito; Asamoto, Shunji; Doi, Hiroshi; Kubota, Motoo

    2007-08-01

    Perineural cysts have become a common incidental finding during lumbosacral magnetic resonance (MR) imaging. Only some of the symptomatic cysts warrant treatment. The authors describe the successful operative treatment of a patient with, to the best of their knowledge, the largest perineural cyst reported to date. A 29-year-old woman had been suffering from long-standing constipation and low-back pain. During an obstetric investigation for infertility, the clinician discovered a huge presacral cystic mass. Computed tomography myelography showed the lesion to be a huge Tarlov cyst arising from the left S-3 nerve root and compressing the ipsilateral S-2 nerve. The cyst was successfully treated by ligation of the cyst neck together with sectioning of the S-3 nerve root. Postoperative improvement in her symptoms and MR imaging findings were noted. Identification of the nerve root involved by the cyst wall, operative indication, operative procedure, and treatment of multiple cysts are important preoperative considerations.

  1. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).

  2. Common accounting system for monitoring the ATLAS Distributed Computing resources

    CERN Document Server

    Karavakis, E; The ATLAS collaboration; Campana, S; Gayazov, S; Jezequel, S; Saiz, P; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  3. Research on message resource optimization in computer supported collaborative design

    Institute of Scientific and Technical Information of China (English)

    张敬谊; 张申生; 陈纯; 王波

    2004-01-01

    An adaptive mechanism is presented to reduce bandwidth usage and to optimize the use of the computing resources of the heterogeneous computer mixes utilized in CSCD, so as to reach the goal of collaborative design in distributed-synchronous mode. The mechanism is realized on a C/S architecture based on operation information sharing. Firstly, messages are aggregated into packets on the client. Secondly, an outgoing-message weight priority queue with a traffic adjusting technique is cached on the server. Thirdly, an incoming-message queue is cached on the client. Finally, the results of implementing the proposed scheme in a simple collaborative design environment are presented.
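
    A minimal sketch of the first two steps described above, client-side aggregation of operation messages into packets and a server-side weighted priority queue for outgoing packets, is given below; the message fields and packet size are invented.

```python
import heapq

# Minimal sketch (assumption: messages are dicts with 'op' and 'priority' fields).
def aggregate(messages, max_packet=4):
    """Client side: group messages into packets of at most max_packet operations."""
    return [messages[i:i + max_packet] for i in range(0, len(messages), max_packet)]

class OutgoingQueue:
    """Server side: weight-based priority queue for outgoing packets."""
    def __init__(self):
        self._heap, self._seq = [], 0
    def push(self, packet, weight):
        # Higher weight = sent earlier; seq preserves insertion order among equal weights.
        heapq.heappush(self._heap, (-weight, self._seq, packet))
        self._seq += 1
    def pop(self):
        return heapq.heappop(self._heap)[2]

if __name__ == "__main__":
    msgs = [{"op": f"edit-{i}", "priority": i % 3} for i in range(6)]
    q = OutgoingQueue()
    for pkt in aggregate(msgs):
        q.push(pkt, weight=max(m["priority"] for m in pkt))
    print(q.pop())   # the packet with the highest weight is sent first
```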

  4. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

    Full Text Available By leveraging cloud computing such as Infrastructure as a Service (IaaS), the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet applications. With this increasing trend, resource allocation that both assures QoS via a Service Level Agreement (SLA) and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming services. On the other hand, providers of IaaS are also concerned with their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource allocation problem as a Multicriteria Decision Making problem and propose an effective trade-off approach based on a goal programming model. To validate its effectiveness, a cloud architecture for a streaming application is addressed and extensive analysis is performed for the related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.

  5. Energy based Efficient Resource Scheduling in Green Computing

    Directory of Open Access Journals (Sweden)

    B.Vasumathi,

    2015-11-01

    Full Text Available Cloud computing is an evolving area of efficient utilization of computing resources. Data centers accommodating cloud applications consume massive quantities of energy, contributing to high operating expenditures and carbon footprints in the atmosphere. Hence, Green Cloud computing solutions are required not only to save energy for the environment but also to decrease operating charges. In this paper, we focus on the development of an energy-based resource scheduling framework and present an algorithm that considers the synergy between various data center infrastructures (i.e., software, hardware, etc.) and performance. In specific, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation strategies and a scheduling algorithm considering Quality of Service (QoS) outlooks. The performance of the proposed algorithm has been evaluated against the existing energy-based scheduling algorithms. The experimental results demonstrate that this approach is effective in minimizing the cost and energy consumption of Cloud applications, thus moving towards the achievement of Green Clouds.

  6. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    Science.gov (United States)

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, huge IT requirements are imminent due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters with very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated super-computing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond that, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but more importantly are also able to increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for sectors such as life-science and health-care, as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  7. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    CERN Document Server

    Öhman, H; The ATLAS collaboration; Hendrix, V

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. With the new cloud technologies come also new challenges, and one such is the contextualization of cloud resources with regard to requirements of the user and his experiment. In particular on Google's new cloud platform Google Compute Engine (GCE) upload of user's virtual machine images is not possible, which precludes application of ready to use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration, dynamic resource scaling, and high degree of scalability.

  8. Pre-allocation Strategies of Computational Resources in Cloud Computing using Adaptive Resonance Theory-2

    CERN Document Server

    Nair, T R Gopalakrishnan

    2012-01-01

    One of the major challenges of cloud computing is the management of request-response coupling and optimal allocation strategies of computational resources for the various types of service requests. In normal situations, the intelligence required to classify the nature and order of a request using standard methods is insufficient, because requests arrive in a random fashion and are meant for multiple resources with different priority orders and varieties. Hence, it becomes absolutely essential to identify the trends of the different request streams in every category through automatic classification and to organize pre-allocation strategies in a predictive way. This calls for designs of intelligent modes of interaction between the client request and the cloud computing resource manager. This paper discusses a corresponding scheme using Adaptive Resonance Theory-2.

  9. Effective Query Methods for Huge Amounts of Data in a Distributed Cloud Computing Environment

    Institute of Scientific and Technical Information of China (English)

    陈志华

    2015-01-01

    When querying huge amounts of data in a distributed cloud environment, limited bandwidth, limited energy and frequently broken links are common, so traditional query methods, which rely on an adaptive data-distribution mechanism to reduce the volume of data communication, cannot query huge amounts of data effectively. An effective query method for huge amounts of data in a distributed environment, based on dynamic rotation of the query node, is therefore proposed. The network in the distributed cloud computing environment is regarded as a weighted undirected graph, a formula for the per-unit data transmission delay in this environment is given, and the system model and the problem of querying huge amounts of data are analysed. In each round, the node with the highest residual energy is chosen as the query node. When a query request is received, each node senses and collects the data sources in the area it covers, computes and processes them to obtain a result set matching the request, and transmits the data to the query node along its own path; during transmission, each node fuses the data it receives. Simulation results show that the proposed method achieves a high query hit rate.
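
    A toy sketch of the query-node rotation idea follows: each round the node with the highest residual energy becomes the query node, the other nodes forward (and fuse) their readings towards it, and energy drains accordingly; all energy costs and readings are invented numbers.

```python
# Toy sketch of query-node rotation by residual energy; costs are invented.
def pick_query_node(nodes):
    return max(nodes, key=lambda n: n["energy"])

def run_round(nodes, sense_cost=0.5, tx_cost=0.2, rx_cost=0.4):
    qnode = pick_query_node(nodes)
    fused = []
    for n in nodes:
        n["energy"] -= sense_cost                 # sensing / local processing
        if n is not qnode:
            n["energy"] -= tx_cost                # forwarding toward the query node
            fused.append(n["reading"])            # naive fusion: just collect readings
    qnode["energy"] -= rx_cost * len(fused)       # receiving and fusing is the heaviest job
    return qnode["name"], sum(fused) / max(len(fused), 1)

if __name__ == "__main__":
    nodes = [{"name": f"n{i}", "energy": 10.0 - i, "reading": float(i)} for i in range(4)]
    for _ in range(3):
        print(run_round(nodes))                   # the query role rotates as energy drains
```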

  10. Huge music archives on mobile devices

    DEFF Research Database (Denmark)

    Blume, H.; Bischl, B.; Botteck, M.

    2011-01-01

    The availability of huge nonvolatile storage capacities such as flash memory allows large music archives to be maintained even in mobile devices. With the increase in size, manual organization of these archives and manual search for specific music becomes very inconvenient. Automated dynamic...... and difficult to tackle on mobile platforms. Against this background, we provided an overview of algorithms for music classification as well as their computation times and other hardware-related aspects, such as power consumption on various hardware architectures. For mobile platforms such as smartphones...

  11. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.

  12. MADLVF: An Energy Efficient Resource Utilization Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    J.K. Verma

    2014-06-01

    Full Text Available The last few decades have witnessed steep growth in the demand for higher computational power. This is largely due to the shift from the industrial age to the Information and Communication Technology (ICT) age, which was in large part the result of the digital revolution. This trend in demand caused the establishment of large-scale data centers situated at geographically separated locations. These large-scale data centers consume a large amount of electrical energy, which results in very high operating cost and a large amount of carbon dioxide (CO2) emission due to resource underutilization. We propose the MADLVF algorithm to overcome problems such as resource underutilization, high energy consumption, and large CO2 emissions. Further, we present a comparative study between the proposed algorithm and the MADRS algorithms, showing that the proposed methodology outperforms the existing one in terms of energy consumption and the number of VM migrations.

  13. Resources

    Science.gov (United States)


  14. Mobile devices and computing cloud resources allocation for interactive applications

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2017-06-01

    Full Text Available Using mobile devices such as smartphones or iPads for various interactive applications is currently very common. In the case of complex applications, e.g. chess games, the capabilities of these devices are insufficient to run the application in real time. One of the solutions is to use cloud computing. However, this raises an optimization problem of allocating mobile device and cloud resources. An iterative heuristic algorithm for application distribution is proposed. The algorithm minimizes the energy cost of application execution under a constrained execution time.
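
    As a rough illustration of this kind of allocation decision (not the paper's algorithm), the sketch below places each task either locally or in the cloud, preferring the lower-energy option and falling back to the faster one when the deadline would be missed; the per-task time and energy figures are assumptions.

```python
# Toy sketch of mobile-vs-cloud task placement under a deadline constraint.
# The energy/time model below is an assumption for illustration only.
def place_tasks(tasks, deadline):
    """tasks: list of dicts with local/cloud time and energy estimates."""
    plan, total_time, total_energy = [], 0.0, 0.0
    for t in tasks:
        options = [("local", t["t_local"], t["e_local"]),
                   ("cloud", t["t_cloud"], t["e_cloud"])]
        options.sort(key=lambda o: o[2])                  # cheapest energy first
        choice = options[0]
        if total_time + choice[1] > deadline:             # would miss the deadline
            choice = min(options, key=lambda o: o[1])     # take the faster option instead
        plan.append((t["name"], choice[0]))
        total_time += choice[1]
        total_energy += choice[2]
    return plan, total_time, total_energy

if __name__ == "__main__":
    tasks = [{"name": "move-gen", "t_local": 2.0, "e_local": 1.0, "t_cloud": 0.5, "e_cloud": 0.3},
             {"name": "eval",     "t_local": 4.0, "e_local": 2.5, "t_cloud": 1.0, "e_cloud": 0.6}]
    print(place_tasks(tasks, deadline=3.0))
```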

  15. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  16. Enabling Grid Computing resources within the KM3NeT computing model

    Science.gov (United States)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  17. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    Science.gov (United States)

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  18. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    Science.gov (United States)

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  19. An Optimal Solution of Resource Provisioning Cost in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Arun Pandian

    2013-03-01

    Full Text Available In cloud computing, providing optimal resources to users becomes more and more important. Cloud computing users can access a pool of computing resources through the internet. Cloud providers charge for these computing resources based on cloud resource usage. The provided resource plans are reservation and on-demand. The computing resources are provisioned by a cloud resource provisioning model. In this model, the resource cost is high because it is difficult to optimize the resource cost under uncertainty. The uncertainty of the resource provisioning cost consists of the on-demand cost, the reservation cost and the expending cost. This makes it difficult to achieve an optimal solution of the resource provisioning cost in cloud computing. Stochastic Integer Programming is applied to this difficulty to obtain the optimal resource provisioning cost. Two-Stage Stochastic Integer Programming with recourse is applied to handle the complexity of optimization problems under uncertainty. The stochastic program is restated as a Deterministic Equivalent Formulation over the probability distribution of all scenarios to reduce the on-demand cost. Benders Decomposition is applied to break down the resource optimization problem into multiple subproblems, reducing the on-demand cost and the reservation cost. Sample Average Approximation is applied to reduce the number of problem scenarios in the resource optimization problem; this is used to reduce the reservation cost and the expending cost.
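
    A minimal numeric sketch of the underlying two-stage idea, choosing a reservation level up front and paying on-demand for any shortfall per demand scenario, is given below; it simply enumerates reservation levels rather than solving a stochastic integer program, and all prices, demands and probabilities are invented.

```python
# Toy two-stage provisioning sketch: choose a reservation level up front, pay
# on-demand for any shortfall in each demand scenario. Prices, demands and
# probabilities are invented; a real model would use stochastic integer programming.
RESERVED_PRICE, ON_DEMAND_PRICE = 2.0, 5.0          # cost per unit of capacity

def expected_cost(reserved, scenarios):
    cost = reserved * RESERVED_PRICE                # first-stage (reservation) cost
    for prob, demand in scenarios:
        shortfall = max(demand - reserved, 0)
        cost += prob * shortfall * ON_DEMAND_PRICE  # second-stage (recourse) cost
    return cost

def best_reservation(scenarios, max_units=20):
    return min(range(max_units + 1), key=lambda r: expected_cost(r, scenarios))

if __name__ == "__main__":
    scenarios = [(0.5, 4), (0.3, 8), (0.2, 15)]     # (probability, demanded units)
    r = best_reservation(scenarios)
    print(r, round(expected_cost(r, scenarios), 2))
```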

  20. A resource-sharing model based on a repeated game in fog computing

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2017-03-01

    Full Text Available With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.

  1. A resource-sharing model based on a repeated game in fog computing.

    Science.gov (United States)

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.

  2. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    Science.gov (United States)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less-busy, nodes. In accordance with the algorithm (SIDA for short), the load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
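
    A toy sketch of the receiver-initiated transfer rule follows: a node whose queue drops below a low threshold (after finishing a job or on a wakeup timer) pulls one job from the most heavily loaded node above a high threshold; the thresholds are hypothetical and the load indicator is reduced to queue length only.

```python
# Toy sketch of server-initiated (receiver-initiated) load sharing.
# Thresholds and the load indicator (queue length only) are simplifications;
# the described algorithm also folds in the local service-rate ratio.
LOW, HIGH = 1, 4   # hypothetical threshold levels

def maybe_pull(nodes, receiver):
    """Called when `receiver` finishes a job or its wakeup timer fires."""
    if len(nodes[receiver]) > LOW:
        return None                                   # receiver is not under-loaded
    donor = max(nodes, key=lambda n: len(nodes[n]))   # most heavily loaded node
    if len(nodes[donor]) >= HIGH:
        job = nodes[donor].pop()                      # transfer one queued job
        nodes[receiver].append(job)
        return donor, job
    return None

if __name__ == "__main__":
    nodes = {"A": ["j1", "j2", "j3", "j4", "j5"], "B": ["j6"], "C": []}
    print(maybe_pull(nodes, "C"))                     # C pulls a job from A
    print({n: len(q) for n, q in nodes.items()})
```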

  3. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  4. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  5. Pooling resources to decypher the Big Bang

    CERN Multimedia

    Harvey, Fiona

    2003-01-01

    " Work has started on a "virtual supercomputer" that will be the world's second most powerful data processor. The virtual computer will take the form of a "grid", a technology that links many smaller computers to make one huge computing resource" (1/2 page.

  6. Pooling resources to decypher the Big Bang

    CERN Multimedia

    Harvey, F

    2003-01-01

    "Work has started on a "virtual supercomputer" that will be the world's second most powerful data processor. The virtual computer will take the form of a "grid", a technology that links many smaller computers to make one huge computing resource" (1/2 page)

  7. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    Modern-day continued demand for resource-hungry services and applications in the IT sector has led to the development of Cloud computing. A Cloud computing environment involves high-cost infrastructure on the one hand and needs large-scale computational resources on the other hand. These resources need to be provisioned (allocated and scheduled) to the end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  8. The Mechanism of Resource Dissemination and Resource Discovery for Computational Grid

    Institute of Scientific and Technical Information of China (English)

    武秀川; 鞠九滨

    2003-01-01

    Computational Grid is a large-scale distributed computing environment. The resource management of a computational Grid discovers, locates and allocates resources for users within the grid environment when they request these resources. Another case is cooperation among resources in order to finish a large computation. These tasks are accomplished by the mechanisms of resource dissemination and resource discovery in the resource management of the grid system. In this paper, some problems concerning resource dissemination and resource discovery are discussed and analyzed, and future work is proposed.

  9. An Improved Constraint Based Resource Scheduling Approach Using Job Grouping Strategy in Grid Computing

    Directory of Open Access Journals (Sweden)

    Payal Singhal

    2013-01-01

    Full Text Available Grid computing is a collection of distributed resources interconnected by networks to provide a unified virtual computing resource view to the user. An important responsibility of grid computing is resource management and scheduling techniques that allow the user to optimize job completion time and achieve good throughput. Designing an efficient scheduler, and implementing it, is a significant challenge. In this paper, a constraint-based job and resource scheduling algorithm is proposed. Four constraints are taken into account for grouping jobs: resource memory, job memory, job MI (million instructions) and, as the fourth constraint, the L2 cache of the resource. Our implementation reduces processing time by adding this fourth constraint before job groups are allocated to resources for parallel computing. The L2 cache is a small, extremely fast memory that is part of the computer's processor and increases its performance. Using more constraints of the resource and the job can increase efficiency further. The work has been done in MATLAB using the Parallel Computing Toolbox: all constraints are calculated using different functions and jobs are allocated to resources based on them. Resource memory, cache, job memory size and job MI are the key factors used to group jobs according to the available capability of the selected resource, and processing time is used to analyze the feasibility of the algorithm.
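
    A minimal Python sketch of the constraint-based grouping idea is shown below (the paper's own implementation is in MATLAB with the Parallel Computing Toolbox). Job sizes, the resource description and the granularity factor are assumptions for the illustration, and the L2-cache constraint would be added as a further check in the same way as memory.

        # Hypothetical job and resource records; field names and limits are assumptions.
        jobs = [{"id": i, "mi": mi, "mem": mem} for i, (mi, mem) in
                enumerate([(400, 64), (250, 32), (800, 128), (150, 16), (600, 96)])]
        resource = {"mips": 1000, "mem": 256, "l2_cache": 512}   # capability of one grid resource

        def group_jobs(jobs, resource, granularity=1.0):
            """Greedily pack jobs into groups sized to the resource's capability."""
            mi_limit = resource["mips"] * granularity   # MI the resource can absorb per batch
            groups, current, mi_sum, mem_sum = [], [], 0, 0
            for job in jobs:
                if current and (mi_sum + job["mi"] > mi_limit or mem_sum + job["mem"] > resource["mem"]):
                    groups.append(current)              # close the current group, start a new one
                    current, mi_sum, mem_sum = [], 0, 0
                current.append(job)
                mi_sum += job["mi"]
                mem_sum += job["mem"]
            if current:
                groups.append(current)
            return groups

        for g in group_jobs(jobs, resource):
            print([j["id"] for j in g])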

  10. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics has come into wide use; in particular, the success of object-oriented and multimedia technology has promoted the development of graphics within computer software systems. Computer graphics theory and application have therefore become an important topic in computing, and graphics technology is applied in an increasingly wide range of fields. In recent years, with the rapid development of information technology, traditional approaches to communication resource management can no longer meet resource-management needs effectively. Current communication resource management still relies on the original tools and methods for managing and maintaining equipment, which causes many problems: it is difficult for non-professionals to understand the equipment and its status, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. To address these problems, this paper proposes introducing computer graphics technology into communication resource management. Doing so not only makes communication resource management more intuitive, but also reduces the cost of resource management and improves work efficiency.

  11. wolfPAC: building a high-performance distributed computing network for phylogenetic analysis using 'obsolete' computational resources.

    Science.gov (United States)

    Reeves, Patrick A; Friedman, Philip H; Richards, Christopher M

    2005-01-01

    wolfPAC is an AppleScript-based software package that facilitates the use of numerous, remotely located Macintosh computers to perform computationally-intensive phylogenetic analyses using the popular application PAUP* (Phylogenetic Analysis Using Parsimony). It has been designed to utilise readily available, inexpensive processors and to encourage sharing of computational resources within the worldwide phylogenetics community.

  12. A rare clinic entity: Huge trichobezoar.

    Science.gov (United States)

    Hamidi, Hidayatullah; Muhammadi, Marzia; Saberi, Bismillah; Sarwari, Mohammad Arif

    2016-01-01

    Trichobezoar is a rare clinical entity in which a ball of hair amasses within the alimentary tract. It can either be found as an isolated mass in the stomach or may extend into the intestine. Trichobezoars mostly occur in young females with psychiatric disorders such as trichophagia and trichotillomania. The authors present a giant trichobezoar in an 18-year-old female who presented with complaints of an upper abdominal mass, epigastric pain, anorexia and weight loss. The patient underwent trans-abdominal ultrasonography (USG), computed tomography (CT), upper gastrointestinal endoscopy and subsequently laparotomy. USG was inconclusive due to non-specific findings; it revealed a thick echogenic layer with posterior dirty shadowing extending from the left sub-diaphragmatic area to the right sub-hepatic region, obscuring the adjacent structures. Abdominal CT images revealed a huge, well-defined, multi-layered, heterogeneous, solid-appearing, non-enhancing mass lesion in the gastric lumen extending from the gastric fundus to the pyloric canal. An endoscopic attempt was made to remove this intraluminal mass, but because of its large size and hard nature endoscopic removal was unsuccessful. Finally, the large trichobezoar was removed by open laparotomy. Trichobezoars should be suspected in young females with long-standing upper abdominal masses, as the possibility of malignancy is not very common in this age group. Where USG is inconclusive, trichobezoar can be accurately diagnosed with CT. In patients with a huge trichobezoar, laparotomy can be performed first because of the large size and location of the mass, and psychiatric referral should be made to prevent relapse. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    CERN Document Server

    Buyya, Rajkumar; Calheiros, Rodrigo N

    2012-01-01

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services and meet users' quality expectations. Existing resource management systems in data centers are yet to support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system to target the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports integration of market-based provisioning policies and virtualisation technologies for flexible alloc...

  14. Intelligent Classification in Huge Heterogeneous Data Sets

    Science.gov (United States)

    2015-06-01

    Final technical report (June 2015; in-house effort covering July 2013 – April 2015), approved for public release, distribution unlimited. The effort addresses processing signals, reducing data dimensionality, and developing and tailoring algorithms for the extraction of intelligence from several huge heterogeneous data sets.

  15. The Relative Effectiveness of Computer-Based and Traditional Resources for Education in Anatomy

    Science.gov (United States)

    Khot, Zaid; Quinlan, Kaitlyn; Norman, Geoffrey R.; Wainman, Bruce

    2013-01-01

    There is increasing use of computer-based resources to teach anatomy, although no study has compared computer-based learning with traditional methods. In this study, we examine the effectiveness of three formats of anatomy learning: (1) a virtual reality (VR) computer-based module, (2) a static computer-based module providing Key Views (KV), (3) a plastic…

  16. Huge pelvic mass secondary to wear debris causing ureteral obstruction.

    Science.gov (United States)

    Hananouchi, Takehito; Saito, Masanobu; Nakamura, Nobuo; Yamamoto, Tetsuya; Yonenobu, Kazuo

    2005-10-01

    We report an unusual granulomatous reaction to wear debris that produced a huge pelvic mass causing ureteral obstruction. A 72-year-old woman, who had received a cemented total hip arthroplasty 30 years earlier, was referred to the department of gynecology for examination of a pelvic mass. A computed tomography scan revealed a huge homogeneous mass, measuring approximately 20 x 16 x 12 cm, including extensive osteolysis of the left pelvis around the acetabular component. Intravenous pyelogram revealed complete obstruction of the left ureter resulting in hydronephrosis of the left kidney. Histological examination of the biopsy specimen detected polyethylene wear debris in the mass.

  17. INJECT AN ELASTIC GRID COMPUTING TECHNIQUES TO OPTIMAL RESOURCE MANAGEMENT TECHNIQUE OPERATIONS

    Directory of Open Access Journals (Sweden)

    R. Surendran

    2013-01-01

    Full Text Available Resource sharing on the Internet has evolved from the dynamic techniques of grid computing, in which resources are shared across large, worldwide high-performance computing networks. Existing systems offer limited innovation in the resource management process. In the proposed work, grid computing is treated as Internet-based computing for Optimal Resource Management Technique Operations (ORMTO). ORMTO comprise an elastic scheduling algorithm, prediction of the best grid node for a task, fault-tolerant resource selection, resource co-allocation, balanced grid resource matchmaking, agent-based grid services and wireless mobile resource access. The various resource management techniques are surveyed against performance measures such as time, space and energy complexity in order to identify the ORMTO for grid computing. The objectives of ORMTO are to provide efficient automatic resource co-allocation for a user who submits a job without grid knowledge, to design a grid service (portal) that selects the best fault-tolerant resource for a given task in a fast, secure and efficient manner, and to provide an enhanced grid load-balancing system for multi-tasking via hybrid-topology-based grid ranking. Quality of Service (QoS) parameters play an important role in all resource management techniques, and the proposed ORMTO uses a greater number of QoS parameters to enhance existing techniques through improved methods and algorithms.

  18. A case of a huge gastroepiploic arterial aneurysm.

    Science.gov (United States)

    Ikeda, Hirokuni; Takeo, Masahiko; Mikami, Ryuuichi; Yamamoto, Mistuo

    2015-08-05

    An 85-year-old man complaining of vague abdominal discomfort was admitted to our hospital. A pulsatile 8 × 7-cm mass in the right upper abdomen was noticed on clinical examination. Computed tomography of the abdomen showed a huge arterial aneurysm in the right gastroepiploic artery, and the left gastroepiploic artery was meandering and expanding. An image diagnosis of gastroepiploic arterial aneurysm (GEAA) was made. Because of the huge size of the aneurysm and the predicted high risk of perforation, surgical intervention was planned. The aneurysm was identified in the greater curve and was found to adhere firmly to the transverse colon. Partial resection of the stomach, aneurysmectomy and partial resection of the transverse colon were performed. Clinically, splanchnic arterial aneurysms are rare. Among them, GEAA is especially rare. We report a rare case of a huge GEAA that was treated successfully by surgery. Published by Oxford University Press and JSCR Publishing Ltd. All rights reserved. © The Author 2015.

  19. Effective Computer Resource Management: Keeping the Tail from Wagging the Dog.

    Science.gov (United States)

    Sampson, James P., Jr.

    1982-01-01

    Predicts that student services will be increasingly influenced by computer technology. Suggests this resource be managed effectively to minimize potential problems and prevent a mechanistic and impersonal environment. Urges student personnel workers to assume active responsibility for planning, evaluating, and operating computer resources. (JAC)

  20. Economic-based Distributed Resource Management and Scheduling for Grid Computing

    CERN Document Server

    Buyya, R

    2002-01-01

    Computational Grids, emerging as an infrastructure for next-generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. Because the resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that vary with time, the management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service-based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates the users to trade off bet...

  1. Wide-Area Computing: Resource Sharing on a Large Scale

    Science.gov (United States)

    1999-01-01

    Excerpts address fault propagation, a set of useful failure-mode assumptions, and the handling of multilanguage and legacy applications in wide-area resource sharing.

  2. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    This paper analyzes the decision-making problem confronting SMEs considering the adoption of cloud computing as an alternative to in-house computing services provision. The economics of choosing between in-house computing and a cloud alternative is analyzed by comparing the total economic costs o...

  3. Professional Computer Education Organizations--A Resource for Administrators.

    Science.gov (United States)

    Ricketts, Dick

    Professional computer education organizations serve a valuable function by generating, collecting, and disseminating information concerning the role of the computer in education. This report touches briefly on the reasons for the rapid and successful development of professional computer education organizations. A number of attributes of effective…

  4. Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Lingna He

    2012-09-01

    Full Text Available To replace traditional Internet software usage patterns and enterprise management modes, this paper considers cloud computing as a new business computing model in which the resource scheduling strategy is a key technology. Based on a study of the cloud computing system structure and its mode of operation, the paper focuses on job scheduling and resource allocation using an ant colony optimization algorithm, and analyzes and designs a concrete implementation of cloud resource scheduling. Simulation experiments in the CloudSim environment show that the algorithm achieves better scheduling performance and load balance than a general algorithm.
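
    The sketch below illustrates the general shape of ant-colony task-to-VM scheduling of the kind this record describes (the paper's implementation runs in CloudSim). Task lengths, VM speeds and the ACO parameters are invented for the example, and the pheromone update is deliberately simplified.

        import random

        task_len = [400, 250, 800, 150, 600, 300]    # instruction counts per task (assumed)
        vm_speed = [1000, 500, 750]                  # MIPS per VM (assumed)
        ANTS, ITERATIONS, ALPHA, BETA, RHO, Q = 10, 50, 1.0, 2.0, 0.5, 100.0

        # pheromone[t][v]: desirability of placing task t on VM v
        pheromone = [[1.0] * len(vm_speed) for _ in task_len]

        def makespan(assignment):
            load = [0.0] * len(vm_speed)
            for t, v in enumerate(assignment):
                load[v] += task_len[t] / vm_speed[v]
            return max(load)

        def pick_vm(t):
            # Probability proportional to pheromone^ALPHA * heuristic^BETA,
            # where the heuristic favours faster VMs for the given task.
            weights = [(pheromone[t][v] ** ALPHA) * ((vm_speed[v] / task_len[t]) ** BETA)
                       for v in range(len(vm_speed))]
            return random.choices(range(len(vm_speed)), weights=weights)[0]

        best, best_span = None, float("inf")
        for _ in range(ITERATIONS):
            solutions = [[pick_vm(t) for t in range(len(task_len))] for _ in range(ANTS)]
            for row in pheromone:                    # evaporation
                for v in range(len(row)):
                    row[v] *= (1.0 - RHO)
            for sol in solutions:                    # deposit, stronger for shorter makespans
                span = makespan(sol)
                if span < best_span:
                    best, best_span = sol, span
                for t, v in enumerate(sol):
                    pheromone[t][v] += Q / span

        print("best assignment:", best, "makespan:", round(best_span, 2))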

  5. Huge ascending aortic aneurysm with an intraluminal thrombus in an embolic event-free patient.

    Science.gov (United States)

    Parato, Vito Maurizio; Prifti, Edvin; Pezzuoli, Franco; Labanti, Benedetto; Baboci, Arben

    2015-03-01

    We present a case of an 87-year-old male patient with a huge ascending aortic aneurysm, filled by a huge thrombus most probably due to previous dissection. This finding was detected by two-dimensional transthoracic echocardiography and contrast-enhanced computed tomography (CT) angiography scan. The patient refused surgical treatment and was medically treated. Despite the huge and mobile intraluminal thrombus, the patient remained embolic event-free up to 6 years later, and this makes the case unique.

  6. Relational Computing Using HPC Resources: Services and Optimizations

    OpenAIRE

    2015-01-01

    Computational epidemiology involves processing, analysing and managing large volumes of data. Such massive datasets cannot be handled efficiently by using traditional standalone database management systems, owing to their limitation in the degree of computational efficiency and bandwidth to scale to large volumes of data. In this thesis, we address management and processing of large volumes of data for modeling, simulation and analysis in epidemiological studies. Traditionally, compute intens...

  7. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software, hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  8. A data mining system for huge amounts of data based on Microsoft's cloud computing platform

    Institute of Scientific and Technical Information of China (English)

    吴悦

    2015-01-01

    Science and technology are progressing constantly, and users are placing ever higher demands on data mining services. Microsoft's cloud platform has become an important direction in data mining research because it allows cloud applications to be deployed quickly. This paper analyzes the mining of huge amounts of data based on Microsoft's cloud platform and proposes the design of a cloud-platform system for mining massive data, thereby providing a new service mechanism for data mining.

  9. Science and Technology Resources on the Internet: Computer Security.

    Science.gov (United States)

    Kinkus, Jane F.

    2002-01-01

    Discusses issues related to computer security, including confidentiality, integrity, and authentication or availability; and presents a selected list of Web sites that cover the basic issues of computer security under subject headings that include ethics, privacy, kids, antivirus, policies, cryptography, operating system security, and biometrics.…

  10. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Science.gov (United States)

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates cloud computing techniques into the mobile environment, is regarded as one of the enabling technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a cooperative resource provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we allocate the revenues based on each provider's contribution, according to the concept of the "Shapley value", to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
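
    To make the Shapley-value revenue split concrete, the following Python sketch computes Shapley values for a toy coalition of three providers. The characteristic function (the revenue each coalition could earn) is invented for the illustration and is not taken from the paper.

        from itertools import permutations

        providers = ["A", "B", "C"]

        # Hypothetical characteristic function: the revenue a coalition of providers
        # can earn by jointly serving mobile cloud applications (values are made up).
        revenue = {
            frozenset(): 0.0, frozenset("A"): 2.0, frozenset("B"): 3.0, frozenset("C"): 1.0,
            frozenset("AB"): 7.0, frozenset("AC"): 5.0, frozenset("BC"): 6.0, frozenset("ABC"): 10.0,
        }

        def shapley(players, v):
            """Average marginal contribution of each player over all join orders."""
            shares = {p: 0.0 for p in players}
            orders = list(permutations(players))
            for order in orders:
                coalition = frozenset()
                for p in order:
                    shares[p] += v[coalition | {p}] - v[coalition]
                    coalition = coalition | {p}
            return {p: round(s / len(orders), 3) for p, s in shares.items()}

        print(shapley(providers, revenue))   # the shares sum to the grand-coalition revenue of 10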

  11. Computer Simulation and Digital Resources for Plastic Surgery Psychomotor Education.

    Science.gov (United States)

    Diaz-Siso, J Rodrigo; Plana, Natalie M; Stranix, John T; Cutting, Court B; McCarthy, Joseph G; Flores, Roberto L

    2016-10-01

    Contemporary plastic surgery residents are increasingly challenged to learn a greater number of complex surgical techniques within a limited period. Surgical simulation and digital education resources have the potential to address some limitations of the traditional training model, and have been shown to accelerate knowledge and skills acquisition. Although animal, cadaver, and bench models are widely used for skills and procedure-specific training, digital simulation has not been fully embraced within plastic surgery. Digital educational resources may play a future role in a multistage strategy for skills and procedures training. The authors present two virtual surgical simulators addressing procedural cognition for cleft repair and craniofacial surgery. Furthermore, the authors describe how partnerships among surgical educators, industry, and philanthropy can be a successful strategy for the development and maintenance of digital simulators and educational resources relevant to plastic surgery training. It is our responsibility as surgical educators not only to create these resources, but to demonstrate their utility for enhanced trainee knowledge and technical skills development. Currently available digital resources should be evaluated in partnership with plastic surgery educational societies to guide trainees and practitioners toward effective digital content.

  12. Quantum computing with incoherent resources and quantum jumps.

    Science.gov (United States)

    Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R

    2012-04-27

    Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.

  13. National Resource for Computation in Chemistry (NRCC). Attached scientific processors for chemical computations: a report to the chemistry community

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, N.S.

    1980-01-01

    The demands of chemists for computational resources are well known and have been amply documented. The best and most cost-effective means of providing these resources is still open to discussion, however. This report surveys the field of attached scientific processors (array processors) and attempts to indicate their present and possible future use in computational chemistry. Array processors have the possibility of providing very cost-effective computation. This report attempts to provide information that will assist chemists who might be considering the use of an array processor for their computations. It describes the general ideas and concepts involved in using array processors, the commercial products that are available, and the experiences reported by those currently using them. In surveying the field of array processors, the author makes certain recommendations regarding their use in computational chemistry. 5 figures, 1 table (RWR)

  14. iTools: a framework for classification, categorization and integration of computational biology resources.

    Science.gov (United States)

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management

  15. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    Full Text Available The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  16. Optimal Computing Resource Management Based on Utility Maximization in Mobile Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Haoyu Meng

    2017-01-01

    Full Text Available Mobile crowdsourcing, as an emerging service paradigm, enables the computing resource requestor (CRR) to outsource computation tasks to each computing resource provider (CRP). Considering the importance of pricing as an essential incentive to coordinate the real-time interaction between the CRR and CRPs, in this paper we propose an optimal real-time pricing strategy for computing resource management in mobile crowdsourcing. Firstly, we analytically model the behaviors of the CRR and CRPs in the form of carefully selected utility and cost functions, based on concepts from microeconomics. Secondly, we propose a distributed algorithm based on the exchange of control messages, which carry information on computing resource demand/supply and real-time prices. We show that there exist real-time prices that can align individual optimality with systematic optimality. Finally, we also take into account the interaction among CRPs and formulate computing resource management as a game whose Nash equilibrium is achievable via best response. Simulation results demonstrate that the proposed distributed algorithm can potentially benefit both the CRR and CRPs. The coordinator in mobile crowdsourcing can thus use the optimal real-time pricing strategy to manage computing resources towards the benefit of the overall system.
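
    Real-time prices that align individual and systemwide optimality are often computed by iteratively adjusting the price according to the gap between demand and supply. The Python sketch below shows such a price-update loop under assumed quadratic utility and cost functions; the functional forms, parameters and step size are illustrative assumptions, not the paper's model.

        # One requestor (CRR) with utility u(d) = a*d - 0.5*b*d^2 and several
        # providers (CRPs) with cost c_i(s) = 0.5*k_i*s^2; all parameters are assumed.
        a, b = 20.0, 0.5                 # CRR utility coefficients
        k = [0.8, 1.2, 2.0]              # CRP cost coefficients
        cap = [12.0, 10.0, 6.0]          # CRP capacities
        price, step = 1.0, 0.05

        for _ in range(200):
            demand = max((a - price) / b, 0.0)                              # CRR's best response
            supply = [min(price / k_i, c_i) for k_i, c_i in zip(k, cap)]    # each CRP's best response
            excess = demand - sum(supply)
            price = max(price + step * excess, 0.0)                         # raise price when demand exceeds supply

        print(f"clearing price ~ {price:.2f}, demand ~ {demand:.2f}, supply ~ {sum(supply):.2f}")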

  17. Research on the Integration of Huge Amounts of Forestry Data within the Natural Resource and Geospatial Fundamental Database in Fujian Province

    Institute of Scientific and Technical Information of China (English)

    周榕

    2015-01-01

    The construction of a natural resources and geospatial fundamental database is very important for promoting the standardization of spatial information infrastructure and the integration of information resources. Based on the forest resource planning survey database of Fujian Province, this paper introduces the technical route and methods used to integrate huge amounts of forestry data into the natural resources and geospatial fundamental database of Fujian Province.

  18. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    Science.gov (United States)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is two fold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16 week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent T-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in

  19. An Efficient Algorithm for Resource Allocation in Parallel and Distributed Computing Systems

    Directory of Open Access Journals (Sweden)

    S.F. El-Zoghdy

    2013-03-01

    Full Text Available Resource allocation in heterogeneous parallel and distributed computing systems is the process of allocating user tasks to processing elements for execution such that some performance objective is optimized. In this paper, a new resource allocation algorithm for the computing grid environment is proposed. It takes into account the heterogeneity of the computational resources and resolves the single-point-of-failure problem from which many current algorithms suffer. In this algorithm, any site manager receives two kinds of tasks, namely remote tasks arriving from its associated local grid manager and local tasks submitted directly to the site manager by local users in its domain. It allocates the grid workload based on the resources' occupation ratio and the communication cost. The grid overall mean task response time is considered as the main performance metric that needs to be minimized. The simulation results show that the proposed resource allocation algorithm improves the grid overall mean task response time.
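
    A minimal sketch of the site-selection rule described above (allocating work based on the resources' occupation ratio and the communication cost) might look like the following Python fragment; the site parameters and the weighting between the two criteria are invented for the example.

        # Hypothetical grid sites: busy/total processing elements and the communication
        # cost of shipping a task there (all numbers invented for the illustration).
        sites = [
            {"name": "site-1", "busy": 12, "total": 16, "comm_cost": 0.8},
            {"name": "site-2", "busy": 30, "total": 40, "comm_cost": 0.2},
            {"name": "site-3", "busy": 5,  "total": 8,  "comm_cost": 1.5},
        ]

        def allocate(sites, weight=0.5):
            """Pick the site with the best mix of low occupation ratio and low communication cost."""
            def score(site):
                occupation = site["busy"] / site["total"]
                return weight * occupation + (1.0 - weight) * site["comm_cost"]
            return min(sites, key=score)

        print(allocate(sites)["name"])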

  20. Huge mediastinal liposarcoma resected by clamshell thoracotomy: a case report.

    Science.gov (United States)

    Toda, Michihito; Izumi, Nobuhiro; Tsukioka, Takuma; Komatsu, Hiroaki; Okada, Satoshi; Hara, Kantaro; Ito, Ryuichi; Shibata, Toshihiko; Nishiyama, Noritoshi

    2017-12-01

    Liposarcoma is the single most common soft tissue sarcoma. Because mediastinal liposarcomas often grow rapidly and frequently recur locally despite adjuvant chemotherapy and radiotherapy, they require complete excision. Therefore, the feasibility of achieving complete surgical excision must be carefully considered. We here report a case of a huge mediastinal liposarcoma resected via clamshell thoracotomy. A 64-year-old man presented with dyspnea on effort. Cardiomegaly had been diagnosed 6 years previously, but had been left untreated. A computed tomography scan showed a huge (36 cm diameter) anterior mediastinal tumor expanding into the pleural cavities bilaterally. The tumor comprised mostly fatty tissue but contained two solid areas. Echo-guided needle biopsies were performed and a diagnosis of an atypical lipomatous tumor was established by pathological examination of the biopsy samples. Surgical resection was performed via a clamshell incision, enabling en bloc resection of this huge tumor. Although there was no invasion of surrounding organs, the left brachiocephalic vein was resected because it was circumferentially surrounded by tumor and could not be preserved. The tumor weighed 3500 g. Pathologic examination of the resected tumor resulted in a diagnosis of a biphasic tumor comprising dedifferentiated liposarcoma and non-adipocytic sarcoma with necrotic areas. The patient remains free of recurrent tumor 20 months postoperatively. Clamshell incision provides an excellent surgical field and can be performed safely in patients with huge mediastinal liposarcomas.

  1. Justification of Filter Selection for Robot Balancing in Conditions of Limited Computational Resources

    Science.gov (United States)

    Momot, M. V.; Politsinskaia, E. V.; Sushko, A. V.; Semerenko, I. A.

    2016-08-01

    The paper considers the problem of selecting a mathematical filter used for balancing a wheeled robot under conditions of limited computational resources. A solution based on a complementary filter is proposed.
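
    For reference, the standard complementary-filter update that such a balancing robot typically uses fuses the integrated gyroscope rate with the accelerometer tilt estimate, as in the Python sketch below; the blend coefficient, sample period and sensor readings are illustrative assumptions, not values from the paper.

        import math

        ALPHA = 0.98   # blend factor: trust the gyro short-term, the accelerometer long-term (assumed)
        DT = 0.01      # sample period in seconds (assumed)

        def complementary_filter(angle, gyro_rate, accel_x, accel_z):
            """Fuse the integrated gyro rate with the accelerometer tilt estimate."""
            accel_angle = math.atan2(accel_x, accel_z)   # tilt angle from the gravity vector
            return ALPHA * (angle + gyro_rate * DT) + (1.0 - ALPHA) * accel_angle

        angle = 0.0
        for gyro_rate, ax, az in [(0.2, 0.05, 0.99), (0.1, 0.06, 0.99), (0.0, 0.07, 0.99)]:
            angle = complementary_filter(angle, gyro_rate, ax, az)
        print(round(angle, 4))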

  2. Relaxed resource advance reservation policy in grid computing

    Institute of Scientific and Technical Information of China (English)

    XIAO Peng; HU Zhi-gang

    2009-01-01

    The advance reservation technique has been widely applied in many grid systems to provide end-to-end quality of service (QoS). However, it will result in low resource utilization rate and high rejection rate when the reservation rate is high. To mitigate these negative effects brought about by advance reservation, a relaxed advance reservation policy is proposed, which allows accepting new reservation requests that overlap the existing reservations under certain conditions. Both the benefits and the risks of the proposed policy are presented theoretically. The experimental results show that the policy can achieve a higher resource utilization rate and lower rejection rate compared to the conventional reservation policy and backfilling technique. In addition, the policy shows better adaptation when the grid systems are in the presence of a high reservation rate.
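
    The core of the relaxed policy is accepting a new reservation that overlaps existing ones only while the overlapped capacity stays within a tolerated bound. The Python sketch below illustrates one way such an acceptance check could look; the data layout and the overbooking limit are assumptions for the example, not the paper's exact conditions.

        # Existing reservations on one resource: (start, end, fraction of capacity).
        reservations = [(0, 10, 0.6), (5, 15, 0.3)]

        def can_accept(new, reservations, overbook_limit=1.2):
            """Accept a new (start, end, share) request if, over every overlapping interval,
            the total reserved share stays below the relaxed (overbooked) capacity limit."""
            start, end, share = new
            edges = sorted({start, end, *[t for r in reservations for t in r[:2]]})
            for left, right in zip(edges, edges[1:]):
                if left >= end or right <= start:
                    continue                          # interval does not overlap the new request
                load = share + sum(frac for (s0, e0, frac) in reservations
                                   if s0 < right and e0 > left)
                if load > overbook_limit:
                    return False
            return True

        print(can_accept((8, 12, 0.4), reservations))   # rejected or accepted under the relaxed policy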

  3. Research on Digital Agricultural Information Resources Sharing Plan Based on Cloud Computing

    OpenAIRE

    2011-01-01

    Part 1: Decision Support Systems, Intelligent Systems and Artificial Intelligence Applications; International audience; In order to provide the agricultural works with customized, visual, multi-perspective and multi-level active service, we conduct a research of digital agricultural information resources sharing plan based on cloud computing to integrate and publish the digital agricultural information resources efficiently and timely. Based on cloud computing and virtualization technology, w...

  4. Regional research exploitation of the LHC a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

    A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yardstick. Other input parameters were: assumptions for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

  5. Efficient Qos Based Resource Scheduling Using PAPRIKA Method for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Hilda Lawrance

    2013-03-01

    Full Text Available Cloud computing is increasingly used in enterprises and business markets for serving demanding jobs. The performance of resource scheduling in cloud computing is important due to the increase in the number of users, services and types of services. Resource scheduling is influenced by many factors such as CPU speed, memory and bandwidth, and can therefore be modeled as a multi-criteria decision-making problem. This study proposes an efficient QoS-based resource scheduling algorithm using potentially all pair-wise rankings of all possible alternatives (PAPRIKA). The tasks are arranged based on the QoS parameters, and the resources are allocated to the appropriate tasks based on the PAPRIKA method and user satisfaction. The scheduling algorithm was simulated with the CloudSim tool package. The experiment shows that the algorithm reduces task completion time and improves the resource utility rate.
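
    Full PAPRIKA elicits pairwise preference judgements from a decision maker to derive criterion weights, which is beyond a short example; the Python sketch below instead shows a simplified pairwise-dominance ranking of candidate resources over QoS criteria, in the same spirit. The candidates, criteria and values are invented for the illustration.

        # Candidate resources described by QoS criteria where higher is better (values assumed).
        candidates = {
            "vm-a": {"cpu": 0.9, "bandwidth": 0.4, "reliability": 0.7},
            "vm-b": {"cpu": 0.6, "bandwidth": 0.8, "reliability": 0.8},
            "vm-c": {"cpu": 0.5, "bandwidth": 0.5, "reliability": 0.6},
        }

        def pairwise_rank(candidates):
            """Score each candidate by how many pairwise comparisons it wins, per criterion."""
            wins = {name: 0 for name in candidates}
            names = list(candidates)
            for i, a in enumerate(names):
                for b in names[i + 1:]:
                    for crit in candidates[a]:
                        if candidates[a][crit] > candidates[b][crit]:
                            wins[a] += 1
                        elif candidates[a][crit] < candidates[b][crit]:
                            wins[b] += 1
            return sorted(wins, key=wins.get, reverse=True)

        print(pairwise_rank(candidates))   # best-ranked resource first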

  6. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development of Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage in these environments are complex undertakings, due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value, and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed, which contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
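
    Two of the pricing models named above (posted price and tender/auction) can be sketched in a few lines of Python; the providers, prices and bids are invented for the illustration and do not come from the Nimrod/G experiments.

        # Toy illustration of two economic models for valuing a resource slot (numbers made up).
        asks = {"provider-1": 4.0, "provider-2": 3.5, "provider-3": 5.0}   # posted prices per CPU-hour

        def posted_price(asks, budget):
            """Commodity/posted-price model: take the cheapest offer within the consumer's budget."""
            name, price = min(asks.items(), key=lambda kv: kv[1])
            return (name, price) if price <= budget else None

        def reverse_auction(bids):
            """Tender/auction model: providers bid for the job and the lowest bid wins."""
            return min(bids.items(), key=lambda kv: kv[1])

        print(posted_price(asks, budget=4.5))                        # ('provider-2', 3.5)
        print(reverse_auction({"provider-1": 3.9, "provider-2": 3.2}))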

  7. Computers and Resource-Based History Teaching: A UK Perspective.

    Science.gov (United States)

    Spaeth, Donald A.; Cameron, Sonja

    2000-01-01

    Presents an overview of developments in computer-aided history teaching for higher education in the United Kingdom and the United States. Explains that these developments have focused on providing students with access to primary sources to enhance their understanding of historical methods and content. (CMK)

  8. Outcome of Hepatectomy for Huge Hepatocellular Carcinoma.

    Science.gov (United States)

    Jo, Sungho

    2011-05-01

    In spite of the recent improved results of hepatectomy for huge hepatocellular carcinomas (HCC), the prognosis of patients with huge HCCs is still poor compared to that of patients with small HCCs. This study was performed to compare the results of hepatectomy between patients with huge HCCs and those with small HCCs, to identify the prognostic factors in patients with huge HCCs, and to determine the preoperative selection criteria. We retrospectively analyzed 51 patients who underwent hepatectomy, between July 1994 and February 2009 at Dankook University Hospital. Patients with HCC≥10 cm were classified in large (L) group and others were classified in small (S) group. The clinicopathological features, operative procedures, and postoperative outcome were compared between both groups and various prognostic factors were investigated in group L. Eleven patients were classified in group L. Tumor size, vascular invasion, and tumor stage were higher in group L. Postoperative morbidity was higher in group L, but mortality was not different between the groups. Disease-free survivals were significantly lower in group L than in group S (36.4%, and 24.2% vs. 72.0%, and 44.0% for 1- and 3-year), but overall survival rates were similar in both groups (45.5%, and 15.2% in group L vs. 60.3%, and 41.3% in group S for 3- and 5-year). Presence of satellite nodules was the only prognostic factor in multivariate analysis after surgery for huge HCC. Regardless of tumor size, huge HCCs deserve consideration for surgery in patients with preserved liver function. Furthermore, the effect of surgery could be maximized with appropriate selection criteria, such as huge HCC without satellite nodules.

  9. Grid Computing: A Collaborative Approach in Distributed Environment for Achieving Parallel Performance and Better Resource Utilization

    Directory of Open Access Journals (Sweden)

    Sashi Tarun

    2011-01-01

    Full Text Available From the very beginning, various measures have been taken to make better use of the limited resources available in a computer system, because much of the time the system sits idle and cannot exploit its capabilities as a whole, resulting in low performance. Parallel computing works efficiently where operations are handled independently by multiple processors: all processing units work in parallel and increase system throughput without resource allocation conflicts. However, this is limited to, and effective within, a single machine. In today's computing world, establishing and maintaining a high-speed computational environment in a distributed scenario is a challenging task, because operations no longer depend on a single machine's resources but on interaction with other resources across a vast network architecture. Current resource management systems work smoothly only when resources are applied within their own clusters or local organizations, or are shared among the few users who need processing power; in a widely distributed environment, performing operational activities is difficult because data is not maintained in a centralized location but is geographically dispersed over many remote computer systems. Computers in a distributed environment have to depend on multiple resources to complete their tasks, and effective performance with high availability of resources to each computer in this fast-moving distributed computational environment is the major concern. To solve this problem, a new approach called the "Grid Computing" environment has been coined. A grid uses middleware to coordinate disparate resources across a network, allows users to function as a virtual whole, and makes computing fast. In this paper I want to

  10. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  12. Adaptive workflow scheduling in grid computing based on dynamic resource availability

    Directory of Open Access Journals (Sweden)

    Ritu Garg

    2015-06-01

    Full Text Available Grid computing enables large-scale resource sharing and collaboration for solving advanced science and engineering applications. Central to grid computing is the scheduling of application tasks to resources. Various strategies have been proposed, including static and dynamic strategies: the former schedules tasks to resources before the actual execution time, and the latter schedules them at the time of execution. Static scheduling performs better, but it is not suitable for a dynamic grid environment. The lack of dedicated resources and the variations in their availability at run time make this scheduling a great challenge. In this study, we propose an adaptive approach to scheduling workflow tasks (dependent tasks) onto dynamic grid resources based on a rescheduling method. It deals with a heterogeneous, dynamic grid environment in which fluctuations in the availability of computing nodes and link bandwidth are inevitable, due to local load or load imposed by other users. The proposed adaptive workflow scheduling (AWS) approach involves initial static scheduling, resource monitoring and rescheduling, with the aim of achieving the minimum execution time for the workflow application. The approach differs from other techniques in the literature in that it considers changes in resource (host and link) availability and the impact of existing load on the grid resources. Simulation results using randomly generated task graphs and task graphs corresponding to real-world problems (GE and FFT) demonstrate that the proposed algorithm is able to deal with fluctuations in resource availability and provides overall optimal performance.
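
    The rescheduling idea can be sketched as a loop that recomputes the task-to-host mapping whenever observed resource availability changes, as in the Python fragment below. Task lengths, host speeds and the availability fluctuations are invented, and task dependencies are ignored for brevity, so this is only an illustration of the monitor-and-reschedule cycle, not the AWS algorithm itself.

        import random

        # Hypothetical workflow tasks (name -> length in MI) and grid hosts (name -> MIPS).
        tasks = {"t1": 400, "t2": 250, "t3": 600}
        base_hosts = {"h1": 1000, "h2": 500}

        def schedule(tasks, hosts):
            """Map each task to the host that can finish it earliest (static step)."""
            finish = {h: 0.0 for h in hosts}
            plan = {}
            for name, length in tasks.items():
                best = min(hosts, key=lambda h: finish[h] + length / hosts[h])
                finish[best] += length / hosts[best]
                plan[name] = best
            return plan

        plan = schedule(tasks, base_hosts)            # initial static schedule
        for step in range(3):                         # monitoring/rescheduling loop (simulated)
            observed = {h: mips * random.uniform(0.5, 1.0) for h, mips in base_hosts.items()}
            plan = schedule(tasks, observed)          # reschedule on the observed availability
            print(f"step {step}: {plan}")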

  13. Quantum Computing Resource Estimate of Molecular Energy Simulation

    CERN Document Server

    Whitfield, James D; Aspuru-Guzik, Alán

    2010-01-01

    Over the last century, ingenious physical and mathematical insights paired with rapidly advancing technology have allowed the field of quantum chemistry to advance dramatically. However, efficient methods for the exact simulation of quantum systems on classical computers do not exist. The present paper reports an extension of one of the authors' previous work [Aspuru-Guzik et al., Science {309} p. 1704, (2005)] where it was shown that the chemical Hamiltonian can be efficiently simulated using a quantum computer. In particular, we report in detail how a set of molecular integrals can be used to create a quantum circuit that allows the energy of a molecular system with fixed nuclear geometry to be extracted using the phase estimation algorithm proposed by Abrams and Lloyd [Phys. Rev. Lett. {83} p. 5165, (1999)]. We extend several known results related to this idea and present numerical examples of the state preparation procedure required in the algorithm. With future quantum devices in mind, we provide a compl...
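
    As a back-of-the-envelope companion to this kind of resource estimate, the snippet below counts qubits and second-quantized Hamiltonian terms for a given number of spin-orbitals, using the standard observations that one qubit is needed per spin-orbital and that the number of one- plus two-electron terms scales as O(n^4). The gates-per-term constant is a placeholder assumption, not a figure from the paper.

        def rough_estimate(n_spin_orbitals, gates_per_term=200):
            """Crude qubit/term/gate counts for simulating a molecular Hamiltonian.
            gates_per_term is an assumed placeholder constant, not a measured value."""
            qubits = n_spin_orbitals                  # one qubit per spin-orbital
            one_body = n_spin_orbitals ** 2           # h_pq terms
            two_body = n_spin_orbitals ** 4           # h_pqrs terms (upper bound)
            terms = one_body + two_body
            return {"qubits": qubits,
                    "hamiltonian_terms": terms,
                    "gates_per_trotter_step": terms * gates_per_term}

        print(rough_estimate(14))   # e.g. a minimal-basis-sized problem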

  14. MCPLOTS. A particle physics resource based on volunteer computing

    Energy Technology Data Exchange (ETDEWEB)

    Karneyeu, A. [Joint Inst. for Nuclear Research, Moscow (Russian Federation); Mijovic, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Irfu/SPP, CEA-Saclay, Gif-sur-Yvette (France); Prestel, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Lund Univ. (Sweden). Dept. of Astronomy and Theoretical Physics; Skands, P.Z. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2013-07-15

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME 2.0 platform.

  15. MCPLOTS: a particle physics resource based on volunteer computing

    CERN Document Server

    Karneyeu, A; Prestel, S; Skands, P Z

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME platform.

  16. A Resource Scheduling Strategy in Cloud Computing Based on Multi-agent Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Wuxue Jiang

    2013-11-01

    Full Text Available Resource scheduling strategies in cloud computing are used either to improve system operating efficiency or to improve user satisfaction. This paper presents an integrated scheduling strategy that considers both resource credibility and user satisfaction. It takes user satisfaction as the objective function, treats resource credibility as a component of user satisfaction, and realizes optimal scheduling by using a genetic algorithm. We subsequently integrate this scheduling strategy into agents and propose a multi-agent-based cloud computing system architecture. The numerical results show that this scheduling strategy improves not only system operating efficiency but also user satisfaction.
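
    A compact genetic-algorithm sketch for this kind of credibility-aware scheduling is shown below in Python; the task lengths, VM speeds, credibility scores, fitness weights and GA parameters are all invented for the illustration and do not reproduce the paper's multi-agent design.

        import random

        task_len = [400, 250, 800, 150, 600]   # instruction counts per task (assumed)
        vm_speed = [1000, 500, 750]            # MIPS per VM (assumed)
        vm_cred = [0.9, 0.7, 0.8]              # per-VM credibility scores (assumed)
        POP, GENS, MUT = 20, 40, 0.1

        def satisfaction(chrom):
            # Higher satisfaction for shorter makespan and more credible VMs (weights assumed).
            load = [0.0] * len(vm_speed)
            cred = 0.0
            for t, v in enumerate(chrom):
                load[v] += task_len[t] / vm_speed[v]
                cred += vm_cred[v]
            return 0.7 * (1.0 / (1.0 + max(load))) + 0.3 * (cred / len(chrom))

        def evolve():
            pop = [[random.randrange(len(vm_speed)) for _ in task_len] for _ in range(POP)]
            for _ in range(GENS):
                pop.sort(key=satisfaction, reverse=True)
                parents = pop[:POP // 2]
                children = []
                while len(children) < POP - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(task_len))
                    child = a[:cut] + b[cut:]                      # one-point crossover
                    if random.random() < MUT:                      # mutation
                        child[random.randrange(len(child))] = random.randrange(len(vm_speed))
                    children.append(child)
                pop = parents + children
            return max(pop, key=satisfaction)

        best = evolve()
        print(best, round(satisfaction(best), 3))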

  17. Distributed Computation Resources for Earth System Grid Federation (ESGF)

    Science.gov (United States)

    Duffy, D.; Doutriaux, C.; Williams, D. N.

    2014-12-01

    The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, has published a series of papers in their Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts. They studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting total data of approximately five petabytes are stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that data analysis capabilities currently in use will be inadequate to allow for the necessary science to be done with AR6 data—the data will just be too big. A major paradigm shift from downloading data to local systems to perform data analytics must evolve to moving the analysis routines to the data and performing these computations on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage proximal computational
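
    The planned CWT API is described here only at a high level, so the snippet below is a hypothetical illustration of the general "move the analysis to the data" pattern: a client posts a small processing request to a remote server-side endpoint instead of downloading petabytes. The endpoint URL, operation name, and parameters are placeholders, not the actual ESGF CWT interface.

```python
import json
import urllib.request

# Hypothetical server-side processing request: average one variable over a
# region and time range, computed next to the data rather than locally.
request_body = {
    "operation": "spatial_mean",          # placeholder operation name
    "variable": "tas",
    "domain": {"lat": [30, 60], "lon": [-10, 40], "time": ["2000-01", "2010-12"]},
}

req = urllib.request.Request(
    "https://example-esgf-node.org/compute",   # placeholder endpoint
    data=json.dumps(request_body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:      # would return a small result, not raw data
    print(json.load(resp))
```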

  18. Mobile Cloud Computing: Resource Discovery, Session Connectivity and Other Open Issues

    NARCIS (Netherlands)

    Schüring, Markus; Karagiannis, Georgios

    2011-01-01

    Abstract—Cloud computing can be considered as a model that provides network access to a shared pool of resources, such as storage and computing power, which can be rapidly provisioned and released with minimal management effort. This paper describes a research activity in the area of mobile cloud co

  19. Young Children's Exploration of Semiotic Resources during Unofficial Computer Activities in the Classroom

    Science.gov (United States)

    Bjorkvall, Anders; Engblom, Charlotte

    2010-01-01

    The article describes and discusses the learning potential of unofficial techno-literacy activities in the classroom with regards to Swedish 7-8-year-olds' exploration of semiotic resources when interacting with computers. In classroom contexts where every child works with his or her own computer, such activities tend to take up a substantial…

  20. The portability of computer-related educational resources : summary and directions for further research

    NARCIS (Netherlands)

    De Diana, Italo; Collis, Betty A.

    1990-01-01

    In this Special Issue of the Journal of Research on Computing in Education, the portability of computer-related educational resources has been examined by a number of researchers and practitioners, reflecting various backgrounds, cultures, and experiences. A first iteration of a general model of fac

  1. Orchestrating the XO Computer with Digital and Conventional Resources to Teach Mathematics

    Science.gov (United States)

    Díaz, A.; Nussbaum, M.; Varela, I.

    2015-01-01

    Recent research has suggested that simply providing each child with a computer does not lead to an improvement in learning. Given that dozens of countries across the world are purchasing computers for their students, we ask which elements are necessary to improve learning when introducing digital resources into the classroom. Understood the…

  2. Mobile Cloud Computing: Resource Discovery, Session Connectivity and Other Open Issues

    NARCIS (Netherlands)

    Schüring, Markus; Karagiannis, Georgios

    2011-01-01

    Abstract—Cloud computing can be considered as a model that provides network access to a shared pool of resources, such as storage and computing power, which can be rapidly provisioned and released with minimal management effort. This paper describes a research activity in the area of mobile cloud

  3. A Comparative Study on Resource Allocation Policies in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Bhavani B H

    2015-11-01

Full Text Available Cloud computing is one of the latest models used for sharing a pool of resources such as CPU, memory, network bandwidth, and hard drive space over the Internet. These resources are requested by the cloud user and are used on a rented basis, much like electricity, water, or LPG. When requests are made by the cloud user, allocation has to be performed by the cloud service provider. With a limited amount of resources available, resource allocation becomes a challenging task for the cloud service provider, as the resources have to be virtualized and then allocated. Resources can be allocated statically or dynamically, depending on the type of request made by the cloud user and on the application. In this paper, a survey of both static and dynamic allocation techniques is made, and the two classes of techniques are compared.
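
    To make the static-versus-dynamic contrast concrete, here is a toy simulation (not from the paper) in which four tenants draw random demands against a pool of 100 resource units: static allocation caps each tenant at a fixed partition, while dynamic allocation lets tenants draw from the shared pool.

```python
import random

random.seed(1)
TOTAL, USERS, ROUNDS = 100, 4, 1000          # pooled capacity, tenants, trials (assumed)

static_share = TOTAL // USERS
static_unmet = dynamic_unmet = 0

for _ in range(ROUNDS):
    demands = [random.randint(0, 50) for _ in range(USERS)]
    # Static: every tenant is limited to its fixed partition.
    static_unmet += sum(max(0, d - static_share) for d in demands)
    # Dynamic: tenants draw from the shared pool until it is exhausted.
    pool = TOTAL
    for d in sorted(demands):
        granted = min(d, pool)
        pool -= granted
        dynamic_unmet += d - granted

print("unmet demand, static :", static_unmet)
print("unmet demand, dynamic:", dynamic_unmet)
```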

  4. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes.

  5. Connecting slow earthquakes to huge earthquakes

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes.

  6. Monitoring of computing resource utilization of the ATLAS experiment

    CERN Document Server

    Rousseau, D; The ATLAS collaboration; Vukotic, I; Aidel, O; Schaffer, RD; Albrand, S

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  7. Monitoring of computing resource utilization of the ATLAS experiment

    Science.gov (United States)

    Rousseau, David; Dimitrov, Gancho; Vukotic, Ilija; Aidel, Osman; Schaffer, Rd; Albrand, Solveig

    2012-12-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  8. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  9. Treatment strategies for huge central neurocytomas.

    Science.gov (United States)

    Xiong, Zhong-wei; Zhang, Jian-jian; Zhang, Ting-bao; Sun, Shou-jia; Wu, Xiao-lin; Wang, Hao; You, Chao; Wang, Yu; Zhang, Hua-qiu; Chen, Jin-cao

    2015-02-01

    Central neurocytomas (CNs), initially asymptomatic, sometimes become huge before detection. We described and analyzed the clinical, radiological, operational and outcome data of 13 cases of huge intraventricular CNs, and discussed the treatment strategies in this study. All huge CNs (n=13) in our study were located in bilateral lateral ventricle with diameter ≥5.0 cm and had a broad-based attachment to at least one side of the ventricle wall. All patients received craniotomy to remove the tumor through transcallosal or transcortical approach and CNs were of typical histologic and immunohistochemical features. Adjuvant therapies including conventional radiation therapy (RT) or gamma knife radiosurgery (GKRS) were also performed postoperatively. Transcallosal and transcortical approaches were used in 8 and 5 patients, respectively. Two patients died within one month after operation and 3 patients with gross total resection (GTR) were additionally given a decompressive craniectomy (DC) and/or ventriculoperitoneal shunt (VPS) as the salvage therapy. Six patients received GTR(+RT) and 7 patients received subtotal resection (STR)(+GKRS). Eight patients suffered serious complications such as hydrocephalus, paralysis and seizure after operation, and patients who underwent GTR showed worse functional outcome [less Karnofsky performance scale (KPS) scores] than those having STR(+GKRS) during the follow-up period. The clinical outcome of huge CNs seemed not to be favorable as that described in previous reports. Surgical resection for huge CNs should be meticulously considered to guarantee the maximum safety. Better results were achieved in STR(+GKRS) compared with GTR(+RT) for huge CNs, suggesting that STR(+GKRS) may be a better treatment choice. The recurrent or residual tumor can be treated with GKRS effectively.

  10. Resource pre-allocation algorithms for low-energy task scheduling of cloud computing

    Institute of Scientific and Technical Information of China (English)

    Xiaolong Xu; Lingling Cao; Xinheng Wang

    2016-01-01

In order to lower the power consumption and improve the coefficient of resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on the “shut down the redundant, turn on the demanded” strategy. Firstly, a green cloud computing model is presented, abstracting the task scheduling problem to a virtual machine deployment issue using virtualization technology. Secondly, the future workloads of the system need to be predicted: a cubic exponential smoothing algorithm based on the conservative control (CESCC) strategy is proposed, combined with the current state and resource distribution of the system, in order to calculate the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. In order to reduce the power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy make resource pre-allocation catch up with demands and improve the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize the utilization of resources, and greatly reduce the power consumption of cloud computing systems.
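
    The prediction step can be illustrated with one textbook form of cubic (triple) exponential smoothing; the conservative-control correction of the paper's CESCC variant is not reproduced, and the workload history and smoothing factor below are invented.

```python
def cubic_exponential_smoothing(series, alpha=0.3, horizon=1):
    """Brown's triple (cubic) exponential smoothing forecast.

    One common textbook formulation; CESCC adds a conservative-control
    correction on top of a prediction like this.
    """
    s1 = s2 = s3 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - (10 - 8 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
    return a + b * horizon + 0.5 * c * horizon ** 2

# Invented workload history (e.g., task requests per period).
history = [120, 132, 128, 150, 170, 165, 180, 210, 205, 230]
print("predicted demand for next period:", round(cubic_exponential_smoothing(history), 1))
```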

  11. ARMS: An Agent-Based Resource Management System for Grid Computing

    Directory of Open Access Journals (Sweden)

    Junwei Cao

    2002-01-01

    Full Text Available Resource management is an important component of a grid computing infrastructure. The scalability and adaptability of such systems are two key challenges that must be addressed. In this work an agent-based resource management system, ARMS, is implemented for grid computing. ARMS utilises the performance prediction techniques of the PACE toolkit to provide quantitative data regarding the performance of complex applications running on a local grid resource. At the meta-level, a hierarchy of homogeneous agents are used to provide a scalable and adaptable abstraction of the system architecture. Each agent is able to cooperate with other agents and thereby provide service advertisement and discovery for the scheduling of applications that need to utilise grid resources. A case study with corresponding experimental results is included to demonstrate the efficiency of the resource management and scheduling system.

  12. PDBparam: Online Resource for Computing Structural Parameters of Proteins.

    Science.gov (United States)

    Nagarajan, R; Archana, A; Thangakani, A Mary; Jemimah, S; Velmurugan, D; Gromiha, M Michael

    2016-01-01

    Understanding the structure-function relationship in proteins is a longstanding goal in molecular and computational biology. The development of structure-based parameters has helped to relate the structure with the function of a protein. Although several structural features have been reported in the literature, no single server can calculate a wide-ranging set of structure-based features from protein three-dimensional structures. In this work, we have developed a web-based tool, PDBparam, for computing more than 50 structure-based features for any given protein structure. These features are classified into four major categories: (i) interresidue interactions, which include short-, medium-, and long-range interactions, contact order, long-range order, total contact distance, contact number, and multiple contact index, (ii) secondary structure propensities such as α-helical propensity, β-sheet propensity, and propensity of amino acids to exist at various positions of α-helix and amino acid compositions in high B-value regions, (iii) physicochemical properties containing ionic interactions, hydrogen bond interactions, hydrophobic interactions, disulfide interactions, aromatic interactions, surrounding hydrophobicity, and buriedness, and (iv) identification of binding site residues in protein-protein, protein-nucleic acid, and protein-ligand complexes. The server can be freely accessed at http://www.iitm.ac.in/bioinfo/pdbparam/. We suggest the use of PDBparam as an effective tool for analyzing protein structures.
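
    As a flavour of how one of the listed interresidue features can be computed, here is a small sketch of relative contact order from C-alpha coordinates; the 8 Å cutoff, the sequence-separation threshold, and the toy coordinates are assumptions and need not match PDBparam's exact definitions.

```python
import numpy as np

def relative_contact_order(ca_xyz, cutoff=8.0, min_sep=2):
    """Relative contact order from C-alpha coordinates (N x 3 array).

    Cutoff and sequence-separation choices are assumptions; PDBparam also
    reports related short-, medium-, and long-range interaction features.
    """
    ca = np.asarray(ca_xyz, dtype=float)
    n = len(ca)
    d = np.linalg.norm(ca[:, None, :] - ca[None, :, :], axis=-1)   # pairwise distances
    i, j = np.triu_indices(n, k=min_sep)                           # pairs with |i-j| >= min_sep
    contacts = d[i, j] < cutoff
    seps = (j - i)[contacts]
    return seps.sum() / (n * max(len(seps), 1))

# Toy "structure": a few points along a gently curved chain (made-up coordinates).
coords = [[k * 3.8, 10 * np.sin(k / 3.0), 0.0] for k in range(20)]
print("relative contact order:", round(relative_contact_order(coords), 3))
```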

  13. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  14. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2010-01-01

GoeGrid is a grid resource center located in Goettingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center will be presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster will be detailed. The benefits are an efficient use of computer and manpower resources. Further interdisciplinary projects include commonly organized courses for students of all fields to support education on grid computing.

  15. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research work horse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, researchers can count on Jazz to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  16. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

Full Text Available Mobile cloud computing (MCC), which combines mobile computing with the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When offloading a complicated multitask application to the MCC environment, each task executes individually in terms of its own computation, storage, and bandwidth requirements. Due to user mobility, the provided resources have different performance metrics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, how to choose the appropriate offload destination and orchestrate the resources for multiple tasks is a challenging problem. This paper realizes programmable resource provisioning for heterogeneous energy-constrained computing environments, where a software-defined controller is responsible for resource orchestration, offload, and migration. The resource orchestration is formulated as a multiobjective optimization problem over the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain approximate optimal solutions. Simulation results show that the solutions for all of the studied cases come close to the Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.
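
    A minimal sketch of a particle swarm search over offloading decisions is shown below; the multiobjective formulation is scalarized into a single weighted objective for brevity, and the per-task energy, cost, and availability figures as well as all weights are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_particles, iters = 6, 20, 100

# Hypothetical per-task metrics if a task is offloaded to a cloud resource.
energy = rng.uniform(1, 5, n_tasks)      # energy cost of running each task locally (invented)
cost   = rng.uniform(0.1, 1.0, n_tasks)  # monetary cost of offloading (invented)
avail  = rng.uniform(0.8, 1.0, n_tasks)  # availability of the remote resource (invented)

def objective(x):
    # Scalarized stand-in for the paper's multiobjective problem; weights are assumptions.
    off = x > 0.5                         # offload decision per task
    return (0.5 * energy[~off].sum()      # energy spent running locally
            + 0.3 * cost[off].sum()       # cost of offloading
            + 0.2 * (1 - avail[off]).sum())

pos = rng.uniform(0, 1, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("offload decisions:", (gbest > 0.5).astype(int), "objective:", round(pbest_val.min(), 3))
```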

  17. A Huge Ancient Schwannoma of the Epiglottis.

    Science.gov (United States)

    Lee, Dong Hoon; Kim, Jo Heon; Yoon, Tae Mi; Lee, Joon Kyoo; Lim, Sang Chul

    2016-03-01

    Ancient schwannoma of the epiglottis is extremely rare. The authors report the first case of a patient with a huge ancient schwannoma of the epiglottis. Clinicians should consider the possibility that ancient schwannoma may originate in the epiglottis mimicking the other more frequently observed lesions.

  18. A Novel Approach for Resource Discovery using Random Projection on Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    M.N.Faruk

    2013-04-01

Full Text Available Cloud computing offers different types of utilities to the IT industry. Generally, the resources are scattered throughout the clouds, so the ability to discover which resources are available in a cloud is an important requirement of distributed systems. This paper investigates the problem of locating resources that are multivariate in nature and of identifying the relevant dimensions of resources available in the same cloud. It applies a random projection on each cloud and discovers the possible resources at each iteration; the outcome of each iteration is recorded in a collision matrix, and all discovered elements are updated in the management fabric. The paper also describes the feasibility of discovering the different types of resources available in each cloud.
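
    A small sketch of resource discovery via random projection (Johnson-Lindenstrauss style dimensionality reduction followed by nearest-neighbour matching) is given below; the resource attribute vectors are synthetic, and the collision-matrix bookkeeping and management-fabric updates described in the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical resource descriptions: each row is one resource in a cloud,
# described by many attributes (CPU, RAM, bandwidth, ...; all values invented).
resources = rng.uniform(0, 1, size=(500, 64))
query = rng.uniform(0, 1, size=64)            # the resource profile we are looking for

# Random projection to a much lower dimension.
k = 8
R = rng.normal(0, 1.0 / np.sqrt(k), size=(64, k))
low_res = resources @ R
low_query = query @ R

# Discovery step: nearest resources in the projected space.
dist = np.linalg.norm(low_res - low_query, axis=1)
print("best matching resources (indices):", np.argsort(dist)[:5])
```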

  19. PhoenixCloud: Provisioning Resources for Heterogeneous Workloads in Cloud Computing

    CERN Document Server

    Zhan, Jianfeng; Shi, Weisong; Gong, Shimin; Zang, Xiutao

    2010-01-01

As more and more service providers choose Cloud platforms, which are provided by third-party resource providers, resource providers need to provision resources for heterogeneous workloads in different Cloud scenarios. Taking into account the dramatic differences between heterogeneous workloads, can we coordinately provision resources for heterogeneous workloads in Cloud computing? In this paper we focus on this important issue, which few previous works have investigated. Our contributions are threefold: (1) we propose a coordinated resource provisioning solution for heterogeneous workloads in two typical Cloud scenarios: first, a large organization operates a private Cloud for two heterogeneous workloads; second, a large organization or two service providers running heterogeneous workloads revert to a public Cloud; (2) we build an agile system, PhoenixCloud, that enables a resource provider to create coordinated runtime environments on demand for heterogeneous workloads when they are consolidated on a C...

  20. Case study of an application of computer mapping in oil-shale resource mapping

    Energy Technology Data Exchange (ETDEWEB)

    Davis, F.G.F. Jr.; Smith, J.W.

    1979-01-01

    The Laramie Energy Technology Center, U.S. Department of Energy, is responsible for evaluating the resources of potential oil and the deposit characteristics of oil shales of the Green River Formation in Colorado, Utah, and Wyoming. While the total oil shale resource represents perhaps 2 trillion barrels of oil, only parts of this total are suitable for any particular development process. To evaluate the resource according to deposit characteristics, a computer system for making resource calculations and geological maps has been established. The system generates resource tables where the calculations have been performed over user-defined geological intervals. The system also has the capability of making area calculations and generating resource maps of geological quality. The graphics package that generates the maps uses corehole assay data and digitized map data. The generated maps may include the following features: selected drainages, towns, political boundaries, township and section surveys, and corehole locations. The maps are then generated according to user-defined scales.

  1. Analyzing huge pathology images with open source software.

    Science.gov (United States)

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides build up a technical challenge since the images occupy often several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here
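
    The mosaic idea (splitting an image too large for RAM into small, optionally overlapping tiles and processing one window at a time) can be sketched generically as follows; this is not the NDPITools or LargeTIFFTools code, and the file name, image size, and tile parameters are placeholders.

```python
import numpy as np

def tile_indices(width, height, tile, overlap):
    """Yield (x0, y0, x1, y1) boxes covering a width x height image with overlap."""
    step = tile - overlap
    for y0 in range(0, height, step):
        for x0 in range(0, width, step):
            yield x0, y0, min(x0 + tile, width), min(y0 + tile, height)

# Toy example: a memory-mapped array standing in for a huge virtual slide
# (creates a scratch file "slide.raw" in the working directory).
slide = np.memmap("slide.raw", dtype=np.uint8, mode="w+", shape=(4096, 4096))
for x0, y0, x1, y1 in tile_indices(4096, 4096, tile=1024, overlap=64):
    patch = slide[y0:y1, x0:x1]          # only this window is touched in RAM
    # ... run cell segmentation / statistics on `patch` here ...

print("number of tiles:", len(list(tile_indices(4096, 4096, 1024, 64))))
```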

  2. Categorization of Computing Education Resources into the ACM Computing Classification System

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yinlin [Virginia Polytechnic Institute and State University (Virginia Tech); Bogen, Paul Logasa [ORNL; Fox, Dr. Edward A. [Virginia Polytechnic Institute and State University (Virginia Tech); Hsieh, Dr. Haowei [University of Iowa; Cassel, Dr. Lillian N. [Villanova University

    2012-01-01

The Ensemble Portal harvests resources from multiple heterogeneous federated collections. Managing these dynamically growing collections requires an automatic mechanism to categorize records into the corresponding topics. We propose an approach that uses existing ACM DL metadata to build classifiers for harvested resources in the Ensemble project. We also present our experience of using the Amazon Mechanical Turk platform to build ground-truth training data sets from Ensemble collections.
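
    A rough sketch of how harvested records could be categorized from labeled metadata with an off-the-shelf text classifier is shown below; the training titles and ACM-CCS-style labels are invented, and the actual Ensemble classifiers built from ACM DL metadata may use different features and models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Invented (title, ACM-CCS-like category) pairs standing in for ACM DL metadata.
titles = [
    "Cache-aware scheduling for multicore processors",
    "A relational database tuning advisor",
    "Teaching recursion with visual programming tools",
    "Secure key exchange over untrusted networks",
]
labels = [
    "Computer systems organization",
    "Information systems",
    "Social and professional topics~Computing education",
    "Security and privacy",
]

# TF-IDF features over word unigrams/bigrams, linear SVM classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(titles, labels)

# Categorize a newly harvested resource.
print(clf.predict(["An interactive tutorial on loop invariants for CS1 students"])[0])
```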

  3. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  4. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Goettingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2011-01-01

GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and manpower resources.

  5. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  6. Analysis on the Application of Cloud Computing to the Teaching Resources Sharing Construction in Colleges and Universities

    Institute of Scientific and Technical Information of China (English)

    LIU Mi

    2015-01-01

Cloud computing is a new computing model, and its application to the field of higher education informatization has become very popular. In this paper, the concept and characteristics of cloud computing are introduced, the current situation of teaching resource sharing and construction in colleges and universities is analyzed, and finally the influence of cloud computing on the construction of teaching information resources is discussed.

  7. ADAPTIVE MULTI-TENANCY POLICY FOR ENHANCING SERVICE LEVEL AGREEMENT THROUGH RESOURCE ALLOCATION IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

Masnida Hussin

    2016-07-01

Full Text Available The appearance of seemingly infinite computing resources that are available on demand and adapt quickly enough to load surges makes Cloud computing a favourable service infrastructure in the IT market. A core feature of Cloud service infrastructures is the Service Level Agreement (SLA), which is meant to deliver seamless service at high quality to the client. One of the challenges in the Cloud is providing heterogeneous computing services for the clients. With the increasing number of clients/tenants in the Cloud, unsatisfied agreements are becoming a critical factor. In this paper, we present an adaptive resource allocation policy that attempts to improve accountability in Cloud SLAs while aiming to enhance system performance. Specifically, our allocation incorporates dynamic matching of SLA rules to deal with diverse processing requirements from tenants. Explicitly, it reduces processing overheads while achieving better service agreement. Simulation experiments prove the efficacy of our allocation policy in satisfying the tenants and help improve computing reliability.

  8. A survey on resource allocation in high performance distributed computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul; Khan, Samee Ullah; Bickler, Gage; Min-Allah, Nasro; Qureshi, Muhammad Bilal; Zhang, Limin; Yongji, Wang; Ghani, Nasir; Kolodziej, Joanna; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal; Li, Hongxiang; Wang, Lizhe; Chen, Dan; Rayes, Ammar

    2013-11-01

    An efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects are dedicated to large-scale distributed computing systems that have designed and developed resource allocation mechanisms with a variety of architectures and services. In our study, through analysis, a comprehensive survey for describing resource allocation in various HPCs is reported. The aim of the work is to aggregate under a joint framework, the existing solutions for HPC to provide a thorough analysis and characteristics of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role towards the performance improvement of all the HPCs classifications. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environment is required, which is one of the motivations of this survey. Moreover, we have classified the HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.

  9. A huge renal capsular leiomyoma mimicking retroperitoneal sarcoma

    Directory of Open Access Journals (Sweden)

    Lal Anupam

    2009-01-01

    Full Text Available A huge left renal capsular leiomyoma mimicking retroperitoneal sarcoma presented in a patient as an abdominal mass. Computed tomography displayed a large heterogeneous retro-peritoneal mass in the left side of the abdomen with inferior and medial displacement as well as loss of fat plane with the left kidney. Surgical exploration revealed a capsulated mass that was tightly adherent to the left kidney; therefore, total tumor resection with radical left nephrectomy was performed. Histopathology ultimately confirmed the benign nature of the mass. This is the largest leiomyoma reported in literature to the best of our knowledge.

  10. From tiny microalgae to huge biorefineries

    OpenAIRE

    Gouveia, L.

    2014-01-01

    Microalgae are an emerging research field due to their high potential as a source of several biofuels in addition to the fact that they have a high-nutritional value and contain compounds that have health benefits. They are also highly used for water stream bioremediation and carbon dioxide mitigation. Therefore, the tiny microalgae could lead to a huge source of compounds and products, giving a good example of a real biorefinery approach. This work shows and presents examples of experimental...

  11. A rare clinic entity: Huge trichobezoar

    Directory of Open Access Journals (Sweden)

    Hidayatullah Hamidi, Dr, MD

    2016-01-01

Conclusion: Trichobezoars should be suspected in young females with long-standing upper abdominal masses, as the possibility of malignancy is not very common in this age group. Where USG is inconclusive, trichobezoar can be accurately diagnosed with CT. In patients with a huge trichobezoar, laparotomy can be performed first because of the large size and location of the mass, and a psychiatric referral should be made to prevent relapse of this entity.

  12. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis."

  13. Dynamic provisioning of local and remote compute resources with OpenStack

    Science.gov (United States)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut fur Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
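
    As a rough illustration of how a machine might be turned into a virtual worker with the openstacksdk client, consider the sketch below; the cloud profile, image, flavor, and network names are placeholders, and the institute's actual integration with the batch system and the contextualization of the HEP software stack are not shown.

```python
import openstack

# Connects using a cloud profile defined in clouds.yaml (profile name is a placeholder).
conn = openstack.connect(cloud="ekp-private-cloud")

image = conn.compute.find_image("worker-node-image")   # placeholder image name
flavor = conn.compute.find_flavor("m1.large")           # placeholder flavor name
network = conn.network.find_network("cluster-net")      # placeholder network name

# Boot a virtual worker node that a batch system could later schedule jobs onto.
server = conn.compute.create_server(
    name="vm-worker-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```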

  14. Improving the Distribution of Resource in a Grid Computing Network Services

    Directory of Open Access Journals (Sweden)

    Najmeh fillolahe

    2016-03-01

Full Text Available In this study, a computational grid environment and a queueing-theory-based algorithm are examined for distributing resources in a computational grid in which the resources are connected to each other in a star topology. Using queueing concepts and a scheme for distributing subtasks, the algorithm balances the workload across the existing resources while executing tasks in the shortest possible time. In the first phase of the algorithm, computing the time consumed by tasks and subtasks shows that the grid system generally reduces the average response time. In the second phase, however, because of the lack of load balance between resources and the uneven distribution of subtasks among them, establishing workload balance also increases the tasks' response time in the long term. In the third phase, workload balance is established and the average response time is reduced as well. Thus, the algorithm enhances two important factors, efficiency and load balance, as far as possible; the distribution of subtasks in the grid environment and the allocation of resources to them are performed with both factors taken into account.
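
    One way to make the queueing idea concrete is an M/M/1-style dispatcher that routes each subtask stream to the star-connected resource with the smallest predicted response time 1/(mu - lambda); this is only a sketch consistent with the abstract, not the paper's exact three-phase algorithm, and the service rates and subtask rates are invented.

```python
# Service rates (subtasks per second) of the resources at the points of the star.
service_rate = [4.0, 2.5, 6.0]           # invented values
assigned_rate = [0.0, 0.0, 0.0]          # arrival rate already routed to each resource

def expected_response(mu, lam):
    """M/M/1 mean response time; infinite if the resource would be saturated."""
    return float("inf") if lam >= mu else 1.0 / (mu - lam)

def dispatch(subtask_rate):
    """Route one subtask stream to the resource with the smallest predicted delay."""
    best = min(range(len(service_rate)),
               key=lambda r: expected_response(service_rate[r],
                                               assigned_rate[r] + subtask_rate))
    assigned_rate[best] += subtask_rate
    return best

for rate in [0.8, 1.2, 0.5, 2.0, 0.7, 1.0]:     # invented subtask arrival rates
    print("subtask stream", rate, "-> resource", dispatch(rate))
print("per-resource load:", assigned_rate)
```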

  15. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

Utilizing online learning resources (OLR) from multiple channels in learning activities promises to extend benefits from a traditional learning-centred approach to a collaborative learning-centred approach that emphasises pervasive learning anywhere and anytime. While compiling big data, cloud computing, and the semantic web into OLR offers a broader spectrum of…

  16. Scheduling real-time indivisible loads with special resource allocation requirements on cluster computing

    Directory of Open Access Journals (Sweden)

    Abeer Hamdy

    2010-10-01

Full Text Available The paper presents a heuristic algorithm to schedule real-time indivisible loads, represented as a directed sequential task graph, on a computing cluster. One of the cluster nodes has some special resources (denoted as the special node) that may be needed by one of the indivisible loads.
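
    A plausible sketch of such a heuristic, under the stated setup of a sequential chain of indivisible loads and a single special node, is given below; loads flagged as needing the special resource are pinned to that node and the rest go to whichever node frees up first. The lengths and node count are invented, and this is not the paper's exact algorithm.

```python
# (length, needs_special) for a sequential chain of indivisible loads (invented).
chain = [(40, False), (25, True), (60, False), (10, True), (35, False), (50, False)]
NODES, SPECIAL = 4, 0                    # node 0 owns the special resource

free_at = [0.0] * NODES                  # time at which each node becomes free
clock = 0.0                              # completion time of the previous load in the chain

for length, needs_special in chain:
    # Loads needing the special resource are pinned to the special node;
    # the rest go to whichever node becomes free earliest.
    node = SPECIAL if needs_special else min(range(NODES), key=free_at.__getitem__)
    start = max(clock, free_at[node])    # respect both the chain order and node availability
    clock = free_at[node] = start + length
    print(f"load({length:>3}) -> node {node}, finishes at t={clock:.0f}")

print("makespan:", clock)
```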

  17. Towards Self Configured Multi-Agent Resource Allocation Framework for Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    M.N.Faruk

    2014-05-01

Full Text Available Virtualization and Cloud computing environments are constructed to assure numerous features such as improved flexibility and stabilized energy efficiency with minimal operating costs for the IT industry. However, highly unpredictable workloads demand quality-of-service assurance while also promising efficient resource utilization. To avoid breaching SLAs (Service-Level Agreements) or leaving resources underutilized, resource allocations in a virtual environment must be tailored continuously during execution to the dynamic application workloads. In this proposed work, we describe a hybrid, self-configured resource allocation model for cloud environments based on dynamic application workload models. We present a comprehensive setup of a representative simulated enterprise application, the new Virtenterprise_Cloudapp benchmark, deployed on a dynamic virtualized cloud platform.

  18. Laparoscopic Management of Huge Cervical Myoma.

    Science.gov (United States)

    Peker, Nuri; Gündoğan, Savaş; Şendağ, Fatih

    To demonstrate the feasibility of laparoscopic management of a huge cervical myoma. Step-by-step video demonstration of the surgical procedure (Canadian Task Force classification III-C). Uterine myoma is the most common benign neoplasm of the female reproductive tract, with an estimated incidence of 25% to 30% at reproductive age [1,2]. Patients generally have no symptoms; however, those with such symptoms as severe pelvic pain, heavy uterine bleeding, or infertility may be candidates for surgery. The traditional management is surgery; however, uterine artery embolization or hormonal therapy using a gonadotropin-releasing hormone agonist or a selective estrogen receptor modulator should be preferred as the medical approach. Surgical management should be performed via laparoscopy or laparotomy; however, the use of laparoscopic myomectomy is being debated for patients with huge myomas. Difficulties in the excision, removal, and repair of myometrial defects, increased operative time, and blood loss are factors keeping physicians away from laparoscopic myomectomy [1,2]. A 40-year-old gravida 0, para 0 woman was admitted to our clinic with complaints of chronic pelvic pain, dyspareunia, and infertility. Her health history was unremarkable. Ultrasonographic examination revealed a 14 × 10-cm myoma in the cervical region. On bimanual examination, an immobile solid mass originating from the uterine cervix and filling the pouch of Douglas was palpated. The patient was informed of the findings, and laparoscopic myomectomy was recommended because of her desire to preserve her fertility. Abdominopelvic examination revealed a huge myoma filling and enlarging the cervix. Myomectomy was performed using standard technique as described elsewhere. A transverse incision was made using a harmonic scalpel. The myoma was fixed with a corkscrew manipulator and enucleated. Once bleeding was controlled, the myoma bed was filled with Spongostan to prevent possible bleeding from leakage

  19. Allocating Tactical High-Performance Computer (HPC) Resources to Offloaded Computation in Battlefield Scenarios

    Science.gov (United States)

    2013-12-01

devices. Offloading solutions such as Cuckoo (12), MAUI (13), COMET (14), and ThinkAir (15) offload applications via Wi-Fi or 3G networks to servers or...

  20. Current status and prospects of computational resources for natural product dereplication: a review.

    Science.gov (United States)

    Mohamed, Ahmed; Nguyen, Canh Hao; Mamitsuka, Hiroshi

    2016-03-01

    Research in natural products has always enhanced drug discovery by providing new and unique chemical compounds. However, recently, drug discovery from natural products is slowed down by the increasing chance of re-isolating known compounds. Rapid identification of previously isolated compounds in an automated manner, called dereplication, steers researchers toward novel findings, thereby reducing the time and effort for identifying new drug leads. Dereplication identifies compounds by comparing processed experimental data with those of known compounds, and so, diverse computational resources such as databases and tools to process and compare compound data are necessary. Automating the dereplication process through the integration of computational resources has always been an aspired goal of natural product researchers. To increase the utilization of current computational resources for natural products, we first provide an overview of the dereplication process, and then list useful resources, categorizing into databases, methods and software tools and further explaining them from a dereplication perspective. Finally, we discuss the current challenges to automating dereplication and proposed solutions.

  1. Laparoscopic Management of Huge Myoma Nascendi.

    Science.gov (United States)

    Peker, Nuri; Gündoğan, Savas; Şendağ, Fatih

    To demonstrate the feasibility of laparoscopic management of a huge myoma nascendi. Step-by-step video demonstration of the surgical procedure (Canadian Task Force classification III-C). Uterine myoma is the most common benign neoplasm of the female reproductive tract, with an estimated incidence of 25% to 30% at reproductive age [1,2]. Patients generally have no symptoms; however, those with such symptoms as severe pelvic pain, heavy uterine bleeding, or infertility may be candidates for surgery. The traditional management is surgery; however, uterine artery embolization or hormonal therapy using a gonadotropin-releasing hormone agonist or a selective estrogen receptor modulator should be preferred as the medical approach. Surgical management should be performed via laparoscopy or laparotomy; however, the use of laparoscopic myomectomy is being debated for patients with huge myomas. Difficulties in the excision, removal, and repair of myometrial defects, increased operative time, and blood loss are factors keeping physicians away from laparoscopic myomectomy [1,2]. A 35-year-old woman was admitted to our clinic with complaints of chronic pelvic pain and heavy menstrual bleeding. Her medical history included multiple hospitalizations for blood transfusions, along with a recently measured hemoglobin level of 9.5 g/dL and a hematocrit value of 29%. She had never been married and had no children. Pelvic ultrasonography revealed a 12 × 10-cm uterine myoma located on the posterior side of the corpus uteri and protruding through to the cervical channel. This was a huge intramural submucous myoma in close proximity to the endometrial cavity and spreading through the myometrium. On vaginal examination, the myoma was found to extend into the vagina through the cervical channel. Laparoscopic myomectomy was planned because of the patient's desire for fertility preservation. Abdominopelvic exploration revealed a huge myoma filling the posterior side of the corpus uteri and

  2. An Extensible Scientific Computing Resources Integration Framework Based on Grid Service

    Science.gov (United States)

    Cui, Binge; Chen, Xin; Song, Pingjian; Liu, Rongjie

Scientific computing resources (e.g., components, dynamic linkable libraries, etc.) are very valuable assets for scientific research. However, due to historical reasons, most computing resources can't be shared by other people. The emergence of Grid computing provides a turning point to solve this problem. Legacy applications can be abstracted and encapsulated into Grid services, and they may be found and invoked on the Web using SOAP messages. The Grid service is loosely coupled with the external JAR or DLL, which builds a bridge from users to computing resources. We defined an XML schema to describe the functions and interfaces of the applications. This information can be acquired by users by invoking the "getCapabilities" operation of the Grid service. We also proposed the concept of a class pool to eliminate the memory leaks that occur when invoking the external JARs using reflection. The experiment shows that the class pool not only avoids PermGen space waste and Tomcat server exceptions, but also significantly improves the application speed. The integration framework has been implemented successfully in a real project.

  3. Measuring the impact of computer resource quality on the software development process and product

    Science.gov (United States)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.

  4. An open-source computational and data resource to analyze digital maps of immunopeptidomes

    Energy Technology Data Exchange (ETDEWEB)

    Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J.; Schuster, Heiko; Ternette, Nicola; Alpizar, Adan; Schittenhelm, Ralf B.; Ramarathinam, Sri Harsha; Lindestam-Arlehamn, Cecilia S.; Koh, Ching Chiek; Gillet, Ludovic; Rabsteyn, Armin; Navarro, Pedro; Kim, Sangtae; Lam, Henry; Sturm, Theo; Marcilla, Miguel; Sette, Alessandro; Campbell, David; Deutsch, Eric W.; Moritz, Robert L.; Purcell, Anthony; Rammensee, Hans-Georg; Stevanovic, Stevan; Aebersold, Ruedi

    2015-07-08

    We present a novel proteomics-based workflow and an open source data and computational resource for reproducibly identifying and quantifying HLA-associated peptides at high-throughput. The provided resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra and the analysis of quantitative digital maps of HLA peptidomes generated by SWATH mass spectrometry (MS). This is the first community-based study towards the development of a robust platform for the reproducible and quantitative measurement of HLA peptidomes, an essential step towards the design of efficient immunotherapies.

  5. IMPROVING FAULT TOLERANT RESOURCE OPTIMIZED AWARE JOB SCHEDULING FOR GRID COMPUTING

    Directory of Open Access Journals (Sweden)

    K. Nirmala Devi

    2014-01-01

    Full Text Available Workflow brokers of existing Grid scheduling systems lack a cooperation mechanism, which causes inefficient scheduling of applications on distributed resources and also worsens the utilization of various resources, including network bandwidth and computational cycles. Furthermore, considering the literature, all of these existing brokering systems primarily evolved around centralized hierarchical or client/server models. In such models, vital responsibilities such as resource discovery are delegated to centralized server machines, so they suffer from the well-known disadvantages of a single point of failure, poor scalability, and network congestion on links leading to the server. In order to overcome these issues, we implement a new approach for decentralized cooperative workflow scheduling in a dynamically distributed resource-sharing environment of Grids. The various actors in the system, namely the users who belong to multiple control domains, workflow brokers, and resources, work together to enable a single cooperative resource-sharing environment. However, this approach ignored the fact that each grid site may have its own fault-tolerance strategy, because each site is itself an autonomous domain. For instance, if a grid site uses a job check-pointing mechanism, each computational node must be able to periodically transmit the transient state of the job execution to the server. When a job fails, it migrates to another computational node and resumes from the last stored checkpoint. A Glowworm Swarm Optimization (GSO) for job scheduling is used to address the issue of heterogeneity in fault tolerance of computational grids, and a Weighted GSO, which overcomes the position-update imperfections of the basic GSO, is shown to be more efficient in the comparative analysis. This system supports four kinds of fault-tolerance mechanisms, including the job migration, job retry, check-pointing and
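
    For intuition, a minimal glowworm swarm optimization step is sketched below. The objective function, parameter values, and agent count are placeholder assumptions; the abstract does not specify the weighting scheme of the Weighted GSO, so only the basic position and luciferin updates are shown.

```python
# Minimal glowworm swarm optimization (GSO) step, sketched for intuition only;
# the paper's Weighted GSO modifies this position update, whose exact weighting
# scheme is not given in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                      # toy surrogate to maximize
    return -np.sum(x ** 2)

def gso_step(pos, luc, rho=0.4, gamma=0.6, step=0.03, radius=1.0):
    luc = (1 - rho) * luc + gamma * np.array([objective(p) for p in pos])
    new_pos = pos.copy()
    for i, xi in enumerate(pos):
        dists = np.linalg.norm(pos - xi, axis=1)
        neighbors = np.where((dists < radius) & (luc > luc[i]))[0]
        if neighbors.size:                      # move toward a brighter neighbor
            j = rng.choice(neighbors)
            new_pos[i] = xi + step * (pos[j] - xi) / (dists[j] + 1e-12)
        # else: keep position (a case the weighted variant aims to improve)
    return new_pos, luc

pos = rng.uniform(-1, 1, size=(20, 2))
luc = np.full(20, 5.0)
for _ in range(50):
    pos, luc = gso_step(pos, luc)
print("best objective:", max(objective(p) for p in pos))
```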

  6. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvements via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented, as well as some of the material developed for specific projects.

  7. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from a peaked to a spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
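
    The Metropolis Monte Carlo exploration described above can be sketched as follows. The cost function (a quadratic overload penalty), the network size, and the cooling schedule are placeholder assumptions, not the authors' model; the sketch only shows how the temperature controls acceptance of suboptimal task reassignments.

```python
# Hedged sketch of a Metropolis Monte Carlo resource assignment: tasks are
# assigned to nodes of a toy network and reassignments are accepted with the
# usual exp(-dE/T) rule. Cost function and sizes are placeholders.
import math, random

random.seed(1)
NODES, TASKS = 8, 40
assignment = [random.randrange(NODES) for _ in range(TASKS)]

def cost(assign):
    # toy "latency": quadratic penalty for overloaded nodes
    load = [0] * NODES
    for n in assign:
        load[n] += 1
    return sum(l * l for l in load)

def metropolis_sweep(assign, T):
    for _ in range(len(assign)):
        t = random.randrange(len(assign))
        new = random.randrange(NODES)
        dE = cost(assign[:t] + [new] + assign[t + 1:]) - cost(assign)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            assign[t] = new
    return assign

for T in (10.0, 1.0, 0.1):          # cooling from suboptimal to near-optimal
    assignment = metropolis_sweep(assignment, T)
    print(f"T={T}: cost={cost(assignment)}")
```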

  8. Huge Tongue Lipoma: A Case Report

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Damghani

    2015-03-01

    Full Text Available Introduction: Lipomas are among the most common tumors of the human body. However, they are uncommon in the oral cavity and are observed as slow-growing, painless, and asymptomatic yellowish submucosal masses. Surgical excision is the treatment of choice and recurrence is not expected.    Case Report: The case of a 30-year-old woman with a huge lipoma on the tip of her tongue for the past 3 years is presented. She had difficulty with speech and mastication because the tongue tumor was filling the oral cavity. Clinical examination revealed a yellowish lesion, measuring 8 cm in maximum diameter, protruding from the lingual surface. The tumor was surgically excised with restoration of normal tongue function, and histopathological examination of the tumor confirmed that it was a lipoma.   Conclusion:  Tongue lipoma is rarely seen and can be a cause of macroglossia. Surgical excision of a lipoma is indicated for symptomatic relief and exclusion of associated malignancy.

  9. Galaxies Collide to Create Hot, Huge Galaxy

    Science.gov (United States)

    2009-01-01

    This image of a pair of colliding galaxies called NGC 6240 shows them in a rare, short-lived phase of their evolution just before they merge into a single, larger galaxy. The prolonged, violent collision has drastically altered the appearance of both galaxies and created huge amounts of heat, turning NGC 6240 into an 'infrared luminous' active galaxy. A rich variety of active galaxies, with different shapes, luminosities and radiation profiles, exist. These galaxies may be related: astronomers have suspected that they may represent an evolutionary sequence. By catching different galaxies in different stages of merging, a story emerges as one type of active galaxy changes into another. NGC 6240 provides an important 'missing link' in this process. This image was created from combined data from the infrared array camera of NASA's Spitzer Space Telescope at 3.6 and 8.0 microns (red) and visible light from NASA's Hubble Space Telescope (green and blue).

  10. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    OpenAIRE

    Steponas Jonušauskas; Agota Giedrė Raišienė

    2011-01-01

    The purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology—meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work-related information, coordination of team activities, spread of organizational culture and feeling of interdependence and affinity. Also, informal communication wid...

  11. On state-dependant sampling for nonlinear controlled systems sharing limited computational resources

    OpenAIRE

    Alamir, Mazen

    2007-01-01

    21 pages. Submitted to the journal "IEEE Transactions on Automatic Control"; International audience; In this paper, a framework for dynamic monitoring of sampling periods for nonlinear controlled systems is proposed. This framework is particularly adapted to the context of controlled systems sharing limited computational resources. The proposed scheme can be used in a cascaded structure with any feedback scheduling design. Illustrative examples are given to assess the efficiency of the proposed fram...

  12. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

    Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for the Japanese language, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing how important it is to learn collocational relations in a foreign language, we examine their coverage in the various learners’ resources for the Japanese language. We particularly concentrate on a few collocations at the beginner’s level, where we demonstrate their treatment across various resources. Special attention is paid to what is referred to as unpredictable collocations, which have a bigger foreign-language learning burden than the predictable ones.

  13. A Dynamic Resource Allocation Method for Parallel Data Processing in Cloud Computing

    Directory of Open Access Journals (Sweden)

    V. V. Kumar

    2012-01-01

    Full Text Available Problem statement: One of the cloud services, Infrastructure as a Service (IaaS), provides compute resources on demand for various applications like parallel data processing. The computing resources offered in the cloud are extremely dynamic and probably heterogeneous. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today’s IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during the job execution. However, the current algorithms do not consider resource overload or underutilization during the job execution. In this study, we have focused on increasing the efficacy of the scheduling algorithm for real-time cloud computing services. Approach: Our algorithm uses the turnaround-time utility efficiently by differentiating it into a gain function and a loss function for a single task. The algorithm also assigns high priority to tasks of early completion and less priority to abortion/deadline issues of real-time tasks. Results: The algorithm has been implemented in both preemptive and non-preemptive methods. The experimental results show that it outperforms the existing utility-based scheduling algorithms, and we also compare its performance with both preemptive and non-preemptive scheduling methods. Conclusion: Hence, a novel turnaround-time utility scheduling approach which focuses on both the high-priority and the low-priority tasks that arrive for scheduling is proposed.
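
    A hedged sketch of the gain/loss split of the turnaround-time utility is given below. The exact functional forms and priority rules of the study are not stated in the abstract, so the linear gain, the doubled lateness penalty, and the greedy non-preemptive ordering are assumptions made for illustration only.

```python
# Illustrative sketch of a turnaround-time utility split into a gain and a loss
# term; the functional forms and weights are assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    expected_runtime: float   # seconds
    deadline: float           # seconds from submission

def utility(task: Task, start_time: float) -> float:
    finish = start_time + task.expected_runtime
    gain = max(0.0, task.deadline - finish)        # reward for early completion
    loss = max(0.0, finish - task.deadline) * 2.0  # heavier penalty past deadline
    return gain - loss

def schedule(tasks, now=0.0):
    """Greedy non-preemptive ordering: highest utility (if started now) first."""
    return sorted(tasks, key=lambda t: utility(t, now), reverse=True)

jobs = [Task("render", 30, 40), Task("etl", 5, 10), Task("backup", 60, 50)]
print([t.name for t in schedule(jobs)])   # -> ['render', 'etl', 'backup']
```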

  14. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv; Jayaraman, Prem Prakash; Kolodziej, Joanna; Balaji, Pavan; Zeadally, Sherali; Malluhi, Qutaibah Marwan; Tziritas, Nikos; Vishnu, Abhinav; Khan, Samee U.; Zomaya, Albert

    2014-06-06

    In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, and networks, and the like) is a complex problem due to the presence of heterogeneous application (e.g., content delivery networks, MapReduce, web applications, and the like) workloads having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degree of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study, first, outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  15. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    Science.gov (United States)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-07-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi-Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources.
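
    The effect of exploiting block diagonality can be illustrated numerically: restricting a particle-number-conserving Hamiltonian to one symmetry sector shrinks the matrix that has to be simulated, which is the essence of needing fewer qubits. The random Hamiltonian below is a placeholder, not the Fermi-Hubbard or hydrogen Hamiltonian treated in the paper, and the reduction is done in matrix form rather than in operator space.

```python
# Numerical toy: a block-diagonal (particle-number-conserving) Hamiltonian can be
# restricted to one symmetry sector without losing its eigenvalues in that sector.
import numpy as np

dim = 16                                   # 4 spin-orbitals -> 2^4 basis states
rng = np.random.default_rng(2)

def particle_number(state_index: int) -> int:
    return bin(state_index).count("1")     # occupation encoded in the bits

# Build a random Hermitian matrix that conserves particle number (block diagonal).
H = np.zeros((dim, dim))
for i in range(dim):
    for j in range(i, dim):
        if particle_number(i) == particle_number(j):
            H[i, j] = H[j, i] = rng.normal()

# Restrict to the 2-particle sector: only C(4,2)=6 of the 16 states survive,
# i.e. the problem now fits in fewer qubits.
sector = [s for s in range(dim) if particle_number(s) == 2]
H_sector = H[np.ix_(sector, sector)]
print(H_sector.shape)                      # (6, 6) instead of (16, 16)

# Every eigenvalue of the sector block is an eigenvalue of the full H.
full = np.linalg.eigvalsh(H)
block = np.linalg.eigvalsh(H_sector)
assert all(np.isclose(full, e).any() for e in block)
```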

  16. Provable Data Possession of Resource-constrained Mobile Devices in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jian Yang

    2011-07-01

    Full Text Available Benefiting from cloud storage services, users can save the cost of buying expensive storage and application servers, as well as deploying and maintaining applications. Meanwhile, they lose physical control of their data. So effective methods are needed to verify the correctness of the data stored at cloud servers, which is the research issue that Provable Data Possession (PDP) faces. The most important features of PDP are: (1) support for public verification an unlimited number of times; (2) support for dynamic data updates; (3) efficiency of storage space and computation. In mobile cloud computing, mobile end-users also need the PDP service. However, the computing workload and storage burden placed on the client in existing PDP schemes are too heavy for them to be directly used by resource-constrained mobile devices. To solve this problem, with the integration of trusted computing technology, this paper proposes a novel public PDP scheme, in which a trusted third-party agent (TPA) takes over most of the calculations from the mobile end-users. By using bilinear signatures and a Merkle hash tree (MHT), the scheme aggregates the verification tokens of the data file into one small signature to reduce communication and storage burden. The MHT is also helpful for supporting dynamic data updates. In our framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform module (TPM) chips, and the required computing workload and storage space fit mobile devices. Our scheme realizes a provably secure storage service for resource-constrained mobile devices in mobile cloud computing.
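
    The Merkle hash tree ingredient of the scheme can be sketched directly; the bilinear-signature and TPM parts are omitted. The block contents below are placeholders, and the sketch only shows that the root commits to all file blocks so that any modification is detectable.

```python
# Sketch of the Merkle hash tree (MHT) ingredient: the root commits to all file
# blocks, so a block plus its authentication path lets a verifier check integrity
# without the whole file. The bilinear-signature part of the scheme is omitted.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-%d" % i for i in range(8)]
root = merkle_root(blocks)                 # stored/signed once, e.g. via the TPA

# Dynamic update or tampering changes the root, which the verifier detects.
tampered = blocks.copy()
tampered[3] = b"tampered"
assert merkle_root(tampered) != root
```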

  17. HUBBLE SPIES HUGE CLUSTERS OF STARS FORMED

    Science.gov (United States)

    2002-01-01

    BY ANCIENT ENCOUNTER This stunningly beautiful image [right] taken with the NASA Hubble Space Telescope shows the heart of the prototypical starburst galaxy M82. The ongoing violent star formation due to an ancient encounter with its large galactic neighbor, M81, gives this galaxy its disturbed appearance. The smaller picture at upper left shows the entire galaxy. The image was taken in December 1994 by the Kitt Peak National Observatory's 0.9-meter telescope. Hubble's view is represented by the white outline in the center. In the Hubble image, taken by the Wide Field and Planetary Camera 2, the huge lanes of dust that crisscross M82's disk are another telltale sign of the flurry of star formation. Below the center and to the right, a strong galactic wind is spewing knotty filaments of hydrogen and nitrogen gas. More than 100 super star clusters -- very bright, compact groupings of about 100,000 stars -- are seen in this detailed Hubble picture as white dots sprinkled throughout M82's central region. The dark region just above the center of the picture is a huge dust cloud. A collaboration of European and American scientists used these clusters to date the ancient interaction between M82 and M81. About 600 million years ago, a region called 'M82 B' (the bright area just below and to the left of the central dust cloud) exploded with new stars. Scientists have discovered that this ancient starburst was triggered by the violent encounter with M81. M82 is a bright (eighth magnitude), nearby (12 million light-years from Earth) galaxy in the constellation Ursa Major (the Great Bear). The Hubble picture was taken Sept. 15, 1997. The natural-color composite was constructed from three Wide Field and Planetary Camera 2 exposures, which were combined in chromatic order: 4,250 seconds through a blue filter (428 nm); 2,800 seconds through a green filter (520 nm); and 2,200 seconds through a red (820 nm) filter. Credits for Hubble image: NASA, ESA, R. de Grijs (Institute of

  18. A novel agent based autonomous and service composition framework for cost optimization of resource provisioning in cloud computing

    Directory of Open Access Journals (Sweden)

    Aarti Singh

    2017-01-01

    Full Text Available A cloud computing environment offers a simplified, centralized platform of resources for use when needed at a low cost. One of the key functionalities of this type of computing is to allocate the resources on individual demand. However, with the expanding requirements of cloud users, the need for efficient resource allocation is also emerging. The main role of the service provider is to effectively distribute and share the resources, which otherwise would result in resource wastage. In addition to the user getting the appropriate service according to the request, the cost of the respective resource is also optimized. In order to surmount the mentioned shortcomings and perform optimized resource allocation, this research proposes a new Agent based Automated Service Composition (A2SC) algorithm comprising request processing and automated service composition phases, which is not only responsible for searching comprehensive services but also considers reducing the cost of virtual machines which are consumed by on-demand services only.

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  20. Power-Aware Resource Reconfiguration Using Genetic Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Li Deng

    2016-01-01

    Full Text Available Cloud computing enables scalable computation based on virtualization technology. However, current resource reallocation solutions seldom consider the stability of the virtual machine (VM) placement pattern. Varied application workloads would lead to frequent resource reconfiguration requirements due to the repeated appearance of hot nodes. In this paper, several algorithms for VM placement (multiobjective genetic algorithm (MOGA), power-aware multiobjective genetic algorithm (pMOGA), and enhanced power-aware multiobjective genetic algorithm (EpMOGA)) are presented to improve the stability of the VM placement pattern with less migration overhead. Energy consumption is also considered. A type-matching controller is designed to improve the evolution process. Nondominated sorting genetic algorithm II (NSGA-II) is used to select new generations during the evolution process. Our simulation results demonstrate that these algorithms all provide resource reallocation solutions with long node stabilization times. pMOGA and EpMOGA also better balance the relationship between stabilization and energy efficiency by adding the number of active nodes as one of the optimization objectives. The type-matching controller makes EpMOGA superior to pMOGA.
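
    The non-dominated sorting at the heart of NSGA-II can be sketched on a toy set of placement candidates scored on two objectives (migrations and energy). The candidate scores are invented for illustration; the actual chromosome encoding and crowding-distance selection of the paper are not reproduced.

```python
# Hedged sketch of the non-dominated sorting step (the core of NSGA-II) applied
# to VM placement candidates scored on two objectives to be minimized:
# number of migrations and estimated energy. The scores are made up.
def dominates(a, b):
    """a dominates b if it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# (migrations, energy_kwh) for a handful of placement patterns
placements = {"P1": (2, 140), "P2": (5, 110), "P3": (3, 150), "P4": (1, 180)}
front = pareto_front(list(placements.values()))
print([name for name, score in placements.items() if score in front])
# -> ['P1', 'P2', 'P4']  (P3 is dominated by P1)
```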

  1. Exploring Graphics Processing Unit (GPU) Resource Sharing Efficiency for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Teng Li

    2013-11-01

    Full Text Available The increasing incorporation of Graphics Processing Units (GPUs) as accelerators has been one of the forefront High Performance Computing (HPC) trends and provides unprecedented performance; however, the prevalent adoption of the Single-Program Multiple-Data (SPMD) programming model brings with it challenges of resource underutilization. In other words, under SPMD, every CPU needs GPU capability available to it. However, since CPUs generally outnumber GPUs, the asymmetric resource distribution gives rise to overall computing resource underutilization. In this paper, we propose to efficiently share the GPU under SPMD and formally define a series of GPU sharing scenarios. We provide performance-modeling analysis for each sharing scenario with accurate experimental validation. With this modeling basis, we further conduct experimental studies to explore potential GPU sharing efficiency improvements from multiple perspectives. Further theoretical and experimental GPU sharing performance analysis and results are presented. Our results not only demonstrate the significant performance gain for SPMD programs with the proposed efficient GPU sharing, but also the further improved sharing efficiency with the optimization techniques based on our accurate modeling.
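
    A minimal sketch of the sharing idea, assuming worker processes that outnumber GPUs: a counting semaphore sized to the number of GPUs serializes the offloaded sections while the CPU parts run freely. The sleep stands in for a real kernel launch; this is not the authors' framework, just the coordination pattern.

```python
# Minimal sketch: more CPU worker processes than GPUs, with a counting semaphore
# sized to the number of GPUs limiting how many workers offload at once.
import multiprocessing as mp
import time

N_WORKERS, N_GPUS = 8, 2

def worker(rank, gpu_slots):
    # CPU-only part of the SPMD program runs freely.
    time.sleep(0.01 * rank)
    with gpu_slots:                      # at most N_GPUS workers inside at once
        print(f"worker {rank} using a GPU slot")
        time.sleep(0.05)                 # placeholder for the offloaded kernel

if __name__ == "__main__":
    gpu_slots = mp.Semaphore(N_GPUS)
    procs = [mp.Process(target=worker, args=(r, gpu_slots)) for r in range(N_WORKERS)]
    for p in procs: p.start()
    for p in procs: p.join()
```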

  2. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools for communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results on the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  3. A parallel solver for huge dense linear systems

    Science.gov (United States)

    Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.

    2011-11-01

    HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that facilitates the parallel solution of very large dense systems for scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies to leverage secondary memory in order to solve huge linear systems of order 100,000. The API is based on the parallel linear algebra library PLAPACK and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors. New version program summary: Program title: Huge Dense System Solver (HDSS); Catalogue identifier: AEHU_v1_1; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 87 062; No. of bytes in distributed program, including test data, etc.: 1 069 110; Distribution format: tar.gz; Programming language: Fortran90, C; Computer: Parallel architectures: multiprocessors, computer clusters; Operating system
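
    HDSS itself is a Fortran API, but the payoff of blocking can be shown with a much smaller Python sketch: one LU factorization is reused while a large set of right-hand sides is solved in column blocks, so only a slice needs to be resident at a time. The sizes are toy values and the code is not the HDSS interface.

```python
# Sketch (not the HDSS API): reuse one LU factorization for a large set of
# right-hand sides, streaming the RHS in column blocks -- a much smaller cousin
# of the out-of-core strategy the library applies to the matrix itself.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n, nrhs, block = 2000, 512, 64           # toy sizes; HDSS targets ~200k equations
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
B = rng.standard_normal((n, nrhs))

lu, piv = lu_factor(A)                   # factor once
X = np.empty_like(B)
for j in range(0, nrhs, block):          # solve block by block
    X[:, j:j + block] = lu_solve((lu, piv), B[:, j:j + block])

print(np.allclose(A @ X, B))             # True
```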

  4. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Full Text Available Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in network resources, and (3) the high network latency to a centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work, we propose an architecture for IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm, consisting of the decision rules of a linearized decision tree based on three conditions (service size, completion time, and VM capacity), for managing and delegating user requests in order to balance the workload. Moreover, we propose an algorithm to allocate resources to meet service level agreements (SLA) and quality of service (QoS) requirements, as well as optimizing big data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance the workload, improve resource allocation, optimize big data distribution, and show better performance than other existing methods.
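
    The flavour of the delegation rules can be sketched as below. The abstract only names the three conditions (service size, completion time, and VM capacity); the thresholds, rule order, and field names here are assumptions for illustration.

```python
# Hedged reconstruction of the flavour of the decision rules; thresholds and
# rule order are assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Request:
    size_mb: float          # service/data size
    deadline_s: float       # required completion time

def delegate(req: Request, fog_vm_capacity_mb: float,
             small_size=50.0, tight_deadline=1.0) -> str:
    if req.deadline_s <= tight_deadline and req.size_mb <= fog_vm_capacity_mb:
        return "fog"        # delay-sensitive and small enough for the edge
    if req.size_mb <= small_size:
        return "fog"
    return "cloud"          # large or loose-deadline work goes to the data centre

print(delegate(Request(size_mb=20, deadline_s=0.5), fog_vm_capacity_mb=100))  # fog
print(delegate(Request(size_mb=800, deadline_s=30), fog_vm_capacity_mb=100))  # cloud
```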

  5. A Safety Resource Allocation Mechanism against Connection Fault for Vehicular Cloud Computing

    Directory of Open Access Journals (Sweden)

    Tianpeng Ye

    2016-01-01

    Full Text Available The Intelligent Transportation System (ITS) is becoming an important component of the smart city, working toward safer roads, better traffic control, and on-demand services by utilizing and processing the information collected from sensors of vehicles and roadside infrastructure. In ITS, Vehicular Cloud Computing (VCC) is a novel technology balancing the requirements of complex services and the limited capability of on-board computers. However, the behaviors of the vehicles in VCC are dynamic, random, and complex. Thus, one of the key safety issues is the frequent disconnection between a vehicle and the Vehicular Cloud (VC) when this vehicle is computing for a service. More importantly, a connection fault will seriously disturb the normal services of VCC and impact the safety work of the transportation system. In this paper, a safety resource allocation mechanism is proposed against connection faults in VCC by using a modified workflow with prediction capability. We first propose a probability model for vehicle movement which satisfies the high dynamics and real-time requirements of VCC. We then propose a Prediction-based Reliability Maximization Algorithm (PRMA) to realize safety resource allocation for VCC. The evaluation shows that our mechanism can improve the reliability and guarantee the real-time performance of the VCC.

  6. Context-aware computing-based reducing cost of service method in resource discovery and interaction

    Institute of Scientific and Technical Information of China (English)

    TANG Shan-cheng; HOU Yi-bin

    2004-01-01

    Reducing the cost of service is an important goal for resource discovery and interaction technologies. The shortcomings of the transshipment method and the hibernation method are that they increase the overall cost of service and slow down resource discovery, respectively. To overcome these shortcomings, a context-aware computing-based method is developed. This method first analyzes the ways in which devices use resource discovery and interaction technologies in order to identify the types of context related to reducing the cost of service; it then chooses effective measures, such as stopping broadcast and hibernation, to reduce the cost of service according to the information supplied by the context, rather than the transshipment method's simple hibernations. The experimental results indicate that, under the worst conditions, this method overcomes the shortcomings of the transshipment method, makes the "poor" devices hibernate longer than the hibernation method so as to reduce the cost of service more effectively, and discovers resources faster than the hibernation method; under the best conditions it is far better than the hibernation method in all aspects.

  7. Resources and Approaches for Teaching Quantitative and Computational Skills in the Geosciences and Allied Fields

    Science.gov (United States)

    Orr, C. H.; Mcfadden, R. R.; Manduca, C. A.; Kempler, L. A.

    2016-12-01

    Teaching with data, simulations, and models in the geosciences can increase many facets of student success in the classroom and in the workforce. Teaching undergraduates about programming and improving students' quantitative and computational skills expands their perception of Geoscience beyond field-based studies. Processing data and developing quantitative models are critically important for Geoscience students. Students need to be able to perform calculations, analyze data, create numerical models and visualizations, and more deeply understand complex systems—all essential aspects of modern science. These skills require students to have comfort and skill with languages and tools such as MATLAB. To achieve comfort and skill, computational and quantitative thinking must build over a 4-year degree program across courses and disciplines. However, in courses focused on Geoscience content it can be challenging to get students comfortable with using computational methods to answer Geoscience questions. To help bridge this gap, we have partnered with MathWorks to develop two workshops focused on collecting and developing strategies and resources to help faculty teach students to incorporate data, simulations, and models into the curriculum at the course and program levels. We brought together faculty members from the sciences, including Geoscience and allied fields, who teach computation and quantitative thinking skills using MATLAB to build a resource collection for teaching. These materials and the outcomes of the workshops are freely available on our website. The workshop outcomes include a collection of teaching activities, essays, and course descriptions that can help faculty incorporate computational skills at the course or program level. The teaching activities include in-class assignments, problem sets, labs, projects, and toolboxes. These activities range from programming assignments to creating and using models. The outcomes also include workshop

  8. A probabilistic algorithm for interactive huge genome comparison.

    Science.gov (United States)

    Courtois, P R; Moncany, M L

    1995-12-01

    We designed a new probabilistic algorithm, named PAGEC (probabilistic algorithm for genome comparison), which allowed a highly interactive study of long genomic strings. The comparison between two nucleic acid sequences is based on the creation of multiple index tables, which drastically reduces processing time for huge genomes, e.g. 13 min for a 4 Mb/4 Mb comparison. PAGEC required less memory than other types of algorithms and took into account the low resolution of the final representation (paper or computer screen). Considering that standard printers permit a 300 d.p.i. resolution, the loss of computed information due to the probabilistic conception of the algorithm was not usually noticeable in the present study, mainly because of the large genome sizes involved. Refinement was possible through an interactive zooming system, which enabled visualization of the lexical base sequences of a selected part of both of the studied genomes. Biological examples of computation based on yeast and animal nucleic acid sequences presented in this paper reveal the flexibility of the PAGEC program, which is a valuable tool for genetic studies as it offers a solution to an important problem that will become even more important as time passes.
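
    The index-table idea behind PAGEC can be reduced to a toy k-mer lookup: hash every k-mer of one sequence, then stream the other sequence against the table to collect matching coordinates, the raw material of a comparison plot. The probabilistic sub-sampling and multiple tables that let PAGEC scale to multi-megabase genomes are not reproduced; k and the sequences are placeholders.

```python
# Toy version of the index-table idea: hash every k-mer of one sequence, then
# stream the other sequence against the table to collect matching coordinates.
from collections import defaultdict

def kmer_index(seq: str, k: int) -> dict:
    index = defaultdict(list)
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].append(i)
    return index

def compare(seq_a: str, seq_b: str, k: int = 8):
    index = kmer_index(seq_a, k)
    for j in range(len(seq_b) - k + 1):
        for i in index.get(seq_b[j:j + k], ()):
            yield i, j                     # a dot at (i, j) in the comparison plot

a = "ATGGCGTACGTTAGCATGGCGTAC"
b = "CCATGGCGTACGTTAAT"
print(list(compare(a, b)))
```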

  9. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

    The general potential of computer games for teaching and learning is becoming widely recognized. In particular, within the application contexts of primary and lower secondary education, the relevance and value of computer games seem more accepted, and the possibility and willingness to incorporate computer games as a resource on a par with other educational resources seem more frequent. For some reason, however, applying computer games in processes of teaching and learning at the high school level seems an almost non-existent event. This paper reports on a study of incorporating

  10. Method to Reduce the Computational Intensity of Offshore Wind Energy Resource Assessments Using Cokriging

    Science.gov (United States)

    Dvorak, M. J.; Boucher, A.; Jacobson, M. Z.

    2009-12-01

    Wind energy represents the fastest growing renewable energy resource, sustaining double-digit growth for the past 10 years with approximately 94,000 MW installed by the end of 2007. Although winds over the ocean are generally stronger and often located closer to large urban electric load centers, offshore wind turbines represent about 1% of installed capacity. In order to evaluate the economic potential of an offshore wind resource, wind resource assessments typically involve running large mesoscale model simulations, validated with sparse in-situ meteorological station data. These simulations are computationally expensive, limiting their temporal coverage. Although a wealth of other wind data does exist (e.g. QuikSCAT satellite, SAR satellite, radar/SODAR wind profiler, and radiosonde data), these data are often ignored or interpolated trivially because of their widely varying spatial and temporal resolution. A spatio-temporal cokriging approach with non-parametric covariances was developed to interpolate these empirical data and compare them with previously validated surface winds output by the PSU/NCAR MM5 for coastal California. The spatio-temporal covariance model is assumed to be the product of a spatial and a temporal covariance component. The temporal covariance is derived from in-situ wind speed measurements at 10-minute intervals measured by offshore buoys, and variograms are calculated non-parametrically using an FFT. Spatial covariance tables are created using MM5 or QuikSCAT data with a similar 2D FFT method. The cokriging system was initially validated by predicting “missing” hours of PSU/NCAR MM5 data and has displayed reasonable skill. QuikSCAT satellite winds were also substituted for MM5 data when calculating the spatial covariance, with the goal of reducing the computer time needed to accurately predict a wind energy resource.
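
    The FFT shortcut mentioned for the covariance estimation can be sketched for an evenly sampled series: by the Wiener-Khinchin relation, one pair of FFTs yields the empirical autocovariance at all lags at once. The synthetic wind series and the biased estimator below are illustrative assumptions, not the study's data or exact estimator.

```python
# Sketch of the FFT trick: the empirical autocovariance of an evenly sampled
# wind-speed series is computed for all lags at once via Wiener-Khinchin,
# instead of a lag-by-lag O(n^2) loop.
import numpy as np

def autocovariance_fft(x: np.ndarray) -> np.ndarray:
    x = x - x.mean()
    n = len(x)
    nfft = 2 * n                              # zero-pad to avoid circular wrap-around
    spectrum = np.abs(np.fft.rfft(x, nfft)) ** 2
    acov = np.fft.irfft(spectrum)[:n] / n     # biased estimator, lags 0..n-1
    return acov

rng = np.random.default_rng(3)
wind = 8 + np.cumsum(rng.normal(0, 0.1, 6 * 24 * 10))  # toy 10-minute series
acov = autocovariance_fft(wind)
print(acov[0], acov[1])                       # lag-0 variance and lag-1 covariance
```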

  11. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may, however, generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage. The amount of resources allocated can thus be elastically adjusted to cope with the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, we report on the performance of the on-demand allocated resources, and we discuss the lessons learned and the next steps.

  12. Umbilicoplasty in children with huge umbilical hernia

    Directory of Open Access Journals (Sweden)

    Akakpo-Numado Gamedzi Komlatsè

    2014-01-01

    Full Text Available Background: Huge umbilical hernias (HUH) are voluminous umbilical hernias (UH) that are frequent in black African children. Several surgical techniques are used in their treatment for umbilical reconstruction, but techniques using skin flaps provide better aesthetic results. In this study, we present our technique of umbilicoplasty for HUH and its results. Patients and Methods: This is a retrospective study of children treated for HUH from January 2012 to December 2013. A UH was called a HUH when its base diameter (BD) exceeded 3 cm. Every HUH was characterized by its height, BD and morphology. Our technique is a two-lateral-flap technique; the flaps are symmetrical and drawn so as to reconstitute the different parts of the umbilicus. The results were assessed with criteria including the peripheral ring and the central depression of the neo-umbilicus. Results: Twelve children were included (7 boys and 5 girls). Their mean age was 5 years and 6 months. The mean BD was 5.6 cm (range 3 to 8 cm), and the mean height of the HUH was 7.45 cm (range 3 to 9 cm). All underwent umbilicoplasty. In the early postoperative period, two children presented a transitory subcutaneous hematoma. Late complications were granulation tissue in two children and keloid scar in one. With a mean follow-up of 10 months, we had 10 excellent results and two fair results according to our criteria. Conclusion: Our two-lateral-flap umbilicoplasty is well-adapted to HUH in children. It is simple and ensures a satisfactory anatomical and cosmetic result.

  13. Hepatectomy for huge hepatocellular carcinoma: single institute's experience.

    Science.gov (United States)

    Yang, Lianyue; Xu, Jiangfeng; Ou, Dipeng; Wu, Wei; Zeng, Zhijun

    2013-09-01

    The surgical resection of huge hepatocellular carcinoma (HCC) is still controversial. This study was designed to introduce our experience of liver resection for huge HCC and to evaluate the safety and outcomes of hepatectomy for huge HCC. A total of 258 hepatic resections for patients with huge HCC were analysed retrospectively from December 2002 to December 2011. The operative outcomes were compared with those of 293 patients with HCC larger than 5.0 cm that did not meet the huge HCC criterion. The solitary huge HCC group had a significantly longer overall and disease-free survival time than the nodular huge HCC group (P = 0.026, P = 0.022). Univariate and multivariate analysis revealed that the type of tumour, vascular invasion, and UICC stage were independent prognostic factors for overall survival (P = 0.047, P = 0.037, P = 0.033). Hepatic resection can be performed safely for huge HCC with low mortality and favorable survival outcomes. Solitary huge HCC has better surgical outcomes than nodular huge HCC.

  14. Public-Resource Computing: Un nuevo paradigma para la computación y la ciencia

    OpenAIRE

    2006-01-01

    This article explores the concept of Public-Resource Computing, an idea that has been developed with great success over the past several years in the scientific community and that consists of harnessing the computing resources available in the millions of PCs around the world connected to the Internet. The SETI@home project, the most successful representative of this concept, is discussed, and the BOINC platform (Ber...

  15. The model of localized business community economic development under limited financial resources: computer model and experiment

    Directory of Open Access Journals (Sweden)

    Berg Dmitry

    2016-01-01

    Full Text Available Globalization processes now affect, and are affected by, most organizations, many types of resources, and the natural environment. One of the main restrictions arising from these processes is a financial one: money turnover in global markets leads to its concentration in certain financial centers, while local business communities suffer from a lack of money. This work discusses the advantages of introducing a complementary currency into a local economy. Computer simulation with the engineered program model and a real economic experiment showed that the complementary currency does not compete with the traditional currency; furthermore, it acts in concert with it, providing conditions for sustainable business community development.

  16. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2011-12-01

    Full Text Available The purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology—meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work-related information, coordination of team activities, spread of organizational culture and feeling of interdependence and affinity. Also, informal communication widens the individuals’ recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in an informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and acquaintances shows that workers of the court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  17. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Agota Giedrė Raišienė

    2013-08-01

    Full Text Available The purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology—meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work-related information, coordination of team activities, spread of organizational culture and feeling of interdependence and affinity. Also, informal communication widens the individuals’ recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in an informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer-based communication much more often than colleagues (72% and 63%, respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and acquaintances shows that workers of the court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  18. Integration and Exposure of Large Scale Computational Resources Across the Earth System Grid Federation (ESGF)

    Science.gov (United States)

    Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.

    2015-12-01

    As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate change, the combination of Earth observation data and global climate model projections is crucial not only to scientists but to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyber infrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data with limited capabilities for server-side processing. Since the amount of data for future ARs is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capabilities. This presentation will highlight the WPS API, its capabilities, provide implementation details, and discuss future developments.

  19. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    Directory of Open Access Journals (Sweden)

    Cesar Torres-Huitzil

    2013-01-01

    Full Text Available Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k×k kernel requires k²−1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computation can be achieved by kernel decomposition and by using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024×1024 images with up to 255×255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding.
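
    The van Herk/Gil-Werman recurrence that the architecture implements in hardware is easy to state in software: with per-block prefix and suffix maxima, a length-k running maximum costs roughly three comparisons per sample regardless of k, and the two-dimensional k×k filter is this pass applied to rows and then columns. The Python sketch below is for intuition only; it mirrors the algorithm, not the FPGA design.

```python
# Python sketch of the van Herk/Gil-Werman (HGW) running-max recurrence:
# 1-D windows of length k cost ~3 comparisons per sample regardless of k.
import numpy as np

def running_max_1d(f: np.ndarray, k: int) -> np.ndarray:
    n = len(f)
    pad = (-n) % k
    f = np.concatenate([f, np.full(pad, -np.inf)])     # pad to a multiple of k
    g = f.copy()                                       # prefix max within blocks
    h = f.copy()                                       # suffix max within blocks
    for i in range(1, len(f)):
        if i % k:
            g[i] = max(g[i - 1], f[i])
    for i in range(len(f) - 2, -1, -1):
        if (i + 1) % k:
            h[i] = max(h[i + 1], f[i])
    # max over the causal window f[x : x + k]
    return np.array([max(h[x], g[x + k - 1]) for x in range(n - k + 1)])

sig = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], float)
print(running_max_1d(sig, 3))        # [4. 4. 5. 9. 9. 9. 6. 6.]
```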

  20. A Comparative Study of Load Balancing Algorithms in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    Cloud Computing is a new trend emerging in the IT environment with huge requirements of infrastructure and resources. Load balancing is an important aspect of the cloud computing environment. An efficient load balancing scheme ensures efficient resource utilization by provisioning resources to cloud users on demand in a pay-as-you-go manner. Load balancing may even support prioritizing users by applying appropriate scheduling criteria. This paper presents various load balancing schemes in differ...

  1. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  2. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods.

    Science.gov (United States)

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage, in terms of CPU and RAM, in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows, beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  4. Monitoring of Computing Resource Use of Active Software Releases at ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2017-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage, in terms of CPU and RAM, in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows, beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  5. Colonic Angiodysplasia with a Huge Submucosal Hematoma in the Sigmoid Colon.

    Science.gov (United States)

    Shimizu, Takayuki; Koike, Daisuke; Nomura, Yukihiro; Ooe, Kenji

    2016-01-01

    Colonic angiodysplasia (AD) with bleeding as a comorbidity in the aging population is being increasingly reported. However, to our knowledge, there is no report on colonic AD accompanied by a huge hematoma. Herein, we report a case of colonic AD with a huge submucosal hematoma. A 75-year-old man with sudden melena was referred to our hospital. Helical computed tomographic angiography (CTA) revealed bleeding from the sigmoid colon. Additionally, colonoscopy showed a huge submucosal hematoma with bleeding in the sigmoid colon. As endoscopic hemostasis was difficult, sigmoidectomy was performed. The pathological diagnosis was colonic AD. The present case indicates that colonic AD should be considered in the differential diagnosis for melena. In addition, the case shows that helical CTA, which is a noninvasive imaging modality, is useful for the diagnosis of colonic AD and is as effective as colonoscopy and angiography for diagnosis.

  6. Giant pulmonary teratoma with huge splenic lymphangiomatosis: a very rare case.

    Science.gov (United States)

    Alsubaie, Hemail M; Alsubaie, Khaled M; Mahfouz, Mohammed Eid

    2017-09-01

    Teratomas are tumors composed of tissues derived from more than one germ cell line. They manifest with a great variety of clinical and radiological features. We report a case of a giant left hemithorax teratoma in a female with a huge splenic tumor and review the relevant literature. A 38-year-old female presented with progressively worsening dyspnea at rest after a mild trauma. Breath sounds were absent on the left side, and splenomegaly was present. Computed tomography scan revealed a huge mass (20 × 15 × 18 cm), containing elements of heterogeneous density, in the left hemithorax. The splenic tumor occupied most of the spleen without any other abdominal manifestations. The patient underwent left thoracotomy and laparoscopic splenectomy. Histopathological examination revealed a benign mature teratoma and cystic lymphangiomatosis of the spleen. To the best of our knowledge and after reviewing the available literature, this is the first case of a huge mature pulmonary teratoma with large cystic splenic lymphangiomatosis.

  7. Huge splenic epidermoid cyst with elevation of serum CA19-9 level.

    Science.gov (United States)

    Matsumoto, Sayo; Mori, Toshifumi; Miyoshi, Jinsei; Imoto, Yoshitaka; Shinomiya, Hirohiko; Wada, Satoshi; Nakao, Toshihiro; Shinohara, Hisamitsu; Yoshida, Sadahiro; Izumi, Keisuke; Okazaki, Jun; Muguruma, Naoki; Takayama, Tetsuji

    2015-01-01

    A 30-year-old female was referred to our hospital for further examination of liver dysfunction. A huge, soft mass was noted in her left upper quadrant on physical examination. Abdominal ultrasonography and computed tomography revealed a huge cystic tumor of 20 cm in the hilus of the spleen. Serum CA19-9 was 491 U/ml, and splenectomy was performed under suspicion of a malignant cystic tumor. The inner surface of the cyst was lined by squamous epithelial cells that were immunohistochemically positive for CA19-9. Serum CA19-9 level was normalized after the surgery. Our case of a very rare, huge epidermoid cyst of the spleen suggests that measurement of the serum CA19-9 level is useful for evaluating therapeutic efficacy of a splenic epidermoid cyst.

  8. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  9. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
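
    To make the constant-time property concrete, here is a plain software version of the integral image and the four-lookup rectangle sum it enables (an illustration of the underlying idea only, not the row-parallel hardware algorithms proposed in the paper):

      import numpy as np

      def integral_image(img):
          """Summed-area table with a zero row/column prepended: ii[y, x] = sum(img[:y, :x])."""
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
          ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
          return ii

      def rect_sum(ii, y0, x0, y1, x1):
          """Sum of img[y0:y1, x0:x1] using four lookups, independent of rectangle size."""
          return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

      img = np.arange(16).reshape(4, 4)
      ii = integral_image(img)
      print(rect_sum(ii, 1, 1, 3, 3))  # 5 + 6 + 9 + 10 = 30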

  10. Research of cloud computing resource scheduling model

    Institute of Scientific and Technical Information of China (English)

    刘赛; 李绪蓉; 万麟瑞; 陈韬

    2013-01-01

    In the cloud computing environment, resource scheduling management is one of the key technologies. This paper describes a cloud computing resource scheduling model and explains the relationships between the entities involved in the resource scheduling process in a cloud computing environment. Based on the resource properties of the physical servers, a scheduling model that comprehensively considers the load on cloud computing resources is established, and a combination of manual and automatic virtual machine migration is used to balance the load of the physical servers in the cloud computing environment. The experimental results show that this resource scheduling model not only balances the resource load well but also improves the degree of virtualization and the elasticity of the resource pool. Finally, future research directions are discussed.
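
    As a toy illustration of load-driven virtual machine migration (a generic greedy heuristic, not the scheduling model of the paper), a balancer might move a VM from the most to the least utilized physical server whenever their utilization gap exceeds a threshold:

      def pick_migration(hosts, threshold=0.2):
          """hosts: name -> {'capacity': float, 'vms': {vm_name: load}}.
          Returns (vm, source, destination) or None if the load is balanced enough."""
          util = {h: sum(d['vms'].values()) / d['capacity'] for h, d in hosts.items()}
          src, dst = max(util, key=util.get), min(util, key=util.get)
          if src == dst or util[src] - util[dst] <= threshold or not hosts[src]['vms']:
              return None
          vm = min(hosts[src]['vms'], key=hosts[src]['vms'].get)  # move the smallest VM first
          return vm, src, dst

      hosts = {'pm1': {'capacity': 8.0, 'vms': {'vm1': 3.0, 'vm2': 4.0}},
               'pm2': {'capacity': 8.0, 'vms': {'vm3': 1.0}}}
      print(pick_migration(hosts))  # ('vm1', 'pm1', 'pm2')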

  11. Managing Security in Advanced Computational Infrastructure

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Proposed by the Education Ministry of China, the Advanced Computational Infrastructure (ACI) aims at sharing geographically distributed high-performance computing and huge-capacity data resources among the universities of China. With the fast development of large-scale applications in ACI, the security requirements have become more and more urgent. The special security needs of ACI are first analyzed in this paper, and a security management system based on ACI is presented. Finally, the realization of the security management system is discussed.

  12. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    Science.gov (United States)

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas (the most general synthesis scenario), then the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.
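
    For orientation, the robustness of magic mentioned above is commonly defined in this line of work as the minimal l1-norm over affine decompositions of a state into pure stabilizer states (a standard formulation stated here for context, not quoted from the paper):

      \mathcal{R}(\rho) \;=\; \min_{x}\,\Big\{ \|x\|_{1} \;:\; \rho = \sum_{i} x_{i}\,|s_{i}\rangle\langle s_{i}|,\ \ |s_{i}\rangle \in \mathrm{STAB},\ x_{i}\in\mathbb{R} \Big\}

    The sampling cost of the quasi-probability simulation alluded to in the abstract grows roughly with the square of this quantity, which is what makes it an operational measure of classical simulation overhead.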

  13. Using multiple metaphors and multimodalities as a semiotic resource when teaching year 2 students computational strategies

    Science.gov (United States)

    Mildenhall, Paula; Sherriff, Barbara

    2017-06-01

    Recent research indicates that using multimodal learning experiences can be effective in teaching mathematics. Using a social semiotic lens within a participationist framework, this paper reports on a professional learning collaboration with a primary school teacher designed to explore the use of metaphors and modalities in mathematics instruction. This video case study was conducted in a year 2 classroom over two terms, with the focus on building children's understanding of computational strategies. The findings revealed that the teacher was able to successfully plan both multimodal and multiple metaphor learning experiences that acted as semiotic resources to support the children's understanding of abstract mathematics. The study also led to implications for teaching when using multiple metaphors and multimodalities.

  14. Development of a Computer-Based Resource for Inclusion Science Classrooms

    Science.gov (United States)

    Olsen, J. K.; Slater, T.

    2005-12-01

    Current instructional issues necessitate that educators start with the curriculum and determine how educational technology can assist students in achieving positive learning goals, functionally supplementing classroom instruction. Technology projects incorporating principles of situated learning have been shown to provide an effective framework for learning, and computer technology has been shown to facilitate learning among special needs students. Students with learning disabilities may benefit from assistive technology, but these resources are not always utilized during classroom instruction: technology is only effective if teachers view it as an integral part of the learning process. The materials currently under development are in the domain of earth and space science, part of the Arizona 5-8 Science Content Standards. The concern of this study is to determine a means of assisting inclusive education that is both feasible and effective in ensuring successful science learning outcomes for all students, whether in regular education or with special needs.

  15. Optimization of Dynamically Generated SQL Queries for Tiny-Huge, Huge-Tiny Problem

    Directory of Open Access Journals (Sweden)

    Arjun K Sirohi

    2013-03-01

    Full Text Available In most new commercial business software applications like Customer Relationship Management, the data is stored in the database layer, which is usually a Relational Database Management System (RDBMS) like Oracle, DB2 UDB or SQL Server. To access data from these databases, Structured Query Language (SQL) queries are used that are generated dynamically at run time based on defined business models and business rules. One such business rule is visibility: the capability of the application to restrict data access based on the role and responsibility of the user logged in to the application. This is generally achieved by appending security predicates in the form of sub-queries to the main query based on the roles and responsibilities of the user. In some cases, the outer query may be more restrictive, while in other cases the security predicates may be more restrictive. This often results in a dilemma for the cost-based optimizer (CBO) of the backend database: whether to drive from the outer query or from the security predicate sub-queries. This dilemma is sometimes called the "Tiny-Huge, Huge-Tiny" problem and results in serious performance degradation by way of increased response times on the application User Interface (UI). This paper provides a case study of a new approach to vastly reduce this CBO dilemma by a combination of denormalized columns and re-writing of the security predicates' sub-queries at run time, thereby levelling the outer and security sub-queries. This approach results in more stable execution plans in the database and much better performance of such SQLs, effectively leading to higher performance and scalability of the application.

  16. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; Sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started a collaboration in March 2015 to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end users with seamless access to an integrated infrastructure offering both EGI and EUDAT services, thereby pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help to assign the right priorities to each of them. In this way, the activity has been driven by the end users from the beginning. The identified user communities are

  17. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction.

    Science.gov (United States)

    Nezarat, Amin; Dastghaibifard, G H

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is economic, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used, and the proposed model is simulated in CloudSim and the results are compared with previous work. In the end, it is concluded that this method converges to a response in a shorter time, produces the fewest service level agreement violations and provides the most utility to the provider.
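
    The equilibrium intuition can be seen in a much simpler toy than the paper's incomplete-information game: in an ascending auction for a single resource, losing users keep raising their bid while the price is still below their valuation, and the process stops exactly when no one wants to deviate (names and figures below are invented for illustration):

      def ascending_auction(values, eps=0.01):
          """Toy ascending-price auction for one resource; values: user -> private valuation."""
          price, winner = 0.0, None
          raised = True
          while raised:
              raised = False
              for user, value in values.items():
                  if user != winner and value >= price + eps:  # a losing user still willing to pay more
                      price, winner = price + eps, user
                      raised = True
          return winner, round(price, 2)

      print(ascending_auction({'u1': 0.8, 'u2': 1.2, 'u3': 0.5}))  # u2 wins at roughly the second-highest valuation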

  18. Late Intervention-Related Complication - A Huge Subepicardial Hematoma.

    Science.gov (United States)

    Ko, Po-Yen; Chang, Chih-Ping; Yang, Chen-Chia; Lin, Jen-Jyh

    2013-05-01

    A 75-year-old man had a history of triple-vessel coronary artery disease. In August 2009, he had undergone successful percutaneous coronary intervention to the left circumflex coronary artery (LCX) for management of an in-stent restenosis (ISR) lesion. However, in September 2010, he began experiencing recurrent episodes of exertional chest pain. Chest radiography showed the left cardiac border bulging upwards. Transthoracic echocardiography and chest computed tomography revealed a huge oval mass of about 10.4 cm × 7.9 cm × 8.6 cm, which showed calcification and was obliterating the LCX. Subsequent coronary angiography revealed significant in-stent restenosis, with extravasation of a small amount of contrast material at the stent location, suggesting that the coronary artery had ruptured. We implanted a polytetrafluoroethylene-covered stent to seal the coronary perforation and to relieve the occlusion. The patient was symptom-free and had an uneventful outcome until the 1-year follow-up. Keywords: coronary artery perforation; covered stent; hematoma.

  19. A huge posteromedial mediastinal cyst complicated with vertebral dislodgment

    Directory of Open Access Journals (Sweden)

    Manoussaridis Jordan T

    2006-08-01

    Full Text Available Abstract Background Mediastinal cysts comprise almost 20% of all mediastinal masses, with the bronchogenic subtype accounting for 60% of all cystic lesions. Although compression of adjoining soft tissues is usual, spinal complications and neurological symptoms are extremely rare and tend to characterize almost exclusively the neuroenteric cysts. Case presentation A young patient with intermittent, dull pain in his back and an unremarkable medical history presented to the orthopaedic department of our hospital. There, the initial clinical and radiologic evaluation revealed a mediastinal mass, and the patient was referred to the thoracic surgery department for further exploration. Subsequent computed tomography (CT) and magnetic resonance imaging (MRI) showed a huge mediastinal cyst compressing the T4-T6 vertebral bodies. The neurological symptoms of the patient were attributed to this specific pathology because of the complete agreement between the location of the cyst and the area innervated at the level of the compressed thoracic vertebrae. Despite our strong recommendation of surgery, the patient declined any treatment. Conclusion In contrast to the common belief that the spine plays the role of a natural barrier to the further expansion of cystic lesions, our case clearly indicates that, exceptionally, mediastinal cysts may cause severe vertebral complications. Therefore, early excision should be considered, especially in young patients or where close follow-up is uncertain.

  20. GRID : unlimited computing power on your desktop Conference MT17

    CERN Document Server

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from their use, and it allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  1. Huge Pericardial Cyst Misleading Symptoms of COPD

    Directory of Open Access Journals (Sweden)

    Göktürk Fındık

    2012-04-01

    Full Text Available Pericardial cysts are rare benign congenital mediastinal lesions, accounting for about 30% of all mediastinal cysts. They are usually asymptomatic. When unusually large, they can compress mediastinal structures, typically causing dyspnea, thoracic pain, tachycardia and cough, and they can produce symptoms of lung atelectasis. The case was a sixty-five-year-old woman followed with a diagnosis of COPD for seven years. The patient was admitted to our center with a finding of elevation of the right hemidiaphragm on chest radiography. Computed tomography revealed a cystic lesion adjacent to the right hemidiaphragm, and cyst excision was performed via right thoracotomy. The patient's postoperative clinical course indicated that the symptoms of COPD regressed completely and the patient did not require any further bronchodilator therapy. The aim of this case report is to demonstrate that pericardial cysts can be missed on chest radiographs and that the compression they cause may produce COPD-like symptoms in these patients.

  2. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  3. IMPROVING RESOURCE UTILIZATION USING QoS BASED LOAD BALANCING ALGORITHM FOR MULTIPLE WORKFLOWS IN IAAS CLOUD COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    L. Shakkeera

    2013-06-01

    Full Text Available Cloud computing is the extension of parallel computing, distributed computing and grid computing. It provides secure, quick, convenient data storage and computing services over the Internet. The services are available to users in a pay-per-use, on-demand model. The main aim of using resources from the cloud is to reduce cost and to increase performance in terms of request response time. Thus, optimizing resource usage through an efficient load balancing strategy is crucial. The main aim of this paper is to develop and implement an optimized load balancing algorithm in an IaaS virtual cloud environment that utilizes the virtual cloud resources efficiently. It minimizes the cost of the applications by effectively using cloud resources and identifies the virtual cloud resources that are suitable for the applications. The web application is created with many modules. These modules are considered as tasks, and these tasks are submitted to the load balancing server. The server, which hosts our load balancing policies, redirects the tasks to the corresponding virtual machines created by the KVM virtual machine manager as per the load balancing algorithm. If the size of the database inside a machine exceeds its limit, the load balancing algorithm uses other virtual machines for further incoming requests. The load balancing strategy is evaluated for various QoS performance metrics, such as cost, average execution time, throughput, CPU usage, disk space, memory usage, network transmission and reception rate, resource utilization rate and scheduling success rate, for different numbers of virtual machines, and it improves scalability among resources using load balancing techniques.

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months, activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production, and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  5. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    Science.gov (United States)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm which converges all base stations' computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRH). A precondition for centralized processing in the BBU pool is an interconnection fronthaul network with high capacity and low delay. However, the interaction between RRH and BBU, and the resource scheduling among BBUs in the cloud, have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  6. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

    The goal of this project was to build an application to calculate the computing resources needed by the LHCb experiment for data processing and analysis, and to predict their evolution in future years. The source code was developed in the Python programming language, and the application was built and developed in CERN GitLab. This application will facilitate the calculation of the resources required by LHCb in both qualitative and quantitative terms. The granularity of the computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.
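
    The core of such a resource calculation is simple arithmetic: a weekly event forecast multiplied by per-event CPU and storage costs. The sketch below uses made-up figures and names purely to illustrate the weekly granularity mentioned above; it is not the LHCb code:

      SECONDS_PER_WEEK = 7 * 24 * 3600

      def weekly_needs(events_per_week, cpu_sec_per_event, kb_per_event):
          """Return (CPU cores, disk in TB) needed to keep up with one week of data."""
          cores = events_per_week * cpu_sec_per_event / SECONDS_PER_WEEK
          disk_tb = events_per_week * kb_per_event * 1e3 / 1e12
          return cores, disk_tb

      # e.g. 5e9 events/week, 10 CPU-seconds and 50 kB per event (illustrative numbers)
      print(weekly_needs(5e9, 10.0, 50.0))  # roughly 82,700 cores and 250 TB per week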

  7. Biliary obstruction due to a huge simple hepatic cyst treated with laparoscopic resection.

    Science.gov (United States)

    Kaneya, Yohei; Yoshida, Hiroshi; Matsutani, Takeshi; Hirakata, Atsushi; Matsushita, Akira; Suzuki, Seiji; Yokoyama, Tadashi; Maruyama, Hiroshi; Sasajima, Koji; Uchida, Eiji

    2011-01-01

    Most hepatic cysts are asymptomatic, but complications occasionally occur. We describe a patient with biliary obstruction due to a huge simple hepatic cyst treated with laparoscopic resection. A 60-year-old Japanese woman was admitted to our hospital because of a nontender mass in the right upper quadrant of the abdomen. Laboratory tests revealed the following: serum total bilirubin, 0.6 mg/dL; serum aspartate aminotransferase, 100 IU/L; serum alanine aminotransferase, 78 IU/L; serum alkaline phosphatase, 521 IU/L; and serum gamma glutamic transpeptidase, 298 IU/L. Abdominal computed tomography, ultrasonography, and magnetic resonance cholangiopancreatography revealed a huge hepatic cyst, 13 cm in diameter, at the hepatic hilum, accompanied by dilatation of the intrahepatic bile duct and obstruction of the common bile duct. We diagnosed biliary obstruction due to a huge hepatic cyst at the hepatic hilum, and laparoscopic surgery was performed. A huge hepatic cyst was seen at the hepatic hilum. After needle puncture of the huge cyst, the anterior wall of the cyst was unroofed, and cholecystectomy was done. Intraoperative cholangiography through a cystic duct revealed stenosis of the duct. Subsequent decapsulation of the cyst was performed in front of the common bile duct. After this procedure, cholangiography revealed that the stenosis of the common bile duct had resolved. Histopathological examination of the surgical specimen confirmed the hepatic cyst was benign. The postoperative course was uneventful, and the results of liver function tests normalized. The patient was discharged 7 days after operation. Computed tomography 3 months after operation revealed disappearance of the hepatic cyst and no dilatation of the intrahepatic bile duct.

  8. Power Efficient Resource Allocation for Clouds Using Ant Colony Framework

    CERN Document Server

    Chimakurthi, Lskrao

    2011-01-01

    Cloud computing is one of the rapidly improving technologies. It provides scalable resources needed for the applications hosted on it. As cloud-based services become more dynamic, resource provisioning becomes more challenging. The QoS-constrained resource allocation problem is considered in this paper, in which customers are willing to host their applications on the provider's cloud with given SLA requirements for performance, such as throughput and response time. Since the data centers hosting the applications consume huge amounts of energy and incur huge operational costs, solutions that reduce energy consumption as well as operational costs are gaining importance. In this work, we propose an energy-efficient mechanism that allocates cloud resources to the applications without violating the given service level agreements (SLA), using an ant colony framework.

  9. Scarless surgery for a huge liver cyst: A case report.

    Science.gov (United States)

    Kashiwagi, Hiroyuki; Kawachi, Jun; Isogai, Naoko; Ishii, Masanori; Miyake, Katsunori; Shimoyama, Rai; Fukai, Ryota; Ogino, Hidemitsu

    2017-09-01

    Symptomatic or complicated liver cysts sometimes require surgical intervention and laparoscopic fenestration is the definitive treatment for these cysts. We performed minimally invasive surgery, hybrid natural orifice transluminal endoscopic surgery (NOTES) without scarring, for a huge liver cyst. An 82-year-old female presented with a month-long history of right upper abdominal pain. We diagnosed her condition as a huge liver cyst by morphological studies. She denied any history of abdominal trauma. Her serum CEA and CA19-9 were normal and a serum echinococcus serologic test was negative. Laparoscopic fenestration, using a hybrid NOTES procedure via a transvaginal approach, was performed for a huge liver cyst because we anticipated difficulty with an umbilical approach, such as single incision laparoscopic surgery (SILS). Her post-operative course was uneventful and she was discharged from our hospital three days after surgery. Pain killers were not required during and after hospitalization. No recurrence of the liver cyst or bulging was detected by clinical examination two years later. A recent trend of laparoscopic procedure has been towards minimizing the number of incisions to achieve less invasiveness. This hybrid NOTES, with a small incision for abdominal access, along with vaginal access, enabled painless operation for a huge liver cyst. We report a huge liver cyst treated by hybrid NOTES. This approach is safe, less invasive, and may be the first choice for a huge liver cyst. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  10. Cost and Performance-Based Resource Selection Scheme for Asynchronous Replicated System in Utility-Based Computing Environment

    Directory of Open Access Journals (Sweden)

    Wan Nor Shuhadah Wan Nik

    2017-04-01

    Full Text Available A resource selection problem for asynchronous replicated systems in a utility-based computing environment is addressed in this paper. The need for special attention to this problem lies in the fact that most existing replication schemes in such computing systems either implicitly support synchronous replication and/or only consider read-only jobs. The problem is undoubtedly complex to solve, as two main issues need to be considered simultaneously, i.e. (1) the difficulty of predicting the performance of the resources in terms of job response time, and (2) the need for an efficient mechanism to measure the trade-off between the performance and the monetary cost incurred on resources, so that minimum cost is preserved while providing a low job response time. Therefore, a simple yet efficient algorithm that deals with the complexity of the resource selection problem in utility-based computing systems is proposed in this paper. The problem is formulated as a Multi Criteria Decision Making (MCDM) problem. The advantages of the algorithm are twofold. First, it hides the complexity of the resource selection process without neglecting important components that affect job response time. The difficulty of estimating job response time is captured by representing it in terms of different QoS criteria levels at each resource. Second, this representation further relaxes the complexity of measuring the trade-offs between the performance and the monetary cost incurred on resources. The experiments show that our proposed resource selection scheme achieves appealing results, with good system performance and low monetary cost, compared to existing algorithms.
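
    A common way to realize such an MCDM trade-off is a weighted sum over normalized criteria; the sketch below is a generic illustration of that step (criteria, weights and numbers are invented, and the paper's actual scheme may differ):

      def select_resource(resources, weights):
          """resources: name -> {'response_time': seconds, 'cost': $/hour}; lower is better for both."""
          def norm(vals):
              lo, hi = min(vals.values()), max(vals.values())
              return {k: 0.0 if hi == lo else (v - lo) / (hi - lo) for k, v in vals.items()}
          rt = norm({k: r['response_time'] for k, r in resources.items()})
          cost = norm({k: r['cost'] for k, r in resources.items()})
          scores = {k: weights['response_time'] * rt[k] + weights['cost'] * cost[k] for k in resources}
          return min(scores, key=scores.get)  # lowest weighted score wins

      print(select_resource(
          {'r1': {'response_time': 120, 'cost': 0.30},
           'r2': {'response_time': 200, 'cost': 0.10},
           'r3': {'response_time': 90,  'cost': 0.55}},
          weights={'response_time': 0.6, 'cost': 0.4}))  # -> 'r1'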

  11. Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.

    Science.gov (United States)

    Li, Min; Zhang, John Zenghui; Xia, Fei

    2016-04-12

    Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.

  12. Huge hepatocellular carcinoma with multiple intrahepatic metastases: An aggressive multimodal treatment.

    Science.gov (United States)

    Yasuda, Satoshi; Nomi, Takeo; Hokuto, Daisuke; Yamato, Ichiro; Obara, Shinsaku; Yamada, Takatsugu; Kanehiro, Hiromichi; Nakajima, Yoshiyuki

    2015-01-01

    Huge hepatocellular carcinoma (HCC) carries a potential risk of spontaneous rupture, which is a life-threatening complication with a high mortality rate. In addition, a large HCC is frequently accompanied by intrahepatic metastases. We describe the case of a 74-year-old woman with a huge, extrahepatically expanding HCC with multiple intrahepatic metastases who was treated by liver resection with repeated transcatheter arterial chemoembolization (TACE). To prevent tumor rupture or bleeding, we performed right hepatectomy. After the operation, TACE was applied to the multiple intrahepatic metastases in the remnant liver. Furthermore, the elevated protein induced by vitamin K absence (PIVKA-II) level decreased to within the normal range. Three months after the first TACE, computed tomography revealed several recurrences in the liver. TACE was applied a second and third time, and the tumors were well controlled. Although liver resection is occasionally performed for patients with huge HCC to avoid spontaneous tumor rupture, a surgical approach alone might not be sufficient for such advanced HCC. To achieve long-term survival, it is necessary to control the residual intrahepatic tumors. We could control multiple intrahepatic metastases with repeated TACE after hepatectomy. Multimodal treatment involving hepatectomy and TACE might be a good treatment strategy for patients with huge HCC with multiple intrahepatic metastases if the tumors are localized in the liver without distant or peritoneal metastasis. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. A Huge Subcutaneous Hematoma in an Adult with Kasabach-Merritt Syndrome.

    Science.gov (United States)

    Wu, Kuan-Lin; Liao, Chiung-Ying; Chang, Chen-Kuang; Ho, Shang-Yun; Tyan, Yeu-Sheng; Huang, Yuan-Chun

    2017-06-19

    BACKGROUND Kasabach-Merritt syndrome is a potentially fatal disease that consists of hemangioma(s) with thrombocytopenia, microangiopathic hemolytic anemia, and coagulopathy. Extensive hemangiomatosis is rare. We present the radiological features and treatment strategy of a young adult suffering from Kasabach-Merritt syndrome with widespread hemangiomas and an infected huge hematoma in the right thigh. CASE REPORT A 33-year-old Taiwanese male presented with a painful 20-cm mass over his right thigh and gross hematuria for 2 days. Hemangiomatosis had been proven by biopsy in infancy, and the patient was under regular follow-up. Physical examination revealed normal heart rate, respiratory rate, and body temperature. Multiple palpable lumps with brown and purple areas of skin over the neck, trunk, and right thigh were noted. Laboratory examinations revealed thrombocytopenia, anemia, and elevated fibrin degradation products. There were no signs of sepsis. Blood transfusion and steroid therapy were carried out. Computed tomography showed a huge complicated subcutaneous hematoma in the right thigh. Drainage of the huge hematoma was performed and antibiotics were prescribed. After the local infection in the right thigh and the bleeding tendency were controlled, the patient was discharged in a stable condition two weeks later. CONCLUSIONS A huge infected hematoma and widespread hemangiomas are extremely rare complications of Kasabach-Merritt syndrome. There are no known treatment guidelines currently available. Our patient was successfully treated with steroids, drainage, and antibiotics.

  14. How to Make the Best Use of Limited Computer Resources in French Primary Schools.

    Science.gov (United States)

    Parmentier, Christophe

    1988-01-01

    Discusses computer science developments in French primary schools and describes strategies for using computers in the classroom most efficiently. Highlights include the use of computer networks; software; artificial intelligence and expert systems; computer-assisted learning (CAL) and intelligent CAL; computer peripherals; simulation; and teaching…

  15. A Practitioner Model of the Use of Computer-Based Tools and Resources to Support Mathematics Teaching and Learning.

    Science.gov (United States)

    Ruthven, Kenneth; Hennessy, Sara

    2002-01-01

    Analyzes the pedagogical ideas underpinning teachers' accounts of the successful use of computer-based tools and resources to support the teaching and learning of mathematics. Organizes central themes to form a pedagogical model capable of informing the use of such technologies in classroom teaching and generating theoretical conjectures for…

  16. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    Science.gov (United States)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States; it is based on Internet data centers and provides a standard, open approach to sharing network services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of the actual needs for teaching resources. Therefore, cloud computing, which uses Internet technology to provide shared resources, has become an important means of sharing digital education in current higher education. Based on the cloud computing environment, this paper analyzes the existing problems in the sharing of digital educational resources among the independent colleges of Jiangxi Province. According to the characteristics of cloud computing, namely mass storage, efficient operation and low cost, the author explores the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the sharing model is put into practical application.

  17. Data Resources for the Computer-Guided Discovery of Bioactive Natural Products.

    Science.gov (United States)

    Chen, Ya; de Bruyn Kops, Christina; Kirchmair, Johannes

    2017-08-30

    Natural products from plants, animals, marine life, fungi, bacteria, and other organisms are an important resource for modern drug discovery. Their biological relevance and structural diversity make natural products good starting points for drug design. Natural product-based drug discovery can benefit greatly from computational approaches, which are a valuable precursor or supplementary method to in vitro testing. We present an overview of 25 virtual and 31 physical natural product libraries that are useful for applications in cheminformatics, in particular virtual screening. The overview includes detailed information about each library, the extent of its structural information, and the overlap between different sources of natural products. In terms of chemical structures, there is a large overlap between freely available and commercial virtual natural product libraries. Of particular interest for drug discovery is that at least ten percent of known natural products are readily purchasable and many more natural products and derivatives are available through on-demand sourcing, extraction and synthesis services. Many of the readily purchasable natural products are of small size and hence of relevance to fragment-based drug discovery. There are also an increasing number of macrocyclic natural products and derivatives becoming available for screening.

  18. Disposal of waste computer hard disk drive: data destruction and resources recycling.

    Science.gov (United States)

    Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming

    2013-06-01

    An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information that is closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on the waste hard disk drives and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operation points of the degaussing determined were: (1) keep the platter plate parallel with the magnetic field direction; and (2) the enlargement of magnetic field intensity B and action time t can lead to a significant upgrade in the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drives housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.

  19. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    Directory of Open Access Journals (Sweden)

    Guohua Fang

    2016-09-01

    Full Text Available To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the National Economic Production Department. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP). However, waste water may be effectively controlled. Also, this study demonstrates that along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from a situation of heavy pollution to one of light pollution which is beneficial to the sustainable development of the economy and the protection of the environment.

  20. Single-Path Sigma from a Huge Dataset in Taiwan

    Science.gov (United States)

    Sung, Chih-Hsuan; Lee, Chyi-Tyi

    2014-05-01

    Ground-motion variability, which is used in probabilistic seismic hazard analysis (PSHA) to compute the annual exceedance probability, is composed of random variability (aleatory uncertainty) and model uncertainty (epistemic uncertainty). Finding the random variability of ground motions has become an important issue in PSHA, as only the random variability should be used in deriving the annual exceedance probability of ground motion; epistemic uncertainty is put in the logic tree to estimate the total uncertainty of ground motion. In the present study, we used 18,859 records from 158 shallow earthquakes (Mw > 3.0, focal depth ≤ 35 km, each station has at least 20 records) from the Taiwan Strong-Motion Instrumentation Program (TSMIP) network to analyse the random variability of ground motion. First, a new ground-motion attenuation model was established using this huge data set. Second, the residuals from the median attenuation were analysed by direct observation of inter-event variability and site-specific variability. Finally, the single-path variability was found by a moving-window method on either single-earthquake residuals or single-station residuals. A variogram method was also used to find the minimum variability for intra-event residuals and inter-event residuals, respectively. The results reveal that 90% of the single-path sigmas σSP range from 0.219 to 0.254 (in ln units) and are 58% to 64% smaller than the total sigma (σT = 0.601). The single-site sigma (σSS) is also 39%-43% smaller. If we use only random variability (single-path sigma) in PSHA, then the resultant hazard level would be 28% and 25% lower than the traditional one (using total sigma) for the 475-year and 2475-year return periods, respectively, in Taipei.
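
    For orientation, the usual residual decomposition behind these sigma components can be sketched in a few lines of pandas; this is a simplified grouping illustration (event terms, station terms, and the remainder), not the moving-window or variogram procedure used in the study:

      import numpy as np
      import pandas as pd

      def sigma_components(df):
          """df columns: 'event', 'station', 'resid' (ln residuals relative to a GMPE median)."""
          event_term = df.groupby('event')['resid'].transform('mean')      # between-event part
          within = df['resid'] - event_term                                 # within-event residuals
          station_term = within.groupby(df['station']).transform('mean')    # site (station) term
          remainder = within - station_term                                  # path-like leftover variability
          return {'tau': float(np.std(event_term.groupby(df['event']).first())),
                  'phi': float(np.std(within)),
                  'phi_ss': float(np.std(remainder))}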

  1. Integrating GRID Tools to Build a Computing Resource Broker: Activities of DataGrid WP1

    Institute of Scientific and Technical Information of China (English)

    C. Anglano; S. Barale; et al.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, and have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). We describe the DataGrid approach in integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy.

  2. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); Zwahlen, Daniel [Kantonsspital Graubuenden, Department of Radiotherapy, Chur (Switzerland); Bodis, Stephan [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); University Hospital Zurich, Department of Radiation Oncology, Zurich (Switzerland)

    2016-09-15

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey, and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland. (orig.)
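    The arithmetic behind such projections can be sketched as follows. The benchmark ratios below (patients per teletherapy unit, per radiation oncologist, per medical physicist, and RTTs per machine) are illustrative assumptions in the spirit of the QUARTS/IAEA guidelines, not the values used in the study, so the resulting counts are only indicative.

```python
# Hypothetical staffing projection following the general QUARTS/IAEA logic:
# patients needing radiotherapy = incidence x utilisation rate, then divide
# by assumed workload benchmarks. All benchmark numbers here are placeholders.

def radiotherapy_requirements(cancer_incidence, rtu_rate,
                              patients_per_trt=450,
                              patients_per_ro=250,
                              patients_per_mp=500,
                              rtt_per_trt=4):
    """Return required teletherapy units (TRT), radiation oncologists (RO),
    medical physicists (MP) and radiotherapy technologists (RTT)."""
    rt_patients = cancer_incidence * rtu_rate
    trt = rt_patients / patients_per_trt
    ro = rt_patients / patients_per_ro
    mp = rt_patients / patients_per_mp
    rtt = trt * rtt_per_trt
    return {"RT patients": rt_patients, "TRT": trt, "RO": ro, "MP": mp, "RTT": rtt}

# Figures from the abstract: ~50,427 incident cancers expected in 2020, of
# which ~34,041 require radiotherapy, i.e. an overall utilisation of ~0.675.
needs_2020 = radiotherapy_requirements(cancer_incidence=50_427,
                                       rtu_rate=34_041 / 50_427)
for key, value in needs_2020.items():
    print(f"{key}: {value:.0f}")
```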

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  4. Computation of groundwater resources and recharge in Chithar River Basin, South India.

    Science.gov (United States)

    Subramani, T; Babu, Savithri; Elango, L

    2013-01-01

    Groundwater recharge and available groundwater resources in the Chithar River basin, Tamil Nadu, India, spread over an area of 1,722 km², have been estimated by considering various hydrological, geological, and hydrogeological parameters, such as rainfall infiltration, drainage, geomorphic units, land use, rock types, depth of weathered and fractured zones, nature of soil, water level fluctuation, saturated thickness of aquifer, and groundwater abstraction. The digital elevation models indicate that the regional slope of the basin is towards the east. The Proterozoic (Post-Archaean) basement of the study area consists of quartzite, calc-granulite, crystalline limestone, charnockite, and biotite gneiss with or without garnet. Three major soil types were identified, namely black cotton, deep red, and red sandy soils. The rainfall intensity gradually decreases from west to east. Groundwater occurs under water table conditions in the weathered zone and fluctuates between 0 and 25 m. The water table reaches its maximum in January, after the northeast monsoon, and its minimum in October. Groundwater abstraction for domestic/stock and irrigational needs in the Chithar River basin has been estimated as 148.84 MCM (million m³). Groundwater recharge due to monsoon rainfall infiltration has been estimated as 170.05 MCM based on the water level rise during the monsoon period. It is also estimated as 173.9 MCM using a rainfall infiltration factor. An amount of 53.8 MCM of water is contributed to groundwater from surface water bodies. Recharge of groundwater due to return flow from irrigation has been computed as 147.6 MCM. The static groundwater reserve in the Chithar River basin is estimated as 466.66 MCM and the dynamic reserve is about 187.7 MCM. In the present scenario, the aquifer is under safe conditions for extraction of groundwater for domestic and irrigation purposes. If the existing water bodies are maintained properly, the extraction rate can be increased by about 10% to 15% in the future.
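    The two recharge estimates quoted above are typically obtained with the water table fluctuation and rainfall infiltration factor methods. A minimal sketch is given below; the specific yield, water level rise, rainfall and infiltration factor are illustrative assumptions chosen only so that the outputs land near the abstract's figures.

```python
# Two standard recharge estimates used in studies of this kind. The specific
# yield and infiltration factor below are illustrative assumptions, not the
# calibrated values of the Chithar basin study.

BASIN_AREA_KM2 = 1722.0          # basin area from the abstract
M3_PER_MCM = 1e6                 # 1 MCM = one million cubic metres

def recharge_wtf(area_km2, water_level_rise_m, specific_yield):
    """Water table fluctuation method: recharge = area x rise x specific yield."""
    area_m2 = area_km2 * 1e6
    return area_m2 * water_level_rise_m * specific_yield / M3_PER_MCM

def recharge_rif(area_km2, rainfall_mm, infiltration_factor):
    """Rainfall infiltration factor method: recharge = area x rainfall x factor."""
    area_m2 = area_km2 * 1e6
    rainfall_m = rainfall_mm / 1000.0
    return area_m2 * rainfall_m * infiltration_factor / M3_PER_MCM

print(recharge_wtf(BASIN_AREA_KM2, water_level_rise_m=3.3, specific_yield=0.03))   # ~170 MCM
print(recharge_rif(BASIN_AREA_KM2, rainfall_mm=920, infiltration_factor=0.11))     # ~174 MCM
```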

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  6. Anaesthetic management in a case of huge plunging ranula.

    Science.gov (United States)

    Sheet, Jagabandhu; Mandal, Anamitra; Sengupta, Swapnadeep; Jana, Debaleena; Mukherji, Sudakshina; Swaika, Sarbari

    2014-01-01

    Plunging ranula is a rare form of mucous retention cyst arising from the submandibular and sublingual salivary glands, which may occasionally become huge, occupying the whole floor of the mouth and extending into the neck, thus restricting neck movement as well as distorting the normal airway anatomy. Without fiberoptic assistance, blind or retrograde nasal intubation remain valuable choices in this type of situation. Here, we present a case of successful management of the airway by blind nasal intubation in a patient posted for excision of a huge plunging ranula.

  7. Huge Gastric Teratoma in an 8-Year Old Boy.

    Science.gov (United States)

    Sisodiya, Rajpal S; Ratan, Simmi K; Man, Parveen K

    2016-01-01

    Gastric teratoma is a very rare tumor that usually presents in early infancy. An 8-year-old boy presented with a huge mass in the abdomen extending from the epigastrium to the pelvis. Ultrasound and CT scan of the abdomen revealed a huge mass with solid and cystic components and internal calcifications. The preoperative diagnosis was a teratoma, but not specifically a gastric one. At operation, it was found to be a gastric teratoma. The mass was excised completely with part of the stomach wall. The histopathology confirmed it to be a mature gastric teratoma. The rarity of the teratoma and its delayed presentation prompted us to report the case.

  8. Anaesthetic challenges in a patient presenting with huge neck teratoma

    Directory of Open Access Journals (Sweden)

    Gaurav Jain

    2013-01-01

    Full Text Available Paediatric airway management is a great challenge even for an experienced anaesthesiologist. A difficult airway in a huge cervical teratoma further compounds the complexity. This case report is intended to describe the intubation difficulties that were confronted during the airway management of a three-year-old girl presenting with a huge neck teratoma and respiratory distress. The patient was successfully intubated with an uncuffed endotracheal tube on the second attempt under inhalational anaesthesia with halothane and spontaneous ventilation. This case exemplifies the importance of a careful preoperative workup of an anticipated difficult airway in paediatric patients with neck swelling to minimize any perioperative complications.

  9. Anaesthetic management in a case of huge plunging ranula

    Science.gov (United States)

    Sheet, Jagabandhu; Mandal, Anamitra; Sengupta, Swapnadeep; Jana, Debaleena; Mukherji, Sudakshina; Swaika, Sarbari

    2014-01-01

    Plunging ranula is a rare form of mucous retention cyst arising from the submandibular and sublingual salivary glands, which may occasionally become huge, occupying the whole floor of the mouth and extending into the neck, thus restricting neck movement as well as distorting the normal airway anatomy. Without fiberoptic assistance, blind or retrograde nasal intubation remain valuable choices in this type of situation. Here, we present a case of successful management of the airway by blind nasal intubation in a patient posted for excision of a huge plunging ranula. PMID:25886120

  10. Editorial: Special issue on resources for the computer security and information assurance curriculum: Issue 1. Curriculum Editorial Comments, Volume 1 and Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Frincke, Deb; Ouderkirk, Steven J.; Popovsky, Barbara

    2006-12-28

    This is a pair of articles to be used as the cover editorials for a special edition of the Journal of Educational Resources in Computing (JERIC) Special Edition on Resources for the Computer Security and Information Assurance Curriculum, volumes 1 and 2.

  11. Performances Evaluation of a Novel Hadoop and Spark Based System of Image Retrieval for Huge Collections

    Directory of Open Access Journals (Sweden)

    Luca Costantini

    2015-01-01

    Full Text Available A novel system of image retrieval, based on Hadoop and Spark, is presented. Managing and extracting information from Big Data is a challenging and fundamental task. For these reasons, the system is scalable and it is designed to be able to manage small collections of images as well as huge collections of images. Hadoop and Spark are based on the MapReduce framework, but they have different characteristics. The proposed system is designed to take advantage of these two technologies. The performance of the proposed system is evaluated and analysed in terms of computational cost in order to understand in which contexts it could be successfully used. The experimental results show that the proposed system is efficient for both small and huge collections.
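    As a rough illustration of the Spark side of such a system, the sketch below performs distance-based retrieval over precomputed image descriptors with PySpark. Feature extraction and the Hadoop/HDFS storage layer are outside the snippet, the descriptors and image identifiers are made up, and this is not the authors' implementation.

```python
from pyspark.sql import SparkSession
import numpy as np

spark = SparkSession.builder.appName("image-retrieval-sketch").getOrCreate()
sc = spark.sparkContext

# Hypothetical collection: (image_id, feature vector) pairs. In a real system
# the descriptors would be read from HDFS rather than hard-coded here.
collection = [
    ("img001", np.array([0.1, 0.8, 0.3])),
    ("img002", np.array([0.9, 0.2, 0.4])),
    ("img003", np.array([0.2, 0.7, 0.5])),
]
rdd = sc.parallelize(collection)

query = np.array([0.15, 0.75, 0.35])
bcast_query = sc.broadcast(query)   # ship the query descriptor to all workers

# Map: compute the Euclidean distance of every image to the query in parallel;
# takeOrdered returns the k closest matches.
top_k = (rdd.map(lambda kv: (kv[0], float(np.linalg.norm(kv[1] - bcast_query.value))))
            .takeOrdered(2, key=lambda kv: kv[1]))
print(top_k)

spark.stop()
```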

  12. The huge Package for High-dimensional Undirected Graph Estimation in R

    Science.gov (United States)

    Zhao, Tuo; Liu, Han; Roeder, Kathryn; Lafferty, John; Wasserman, Larry

    2015-01-01

    We describe an R package named huge which provides easy-to-use functions for estimating high dimensional undirected graphs from data. This package implements recent results in the literature, including Friedman et al. (2007), Liu et al. (2009, 2012) and Liu et al. (2010). Compared with the existing graph estimation package glasso, the huge package provides extra features: (1) instead of using Fortran, it is written in C, which makes the code more portable and easier to modify; (2) besides fitting Gaussian graphical models, it also provides functions for fitting high dimensional semiparametric Gaussian copula models; (3) it offers more functions, such as data-dependent model selection, data generation and graph visualization; (4) a minor convergence problem of the graphical lasso algorithm is corrected; (5) the package allows the user to apply both lossless and lossy screening rules to scale up large-scale problems, making a tradeoff between computational and statistical efficiency. PMID:26834510
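    The huge package itself is an R library. As a rough Python analogue (a swapped-in technique, not the package), scikit-learn's GraphicalLassoCV estimates a sparse precision matrix whose non-zero off-diagonal entries define the estimated undirected graph:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)

# Simulate 200 samples of a 5-dimensional Gaussian with a known sparse
# dependence structure (variable 0 drives variables 1 and 2).
n = 200
x0 = rng.normal(size=n)
data = np.column_stack([
    x0,
    0.8 * x0 + 0.3 * rng.normal(size=n),
    -0.6 * x0 + 0.3 * rng.normal(size=n),
    rng.normal(size=n),
    rng.normal(size=n),
])

model = GraphicalLassoCV().fit(data)

# Non-zero off-diagonal entries of the precision matrix define graph edges.
precision = model.precision_
adjacency = (np.abs(precision) > 1e-3) & ~np.eye(precision.shape[0], dtype=bool)
print(adjacency.astype(int))
```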

  13. [Resection of a Huge Gastrointestinal Stromal Tumor of the Stomach Following Neoadjuvant Chemotherapy with Imatinib].

    Science.gov (United States)

    Sato, Yoshihiro; Karasawa, Hideaki; Aoki, Takeshi; Imoto, Hirofumi; Tanaka, Naoki; Watanabe, Kazuhiro; Abe, Tomoya; Nagao, Munenori; Ohnuma, Shinobu; Musha, Hiroaki; Takahashi, Masanobu; Motoi, Fuyuhiko; Naitoh, Takeshi; Ishioka, Chikashi; Unno, Michiaki

    2016-11-01

    We report a case of a huge gastric gastrointestinal stromal tumor (GIST) that was safely resected following preoperative imatinib therapy. A 72-year-old woman was hospitalized with severe abdominal distension. Computed tomography revealed a 27×17 cm tumor in the left upper abdominal cavity. The patient was diagnosed with a high-risk GIST by EUS-FNA. We initiated preoperative (neoadjuvant) chemotherapy with imatinib to achieve a reduction in operative risks and functional preservation. After 6 months of chemotherapy, CT showed a reduction in the tumor size and the patient underwent partial gastrectomy and partial resection of the diaphragm. Histologically, most of the tumor cells were replaced by hyalinized collagen and viable cells were scattered only around the blood vessels. Neoadjuvant chemotherapy with imatinib has the potential to become an important therapeutic option for the treatment of huge GISTs.

  14. A New Approach for a Better Load Balancing and a Better Distribution of Resources in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Abdellah IDRISSI

    2015-10-01

    Full Text Available Cloud computing is a new paradigm where data and services of Information Technology are provided via the Internet using remote servers. It represents a new way of delivering computing resources, allowing access to the network on demand. Cloud computing consists of several services, each of which can hold several tasks. As the problem of scheduling tasks is an NP-complete problem, task management can be an important element in the technology of cloud computing. To optimize the performance of virtual machines hosted in cloud computing, several task-scheduling algorithms have been proposed. In this paper, we present an approach that solves the problem optimally while taking into account QoS constraints based on the different user requests. This technique, based on the Branch and Bound algorithm, assigns tasks to different virtual machines while ensuring load balance and a better distribution of resources. The experimental results show that our approach gives very promising results for effective task planning.
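    A minimal branch-and-bound sketch for the underlying assignment problem is shown below: tasks are assigned to virtual machines so that the makespan (the load of the busiest machine) is minimised, and partial assignments whose makespan already exceeds the best known solution are pruned. The QoS constraints and cost model of the paper are not reproduced, and the task costs are made up.

```python
def branch_and_bound_schedule(task_costs, n_vms):
    """Assign each task to a VM, minimising the maximum VM load (makespan)."""
    best = {"makespan": float("inf"), "assignment": None}
    loads = [0.0] * n_vms

    def recurse(i, assignment):
        if i == len(task_costs):
            # All tasks placed and every partial bound passed: record new best.
            best["makespan"] = max(loads)
            best["assignment"] = assignment[:]
            return
        for vm in range(n_vms):
            loads[vm] += task_costs[i]
            # Bound: prune branches whose partial makespan already exceeds the best.
            if max(loads) < best["makespan"]:
                assignment.append(vm)
                recurse(i + 1, assignment)
                assignment.pop()
            loads[vm] -= task_costs[i]

    recurse(0, [])
    return best["makespan"], best["assignment"]

tasks = [4.0, 2.5, 7.0, 1.5, 3.0, 5.0]      # hypothetical task execution costs
makespan, assignment = branch_and_bound_schedule(tasks, n_vms=3)
print(f"makespan={makespan}, assignment={assignment}")
```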

  15. Computer Resources for Schools: Notes for Teachers and Students. [Educational Activities Kit.

    Science.gov (United States)

    Computer Museum, Boston, MA.

    This kit features an introduction to the Computer Museum, a history of computer technology, and notes on how a computer works including hardware and software. A total of 20 exhibits are described with brief questions for use as a preview of the exhibit or as ideas for post-visit discussions. There are 24 classroom activities about the history and…

  16. Dynamic scheduling model of computing resource based on MAS cooperation mechanism

    Institute of Scientific and Technical Information of China (English)

    JIANG WeiJin; ZHANG LianMei; WANG Pu

    2009-01-01

    Allocation of grid resources aims at improving resource utility and grid application performance. Currently, the algorithms proposed for this purpose do not fit well the autonomic, dynamic, distributed and heterogeneous features of the grid environment. Based on the MAS (multi-agent system) cooperation mechanism and market bidding game rules, a model of grid resource allocation based on a market economy is introduced to reveal the relationship between supply and demand. This model makes good use of the learning and negotiating ability of the consumers' agents and takes full consideration of consumer behaviour, thus rendering the consumers' requests for and allocation of resources rational and valid. In the meantime, the utility function of the consumer is given; the existence and uniqueness of the Nash equilibrium point in the resource allocation game and the Nash equilibrium solution are discussed. A dynamic game algorithm for allocating grid resources is designed. Experimental results demonstrate that this algorithm effectively diminishes unnecessary latency and significantly improves the smoothness of response time, the throughput ratio and resource utility, thus rendering the supply and demand of the whole grid resource reasonable and the overall grid load balanced.
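    The market-bidding idea can be illustrated with a simple proportional-share game, a crude stand-in for the model described above: each consumer agent repeatedly picks the bid that maximises its own utility given the others' bids, and the resource capacity is divided in proportion to the bids. The utility function, weights and capacity below are illustrative assumptions.

```python
import numpy as np

def best_response(weight, others_total, capacity, bid_grid):
    """Pick the bid that maximises this agent's utility given the others' bids."""
    alloc = capacity * bid_grid / (bid_grid + others_total)
    utility = weight * np.log1p(alloc) - bid_grid   # assumed log utility minus payment
    return bid_grid[np.argmax(utility)]

def bidding_game(weights, capacity=100.0, rounds=50):
    bids = np.ones(len(weights))
    grid = np.linspace(0.01, 50.0, 5000)
    for _ in range(rounds):
        # Iterated best responses; the fixed point approximates a Nash equilibrium.
        for i, w in enumerate(weights):
            others = bids.sum() - bids[i]
            bids[i] = best_response(w, others, capacity, grid)
    alloc = capacity * bids / bids.sum()
    return bids, alloc

weights = [4.0, 2.0, 1.0]     # hypothetical valuations of three consumer agents
bids, alloc = bidding_game(weights)
print("bids:", np.round(bids, 2), "allocation:", np.round(alloc, 2))
```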

  17. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Bigeleisen, Jacob; Berne, Bruce J.; Cotton, F. Albert; Scheraga, Harold A.; Simmons, Howard E.; Snyder, Lawrence C.; Wiberg, Kenneth B.; Wipke, W. Todd

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered.

  18. Imaging of huge lingual thyroid gland with goitre

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C.C.; Chen, C.Y.; Chen, F.H.; Lee, G.W.; Hsiao, H.S. [Nat. Defense Medical Centre, Taipei (Taiwan, Province of China). Dept. of Diagnostic Radiol.; Zimmermann, R.A. [Department of Radiology, The Children's Hospital of Philadelphia, 34th St. and Civic Blvd., Philadelphia, PA 19014 (United States)

    1998-05-01

    We present the CT and MRI findings in a 75-year-old woman with a huge pathologically proven lingual thyroid which underwent goitrous degeneration. CT and MRI showed a midline, tongue-based, exophytic mass with areas of necrosis and heterogeneous contrast enhancement, as seen in large goitres in the normal thyroid gland. (orig.) With 1 fig., 7 refs.

  19. Smart Cities as Support and Legacy of Huge Sport Events

    Directory of Open Access Journals (Sweden)

    TAURION, C.

    2012-12-01

    Full Text Available In this paper we discuss the concept of a smart city and the importance of huge sport events as an incentive for the creation of the infrastructure necessary for the development of cities that provide quality of life for all their citizens using information technology.

  20. The big, large and huge case of state-building

    DEFF Research Database (Denmark)

    Harste, Gorm

      Using communication theory as a point of departure, it is not evident how to study macro phenomena. Michel Foucault limited his studies to a non-Grand Theory when studying discursive events. At the same time, Charles Tilly wrote about Big Structures, Large Processes, Huge Comparisons when trying...

  1. [Experience of surgical treatment of huge mediastinal tumors].

    Science.gov (United States)

    Li, Yuanbo; Zhang, Yi; Xu, Qingsheng; Su, Lei; Zhi, Xiuyi; Wang, Ruotian; Qian, Kun; Hu, Mu; Liu, Lei

    2014-09-23

    The diagnosis and surgical treatment of 36 huge mediastinal tumors were summarized in order to evaluate the effectiveness and safety of the operation. Thirty-six patients with huge mediastinal tumors treated in our department from June 2006 to June 2013 were retrospectively analyzed, and their clinical manifestations, diagnoses, surgical treatments and prognoses were carefully collected. Twenty-three cases were men and 13 were women. The average age was 39.2 years. The pathology turned out to be benign in 23 cases and malignant in 13 cases. Complete resection was achieved in 34 cases and palliative resection in 2 cases, with no perioperative death. Six cases developed postoperative complications but all recovered after active treatment. Patients who had been diagnosed with benign tumors were all alive after follow-up periods of 6 months to 7 years. Nine malignant tumor patients developed recurrence or metastasis, including seven deaths. Surgery played a vital role in the diagnosis and treatment of huge mediastinal tumors. Preoperative diagnosis, an accurate surgical approach and careful operation were the keys to successful treatment. Benign huge mediastinal tumors had an excellent prognosis with surgery.

  2. A Huge Ovarian Dermoid Cyst: Successful Laparoscopic Total Excision.

    Science.gov (United States)

    Uyanikoglu, Hacer; Dusak, Abdurrahim

    2017-08-01

    Giant ovarian cysts, ≥15 cm in diameter, are quite rare in women of reproductive age. Here, we present a case of an ovarian cyst with an unusual presentation treated by laparoscopic surgery. On histology, the mass was found to be a mature cystic teratoma. The diagnostic and management challenges posed by this huge ovarian cyst are discussed in the light of the literature.

  3. Resource discovery algorithm based on hierarchical model and Conscious search in Grid computing system

    Directory of Open Access Journals (Sweden)

    Nasim Nickbakhsh

    2017-03-01

    Full Text Available The distributed Grid system shares non-homogeneous resources on a vast scale in a dynamic manner. The way resources are discovered strongly influences the efficiency and quality of the system's functioning. The "Bitmap" model is based on a hierarchical and conscious search model that produces less traffic and a lower number of messages than other methods. The method proposed here builds on the hierarchical and conscious search model and enhances the Bitmap method with the objectives of reducing traffic, reducing the load of resource management processing, reducing the number of messages generated by resource discovery and increasing the resource discovery speed. The proposed method and the Bitmap method are simulated with the Arena tool. The proposed model is abbreviated as RNTL.
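    A minimal sketch of bitmap-based, hierarchy-aware discovery is given below: resource types are encoded as bits, parent nodes aggregate their children's bitmaps with OR, and a query descends only into subtrees whose aggregated bitmap covers all requested bits. The RNTL refinements of the paper are not reproduced, and the resource types and site names are made up.

```python
# Resource types encoded as bit positions (illustrative).
RESOURCE_BITS = {"cpu": 1 << 0, "gpu": 1 << 1, "big_mem": 1 << 2, "fast_disk": 1 << 3}

class Node:
    def __init__(self, name, resources=(), children=()):
        self.name = name
        self.children = list(children)
        own = 0
        for r in resources:
            own |= RESOURCE_BITS[r]
        # Aggregated bitmap: own resources OR-ed with all descendants' bitmaps.
        self.bitmap = own
        for child in self.children:
            self.bitmap |= child.bitmap
        self.own = own

def discover(node, query_bits, hits):
    """Descend only into subtrees whose aggregated bitmap covers the query."""
    if node.bitmap & query_bits != query_bits:
        return                      # prune: this subtree cannot satisfy the query
    if node.own & query_bits == query_bits:
        hits.append(node.name)      # this node itself offers all requested resources
    for child in node.children:
        discover(child, query_bits, hits)

leaf1 = Node("site-A", ["cpu", "gpu"])
leaf2 = Node("site-B", ["cpu", "big_mem"])
leaf3 = Node("site-C", ["cpu", "gpu", "fast_disk"])
root = Node("root", children=[Node("region-1", children=[leaf1, leaf2]),
                              Node("region-2", children=[leaf3])])

query = RESOURCE_BITS["cpu"] | RESOURCE_BITS["gpu"]
found = []
discover(root, query, found)
print(found)   # ['site-A', 'site-C']
```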

  4. A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.

    Science.gov (United States)

    Moretti, Loris; Sartori, Luca

    2016-10-01

    Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects are covered, from general layout to technical details. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Huge Intramural Hematoma in a Thrombosed Middle Cerebral Artery Aneurysm: A Case Report.

    Science.gov (United States)

    Kim, Hak Jin; Lee, Sang Won; Lee, Tae Hong; Kim, Young Soo

    2015-09-01

    We describe a case of a huge intramural hematoma in a thrombosed middle cerebral artery aneurysm. A 47-year-old female patient with liver cirrhosis and thrombocytopenia presented to the neurosurgical unit with a 5-day history of headache and cognitive dysfunction. Magnetic resonance imaging and computed tomography of the brain showed a thrombosed aneurysm located in the right middle cerebral artery with a posteriorly located huge intramural hematoma mimicking an intracerebral hematoma. Imaging studies and cerebrospinal fluid analysis showed no evidence of subarachnoid hemorrhage. Angiography showed a partially thrombosed aneurysm at the origin of the right anterior temporal artery and an incidental aneurysm at the bifurcation of the right middle cerebral artery. Both aneurysms were embolized by coiling. After embolization, the thrombosed aneurysmal sac and intramural hematoma had decreased in size 4 days later and almost completely disappeared 8 months later. This is the first reported case of a nondissecting, nonfusiform aneurysm with a huge intramural hematoma, unlike that of a dissecting aneurysm.

  6. Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

    2012-07-14

    The UCoMS research cluster has spearheaded three research areas since August 2004: wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts of pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made cooperatively by research investigators in their respective areas of expertise on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  8. University Students and Ethics of Computer Technology Usage: Human Resource Development

    Science.gov (United States)

    Iyadat, Waleed; Iyadat, Yousef; Ashour, Rateb; Khasawneh, Samer

    2012-01-01

    The primary purpose of this study was to determine the level of students' awareness about computer technology ethics at the Hashemite University in Jordan. A total of 180 university students participated in the study by completing the questionnaire designed by the researchers, named the Computer Technology Ethics Questionnaire (CTEQ). Results…

  9. Using Free Computational Resources to Illustrate the Drug Design Process in an Undergraduate Medicinal Chemistry Course

    Science.gov (United States)

    Rodrigues, Ricardo P.; Andrade, Saulo F.; Mantoani, Susimaire P.; Eifler-Lima, Vera L.; Silva, Vinicius B.; Kawano, Daniel F.

    2015-01-01

    Advances in, and dissemination of, computer technologies in the field of drug research now enable the use of molecular modeling tools to teach important concepts of drug design to chemistry and pharmacy students. A series of computer laboratories is described to introduce undergraduate students to commonly adopted "in silico" drug design…

  10. University Students and Ethics of Computer Technology Usage: Human Resource Development

    Science.gov (United States)

    Iyadat, Waleed; Iyadat, Yousef; Ashour, Rateb; Khasawneh, Samer

    2012-01-01

    The primary purpose of this study was to determine the level of students' awareness about computer technology ethics at the Hashemite University in Jordan. A total of 180 university students participated in the study by completing the questionnaire designed by the researchers, named the Computer Technology Ethics Questionnaire (CTEQ). Results…

  11. Huge uterine-cervical diverticulum mimicking as a cyst

    Directory of Open Access Journals (Sweden)

    S Chufal

    2012-01-01

    Full Text Available Here we report an incidental huge uterine-cervical diverticulum from a total abdominal hysterectomy specimen in a perimenopausal woman who presented with acute abdominal pain. The diverticulum mimicked various cysts occurring on the lateral side of the female genital tract. Histopathological examination confirmed it to be a cervical diverticulum communicating with the uterine cavity through two different openings. Such diverticula can attain a huge size if ignored for a long duration and, because of their extreme rarity, present a diagnostic challenge to clinicians, radiologists and pathologists. Therefore, diverticula should also be included in the differential diagnosis. The histopathological confirmation also highlights that diverticula can present as an acute abdomen, requiring early diagnosis and appropriate, timely intervention. CD10 immunohistochemistry has also been used to differentiate it from a mesonephric cyst.

  12. Huge uterine-cervical diverticulum mimicking as a cyst.

    Science.gov (United States)

    Chufal, S; Thapliyal, Naveen; Gupta, Manoj; Pangtey, Nirmal

    2012-01-01

    Here we report an incidental huge uterine-cervical diverticulum from a total abdominal hysterectomy specimen in a perimenopausal woman who presented with acute abdominal pain. The diverticulum mimicked various cysts occurring on the lateral side of the female genital tract. Histopathological examination confirmed it to be a cervical diverticulum communicating with the uterine cavity through two different openings. Such diverticula can attain a huge size if ignored for a long duration and, because of their extreme rarity, present a diagnostic challenge to clinicians, radiologists and pathologists. Therefore, diverticula should also be included in the differential diagnosis. The histopathological confirmation also highlights that diverticula can present as an acute abdomen, requiring early diagnosis and appropriate, timely intervention. CD10 immunohistochemistry has also been used to differentiate it from a mesonephric cyst.

  13. A case of huge primary liposarcoma in the liver

    Institute of Scientific and Technical Information of China (English)

    Liang-Mou Kuo; Hong-Shiue Chou; Kun-Ming Chan; Ming-Chin Yu; Wei-Chen Lee

    2006-01-01

    Primary liver liposarcoma is a rare disease. Because of its rarity, knowledge of the clinical course, management, and prognosis of primary liver liposarcoma is limited for clinicians. A 61-year-old female patient who suffered from a huge primary liposarcoma in the central portion of the liver presented with fever, nausea, vomiting, jaundice, and body weight loss. The huge tumor was resected successfully. However, the tumor recurred repeatedly and she underwent repeated hepatectomies to remove it. The tumor became more aggressive after repeated surgeries. Eventually, the patient developed cervical spinal metastasis of the liposarcoma and survived for 26 months after the liver liposarcoma was diagnosed. Although the tumor may become more aggressive after repeated surgeries, repeated hepatectomies remain the best policy to achieve long-term survival for these patients.

  14. Black Hole Firewalls Require Huge Energy of Measurement

    CERN Document Server

    Hotta, Masahiro; Funo, Ken

    2013-01-01

    The unitary moving mirror model is one of the best quantum systems for checking the reasoning of the firewall paradox in quantum black holes. The reasoning of Almheiri et al. inevitably raises a firewall paradox in the model. We resolve this paradox from the viewpoint of the energy cost of quantum measurements. No firewall with a deadly, huge energy flux appears, as long as the energy for the measurement is much smaller than the ultraviolet cutoff scale.

  15. Severe microphthalmos with cyst and unusually huge dermolipoma.

    Science.gov (United States)

    Li, Weidong; Zhang, Ping; Chen, Qiwen; Ye, Xuelian; Li, Jianqun; Yan, Jianhua

    2015-03-01

    The purpose of this study was to report an unusual case of severe microphthalmos, together with an orbital cyst and huge ocular surface dermolipoma. This is a clinical report relating clinical features as well as imaging and histopathologic findings, along with surgical management of the patient. A 5-month-old Chinese male infant was referred, with 2 large masses in the left eye that were present since birth. Ocular examination results revealed a complete absence of any eye structures in the left orbit. In its place were 2 large masses between the left upper and lower palpebral fissure. One was a 3 × 3 × 2.5-cm spherical red tumor with a smooth surface. The other was a large solid spherical tumor, 4 × 4 × 5 cm, covered with normal skin located in the temporal region and attached to the red mass by a pedicle. Orbital magnetic resonance imaging examination findings confirmed that no eye structures were present in the left orbit. However, a cystic lesion was found in the left orbit, with a low signal on T1-weighted imaging and high signal on T2-weighted imaging, and another huge spherical heterogeneous mass was located "outside" the left orbit. Anterior orbitotomy by conjunctival incision was performed under general anesthesia. A spherical cystic mass of 1.5 × 1.5 × 1.6 cm, a small eye of 0.7 × 0.7 × 0.6 cm, and a huge dermolipoma were removed completely. Pathologic examination results confirmed the diagnosis of severe microphthalmos, together with orbital dermoid cyst and dermolipoma. This rare case demonstrates that severe microphthalmos with a cyst may be completely covered by conjunctiva and associated with an unusually huge dermolipoma.

  16. Huge plastic bezoar: a rare cause of gastrointestinal obstruction.

    Science.gov (United States)

    Yaka, Mbarek; Ehirchiou, Abdelkader; Alkandry, Tariq Tajdin Sifeddine; Sair, Khalid

    2015-01-01

    Bezoars are rare causes of gastrointestinal obstruction. Basically, they are of four types: trichobezoars, phytobezoars, pharmacobezoars, and lactobezoars. Some rare types of bezoars are also known. In this article a unique case of plastic bezoar is presented. We describe a girl aged 14 years who ingested large amounts of plastic material used for knitting chairs and charpoys. The conglomerate of plastic threads, entrapped food material and other debris formed a huge mass occupying the whole stomach and extending into the small bowel.

  17. A Huge Cystic Retroperitoneal Lymphangioma Presenting with Back Pain

    Science.gov (United States)

    Kubachev, Kubach; Abdullaev, Elbrus; Babyshin, Valentin; Neronov, Dmitriy; Abdullaev, Abakar

    2016-01-01

    Retroperitoneal lymphangioma is a rare type and location of benign abdominal tumor. The clinical presentation of this rare disease is nonspecific, ranging from abdominal distention to sepsis. Here we present a 73-year-old female patient with a 3-month history of back pain. USG and CT revealed a huge cystic mass, which was surgically excised and proved to be a lymphangioma on histopathology. PMID:27843456

  18. Huge Left Atrium Accompanied by Normally Functioning Prosthetic Valve.

    Science.gov (United States)

    Sabzi, Feridoun

    2015-01-01

    Giant left atria are defined as those measuring larger than 8 cm and are typically found in patients who have rheumatic mitral valve disease with severe regurgitation. Enlargement of the left atrium may create compression of the surrounding structures such as the esophagus, pulmonary veins, respiratory tract, lung, inferior vena cava, recurrent laryngeal nerve, and thoracic vertebrae and lead to dysphagia, respiratory dysfunction, peripheral edema, hoarse voice, or back pain. However, a huge left atrium is usually associated with rheumatic mitral valve disease and is very rare in a normally functioning prosthetic mitral valve, as was the case in our patient. A 46-year-old woman with a past medical history of mitral valve replacement and chronic atrial fibrillation was admitted to our hospital with a chief complaint of cough and shortness of breath, worsened in the last month. Physical examination showed elevated jugular venous pressure, respiratory distress, cardiac cachexia, heart failure, hepatomegaly, and severe edema in the legs. Chest radiography revealed an inconceivably huge cardiac silhouette. Transthoracic echocardiography demonstrated a huge left atrium, associated with thrombosis, and normal function of the prosthetic mitral valve. Cardiac surgery with left atrial exploration for the extraction of the huge thrombus and De Vega annuloplasty for tricuspid regurgitation were carried out. The postoperative course was complicated by right ventricular failure and low cardiac output syndrome; after two days, the patient expired with multiple organ failure. A thorough literature review showed that our case was the largest left atrium (20 × 22 cm) reported thus far in adults with normal prosthetic mitral valve function.

  19. Huge Left Atrium Accompanied by Normally Functioning Prosthetic Valve

    Directory of Open Access Journals (Sweden)

    Feridoun Sabzi

    2015-10-01

    Full Text Available Giant left atria are defined as those measuring larger than 8 cm and are typically found in patients who have rheumatic mitral valve disease with severe regurgitation. Enlargement of the left atrium may create compression of the surrounding structures such as the esophagus, pulmonary veins, respiratory tract, lung, inferior vena cava, recurrent laryngeal nerve, and thoracic vertebrae and lead to dysphagia, respiratory dysfunction, peripheral edema, hoarse voice, or back pain. However, a huge left atrium is usually associated with rheumatic mitral valve disease and is very rare in a normally functioning prosthetic mitral valve, as was the case in our patient. A 46-year-old woman with a past medical history of mitral valve replacement and chronic atrial fibrillation was admitted to our hospital with a chief complaint of cough and shortness of breath, worsened in the last month. Physical examination showed elevated jugular venous pressure, respiratory distress, cardiac cachexia, heart failure, hepatomegaly, and severe edema in the legs. Chest radiography revealed an inconceivably huge cardiac silhouette. Transthoracic echocardiography demonstrated a huge left atrium, associated with thrombosis, and normal function of the prosthetic mitral valve. Cardiac surgery with left atrial exploration for the extraction of the huge thrombus and De Vega annuloplasty for tricuspid regurgitation were carried out. The postoperative course was complicated by right ventricular failure and low cardiac output syndrome; after two days, the patient expired with multiple organ failure. A thorough literature review showed that our case was the largest left atrium (20 × 22 cm) reported thus far in adults with normal prosthetic mitral valve function.

  20. Huge Nevus Lipomatosus Cutaneous Superficialis on Back: An Unusual Presentation.

    Science.gov (United States)

    Das, Dipti; Das, Anupam; Bandyopadhyay, Debabrata; Kumar, Dhiraj

    2015-01-01

    Nevus lipomatosus cutaneous superficialis (NLCS) is a benign dermatosis, histologically characterized by the presence of mature ectopic adipocytes in the dermis. We hereby report a case of a 10-year-old boy who presented with multiple huge swellings on the scapular regions and lower back. The lesions were surmounted by small papules, along with a peau d'orange appearance in places. Histology showed features consistent with NLCS. The case is being reported for its unusual clinical presentation.

  1. Huge nevus lipomatosus cutaneous superficialis on back: An unusual presentation

    Directory of Open Access Journals (Sweden)

    Dipti Das

    2015-01-01

    Full Text Available Nevus lipomatosus cutaneous superficialis (NLCS) is a benign dermatosis, histologically characterized by the presence of mature ectopic adipocytes in the dermis. We hereby report a case of a 10-year-old boy who presented with multiple huge swellings on the scapular regions and lower back. The lesions were surmounted by small papules, along with a peau d'orange appearance in places. Histology showed features consistent with NLCS. The case is being reported for its unusual clinical presentation.

  2. Huge Intravascular Tumor Extending to the Heart: Leiomyomatosis

    Directory of Open Access Journals (Sweden)

    Suat Doganci

    2015-01-01

    Full Text Available Intravenous leiomyomatosis (IVL) is a rare neoplasm characterized by a histologically benign-looking smooth muscle cell tumor mass growing within the intrauterine and extrauterine venous system. In this report we present an unusual case of IVL originating from the iliac vein and extending into the right cardiac chambers. A 49-year-old female patient, who was treated with warfarin sodium due to right iliac vein thrombosis, was admitted to our department with intermittent dyspnea, palpitation, and dizziness. Physical examination was almost normal except for bilateral pretibial edema. On magnetic resonance venography, there was an intravenous mass originating from the right internal iliac vein and extending into the inferior vena cava. Transthoracic echocardiography and transesophageal echocardiography revealed a huge mass extending from the inferior vena cava through the right atrium, with obvious venous occlusion. Thoracic, abdominal, and pelvic MR showed an intravascular mass consistent with leiomyomatosis. Surgery was performed through a median sternotomy. A huge mass, 25 cm in length and 186 g in weight, was excised through a right atrial oblique incision, on a beating heart with cardiopulmonary bypass. Histopathologic assessment was compatible with IVL. The exact strategy for the surgical treatment of IVL is still controversial. We used a one-stage approach, with complete resection of a huge IVL extending from the right atrium to the right iliac vein. In such cases, the high recurrence rate is a significant problem and should be kept in mind.

  3. Multimodality treatment with radiotherapy for huge hepatocellular carcinoma.

    Science.gov (United States)

    Han, Hee Ji; Kim, Mi Sun; Cha, Jihye; Choi, Jin Sub; Han, Kwang Hyub; Seong, Jinsil

    2014-01-01

    For huge hepatocellular carcinoma (HCC), therapeutic decisions have varied from local therapy to systemic therapy, with radiotherapy (RT) playing only a palliative role. In this study, we investigated whether multimodality treatment involving RT could be effective in huge HCC. This study was performed in 116 patients with HCC >10 cm. The number of patients in stage II, III and IV was 12, 54 and 50, respectively. RT was given as a combined modality in most patients. The median dose was 45 Gy, with 1.8 Gy per fraction. The median overall survival (OS) and progression-free survival (PFS) were 14.8 and 6.5 months, respectively. The median infield PFS was not reached. Infield failure, outfield intrahepatic and extrahepatic failure were observed in 8.6, 18.1, and 12.1% of patients, respectively. For OS and PFS, number of tumors, initial alpha-fetoprotein (AFP) level, treatment response, percent AFP decrement, and hepatic resection were significant prognostic factors. Tumor characteristics and treatment response were significantly different between long-term survivors and the other patients. Although huge HCC presents an aggressive clinical course, multimodality approaches involving RT can offer an opportunity for prolonged survival. © 2014 S. Karger AG, Basel.

  4. Huge Intravascular Tumor Extending to the Heart: Leiomyomatosis.

    Science.gov (United States)

    Doganci, Suat; Kaya, Erkan; Kadan, Murat; Karabacak, Kubilay; Erol, Gökhan; Demirkilic, Ufuk

    2015-01-01

    Intravenous leiomyomatosis (IVL) is a rare neoplasm characterized by a histologically benign-looking smooth muscle cell tumor mass growing within the intrauterine and extrauterine venous system. In this report we present an unusual case of IVL originating from the iliac vein and extending into the right cardiac chambers. A 49-year-old female patient, who was treated with warfarin sodium due to right iliac vein thrombosis, was admitted to our department with intermittent dyspnea, palpitation, and dizziness. Physical examination was almost normal except for bilateral pretibial edema. On magnetic resonance venography, there was an intravenous mass originating from the right internal iliac vein and extending into the inferior vena cava. Transthoracic echocardiography and transesophageal echocardiography revealed a huge mass extending from the inferior vena cava through the right atrium, with obvious venous occlusion. Thoracic, abdominal, and pelvic MR showed an intravascular mass consistent with leiomyomatosis. Surgery was performed through a median sternotomy. A huge mass, 25 cm in length and 186 g in weight, was excised through a right atrial oblique incision, on a beating heart with cardiopulmonary bypass. Histopathologic assessment was compatible with IVL. The exact strategy for the surgical treatment of IVL is still controversial. We used a one-stage approach, with complete resection of a huge IVL extending from the right atrium to the right iliac vein. In such cases, the high recurrence rate is a significant problem and should be kept in mind.

  5. Efficient visualization of unsteady and huge scalar and vector fields

    Science.gov (United States)

    Vetter, Michael; Olbrich, Stephan

    2016-04-01

    The simulation of climate data tends to produce very large data sets, which can hardly be processed in classical post-processing visualization applications. In the most traditional post-processing scenarios, the visualization pipeline, consisting of the processes of data generation, visualization mapping and rendering, is distributed into two parts over the network or separated via file transfer: data generation on a supercomputer on the one hand, and the remaining tasks on a dedicated visualization system on the other. In that case, temporary data sets of huge volume have to be transferred over the network, which leads to bandwidth bottlenecks and volume limitations. As an alternative, all simulation and visualization processes are integrated in a monolithic application, where just 2D pixel data is stored, which reduces the user's possibilities for 3D interaction with the visualization to frame skipping. Within the Climate Visualization Lab, as part of the Cluster of Excellence "Integrated Climate System Analysis and Prediction" (CliSAP) at the University of Hamburg, in cooperation with the German Climate Computing Center (DKRZ), we plan to integrate a different approach, which has proven successful in former meteorology applications, e.g. PALM (Parallel Large Eddy Simulation Model). Our software framework DSVR is based on separating the process chain between the mapping and the rendering processes. We have developed a parallelized visualization library based on MPI and evaluated it on various supercomputers. DSVR can be used to integrate the visualization into a parallel simulation model to support in-situ processing, resulting in a sequence of time-based geometric 3D objects which can be interactively rendered in a separate 3D viewer application. To meet the actual requirements (a) to visualize existing data sets, (b) to support more than rectilinear grids, and (c) to integrate in-situ processing in the ICON model, all based on our DSVR framework
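    The in-situ idea, mapping each time step to compact 3D geometry inside the simulation loop instead of dumping full volumes for post-processing, can be sketched as follows. This uses scikit-image's marching_cubes as a stand-in for DSVR's parallel mapping stage; it is a conceptual analogue, not the DSVR API, and the scalar field is synthetic.

```python
import numpy as np
from skimage.measure import marching_cubes

def fake_simulation_step(t, shape=(64, 64, 64)):
    """Stand-in for one time step of a scalar field produced by the model."""
    z, y, x = np.indices(shape)
    cx = shape[2] / 2 + 10 * np.sin(0.2 * t)
    return np.exp(-((x - cx) ** 2 + (y - shape[1] / 2) ** 2 +
                    (z - shape[0] / 2) ** 2) / 200.0)

for t in range(5):
    field = fake_simulation_step(t)
    # In-situ mapping: extract an isosurface while the data is still in memory,
    # and store only the (much smaller) geometry for later interactive rendering.
    verts, faces, _, _ = marching_cubes(field, level=0.5)
    np.savez(f"isosurface_t{t:03d}.npz", verts=verts, faces=faces)
    print(f"t={t}: {len(verts)} vertices, {len(faces)} triangles")
```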

  6. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor the battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to a TV audience with various video format requirements, with minimal usage of resources for transcoding services both at the reporter's end and at the cloud infrastructure end.
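    The allocation logic implied by such benchmarks can be sketched as a lookup table of per-task resource demands plus a greedy choice of the least-loaded machine that still fits the task. The demand figures and machine sizes below are placeholders, not the paper's measured benchmarks.

```python
# Placeholder benchmark table: (input_profile, output_profile) -> CPU cores needed
# for real-time transcoding. Real values would come from measured benchmarks.
BENCHMARKS = {
    ("h264_1080p30", "h264_720p30"): 2.0,
    ("h264_1080p30", "h264_480p30"): 1.5,
    ("h264_4k30",    "h264_1080p30"): 4.0,
}

class TranscodeVM:
    def __init__(self, name, cores):
        self.name, self.cores, self.used = name, cores, 0.0
    def can_fit(self, demand):
        return self.used + demand <= self.cores
    def assign(self, demand):
        self.used += demand

def allocate(task, vms):
    demand = BENCHMARKS[task]
    # The least-loaded feasible VM keeps headroom for real-time deadlines.
    candidates = [vm for vm in vms if vm.can_fit(demand)]
    if not candidates:
        return None                        # would trigger scaling out in a real engine
    vm = min(candidates, key=lambda v: v.used / v.cores)
    vm.assign(demand)
    return vm.name

vms = [TranscodeVM("vm-1", cores=8), TranscodeVM("vm-2", cores=8)]
jobs = [("h264_4k30", "h264_1080p30"), ("h264_1080p30", "h264_720p30"),
        ("h264_1080p30", "h264_480p30")]
for job in jobs:
    print(job, "->", allocate(job, vms))
```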

  7. Tracking the Flow of Resources in Electronic Waste - The Case of End-of-Life Computer Hard Disk Drives.

    Science.gov (United States)

    Habib, Komal; Parajuly, Keshav; Wenzel, Henrik

    2015-10-20

    Recovery of resources, in particular metals, from waste flows is widely seen as a prioritized option to reduce their potential supply constraints in the future. The current waste electrical and electronic equipment (WEEE) treatment system is more focused on bulk metals, and the recycling rate of specialty metals, such as rare earths, is negligible compared to their increasing use in modern products, such as electronics. This study investigates the challenges in recovering these resources in the existing WEEE treatment system. It is illustrated by following the material flows of resources in a conventional WEEE treatment plant in Denmark. Computer hard disk drives (HDDs) containing neodymium-iron-boron (NdFeB) magnets were selected as the case product for this experiment. The resulting output fractions were tracked until their final treatment in order to estimate the recovery potential of rare earth elements (REEs) and other resources contained in HDDs. The results show that, out of the 244 kg of HDDs treated, 212 kg, comprising mainly aluminum and steel, can finally be recovered from the metallurgical process. The results further demonstrate the complete loss of REEs in the existing shredding-based WEEE treatment processes. Dismantling and separate processing of NdFeB magnets from their end-use products can be a more preferred option over shredding. However, it remains a technological and logistic challenge for the existing system.

  8. Methods of resource management and applications in computing systems based on cloud technology

    Directory of Open Access Journals (Sweden)

    Карина Андріївна Мацуєва

    2015-07-01

    Full Text Available This article describes methods of managing resources and applications that are parts of an information system for scientific research (ISSR). A control model for requests in the ISSR is given, and results of operating a real cloud system with an additional load-distribution module programmed in Python are presented.
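    The abstract gives no details of the load-distribution module beyond its implementation language, so the following is only a minimal sketch of one common strategy under that assumption: dispatch each incoming request to the node with the lowest current load, tracked in a min-heap.

```python
import heapq

class LoadBalancer:
    """Least-loaded dispatcher: each request goes to the node with the
    smallest current load, tracked in a min-heap."""

    def __init__(self, nodes):
        # Heap entries: (current_load, node_name).
        self.heap = [(0.0, name) for name in nodes]
        heapq.heapify(self.heap)

    def dispatch(self, request_cost):
        load, node = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + request_cost, node))
        return node

balancer = LoadBalancer(["node-a", "node-b", "node-c"])
for cost in [1.0, 0.5, 2.0, 1.0, 0.2]:
    print(f"request (cost {cost}) -> {balancer.dispatch(cost)}")
```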

  9. Recommended Computer End-User Skills for Business Students by Fortune 500 Human Resource Executives.

    Science.gov (United States)

    Zhao, Jensen J.

    1996-01-01

    Human resources executives (83 responses from 380) strongly recommended 11 and recommended 46 end-user skills for business graduates. Core skills included use of keyboard, mouse, microcomputer, and printer; Windows; Excel; and telecommunications functions (electronic mail, Internet, local area networks, downloading). Knowing one application of…

  10. Recommendations for protecting National Library of Medicine Computing and Networking Resources

    Energy Technology Data Exchange (ETDEWEB)

    Feingold, R.

    1994-11-01

    Protecting Information Technology (IT) involves a number of interrelated factors. These include mission, available resources, technologies, existing policies and procedures, internal culture, contemporary threats, and strategic enterprise direction. In the face of this formidable list, a structured approach provides cost-effective actions that allow the organization to manage its risks. We face fundamental challenges that will persist for at least the next several years. It is difficult if not impossible to precisely quantify risk. IT threats and vulnerabilities change rapidly and continually. Limited organizational resources combined with mission restraints, such as availability and connectivity requirements, will ensure that most systems will not be absolutely secure (if such security were even possible). In short, there is no technical (or administrative) "silver bullet." Protection means employing a stratified series of recommendations, matching protection levels against information sensitivities. Adaptive and flexible risk management is the key to effective protection of IT resources. The cost of the protection must be kept less than the expected loss, and one must take into account that an adversary will not expend more to attack a resource than the value of its compromise to that adversary. Notwithstanding the difficulty, if not impossibility, of precisely quantifying risk, the aforementioned allows us to avoid the trap of choosing a course of action simply because "it's safer" or ignoring an area because no one has explored its potential risk. The recommendations for protecting IT resources begin with a discussion of contemporary threats and vulnerabilities, and then proceed from general to specific preventive measures. From a risk management perspective, it is imperative to understand that today the vast majority of threats are against UNIX hosts connected to the Internet.

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been lower as the Run 1 samples are being completed and smaller samples for upgrades and preparations are ramping up. Much of the computing work is focusing on preparations for Run 2 and on improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months.   Tape utilisation was a focus for the operation teams, with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  12. Aggregating Data for Computational Toxicology Applications: The U.S. Environmental Protection Agency (EPA) Aggregated Computational Toxicology Resource (ACToR) System

    Directory of Open Access Journals (Sweden)

    Elaine A. Cohen Hubal

    2012-02-01

    Full Text Available Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built using open source tools and is freely available to download. This review describes the organization of the data repository and provides selected examples of use cases.

  13. Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key

    Science.gov (United States)

    Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.

    2017-01-01

    Similar to other asymmetric encryption schemes, RSA can be cracked using a series of mathematical calculations: the private key used to decrypt the message can be computed from the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method to perform distributed computing to calculate RSA's private key. The proposed method uses multiple volunteered mobile devices to contribute during the calculation process. Our objective is to demonstrate how the use of volunteer computing on mobile devices may be a feasible option to reduce the time required to break a weak RSA encryption and to observe the behavior and running time of the application on mobile devices.
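
    The paper's own protocol is not reproduced in the record; as a rough illustration of the idea, the sketch below splits a trial-division search for a toy RSA modulus across worker processes standing in for volunteered devices. The modulus and the partitioning scheme are invented for the example and far too small to be secure.

```python
from multiprocessing import Pool

N = 100160063  # toy RSA modulus: 10007 * 10009 (not a realistic key size)

def search_range(bounds):
    """Check one slice of candidate divisors; each slice stands in for
    the work shipped to a single volunteered mobile device."""
    lo, hi = bounds
    for d in range(lo | 1, hi, 2):  # odd candidates only
        if N % d == 0:
            return d
    return None

if __name__ == "__main__":
    limit = int(N ** 0.5) + 1
    chunk = 2500
    slices = [(s, min(s + chunk, limit)) for s in range(3, limit, chunk)]
    with Pool(4) as pool:
        for factor in pool.imap_unordered(search_range, slices):
            if factor:
                print("p =", factor, "q =", N // factor)
                break
```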

  14. A mathematical model for a distributed attack on targeted resources in a computer network

    Science.gov (United States)

    Haldar, Kaushik; Mishra, Bimal Kumar

    2014-09-01

    A mathematical model has been developed to analyze the spread of a distributed attack on critical targeted resources in a network. The model provides an epidemic framework with two sub-frameworks to capture the difference between the overall behavior of the attacking hosts and that of the targeted resources. The analysis focuses on obtaining threshold conditions that determine the success or failure of such attacks. Considering the criticality of the systems involved and the strength of the defence mechanism, a measure has been suggested that highlights the level of success achieved by the attacker. To understand the overall dynamics of the system in the long run, its equilibrium points have been obtained, their stability has been analyzed, and conditions for their stability have been outlined.
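
    The record gives the model only in words; the sketch below integrates a generic two-population SIR-style system (attacking hosts compromising targeted resources) to show the kind of threshold behaviour such an analysis studies. The equations and parameter values are assumptions for illustration, not the authors' model.

```python
import numpy as np
from scipy.integrate import odeint

def attack_model(y, t, beta_a, beta_t, gamma_a, gamma_t):
    """Toy two-group epidemic: Sa/Ia = susceptible/compromised attacking hosts,
    St/It = susceptible/compromised targeted resources."""
    Sa, Ia, St, It = y
    dSa = -beta_a * Sa * Ia
    dIa = beta_a * Sa * Ia - gamma_a * Ia
    dSt = -beta_t * St * Ia            # targets are compromised by attacking hosts
    dIt = beta_t * St * Ia - gamma_t * It
    return [dSa, dIa, dSt, dIt]

t = np.linspace(0, 60, 601)
y0 = [0.99, 0.01, 1.0, 0.0]                       # fractions of each population
sol = odeint(attack_model, y0, t, args=(0.5, 0.3, 0.1, 0.2))
print("peak fraction of compromised targets: %.3f" % sol[:, 3].max())
```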

  15. Sustainable supply chain management through enterprise resource planning (ERP): a model of sustainable computing

    OpenAIRE

    Broto Rauth Bhardwaj

    2015-01-01

    Green supply chain management (GSCM) is a driver of sustainable strategy. This topic is becoming increasingly important for both academia and industry. With the increasing demand for reducing carbon footprints, there is a need to study the drivers of sustainable development. There is also a need to develop a sustainability model. Using resource-based theory (RBT), the present model for sustainable strategy has been developed. On the basis of data collected, the key drivers of sustainabili...

  16. Development of a Computational Framework for Stochastic Co-optimization of Water and Energy Resource Allocations under Climatic Uncertainty

    Science.gov (United States)

    Xuan, Y.; Mahinthakumar, K.; Arumugam, S.; DeCarolis, J.

    2015-12-01

    Owing to the lack of a consistent approach to assimilate probabilistic forecasts for water and energy systems, utilization of climate forecasts for conjunctive management of these two systems is very limited. Prognostic management of these two systems presents a stochastic co-optimization problem that seeks to determine reservoir releases and power allocation strategies while minimizing the expected operational costs subject to probabilistic climate forecast constraints. To address these issues, we propose a high performance computing (HPC) enabled computational framework for stochastic co-optimization of water and energy resource allocations under climate uncertainty. The computational framework embodies a new paradigm shift in which attributes of climate (e.g., precipitation, temperature) and their forecasted probability distributions are employed conjointly to inform seasonal water availability and electricity demand. The HPC-enabled cyberinfrastructure framework is developed to perform detailed stochastic analyses and to better quantify and reduce the uncertainties associated with water and power systems management by utilizing improved hydro-climatic forecasts. In this presentation, our stochastic multi-objective solver, extended from Optimus (Optimization Methods for Universal Simulators), is introduced. The solver uses a parallel cooperative multi-swarm method for the efficient solution of large-scale simulation-optimization problems on parallel supercomputers. The cyberinfrastructure harnesses HPC resources to perform intensive computations using ensemble forecast models of streamflow and power demand. The stochastic multi-objective particle swarm optimizer we developed is used to co-optimize water and power system models under constraints over a large number of ensembles. The framework sheds light on the application of climate forecasts and a cyber-innovation framework to improve management and promote the sustainability of water and energy systems.
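
    The Optimus-based solver itself is not shown in the record; the following is a minimal single-swarm particle swarm optimizer in Python illustrating the swarm-search idea on a toy cost function. The real solver is multi-objective, parallel, and constrained, none of which is reproduced here; the toy objective and parameters are assumptions.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    """Minimal PSO minimizing `cost` over a box-bounded domain."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# toy stand-in for an expected-operational-cost objective
best, val = pso(lambda z: np.sum((z - 1.3) ** 2), dim=4)
print(best.round(3), round(val, 6))
```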

  17. Computer and Video Games in Family Life: The Digital Divide as a Resource in Intergenerational Interactions

    Science.gov (United States)

    Aarsand, Pal Andre

    2007-01-01

    In this ethnographic study of family life, intergenerational video and computer game activities were videotaped and analysed. Both children and adults invoked the notion of a digital divide, i.e. a generation gap between those who master and do not master digital technology. It is argued that the digital divide was exploited by the children to…

  18. Planning and Development of the Computer Resource at Baylor College of Medicine.

    Science.gov (United States)

    And Others; Ogilvie, W. Buckner, Jr.

    1979-01-01

    Describes the development and implementation of a plan at Baylor College of Medicine for providing computer support for both the administrative and scientific/research needs of the Baylor community. The cost-effectiveness of this plan is also examined. (Author/CMV)

  19. Computers for All Students: A Strategy for Universal Access to Information Resources.

    Science.gov (United States)

    Resmer, Mark; And Others

    This report proposes a strategy of putting networked computing devices into the hands of all students at institutions of higher education. It outlines the rationale for such a strategy, the options for financing, the required institutional support structure needed, and various implementation approaches. The report concludes that the resultant…

  20. The Portability of Computer-Related Educational Resources: An Overview of Issues and Directions.

    Science.gov (United States)

    Collis, Betty A.; De Diana, Italo

    1990-01-01

    Provides an overview of the articles in this special issue, which deals with the portability, or transferability, of educational computer software. Motivations for portable software relating to cost, personnel, and time are discussed, and factors affecting portability are described, including technical factors, educational factors, social/cultural…

  1. A NOVEL APPROACH FOR PATTERN ANALYSIS FROM HUGE DATAWAREHOUSE

    Directory of Open Access Journals (Sweden)

    BABITA

    2014-05-01

    Full Text Available Due to the tremendous growth of data and large databases, efficient extraction of required data has become a challenging task. This paper proposes a novel approach for knowledge discovery from huge unlabeled temporal databases by employing a combination of HMM and K-means techniques. We propose to recursively divide the entire database into clusters having similar characteristics; this process is repeated until we obtain clusters where no further diversification is possible. Thereafter, the clusters are labeled for knowledge extraction for various purposes.
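
    The paper's HMM step is not detailed in the record; the sketch below illustrates only the recursive K-means partitioning part, stopping when a cluster can no longer be usefully split. The stopping thresholds and the synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def recursive_kmeans(X, min_size=20, max_depth=4, depth=0):
    """Recursively split X into 2 clusters until clusters are small,
    nearly homogeneous, or the depth limit is reached; returns leaf clusters."""
    if depth >= max_depth or len(X) < 2 * min_size:
        return [X]
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    leaves = []
    for label in (0, 1):
        part = X[km.labels_ == label]
        # stop diversifying when a part is tiny or almost constant
        if len(part) < min_size or part.std(axis=0).max() < 1e-3:
            leaves.append(part)
        else:
            leaves.extend(recursive_kmeans(part, min_size, max_depth, depth + 1))
    return leaves

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (200, 3)) for m in (0, 2, 5)])
clusters = recursive_kmeans(X)
print([len(c) for c in clusters])
```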

  2. A young woman with a huge paratubal cyst

    Directory of Open Access Journals (Sweden)

    Ceren Golbasi

    2016-09-01

    Full Text Available Paratubal cysts are asymptomatic embryological remnants. These cysts are usually diagnosed during adolescence and reproductive age. In general, their size is small, but they can be complicated by rupture, torsion, or hemorrhage. Paratubal cysts are often discovered fortuitously on routine ultrasound examination. We report a 19-year-old female patient who presented with irregular menses and abdominal pain. Ultrasound examination revealed a huge cystic mass in the right adnexal area. The diagnosis was confirmed as a paratubal cyst during laparotomy and, hence, cystectomy and right salpingectomy were performed. [Cukurova Med J 2016; 41(3): 573-576]

  3. Modeling huge sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modeling point sources, line sources, and surface sources is presented. Line and surface sources are modeled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces of the room. Point sources are modeled using a hybrid calculation...... method combining this ray-tracing method with image source modeling. With these three source types it is possible to model huge and complex sound sources in industrial environments. Compared to a calculation with only point sources, the use of extended sound sources is shown to improve the agreement...

  4. Huge pyometra in a postmenopausal age: a diagnostic dilemma

    Directory of Open Access Journals (Sweden)

    Pramila Yadav

    2015-10-01

    Full Text Available Pyometra in postmenopausal women is an extremely rare disease that hardly responds to the usual treatment of antibiotic therapy. Our case presented as a postmenopausal woman with a huge pyometra. Pyometra drainage was done with great difficulty after a blind biopsy. Endometrial and cervical biopsy followed by endometrial curettage was done. An intrauterine Foley catheter was kept for seven days, and the histopathological report was suggestive of squamous cell carcinoma of the cervix. [Int J Reprod Contracept Obstet Gynecol 2015; 4(5): 1549-1551]

  5. Method and apparatus for offloading compute resources to a flash co-processing appliance

    Energy Technology Data Exchange (ETDEWEB)

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing -bung

    2015-10-13

    Solid-State Drive (SSD) burst buffer nodes are interposed into a parallel supercomputing cluster to enable fast burst checkpoint of cluster memory to or from nearby interconnected solid-state storage, with asynchronous migration between the burst buffer nodes and slower, more distant disk storage. The SSD nodes also perform tasks offloaded from the compute nodes or associated with the checkpoint data. For example, the data for the next job is preloaded onto the SSD node and uploaded very quickly to the respective compute node just before the next job starts. During a job, the SSD nodes perform fast visualization and statistical analysis upon the checkpoint data. The SSD nodes can also perform data reduction and encryption of the checkpoint data.
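
    As a rough illustration of the burst-buffer idea (fast local checkpoint, asynchronous migration to slower storage), the sketch below writes a checkpoint to a "fast" directory and copies it to a "slow" directory in a background thread. The paths, sizes, and API are invented for the example and are not the patented apparatus.

```python
import shutil, tempfile, threading
from pathlib import Path

def checkpoint(data: bytes, fast_dir: Path, slow_dir: Path, name: str) -> threading.Thread:
    """Write a checkpoint to fast (SSD-like) storage, then migrate it to
    slow (disk-like) storage asynchronously so compute can resume at once."""
    fast_path = fast_dir / name
    fast_path.write_bytes(data)                       # fast, blocking burst write
    mover = threading.Thread(target=shutil.copy2, args=(fast_path, slow_dir / name))
    mover.start()                                     # slow migration off the critical path
    return mover

if __name__ == "__main__":
    fast = Path(tempfile.mkdtemp(prefix="ssd_"))
    slow = Path(tempfile.mkdtemp(prefix="disk_"))
    mover = checkpoint(b"\x00" * (1 << 20), fast, slow, "step_0001.ckpt")
    # ... computation would continue here while the copy runs ...
    mover.join()
    print("migrated:", (slow / "step_0001.ckpt").stat().st_size, "bytes")
```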

  6. Resources and costs for microbial sequence analysis evaluated using virtual machines and cloud computing.

    Directory of Open Access Journals (Sweden)

    Samuel V Angiuoli

    Full Text Available BACKGROUND: The widespread popularity of genomic applications is threatened by the "bioinformatics bottleneck" resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. RESULTS: We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics, where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. CONCLUSIONS: Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer invested

  7. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    Science.gov (United States)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based, designed to work on a single computer, which represents a major limitation in many ways, starting from limited computer processing, storage power, accessibility, availability, etc. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid deployment model of public-private cloud running on two separate virtual machines (VMs). The first one (VM1) runs on Amazon Web Services (AWS) and the second one (VM2) runs on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. It provides a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform, because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, accessible from everywhere, scalable, works in a distributed computer environment, creates a real-time multiuser collaboration platform, its programming language code and components are interoperable, and it is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services: 1) data infrastructure (DI), 2) support for water resources modelling (WRM), 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state

  8. New resource for the computation of cartilage biphasic material properties with the interpolant response surface method.

    Science.gov (United States)

    Keenan, Kathryn E; Kourtis, Lampros C; Besier, Thor F; Lindsey, Derek P; Gold, Garry E; Delp, Scott L; Beaupre, Gary S

    2009-08-01

    Cartilage material properties are important for understanding joint function and diseases, but can be challenging to obtain. Three biphasic material properties (aggregate modulus, Poisson's ratio and permeability) can be determined using an analytical or finite element model combined with optimisation to find the material property values that best reproduce an experimental creep curve. The purpose of this study was to develop an easy-to-use resource to determine biphasic cartilage material properties. A Cartilage Interpolant Response Surface was generated from interpolation of finite element simulations of creep indentation tests. Creep indentation tests were performed on five sites across a tibial plateau. A least-squares residual search of the Cartilage Interpolant Response Surface resulted in a best-fit curve for each experimental condition with corresponding material properties. These sites provided a representative range of aggregate modulus (0.48-1.58 MPa), Poisson's ratio (0.00-0.05) and permeability (1.7 × 10^-15 to 5.4 × 10^-15 m^4/(N·s)) values found in human cartilage. The resource is freely available from https://simtk.org/home/va-squish.
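
    The Cartilage Interpolant Response Surface itself is distributed at the URL above; the sketch below shows only the generic pattern of fitting a measured creep curve by a least-squares residual search over a precomputed grid of simulated curves. The grid, parameter ranges, and the placeholder curve model are synthetic assumptions, not the study's finite element results.

```python
import numpy as np

t = np.linspace(0, 100, 50)                        # time points of the creep test

def simulated_creep(aggregate_modulus, permeability):
    """Placeholder creep curve; the real surface comes from FE simulations."""
    tau = 1.0 / (aggregate_modulus * permeability * 1e3)
    return (1.0 / aggregate_modulus) * (1 - np.exp(-t / tau))

# precomputed "response surface": one simulated curve per parameter pair
HA_grid = np.linspace(0.4, 1.6, 25)                # aggregate modulus [MPa]
k_grid = np.linspace(1.5, 5.5, 25)                 # permeability [1e-15 m^4/(N s)]
surface = {(HA, k): simulated_creep(HA, k) for HA in HA_grid for k in k_grid}

# synthetic "experimental" curve with noise
rng = np.random.default_rng(0)
experiment = simulated_creep(0.9, 3.2) + rng.normal(0, 0.01, t.size)

# least-squares residual search over the surface
best = min(surface, key=lambda p: np.sum((surface[p] - experiment) ** 2))
print("best-fit aggregate modulus %.2f MPa, permeability %.2f e-15 m^4/(N s)" % best)
```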

  9. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    CERN Document Server

    Gomez, Andres; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at CERN, the European Organization for Nuclear Research. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for such a requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, by a Machin...
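
    The framework's actual machine-learning component is not given in the record; as a generic illustration of behaviour-based intrusion detection, the sketch below flags anomalous job payloads from simple per-process feature vectors using an isolation forest. The features and data are made up for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# feature vector per job: [syscalls/s, outbound connections, CPU share, forked procs]
normal_jobs = rng.normal([200, 2, 0.8, 5], [40, 1, 0.1, 2], size=(500, 4))
suspect_jobs = np.array([[950, 120, 0.99, 60],     # e.g. a scanner / miner-like payload
                         [180, 3, 0.75, 4]])       # an ordinary-looking job

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_jobs)
print(detector.predict(suspect_jobs))              # -1 = anomalous, 1 = normal
```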

  10. Recent advances in computational optimization

    CERN Document Server

    2013-01-01

    Optimization is part of our everyday life. We try to organize our work in a better way, and optimization occurs in minimizing time and cost or maximizing profit, quality and efficiency. Many real-world problems arising in engineering, economics, medicine and other domains can also be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization. The book presents recent advances in computational optimization. The volume includes important real-world problems such as parameter settings for controlling processes in a bioreactor, robot skin wiring, strip packing, project scheduling, tuning of a PID controller and so on. Some of them can be solved by applying traditional numerical methods, but others need a huge amount of computational resources. For these it is shown that it is appropriate to develop algorithms based on metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming etc...

  11. Studying the Earth's Environment from Space: Computer Laboratory Exercises and Instructor Resources

    Science.gov (United States)

    Smith, Elizabeth A.; Alfultis, Michael

    1998-01-01

    Studying the Earth's Environment From Space is a two-year project to develop a suite of CD-ROMs containing Earth System Science curriculum modules for introductory undergraduate science classes. Lecture notes, slides, and computer laboratory exercises, including actual satellite data and software, are being developed in close collaboration with Carla Evans of NASA GSFC Earth Sciences Directorate Scientific and Educational Endeavors (SEE) project. Smith and Alfultis are responsible for the Oceanography and Sea Ice Processes Modules. The GSFC SEE project is responsible for Ozone and Land Vegetation Modules. This document constitutes a report on the first year of activities of Smith and Alfultis' project.

  12. Computational Fluid Dynamics In GARUDA Grid Environment

    CERN Document Server

    Roy, Chandra Bhushan

    2011-01-01

    The GARUDA Grid, developed on the NKN (National Knowledge Network) by the Centre for Development of Advanced Computing (C-DAC), connects High Performance Computing (HPC) clusters that are geographically separated all over India. C-DAC has been associated with the development of HPC infrastructure since its establishment in 1988. The Grid infrastructure provides a secure and efficient way of accessing heterogeneous resources. Enabling scientific applications on the Grid has been researched for some time now. In this regard we have successfully enabled a Computational Fluid Dynamics (CFD) application, which can help the CFD community as a whole to carry out computational research that requires huge computational resources beyond one's in-house capability. This work is part of the ongoing Grid GARUDA project funded by the Department of Information Technology.

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites, such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  14. Errors in Seismic Hazard Assessment are Creating Huge Human Losses

    Science.gov (United States)

    Bela, J.

    2015-12-01

    The current practice of representing earthquake hazards to the public based upon their perceived likelihood or probability of occurrence is proven now by the global record of actual earthquakes to be not only erroneous and unreliable, but also too deadly! Earthquake occurrence is sporadic, and therefore assumptions of earthquake frequency and return-period are not only misleading, but also categorically false. More than 700,000 people have now lost their lives (2000-2011), wherein 11 of the World's Deadliest Earthquakes have occurred in locations where probability-based seismic hazard assessments had predicted only low seismic hazard. Unless seismic hazard assessment and the setting of minimum earthquake design safety standards for buildings and bridges are based on a more realistic deterministic recognition of "what can happen" rather than on what mathematical models suggest is "most likely to happen," such future huge human losses can only be expected to continue! The actual earthquake events that did occur were at or near the maximum potential-size event that either had already occurred in the past or was geologically known to be possible. Haiti's M7 earthquake, 2010 (with > 222,000 fatalities) meant the dead could not even be buried with dignity. Japan's catastrophic Tohoku earthquake, 2011, a M9 megathrust earthquake, unleashed a tsunami that not only obliterated coastal communities along the northern Japanese coast, but also claimed > 20,000 lives. This tsunami flooded nuclear reactors at Fukushima, causing 4 explosions and 3 reactors to melt down. But while this history of huge human losses due to erroneous and misleading seismic hazard estimates, despite its wrenching pain, cannot be unlived, if faced with courage and a more realistic deterministic estimate of "what is possible," it need not be lived again. An objective testing of the results of global probability-based seismic hazard maps against real occurrences has never been done by the

  15. Isolated huge aneurysm of the left main coronary artery in a 22-year-old patient with type 1 neurofibromatosis.

    Science.gov (United States)

    Pontailler, Margaux; Vilarem, Didier; Paul, Jean-François; Deleuze, Philippe H

    2015-03-01

    A 22-year-old patient with neurofibromatosis type 1 presented with acute chest pain. A computed tomography scan and coronary angiography revealed a partially thrombosed huge aneurysm of the left main coronary artery. Despite medical treatment, the patient's angina recurred. The patient underwent a coronary bypass grafting operation and surgical exclusion of the aneurysm. Postoperative imaging disclosed good permeability of the 3 coronary artery bypass grafts and complete thrombosis of the excluded aneurysm.

  16. The prostatic utricle cyst with huge calculus and hypospadias: A case report and a review of the literature

    OpenAIRE

    Wang, Weigang; WANG, YUANTAO; Zhu, Dechun; Yan, Pengfei; Dong, Biao; Zhou, Honglan

    2015-01-01

    Prostatic utricle cysts with calculus and hypospadias are rare, with only a few reported cases. We present a case of a prostatic utricle cyst with a huge calculus in a 25-year-old male. He had a history of left cryptorchidism and surgery for penoscrotal hypospadias in his infancy. He was referred for frequent micturition, urgency of urination, painful urination, terminal hematuria, and dysuria. A computed tomography (CT) scan revealed a retrovesical cystic lesion of low density, showing a 5 × 5-cm calcific...

  17. Huge Left Ventricular Thrombus and Apical Ballooning associated with Recurrent Massive Strokes in a Septic Shock Patient

    Directory of Open Access Journals (Sweden)

    Hyun-Jung Lee

    2016-02-01

    Full Text Available The most feared complication of left ventricular thrombus (LVT is the occurrence of systemic thromboembolic events, especially in the brain. Herein, we report a patient with severe sepsis who suffered recurrent devastating embolic stroke. Transthoracic echocardiography revealed apical ballooning of the left ventricle with a huge LVT, which had not been observed in chest computed tomography before the stroke. This case emphasizes the importance of serial cardiac evaluation in patients with stroke and severe medical illness.

  18. Democratizing Computer Science

    Science.gov (United States)

    Margolis, Jane; Goode, Joanna; Ryoo, Jean J.

    2015-01-01

    Computer science programs are too often identified with a narrow stratum of the student population, often white or Asian boys who have access to computers at home. But because computers play such a huge role in our world today, all students can benefit from the study of computer science and the opportunity to build skills related to computing. The…

  20. Attentional Resource Allocation and Cultural Modulation in a Computational Model of Ritualized Behavior

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Sørensen, Jesper

    2016-01-01

    How do cultural and religious rituals influence human perception and cognition, and what separates the highly patterned behaviors of communal ceremonies from perceptually similar precautionary and compulsive behaviors? These are some of the questions that recent theoretical models and empirical...... ritualized behaviors are perceptually similar across a range of behavioral domains, symbolically mediated experience-dependent information (so-called cultural priors) modulate perception such that communal ceremonies appear coherent and culturally meaningful, while compulsive behaviors remain incoherent and, in some cases, pathological. In this study, we extend a qualitative model of human action perception and understanding to include ritualized behavior. Based on previous experimental and computational studies, the model was simulated using instrumental and ritualized representations of realistic motor...

  1. Attentional Resource Allocation and Cultural Modulation in a Computational Model of Ritualized Behavior

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Sørensen, Jesper

    2015-01-01

    How do cultural and religious rituals influence human perception and cognition, and what separates the highly patterned behaviors of communal ceremonies from perceptually similar precautionary and compulsive behaviors? These are some of the questions that recent theoretical models and empirical...... ritualized behaviors are perceptually similar across a range of behavioral domains, symbolically mediated experience-dependent information (so-called cultural priors) modulate perception such that communal ceremonies appear coherent and culturally meaningful, while compulsive behaviors remain incoherent and, in some cases, pathological. In this study, we extend a qualitative model of human action perception and understanding to include ritualized behavior. Based on previous experimental and computational studies, the model was simulated using instrumental and ritualized representations of realistic motor...

  2. II - Detector simulation for the LHC and beyond : how to match computing resources and physics requirements

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  3. I - Detector Simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelisation tools for geometry and response. Events are busy and characterised by an unprecedented energy scale with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be tried by taking as example the calorimeter simulation.

  4. Internet resources for dentistry: computer, Internet, reference, and sites for enhancing personal productivity of the dental professional.

    Science.gov (United States)

    Guest, G F

    2000-08-15

    At the onset of the new millennium, the Internet has become the new standard means of distributing information. In the last two to three years there has been an explosion of e-commerce, with hundreds of new web sites being created every minute. For most corporate entities, a web site is as essential as the phone book listing used to be. Twenty years ago, technologists directed how computer-based systems were utilized. Now it is the end users of personal computers who have gained expertise and drive the functionality of software applications. The computer, initially invented for mathematical functions, has transitioned from this role to an integrated communications device that provides the portal to the digital world. The Web needs to be used by healthcare professionals, not only for professional activities, but also for instant access to information and services "just when they need it." This will facilitate the longitudinal use of information as society continues to gain better information access skills. With the demand for current "just in time" information and the standards established by Internet protocols, reference sources of information may be maintained in dynamic fashion. News services have been available through the Internet for several years, but now reference materials such as online journals and digital textbooks have become available and have the potential to change the traditional publishing industry. The pace of change should make us consider Will Rogers' advice, "It isn't good enough to be moving in the right direction. If you are not moving fast enough, you can still get run over!" The intent of this article is to complement previous articles on Internet Resources published in this journal by presenting information about web sites that present information on computer and Internet technologies, reference materials, news information, and information that lets us improve personal productivity. Neither the author, nor the Journal endorses any of the

  5. Distributed and parallel approach for handle and perform huge datasets

    Science.gov (United States)

    Konopko, Joanna

    2015-12-01

    Big Data refers to the dynamic, large and disparate volumes of data that come from many different sources (tools, machines, sensors, mobile devices) and are uncorrelated with each other. It requires new, innovative and scalable technology to collect, host and analytically process the vast amount of data. A proper architecture for systems that process huge data sets is needed. In this paper, a comparison of distributed and parallel system architectures is presented using the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). The paper also analyzes the problem of extracting and handling valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed that could be used to solve the analyzed problem of storing and processing Big Data.
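
    To make the MapReduce side of the comparison concrete, the following is a minimal in-process map/shuffle/reduce pipeline in Python. It is a stand-in for a real Hadoop job, not the paper's benchmark code, and the sample records are invented.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Map: emit (key, 1) per word, the per-split work a Hadoop mapper does."""
    return [(word.lower(), 1) for word in record.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key across mapper outputs."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: aggregate the grouped values for one key."""
    return key, sum(values)

records = ["big data needs scalable systems",
           "parallel DBMS and MapReduce both scale",
           "big data big clusters"]
mapped = chain.from_iterable(map_phase(r) for r in records)
result = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(result["big"], result["data"])
```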

  6. Anaesthetic management of a neonate with a huge cystic hygroma.

    Directory of Open Access Journals (Sweden)

    Bindi Palkhiwala

    2013-01-01

    Full Text Available We discuss here the case of a 7-day-old neonate with a huge cystic hygroma on the left side of the neck invading the major vessels of the neck, the facial nerve, the strap muscles and the sternocleidomastoid. The anaesthetic implications in this case were maintaining airway patency after induction, difficult intubation, the risk of perioperative dislodgement of the tube, and judgement of the proper time for extubation. Following gaseous induction and adequate mask ventilation, the patient was intubated with a muscle relaxant. Perioperatively, to avoid accidental extubation, we chose to manually hold the ET tube after fixing it. At the end of a relatively uneventful surgery, we could extubate the patient in the OT. The patient was shifted to the NICU for observation. Postoperatively, on the 3rd day, facial palsy was observed. The patient was discharged on the 21st day.

  7. Huge Intracanal lumbar Disc Herniation: a Review of Four Cases

    Directory of Open Access Journals (Sweden)

    Farzad Omidi-Kashani

    2016-01-01

    Full Text Available Lumbar disc herniation (LDH) is the most common cause of sciatica, and only in about 10% of affected patients is surgical intervention necessary. The side of the patient (the side of the most prominent clinical complaints) is usually consistent with the side of imaging (the side with the most prominent disc herniation on imaging scans). In this case series, we present our experience in four cases with huge intracanal LDH in which a mismatch between the patient's side and the imaging side was present. In these cases, for deciding whether to operate, physicians need to rely more on clinical findings, but for deciding the side of discectomy, the imaging characteristic (imaging side) may be the more important criterion.

  8. A huge glandular odontogenic cyst occurring at posterior mandible

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Gi Chung; Han, Won Jeong; Kim, Eun Kyung [Dankook University College of Medicine, Seoul (Korea, Republic of)

    2004-12-15

    The glandular odontogenic cyst is a rare lesion described in 1987. It generally occurs at anterior region of mandible in adults over the age of 40 and has a slight tendency to recur. Histopathologically, a cystic cavity lined by a nonkeratinized, stratified squamous, or cuboidal epithelium varying in thickness is found including a superficial layer with glandular or pseudoglandular structures. A 21-year-old male visited Dankook University Dental Hospital with a chief complaint of swelling of the left posterior mandible. Radiographically, a huge multilocular radiolucent lesion involving impacted 3rd molar at the posterior mandible was observed. Buccolingual cortical expansion with partial perforation of buccal cortical bone was also shown. Histopathologically, this lesion was lined by stratified squamous epithelium with glandular structures in areas of plaque-like thickening. The final diagnosis was made as a glandular odontogenic cyst.

  9. Does China's Huge External Surplus Imply an Undervalued Renminbi?

    Institute of Scientific and Technical Information of China (English)

    Anthony J. Makin

    2007-01-01

    A pegged exchange rate regime has been pivotal to China's export-led development strategy. However, its huge trade surpluses and massive build up of international reserves have been matched by large deficits for major trading partners, creating acute policy concerns abroad, especially in the USA. This paper provides a straightforward conceptual framework for interpreting the effect of China's exchange rate policy on its own trade balance and that of trading partners in the context of discrepant economic growth rates. It shows how pegging the exchange rate when output is outstripping expenditure induces China's trade surpluses and counterpart deficits for its trading partners. An important corollary is that given its strictly regulated capital account, China's persistently large surpluses imply a significantly undervalued renminbi, which should gradually become more flexible.

  10. A huge 6.2 kilogram uterine myoma coinciding with omental leiomyosarcoma: case report.

    Science.gov (United States)

    Ruan, C W; Lee, C L; Yen, C F; Wang, C J; Soong, Y K

    1999-12-01

    Surgery for massive abdominal tumors is both interesting and challenging. We present a case involving a multiple uterine myoma weighing 6.2 kg which coincided with omental leiomyosarcoma. To our knowledge, this is the first report of this type of condition in the English literature. A 44-year-old nulliparous woman had suffered from abdominal pain for a long time. A huge abdominal mass was palpated on physical examination. Computed tomography scanning revealed a huge pelvic-abdominal mass with the possibility of small bowel loops invaded by the mass. A 6-cm omental mass was incidentally found during the subsequent hysterectomy procedure. Perforation of the urinary bladder occurred during the dissection of adhesions. Resection of the omental mass, wide wedge resection of the invaded small bowel, primary repair of the bladder, and hysterectomy were performed. The final pathologic diagnosis was uterine leiomyomata with omental leiomyosarcoma. The patient returned home on postoperative day 14 and was well at the 18-month follow-up examination. The challenge of these tumors lies in their proper diagnosis and surgical management. More case reports and follow-up studies are needed to confirm the efficacy of their management.

  11. Adaptive TrimTree: Green Data Center Networks through Resource Consolidation, Selective Connectedness and Energy Proportional Computing

    Directory of Open Access Journals (Sweden)

    Saima Zafar

    2016-10-01

    Full Text Available A data center is a facility with a group of networked servers used by an organization for the storage, management and dissemination of its data. The increase in data center energy consumption over the past several years is staggering, therefore efforts are being initiated to achieve energy efficiency of various components of data centers. One of the main reasons data centers have high energy inefficiency is that most organizations run their data centers at full capacity 24/7. This results in a number of servers and switches being underutilized or even unutilized, yet working and consuming electricity around the clock. In this paper, we present Adaptive TrimTree, a mechanism that employs a combination of resource consolidation, selective connectedness and energy proportional computing for optimizing energy consumption in a Data Center Network (DCN). Adaptive TrimTree adopts a simple traffic-and-topology-based heuristic to find a minimum-power network subset called the 'active network subset' that satisfies the existing network traffic conditions while switching off the residual unused network components. A 'passive network subset' is also identified for redundancy, which consists of links and switches that may be required in the future; this subset is toggled to the sleep state. An energy proportional computing technique is applied to the active network subset, adapting link data rates to workload and thus maximizing energy optimization. We have compared our proposed mechanism with the fat-tree topology and ElasticTree, a scheme based on resource consolidation. Our simulation results show that our mechanism saves 50%-70% more energy compared to fat-tree and 19.6% compared to ElasticTree, with minimal impact on packet loss percentage and delay. Additionally, our mechanism copes better with traffic anomalies and surges due to the passive network provision.
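
    The record describes the heuristic only at a high level; the sketch below shows a deliberately simplified version of the idea: given per-link traffic, keep just enough of the most-used links to carry the current demand plus headroom and mark the rest as sleep candidates. The topology, capacities, and headroom factor are invented for the example and are not the paper's algorithm.

```python
def trim_links(link_traffic, capacity, headroom=1.2):
    """Greedy 'active subset' selection: keep the most-loaded links until the
    kept capacity covers total demand times a headroom factor."""
    needed_capacity = sum(link_traffic.values()) * headroom
    active, kept_capacity = [], 0.0
    for link, load in sorted(link_traffic.items(), key=lambda kv: -kv[1]):
        active.append(link)
        kept_capacity += capacity
        if kept_capacity >= needed_capacity:
            break
    passive = [l for l in link_traffic if l not in active]  # sleep candidates
    return active, passive

traffic = {"agg1-core1": 6.0, "agg1-core2": 0.5, "agg2-core1": 3.5, "agg2-core2": 0.0}
active, passive = trim_links(traffic, capacity=10.0)
print("active:", active, "| sleep:", passive)
```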

  12. Water resources climate change projections using supervised nonlinear and multivariate soft computing techniques

    Science.gov (United States)

    Sarhadi, Ali; Burn, Donald H.; Johnson, Fiona; Mehrotra, Raj; Sharma, Ashish

    2016-05-01

    Accurate projection of global warming on the probabilistic behavior of hydro-climate variables is one of the main challenges in climate change impact assessment studies. Due to the complexity of climate-associated processes, different sources of uncertainty influence the projected behavior of hydro-climate variables in regression-based statistical downscaling procedures. The current study presents a comprehensive methodology to improve the predictive power of the procedure to provide improved projections. It does this by minimizing the uncertainty sources arising from the high-dimensionality of atmospheric predictors, the complex and nonlinear relationships between hydro-climate predictands and atmospheric predictors, as well as the biases that exist in climate model simulations. To address the impact of the high dimensional feature spaces, a supervised nonlinear dimensionality reduction algorithm is presented that is able to capture the nonlinear variability among projectors through extracting a sequence of principal components that have maximal dependency with the target hydro-climate variables. Two soft-computing nonlinear machine-learning methods, Support Vector Regression (SVR) and Relevance Vector Machine (RVM), are engaged to capture the nonlinear relationships between predictand and atmospheric predictors. To correct the spatial and temporal biases over multiple time scales in the GCM predictands, the Multivariate Recursive Nesting Bias Correction (MRNBC) approach is used. The results demonstrate that this combined approach significantly improves the downscaling procedure in terms of precipitation projection.
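
    As a schematic of the regression-based downscaling step only (not the authors' supervised dimensionality-reduction algorithm, the RVM variant, or the MRNBC bias correction), the sketch below reduces a high-dimensional predictor field with ordinary PCA and fits a Support Vector Regression to a station predictand. All data are synthetic and the hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_days, n_grid = 1500, 200                  # days x atmospheric grid-point predictors
X = rng.normal(size=(n_days, n_grid))
true_signal = X[:, :5] @ np.array([1.2, -0.8, 0.5, 0.3, -0.4])
y = np.maximum(true_signal + rng.normal(0, 0.5, n_days), 0)   # precipitation-like predictand

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),  # unsupervised stand-in for the supervised reduction
                      SVR(C=10.0, epsilon=0.1))
model.fit(X[:1200], y[:1200])
print("test R^2: %.2f" % model.score(X[1200:], y[1200:]))
```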

  13. The EGI-Engage EPOS Competence Center - Interoperating heterogeneous AAI mechanisms and Orchestrating distributed computational resources

    Science.gov (United States)

    Bailo, Daniele; Scardaci, Diego; Spinuso, Alessandro; Sterzel, Mariusz; Schwichtenberg, Horst; Gemuend, Andre

    2016-04-01

    manage the use of the subsurface of the Earth. EPOS started its Implementation Phase in October 2015 and is now actively working to integrate multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS), European-wide organizations and e-infrastructures providing community-specific data and data products, and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into the Integrated Core Services (ICS) system, which will ensure their interoperability and access to these services by the scientific community as well as other users within society. The goal of the EPOS competence center (EPOS CC) is to tackle two of the main challenges that the ICS are going to face in the near future, by taking advantage of the technical solutions provided by EGI. In order to do this, we will present the two pilot use cases the EGI-EPOS CC is developing: 1) The AAI pilot, dealing with the provision of transparent and homogeneous access to the ICS infrastructure for users owning different kinds of credentials (e.g. eduGain, OpenID Connect, X509 certificates etc.). Here the focus is on the mechanisms which allow credential delegation. 2) The computational pilot, improving the back-end services of an existing application in the field of Computational Seismology, developed in the context of the EC-funded project VERCE. The application allows the processing and the comparison of data resulting from the simulation of seismic wave propagation following a real earthquake and real measurements recorded by seismographs. While the simulation data is produced directly by the users and stored in a Data Management System, the observations need to be pre-staged from institutional data services, which are maintained by the community itself. This use case aims at exploiting the EGI FedCloud e-infrastructure for Data

  14. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    Science.gov (United States)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.

  15. Student use of computer tools designed to scaffold scientific problem-solving with hypermedia resources: A case study

    Science.gov (United States)

    Oliver, Kevin Matthew

    National science standards call for increasing student exposure to inquiry and real-world problem solving. Students can benefit from open-ended learning environments that stress the engagement of real problems and the development of thinking skills and processes. The Internet is an ideal resource for context-bound problems with its seemingly endless supply of resources. Problems may arise, however, since young students are cognitively ill-prepared to manage open-ended learning and may have difficulty processing hypermedia. Computer tools were used in a qualitative case study with 12 eighth graders to determine how such implements might support the process of solving open-ended problems. A preliminary study proposition suggested students would solve open-ended problems more appropriately if they used tools in a manner consistent with higher-order critical and creative thinking. Three research questions sought to identify: how students used tools, the nature of science learning in open-ended environments, and any personal or environmental barriers effecting problem solving. The findings were mixed. The participants did not typically use the tools and resources effectively. They successfully collected basic information, but infrequently organized, evaluated, generated, and justified their ideas. While the students understood how to use most tools procedurally, they lacked strategic understanding for why tool use was necessary. Students scored average to high on assessments of general content understanding, but developed artifacts suggesting their understanding of specific micro problems was naive and rife with misconceptions. Process understanding was also inconsistent, with some students describing basic problem solving processes, but most students unable to describe how tools could support open-ended inquiry. Barriers to effective problem solving were identified in the study. Personal barriers included naive epistemologies, while environmental barriers included a

  16. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    Science.gov (United States)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.

  17. MRI Verification of a Case of Huge Infantile Rhabdomyoma

    Science.gov (United States)

    Ramadani, Naser; Kreshnike, Kreshnike Dedushi; Muçaj, Sefedin; Kabashi, Serbeze; Hoxhaj, Astrit; Jerliu, Naim; Bejiçi, Ramush

    2016-01-01

    Introduction: Cardiac rhabdomyoma is a type of benign myocardial tumor and the most common fetal cardiac tumor. Cardiac rhabdomyomas are usually detected before birth or during the first year of life. They account for over 60% of all primary cardiac tumors. Case report: A 6-month-old child with coughing and obstructed breathing was hospitalized in the Pediatric Clinic in UCCK, Pristina. Difficulty in breathing was noted and a pathological heart murmur was noticed by the pediatrician. On echocardiography, a tumoral mass of 56 × 54 mm was seen at the posterior and apico-lateral part of the left ventricle; it impeded the contractions of the left ventricle, also involved the left ventricular wall, and was not vascularized. The right ventricle was deformed and, with the shifting of the SIV to the right, contractility was preserved. The aorta, the left arch and the AP were normal with laminar circulation. The pericardium appeared free. Radiography of the thoracic organs was performed; it showed cardiomegaly and a pronounced bronchovascular pattern. The workup was completed with MRI, which showed cardiomegaly due to a large tumoral mass lesion (60 × 34 mm) involving the lateral wall of the left ventricle. It was isointense to muscle on T1W images and markedly hyperintense on T2W images. There were a few septa or band-like hypointensities within the lesion. On the postcontrast study it showed avid enhancement. The left ventricular volume was decreased. Mild pericardial effusion was also noted. Surgical intervention was performed, and the histopathological diagnosis was a huge infantile rhabdomyoma. Conclusion: In most cases no treatment is required and these lesions regress spontaneously. Patients with left ventricular outflow tract obstruction or refractory arrhythmias respond well to surgical excision. Rhabdomyomas are frequently diagnosed by means of fetal echocardiography during the prenatal period. PMID:27147810

  18. Huge increases in bacterivores on freshly killed barley roots

    DEFF Research Database (Denmark)

    Christensen, S.; Griffiths, B.; Ekelund, Flemming

    1992-01-01

    Adding fresh roots to intact soil cores resulted in marked increases in microbial and microfaunal activity at the resource islands. Microbial activity increased in two phases following root addition. Respiratory activity and concentration of respiratory enzyme (dehydrogenase) in soil adhering to ...

  19. Communication, Control, and Computer Access for Disabled and Elderly Individuals. ResourceBook 4: Update to Books 1, 2, and 3.

    Science.gov (United States)

    Borden, Peter A., Ed.; Vanderheiden, Gregg C., Ed.

    This update to the three-volume first edition of the "Rehab/Education ResourceBook Series" describes special software and products pertaining to communication, control, and computer access, designed specifically for the needs of disabled and elderly people. The 22 chapters cover: speech aids; pointing and typing aids; training and communication…

  20. Computational Resources for GTL

    Energy Technology Data Exchange (ETDEWEB)

    Herbert M. Sauro

    2007-12-18

    This final report summarizes the work conducted under our three-year DOE GTL grant ($459,402). The work involved a number of areas, including standardization, the Systems Biology Workbench, visual editors, collaboration with other groups, and the development of new theory and algorithms. Our work has played a key part in helping to further develop SBML, the de facto standard for systems biology model exchange, and SBGN, the developing standard for the visual representation of biochemical models. Our work has also made significant contributions to developing SBW, the Systems Biology Workbench, which is now very widely used in the community (roughly 30 downloads per day for the last three years, which equates to about 30,000 downloads in total). We have also used the DOE funding to collaborate extensively with nine different groups around the world. Finally, we have developed new methods to reduce model size which are now used by all the major simulation packages, including Matlab. All in all, we consider the last three years to be highly productive and influential in the systems biology community. The project resulted in 16 peer-reviewed publications.

  1. Pulmonary hypertension with a huge thrombosis in main stem of pulmonary artery

    Institute of Scientific and Technical Information of China (English)

    杨萍; 曾红; 孟繁波; 赵林阳

    2001-01-01

    A huge thrombosis in the main stem of the pulmonary artery (PA) with pulmonary hypertension has rarely been reported. We present two cases diagnosed and treated in our hospital. One patient suffered from polyarteritis, with a huge thrombus in the PA revealed at autopsy. The second had congenital heart disease with a patent ductus arteriosus; the huge thrombus was found on echocardiography and extirpated at surgery.

  2. Efficacy of hepatic resection for huge (≥ 10 cm) hepatocellular carcinoma: good prognosis associated with the uninodular subtype.

    Science.gov (United States)

    Zhu, Shao-Liang; Chen, Jie; Li, Hang; Li, Le-Qun; Zhong, Jian-Hong

    2015-01-01

    The value of hepatic resection (HR) for huge hepatocellular carcinoma (HCC) (≥ 10 cm in diameter) remains controversial. The aim of this study was to evaluate the efficacy of HR for patients with huge HCC. A total of 739 patients with huge HCC (≥ 10 cm in diameter; huge HCC group, n = 244) or small HCC (< 10 cm in diameter; small HCC group) were analyzed, and prognostic factors for huge HCC were identified based on Cox regression analyses. The hospital mortality of the two groups was similar (P = 0.252). The 5-year OS of the huge HCC group and the small HCC group was 30.3% and 51.9%, respectively. Uninodular huge HCC had a significantly higher 5-year OS (50.6%) than multinodular huge HCC (26.9%) (P = 0.016). Multivariate analysis revealed that uninodular huge HCC and absence of PVTT independently predicted better OS for huge HCC patients. HR is a safe and effective approach for the treatment of huge HCC, especially for the uninodular subtype.

  3. Building Low Cost Cloud Computing Systems

    Directory of Open Access Journals (Sweden)

    Carlos Antunes

    2013-06-01

    Full Text Available Current models of cloud computing are based on megalomaniac hardware solutions whose implementation and maintenance are unaffordable to the majority of service providers. The use of jail services is an alternative to current virtualization-based models of cloud computing. Models based on jail environments, instead of the virtualization systems currently used, can provide huge gains in the optimization of hardware resources at the computation level as well as in storage and energy consumption. This paper addresses the practical implementation of jail environments in real scenarios, which makes it possible to identify the areas where their application is relevant and will make the redefinition of the models currently defined for cloud computing inevitable. In addition, it will bring new opportunities for the development of support features for jail environments in the majority of operating systems.

  4. Distributed data organization and parallel data retrieval methods for huge laser scanner point clouds

    Science.gov (United States)

    Hongchao, Ma; Wang, Zongyue

    2011-02-01

    This paper proposes a novel method for distributed data organization and parallel data retrieval from huge volume point clouds generated by airborne Light Detection and Ranging (LiDAR) technology under a cluster computing environment, in order to allow fast analysis, processing, and visualization of the point clouds within a given area. The proposed method is suitable for both grid and quadtree data structures. As for distribution strategy, cross distribution of the dataset would be more efficient than serial distribution in terms of non-redundant datasets, since a dataset is more uniformly distributed in the former arrangement. However, redundant datasets are necessary in order to meet the frequent need of input and output operations in multi-client scenarios: the first copy would be distributed by a cross distribution strategy while the second (and later) would be distributed by an iterated exchanging distribution strategy. Such a distribution strategy would distribute datasets more uniformly to each data server. In data retrieval, a greedy algorithm is used to allocate the query task to a data server, where the computing load is lightest if the data block needing to be retrieved is stored among multiple data servers. Experiments show that the method proposed in this paper can satisfy the demands of frequent and fast data query.
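
    The greedy retrieval step described above can be illustrated with a small sketch. The snippet below is a hypothetical Python illustration, not the authors' implementation: each data block may be replicated on several data servers, and an incoming query for a block is assigned to the replica-holding server with the lightest current load. Block names, server names, and the load bookkeeping are assumptions made only for the example.

```python
from collections import defaultdict

# Hypothetical replica map: block id -> servers holding a copy of that block.
replicas = {
    "block_00": ["srv_a", "srv_b"],
    "block_01": ["srv_b", "srv_c"],
    "block_02": ["srv_a", "srv_c"],
}

# Current outstanding work per server (e.g. queued queries).
load = defaultdict(int)

def assign_query(block_id):
    """Greedily send the query for block_id to the least-loaded replica holder."""
    candidates = replicas[block_id]
    target = min(candidates, key=lambda srv: load[srv])
    load[target] += 1          # account for the newly queued query
    return target

# Example: a burst of queries in a multi-client scenario.
for blk in ["block_00", "block_01", "block_02", "block_00", "block_01"]:
    print(blk, "->", assign_query(blk))
```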

  5. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  7. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  8. Huge pelvi-abdominal malignant inflammatory myofibroblastic tumor with rapid recurrence in a 14-year-old boy

    Institute of Scientific and Technical Information of China (English)

    Chia-Hsun; Lu; Hsuan-Ying; Huang; Han-Koo; Chen; Jiin-Haur; Chuang; Shu-Hang; Ng; Sheung-Fat; Ko

    2010-01-01

    Inflammatory myofibroblastic tumor(IMT) is an uncommon benign neoplasm with locally aggressive behavior but malignant change is rare.We report an unusual case of pelvic-abdominal inflammatory myofibroblastic tumor with malignant transformation in a 14-year-old boy presenting with abdominal pain and 9 kg body weight loss in one month.Computed tomography revealed a huge pelvi-abdominal mass(30 cm),possibly originating from the pelvic extraperitoneal space,protruding into the abdomen leading to upward displace...

  9. Network Resources Optimization Scheduling Model in Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    孟湘来; 马小雨

    2015-01-01

    Cloud computing servers run in differing environments, and once network resource congestion occurs, different regions use different forms of network resource scheduling. A single scheduling approach can hardly meet the complexity requirements of cloud computing networks. This paper proposes a cloud computing network planning model based on a supply-and-demand equilibrium mechanism. A quadratic weighted-average method is used to construct a time-limited network planning model for cloud computing; the model continuously adjusts the amount of network resources already assigned to each link, uses AGV control to evaluate network congestion, and analyzes the determination of cloud network equipment demand and the supply-demand balancing mechanism. An evaluation index system for cloud network congestion intensity is defined around three factors, namely the number of nodes, cost, and degree of congestion, which determines the time limits and urgency of resource delivery. Experimental results show that the congestion-relief efficiency, cost, and utility under this model are superior to those of traditional models, giving it high application value.

  10. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television as many long feared, but the computer-the ubiquitous portal of work and personal lives. At this point, the computer is almost so common we don't notice it in our view. It's difficult to envision that not that long ago it was a gigantic, room-sized structure only to be accessed by a few inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little-more noted than a toaster. These dramati

  11. Fibrous dysplasia of the rib presenting as a huge chest wall tumor: report of a case.

    Science.gov (United States)

    Chang, B S; Lee, S C; Harn, H J

    1994-07-01

    Fibrous dysplasia of the rib is not uncommon, but is rarely demonstrated as a huge chest wall mass with severe clinical symptoms. A 59-year-old patient, presenting with a huge, rapidly expanding chest wall tumor compressing the lung, liver and heart accompanied by chest pain and dyspnea, is reported. The tumor was successfully excised by local radical resection.

  12. Huge van Bordeeus: a knight of Charlemagne on an adventure in the East

    NARCIS (Netherlands)

    Lens, Maria Johanna

    2004-01-01

    'Huge van Bordeeus' is the doctoral dissertation of Maria Lens, in which she reports on her research into the Middle Dutch transmission of a French text, 'Huon de Bordeaux', about the knight Huge van Bordeeus. This fourteenth-century knight, a vassal of Charlemagne, must obtain the beard and four teeth

  13. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    Energy Technology Data Exchange (ETDEWEB)

    Clouse, C. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Edwards, M. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCoy, M. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, M. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Verdon, C. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  14. Transarterial chemoembolization for huge hepatocellular carcinoma with diameter over ten centimeters: a large cohort study.

    Science.gov (United States)

    Xue, Tongchun; Le, Fan; Chen, Rongxin; Xie, Xiaoying; Zhang, Lan; Ge, Ningling; Chen, Yi; Wang, Yanhong; Zhang, Boheng; Ye, Shenglong; Ren, Zhenggang

    2015-03-01

    Patients with huge hepatocellular carcinoma >10 cm in diameter represent a special subgroup for treatment. To date, there are few data and little consensus on treatment strategies for huge hepatocellular carcinoma. In this study, we summarized the effects and safety of transarterial chemoembolization for huge hepatocellular carcinoma. A retrospective study was performed based on a large cohort of patients (n = 511) with huge hepatocellular carcinoma who underwent serial transarterial chemoembolization between January 2008 and December 2011 and were followed up until March 2013. The median survival time was 6.5 months. On multivariate analysis, Child-Pugh class (A versus B) was among the independent prognostic factors. Transarterial chemoembolization offers a treatment option for huge hepatocellular carcinoma and is recommended as a component of combination therapy. In addition, patients with good liver function and low alpha-fetoprotein levels may acquire greater survival benefits from transarterial chemoembolization.

  15. Information Resources Construction of Digital Library Based on Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    欧裕美

    2014-01-01

    This paper introduces the current status of information resources construction in digital libraries, expounds the massive information storage technology of cloud computing, discusses the changes that cloud computing brings to the construction of digital library information resources, and probes into the problems faced by information resources construction of digital libraries based on cloud computing.

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much large at roughly 11 MB per event of RAW. The central collisions are more complex and...

  17. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  18. Analysis and Application of the Integration of Heterogeneous Resources in Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    吴金龙

    2012-01-01

    Cloud computing is an Internet-based computing model with mass participation. In view of the hardware and software status of the Shanghai information disaster-recovery center of the State Grid Corporation, as well as the practical problems faced in its disaster-recovery business, an application scheme is proposed that uses architecture as the technical means and specifications as the management principle to comprehensively address the integration of heterogeneous resources. On the basis of introducing the construction of a cloud computing heterogeneous resource integration layer, key issues such as the resource model, resource access specifications, and the interface to the operation and maintenance management system are described. A cloud computing resource management platform was designed and built, which now comprehensively manages the existing minicomputers, servers, and storage devices, and has achieved notable economic and management benefits. Practice also shows that optimizing and integrating hardware and software resources is only one part of information integration; the role of hardware and software integration alone is limited, and only when it is organically combined with application software integration and the dynamic deployment of resources, with mutual support, can the maximum benefits be obtained.

  19. Quality Assured Optimal Resource Provisioning and Scheduling Technique Based on Improved Hierarchical Agglomerative Clustering Algorithm (IHAC

    Directory of Open Access Journals (Sweden)

    A. Meenakshi

    2016-08-01

    Full Text Available Resource allocation is the task of assigning available resources to different uses; within an entire economy, resources can be allocated by various means, such as markets or central planning. Cloud computing has become a new-age technology with huge potential in enterprises and markets, making it possible to access applications and associated data from anywhere. The fundamental motive of resource allocation is to allot the available resources in the most effective manner. In the initial phase, a representative resource-usage distribution for a group of nodes with identical usage patterns is evaluated as a resource bundle, which can easily be employed to locate a group of nodes fulfilling a standard criterion. In this paper, an innovative clustering-based resource aggregation scheme, the Improved Hierarchical Agglomerative Clustering algorithm (IHAC), is introduced to realize a compact representation of a set of identically behaving nodes for scalability. In the subsequent phase, concerned with the dynamic resource allocation procedure, a hybrid optimization technique is brought in. This technique is devised for scheduling functions to cloud resources while duly considering both financial and evaluation expenses. The efficiency of the resource allocation system is assessed by means of several parameters such as reliability, reusability, and other metrics. The optimal path choice is the consequence of the hybrid optimization approach, and the technique allocates the available resources along the optimal path.
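
    As an illustration of the clustering-based aggregation idea (not the IHAC algorithm itself, whose details are not given in this record), the sketch below clusters per-node resource-usage vectors with standard hierarchical agglomerative clustering from SciPy and summarizes each cluster by its mean usage as a "resource bundle". The node names, usage matrix, and distance threshold are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-node usage vectors: [cpu_util, mem_util, net_util] in [0, 1].
nodes = ["n01", "n02", "n03", "n04", "n05", "n06"]
usage = np.array([
    [0.82, 0.70, 0.10],
    [0.80, 0.75, 0.12],   # similar to n01 -> same bundle
    [0.15, 0.20, 0.85],
    [0.18, 0.25, 0.80],   # similar to n03 -> same bundle
    [0.50, 0.55, 0.50],
    [0.48, 0.52, 0.55],
])

# Average-linkage agglomerative clustering; cut the dendrogram at a distance threshold.
Z = linkage(usage, method="average", metric="euclidean")
labels = fcluster(Z, t=0.2, criterion="distance")

# Summarize each cluster as a "resource bundle": the mean usage of its members.
for c in sorted(set(labels)):
    members = [n for n, l in zip(nodes, labels) if l == c]
    bundle = usage[labels == c].mean(axis=0)
    print(f"bundle {c}: members={members}, mean usage={bundle.round(2)}")
```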

  20. Resource Scheduling Strategy Considering SLA and QoS in a Cloud Environment

    Institute of Scientific and Technical Information of China (English)

    朱倩

    2016-01-01

    Cloud computing is a currently popular service-based distributed computing technology that does not require users to pay attention to the underlying system implementation. Efficient resource allocation can reduce the excessive waste of resources and increase user satisfaction by reducing cost, thereby improving system performance. Using virtualization of resources on a cloud computing platform, this paper realizes accurate prediction of system performance requirements and, from a service point of view, discusses a resource scheduling strategy based on Virtual Machines (VM) that takes the Service Level Agreement (SLA) and Quality of Service (QoS) into account in a cloud environment. Simulation results show that the scheduling strategy is an effective means of improving the utilization of system resources and has practical value.

  1. Computer-Aided Prediction Model Design for Human Resources Planning

    Institute of Scientific and Technical Information of China (English)

    俞明; 余浩洋

    2013-01-01

    Starting from the current situation of human resource planning, and combining the content and steps of human resource planning, a computer-aided prediction model is designed and its basic structure and mathematical model are explained. An application example of human resources planning is designed, total and classified planning is carried out, the planning results are analyzed, and solution strategies are proposed.

  2. Exploring Cloud Computing for Large-scale Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
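
    The cross-institution resource selection described above can be sketched, under assumptions, as a simple cost model: pick the site whose estimated staging-plus-compute time for a job is smallest. This is not the authors' infrastructure; the site names, runtime model, and all numbers below are hypothetical and only illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_cores: int
    throughput_per_core: float   # effective work units per second per core (arbitrary units)
    ingest_gbps: float           # data-transfer bandwidth into the site

def estimated_runtime(site: Site, work_units: float, data_gb: float, cores_needed: int) -> float:
    """Rough completion-time model: data staging time plus compute time."""
    if site.free_cores < cores_needed:
        return float("inf")                        # site cannot host the job
    transfer = data_gb * 8.0 / site.ingest_gbps    # seconds to stage the input data
    compute = work_units / (cores_needed * site.throughput_per_core)
    return transfer + compute

def select_site(sites, work_units, data_gb, cores_needed):
    """Greedily choose the site with the smallest estimated completion time."""
    return min(sites, key=lambda s: estimated_runtime(s, work_units, data_gb, cores_needed))

sites = [
    Site("univ_a_hpc", free_cores=512, throughput_per_core=2.0, ingest_gbps=10.0),
    Site("lab_b_cloud", free_cores=128, throughput_per_core=1.5, ingest_gbps=40.0),
]
best = select_site(sites, work_units=1e6, data_gb=2000.0, cores_needed=64)
print("selected:", best.name)
```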

  3. DPSO Resource Load Balancing Algorithm in a Cloud Computing Environment

    Institute of Scientific and Technical Information of China (English)

    冯小靖; 潘郁

    2013-01-01

    Load balancing is one of the hot issues in cloud computing research. A discrete particle swarm optimization (DPSO) algorithm is used to study load balancing in a cloud computing environment. Given that resource demand changes dynamically in cloud computing and that the requirements placed on resource-node servers are low, each resource management node is treated as a node of the network topology, a corresponding resource-task allocation model is established, and DPSO is applied to solve it and achieve resource load balancing. Verification shows that the algorithm improves resource utilization and the load balance of cloud computing resources.
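
    The record does not give the authors' exact update rules, so the following is only a minimal, hypothetical sketch of a discrete PSO for task-to-node assignment: a particle is an assignment vector, the fitness is the spread between the most and least loaded nodes, and each component is probabilistically kept, copied from the particle's personal best, copied from the global best, or randomized. The task costs, node count, and probability parameters are assumptions for the example.

```python
import random

TASKS = [4, 2, 7, 1, 5, 3, 6, 2]   # hypothetical task costs
NODES = 3                          # number of resource nodes

def imbalance(assign):
    """Fitness: difference between heaviest and lightest node load (lower is better)."""
    loads = [0.0] * NODES
    for task_cost, node in zip(TASKS, assign):
        loads[node] += task_cost
    return max(loads) - min(loads)

def dpso(n_particles=20, iters=200, w=0.4, c1=0.3, c2=0.2, seed=1):
    rng = random.Random(seed)
    swarm = [[rng.randrange(NODES) for _ in TASKS] for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=imbalance)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(len(TASKS)):
                r = rng.random()
                if r < w:
                    continue                      # keep current component
                elif r < w + c1:
                    p[d] = pbest[i][d]            # move toward personal best
                elif r < w + c1 + c2:
                    p[d] = gbest[d]               # move toward global best
                else:
                    p[d] = rng.randrange(NODES)   # random exploration
            if imbalance(p) < imbalance(pbest[i]):
                pbest[i] = p[:]
                if imbalance(p) < imbalance(gbest):
                    gbest = p[:]
    return gbest, imbalance(gbest)

assignment, fit = dpso()
print("assignment:", assignment, "imbalance:", fit)
```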

  4. Cloud Computing Technology Applied in the Human Resource Management System

    Institute of Scientific and Technical Information of China (English)

    王燕

    2013-01-01

    With the development of technology and the arrival of the knowledge economy era, enterprise managers are gradually realizing that the informatization of human resource management will become an inevitable trend. As a new generation of resource sharing and utilization models, cloud computing is characterized by on-demand self-service and measurable services. Introducing cloud computing technology into human resource management systems can have a significant impact on talent recruitment, performance management, and compensation management, making human resource management more streamlined, standardized, and transparent.

  5. Elastic resource adjustment method for cloud computing data centers

    Institute of Scientific and Technical Information of China (English)

    申京; 吴晨光; 郝洋; 殷波; 蔺艳斐

    2015-01-01

    To formulate resource purchase plans for services with a variety of service-quality requirements, this paper proposes an application-performance-oriented elastic resource adjustment method for cloud computing, in which the platform-as-a-service provider and the infrastructure-as-a-service provider sign a resource allocation agreement based on a Service-Level Agreement. Using an automatic scaling algorithm, the method adjusts virtual machine resources at the vertical level in response to fluctuations in load demand, so that the amount of allocated resources is adjusted dynamically to meet the service-level needs of the application and the utilization of cloud computing resources is optimized. Simulation results are provided to show the effectiveness of the proposed method.
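
    The record does not spell out the scaling rule, so the sketch below is only a hypothetical threshold-based vertical scaler, not the paper's algorithm: it grows or shrinks a VM's CPU allocation between assumed SLA-derived bounds according to recent utilization.

```python
# Hypothetical SLA-derived bounds and thresholds for one VM.
MIN_VCPUS, MAX_VCPUS = 2, 16
SCALE_UP_UTIL, SCALE_DOWN_UTIL = 0.80, 0.30

def vertical_scale(current_vcpus: int, recent_utilization: float) -> int:
    """Return the new vCPU allocation for a VM given its recent average utilization."""
    if recent_utilization > SCALE_UP_UTIL and current_vcpus < MAX_VCPUS:
        return min(current_vcpus * 2, MAX_VCPUS)      # grow aggressively under pressure
    if recent_utilization < SCALE_DOWN_UTIL and current_vcpus > MIN_VCPUS:
        return max(current_vcpus - 1, MIN_VCPUS)      # shrink conservatively when idle
    return current_vcpus                              # within the comfort band: no change

# Example trace of utilization samples driving the adjustment loop.
vcpus = 4
for util in [0.85, 0.90, 0.60, 0.25, 0.20, 0.75]:
    vcpus = vertical_scale(vcpus, util)
    print(f"util={util:.2f} -> vcpus={vcpus}")
```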

  6. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    Energy Technology Data Exchange (ETDEWEB)

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to these 3 tasks: (1) High-fidelity, large-scale modeling of power system dynamics; (2) Statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) Development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

  7. Synovial sarcoma presenting with huge mediastinal mass: a case report and review of literature

    Science.gov (United States)

    2013-01-01

    Background Synovial sarcoma presenting in the mediastinum is exceedingly rare. Furthermore, data addressing optimal therapy are limited. Herein we present a case in which an attempt was made to downsize the tumor to a resectable state with chemotherapy. Case presentation A 32-year-old female presented with massive pericardial effusion and an unresectable huge mediastinal mass. Computed tomography-guided biopsy with adjunctive immunostains and molecular studies confirmed a diagnosis of synovial sarcoma. Following three cycles of combination ifosfamide and doxorubicin chemotherapy, no response was demonstrated. The patient refused further therapy and had progression of her disease 4 months following the last cycle. Conclusion Synovial sarcoma presenting as an unresectable mediastinal mass carries a poor prognosis. To the best of our knowledge there are only four previous reports in which primary chemotherapy was employed; unfortunately, none of these cases had subsequent complete surgical resection. Identification of the best treatment strategy for patients with unresectable disease is warranted. Our case may be of benefit to medical oncologists and thoracic surgeons who might be faced with this unique and exceedingly rare clinical scenario. PMID:23800262

  8. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    Science.gov (United States)

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.

  9. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  14. Huge-scale molecular dynamics simulation of multibubble nuclei

    KAUST Repository

    Watanabe, Hiroshi

    2013-12-01

    We have developed molecular dynamics codes for a short-range interaction potential that adopt both the flat-MPI and MPI/OpenMP hybrid parallelizations on the basis of a full domain decomposition strategy. Benchmark simulations involving up to 38.4 billion Lennard-Jones particles were performed on Fujitsu PRIMEHPC FX10, consisting of 4800 SPARC64 IXfx 1.848 GHz processors, at the Information Technology Center of the University of Tokyo, and a performance of 193 teraflops was achieved, which corresponds to a 17.0% execution efficiency. Cavitation processes were also simulated on PRIMEHPC FX10 and SGI Altix ICE 8400EX at the Institute of Solid State Physics of the University of Tokyo, which involved 1.45 billion and 22.9 million particles, respectively. Ostwald-like ripening was observed after the multibubble nuclei. Our results demonstrate that direct simulations of multiscale phenomena involving phase transitions from the atomic scale are possible and that the molecular dynamics method is a promising method that can be applied to petascale computers. © 2013 Elsevier B.V. All rights reserved.

  15. The Influence Of Quality Services And The Human Resources Development To User Satisfaction For Accounting Computer Study At Local Government Officials Depok West Java

    Directory of Open Access Journals (Sweden)

    Asyari

    2015-08-01

    Full Text Available The benefit felt directly by customers when using a computer accounting program becomes an expectation that users place on the product of an accounting information system. The existence of an accounting system provides convenience in processing accounting data into financial statement output, and investors and the public can easily read profit and earnings results thanks to the use of accounting software. This study seeks to clarify the influence of service quality and human resource development on accounting computer user satisfaction. The object of the research is the environment of local government officials in Depok, West Java. The results show that service quality affects user satisfaction, and employee development has a significant effect on user satisfaction.

  16. Multi-source information resources management in a cloud computing environment

    Institute of Scientific and Technical Information of China (English)

    徐达宇; 杨善林; 罗贺

    2012-01-01

    To realize effective management of multi-source information resources in a dynamic cloud computing environment, and to ensure efficient system operation, high-quality resource sharing, and real-time service provision by cloud computing systems, the key problems and challenges were identified on the basis of a survey of research results on multi-source information resource cataloguing formats and description languages, discovery and matching mechanisms, dynamic organization and allocation methods, and real-time monitoring. Research prospects for multi-source information resource management in cloud computing are given, a multi-source information resource management framework for cloud computing is constructed, and its application in a manufacturing context is discussed.

  17. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identifying possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transfer¬ring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  19. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  20. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  2. COMPUTING

    CERN Document Server

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  4. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    Science.gov (United States)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width 340 and circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient

  5. Applied Research on Computer Applications in Human Resource Management Systems

    Institute of Scientific and Technical Information of China (English)

    葛航

    2014-01-01

    As information technology continues to develop, enterprises in all industries are increasingly using computer technology to manage their business. Human resource management, as a foundational management module for enterprise development, has also seen great changes in management efficiency. This paper analyzes the current status of human resource management in China and describes the application of computer technology in human resource management, in the hope of contributing to the development of enterprises.

  6. Huge hepatocellular carcinoma with multiple intrahepatic metastases: An aggressive multimodal treatment

    Directory of Open Access Journals (Sweden)

    Satoshi Yasuda

    2015-01-01

    Conclusion: Multimodal treatment involving hepatectomy and TACE might be a good treatment strategy for patients with huge HCC with multiple intrahepatic metastases if the tumors are localized in the liver without distant or peritoneal metastasis.

  7. A Scheme for Collecting and Accounting Cloud Computing Resource Usage

    Institute of Scientific and Technical Information of China (English)

    苏宇; 沈苏彬

    2015-01-01

    With the widespread adoption and application of cloud computing technology, billing is becoming an important function in the commercialization of cloud computing. Infrastructure as a Service (IaaS) is the most basic cloud service, providing users with infrastructure resources. Billing for IaaS clouds first requires solving how to collect and account for the usage of cloud resources, and conventional collection approaches are not suited to cloud computing environments; studying techniques and implementation methods for collecting and accounting resource usage in cloud environments therefore has practical value. By studying resource collection methods in cloud computing environments, the technical requirements of cloud billing and the state of research on cloud resource accounting at home and abroad are analyzed. An inter-module communication scheme based on asynchronous message passing is selected, and a mechanism is designed and implemented that decouples the modules, improves system scalability and stability, and reduces system performance overhead. Tests of a prototype system for resource usage collection and accounting built on the OpenStack platform show that asynchronous message passing correctly collects and transfers data between modules, with low system overhead and good stability and scalability.
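
    As a minimal illustration of the decoupling idea (a hypothetical sketch, not the paper's OpenStack-based implementation), the snippet below uses a thread-safe queue as an asynchronous message channel between a usage collector and an accounting module, so neither side blocks on the other. The tenant names, sample format, and price are assumptions made for the example.

```python
import queue
import threading
import time

usage_bus = queue.Queue()          # asynchronous message channel between modules

def collector(samples):
    """Collector module: publishes resource-usage samples without waiting for accounting."""
    for sample in samples:
        usage_bus.put(sample)      # non-blocking hand-off to the accounting module
        time.sleep(0.01)           # simulate the sampling interval
    usage_bus.put(None)            # sentinel: no more samples

def accountant(price_per_cpu_hour=0.05):
    """Accounting module: consumes samples and accumulates charges per tenant."""
    bill = {}
    while True:
        sample = usage_bus.get()
        if sample is None:
            break
        tenant, cpu_hours = sample["tenant"], sample["cpu_hours"]
        bill[tenant] = bill.get(tenant, 0.0) + cpu_hours * price_per_cpu_hour
    print("accumulated charges:", bill)

samples = [
    {"tenant": "alice", "cpu_hours": 2.0},
    {"tenant": "bob", "cpu_hours": 1.5},
    {"tenant": "alice", "cpu_hours": 0.5},
]
threads = [threading.Thread(target=collector, args=(samples,)),
           threading.Thread(target=accountant)]
for t in threads: t.start()
for t in threads: t.join()
```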

  8. A huge ovarian mucinous cystadenoma causing virilization, preterm labor, and persistent supine hypotensive syndrome during pregnancy.

    Science.gov (United States)

    Kucur, Suna Kabil; Acar, Canan; Temizkan, Osman; Ozagari, Aysim; Gozukara, Ilay; Akyol, Atif

    2016-01-01

    Mucinous cystadenoma (MC) of the ovary is a unilateral, multilocular, cystic, benign epithelial tumor. Thought to be hormone responsive, MC can reach huge sizes during pregnancy. Aortocaval compression is common during pregnancy, especially when the pregnant woman is in the supine position; however, the compression usually resolves with a change in position. The authors report the first case of a huge mucinous cystadenoma of the ovary complicating pregnancy and causing virilization, premature labor, and persistent supine hypotensive syndrome.

  9. Robot-assisted laparoscopic resection of a huge pelvic tumor: A case report.

    Science.gov (United States)

    Jia, Zhuomin; Lyu, Xiangjun; Xu, Yong; Leonardi, Rosario; Zhang, Xu

    2016-07-04

    Traditional open surgery for the treatment of a huge tumor in the narrow space of the pelvic cavity, in close proximity to pelvic organs and neurovascular structures, is very difficult and challenging. We report a case of a huge neurilemmoma operated on using robot-assisted laparoscopy. Pre-operative interventional embolization was used to control the blood supply of the tumor because MRI showed that the tumor had an abundant blood supply.

  10. SLA-Based Cloud Computing Resource Scheduling Mechanism

    Institute of Scientific and Technical Information of China (English)

    雷洁; 鄂雪妮; 桂雁军

    2014-01-01

    To address the shortcomings of existing IaaS-layer task scheduling and resource load-balancing algorithms, an SLA-based cloud computing resource scheduling mechanism is discussed, building on SLA management and resource scheduling theory. An SLA-based resource scheduling framework is proposed, the SLA management mechanism oriented to IaaS resource service providers and its contents are discussed, and a QoS assurance mechanism based on SLA management is designed. The SLA for a service is guaranteed through the interaction of the load-balancing module and the task scheduling module, so that the IaaS service provider's gain is maximized while resource utilization is also maximized under QoS constraints.
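
    As a rough illustration of the kind of QoS-constrained placement the abstract describes (this is not the paper's framework; the task tuple layout, host speeds, and deadline check are assumptions), the sketch below greedily assigns each task to the host that can finish it earliest while still meeting its deadline, and rejects placements that would break the SLA.

      def sla_schedule(tasks, hosts):
          """tasks: list of (task_id, work_units, deadline_s); hosts: dict host -> speed (units/s).
          Returns a placement that respects each task's deadline where possible."""
          load = {h: 0.0 for h in hosts}           # accumulated busy time per host
          placement = {}
          for task_id, work, deadline in sorted(tasks, key=lambda t: t[2]):
              best = None
              for h, speed in hosts.items():
                  finish = load[h] + work / speed
                  if finish <= deadline and (best is None or finish < load[best] + work / hosts[best]):
                      best = h
              if best is None:
                  placement[task_id] = None        # SLA would be violated: reject / escalate
              else:
                  load[best] += work / hosts[best]
                  placement[task_id] = best
          return placement

      print(sla_schedule([("t1", 10, 5), ("t2", 20, 8), ("t3", 5, 2)],
                         {"h1": 4.0, "h2": 2.0}))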

  11. Graduate Enrollment Increases in Science and Engineering Fields, Especially in Engineering and Computer Sciences. InfoBrief: Science Resources Statistics.

    Science.gov (United States)

    Burrelli, Joan S.

    This brief describes graduate enrollment increases in the science and engineering fields, especially in engineering and computer sciences. Graduate student enrollment is summarized by enrollment status, citizenship, race/ethnicity, and fields. (KHR)

  12. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems

    Science.gov (United States)

    Li, Ying

    2016-09-01

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  13. The HUGE formula (hematocrit, urea and gender): association with cardiovascular risk.

    Science.gov (United States)

    Robles, N R; Felix, F J; Fernandez-Berges, D; Perez-Castán, J; Zaro, M J; Lozano, L; Alvarez-Palacios, P; Garcia-Trigo, A; Tejero, V; Morcillo, Y; Hidalgo, A B

    2013-07-01

    To evaluate the relationship between chronic renal failure (CRF), defined through the HUGE (hematocrit, urea and gender) formula score, and the patient's cardiovascular risk, measured through antecedents of cardiovascular disease such as ischemic cardiopathy, cerebrovascular disease and peripheral arterial disease. The sample consisted of 2,831 subjects. Mean age was 51.2±14.7 years and 53.5% were female. Serum creatinine, urea, hematocrit and 24h proteinuria were analyzed. The HUGE score was calculated from gender, urea and hematocrit. GFR was estimated from uncalibrated serum creatinine using the abbreviated Modification of Diet in Renal Disease equation (MDRD-4). UAE was measured in a first morning urine sample. Using the HUGE formula, 2.2% (n = 61) of subjects had CRF; of them, 12 (19.7%) had a history of cardiovascular disease. Among subjects without CRF (n = 2,770), 194 (7.0%) had a history of previous cardiovascular disease. The odds ratio associated with the HUGE definition of CRF was 3.25 (p = 0.001, Mantel-Haenszel test). CRF was associated with higher pulse pressure (PP) and increased urinary albumin excretion. A significant cardiovascular risk was associated with the diagnosis of CRF through the HUGE formula, and this relationship was stronger than that obtained using MDRD-estimated GFR despite the larger sample. The HUGE formula seems to be a useful tool for diagnosing CRF and evaluating the cardiovascular risk of these patients.
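
    The HUGE score itself is a linear combination of hematocrit, serum urea, and a male-sex term, with a positive score flagging chronic renal failure. The sketch below only illustrates the shape of such a screening rule; the coefficients are illustrative placeholders and must be checked against the published HUGE formula (Robles et al.) before any real use.

      def huge_score(hematocrit_pct, urea_mg_dl, male,
                     a=2.505458, b=-0.264418, c=0.118100, d=1.383960):
          # Coefficients above are placeholders for illustration only; verify against the
          # published HUGE formula before any clinical use.
          return a + b * hematocrit_pct + c * urea_mg_dl + (d if male else 0.0)

      def has_crf(hematocrit_pct, urea_mg_dl, male):
          return huge_score(hematocrit_pct, urea_mg_dl, male) > 0.0

      print(has_crf(32.0, 80.0, male=True))   # example values, not patient data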

  14. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

    the learning game “Global Conflicts: Latin America” as a resource into the teaching and learning of a course involving the two subjects “English language learning” and “Social studies” at the final year in a Danish high school. The study adapts an explorative research design approach and investigates...

  15. Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles

    Directory of Open Access Journals (Sweden)

    Kearney David

    2007-01-01

    Full Text Available We show how the limited electrical power and FPGA compute resources available in a swarm of small UAVs can be shared by moving FPGA tasks from one UAV to another. A software and hardware infrastructure that supports the mobility of embedded FPGA applications on a single FPGA chip and across a group of networked FPGA chips is an integral part of the work described here. It is shown how to allocate a single FPGA's resources at run time and to share a single device through the use of application checkpointing, a memory controller, and an on-chip run-time reconfigurable network. A prototype distributed operating system is described for managing mobile applications across the swarm based on the contents of a fuzzy rule base. It can move applications between UAVs in order to equalize power use or to enable the continuous replenishment of fully fueled planes into the swarm.
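
    The distributed operating system above decides task moves from a fuzzy rule base; as a much-simplified stand-in for that idea (the field names, battery threshold, and single-rule policy here are hypothetical, not the paper's rule base), the sketch below moves the heaviest FPGA task off any UAV whose battery falls below a threshold and onto the peer with the most spare power.

      def rebalance(swarm, low_battery=0.25):
          """swarm: dict uav_id -> {'battery': fraction, 'tasks': {task_id: power_draw}}.
          Returns a list of (task_id, from_uav, to_uav) migrations."""
          moves = []
          for uav, state in swarm.items():
              if state["battery"] >= low_battery or not state["tasks"]:
                  continue
              task = max(state["tasks"], key=state["tasks"].get)   # heaviest consumer
              donor_free = lambda u: swarm[u]["battery"] - sum(swarm[u]["tasks"].values())
              target = max((u for u in swarm if u != uav), key=donor_free, default=None)
              if target is not None and swarm[target]["battery"] > low_battery:
                  swarm[target]["tasks"][task] = state["tasks"].pop(task)
                  moves.append((task, uav, target))
          return moves

      swarm = {"uav1": {"battery": 0.15, "tasks": {"fft": 0.10, "track": 0.05}},
               "uav2": {"battery": 0.80, "tasks": {"nav": 0.05}}}
      print(rebalance(swarm))   # [('fft', 'uav1', 'uav2')]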

  16. A Gain-Computation Enhancements Resource Allocation for Heterogeneous Service Flows in IEEE 802.16 m Mobile Networks

    Directory of Open Access Journals (Sweden)

    Wafa Ben Hassen

    2012-01-01

    an access method. In the IEEE 802.16m standard, a contiguous method for subchannel construction is adopted in order to reduce OFDMA system complexity. In this context, we propose a new subchannel gain computation method that depends on the dispersion of frequency responses. This method plays a crucial role in resource management and optimization. For single-service access, we propose a dynamic resource allocation algorithm at the physical layer that aims to maximize the cell data rate while ensuring fairness among users. For heterogeneous data traffic, we study scheduling in order to provide delay guarantees to real-time services and maximize the throughput of non-real-time services while ensuring fairness to users. We compare performance with recent existing algorithms in OFDMA systems, showing that the proposed schemes provide lower complexity, higher total system capacity, and fairness among users.
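
    To make the role of dispersion in subchannel gain computation concrete, the sketch below scores each contiguous subchannel by its mean subcarrier gain penalised by the standard deviation, then hands subchannels to users round-robin in score order. It is only an illustration of the idea; the scoring weight, array shapes, and round-robin fairness rule are assumptions, not the proposed algorithm.

      import numpy as np

      def allocate(gains, n_users):
          """gains: array (n_subchannels, n_subcarriers) of channel gains.
          Returns subchannel -> user assignment favouring low-dispersion, high-mean channels."""
          score = gains.mean(axis=1) - gains.std(axis=1)     # penalise dispersion
          order = np.argsort(score)[::-1]                    # best subchannels first
          assignment = {}
          for i, sub in enumerate(order):
              assignment[int(sub)] = i % n_users             # round-robin for fairness
          return assignment

      rng = np.random.default_rng(0)
      gains = rng.rayleigh(scale=1.0, size=(8, 18))          # 8 subchannels x 18 subcarriers
      print(allocate(gains, n_users=3))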

  18. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    Science.gov (United States)

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. As the capacity of captured remote sensing images grows, with hyperspectral, high spatial resolution and high temporal resolution features, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hotspot of current image processing technology. The FFT, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. CUFFT is an FFT library that runs on the GPU, while FFTW is an FFT library developed for CPUs on the PC platform and is currently the fastest CPU-based FFT library. However, both methods share a common problem: once the available device memory or main memory is smaller than the image, out-of-memory errors or memory overflow occur when performing the image FFT. To address this problem, a Huge Remote Fast Fourier Transform (HRFFT) algorithm based on the GPU and partitioning technology is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the problem of out-of-memory errors and memory overflow is solved. Moreover, the method is validated by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the quality of the results and speeds up processing, saving computation time and achieving sound results.
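
    The key property that makes partitioning possible is that the 2-D FFT is separable: all rows can be transformed strip by strip and then all columns strip by strip, so no single step needs the whole image in memory at once. The NumPy sketch below demonstrates that decomposition on the CPU; it is not the paper's CUFFT-based HRFFT, and the strip size is arbitrary.

      import numpy as np

      def blocked_fft2(image, strip=256):
          """2-D FFT computed strip by strip: row FFTs first, then column FFTs.
          Equivalent to np.fft.fft2(image) but never transforms more than `strip`
          rows or columns at once."""
          tmp = np.empty(image.shape, dtype=np.complex128)
          for r in range(0, image.shape[0], strip):            # pass 1: rows
              tmp[r:r + strip, :] = np.fft.fft(image[r:r + strip, :], axis=1)
          out = np.empty_like(tmp)
          for c in range(0, image.shape[1], strip):            # pass 2: columns
              out[:, c:c + strip] = np.fft.fft(tmp[:, c:c + strip], axis=0)
          return out

      img = np.random.rand(512, 768)
      assert np.allclose(blocked_fft2(img), np.fft.fft2(img))

    The same two-pass structure carries over to GPU libraries, with each strip sized to fit device memory.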

  19. Offloading Method for Efficient Use of Local Computational Resources in Mobile Location-Based Services Using Clouds

    Directory of Open Access Journals (Sweden)

    Yunsik Son

    2017-01-01

    Full Text Available With the development of mobile computing, location-based services (LBSs) have been developed to provide services based on location information through communication networks or the global positioning system. In recent years, LBSs have evolved into smart LBSs, which provide many services using only location information. These include basic services such as traffic, logistic, and entertainment services. However, a smart LBS may require relatively complicated operations, which may not be effectively performed by the mobile computing system. To overcome this problem, a computation offloading technique can be used to perform certain tasks on mobile devices in cloud and fog environments. Furthermore, mobile platforms exist that provide smart LBSs. The smart cross-platform is a solution based on a virtual machine (VM) that enables compatibility of content in various mobile and smart device environments. However, owing to the nature of the VM-based execution method, the execution performance is degraded compared to that of the native execution method. In this paper, we introduce a computation offloading technique that utilizes fog computing to improve the performance of VMs running on mobile devices. We applied the proposed method to smart devices with a smart VM (SVM) and HTML5 SVM to compare their performances.
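
    A common way to frame the offloading decision evaluated above is to compare the estimated local (VM-interpreted) execution time against remote execution time plus transfer cost. The sketch below is such a back-of-the-envelope model with hypothetical throughput and network parameters; it is not the authors' SVM/HTML5 SVM implementation.

      def should_offload(workload_ops, input_bytes,
                         local_ops_per_s=5e7,      # slow VM interpreter (assumption)
                         fog_ops_per_s=5e9,        # fog node throughput (assumption)
                         uplink_bytes_per_s=2e6, rtt_s=0.02):
          local_t = workload_ops / local_ops_per_s
          remote_t = rtt_s + input_bytes / uplink_bytes_per_s + workload_ops / fog_ops_per_s
          return remote_t < local_t, local_t, remote_t

      # Heavy computation on little data: offloading wins.
      print(should_offload(workload_ops=2e8, input_bytes=50_000))
      # Light computation on a large payload: run locally.
      print(should_offload(workload_ops=1e6, input_bytes=5_000_000))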

  20. Linear equations and rap battles: how students in a wired classroom utilized the computer as a resource to coordinate personal and mathematical positional identities in hybrid spaces

    Science.gov (United States)

    Langer-Osuna, Jennifer

    2015-03-01

    This paper draws on the constructs of hybridity, figured worlds, and cultural capital to examine how a group of African-American students in a technology-driven, project-based algebra classroom utilized the computer as a resource to coordinate personal and mathematical positional identities during group work. Analyses of several vignettes of small group dynamics highlight how hybridity was established as the students engaged in multiple on-task and off-task computer-based activities, each of which drew on different lived experiences and forms of cultural capital. The paper ends with a discussion on how classrooms that make use of student-led collaborative work, and where students are afforded autonomy, have the potential to support the academic engagement of students from historically marginalized communities.

  1. Identification and Mapping of Soils, Vegetation, and Water Resources of Lynn County, Texas, by Computer Analysis of ERTS MSS Data

    Science.gov (United States)

    Baumgardner, M. F.; Kristof, S. J.; Henderson, J. A., Jr.

    1973-01-01

    Results of the analysis and interpretation of ERTS multispectral data obtained over Lynn County, Texas, are presented. The test site was chosen because it embodies a variety of problems associated with the development and management of agricultural resources in the Southern Great Plains. Lynn County is one of ten counties in a larger test site centering around Lubbock, Texas. The purpose of this study is to examine the utility of ERTS data in identifying, characterizing, and mapping soils, vegetation, and water resources in this semiarid region. Successful application of multispectral remote sensing and machine-processing techniques to arid and semiarid land-management problems will provide valuable new tools for the more than one-third of the world's lands lying in arid-semiarid regions.

  2. Grid Computing

    Indian Academy of Sciences (India)

    2016-05-01

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers on demand. In this article, we describe the grid computing model and enumerate the major differences between grid and cloud computing.

  3. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    OpenAIRE

    Williams, Samuel; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Irvine, CA

    2009-01-01

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to con...

  4. Multi-scale Adaptive Computational Ghost Imaging

    Science.gov (United States)

    Sun, Shuai; Liu, Wei-Tao; Lin, Hui-Zu; Zhang, Er-Feng; Liu, Ji-Ying; Li, Quan; Chen, Ping-Xing

    2016-11-01

    In some imaging scenarios, a wide spatial range and high spatial resolution are both required, which demands high-performance detection devices and huge resource consumption for data processing. We propose and demonstrate a multi-scale adaptive imaging method based on the idea of computational ghost imaging, which first obtains a rough outline of the whole scene over a wide range, then identifies the regions of interest and acquires high-resolution details of those regions, by controlling the field of view and the transverse coherence width of the pseudo-thermal field illuminating the scene with a spatial light modulator. Compared to typical ghost imaging, resource consumption can be dramatically reduced using our scheme.
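
    Stripped of the optics, the acquisition strategy is a coarse-to-fine loop: image the whole scene at low resolution, pick the blocks whose content looks interesting, and re-image only those blocks at high resolution. The sketch below mimics that selection logic on an ordinary array; it illustrates only the sampling schedule, not ghost-imaging reconstruction, and the block size and threshold are assumptions.

      import numpy as np

      def coarse_to_fine(scene, block=16, rel_thresh=0.25):
          """Coarse pass over the whole scene, then return block corners worth re-imaging
          at high resolution (blocks whose mean exceeds rel_thresh * max block mean)."""
          h, w = scene.shape
          coarse = scene.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
          ys, xs = np.nonzero(coarse >= rel_thresh * coarse.max())
          return [(int(y) * block, int(x) * block) for y, x in zip(ys, xs)]

      scene = np.zeros((128, 128)); scene[40:60, 80:100] = 1.0      # one bright object
      print(coarse_to_fine(scene))   # [(32, 80), (48, 80), (48, 96)]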

  5. Capacity Analysis of a Family Care Clinic Using Computer Simulation to Determine Optimal Enrollment Under Capitated Resource Allocation Constraints

    Science.gov (United States)

    1998-04-01

  6. A Survey on State Monitoring of Computational Resources in Cloud

    Institute of Scientific and Technical Information of China (English)

    洪斌; 彭甫阳; 邓波; 王东霞

    2016-01-01

    Cloud computing achieves efficient use of computational resources through sharing over the Internet. The dynamic, random and open nature of cloud resource allocation makes quality-of-service (QoS) assurance increasingly difficult. By mining and analysing monitoring data in depth, monitoring technologies for resource state in cloud environments detect abnormal operating states of computational resources in a timely manner and predict future resource usage from historical operating data, so that potential performance bottlenecks and security threats can be discovered early and reliable, stable cloud services can be provided to users. Combined with examples, this paper introduces representative research approaches to resource state monitoring, including probability analysis, equation fitting and clustering analysis, and compares the performance characteristics and limitations of the different methods. Finally, the paper discusses the technical challenges facing cloud resource state monitoring in terms of data complexity and scale, and points out future development trends such as redundancy removal and dimensionality reduction of raw data, an emphasis on unsupervised algorithm design and analysis, pushing computational tasks to the terminals, and synergy of analysis results.
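
    The simplest member of the analysis families surveyed above is a statistical baseline on the monitoring stream: flag a sample whose deviation from the recent mean exceeds a few standard deviations. The sketch below implements only that baseline with a sliding window; the window length, z-score limit, and metric are assumptions and are not drawn from any particular surveyed system.

      from collections import deque
      from statistics import mean, stdev

      class StateMonitor:
          """Sliding-window z-score detector for one resource metric (e.g. CPU utilisation)."""
          def __init__(self, window=60, z_limit=3.0):
              self.samples = deque(maxlen=window)
              self.z_limit = z_limit

          def observe(self, value):
              anomalous = False
              if len(self.samples) >= 10:
                  mu, sigma = mean(self.samples), stdev(self.samples)
                  anomalous = sigma > 0 and abs(value - mu) / sigma > self.z_limit
              self.samples.append(value)
              return anomalous

      mon = StateMonitor()
      stream = [0.30, 0.32, 0.31, 0.29, 0.33, 0.30, 0.31, 0.32, 0.30, 0.31, 0.95]
      print([mon.observe(v) for v in stream])   # only the last sample is flagged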

  7. Intrusion Detection System Inside Grid Computing Environment (IDS-IGCE

    Directory of Open Access Journals (Sweden)

    Basappa B. Kodada

    2012-01-01

    Full Text Available Grid computing is an important information technology that enables global resource sharing to solve large-scale problems. It is based on networks and enables large-scale aggregation and sharing of computational resources, data, sensors and other resources across institutional boundaries. The Globus Toolkit integrated with Web services presents OGSA (Open Grid Services Architecture) as the standard service grid architecture. In OGSA, everything is abstracted as a service, including computers, applications, data and instruments. The services and resources in a grid are heterogeneous and dynamic, and they belong to different domains. Grid services are still new to business systems, and as more systems are attached to the grid, any threat to it could cause collapse and huge harm; intruders may also arrive with new forms of attack. Grid computing as a global infrastructure on the Internet has attracted security attacks on the computing infrastructure. A wide variety of IDS (Intrusion Detection Systems) are available, each designed to handle specific types of attack. The technique of [27] protects against future attacks on the service grid computing environment at the grid infrastructure level, but no existing technique can protect against such attacks inside the grid at the node level. This paper therefore proposes the architecture of IDS-IGCE (Intrusion Detection System - Inside Grid Computing Environment), which can provide protection against the complete range of threats inside the grid environment.

  8. Winning the Popularity Contest: Researcher Preference When Selecting Resources for Civil Engineering, Computer Science, Mathematics and Physics Dissertations

    Science.gov (United States)

    Dotson, Daniel S.; Franks, Tina P.

    2015-01-01

    More than 53,000 citations from 609 dissertations published at The Ohio State University between 1998-2012 representing four science disciplines--civil engineering, computer science, mathematics and physics--were examined to determine what, if any, preferences or trends exist. This case study seeks to identify whether or not researcher preferences…

  9. A Framework for Safe Composition of Heterogeneous SOA Services in a Pervasive Computing Environment with Resource Constraints

    Science.gov (United States)

    Reyes Alamo, Jose M.

    2010-01-01

    The Service Oriented Computing (SOC) paradigm, defines services as software artifacts whose implementations are separated from their specifications. Application developers rely on services to simplify the design, reduce the development time and cost. Within the SOC paradigm, different Service Oriented Architectures (SOAs) have been developed.…

  10. Newtonian self-gravitating system in a relativistic huge void universe model

    Science.gov (United States)

    Nishikawa, Ryusuke; Nakao, Ken-ichi; Yoo, Chul-Moon

    2016-12-01

    We consider a test of the Copernican Principle through observations of the large-scale structures, and for this purpose we study the self-gravitating system in a relativistic huge void universe model which does not invoke the Copernican Principle. If we focus on the weakly self-gravitating and slowly evolving system whose spatial extent is much smaller than the scale of the cosmological horizon in the homogeneous and isotropic background universe model, the cosmological Newtonian approximation is available. Also in the huge void universe model, the same kind of approximation as the cosmological Newtonian approximation is available for the analysis of the perturbations contained in a region whose spatial size is much smaller than the scale of the huge void: the effects of the huge void are taken into account in a perturbative manner by using the Fermi-normal coordinates. By using this approximation, we derive the equations of motion for the weakly self-gravitating perturbations whose elements have relative velocities much smaller than the speed of light, and show that the derived equations can be significantly different from those in the homogeneous and isotropic universe model, due to the anisotropic volume expansion in the huge void. We linearize the derived equations of motion and solve them. The solutions show that the behaviors of linear density perturbations are very different from those in the homogeneous and isotropic universe model.

  11. The prostatic utricle cyst with huge calculus and hypospadias: A case report and a review of the literature.

    Science.gov (United States)

    Wang, Weigang; Wang, Yuantao; Zhu, Dechun; Yan, Pengfei; Dong, Biao; Zhou, Honglan

    2015-01-01

    Prostatic utricle cysts with calculus and hypospadias are rare, and only a few cases have been reported. We present a case of a prostatic utricle cyst with a huge calculus in a 25-year-old male. He had a history of left cryptorchidism and surgery for penoscrotal hypospadias in his infancy. He was referred for frequent micturition, urgency of urination, painful urination, terminal hematuria, and dysuria. Computed tomography (CT) revealed a retrovesical cystic lesion of low density containing a 5 × 5-cm calcification. Retrograde urethrocystography showed a 5 × 5-cm high-density shadow in the posterior urethra. The cyst was incised via a transperineal approach, and the stone was clearly observed and removed. Urethral stricture repair was performed simultaneously. The patient recovered smoothly after surgery.

  12. Cloud Computing Resource Scheduling Strategy Based on the MABC Algorithm

    Institute of Scientific and Technical Information of China (English)

    卢荣锐; 彭志平

    2013-01-01

    To improve the optimization of resource scheduling and task allocation in cloud computing service clusters, this paper presents a cloud computing resource scheduling strategy based on a modified artificial bee colony (MABC) algorithm. The standard ABC algorithm converges slowly in later iterations and easily falls into local optima, so a control-factor scheduling strategy is introduced: by adaptively adjusting the search space and dynamically adjusting the information exchanged among bees, the search can repeatedly escape local optima and reach the global optimum. Experiments on the cloud simulation platform CloudSim show that the improved algorithm shortens the average task run time in the cloud environment and effectively improves resource utilization.
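
    As a compact illustration of the ABC-style search the strategy builds on, the sketch below maps tasks onto virtual machines to minimise makespan using random neighbourhood moves (employed/onlooker phase) and scout restarts for exhausted solutions. It deliberately omits the paper's adaptive control factor, and the task sizes, VM speeds, and colony parameters are made up for the example.

      import random

      def makespan(assign, tasks, vm_speed):
          finish = [0.0] * len(vm_speed)
          for t, v in zip(tasks, assign):
              finish[v] += t / vm_speed[v]
          return max(finish)

      def abc_schedule(tasks, vm_speed, food_sources=10, iters=200, limit=20, seed=1):
          rng = random.Random(seed)
          n, m = len(tasks), len(vm_speed)
          sources = [[rng.randrange(m) for _ in range(n)] for _ in range(food_sources)]
          trials = [0] * food_sources
          best = min(sources, key=lambda s: makespan(s, tasks, vm_speed))
          for _ in range(iters):
              for i, s in enumerate(sources):
                  cand = s[:]                      # employed/onlooker move: re-place one task
                  cand[rng.randrange(n)] = rng.randrange(m)
                  if makespan(cand, tasks, vm_speed) < makespan(s, tasks, vm_speed):
                      sources[i], trials[i] = cand, 0
                  else:
                      trials[i] += 1
                  if trials[i] > limit:            # scout phase: abandon exhausted source
                      sources[i] = [rng.randrange(m) for _ in range(n)]
                      trials[i] = 0
              best = min(sources + [best], key=lambda s: makespan(s, tasks, vm_speed))
          return best, makespan(best, tasks, vm_speed)

      print(abc_schedule(tasks=[8, 3, 5, 7, 2, 6], vm_speed=[1.0, 2.0, 4.0]))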

  13. Robotic resection of huge presacral tumors: case series and comparison with an open resection.

    Science.gov (United States)

    Oh, Jae Keun; Yang, Moon Sool; Yoon, Do Heum; Rha, Koon Ho; Kim, Keung Nyun; Yi, Seong; Ha, Yoon

    2014-06-01

    Clinical case series and analysis. The purpose of the present study is to evaluate the advantages and disadvantages of robotic presacral tumor resection compared with the conventional open approach. The conventional open approach for huge presacral tumors in the retroperitoneal space often entails prolonged hospitalization and poor cosmesis, and the narrow surgical field can interfere with delicate procedures. Nine patients with huge (diameter >10 cm) presacral tumors underwent surgery: five had a robotic procedure and the other four had open transperitoneal tumor resection. Operation time, blood loss, hospitalization, and complications were analyzed. Robotic presacral tumor resection showed shorter operation time, less bleeding, and shorter hospitalization, and there were no complications related to abdominal adhesion. Although robotic resection of presacral tumors still has technical and economic limitations, robotic resection of huge presacral tumors demonstrated advantages over open resection, specifically for benign neurogenic tumors.

  14. Endovascular Treatment of a Huge Hepatic Artery Aneurysm by Coil Embolization Method: A Case Report.

    Science.gov (United States)

    Hemmati, Hossein; Karimian, Mehdi; Moradi, Habibollah; Farid Marandi, Kambiz; Haghdoost, Afrooz

    2015-07-01

    Hepatic artery aneurysms are rare but potentially life threatening. We describe a novel case of successful endovascular coil embolization of a huge hepatic artery aneurysm. A 67-year-old woman presented with abdominal pain that had begun 2 weeks before referral to our hospital. Sonographic and computed tomographic (CT) findings revealed a huge hepatic artery aneurysm measuring 95 mm × 83 mm. The patient underwent an endovascular procedure. On aortic angiography, the celiac artery orifice and superior mesenteric artery were very narrow, so sonography was used to determine the exact position of the catheter in the celiac artery orifice. The aneurysm was thrombosed using coil embolization, and its pulsation disappeared immediately. Huge hepatic artery aneurysms can be safely treated using coil embolization.

  15. Collaborative treatment of huge intrathoracic meningoceles associated with neurofibromatosis type 1: a case report.

    Science.gov (United States)

    Cho, Deog Gon; Chang, Yong Jin; Cho, Kyu Do; Hong, Jae Taek

    2015-11-10

    An intrathoracic meningocele is a relatively rare lesion that commonly accompanies neurofibromatosis type 1. Patients tend to be asymptomatic, but if the meningocele grows large enough to compress a lung and neighboring organs, shunt drainage or surgical resection is needed. Herein, we present the case of a 52-year-old female patient with huge intrathoracic meningoceles associated with neurofibromatosis type 1, who complained of chest discomfort and dyspnea at rest. As a preliminary treatment, a neurosurgeon performed a cystoperitoneal shunt, but the symptoms continued and the size of the mass and the amount of pleural effusion did not change significantly. The huge thoracic meningoceles were therefore successfully treated through a thoracotomic approach combined with lumbar puncture and cerebrospinal fluid drainage. We report double huge intrathoracic meningoceles associated with neurofibromatosis type 1 successfully treated by a shunting procedure followed by thoracotomic resection, performed in collaboration with a neurosurgeon.

  16. Computing Resource Trading Models in Hybrid Cloud Market (Computer Engineering and Applications, 2014, 50(18): 25-32)

    Institute of Scientific and Technical Information of China (English)

    孙英华; 吴哲辉; 郭振波; 顾卫东

    2014-01-01

    A computing resource trading model named HCRM (Hybrid Cloud Resource Market) is proposed for hybrid cloud environments. The market structure, management layers and supply-and-demand quality models are discussed, and a quality-aware double auction algorithm named QaDA (Quality-aware Double Auction) is designed and simulated. Compared with the traditional CDA (Continuous Double Auction), the simulation results show that QaDA not only guides reasonable pricing but also achieves a higher matching ratio and a higher total deal amount.
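
    A minimal sketch of quality-aware double-auction matching in the spirit of QaDA is shown below (this is not the published algorithm; the quality weight, order formats, and mid-price clearing rule are assumptions): buyers are sorted by bid, sellers by price adjusted for a quality score, and orders are matched while the bid covers the ask and the quality requirement is met.

      def qa_double_auction(bids, asks, quality_weight=0.1):
          """bids: list of (buyer, price, min_quality); asks: list of (seller, price, quality).
          Returns matched (buyer, seller, clearing_price) triples."""
          bids = sorted(bids, key=lambda b: -b[1])                         # highest bid first
          asks = sorted(asks, key=lambda a: a[1] - quality_weight * a[2])  # cheapest per quality first
          matches, used = [], set()
          for buyer, bid_price, min_q in bids:
              for j, (seller, ask_price, q) in enumerate(asks):
                  if j in used or q < min_q or ask_price > bid_price:
                      continue
                  matches.append((buyer, seller, round((bid_price + ask_price) / 2, 2)))
                  used.add(j)
                  break
          return matches

      bids = [("b1", 1.00, 0.9), ("b2", 0.80, 0.3)]
      asks = [("s1", 0.70, 0.95), ("s2", 0.60, 0.40)]
      print(qa_double_auction(bids, asks))   # [('b1', 's1', 0.85), ('b2', 's2', 0.7)]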

  17. Huge Bilateral Paramesonephric Cysts in a 25-Year-Old Nulliparous Woman

    Science.gov (United States)

    Sagili, Haritha; Krishnan, Manikandan; Dasari, Papa

    2013-01-01

    Paraovarian cysts are uncommon adnexal masses which are usually asymptomatic. We describe a case of huge bilateral paramesonephric cysts in a nulliparous woman. A 25-year-old woman presented with abdominal distension of one year's duration. Examination and imaging revealed large abdominopelvic cystic masses with no solid areas or septations. Intraoperatively there were huge bilateral paraovarian cysts, which were excised. Histopathology revealed low cuboidal to ciliated columnar epithelium with no evidence of ovarian parenchyma, suggestive of a paramesonephric cyst. Paraovarian cysts should be included in the differential diagnosis of a cystic mass visualised on ultrasound. PMID:24392412

  18. Huge dissected ascending aorta associated with pseudoaneurysm and aortic coarctation.

    Science.gov (United States)

    Sabzi, Feridoun; Khosravi, Donya

    2015-07-01

    We report a unique case of chronic dissection of the ascending aorta complicated by a huge, thrombosed pseudoaneurysm in a patient with coarctation of the descending aorta. Preoperative investigations such as transesophageal echocardiography (TEE) confirmed the diagnosis of dissection. Intraoperative findings included a 12 cm eccentric bulge of the right lateral side of the dilated ascending aorta filled with clot, and a circular intimal tear communicating with an extensive hematoma and dissection of the media layer. The rarity of this report lies in the association of chronic dissection with a huge pseudoaneurysm and coarctation. The patient underwent staged repair of the aneurysm and coarctation and had an uneventful postoperative recovery.

  19. Computer resource model of the Matra-Bukkalja, Hungary, lignite deposit for production planning, inspection and control

    Energy Technology Data Exchange (ETDEWEB)

    Fust, A.; Zergi, I.

    1985-01-01

    For the planning of lignite surface mining, a reliable geologic model is needed which can be updated by new survey data and used for the operative control of production. A computer model is proposed to analyze control, planning and inspection of production. The model is composed of two components, one from the geologic survey data, and the other refined by the production data. The half variograms of the Matra-Bukkalja lignite deposits are presented. The model can be used for the checking of forecast data.
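
    For reference, the experimental half variogram (semivariogram) used in such deposit models is, for a lag h, half the mean squared difference between samples separated by h. A minimal sketch for regularly spaced 1-D samples follows; the sample values are invented for the example.

      import numpy as np

      def half_variogram(values, max_lag):
          """Experimental semivariogram gamma(h) = 0.5 * mean((z(x) - z(x+h))**2)
          for regularly spaced 1-D samples."""
          values = np.asarray(values, dtype=float)
          return {h: 0.5 * np.mean((values[:-h] - values[h:]) ** 2)
                  for h in range(1, max_lag + 1)}

      thickness = [2.1, 2.3, 2.2, 2.8, 3.0, 2.9, 3.4, 3.6]   # hypothetical seam thickness samples
      print(half_variogram(thickness, max_lag=3))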

  20. A Community-Based Approach to Resource Monitoring for Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    祁鑫; 李振

    2012-01-01

    Cloud computing is an emerging business computing model, and monitoring resource performance and load is an important research topic. For the cloud computing environment, this paper analyzes the monitoring strategies of traditional distributed systems, designs a hierarchical monitoring scheme that introduces a community model, and proposes a monitoring approach based on sensitivity factors to solve the data redundancy and invalidity problems that global monitoring may bring. Simulation results show that the model and strategy are theoretically sound and somewhat more efficient than traditional monitoring systems.

  1. Design of a Training Resource Management System Based on Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    楼桦

    2011-01-01

    Based on cloud computing technology, this paper designs a college training resource management system that meets the requirements of openness, scalability and on-demand deployment, and builds a practical, realizable cloud computing architecture. The architecture reflects the value of the three cloud computing service forms, IaaS, PaaS and SaaS, in a training resource management system, and the technology and methods for implementation are presented together with the running system that has been built.

  2. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    Energy Technology Data Exchange (ETDEWEB)

    Computational Research Division, Lawrence Berkeley National Laboratory; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Berkeley; Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2009-05-04

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications.
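
    The search an auto-tuner performs over the MPI-task/thread balance and blocking parameters can be pictured with the toy sketch below; the candidate grid is arbitrary and the cost model is a stand-in for actually compiling, running, and timing the LBMHD kernel on each configuration.

      def autotune(total_cores=16, block_sizes=(16, 32, 64, 128), bench=None):
          """Exhaustively search (MPI tasks, threads per task, block size) and keep the fastest.
          `bench` must run the kernel with a configuration and return its runtime in seconds."""
          best_cfg, best_t = None, float("inf")
          for tasks in (1, 2, 4, 8, 16):
              if total_cores % tasks:
                  continue
              threads = total_cores // tasks
              for block in block_sizes:
                  t = bench(tasks, threads, block)
                  if t < best_t:
                      best_cfg, best_t = (tasks, threads, block), t
          return best_cfg, best_t

      # Stand-in cost model for demonstration; a real tuner would time the LBMHD kernel.
      fake_bench = lambda tasks, threads, block: abs(block - 64) / 64 + abs(tasks - 4) / 4 + 1.0
      print(autotune(bench=fake_bench))    # ((4, 4, 64), 1.0)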

  3. Construction of a Teaching Resources Management System Based on Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    黄瑞; 刘剑桥

    2014-01-01

    Teaching resource management based on cloud computing is the future direction of teaching resource management. This paper studies the construction of a teaching resource management system in terms of cloud computing, system function modules, basic system structure, system programming model and system computing model, providing technical support for building a teaching resource management system based on cloud computing.

  4. Methodology of problem-based learning engineering and technology and of its implementation with modern computer resources

    Science.gov (United States)

    Lebedev, A. A.; Ivanova, E. G.; Komleva, V. A.; Klokov, N. M.; Komlev, A. A.

    2017-01-01

    The considered method of learning the basics of microelectronic amplifier circuits and systems enables a deeper understanding of electrical processes and of the relationship between static and dynamic characteristics, and ultimately brings the learning process closer to a cognitive process. The scheme of problem-based learning can be represented by the following sequence of procedures: a contradiction is perceived and revealed; cognitive motivation is provided by creating a problematic situation (the mental state of the student) that drives the desire to solve the problem and to ask "why?"; a hypothesis is made; solutions are searched for; and an answer is sought. Because of the complexity of the circuit architectures, modern methods of computer analysis and synthesis are employed. Examples are given of analog circuits with improved performance engineered by students, within the framework of student research work, using standard software and software developed at the Department of Microelectronics, MEPhI.

  5. Endovascular repair for a huge vertebral artery pseudoaneurysm caused by Behcet's disease

    Institute of Scientific and Technical Information of China (English)

    DONG Zhi-hui; FU Wei-guo; GUO Da-qiao; XU Xin; CHEN Bin; JIANG Jun-hao; YANG Jue; SHI Zheng-yu; WANG Yu-qi

    2006-01-01

    Behcet's disease (BD), a multisystem chronic autoimmune process of unknown etiology, usually leads to arterial impairment. Isolated case reports have described BD-related arterial dissections, pseudoaneurysms or aneurysms [1-4]. Recently, we successfully treated a huge vertebral artery pseudoaneurysm (VAPA) in a patient with BD by stent-grafting with preservation of the affected vertebral artery.

  6. Unusual Huge Keratoacanthoma Arising in a Previously Split-Thickness Skin Grafted Area

    Directory of Open Access Journals (Sweden)

    Fatih Uygur

    2009-09-01

    Full Text Available Keratoacanthoma (KA) is a fairly common keratinizing squamous neoplasm. The exact etiology of KA is unknown; however, ultraviolet radiation, trauma, chemical carcinogens, viral infections, immunosuppression, genetic factors, radiation and thermal burns have been implicated in its pathogenesis. Here, we report an unusual huge KA arising on the dorsal foot in an area previously reconstructed with a split-thickness skin graft.

  7. Anatomic trisegmentectomy: An alternative treatment for huge or multiple hepatocellular carcinoma of right liver.

    Science.gov (United States)

    Jia, Changku; Weng, Jie; Qin, Qifan; Chen, Youke; Huang, Xiaolong; Fu, Yu

    2017-04-01

    Patients with huge (≥10 cm) or multiple hepatocellular carcinomas (HCC) in the right liver and an insufficient remnant left liver cannot undergo right hemihepatectomy, because liver failure would occur after the operation. We designed an anatomic trisegmentectomy of the right liver to increase the ratio of future liver remnant volume (%FLRV), thereby increasing the resectability of huge or multiple HCC. Thirteen patients were analyzed by preoperative CT scan for liver and tumor volumetry. If right hemihepatectomy had been performed, %FLRV would have been in the range of 29.6%-37.5%; with trisegmentectomy, %FLRV increases by an average of 14.0%, so patients do not develop postoperative liver failure because the %FLRV is sufficient. We therefore designed anatomic trisegmentectomy, with retention of segment 5 or segment 8, to increase %FLRV and the resectability of huge or multiple HCC. After trisegmentectomy, the inflow and outflow of the remnant liver were well maintained, and no severe complications or mortality occurred after the operation. Of the 13 patients, 10 are still alive. Of these 10 living cases, postoperative lung metastasis was found in 2 and intrahepatic recurrence in 1; these 3 patients survive with tumor after comprehensive therapies including oral Sorafenib. Compared with right hemihepatectomy, anatomic trisegmentectomy of the right liver guarantees maximum preservation of %FLRV to increase the resectability of huge or multiple HCC, thus improving the overall resection rate. Copyright © 2017. Published by Elsevier Masson SAS.

  8. Hepatic arterial infusion chemotherapy for patients with huge unresectable hepatocellular carcinoma.

    Science.gov (United States)

    Tsai, Wei-Lun; Lai, Kwok-Hung; Liang, Huei-Lung; Hsu, Ping-I; Chan, Hoi-Hung; Chen, Wen-Chi; Yu, Hsien-Chung; Tsay, Feng-Woei; Wang, Huay-Min; Tsai, Hung-Chih; Cheng, Jin-Shiung

    2014-01-01

    The optimal treatment for huge unresectable hepatocellular carcinoma (HCC) remains controversial. The outcome of transcatheter arterial chemoembolization (TACE) for patients with huge unresectable HCC is generally poor, and the survival benefit of TACE in these patients is unclear. The aim of this study was to compare the effect of hepatic arterial infusion chemotherapy (HAIC) versus symptomatic treatment in patients with huge unresectable HCC. From 2000 to 2005, patients with huge (size >8 cm) unresectable HCC were enrolled: 58 patients received HAIC and 44 patients received symptomatic treatment. In the HAIC group, each patient received 2.4±1.4 (range: 1-6) courses of HAIC. Baseline characteristics and survival were compared between the HAIC and symptomatic treatment groups. The two groups were similar in baseline characteristics and tumor stages. The overall survival rates at one and two years were 29% and 14% in the HAIC group and 7% and 5% in the symptomatic treatment group, respectively; patients in the HAIC group had significantly better overall survival, suggesting a survival benefit of HAIC in patients with huge unresectable HCC.

  9. A Huge Ovarian Cyst in a Middle-Aged Iranian Female

    Directory of Open Access Journals (Sweden)

    Mohammad Kazem Moslemi

    2010-05-01

    Full Text Available A 38-year-old Iranian woman was found to have a huge ovarian cystic mass. Her presenting symptoms were vague abdominal pain and severe abdominal distention. She underwent laparotomy, and after surgical removal the mass was found on histology to be a mucinous cystadenoma.

  10. Huge right atrial myxoma causing fixed tricuspid stenosis with constitutional symptoms.

    Science.gov (United States)

    Kuralay, Erkan; Cingöz, Faruk; Günay, Celalettin; Demirkiliç, Ufuk; Tatar, Harun

    2003-01-01

    Nonspecific constitutional symptoms are reported mostly in patients with left-atrial myxomas, which occur five times as often as their right-atrial counterparts. We present a huge right-atrial myxoma obstructing the tricuspid orifice, presenting with nonspecific constitutional symptoms and without any episode of pulmonary embolism.

  11. Preserving stability of huge agriculture machines with internal mobilities: Application to a grape harvester

    OpenAIRE

    Dieumet, D.; Thuilot, B.; Lenain, R.; Berducat, M.

    2012-01-01

    This paper proposes an algorithm for estimating on-line the rollover risk of huge machines moving on natural ground. The approach is based on the reconstruction of lateral load transfer by means of an observer able to take terrain specificities (grip conditions and geometry) into account. Its capabilities are tested through experiments on a grape harvester.

  12. Assessment Planning and Evaluation of Renewable Energy Resources: an Interactive Computer Assisted Procedure. [hydroelectricity, biomass, and windpower in the Pittsfield metropolitan region, Massachusetts

    Science.gov (United States)

    Aston, T. W.; Fabos, J. G.; Macdougall, E. B.

    1982-01-01

    Adaptation and derivation were used to develop a procedure for assessing the availability of renewable energy resources on the landscape while simultaneously accounting for the economic, legal, social, and environmental issues required. Done in a step-by-step fashion, the procedure can be used interactively at computer terminals. Its application in determining the hydroelectricity, biomass, and windpower in a 40,000 acre study area of Western Massachusetts shows that: (1) three existing dam sites are physically capable of being retrofitted for hydropower; (2) each of three general areas has a mean annual windspeed exceeding 14 mph and is conducive to windpower; and (3) 20% of the total land area consists of prime agricultural biomass land while 30% of the area is prime forest biomass land.

  14. Distributed Computing.

    Science.gov (United States)

    Ryland, Jane N.

    1988-01-01

    The microcomputer revolution, in which small and large computers have gained tremendously in capability, has created a distributed computing environment. This circumstance presents administrators with the opportunities and the dilemmas of choosing appropriate computing resources for each situation. (Author/MSE)

  15. Exploring Tradeoffs in Demand-side and Supply-side Management of Urban Water Resources using Agent-based Modeling and Evolutionary Computation

    Science.gov (United States)

    Kanta, L.; Berglund, E. Z.

    2015-12-01

    Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
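
    The decisions being optimised above are essentially storage-level triggers. A minimal sketch of such a trigger policy is shown below; the trigger levels, number of drought stages, and outdoor-use allowances are hypothetical placeholders, not values from the Arlington study.

      def drought_policy(storage_fraction,
                         transfer_triggers=(0.60, 0.40),              # start / boost inter-basin pumping
                         restriction_triggers=(0.50, 0.35, 0.25)):    # stages 1-3 outdoor-use limits
          """Map current reservoir storage (fraction of capacity) to operating decisions."""
          pump = 0
          for level in transfer_triggers:
              if storage_fraction < level:
                  pump += 1                                           # each crossed trigger adds pumping
          stage = sum(storage_fraction < level for level in restriction_triggers)
          outdoor_allowance = {0: 1.0, 1: 0.7, 2: 0.4, 3: 0.1}[stage]
          return {"pumping_level": pump, "drought_stage": stage,
                  "outdoor_use_allowed": outdoor_allowance}

      for s in (0.8, 0.45, 0.2):
          print(s, drought_policy(s))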

  16. Research and Practice on Web-Based Computing Resource Publishing

    Institute of Scientific and Technical Information of China (English)

    吴志刚; 方滨兴; 马涛

    2001-01-01

    The fast-developing World Wide Web (Web) provides an open, consistent platform for publishing Web computing resources. We propose an agent-based publishing model for Web computing resources. To improve the availability and reliability of the agents in this model, we design a two-level tree agent structure and a primary-slave agent structure, and on this basis we implement a prototype system, WCRPS.

  17. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    Science.gov (United States)

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system.
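
    The adaptive parameter selection can be pictured as scoring every candidate parameter setting with a learned regressor that predicts segmentation quality from features of the resulting mask, and keeping the best-scoring setting. The sketch below shows only that search skeleton; the parameter grid is invented, and segment, features, and quality_regressor are placeholders standing in for the segmentation engine, feature extractor, and trained regression neural network.

      from itertools import product

      def pick_parameters(ct_volume, cue_point, segment, features, quality_regressor,
                          grid={"threshold": (-600, -500, -400), "radius_mm": (5, 10, 20)}):
          """Try every parameter combination, predict its quality with the regressor,
          and return the best mask. The callables are placeholders for the real engine."""
          best_mask, best_score = None, float("-inf")
          for combo in product(*grid.values()):
              params = dict(zip(grid.keys(), combo))
              mask = segment(ct_volume, cue_point, **params)
              score = quality_regressor(features(mask, ct_volume))
              if score > best_score:
                  best_mask, best_score = mask, score
          return best_mask, best_score

      # Toy usage with stand-ins (real versions would wrap the segmentation engine and RNN):
      mask, score = pick_parameters(
          ct_volume=None, cue_point=(10, 10, 10),
          segment=lambda vol, cue, threshold, radius_mm: {"threshold": threshold, "radius_mm": radius_mm},
          features=lambda mask, vol: [mask["radius_mm"], abs(mask["threshold"])],
          quality_regressor=lambda f: -abs(f[0] - 10) - abs(f[1] - 500) / 100)
      print(mask, score)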

  18. Trusted Cloud Computing Framework for Healthcare Sector

    Directory of Open Access Journals (Sweden)

    Mervat Adib Bamiah

    2014-01-01

    Full Text Available Cloud computing is rapidly evolving due to its efficient characteristics such as cost-effectiveness, availability and elasticity. Healthcare organizations and consumers lose control when they outsource their sensitive data and computing resources to a third-party Cloud Service Provider (CSP), which may raise security and privacy concerns related to data loss and misuse. Consumers' lack of knowledge about where their data are stored may lead to violations of the rules and regulations of the Health Insurance Portability and Accountability Act (HIPAA), which can cost them huge penalties. Fear of data breaches by internal or external hackers may decrease consumers' trust in adopting cloud computing and benefiting from its promising features. We designed a Healthcare Trusted Cloud Computing (HTCC) framework that maintains security and privacy and considers HIPAA regulations. The HTCC framework deploys Trusted Computing Group (TCG) technologies such as the Trusted Platform Module (TPM), Trusted Software Stack (TSS), virtual Trusted Platform Module (vTPM), Trusted Network Connect (TNC) and Self-Encrypting Drives (SEDs). We emphasize strong multi-factor authentication access control mechanisms and strict security controls, as well as encryption of data at rest, in transit and in processing. We also contribute a customized cloud Service Level Agreement (SLA) that considers healthcare requirements. HTCC was evaluated by comparison with previous researchers' work and through an expert survey; the results were satisfactory and showed acceptance of the framework. We expect the proposed framework to help build trust in cloud computing for adoption in the healthcare sector.

  19. [A case report of two-term surgery for focal progression of a huge liver metastasis and peritoneal dissemination from gastrointestinal stromal tumor during imatinib mesylate treatment].

    Science.gov (United States)

    Toyokawa, Takahiro; Teraoka, Hitoshi; Kitayama, Kisyu; Nomura, Shinya; Kanehara, Isao; Nishino, Hiroji

    2014-03-01

    We report a patient who underwent two-term surgery to treat focal progression of a huge liver metastasis and peritoneal dissemination from a gastric gastrointestinal stromal tumor (GIST) during imatinib mesylate treatment. A 59-year-old man underwent emergency surgery for perforative peritonitis caused by a gastric GIST in June 2006 and a partial resection of the stomach in September 2006. Four years later, abdominal computed tomography (CT) detected a huge liver tumor occupying the entire right lobe. We initiated imatinib mesylate treatment (400 mg/day), and the patient maintained stable disease for several months. However, focal progression of the huge liver tumor and a peritoneal tumor at the splenic hilum were revealed by CT; therefore, an extended right hepatic resection was performed in August 2011, and a distal pancreatectomy, splenectomy, and partial resection of the stomach were performed in February 2012. The patient died of the primary disease 16 months after the hepatic resection for focal progression.

  20. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. A recently developed digital tool, digital core (dCORE), and image microarray maker (iMAM) enables the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis of the carrying out of a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  1. Using Computer Resources (Spreadsheets) to Comprehend Rational Numbers

    Directory of Open Access Journals (Sweden)

    Rosane Ratzlaff da Rosa

    2008-12-01

    Full Text Available This article reports on an investigation that sought to determine whether the use of spreadsheets in the teaching of rational numbers in elementary education contributes to learning and improved learning retention. The study was carried out with a sample of students from two sixth-grade classes in a public school in Porto Alegre. Results indicated that the use of spreadsheets favored learning and made the classes more participatory for the students, who were able to visualize the processes they were working with. A second test applied five months after the first showed that students who used the spreadsheets had greater retention of the content. The results also show that the students felt comfortable with the technology, and almost all reported that they were more motivated by the use of computers in the classroom, despite less-than-ideal laboratory conditions. Keywords: Rational Numbers. Teaching with Spreadsheets. Teaching Rational Numbers Using Spreadsheets.

  2. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development.

    Science.gov (United States)

    Hipp, Jennifer A; Hipp, Jason D; Lim, Megan; Sharma, Gaurav; Smith, Lauren B; Hewitt, Stephen M; Balis, Ulysses G J

    2012-01-01

    Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Two recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic bodies, was subsequently carried out on the

  3. Salvage transhepatic arterial embolization after failed stage I ALPPS in a patient with a huge HCC with chronic liver disease: A case report.

    Science.gov (United States)

    Wang, Zheng; Peng, Yuanfei; Sun, Qiman; Qu, Xudong; Tang, Min; Dai, Yajie; Tang, Zhaoyou; Lau, Wan Yee; Fan, Jia; Zhou, Jian

    2017-07-22

    The degree of hypertrophy of the future liver remnant (FLR) induced by associating liver partition and portal vein ligation for staged hepatectomy (ALPPS) in patients with HCC and chronic liver disease is often limited compared with that in patients with a healthy liver. We report a 53-year-old male who had a huge HCC (14.8 × 12 × 9.4 cm) arising from a background of hepatitis B liver fibrosis (METAVIR score F3). The ratio of the FLR to standard liver volume (SLV) was 23.8%. After stage I ALPPS, volumetric assessment on postoperative days (POD) 7 and 13 showed insufficient FLR hypertrophy (FLR/SLV: 28.7% and 30.7%, respectively). Postoperative computed tomographic 3D reconstruction and hepatic angiography showed steal of arterial blood from the FLR to the huge tumour in the right liver. Salvage transhepatic arterial embolization (TAE) was performed to block the major arterial blood supply to the tumour on POD 13. The FLR/SLV increased to 42.5% in 7 days. Stage II ALPPS consisting of right trisectionectomy was successfully performed. Salvage TAE, which blocked the main arterial blood supply to the huge HCC, improved the arterial supply to the FLR, with subsequent adequate and fast hypertrophy that allowed trisectionectomy in stage II ALPPS to be carried out. Salvage TAE after failed stage I ALPPS with inadequate hypertrophy of the FLR thus allowed trisectionectomy in stage II ALPPS to be carried out in a patient with a huge HCC and chronic liver disease. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers, and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  5. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most basic algorithms have not changed, but what has changed is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface

  6. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    Full Text Available Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images with huge watermarking capacity. The method comprises a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited for human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity.
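
    As a loose illustration of watermark embedding by pixel flipping in a binary image, the sketch below scores pixels with a simple 8-neighbour uniformity measure and flips the lowest-scoring ones to carry the watermark bits. It is not the paper's method: the connectivity/smoothness, gravity-centre, and chaos-degree criteria are replaced by a stand-in score, and every function name, threshold, and test image here is an assumption.

```python
# Generic sketch of bit embedding by pixel flipping in a binary image.
# NOT the paper's method: a simple neighbourhood-uniformity score stands in for
# its connectivity/smoothness, gravity-centre, and chaos-degree criteria.
import numpy as np


def flippability(img: np.ndarray, r: int, c: int) -> int:
    """Count how many of the 8 neighbours share the pixel's value (interior only)."""
    patch = img[r - 1:r + 2, c - 1:c + 2]
    return int((patch == img[r, c]).sum()) - 1  # exclude the centre pixel


def embed_bits(img: np.ndarray, bits, max_score: int = 5) -> np.ndarray:
    """Write the watermark bits into the least visually disruptive interior pixels."""
    out = img.copy()
    h, w = out.shape
    scores = [((r, c), flippability(out, r, c))
              for r in range(1, h - 1) for c in range(1, w - 1)]
    # Lowest score = most "mixed" neighbourhood = cheapest pixel to change.
    usable = [rc for rc, s in sorted(scores, key=lambda t: t[1]) if s <= max_score]
    for (r, c), bit in zip(usable, bits):
        out[r, c] = bit  # set the chosen pixel to the watermark bit
    return out


# Example on a random 64x64 binary "font" image (purely illustrative data).
rng = np.random.default_rng(0)
binary_img = (rng.random((64, 64)) > 0.5).astype(np.uint8)
marked = embed_bits(binary_img, bits=[1, 0, 1, 1, 0, 0, 1, 0])
print("pixels changed:", int((marked != binary_img).sum()))
```

    Flipping only pixels whose neighbourhood is already mixed keeps the visible strokes of the font intact, which is the same intuition behind the flippable-pixel evaluation described in the record above.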

  7. Huge mucinous cystadenoma of ovary, describing a young patient: case report

    Directory of Open Access Journals (Sweden)

    Soheila Aminimoghaddam

    2017-08-01

    Conclusion: Ovarian cysts in young women that are associated with elevated levels of tumor markers and ascites require careful evaluation. Management of ovarian cysts depends on the patient's age, the size of the cyst, and its histopathological nature. Conservative surgery such as ovarian cystectomy or salpingo-oophorectomy is adequate in mucinous tumors of the ovary. Multiple frozen sections are very important for identifying malignant variation of this tumor and help guide accurate patient management. Surgical expertise is required to prevent complications in huge tumors that have distorted the anatomy, so the gynecologic oncologist plays a prominent role in management. In this case, despite the huge tumor and massive ascites, the uterus and ovaries were preserved by the gynecologic oncologist, and the patient remains well to date.

  8. Huge gastric diospyrobezoars successfully treated by oral intake and endoscopic injection of Coca-Cola.

    Science.gov (United States)

    Chung, Y W; Han, D S; Park, Y K; Son, B K; Paik, C H; Jeon, Y C; Sohn, J H

    2006-07-01

    A diospyrobezoar is a type of phytobezoar that is considered to be harder than any other type of phytobezoar. Here, we describe a new treatment modality, which effectively and easily disrupted huge gastric diospyrobezoars. A 41-year-old man with a history of diabetes mellitus was admitted with lower abdominal pain and vomiting. Upper gastrointestinal endoscopy revealed three huge, round diospyrobezoars in the stomach. He was made to drink two cans of Coca-Cola every 6 h. At endoscopy the next day, the bezoars were partially dissolved and had softened. We performed direct endoscopic injection of Coca-Cola into each bezoar. At repeat endoscopy the next day, the bezoars were completely dissolved.

  9. Successful Vaginal Delivery despite a Huge Ovarian Mucinous Cystadenoma Complicating Pregnancy: A Case Report

    Directory of Open Access Journals (Sweden)

    Dipak Mandi

    2013-12-01

    Full Text Available A 22-year-old patient with 9 months of amenorrhea and a huge abdominal swelling was admitted to our institution with an ultrasonography report of a multiloculated cystic space-occupying lesion, almost taking up the whole abdomen (probably of ovarian origin), along with a single live intrauterine fetus. She vaginally delivered a male infant within 4 hours of admission without any maternal complication, but the baby had features of intrauterine growth restriction along with low birth weight. On the 8th postpartum day, the multiloculated cystic mass, which arose from the right ovary and weighed about 11 kg, was removed via laparotomy. A mucinous cystadenoma with no malignant cells in the peritoneal washing was detected on histopathological examination. This report describes a rare case of a successful vaginal delivery despite a huge cystadenoma of the right ovary complicating the pregnancy.

  10. Huge Neck Masses Causing Respiratory Distress in Neonates: Two Cases of Congenital Cervical Teratoma.

    Science.gov (United States)

    Gezer, Hasan Özkan; Oğuzkurt, Pelin; Temiz, Abdulkerim; Bolat, Filiz Aka; Hiçsönmez, Akgün

    2016-12-01

    Congenital cervical teratomas are rare and usually large enough to cause respiratory distress in the neonatal period. We present two cases of huge congenital cystic neck masses in which distinguishing between cervical cystic hygroma and congenital cystic teratoma was not possible with radiologic imaging techniques. Experience with the first case, which was initially diagnosed and treated as cystic hygroma by injection sclerotherapy, led to early suspicion and surgery in the second case. The masses were excised completely, and the histopathologic diagnosis was congenital teratoma in both patients. Our aim is to briefly review huge congenital neck masses causing respiratory distress in early neonatal life and to highlight this diagnostic dilemma with these interesting cases. Copyright © 2014. Published by Elsevier B.V.

  11. Huge echinococcal cyst of the liver managed by hepatectomy: Report of two cases.

    Science.gov (United States)

    Pavlidis, Efstathios T; Symeonidis, Nikolaos; Psarras, Kyriakos; Pavlidis, Theodoros E

    2017-01-01

    Echinococcal cysts are predominantly located in the right liver. They are usually solitary and asymptomatic, but large cysts can cause compression symptoms. We report two cases of huge (25 cm and 20 cm in diameter, respectively) echinococcal cysts located in the left liver, which presented as a large palpable mass causing compression symptoms. Diagnosis was established with a CT scan showing a cystic mass with the characteristic daughter cysts and a reactive layer (pericystic wall) consisting of fibrous connective tissue and calcifications. Both patients were treated radically with left hepatectomy and had an uneventful postoperative course and no recurrence upon follow-up. The treatment of liver echinococcal cysts represents a unique surgical challenge. Even though conservative approaches are less technically demanding, the radical approach with resection has a better outcome with fewer recurrences when performed by experienced surgeons. Resection rather than drainage is the management of choice for such huge liver echinococcal cysts. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Mitral valve regurgitation due to annular dilatation caused by a huge and floating left atrial myxoma.

    Science.gov (United States)

    Kaya, Mehmet; Ersoy, Burak; Yeniterzi, Mehmet

    2015-09-01

    We describe a case of mitral valve annular dilatation caused by a huge left atrial myxoma obstructing the mitral valve orifice. A 50-year-old man presenting with palpitations was found to have a huge left atrial myxoma protruding into the left ventricle during diastole, causing severe mitral regurgitation. The diagnosis was made by echocardiography. Transoesophageal echocardiography revealed a solid mass of 75 × 55 mm. During the operation, the myxoma was completely removed from its attachment in the atrium. We preferred to place a mechanical heart valve rather than an annuloplasty ring because of the severely dilated mitral annulus and chordal elongation. The patient had an uneventful recovery. Our case suggests that immediate surgery and careful preoperative evaluation of the mitral valve annulus are recommended.

  13. Huge mass in right side of the heart: A rare case report.

    Science.gov (United States)

    Ghasemi, Reza; Ghanei-Motlagh, Fahimeh; Nazari, Susan; Yaghubi, Mohsen

    2016-11-01

    Primary intracardiac tumors are rare, and most of them are myxomas. In this paper, we report a case with a huge mass in the right side of the heart. A 45-year-old man with a complaint of bilateral lower limb edema and exertional dyspnea was admitted to the intensive cardiac care unit. Cardiac auscultation revealed a soft systolic murmur without any evidence of a "tumor plop." Echocardiography showed a huge mobile mass in the right side of the heart suggestive of myxoma. Our patient underwent cardiac surgery with excision of a 13 cm mass. Histopathological study confirmed the diagnosis. This case report shows that right-sided myxoma must be considered in the differential diagnosis of right-sided heart failure. The preferred approach in patients with cardiac myxoma is early identification and surgical excision to alleviate symptoms.

  14. Huge Dissected Ascending Aorta Associated with Pseudoaneurysm and Aortic Coarctation

    Directory of Open Access Journals (Sweden)

    Feridoun Sabzi

    2015-10-01

    Full Text Available We report a unique case of chronic dissection of the ascending aorta complicated by a huge thrombosed pseudoaneurysm in a patient with coarctation of the descending aorta. Preoperative investigations such as transesophageal echocardiography (TEE) confirmed the diagnosis of dissection. Intraoperative findings included a 12 cm eccentric bulge of the right lateral side of the dilated ascending aorta filled with clot, and a circular intimal tear communicating with an extended hematoma and dissection of the media layer. The rarity of this report lies in the association of chronic dissection with a huge pseudoaneurysm and coarctation. The patient underwent staged repair of the aneurysm and the coarctation and had an uneventful postoperative recovery.

  15. A new pixels flipping method for huge watermarking capacity of the invoice font image.

    Science.gov (United States)

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Xu, Qishuai; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images with huge watermarking capacity. The method comprises a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited for human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity.

  16. Air conditioning management of huge rooms; Gestion climatique des locaux de grande dimension

    Energy Technology Data Exchange (ETDEWEB)

    Guitton, P. [Electricite de France (EDF), 78 - Chatou (France)]; Izard, J.L. [Ecole d'Architecture de Marseille-Luminy, 13 - Marseille-Luminy (France)]; Wurtz, E. [La Rochelle Universite, 17 - La Rochelle, LEPTAB (France)] [and others]

    1999-09-01

    This conference was organized by the 'air-conditioning engineering' section of the French society of thermal engineers (SFT). This document comprises the abridged versions of the communications and deals with: air-conditioning using displacement: experience feedback on tertiary applications and development of a dimensioning tool; thermal response of linear atria; application of the zonal method to the description of the temperature field and flow patterns inside an auditorium; theoretical and experimental study of air renewal inside industrial rooms; management of huge rooms; design of new optimized buildings and use of the TAS software; can the TRNSYS and COMIS codes be used for huge spaces?; experimental study of the thermal-aeraulic conditions generated by a displacement air-conditioning device. (J.S.)

  17. Development of Resource Sharing System Components for AliEn Grid Infrastructure

    CERN Document Server

    Harutyunyan, Artem

    2010-01-01

    The problem of resource provision, sharing, accounting, and use represents a principal issue in contemporary scientific cyberinfrastructures. For example, collaborations in physics, astrophysics, Earth science, biology, and medicine need to store huge amounts of data (of the order of several petabytes) as well as to conduct highly intensive computations. The appropriate computing and storage capacities cannot be ensured by one (even very large) research center. The modern approach to the solution of this problem suggests exploitation of the computational and data storage facilities of the centers participating in collaborations. The most advanced implementation of this approach is based on Grid technologies, which enable effective work of the members of collaborations regardless of their geographical location. Currently there are several tens of Grid infrastructures deployed all over the world. The Grid infrastructures of the CERN Large Hadron Collider experiments - ALICE, ATLAS, CMS, and LHCb - which are exploi...

  18. Construction of customized sub-databases from NCBI-nr database for rapid annotation of huge metagenomic datasets using a combined BLAST and MEGAN approach

    OpenAIRE

    2013-01-01

    We developed a fast method to construct local sub-databases from the NCBI-nr database for quick similarity searching and annotation of huge metagenomic datasets based on the BLAST-MEGAN approach. A three-step sub-database annotation pipeline (SAP) was further proposed to conduct the annotation in a much more time-efficient way that requires far less computational capacity than the direct NCBI-nr database BLAST-MEGAN approach. The first BLAST of SAP was conducted using the original metagenomic d...
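
    The general recipe behind such sub-database construction can be sketched as follows. This is not the authors' SAP pipeline; it only assumes the standard NCBI BLAST+ command-line tools (makeblastdb, blastp) are installed, and the keyword filter, file names, and output format are placeholders.

```python
# Hedged sketch of the general idea (not the authors' SAP pipeline): carve a
# keyword-filtered subset out of a large protein FASTA file, build a local
# BLAST database from it, and query that smaller database instead of full nr.
import subprocess


def write_subset(nr_fasta: str, subset_fasta: str, keyword: str) -> int:
    """Copy only the FASTA records whose header line contains `keyword`."""
    kept = 0
    keep = False
    with open(nr_fasta) as src, open(subset_fasta, "w") as dst:
        for line in src:
            if line.startswith(">"):
                keep = keyword.lower() in line.lower()
                kept += keep
            if keep:
                dst.write(line)
    return kept


# Placeholder file names and keyword; adjust to the taxa relevant to the dataset.
n = write_subset("nr.fasta", "nr_subset.fasta", keyword="Proteobacteria")
print(f"kept {n} sequences")

# Build the local sub-database and query it with standard BLAST+ command lines.
subprocess.run(["makeblastdb", "-in", "nr_subset.fasta",
                "-dbtype", "prot", "-out", "nr_subset_db"], check=True)
subprocess.run(["blastp", "-query", "reads.faa", "-db", "nr_subset_db",
                "-outfmt", "6", "-out", "hits.tsv"], check=True)
```

    Searching a subset that is orders of magnitude smaller than full NCBI-nr is what gives the reported time savings; the tabular hits file can then be loaded into MEGAN for taxonomic or functional binning.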

  19. Stochastic Huge-Resonance Caused by Coupling for a Globally Coupled Linear System

    Institute of Scientific and Technical Information of China (English)

    LI Jing-Hui

    2009-01-01

    In this paper, we investigate a globally coupled linear system with finitely many subunits subject to a temporally periodic force and multiplicative dichotomous noise. It is shown that the global coupling among the subunits can hugely enhance the phenomenon of stochastic resonance (SR) in the amplitude of the average mean field, both as a function of the transition rate of the noise and as a function of the frequency of the signal.
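
    The abstract does not spell out the model, so the following sketch only illustrates the general setup it names: N globally coupled linear subunits with multiplicative dichotomous (telegraph) noise and a weak periodic drive, with the amplitude of the mean-field response as the observable. Every equation detail and parameter value below is an assumption, not the paper's model.

```python
# Illustrative simulation only (assumed model and parameters, not the paper's):
# N globally coupled linear units, each with multiplicative telegraph noise,
# driven by a weak periodic force; we estimate the mean-field response amplitude.
import numpy as np

N = 100                 # number of subunits
a = 1.0                 # linear damping rate
sigma = 0.8             # amplitude of the dichotomous noise (+/- sigma)
lam = 0.5               # transition rate of the noise
eps = 2.0               # global (mean-field) coupling strength
A, omega = 0.1, 0.5     # amplitude and frequency of the periodic force
dt, steps = 1e-3, 200_000

rng = np.random.default_rng(1)
x = np.zeros(N)
xi = sigma * rng.choice([-1.0, 1.0], size=N)   # initial noise states
mean_field = np.empty(steps)

for k in range(steps):
    t = k * dt
    m = x.mean()
    mean_field[k] = m
    # Euler step: linear relaxation with multiplicative telegraph noise,
    # mean-field coupling, and periodic driving.
    x += dt * (-(a + xi) * x + eps * (m - x) + A * np.cos(omega * t))
    # Each noise state flips with probability lam*dt in this time step.
    flips = rng.random(N) < lam * dt
    xi[flips] *= -1.0

# Crude response amplitude at the driving frequency (discrete Fourier component).
t_grid = dt * np.arange(steps)
amp = 2.0 * np.abs(np.mean(mean_field * np.exp(-1j * omega * t_grid)))
print(f"mean-field response amplitude ~ {amp:.4f}")
```

    Sweeping the noise transition rate or the driving frequency in this kind of simulation and plotting the response amplitude is the usual way the non-monotonic, resonance-like behaviour described in the record would be exhibited.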

  20. Therapeutic benefit of radiotherapy in huge (≥10 cm) unresectable hepatocellular carcinoma.

    Science.gov (United States)

    Kim, Kyung Hwan; Kim, Mi Sun; Chang, Jee Suk; Han, Kwang-Hyub; Kim, Do Young; Seong, Jinsil

    2014-05-01

    Huge (≥10 cm) hepatocellular carcinomas (HCCs) show a dismal prognosis, and only a limited number of cases are eligible for curative resection. We studied the therapeutic benefit of radiotherapy (RT) in patients with huge unresectable HCCs. Data from 283 patients with huge HCCs and preserved liver function who underwent non-surgical treatment from July 2001 to March 2012 were retrospectively reviewed. Patients were divided into 4 groups according to the initial treatment: Group A (n = 49), transarterial chemoembolization (TACE); Group B (n = 35), TACE + RT; Group C (n = 50), hepatic arterial infusion chemotherapy; and Group D (n = 149), concurrent chemoradiotherapy (CCRT). The median follow-up period was 27.8 months (range, 12.9-121.9 months). The median overall survival (OS) was longer in Groups B (15.3 months) and D (12.8 months) than in Groups A (7.5 months) and C (8.2 months; Group B vs. A, Bonferroni-corrected P [P(c)] = 0.04; Group B vs. C, P(c) = 0.02; Group D vs. A, P(c) = 0.01; Group D vs. C, P(c) = 0.006). Groups B and D also showed superior progression-free survival (PFS) and intrahepatic control compared with Groups A and C. In multivariate analysis, tumour multiplicity, serum alpha-foetoprotein level (≥200 ng/ml), and initial treatment were independent prognostic factors for OS and PFS. Patients with huge unresectable HCCs treated with RT, either as CCRT or in combination with TACE, showed excellent intrahepatic control and prolonged survival. RT could be considered a promising treatment modality in these patients. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.