WorldWideScience

Sample records for WLCG common computing

  1. LHC computing (WLCG) past, present, and future

    CERN Document Server

    Bird, I G

    2016-01-01

    The LCG project, and the WLCG Collaboration, represent a more than 10-year investment in building and operating the LHC computing environment. This article gives some of the history of how the WLCG was constructed and the preparations for the accelerator start-up. It will discuss the experiences and lessons learned during the first 3-year run of the LHC, and will conclude with a look forward to the planned upgrades of the LHC and the experiments, discussing the implications for computing.

  2. WLCG Operations portal demo tutorial

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    This is a navigation through the Worldwide LHC Computing Grid (WLCG) Operations portal, http://wlcg-ops.web.cern.ch/. In this portal you will find documentation and information about WLCG operations activities for: system administrators at the WLCG sites, the LHC experiments, and operations coordination people, including Task Forces and Working Groups.

  3. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and the registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of the production scale by about an order of magnitude, capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day.

  4. Optimising costs in WLCG operations

    CERN Document Server

    Pradillo, Mar; Flix, Josep; Forti, Alessandra; Sciabà, Andrea

    2015-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the 50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several improvements in the WLCG infrastructure have been implemented during the first long LHC shutdown to prepare for the increasing needs of the experiments during Run2 and beyond. However, constraints in funding will affect not only the computing resources but also the available effort for operations. This paper presents the results of a detailed investigation on the allocation of the effort in the different areas of WLCG operations, identifies the most important sources of inefficiency and proposes viable strategies for optimising the operational cost, taking into account the current trends in the evolution of the computing infrastruc...

  5. Job monitoring on the WLCG scope: Current status and new strategy

    International Nuclear Information System (INIS)

    Andreeva, Julia; Casey, James; Gaidioz, Benjamin; Karavakis, Edward; Kokoszkiewicz, Lukasz; Lanciotti, Elisa; Maier, Gerhild; Rodrigues, Daniele Filipe Rocha Da Cuhna; Rocha, Ricardo; Saiz, Pablo; Sidorova, Irina; Boehm, Max; Belov, Sergey; Tikhonenko, Elena; Dvorak, Frantisek; Krenek, Ales; Mulac, Milas; Sitera, Jiri; Kodolova, Olga; Vaibhav, Kumar

    2010-01-01

    Job processing and data transfer are the main computing activities on the WLCG infrastructure. Reliable monitoring of job processing at the WLCG scope is a complicated task due to the complexity of the infrastructure itself and the diversity of the job submission methods currently in use. The paper will describe the current status of and the new strategy for job monitoring at the WLCG scope, covering primary information sources, the publishing of job status changes, the transport mechanism and visualization.

  6. Web Proxy Auto Discovery for the WLCG

    Science.gov (United States)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. The responses
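
    As an illustration of the discovery flow described above (not code from the paper; the WPAD URL and the PAC contents are hypothetical), the sketch below fetches a PAC file from a WPAD-style URL and lists the proxy entries a Frontier or CVMFS client would then try in order.

    ```python
    # Minimal sketch of WPAD-style proxy discovery, assuming a conventional
    # wpad.dat endpoint; the URL and the parsing are illustrative only.
    import re
    import urllib.request

    WPAD_URL = "http://wpad.example.org/wpad.dat"  # hypothetical WPAD server

    def discover_proxies(url: str = WPAD_URL) -> list[str]:
        """Download a PAC file and return the proxy endpoints it mentions."""
        with urllib.request.urlopen(url, timeout=10) as response:
            pac_text = response.read().decode("utf-8", errors="replace")
        # PAC files return strings such as "PROXY squid1.site.org:3128; DIRECT";
        # pull out the PROXY host:port entries in the order they appear.
        return re.findall(r"PROXY\s+([\w.\-]+:\d+)", pac_text)

    if __name__ == "__main__":
        for proxy in discover_proxies():
            print("candidate proxy:", proxy)
    ```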

  7. Web proxy auto discovery for the WLCG

    CERN Document Server

    Dykstra, D; Blumenfeld, B; De Salvo, A; Dewhurst, A; Verguilov, V

    2017-01-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids regis...

  8. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    CERN Document Server

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies, each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities mon...

  9. Next generation WLCG File Transfer Service (FTS)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    LHC experiments at CERN and worldwide utilize WLCG resources and middleware components to perform distributed computing tasks. One of the most important tasks is reliable file replication. It is a complex problem, suffering from transfer failures, disconnections, transfer duplication, server and network overload, differences in storage systems, etc. To address these problems, EMI and gLite have provided the independent File Transfer Service (FTS) and Grid File Access Library (GFAL) tools. Their development started almost a decade ago; in the meantime, requirements in data management have changed, and the old architecture of FTS and GFAL cannot easily support these changes. Technology has also been progressing: FTS and GFAL do not fit into the new paradigms (cloud and messaging, for example). To be able to serve the next stage of LHC data collecting (from 2013), we need a new generation of these tools: FTS 3 and GFAL 2. We envision a service requiring minimal configuration, which can dynamically adapt to the...

  10. A Messaging Infrastructure for WLCG

    International Nuclear Information System (INIS)

    Casey, James; Cons, Lionel; Lapka, Wojciech; Paladin, Massimo; Skaburskas, Konstantin

    2011-01-01

    During the EGEE-III project, operational tools such as SAM, Nagios, Gridview, the regional Dashboard and GGUS moved to a communication architecture based on ActiveMQ, an open-source enterprise messaging solution. LHC experiments, in particular ATLAS, developed prototypes of systems using the same messaging infrastructure, validating the system for their use-cases. In this paper we describe the WLCG messaging use cases and outline an improved messaging architecture based on the experience gained during the EGEE-III period. We show how this provides a solid basis for many applications, including the grid middleware, to improve their resilience and reliability.
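
    For illustration only, a minimal publish/subscribe round trip over STOMP, the protocol spoken by ActiveMQ brokers, might look like the sketch below; it assumes the third-party stomp.py library (version 5 or later), and the broker address, credentials and destination are placeholders rather than the production WLCG brokers.

    ```python
    # Illustrative STOMP publish/subscribe against an ActiveMQ-style broker,
    # using stomp.py >= 5; host, credentials and destination are placeholders,
    # not the production WLCG brokers.
    import time
    import stomp

    BROKER = [("mq.example.org", 61613)]         # hypothetical broker endpoint
    DESTINATION = "/topic/grid.monitoring.test"  # hypothetical destination

    class PrintingListener(stomp.ConnectionListener):
        def on_message(self, frame):
            print("received:", frame.body)

    conn = stomp.Connection(BROKER)
    conn.set_listener("printer", PrintingListener())
    conn.connect("user", "password", wait=True)
    conn.subscribe(destination=DESTINATION, id="1", ack="auto")
    conn.send(destination=DESTINATION, body='{"service": "SAM", "status": "OK"}')
    time.sleep(2)   # give the broker time to echo the message back
    conn.disconnect()
    ```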

  11. Providing global WLCG transfer monitoring

    International Nuclear Information System (INIS)

    Andreeva, J; Dieguez Arias, D; Campana, S; Keeble, O; Magini, N; Molnar, Z; Ro, G; Saiz, P; Salichos, M; Tuckett, D; Flix, J; Oleynik, D; Petrosyan, A; Uzhinsky, A; Wildish, T

    2012-01-01

    The WLCG[1] Transfers Dashboard is a monitoring system which aims to provide a global view of WLCG data transfers and to reduce redundancy in monitoring tasks performed by the LHC experiments. The system is designed to work transparently across LHC experiments and across the various technologies used for data transfer. Currently each LHC experiment monitors data transfers via experiment-specific systems but the overall cross-experiment picture is missing. Even for data transfers handled by FTS, which is used by 3 LHC experiments, monitoring tasks such as aggregation of FTS transfer statistics or estimation of transfer latencies are performed by every experiment separately. These tasks could be performed once, centrally, and then served to all experiments via a well-defined set of APIs. In the design and development of the new system, experience accumulated by the LHC experiments in the data management monitoring area is taken into account and a considerable part of the code of the ATLAS DDM Dashboard is being re-used. The paper describes the architecture of the Global Transfer monitoring system, the implementation of its components and the first prototype.

  12. x509-free access to WLCG resources

    OpenAIRE

    Short, H; Manzi, A; De Notaris, V; Keeble, O; Kiryanov, A; Mikkonen, H; Tedesco, P; Wartel, R

    2017-01-01

    Access to WLCG resources is authenticated using an x509 and PKI infrastructure. Even though HEP users have always been exposed to certificates directly, the development of modern Web Applications by the LHC experiments calls for simplified authentication processes keeping the underlying software unmodified. In this work we will show a solution with the goal of providing access to WLCG resources using the user’s home organisations credentials, without the need for user-acquired x509 certificat...

  13. Monitoring the EGEE/WLCG grid services

    International Nuclear Information System (INIS)

    Duarte, A; Nyczyk, P; Retico, A; Vicinanza, D

    2008-01-01

    Grids have the potential to revolutionise computing by providing ubiquitous, on-demand access to computational services and resources. They promise to allow for on-demand access and composition of computational services provided by multiple independent sources. Grids can also provide unprecedented levels of parallelism for high-performance applications. On the other hand, grid characteristics, such as high heterogeneity, complexity and distribution, create many new technical challenges. Among these technical challenges, failure management is a key area that demands much progress. A recent survey revealed that fault diagnosis is still a major problem for grid users. When a failure appears on the user's screen, it becomes very difficult for the user to identify whether the problem is in the application, somewhere in the grid middleware, or even lower in the fabric that comprises the grid. In this paper we present a tool able to check if a given grid service works as expected for a given set of users (Virtual Organisation) on the different resources available on a grid. Our solution deals with grid services as single components that should produce an expected output for a pre-defined input, which is quite similar to unit testing. The tool, called Service Availability Monitoring or SAM, is currently being used by several different Virtual Organizations to monitor more than 300 grid sites belonging to the largest grids available today. We also discuss how this tool is being used by some of those VOs and how it is helping in the operation of the EGEE/WLCG grid.
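
    The "expected output for a pre-defined input" idea is easy to illustrate with a toy probe; the sketch below is not SAM code, and the endpoint and expected reply are invented placeholders.

    ```python
    # Toy availability probe in the spirit of SAM-style tests: send a known
    # request to a service and compare the reply against the expected output.
    # The endpoint and expected content are hypothetical placeholders.
    import sys
    import urllib.request

    ENDPOINT = "https://se.example-site.org/ping"   # hypothetical service endpoint
    EXPECTED = b"OK"                                # expected reply when healthy

    def probe(url: str = ENDPOINT) -> int:
        """Return a Nagios-like exit code: 0=OK, 1=WARNING, 2=CRITICAL."""
        try:
            with urllib.request.urlopen(url, timeout=15) as response:
                body = response.read()
        except Exception as exc:
            print(f"CRITICAL: {url} unreachable ({exc})")
            return 2
        if EXPECTED in body:
            print(f"OK: {url} answered as expected")
            return 0
        print(f"WARNING: {url} reachable but reply differs from expected output")
        return 1

    if __name__ == "__main__":
        sys.exit(probe())
    ```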

  14. The WLCG Messaging Service and its Future

    International Nuclear Information System (INIS)

    Cons, Lionel; Paladin, Massimo

    2012-01-01

    Enterprise messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiments dashboards. The production messaging service currently used by WLCG includes four tightly coupled brokers operated by EGI (running Apache ActiveMQ and designed to host the Grid operational tools such as SAM) as well as two dedicated services for ATLAS-DDM and experiments dashboards (currently also running Apache ActiveMQ). In the future, this service is expected to grow in the number of supported applications, brokers and technologies. The WLCG Messaging Roadmap identified three areas with room for improvement (security, scalability and availability/reliability) as well as ten practical recommendations to address them. This paper describes a messaging service architecture that is in line with these recommendations as well as a software architecture based on reusable components that ease interactions with the messaging service. These two architectures will support the growth of the WLCG messaging service.

  15. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale

    International Nuclear Information System (INIS)

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Suthakar, U; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies, each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file format (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof of concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures. (paper)
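
    As a rough sketch of the batch layer described above (the input path and column names are assumptions, not the actual Dashboard schema), a Spark job aggregating transfer logs per site pair could look like this:

    ```python
    # Illustrative PySpark batch job over CSV transfer logs; the input path and
    # the columns (src_site, dst_site, bytes, status) are assumed for the sketch
    # and do not reflect the actual WLCG monitoring schema.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("transfer-log-aggregation").getOrCreate()

    logs = spark.read.csv("hdfs:///monitoring/transfers/*.csv",
                          header=True, inferSchema=True)

    summary = (
        logs.groupBy("src_site", "dst_site")
            .agg(
                F.count("*").alias("transfers"),
                F.sum("bytes").alias("bytes_moved"),
                F.avg((F.col("status") == "FAILED").cast("int")).alias("failure_rate"),
            )
    )

    summary.orderBy(F.desc("bytes_moved")).show(20, truncate=False)
    spark.stop()
    ```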

  16. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    Science.gov (United States)

    Magnoni, L.; Suthakar, U.; Cordeiro, C.; Georgiou, M.; Andreeva, J.; Khan, A.; Smith, D. R.

    2015-12-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies, each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file format (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof of concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures.

  17. Data management in WLCG and EGEE

    CERN Document Server

    Donno, Flavia; CERN. Geneva. IT Department

    2008-01-01

    This work is a contribution to a book on Scientific Data Management by CRC Press/Taylor and Francis Books. Data Management and Storage Access experience in WLCG is described together with the major use cases. Furthermore, some considerations about the EGEE requirements are also reported.

  18. Processing of the WLCG monitoring data using NoSQL

    Science.gov (United States)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
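
    A minimal sketch of the kind of document-oriented storage and aggregation meant here, using MongoDB purely as an example NoSQL backend (the Experiment Dashboard does not necessarily use MongoDB, and the field names are invented):

    ```python
    # Illustrative NoSQL usage with MongoDB via pymongo: insert monitoring events
    # as documents and aggregate completed jobs per site. MongoDB is only an
    # example backend and the document fields are invented for the sketch.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # hypothetical instance
    events = client["monitoring"]["job_events"]

    events.insert_many([
        {"site": "T1_EXAMPLE", "status": "completed", "wall_time": 3600},
        {"site": "T1_EXAMPLE", "status": "failed",    "wall_time": 120},
        {"site": "T2_EXAMPLE", "status": "completed", "wall_time": 5400},
    ])

    pipeline = [
        {"$match": {"status": "completed"}},
        {"$group": {"_id": "$site",
                    "completed_jobs": {"$sum": 1},
                    "total_wall_time": {"$sum": "$wall_time"}}},
    ]
    for row in events.aggregate(pipeline):
        print(row)
    ```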

  19. Processing of the WLCG monitoring data using NoSQL

    International Nuclear Information System (INIS)

    Andreeva, J; Beche, A; Karavakis, E; Saiz, P; Tuckett, D; Belov, S; Kadochnikov, I; Schovancova, J; Dzhunov, I

    2014-01-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  20. WLCG transfers dashboard: a unified monitoring tool for heterogeneous data transfers

    International Nuclear Information System (INIS)

    Andreeva, J; Beche, A; Saiz, P; Tuckett, D; Belov, S; Kadochnikov, I

    2014-01-01

    The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large, the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the heterogeneity of data transfer technologies. Currently there are two main alternatives for data transfers on the WLCG: File Transfer Service and XRootD protocol. Each LHC VO has its own monitoring system which is limited to the scope of that particular VO. There is a need for a global system which would provide a complete cross-VO and cross-technology picture of all WLCG data transfers. We present a unified monitoring tool – WLCG Transfers Dashboard – where all the VOs and technologies coexist and are monitored together. The scale of the activity and the heterogeneity of the system raise a number of technical challenges. Each technology comes with its own monitoring specificities and some of the VOs use several of these technologies. This paper describes the implementation of the system with particular focus on the design principles applied to ensure the necessary scalability and performance, and to easily integrate any new technology providing additional functionality which might be specific to that technology.

  1. WLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers

    Science.gov (United States)

    Andreeva, J.; Beche, A.; Belov, S.; Kadochnikov, I.; Saiz, P.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large, the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the heterogeneity of data transfer technologies. Currently there are two main alternatives for data transfers on the WLCG: File Transfer Service and XRootD protocol. Each LHC VO has its own monitoring system which is limited to the scope of that particular VO. There is a need for a global system which would provide a complete cross-VO and cross-technology picture of all WLCG data transfers. We present a unified monitoring tool - WLCG Transfers Dashboard - where all the VOs and technologies coexist and are monitored together. The scale of the activity and the heterogeneity of the system raise a number of technical challenges. Each technology comes with its own monitoring specificities and some of the VOs use several of these technologies. This paper describes the implementation of the system with particular focus on the design principles applied to ensure the necessary scalability and performance, and to easily integrate any new technology providing additional functionality which might be specific to that technology.

  2. Experience commissioning the ATLAS distributed data management system on top of the WLCG service

    International Nuclear Information System (INIS)

    Campana, S

    2010-01-01

    The ATLAS experiment at CERN developed an automated system for the distribution of simulated and detector data. This system, which partially consists of various ATLAS-specific services, strongly relies on the WLCG infrastructure at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a dedicated effort was put in place to deliver a reliable service for ATLAS data distribution, offering the necessary performance and high availability and accommodating the main use cases. This contribution will describe the various challenges and activities carried out in 2008 for the commissioning of the system, together with the experience of distributing simulated data and detector data. The main commissioning activity was concentrated in two Combined Computing Resource Challenges, in February and May 2008, where it was demonstrated that the WLCG service and the ATLAS system could sustain the peak load of data transfer according to the computing model, for several days in a row, concurrently with other LHC experiment activities. This dedicated effort led to consequent improvements of ATLAS and WLCG services and to daily operation activities throughout the last year. The system has been delivering many hundreds of terabytes of simulated data to WLCG tiers and, since the summer of 2008, more than two petabytes of cosmic and beam data.

  3. Analyzing data flows of WLCG jobs at batch job level

    Science.gov (United States)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-05-01

    With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses and firewall configurations, as well as the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long-term measurements and data collection are in progress. These measurements have already proven useful for analyzing misbehaviors and various issues. Therefore we aim for an automated, real-time approach for anomaly detection. As a requirement, prototypes for standard workflows have to be examined. Based on measurements of several months, different features of HEP jobs are evaluated regarding their effectiveness for data mining approaches to identify these common workflows. The paper will introduce the actual measurement approach and statistics as well as the general concept and first results classifying different HEP job workflows derived from the measurements at GridKa.

  4. The WLCG Messaging Service and its Future

    CERN Document Server

    Cons, Lionel

    2012-01-01

    Enterprise messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiments dashboards. The production messaging service currently used by WLCG includes four tightly coupled brokers operated by EGI (running Apache ActiveMQ and designed to host the Grid operational tools such as SAM) as well as two dedicated services for ATLAS-DDM and experiments dashboards (currently also running Apache ActiveMQ). In the future, this service is expected to grow in numbers of applications supported, brokers and technologies. The WLCG Messaging Roadmap identified three areas with room for improvement (security, scalability and availability/reliability) as well as ten practical recommendations to address them. This paper describes a messaging service architecture that is in line with these recommendations as well as a software architecture based on reusable components that ease interactions with the messaging service. These two architectures wil...

  5. Evolution of Database Replication Technologies for WLCG

    OpenAIRE

    Baranowski, Zbigniew; Pardavila, Lorena Lobato; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-01-01

    In this article we summarize several years of experience on database replication technologies used at WLCG and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvement in this area in recent past has been the introduction of Oracle GoldenGate as a replacement of Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 databas...

  6. x509-free access to WLCG resources

    Science.gov (United States)

    Short, H.; Manzi, A.; De Notaris, V.; Keeble, O.; Kiryanov, A.; Mikkonen, H.; Tedesco, P.; Wartel, R.

    2017-10-01

    Access to WLCG resources is authenticated using an x509 and PKI infrastructure. Even though HEP users have always been exposed to certificates directly, the development of modern Web Applications by the LHC experiments calls for simplified authentication processes keeping the underlying software unmodified. In this work we will show a solution with the goal of providing access to WLCG resources using the user’s home organisation’s credentials, without the need for user-acquired x509 certificates. In particular, we focus on identity providers within eduGAIN, which interconnects research and education organisations worldwide, and enables the trustworthy exchange of identity-related information. eduGAIN has been integrated at CERN in the SSO infrastructure so that users can authenticate without the need for a CERN account. This solution achieves x509-free access to Grid resources with the help of two services: STS and an online CA. The STS (Security Token Service) allows credential translation from the SAML2 format used by Identity Federations to the VOMS-enabled x509 used by most of the Grid. The IOTA CA (Identifier-Only Trust Assurance Certification Authority) is responsible for the automatic issuing of short-lived x509 certificates. The IOTA CA deployed at CERN has been accepted by EUGridPMA as the CERN LCG IOTA CA, included in the IGTF trust anchor distribution and installed by the sites in WLCG. We will also describe the first pilot projects which are integrating the solution.
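
    As a small aside on the short-lived certificates mentioned above (this is not STS or IOTA CA code, and the file path is a placeholder), the validity window of such a certificate can be inspected with the cryptography library:

    ```python
    # Inspect the validity window of a (short-lived) x509 certificate using the
    # cryptography library; the PEM path is a placeholder for illustration.
    from datetime import timezone
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    PEM_PATH = "usercert.pem"   # hypothetical short-lived certificate

    with open(PEM_PATH, "rb") as handle:
        cert = x509.load_pem_x509_certificate(handle.read(), default_backend())

    not_before = cert.not_valid_before.replace(tzinfo=timezone.utc)
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)

    print("subject :", cert.subject.rfc4514_string())
    print("issuer  :", cert.issuer.rfc4514_string())
    print("lifetime:", not_after - not_before)
    ```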

  7. WLCG and IPv6 - the HEPiX IPv6 working group

    Science.gov (United States)

    Campana, S.; Chadwick, K.; Chen, G.; Chudoba, J.; Clarke, P.; Eliáš, M.; Elwell, A.; Fayer, S.; Finnern, T.; Goossens, L.; Grigoras, C.; Hoeft, B.; Kelsey, D. P.; Kouba, T.; López Muñoz, F.; Martelli, E.; Mitchell, M.; Nairz, A.; Ohrenberg, K.; Pfeiffer, A.; Prelz, F.; Qi, F.; Rand, D.; Reale, M.; Rozsa, S.; Sciaba, A.; Voicu, R.; Walker, C. J.; Wildish, T.

    2014-06-01

    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). This paper describes the work done by the working group and its future plans.
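
    A quick way to see what dual-stack readiness means for a single endpoint is to ask the resolver for both address families; the host name in the sketch below is a placeholder.

    ```python
    # Check whether a service endpoint resolves over both IPv4 and IPv6 by
    # querying each address family; the host name is a placeholder.
    import socket

    HOST, PORT = "service.example.org", 443   # hypothetical dual-stack service

    def addresses(family: int) -> list[str]:
        try:
            infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
            return sorted({info[4][0] for info in infos})
        except socket.gaierror:
            return []

    ipv4 = addresses(socket.AF_INET)
    ipv6 = addresses(socket.AF_INET6)
    print("IPv4 addresses:", ipv4 or "none")
    print("IPv6 addresses:", ipv6 or "none")
    print("dual-stack" if ipv4 and ipv6 else "not dual-stack")
    ```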

  8. WLCG and IPv6 – the HEPiX IPv6 working group

    International Nuclear Information System (INIS)

    Campana, S; Elwell, A; Goossens, L; Grigoras, C; Martelli, E; Nairz, A; Pfeiffer, A; Chadwick, K; Chen, G; Chudoba, J; Eliáš, M; Kouba, T; Clarke, P; Fayer, S; Finnern, T; Ohrenberg, K; Hoeft, B; Kelsey, D P; Muñoz, F López; Mitchell, M

    2014-01-01

    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). This paper describes the work done by the working group and its future plans.

  9. Lessons learnt from WLCG service deployment

    International Nuclear Information System (INIS)

    Shiers, J D

    2008-01-01

    This paper summarises the main lessons learnt from deploying WLCG production services, with a focus on reliability, scalability and accountability, which together lead to manageability and usability. Each topic is analysed in turn. Techniques for zero-user-visible downtime for the main service interventions are described, together with pathological cases that need special treatment. The requirements in terms of scalability are analysed, calling for as much robustness and automation in the service as possible. The different aspects of accountability, which covers measuring, tracking, logging and monitoring what is going on, and what has gone on, are examined, with the goal of attaining a manageable service. Finally, a simple analogy is drawn with the Web in terms of usability: what do we need to achieve to cross the chasm from small-scale adoption to ubiquity?

  10. Evolution of Database Replication Technologies for WLCG

    CERN Document Server

    Baranowski, Zbigniew; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-01-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  11. The production deployment of IPv6 on WLCG

    Science.gov (United States)

    Bernier, J.; Campana, S.; Chadwick, K.; Chudoba, J.; Dewhurst, A.; Eliáš, M.; Fayer, S.; Finnern, T.; Grigoras, C.; Hartmann, T.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Macmahon, E.; Martelli, E.; Millar, A. P.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Voicu, R.; Walker, C. J.; Wildish, T.

    2015-12-01

    The world is rapidly running out of IPv4 addresses; the number of IPv6 end systems connected to the internet is increasing; WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only an IPv6 routable address. The HEPiX IPv6 Working Group has been investigating, testing and planning for dual-stack services on WLCG for several years. Following feedback from our working group, many of the storage technologies in use on WLCG have recently been made IPv6-capable. This paper presents the IPv6 requirements, tests and plans of the LHC experiments together with the tests performed on the group's IPv6 test-bed. This is primarily aimed at IPv6-only worker nodes or VMs accessing several different implementations of a global dual-stack federated storage service. Finally the plans for deployment of production dual-stack WLCG services are presented.

  12. Sustainable support for WLCG through the EGI distributed infrastructure

    International Nuclear Information System (INIS)

    Antoni, Torsten; Bozic, Stefan; Reisser, Sabine

    2011-01-01

    Grid computing is now in a transition phase from development in research projects to routine usage in a sustainable infrastructure. This is mirrored in Europe by the transition from the series of EGEE projects to the European Grid Initiative (EGI). EGI aims at establishing a self-sustained grid infrastructure across Europe. The main building blocks of EGI are the national grid initiatives in the participating countries and a central coordinating institution (EGI.eu). The middleware used is provided by consortia outside of EGI. Also the user communities are organized separately from EGI. The transition to a self-sustained grid infrastructure is aided by the EGI-InSPIRE project, which aims at reducing the project funding needed to run EGI over the course of its four-year duration. Providing user support in this framework poses new technical and organisational challenges, as it has to cross the boundaries of various projects and infrastructures. The EGI user support infrastructure is built around the Global Grid User Support system (GGUS), which was also the basis of user support in EGEE. Utmost care was taken that, during the transition from EGEE to EGI, support services already used in production were not perturbed. A year into the EGI-InSPIRE project, in this paper we present the current status of the user support infrastructure provided by EGI for WLCG, the new features that were needed to match the new infrastructure, and the issues and challenges that occurred during the transition, and give an outlook on future plans and developments.

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  14. A Voyage to Arcturus: A model for automated management of a WLCG Tier-2 facility

    International Nuclear Information System (INIS)

    Roy, Gareth; Crooks, David; Mertens, Lena; Mitchell, Mark; Skipsey, Samuel Cadellin; Britton, David; Purdie, Stuart

    2014-01-01

    With the current trend towards 'On Demand Computing' in big data environments, it is crucial that the deployment of services and resources becomes increasingly automated. Deployment based on cloud platforms is available for large-scale data centre environments, but these solutions can be too complex and heavyweight for smaller, resource-constrained WLCG Tier-2 sites. Along with a greater desire for bespoke monitoring and collection of Grid-related metrics, a more lightweight and modular approach is desired. In this paper we present a model for a lightweight automated framework which can be used to build WLCG grid sites, based on 'off the shelf' software components. As part of the research into an automation framework, the use of both IPMI and SNMP for physical device management will be included, as well as the use of SNMP as a monitoring/data-sampling layer such that more comprehensive decision making can take place and potentially be automated. This could lead to reduced downtime and better performance as services are recognised to be in a non-functional state by autonomous systems.

  15. WLCG Operations and the First Prolonged LHC Run

    CERN Document Server

    Girone, M; CERN. Geneva. IT Department

    2011-01-01

    By the time of CHEP 2010 we had accumulated just over 6 months’ experience with proton-proton data taking, production and analysis at the LHC. This paper addresses the issues seen from the point of view of the WLCG Service. In particular, it answers the following questions: Did the WLCG service deliver quantitatively and qualitatively? Were the "key performance indicators" a reliable and accurate measure of the service quality? Were the inevitable service issues resolved in a sufficiently rapid fashion? What are the key areas of improvement required, not only for long-term sustainable operations, but also to embrace new technologies? It concludes with a summary of our readiness for data taking in the light of real experience.

  16. WLCG Operations and the First Prolonged LHC Run

    International Nuclear Information System (INIS)

    Girone, M; Shiers, J

    2011-01-01

    By the time of CHEP 2010 we had accumulated just over 6 months' experience with proton-proton data taking, production and analysis at the LHC. This paper addresses the issues seen from the point of view of the WLCG Service. In particular, it answers the following questions: Did the WLCG service deliver quantitatively and qualitatively? Were the 'key performance indicators' a reliable and accurate measure of the service quality? Were the inevitable service issues resolved in a sufficiently rapid fashion? What are the key areas of improvement required, not only for long-term sustainable operations, but also to embrace new technologies? It concludes with a summary of our readiness for data taking in the light of real experience.

  17. Evaluation of ZFS as an efficient WLCG storage backend

    Science.gov (United States)

    Ebert, M.; Washbrook, A.

    2017-10-01

    A ZFS-based software RAID system was tested for performance against a hardware RAID system providing storage based on the traditional Linux file systems XFS and EXT4. These tests were done for a healthy RAID array as well as for a degraded RAID array and during the rebuild of a RAID array. It was found that ZFS performs better in almost all test scenarios. In addition, distinct features of ZFS were tested for WLCG data storage use, such as compression and higher RAID levels with triple redundancy information. The long-term reliability was observed after converting all production storage servers at the Edinburgh WLCG Tier-2 site to ZFS, resulting in about 1.2 PB of ZFS-based storage at this site.
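
    For context, the pool layout and compression settings evaluated in such tests can be created with a handful of standard ZFS commands; the sketch below simply wraps them, with raidz3 standing in for the triple-redundancy configuration mentioned above (pool and device names are placeholders, and this is not the Edinburgh setup).

    ```python
    # Illustrative wrapper around standard ZFS commands to build a raidz3 pool
    # (triple parity) with lz4 compression; pool and device names are placeholders.
    # Run as root on a test machine only.
    import subprocess

    POOL = "wlcgdata"                                      # hypothetical pool name
    DEVICES = [f"/dev/sd{letter}" for letter in "bcdefg"]  # placeholder devices

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["zpool", "create", "-f", POOL, "raidz3", *DEVICES])
    run(["zfs", "set", "compression=lz4", POOL])
    run(["zpool", "status", POOL])
    ```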

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  19. A scalable architecture for online anomaly detection of WLCG batch jobs

    Science.gov (United States)

    Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.

    2016-10-01

    For data centres it is increasingly important to monitor the network usage, and learn from network usage patterns. In particular, configuration issues or misbehaving batch jobs preventing smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate a tool, BPNetMon, for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information by itself is not sufficient to detect anomalies, for several reasons: e.g. the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale regarding network communication as well as computational costs. We therefore propose a scalable architecture based on concepts of a super-peer network.
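
    To make the super-peer idea concrete, here is a deliberately simplified sketch (all names, numbers and thresholds are invented, and this is not BPNetMon code) in which a worker node forwards only a compact per-job traffic summary and a super-peer flags summaries that deviate strongly from the running baseline.

    ```python
    # Simplified illustration of the super-peer idea: worker nodes send compact
    # per-job traffic summaries; a super-peer flags outliers against a running
    # baseline. All names, numbers and thresholds are invented for the sketch.
    import random
    from dataclasses import dataclass
    from statistics import mean, pstdev

    @dataclass
    class JobSummary:
        job_id: str
        bytes_in: float
        bytes_out: float

    class SuperPeer:
        """Aggregates summaries from many workers and flags anomalies."""

        def __init__(self, z_threshold: float = 3.0):
            self.z_threshold = z_threshold
            self.history: list[float] = []

        def receive(self, summary: JobSummary) -> bool:
            total = summary.bytes_in + summary.bytes_out
            is_anomaly = False
            if len(self.history) >= 30:                  # need a baseline first
                mu, sigma = mean(self.history), pstdev(self.history)
                if sigma > 0 and abs(total - mu) / sigma > self.z_threshold:
                    is_anomaly = True
            self.history.append(total)
            return is_anomaly

    peer = SuperPeer()
    for i in range(100):
        base = 1e6 * (1 + random.uniform(-0.1, 0.1))     # typical job traffic
        traffic = base if i != 70 else 5e8               # one synthetic outlier
        if peer.receive(JobSummary(f"job-{i}", traffic, traffic / 10)):
            print(f"anomalous traffic reported for job-{i}")
    ```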

  20. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    International Nuclear Information System (INIS)

    Lanciotti, E; Merino, G; Blomer, J; Bria, A

    2011-01-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier 1 site of WLCG. The test bed used and the results are presented in this paper.
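
    A worker node's view of the CernVM-FS model is simple to check; in the sketch below the repository name is only an example, and cvmfs_config probe is the standard client-side check shipped with CVMFS.

    ```python
    # Check that a CernVM-FS repository is visible on a worker node; the
    # repository name is an example, and `cvmfs_config probe` is the standard
    # client-side check shipped with the CVMFS client.
    import os
    import subprocess

    REPOSITORY = "atlas.cern.ch"          # example repository
    MOUNTPOINT = f"/cvmfs/{REPOSITORY}"

    if os.path.isdir(MOUNTPOINT):
        print(f"{MOUNTPOINT} is visible; sample entries:", os.listdir(MOUNTPOINT)[:5])
    else:
        print(f"{MOUNTPOINT} not mounted (autofs may mount it on first access)")

    # Probe the configured repositories through the CVMFS client itself.
    result = subprocess.run(["cvmfs_config", "probe", REPOSITORY],
                            capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())
    ```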

  1. Geographical failover for the EGEE-WLCG grid collaboration tools

    International Nuclear Information System (INIS)

    Cavalli, A; Pagano, A; Aidel, O; L'Orphelin, C; Mathieu, G; Lichwala, R

    2008-01-01

    Worldwide grid projects such as EGEE and WLCG need services with high availability, not only for grid usage, but also for associated operations. In particular, tools used for daily activities or operational procedures are considered to be critical. The operations activity of EGEE relies on many tools developed by teams from different countries. For each tool, only one instance was originally deployed, thus each representing a single point of failure. In this context, the EGEE failover problem was solved by replicating tools at different sites, using specific DNS features to automatically fail over to a given service. A new domain for grid operations (gridops.org) was registered and deployed following DNS testing in a virtual machine (VM) environment using nsupdate, NS/zone configuration and fast TTLs. In addition, replication of databases, web servers and web services has been tested and configured. In this paper, we describe the technical mechanism used in our approach to replication and failover. We also describe the procedure implemented for the EGEE/WLCG CIC Operations Portal use case. Furthermore, we present the interest in failover procedures in the context of other grid projects and grid services. Future plans for improvements of the procedures are also described.
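
    The nsupdate-based switch described here boils down to rewriting an A record with a short TTL when a health check fails; the sketch below shows the kind of dynamic update a failover script might send (server, zone, host names, key file and addresses are placeholders, not the gridops.org configuration).

    ```python
    # Illustrative DNS failover: if the primary instance stops answering, rewrite
    # the service A record (with a short TTL) to point at the replica, using the
    # standard nsupdate tool. Server, zone, names, key file and addresses are
    # placeholders for the sketch.
    import subprocess
    import urllib.request

    PRIMARY_URL = "http://portal.gridops.example.org/health"   # hypothetical check
    REPLICA_IP = "192.0.2.20"                                   # documentation-range IP

    def primary_is_healthy() -> bool:
        try:
            with urllib.request.urlopen(PRIMARY_URL, timeout=10) as response:
                return response.status == 200
        except Exception:
            return False

    def point_record_at(ip: str) -> None:
        commands = "\n".join([
            "server ns1.example.org",
            "zone gridops.example.org",
            "update delete portal.gridops.example.org. A",
            f"update add portal.gridops.example.org. 60 A {ip}",   # 60 s TTL
            "send",
            "",
        ])
        subprocess.run(["nsupdate", "-k", "/etc/gridops.key"],
                       input=commands, text=True, check=True)

    if not primary_is_healthy():
        point_record_at(REPLICA_IP)
    ```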

  2. Test Anxiety, Computer-Adaptive Testing and the Common Core

    Science.gov (United States)

    Colwell, Nicole Makas

    2013-01-01

    This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…

  3. Integrated experiment activity monitoring for wLCG sites based on GWT

    International Nuclear Information System (INIS)

    Feijóo, Alejandro Guinó; Espinal, Xavier

    2011-01-01

    The goal of this work is to develop a High Level Monitoring (HLM) system in which to merge the distributed computing activities of an LHC experiment (ATLAS). ATLAS distributed computing is organized in clouds, where the Tier-1s (primary centres) provide services to the associated Tier-2 centres (secondaries), so they are all seen as a cloud by the experiment. Computing activity and site stability monitoring services are numerous and delocalized. It would be very useful for a cloud manager to have a single place in which to aggregate the available monitoring information. The idea presented in this paper is to develop a set of collectors to gather information regarding site status and performance on data distribution, data processing and Worldwide LHC Computing Grid (WLCG) tests (Service Availability Monitoring), store it in specific databases, process the results and show them in a single HLM page. Once this is in place, one can investigate further by interacting with the front-end, which is fed by the statistics stored in the databases.

  4. Deployment of IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.

    2017-10-01

    The fraction of Internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and becoming mature. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6. There is a significant overhead when setting up and maintaining dual-stack machines, so where possible sites would like to upgrade their services directly to IPv6 only. In doing so, they are also expediting the transition process towards its desired completion. While the LHC experiments accept there is a need to move to IPv6, it is currently not directly affecting their work. Sites are unwilling to upgrade if they will be unable to run LHC experiment workflows. This has resulted in a very slow uptake of IPv6 from WLCG sites. For several years the HEPiX IPv6 Working Group has been testing a range of WLCG services to ensure they are IPv6 compliant. Several sites are now running many of their services as dual-stack. The working group, driven by the requirements of the LHC VOs to be able to use IPv6-only opportunistic resources, continues to encourage wider deployment of dual-stack services to make the use of such IPv6-only clients viable. This paper presents the working group’s plan and progress so far to allow sites to deploy IPv6-only CPU resources. This includes making experiment central services dual-stack as well as a number of storage services. The monitoring, accounting and information services that are used by jobs also need to be upgraded. Finally the VO testing that has taken place on hosts connected via IPv6-only is reported.

  5. A Computer-Based Instrument That Identifies Common Science Misconceptions

    Science.gov (United States)

    Larrabee, Timothy G.; Stein, Mary; Barman, Charles

    2006-01-01

    This article describes the rationale for and development of a computer-based instrument that helps identify commonly held science misconceptions. The instrument, known as the Science Beliefs Test, is a 47-item instrument that targets topics in chemistry, physics, biology, earth science, and astronomy. The use of an online data collection system…

  6. Towards sustainability: An interoperability outline for a Regional ARC based infrastructure in the WLCG and EGEE infrastructures

    International Nuclear Information System (INIS)

    Field, L; Gronager, M; Johansson, D; Kleist, J

    2010-01-01

    Interoperability of grid infrastructures is becoming increasingly important with the emergence of large scale grid infrastructures based on national and regional initiatives. To achieve interoperability of grid infrastructures, the adaptation and bridging of many different systems and services need to be tackled. A grid infrastructure offers services for authentication, authorization, accounting, monitoring and operation, besides the services for handling data and computations. This paper presents an outline of the work done to integrate the Nordic Tier-1 and Tier-2s, which for the compute part are based on the ARC middleware, into the WLCG grid infrastructure co-operated by the EGEE project. In particular, a thorough description of the integration of the compute services is presented.

  7. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  8. Making the most of cloud storage - a toolkit for exploitation by WLCG experiments

    Science.gov (United States)

    Alvarez Ayllon, Alejandro; Arsuaga Rios, Maria; Bitzes, Georgios; Furano, Fabrizio; Keeble, Oliver; Manzi, Andrea

    2017-10-01

    Understanding how cloud storage can be effectively used, either standalone or in support of its associated compute, is now an important consideration for WLCG. We report on a suite of extensions to familiar tools targeted at enabling the integration of cloud object stores into traditional grid infrastructures and workflows. Notable updates include support for a number of object store flavours in FTS3, Davix and gfal2, including mitigations for lack of vector reads; the extension of Dynafed to operate as a bridge between grid and cloud domains; protocol translation in FTS3; the implementation of extensions to DPM (also implemented by the dCache project) to allow 3rd party transfers over HTTP. The result is a toolkit which facilitates data movement and access between grid and cloud infrastructures, broadening the range of workflows suitable for cloud. We report on deployment scenarios and prototype experience, explaining how, for example, an Amazon S3 or Azure allocation can be exploited by grid workflows.
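
    As a rough illustration of how such a toolkit can be driven from user code, the following sketch copies a file from a grid storage endpoint to an S3-compatible object store, assuming the gfal2 Python bindings (gfal2-python) are installed. The endpoint URLs are placeholders, and the exact parameter names should be checked against the installed gfal2 version; this is a sketch, not part of the toolkit described above.

        # Sketch of a copy from grid storage to an S3-compatible endpoint using the
        # gfal2 Python bindings. Endpoints are placeholders; the gfal2 calls used
        # here (creat_context, transfer_parameters, filecopy) are assumed from
        # gfal2-python and should be verified against the installed version.
        import gfal2

        src = "davs://grid-storage.example.org/path/file.root"   # hypothetical grid source
        dst = "s3s://bucket.s3.example.com/file.root"            # hypothetical S3 destination

        ctx = gfal2.creat_context()
        params = ctx.transfer_parameters()
        params.overwrite = True          # replace an existing destination file
        params.timeout = 300             # seconds

        ctx.filecopy(params, src, dst)
        print("copy finished")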

  9. Maximum likelihood as a common computational framework in tomotherapy

    International Nuclear Information System (INIS)

    Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.

    1998-01-01

    Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)
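
    To make the shared computational core concrete, the following toy Python/NumPy sketch implements the standard MLEM update used in emission tomography, which is one common form of the maximum likelihood estimator referred to above. The system matrix and data are invented; this is an illustration of the technique, not the authors' implementation.

        # Toy illustration of the MLEM update:
        #   x_j <- x_j / sum_i A_ij * sum_i A_ij * y_i / (A x)_i
        # A is a small made-up system matrix; in tomotherapy A would map beamlet
        # weights (or voxel activities) to measured/planned projections.
        import numpy as np

        def mlem(A, y, iterations=50):
            """Run MLEM iterations for the model y ~ Poisson(A x), returning x."""
            x = np.ones(A.shape[1])              # positive initial estimate
            sensitivity = A.sum(axis=0)          # sum_i A_ij, normalisation per unknown
            for _ in range(iterations):
                forward = A @ x                  # current predicted projections
                ratio = y / np.maximum(forward, 1e-12)
                x = x / np.maximum(sensitivity, 1e-12) * (A.T @ ratio)
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            A = rng.random((20, 5))              # hypothetical system matrix
            x_true = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
            y = A @ x_true                       # noiseless projections for the demo
            print(mlem(A, y))                    # should approach x_true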

  10. Neurobiological roots of language in primate audition: common computational properties.

    Science.gov (United States)

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L; Rauschecker, Josef P

    2015-03-01

    Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Experience of Google's latest deep learning library, TensorFlow, in a large-scale WLCG cluster

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Smith, Joshua Wyatt; Quadt, Arnulf [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany)

    2016-07-01

    Researchers at the Google Brain team released their second-generation deep learning library, TensorFlow, as an open-source package under the Apache 2.0 license in November 2015. Google had already deployed the first-generation library, DistBelief, in various systems such as Google Search, advertising systems, speech recognition systems, Google Images, Google Maps, Street View, Google Translate and many other recent products. In addition, many researchers in high energy physics have recently started to understand and use deep learning algorithms in their own research and analysis. We conceive a first use-case scenario of TensorFlow for creating deep learning models from high-dimensional inputs, such as physics analysis data, in a large-scale WLCG computing cluster. TensorFlow carries out computations using a dataflow model and graph structure on a wide variety of hardware platforms and systems, such as many CPU architectures, GPUs and smartphone platforms. Having a single library that can distribute the computations needed to create a model across these platforms and systems would significantly simplify the use of deep learning algorithms in high energy physics. We deploy TensorFlow within Docker container environments and present its first use in our grid system.
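
    For orientation, the following minimal sketch trains a small classifier on high-dimensional inputs with TensorFlow. It uses today's tf.keras API rather than the 2016-era graph/session interface described in the abstract; the input dimensionality and data are invented.

        # Minimal sketch of training a small classifier on high-dimensional inputs
        # with TensorFlow (current tf.keras API). Feature count and data are toys.
        import numpy as np
        import tensorflow as tf

        n_features = 30                                   # e.g. kinematic variables per event
        x_train = np.random.rand(10000, n_features).astype("float32")
        y_train = np.random.randint(0, 2, size=10000)     # toy signal/background labels

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(n_features,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=5, batch_size=256, verbose=2)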

  12. Common Agency and Computational Complexity : Theory and Experimental Evidence

    NARCIS (Netherlands)

    Kirchsteiger, G.; Prat, A.

    1999-01-01

    In a common agency game, several principals try to influence the behavior of an agent. Common agency games typically have multiple equilibria. One class of equilibria, called truthful, has been identified by Bernheim and Whinston and has found widespread use in the political economy literature. In

  13. Deployment of 464XLAT (RFC6877) alongside IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Froy, T. S.; Traynor, D. P.; Walker, C. J.

    2017-10-01

    IPv4 is now officially deprecated by the IETF. A significant amount of effort has already been expended by the HEPiX IPv6 Working Group on testing dual-stacked hosts and IPv6-only CPU resources. Dual-stack adds complexity and administrative overhead to sites that may already be starved of resource. This has resulted in a very slow uptake of IPv6 from WLCG sites. 464XLAT (RFC6877) is intended for IPv6 single-stack environments that require the ability to communicate with IPv4-only endpoints. This paper will present a deployment strategy for 464XLAT, operational experiences of using 464XLAT in production at a WLCG site and important information to consider prior to deploying 464XLAT.

  14. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface and OpenMP, are taken into account. The properties of these programming methods are demonstrated experimentally on a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementations were compared with CPU based computing methods and with the Texas Instruments digital signal processing library on C6747 floating-point DSPs. An optimal combination of computing methods in the signal processing domain and an implementation of new, fast routines are proposed as well.
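
    The paper benchmarks C/OpenMP/MPI and DSP-library implementations; as a language-neutral illustration of the same idea, namely distributing independent transform blocks across CPU cores, the following Python sketch compares serial and multiprocessing FFT throughput. Block count and length are arbitrary, and the timings will vary by machine.

        # Illustration (not the paper's C/DSP code) of the core idea of the
        # benchmarks: independent FFT blocks can be distributed across CPU cores.
        import time
        from multiprocessing import Pool

        import numpy as np

        BLOCKS = [np.random.rand(1 << 16) for _ in range(64)]   # 64 independent signal blocks

        def fft_block(block):
            return np.fft.fft(block)

        if __name__ == "__main__":
            t0 = time.perf_counter()
            serial = [fft_block(b) for b in BLOCKS]
            t1 = time.perf_counter()
            with Pool() as pool:                                # one worker per core by default
                parallel = pool.map(fft_block, BLOCKS)
            t2 = time.perf_counter()
            print(f"serial: {t1 - t0:.3f}s  parallel: {t2 - t1:.3f}s")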

  15. Computer aided approach to qualitative and quantitative common cause failure analysis for complex systems

    International Nuclear Information System (INIS)

    Cate, C.L.; Wagner, D.P.; Fussell, J.B.

    1977-01-01

    Common cause failure analysis, also called common mode failure analysis, is an integral part of a complete system reliability analysis. Existing methods of computer aided common cause failure analysis are extended by allowing analysis of the complex systems often encountered in practice. The methods aid in identifying potential common cause failures and also address quantitative common cause failure analysis

  16. Faster Algorithms for Computing Longest Common Increasing Subsequences

    DEFF Research Database (Denmark)

    Kutz, Martin; Brodal, Gerth Stølting; Kaligosi, Kanela

    2011-01-01

    We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths n and m, where m⩾n, we present an algorithm with an output-dependent expected running time of … and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence. For k⩾3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures. Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an …-time algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small…
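
    For reference, the following Python sketch is the textbook O(nm) dynamic program for the two-sequence LCIS problem; it is a baseline only, not one of the faster, output-dependent algorithms of the paper.

        # Baseline O(n*m) dynamic program for the two-sequence LCIS problem.
        def lcis_length(a, b):
            """Length of a longest common increasing subsequence of sequences a and b."""
            dp = [0] * len(b)            # dp[j]: best LCIS length ending with b[j]
            for x in a:
                best = 0                 # best dp[j] over positions j with b[j] < x seen so far
                for j, y in enumerate(b):
                    if y == x and best + 1 > dp[j]:
                        dp[j] = best + 1
                    elif y < x and dp[j] > best:
                        best = dp[j]
            return max(dp, default=0)

        assert lcis_length([3, 4, 9, 1], [5, 3, 8, 9, 4, 1]) == 2   # e.g. the subsequence (3, 9)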

  17. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidths increase and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain is comprised of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application…

  18. Logical and physical resource management in the common node of a distributed function laboratory computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-01-01

    A scheme for managing resources required for transaction processing in the common node of a distributed function computer system has been given. The scheme has been found to be satisfactory for all common node services provided so far

  19. Common findings and pseudolesions at computed tomography colonography: pictorial essay

    International Nuclear Information System (INIS)

    Atzingen, Augusto Castelli von; Tiferes, Dario Ariel; Matsumoto, Carlos Alberto; Nunes, Thiago Franchi; Maia, Marcos Vinicius Alvim Soares; D'Ippolito, Giuseppe

    2012-01-01

    Computed tomography colonography is a minimally invasive method for screening for polyps and colorectal cancer, with extremely unusual complications, which is increasingly used in clinical practice. In the last decade, developments in bowel preparation, imaging, and the training of investigators have brought a significant increase in the method's sensitivity. Image interpretation is accomplished through a combined analysis of two-dimensional source images and several types of three-dimensional renderings, with a sensitivity around 96% in the detection of lesions equal to or greater than 10 mm in size when analyzed by experienced radiologists. The present pictorial essay includes examples of diseases and pseudolesions most frequently observed in this type of imaging study. The authors present examples of flat and polypoid lesions, benign and malignant lesions, diverticular disease of the colon, among other conditions, as well as pseudolesions, including those related to inappropriate bowel preparation and misinterpretation. (author)

  20. Common findings and pseudolesions at computed tomography colonography: pictorial essay

    Energy Technology Data Exchange (ETDEWEB)

    Atzingen, Augusto Castelli von [Clinical Radiology, Universidade Federal de Sao Paulo (UNIFESP), Sao Paulo, SP (Brazil); Tiferes, Dario Ariel; Matsumoto, Carlos Alberto; Nunes, Thiago Franchi; Maia, Marcos Vinicius Alvim Soares [Abdominal Imaging Section, Department of Imaging Diagnosis - Universidade Federal de Sao Paulo (UNIFESP), Sao Paulo, SP (Brazil); D' Ippolito, Giuseppe, E-mail: giuseppe_dr@uol.com.br [Department of Imaging Diagnosis, Universidade Federal de Sao Paulo (UNIFESP), Sao Paulo, SP (Brazil)

    2012-05-15

    Computed tomography colonography is a minimally invasive method for screening for polyps and colorectal cancer, with extremely unusual complications, which is increasingly used in clinical practice. In the last decade, developments in bowel preparation, imaging, and the training of investigators have brought a significant increase in the method's sensitivity. Image interpretation is accomplished through a combined analysis of two-dimensional source images and several types of three-dimensional renderings, with a sensitivity around 96% in the detection of lesions equal to or greater than 10 mm in size when analyzed by experienced radiologists. The present pictorial essay includes examples of diseases and pseudolesions most frequently observed in this type of imaging study. The authors present examples of flat and polypoid lesions, benign and malignant lesions, diverticular disease of the colon, among other conditions, as well as pseudolesions, including those related to inappropriate bowel preparation and misinterpretation. (author)

  1. The "Common Solutions" Strategy of the Experiment Support group at CERN for the LHC Experiments

    CERN Document Server

    Girone, M; Barreiro Megino, F H; Campana, S; Cinquilli, M; Di Girolamo, A; Dimou, M; Giordano, D; Karavakis, E; Kenyon, M J; Kokozkiewicz, L; Lanciotti, E; Litmaath, M; Magini, N; Negri, G; Roiser, S; Saiz, P; Saiz Santos, M D; Schovancova, J; Sciabà, A; Spiga, D; Trentadue, R; Tuckett, D; Valassi, A; Van der Ster, D C; Shiers, J D

    2012-01-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments' computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management m...

  2. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  3. GLUE 2 deployment: Ensuring quality in the EGI/WLCG information system

    International Nuclear Information System (INIS)

    Burke, Stephen; Pradillo, Maria Alandes; Field, Laurence; Keeble, Oliver

    2014-01-01

    The GLUE 2 information model is now fully supported in the production EGI/WLCG information system. However, to make it usable and allow clients to rely on the published information it is important that the meaning is clearly defined, and that information providers and site configurations are validated to ensure as far as possible that what they publish is correct. In this paper we describe the definition of a detailed schema usage profile, the implementation of a software tool to validate published information according to the profile and the use of the tool in the production Grid, and also summarise the overall state of GLUE 2 deployment.

  4. Collection of reports on use of computation fund utilized in common in 1988

    International Nuclear Information System (INIS)

    1989-05-01

    The Nuclear Physics Research Center, Osaka University, has provided a computation fund for common use since 1976 to support computations related to the activities of the Center. Users of this fund are required, after completing their work, to submit a short report in a fixed format (printed in RCNP-Z together with the report of the committee on the common computation fund) and a detailed report on the content of the computation. The latter report includes an English abstract, an explanation of the computed results and their physical content, new developments, difficulties encountered in the computational techniques and how they were solved, the subroutines and functions used together with their purposes and block diagrams, and so on. This book is the collection of these detailed reports on the use of the common computation fund in fiscal year 1988. The call for applications to the common computation fund is announced every December in RCNP-Z. (K.I.)

  5. LHCb: LHCb Distributed Computing Operations

    CERN Multimedia

    Stagni, F

    2011-01-01

    The proliferation of tools for monitoring both activities and infrastructure, together with the pressing need for prompt reaction in case of problems impacting data taking, data reconstruction, data reprocessing and user analysis, led to the need to better organise the huge amount of information available. The monitoring system for LHCb Grid Computing relies on many heterogeneous and independent sources of information offering different views for a better understanding of problems, while an operations team and defined procedures have been put in place to handle them. This work summarizes the state of the art of LHCb Grid operations, emphasizing the reasons behind the various choices made and the tools currently in use to run our daily activities. We highlight the most common problems experienced across years of activity on the WLCG infrastructure, the services with their criticality, the procedures in place, the relevant metrics, and the tools available as well as the ones still missing.

  6. Gridification: Porting New Communities onto the WLCG/EGEE Infrastructure

    CERN Document Server

    Méndez-Lorenzo, P; Lamanna, M; Muraru, A

    2007-01-01

    The computational and storage capabilities of the Grid are attracting several research communities, and we discuss the general patterns observed in supporting new applications and porting them to the EGEE environment. In this talk we present the general infrastructure we have developed inside the application and support team at CERN (PSS and GD groups) to bring all these applications, for example Geant4, HARP, Garfield, UNOSAT or ITU, onto the Grid in a fast and feasible way. All these communities have different goals and requirements, and the main challenge is the creation of a standard and general software infrastructure for the immersion of these communities onto the Grid. This general infrastructure effectively 'shields' the applications from the details of the Grid (the emphasis here is on running applications developed independently from the Grid middleware). It is stable enough to require little supervision and support from the members of the Grid team and from the members of the user communities. Finally...

  7. Enabling IPv6 at FZU - WLCG Tier2 in Prague

    International Nuclear Information System (INIS)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek

    2014-01-01

    The usage of the new IPv6 protocol in production is becoming a reality in the HEP community, and the Computing Centre of the Institute of Physics in Prague participates in many IPv6 related activities. Our contribution presents experience with monitoring in the HEPiX distributed IPv6 testbed, which includes 11 remote sites. We use Nagios to check the availability of services and Smokeping for monitoring the network latency. Since it is not always trivial to set up DNS properly in a dual stack environment, we developed a Nagios plugin for checking whether a domain name is resolvable when using only IP protocol version 6 and only version 4. We will also present local area network monitoring and tuning related to IPv6 performance. One of the most important pieces of software for a grid site is the batch system for job execution. We will present our experience with configuring and running the Torque batch system in a dual stack environment. We also discuss the steps needed to run VO specific jobs in our IPv6 testbed.

  8. Enabling IPv6 at FZU - WLCG Tier2 in Prague

    Science.gov (United States)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek

    2014-06-01

    The usage of the new IPv6 protocol in production is becoming a reality in the HEP community, and the Computing Centre of the Institute of Physics in Prague participates in many IPv6 related activities. Our contribution presents experience with monitoring in the HEPiX distributed IPv6 testbed, which includes 11 remote sites. We use Nagios to check the availability of services and Smokeping for monitoring the network latency. Since it is not always trivial to set up DNS properly in a dual stack environment, we developed a Nagios plugin for checking whether a domain name is resolvable when using only IP protocol version 6 and only version 4. We will also present local area network monitoring and tuning related to IPv6 performance. One of the most important pieces of software for a grid site is the batch system for job execution. We will present our experience with configuring and running the Torque batch system in a dual stack environment. We also discuss the steps needed to run VO specific jobs in our IPv6 testbed.
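
    In the spirit of the DNS check described above (this is a sketch, not the actual Nagios plugin), the following Python script verifies that a host name resolves over IPv6 and over IPv4 separately, exiting with Nagios-style return codes.

        # Sketch of an IPv6/IPv4 DNS resolvability check with Nagios-style exit
        # codes (0 = OK, 2 = CRITICAL). Not the plugin described in the abstract.
        import socket
        import sys

        def resolves(hostname, family):
            """Return True if the name resolves for the given address family."""
            try:
                return bool(socket.getaddrinfo(hostname, None, family))
            except socket.gaierror:
                return False

        if __name__ == "__main__":
            host = sys.argv[1] if len(sys.argv) > 1 else "www.example.org"
            ok_v6 = resolves(host, socket.AF_INET6)
            ok_v4 = resolves(host, socket.AF_INET)
            print(f"{host}: IPv6 {'OK' if ok_v6 else 'FAIL'}, IPv4 {'OK' if ok_v4 else 'FAIL'}")
            sys.exit(0 if (ok_v6 and ok_v4) else 2)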

  9. Deployment of the CMS software on the WLCG Grid

    International Nuclear Information System (INIS)

    Behrenhoff, W; Wissing, C; Kim, B; Blyweert, S; D'Hondt, J; Maes, J; Maes, M; Mulders, P Van; Villella, I; Vanelderen, L

    2011-01-01

    The CMS Experiment is taking high energy collision data at CERN. The computing infrastructure used to analyse the data is distributed around the world in a tiered structure. In order to use the 7 Tier-1 sites, the 50 Tier-2 sites and a still growing number of about 30 Tier-3 sites, the CMS software has to be available at those sites. Except for a very few sites, the deployment and removal of CMS software is managed centrally. Since the deployment team has no local accounts at the remote sites, all installations have to be performed via Grid jobs. Via a VOMS role the job obtains a high priority in the batch system and gains write privileges to the software area. Due to the lack of interactive access, the installation jobs must be very robust against possible failures, in order not to leave a broken software installation. The CMS software is packaged in RPMs that are installed in the software area independently of the host OS. The apt-get tool is used to resolve package dependencies. This paper reports on recent deployment experiences and the achieved performance.

  10. Common data buffer system. [communication with computational equipment utilized in spacecraft operations

    Science.gov (United States)

    Byrne, F. (Inventor)

    1981-01-01

    A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, in order to increase the number of available sites such that they can participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  12. The “Common Solutions" Strategy of the Experiment Support group at CERN for the LHC Experiments

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing as well as WLCG deployment and operations need to evolve. As part of the activities of the Experiment Support group in CERN’s IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management...

  13. Transaction processing in the common node of a distributed function laboratory computer system

    International Nuclear Information System (INIS)

    Stubblefield, F.W.; Dimmler, D.G.

    1975-01-01

    A computer network architecture consisting of a common node processor for managing peripherals and files and a number of private node processors for laboratory experiment control is briefly reviewed. Central to the problem of private node-common node communication is the concept of a transaction. The collection of procedures and the data structure associated with a transaction are described. The common node properties assigned to a transaction and procedures required for its complete processing are discussed. (U.S.)

  14. The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments

    International Nuclear Information System (INIS)

    Girone, M; Andreeva, J; Barreiro Megino, F H; Campana, S; Cinquilli, M; Di Girolamo, A; Dimou, M; Giordano, D; Karavakis, E; Kenyon, M J; Kokozkiewicz, L; Lanciotti, E; Litmaath, M; Magini, N; Negri, G; Roiser, S; Saiz, P; Saiz Santos, M D; Schovancova, J; Sciabà, A

    2012-01-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.

  15. The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments

    Science.gov (United States)

    Girone, M.; Andreeva, J.; Barreiro Megino, F. H.; Campana, S.; Cinquilli, M.; Di Girolamo, A.; Dimou, M.; Giordano, D.; Karavakis, E.; Kenyon, M. J.; Kokozkiewicz, L.; Lanciotti, E.; Litmaath, M.; Magini, N.; Negri, G.; Roiser, S.; Saiz, P.; Saiz Santos, M. D.; Schovancova, J.; Sciabà, A.; Spiga, D.; Trentadue, R.; Tuckett, D.; Valassi, A.; Van der Ster, D. C.; Shiers, J. D.

    2012-12-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.

  16. System of common usage on the base of external memory devices and the SM-3 computer

    International Nuclear Information System (INIS)

    Baluka, G.; Vasin, A.Yu.; Ermakov, V.A.; Zhukov, G.P.; Zimin, G.N.; Namsraj, Yu.; Ostrovnoj, A.I.; Savvateev, A.S.; Salamatin, I.M.; Yanovskij, G.Ya.

    1980-01-01

    An easily modified system of common usage based on external memory devices and an SM-3 minicomputer, replacing some pulse analysers, is described. The system has the merits of pulse analysers and is more advantageous with regard to the effective use of equipment, the possibility of changing configuration and functions, the protection of data against losses due to user errors and certain failures, the price per registration channel, and the floor space occupied. The system of common usage is intended for the IBR-2 pulse reactor computing centre. It is designed using the means of the SANPO system for the SM-3 computer [ru]

  17. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the Worldwide LHC Computing Grid (WLCG). The status and performance of the Tier-2 center are presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenges of the common operation of the cluster are detailed. The benefits are an efficient use of computing and personnel resources.

  18. A Study on the Radiographic Diagnosis of Common Periapical Lesions by Using Computer

    International Nuclear Information System (INIS)

    Kim, Jae Duck; Kim, Seung Kug

    1990-01-01

    The purpose of this study was to estimate the diagnostic usefulness of a computer for common periapical lesions. The author used a domestic personal computer, adapted the RF (Rapid File) application program to the purpose of this study, and then entered as basic data the clinical and radiological features of common periapical lesions obtained through collection, analysis and classification. The 256 cases (cyst 91, periapical granuloma 74, periapical abscess 91) were obtained from the chart recordings and radiographs of patients diagnosed with or treated for common periapical lesions during the past 8 years (1983-1990) at the infirmary of the Dental School, Chosun University. The clinical and radiographic features of the 256 cases were then applied to the RF program for diagnosis, and the computer diagnosis was compared with the blinded final diagnosis established by clinical and histopathological examination. The results were as follows: 1. For cysts, the diagnosis through the computer program showed somewhat lower accuracy (80.22%) compared with the accuracy (90.1%) of the radiologists. 2. For granulomas, the diagnosis through the computer program showed somewhat higher accuracy (75.7%) compared with the accuracy (70.3%) of the radiologists. 3. For periapical abscesses, the diagnostic accuracy was 88% in both diagnoses. 4. The average diagnostic accuracy over the 256 cases through the computer program was somewhat lower (81.2%) than that of the radiologists (82.8%). 5. The basic data applied for the computer-aided radiographic diagnosis of common periapical lesions were estimated to be useful.

  19. A Study on the Radiographic Diagnosis of Common Periapical Lesions by Using Computer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Duck; Kim, Seung Kug [Dept. of Oral Radiology, College of Dentistry, Chosun University, Kwangju (Korea, Republic of)

    1990-08-15

    The purpose of this study was to estimate the diagnostic usefulness of a computer for common periapical lesions. The author used a domestic personal computer, adapted the RF (Rapid File) application program to the purpose of this study, and then entered as basic data the clinical and radiological features of common periapical lesions obtained through collection, analysis and classification. The 256 cases (cyst 91, periapical granuloma 74, periapical abscess 91) were obtained from the chart recordings and radiographs of patients diagnosed with or treated for common periapical lesions during the past 8 years (1983-1990) at the infirmary of the Dental School, Chosun University. The clinical and radiographic features of the 256 cases were then applied to the RF program for diagnosis, and the computer diagnosis was compared with the blinded final diagnosis established by clinical and histopathological examination. The results were as follows: 1. For cysts, the diagnosis through the computer program showed somewhat lower accuracy (80.22%) compared with the accuracy (90.1%) of the radiologists. 2. For granulomas, the diagnosis through the computer program showed somewhat higher accuracy (75.7%) compared with the accuracy (70.3%) of the radiologists. 3. For periapical abscesses, the diagnostic accuracy was 88% in both diagnoses. 4. The average diagnostic accuracy over the 256 cases through the computer program was somewhat lower (81.2%) than that of the radiologists (82.8%). 5. The basic data applied for the computer-aided radiographic diagnosis of common periapical lesions were estimated to be useful.

  20. A common currency for the computation of motivational values in the human striatum

    NARCIS (Netherlands)

    Sescousse, G.T.; Li, Y.; Dreher, J.C.

    2015-01-01

    Reward comparison in the brain is thought to be achieved through the use of a 'common currency', implying that reward value representations are computed on a unique scale in the same brain regions regardless of the reward type. Although such a mechanism has been identified in the ventro-medial

  1. A common currency for the computation of motivational values in the human striatum

    NARCIS (Netherlands)

    Sescousse, G.T.; Li, Y.; Dreher, J.C.

    2014-01-01

    Reward comparison in the brain is thought to be achieved through the use of a ‘common currency’, implying that reward value representations are computed on a unique scale in the same brain regions regardless of the reward type. Although such a mechanism has been identified in the ventro-medial

  2. MONTHLY VARIATION IN SPERM MOTILITY IN COMMON CARP ASSESSED USING COMPUTER-ASSISTED SPERM ANALYSIS (CASA)

    Science.gov (United States)

    Sperm motility variables from the milt of the common carp Cyprinus carpio were assessed using a computer-assisted sperm analysis (CASA) system across several months (March-August 1992) known to encompass the natural spawning period. Two-year-old pond-raised males obtained each mo...

  3. Cultural Commonalities and Differences in Spatial Problem-Solving: A Computational Analysis

    Science.gov (United States)

    Lovett, Andrew; Forbus, Kenneth

    2011-01-01

    A fundamental question in human cognition is how people reason about space. We use a computational model to explore cross-cultural commonalities and differences in spatial cognition. Our model is based upon two hypotheses: (1) the structure-mapping model of analogy can explain the visual comparisons used in spatial reasoning; and (2) qualitative,…

  4. Fine tuning of work practices of common radiological investigations performed using computed radiography system

    International Nuclear Information System (INIS)

    Livingstone, Roshan S.; Timothy Peace, B.S.; Sunny, S.; Victor Raj, D.

    2007-01-01

    Introduction: The advent of computed radiography (CR) has brought about remarkable changes in the field of diagnostic radiology. A relatively large cross-section of the human population is exposed to ionizing radiation on account of common radiological investigations. This study is intended to audit the radiation doses imparted to patients during common radiological investigations involving the use of CR systems. Method: The entrance surface doses (ESD) were measured using thermoluminescent dosimeters (TLD) for various radiological investigations performed using CR systems. Optimization of radiographic techniques and radiation doses was done by fine-tuning the work practices. Results and conclusion: Dose reductions as high as 47% were achieved in certain investigations through the use of optimized exposure factors and fine-tuned work practices.

  5. Operating the worldwide LHC computing grid: current and future challenges

    International Nuclear Information System (INIS)

    Molina, J Flix; Forti, A; Girone, M; Sciaba, A

    2014-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse their data. It includes almost 200,000 CPU cores, 200 PB of disk storage and 200 PB of tape storage distributed among more than 150 sites. The WLCG operations team is responsible for several essential tasks, such as the coordination of testing and deployment of Grid middleware and services, communication with the experiments and the sites, follow-up and resolution of operational issues, and medium/long term planning. In 2012 WLCG critically reviewed all operational procedures and restructured the organisation of the operations team as a more coherent effort in order to improve its efficiency. In this paper we describe how the new organisation works, its recent successes and the changes to be implemented during the long LHC shutdown in preparation for LHC Run 2.

  6. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case

    International Nuclear Information System (INIS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; Dell'Agnello, Luca

    2015-01-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proved successful and still meets its goals. However, Grid technology has not spread much to other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their current computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm, while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition, according to suitable policies for the request and release of computing resources. Nodes requested into the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as worker nodes in the batch system farm to cloud compute nodes made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation and its integration with our current batch system, LSF. (paper)

  7. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    Science.gov (United States)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proved successful and still meets its goals. However, Grid technology has not spread much to other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their current computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing from the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm, while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition, according to suitable policies for the request and release of computing resources. Nodes requested into the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as worker nodes in the batch system farm to cloud compute nodes made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation and its integration with our current batch system, LSF.
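
    The following Python fragment is a hypothetical sketch of the role-switching idea described above; the real CNAF system integrates with LSF and OpenStack, whereas the node list, demand counters and drain/enrol actions here are placeholders.

        # Hypothetical sketch of the role-switching policy described above. The
        # real system drives LSF and OpenStack; here the farm is just a dict and
        # the demand counters are plain integers.
        BATCH, CLOUD = "batch", "cloud"

        def rebalance(nodes, batch_queue_depth, cloud_requests):
            """Decide which nodes should change role, based on simple demand counters."""
            moves = []
            if cloud_requests > 0:
                # hand spare batch nodes over to the cloud partition
                for node in [n for n in nodes if nodes[n] == BATCH][:cloud_requests]:
                    nodes[node] = CLOUD
                    moves.append((node, CLOUD))
            elif batch_queue_depth > 0:
                # reclaim cloud nodes when batch demand dominates
                for node in [n for n in nodes if nodes[n] == CLOUD][:batch_queue_depth]:
                    nodes[node] = BATCH
                    moves.append((node, BATCH))
            return moves

        if __name__ == "__main__":
            farm = {f"wn{i:03d}": BATCH for i in range(8)}
            print(rebalance(farm, batch_queue_depth=0, cloud_requests=3))  # 3 nodes move to cloud
            print(rebalance(farm, batch_queue_depth=5, cloud_requests=0))  # nodes reclaimed for batch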

  8. FACC: A Novel Finite Automaton Based on Cloud Computing for the Multiple Longest Common Subsequences Search

    Directory of Open Access Journals (Sweden)

    Yanni Li

    2012-01-01

    Searching for the multiple longest common subsequences (MLCS) has significant applications in areas such as bioinformatics, information processing, and data mining. Although a few parallel MLCS algorithms have been proposed, their efficiency and effectiveness are not satisfactory given the increasing complexity and size of biological data. To overcome the shortcomings of the existing MLCS algorithms, and considering that the MapReduce parallel framework of cloud computing is a promising technology for cost-effective high performance parallel computing, a novel finite automaton based on cloud computing, called FACC, is proposed under the MapReduce parallel framework, so as to obtain a more efficient and effective general parallel MLCS algorithm. FACC adopts the ideas of matched pairs and finite automata by preprocessing sequences, constructing successor tables, and building a common-subsequence finite automaton to search for the MLCS. Simulation experiments on a set of benchmarks from both real DNA and amino acid sequences have been conducted, and the results show that the proposed FACC algorithm outperforms the current leading parallel MLCS algorithm FAST-MLCS.

  9. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    Science.gov (United States)

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of computer vision syndrome. Information was collected from Medline, Embase and the National Library of Medicine over the last 30 years, up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with computer vision syndrome present to a variety of different specialists, including general practitioners, neurologists, stroke physicians and ophthalmologists. While the condition is common, there is poor awareness among the public and health professionals. Recognising this condition in the clinic or in emergency situations, such as the TIA clinic, is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of computer vision syndrome and education of health professionals are vital. Preventive strategies should form part of workplace ergonomics routinely. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  10. SSVEP recognition using common feature analysis in brain-computer interface.

    Science.gov (United States)

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-04-15

    Canonical correlation analysis (CCA) has been successfully applied to steady-state visual evoked potential (SSVEP) recognition for brain-computer interface (BCI) applications. Although the CCA method outperforms traditional power spectral density analysis through multi-channel detection, it additionally requires pre-constructed reference signals of sine-cosine waves. It is likely to encounter overfitting when using a short time window, since the reference signals include no features from the training data. We consider that a group of electroencephalogram (EEG) data trials recorded at a certain stimulus frequency from the same subject should share some common features that bear the real SSVEP characteristics. This study therefore proposes a common feature analysis (CFA) based method that exploits the latent common features as natural reference signals in the correlation analysis for SSVEP recognition. Good performance of the CFA method for SSVEP recognition is validated with EEG data recorded from ten healthy subjects, in comparison with CCA and a multiway extension of CCA (MCCA). Experimental results indicate that the CFA method significantly outperformed the CCA and MCCA methods for SSVEP recognition when using a short time window (i.e., less than 1 s). The superiority of the proposed CFA method suggests it is promising for the development of a real-time SSVEP-based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.
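
    For orientation, the sketch below implements the standard CCA baseline that the CFA method is compared against (not the CFA method itself), correlating multi-channel EEG with sine-cosine reference signals. The sampling rate, window length, channel count and stimulus frequencies are invented.

        # Sketch of the standard CCA baseline for SSVEP recognition (not the CFA
        # method of the paper). Sampling rate, window, channels and frequencies
        # are invented for illustration.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        FS = 250.0                          # sampling rate in Hz (assumed)
        FREQS = [8.0, 10.0, 12.0, 15.0]     # candidate stimulus frequencies (assumed)

        def reference_signals(freq, n_samples, n_harmonics=2, fs=FS):
            """Sine-cosine reference matrix of shape (n_samples, 2*n_harmonics)."""
            t = np.arange(n_samples) / fs
            refs = []
            for h in range(1, n_harmonics + 1):
                refs.append(np.sin(2 * np.pi * h * freq * t))
                refs.append(np.cos(2 * np.pi * h * freq * t))
            return np.column_stack(refs)

        def cca_classify(eeg):
            """eeg: array (n_samples, n_channels); return the best-matching frequency."""
            scores = []
            for freq in FREQS:
                refs = reference_signals(freq, eeg.shape[0])
                cca = CCA(n_components=1)
                u, v = cca.fit_transform(eeg, refs)
                scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
            return FREQS[int(np.argmax(scores))]

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            n = int(FS)                                   # 1-second window
            t = np.arange(n) / FS
            toy = np.column_stack([np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(n)
                                   for _ in range(8)])    # 8 channels with a 12 Hz component
            print(cca_classify(toy))                      # expected: 12.0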

  11. NEGOTIATING COMMON GROUND IN COMPUTER-MEDIATED VERSUS FACE-TO-FACE DISCUSSIONS

    Directory of Open Access Journals (Sweden)

    Ilona Vandergriff

    2006-01-01

    To explore the impact of the communication medium on building common ground, this article presents research comparing learner use of reception strategies in traditional face-to-face (FTF) and in synchronous computer-mediated communication (CMC). Reception strategies, such as reprises, hypothesis testing and forward inferencing, provide evidence of comprehension and thus serve to establish common ground among participants. A number of factors, including communicative purpose or medium, are hypothesized to affect the use of such strategies (Clark & Brennan, 1991). In the data analysis, I (1) identify specific types of reception strategies, (2) compare their relative frequencies by communication medium, by task, and by learner, and (3) describe how these reception strategies function in the discussions. The findings of the quantitative analysis show that the medium alone seems to have little impact on grounding as indicated by the use of reception strategies. The qualitative analysis provides evidence that participants adapted the strategies to the goals of the communicative interaction, as they used them primarily to negotiate and update common ground on their collaborative activity rather than to compensate for L2 deficiencies.

  12. A common currency for the computation of motivational values in the human striatum

    Science.gov (United States)

    Li, Yansong; Dreher, Jean-Claude

    2015-01-01

    Reward comparison in the brain is thought to be achieved through the use of a ‘common currency’, implying that reward value representations are computed on a unique scale in the same brain regions regardless of the reward type. Although such a mechanism has been identified in the ventro-medial prefrontal cortex and ventral striatum in the context of decision-making, it is less clear whether it similarly applies to non-choice situations. To answer this question, we scanned 38 participants with fMRI while they were presented with single cues predicting either monetary or erotic rewards, without the need to make a decision. The ventral striatum was the main brain structure to respond to both cues while showing increasing activity with increasing expected reward intensity. Most importantly, the relative response of the striatum to monetary vs erotic cues was correlated with the relative motivational value of these rewards as inferred from reaction times. Similar correlations were observed in a fronto-parietal network known to be involved in attentional focus and motor readiness. Together, our results suggest that striatal reward value signals not only obey a common currency mechanism in the absence of choice but may also serve as an input to adjust motivated behaviour accordingly. PMID:24837478

  13. Transfer Kernel Common Spatial Patterns for Motor Imagery Brain-Computer Interface Classification

    Science.gov (United States)

    Dai, Mengxi; Liu, Shucong; Zhang, Pengju

    2018-01-01

    Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) as a preprocessing step before classification. The CSP method is a supervised algorithm and therefore needs a large amount of time-consuming training data to build the model. To address this issue, one promising approach is transfer learning, which generalizes a learning model so that it can extract discriminative information from other subjects for the target classification task. To this end, we propose a transfer kernel CSP (TKCSP) approach to learn a domain-invariant kernel by directly matching the distributions of source subjects and target subjects. Dataset IVa of BCI Competition III is used to demonstrate the validity of the proposed method. In the experiment, we compare the classification performance of TKCSP against CSP, CSP for subject-to-subject transfer (CSP SJ-to-SJ), regularizing CSP (RCSP), stationary subspace CSP (ssCSP), multitask CSP (mtCSP), and the combined mtCSP and ssCSP (ss + mtCSP) method. The results indicate that TKCSP achieves a superior mean classification performance of 81.14%, especially in the case of source subjects with fewer training samples. Comprehensive experimental evidence on the dataset verifies the effectiveness and efficiency of the proposed TKCSP approach over several state-of-the-art methods. PMID:29743934
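
    For context on the CSP preprocessing step that TKCSP extends, the sketch below shows ordinary two-class CSP computed from a generalized eigendecomposition of class covariance matrices, followed by the usual log-variance features. It illustrates only the baseline named in the record, not the transfer-kernel variant; all names and dimensions are illustrative assumptions.

```python
# Minimal sketch of two-class Common Spatial Patterns (CSP), the supervised
# preprocessing step discussed above; not the TKCSP transfer variant.
import numpy as np
from scipy.linalg import eigh

def class_covariance(trials):
    """Average normalized spatial covariance over trials shaped (n_trials, n_channels, n_samples)."""
    covs = []
    for x in trials:
        c = x @ x.T
        covs.append(c / np.trace(c))
    return np.mean(covs, axis=0)

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Spatial filters maximizing variance for class A relative to class B."""
    ca, cb = class_covariance(trials_a), class_covariance(trials_b)
    # Generalized eigenvalue problem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)              # ascending eigenvalues
    w = eigvecs[:, order]
    # Keep filters from both ends of the spectrum (the most discriminative ones)
    return np.column_stack([w[:, :n_pairs], w[:, -n_pairs:]])

def log_variance_features(trials, filters):
    """Standard CSP features: log of normalized variance of the filtered signals."""
    feats = []
    for x in trials:
        z = filters.T @ x
        var = np.var(z, axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)

# Example with synthetic data: 20 trials per class, 22 channels, 500 samples
rng = np.random.default_rng(1)
a = rng.standard_normal((20, 22, 500))
b = rng.standard_normal((20, 22, 500))
W = csp_filters(a, b)
print(log_variance_features(a, W).shape)     # (20, 6)
```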

  14. Isolated and unexplained dilation of the common bile duct on computed tomography scans

    Directory of Open Access Journals (Sweden)

    Naveen B. Krishna

    2012-07-01

    Full Text Available Isolated dilation of the common bile duct (CBD) with a normal-sized pancreatic duct and without identifiable stones or mass lesion (unexplained) is frequently encountered by computed tomography/magnetic resonance imaging. We studied the final diagnoses in these patients and tried to elucidate factors that can predict a malignant etiology. This is a retrospective analysis of a prospective database from a University-based clinical practice (2002-2008). We included 107 consecutive patients who underwent endoscopic ultrasound (EUS) for evaluation of isolated and unexplained CBD dilation noted on contrast computed tomography scans. EUS examination was performed using a radial echoendoscope, followed by a linear echoendoscope if a focal mass lesion was identified. Fine-needle aspirates were assessed immediately by an attending cytopathologist. Main outcome measurements included (i) the prevalence of neoplasms, CBD stones and chronic pancreatitis and (ii) the performance characteristics of EUS/EUS-fine needle aspiration (EUS-FNA). A malignant neoplasm was found in 16 patients (14.9% of the study subjects), all with obstructive jaundice (ObJ). Six patients had CBD stones; three with ObJ and three with abnormal liver function tests. EUS findings suggestive of chronic pancreatitis were identified in 27 patients. EUS-FNA had 97.3% accuracy (94.1% in the subset with ObJ), with a sensitivity of 81.2% and specificity of 100% for diagnosing malignancy. Presence of ObJ and older patient age were the only significant predictors of malignancy in our cohort. Amongst patients with isolated and unexplained dilation of the CBD, the risk of malignancy is significantly higher in older patients presenting with ObJ. EUS-FNA can diagnose malignancy in these patients with high accuracy, besides identifying other potential etiologies including missed CBD stones and chronic pancreatitis.

  15. Computational approaches for discovery of common immunomodulators in fungal infections: towards broad-spectrum immunotherapeutic interventions.

    Science.gov (United States)

    Kidane, Yared H; Lawrence, Christopher; Murali, T M

    2013-10-07

    Fungi are the second most abundant type of human pathogens. Invasive fungal pathogens are leading causes of life-threatening infections in clinical settings. Toxicity to the host and drug-resistance are two major deleterious issues associated with existing antifungal agents. Increasing a host's tolerance and/or immunity to fungal pathogens has potential to alleviate these problems. A host's tolerance may be improved by modulating the immune system such that it responds more rapidly and robustly in all facets, ranging from the recognition of pathogens to their clearance from the host. An understanding of biological processes and genes that are perturbed during attempted fungal exposure, colonization, and/or invasion will help guide the identification of endogenous immunomodulators and/or small molecules that activate host-immune responses such as specialized adjuvants. In this study, we present computational techniques and approaches using publicly available transcriptional data sets, to predict immunomodulators that may act against multiple fungal pathogens. Our study analyzed data sets derived from host cells exposed to five fungal pathogens, namely, Alternaria alternata, Aspergillus fumigatus, Candida albicans, Pneumocystis jirovecii, and Stachybotrys chartarum. We observed statistically significant associations between host responses to A. fumigatus and C. albicans. Our analysis identified biological processes that were consistently perturbed by these two pathogens. These processes contained both immune response-inducing genes such as MALT1, SERPINE1, ICAM1, and IL8, and immune response-repressing genes such as DUSP8, DUSP6, and SPRED2. We hypothesize that these genes belong to a pool of common immunomodulators that can potentially be activated or suppressed (agonized or antagonized) in order to render the host more tolerant to infections caused by A. fumigatus and C. albicans. Our computational approaches and methodologies described here can now be applied to

  16. Computational Investigation of a Boundary-Layer Ingesting Propulsion System for the Common Research Model

    Science.gov (United States)

    Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven

    2016-01-01

    The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model is shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs are presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  17. Computational Investigation of a Boundary-Layer Ingestion Propulsion System for the Common Research Model

    Science.gov (United States)

    Blumenthal, Brennan

    2016-01-01

    This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model is shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs are presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  18. A survey of common habits of computer users as indicators of ...

    African Journals Online (AJOL)

    Yomi

    2012-01-31

    Jan 31, 2012 ... Hygiene has been recognized as an infection control strategy and the extent of the problems of environmental contamination largely depends on personal hygiene. With the development of several computer applications in recent times, the uses of computer systems have greatly expanded. And with.

  19. Common Sense Planning for a Computer, or, What's It Worth to You?

    Science.gov (United States)

    Crawford, Walt

    1984-01-01

    Suggests factors to be considered in planning for the purchase of a microcomputer, including budgets, benefits, costs, and decisions. Major uses of a personal computer are described--word processing, financial analysis, file and database management, programming and computer literacy, education, entertainment, and thrill of high technology. (EJS)

  20. Consumer attitudes towards computer-assisted self-care of the common cold.

    Science.gov (United States)

    Reis, J; Wrestler, F

    1994-04-01

    Knowledge of colds and flu and attitudes towards use of computers for self-care are compared for 260 young adult users and 194 young adult non-users of computer-assisted self-care for colds and flu. Participants completed a knowledge questionnaire on colds and flu, used a computer program designed to enhance self-care for colds and flu, and then completed a questionnaire on their attitudes towards using a computer for self-care for colds and flu, the perceived importance of physician interactions, physician expertise, and patient-physician communication. Compared with users, non-users preferred personal contact with their physicians and felt that computerized health assessments would be limited in vocabulary and range of current medical information. Non-users were also more likely to agree that people could not be trusted to do an accurate computerized health assessment and that the average person was too computer illiterate to use computers for self-care.

  1. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  2. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    Science.gov (United States)

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  3. A survey of common habits of computer users as indicators of ...

    African Journals Online (AJOL)

    Other unhealthy practices found among computer users included eating (52.1%), drinking (56%), coughing, sneezing and scratching of the head (48.2%). Since microorganisms can be transferred through contact, droplets or airborne routes, it follows that these habits exhibited by users may act as sources of bacteria on keyboards ...

  4. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    CERN Document Server

    Molina-Perez, Jorge Amando

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator on duty at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is explo...

  5. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    Energy Technology Data Exchange (ETDEWEB)

    Molina-Perez, J. [UC, San Diego; Bonacorsi, D. [Bologna U.; Gutsche, O. [Fermilab; Sciaba, A. [CERN; Flix, J. [Madrid, CIEMAT; Kreuzer, P. [CERN; Fajardo, E. [Andes U., Bogota; Boccali, T. [INFN, Pisa; Klute, M. [MIT; Gomes, D. [Rio de Janeiro State U.; Kaselis, R. [Vilnius U.; Du, R. [Beijing, Inst. High Energy Phys.; Magini, N. [CERN; Butenas, I. [Vilnius U.; Wang, W. [Beijing, Inst. High Energy Phys.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS, the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  6. Monitoring techniques and alarm procedures for CMS Services and Sites in WLCG

    International Nuclear Information System (INIS)

    Molina-Perez, J; Sciabà, A; Magini, N; Bonacorsi, D; Gutsche, O; Flix, J; Kreuzer, P; Fajardo, E; Boccali, T; Klute, M; Gomes, D; Kaselis, R; Butenas, I; Du, R; Wang, W

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  7. The LHC Computing Grid in the starting blocks

    CERN Multimedia

    Danielle Amy Venton

    2010-01-01

    As the Large Hadron Collider ramps up operations and breaks world records, it is an exciting time for everyone at CERN. To get the computing perspective, the Bulletin this week caught up with Ian Bird, leader of the Worldwide LHC Computing Grid (WLCG). He is confident that everything is ready for the first data.   The metallic globe illustrating the Worldwide LHC Computing GRID (WLCG) in the CERN Computing Centre. The Worldwide LHC Computing Grid (WLCG) collaboration has been in place since 2001 and for the past several years it has continually run the workloads for the experiments as part of their preparations for LHC data taking. So far, the numerous and massive simulations of the full chain of reconstruction and analysis software could only be carried out using Monte Carlo simulated data. Now, for the first time, the system is starting to work with real data and with many simultaneous users accessing them from all around the world. “During the 2009 large-scale computing challenge (...

  8. Single-trial detection of visual evoked potentials by common spatial patterns and wavelet filtering for brain-computer interface.

    Science.gov (United States)

    Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo

    2013-01-01

    Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods are developed for offline EEG analysis and thus have a high computational complexity and need manual operations. Therefore, they are not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) to improve the signal-to-noise ratio (SNR) of visual evoked potentials (VEPs), which can lead to a single-trial ERP-based BCI.

  9. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be dynamically and efficiently allocated to any application and virtual machines to be tailored according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  10. Investigation on Oracle GoldenGate Veridata for Data Consistency in WLCG Distributed Database Environment

    OpenAIRE

    Asko, Anti; Lobato Pardavila, Lorena

    2014-01-01

    Abstract In a distributed database environment, data divergence can be an important problem: if it is not discovered and correctly identified, incorrect data can lead to poor decision making, errors in the service and operational errors. Oracle GoldenGate Veridata is a product to compare two sets of data and identify and report on data that is out of synchronization. IT DB is providing a replication service between databases at CERN and other computer centers worldwide as a par...

  11. LeaRN: A Collaborative Learning-Research Network for a WLCG Tier-3 Centre

    Science.gov (United States)

    Pérez Calle, Elio

    2011-12-01

    The Department of Modern Physics of the University of Science and Technology of China is hosting a Tier-3 centre for the ATLAS experiment. An interdisciplinary team of researchers, engineers and students is devoted to the task of receiving, storing and analysing the scientific data produced by the LHC. In order to achieve the highest performance and to develop a knowledge base shared by all members of the team, the research activities and their coordination are being supported by an array of computing systems. These systems have been designed to foster communication, collaboration and coordination among the members of the team, both face-to-face and remotely, and both in synchronous and asynchronous ways. The result is a collaborative learning-research network whose main objectives are awareness (to get shared knowledge about others' activities and therefore obtain synergies), articulation (to allow a project to be divided, work units to be assigned and then reintegrated) and adaptation (to adapt information technologies to the needs of the group). The main technologies involved are Communication Tools such as web publishing, revision control and wikis, Conferencing Tools such as forums, instant messaging and video conferencing and Coordination Tools, such as time management, project management and social networks. The software toolkit has been deployed by the members of the team and it has been based on free and open source software.

  12. LeaRN: A Collaborative Learning-Research Network for a WLCG Tier-3 Centre

    International Nuclear Information System (INIS)

    Calle, Elio Pérez

    2011-01-01

    The Department of Modern Physics of the University of Science and Technology of China is hosting a Tier-3 centre for the ATLAS experiment. An interdisciplinary team of researchers, engineers and students is devoted to the task of receiving, storing and analysing the scientific data produced by the LHC. In order to achieve the highest performance and to develop a knowledge base shared by all members of the team, the research activities and their coordination are being supported by an array of computing systems. These systems have been designed to foster communication, collaboration and coordination among the members of the team, both face-to-face and remotely, and both in synchronous and asynchronous ways. The result is a collaborative learning-research network whose main objectives are awareness (to get shared knowledge about others' activities and therefore obtain synergies), articulation (to allow a project to be divided, work units to be assigned and then reintegrated) and adaptation (to adapt information technologies to the needs of the group). The main technologies involved are Communication Tools such as web publishing, revision control and wikis, Conferencing Tools such as forums, instant messaging and video conferencing and Coordination Tools, such as time management, project management and social networks. The software toolkit has been deployed by the members of the team and it has been based on free and open source software.

  13. Detection of common bile duct stones: comparison between endoscopic ultrasonography, magnetic resonance cholangiography, and helical-computed-tomographic cholangiography

    International Nuclear Information System (INIS)

    Kondo, Shintaro; Isayama, Hiroyuki; Akahane, Masaaki; Toda, Nobuo; Sasahira, Naoki; Nakai, Yosuke; Yamamoto, Natsuyo; Hirano, Kenji; Komatsu, Yutaka; Tada, Minoru; Yoshida, Haruhiko; Kawabe, Takao; Ohtomo, Kuni; Omata, Masao

    2005-01-01

    Objectives: New modalities, namely, endoscopic ultrasonography (EUS), magnetic resonance cholangiopancreatography (MRCP), and helical computed-tomographic cholangiography (HCT-C), have been introduced recently for the detection of common bile duct (CBD) stones and shown improved detectability compared to conventional ultrasound or computed tomography. We conducted this study to compare the diagnostic ability of EUS, MRCP, and HCT-C in patients with suspected choledocholithiasis. Methods: Twenty-eight patients clinically suspected of having CBD stones were enrolled, excluding those with cholangitis or a definite history of choledocholithiasis. Each patient underwent EUS, MRCP, and HCT-C prior to endoscopic retrograde cholangio-pancreatography (ERCP), the result of which served as the diagnostic gold standard. Results: CBD stones were detected in 24 (86%) of 28 patients by ERCP/IDUS. The sensitivity of EUS, MRCP, and HCT-C was 100%, 88%, and 88%, respectively. False negative cases for MRCP and HCT-C had a CBD stone smaller than 5 mm in diameter. No serious complications occurred, although one patient complained of itching of the eyelids after the infusion of contrast agent for HCT-C. Conclusions: When examination can be scheduled, MRCP or HCT-C will be the first choice because they are less invasive than EUS. MRCP and HCT-C had similar detectability but the former may be preferable considering the possibility of allergic reaction in the latter. When MRCP is negative, EUS is recommended to check for small CBD stones.

  14. Diagnostic reference levels for common computed tomography (CT) examinations: results from the first Nigerian nationwide dose survey.

    Science.gov (United States)

    Ekpo, Ernest U; Adejoh, Thomas; Akwo, Judith D; Emeka, Owujekwe C; Modu, Ali A; Abba, Mohammed; Adesina, Kudirat A; Omiyi, David O; Chiegwu, Uche H

    2018-01-29

    To explore doses from common adult computed tomography (CT) examinations and propose national diagnostic reference levels (nDRLs) for Nigeria. This retrospective study was approved by the Nnamdi Azikiwe University and University Teaching Hospital Institutional Review Boards (IRB: NAUTH/CS/66/Vol8/84) and involved dose surveys of adult CT examinations across the six geographical regions of Nigeria and Abuja from January 2016 to August 2017. Dose data of adult head, chest and abdomen/pelvis CT examinations were extracted from patient folders. The median, 75th and 25th percentile CT dose index volume (CTDIvol) and dose-length product (DLP) were computed for each of these procedures. Effective doses (E) for these examinations were estimated using the k conversion factor as described in ICRP publication 103 (E = k × DLP). The proposed 75th percentile CTDIvol values for head, chest, and abdomen/pelvis are 61 mGy, 17 mGy, and 20 mGy, respectively. The corresponding DLPs are 1310 mGy.cm, 735 mGy.cm, and 1486 mGy.cm, respectively. The effective doses were 2.75 mSv (head), 10.29 mSv (chest), and 22.29 mSv (abdomen/pelvis). Findings demonstrate wide dose variations within and across centres in Nigeria. The results also show CTDIvol values comparable to international standards, but considerably higher DLP and effective doses.
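
    The conversion from dose-length product to effective dose quoted in this record (E = k × DLP) can be reproduced directly from the reported 75th percentile DLPs. In the sketch below, the k coefficients are the approximate region-specific conversion factors implied by the figures above (about 0.0021, 0.014 and 0.015 mSv per mGy.cm for head, chest and abdomen/pelvis); they are stated here as assumptions rather than values quoted from the survey.

```python
# Minimal sketch: effective dose estimated from DLP via E = k * DLP.
# The k values are the approximate conversion coefficients implied by the
# DLPs and effective doses quoted in the record above (assumed, not quoted).
K_FACTORS = {           # mSv per mGy.cm
    "head": 0.0021,
    "chest": 0.014,
    "abdomen_pelvis": 0.015,
}

DLP_75TH = {            # mGy.cm, 75th percentile values reported by the survey
    "head": 1310.0,
    "chest": 735.0,
    "abdomen_pelvis": 1486.0,
}

def effective_dose(region, dlp_mgy_cm):
    """E = k * DLP for a given body region."""
    return K_FACTORS[region] * dlp_mgy_cm

for region, dlp in DLP_75TH.items():
    print(f"{region}: {effective_dose(region, dlp):.2f} mSv")
# head: 2.75 mSv, chest: 10.29 mSv, abdomen_pelvis: 22.29 mSv
```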

  15. Osteoid osteomas in common and in technically challenging locations treated with computed tomography-guided percutaneous radiofrequency ablation

    International Nuclear Information System (INIS)

    Mylona, Sophia; Patsoura, Sofia; Karapostolakis, Georgios; Galani, Panagiota; Pomoni, Anastasia; Thanos, Loukas

    2010-01-01

    To evaluate the efficacy of computed tomography (CT)-guided radiofrequency (RF) ablation for the treatment of osteoid osteomas in common and in technically challenging locations. Twenty-three patients with osteoid osteomas in common (nine cases) and technically challenging [14 cases: intra-articular (n = 7), spinal (n = 5), metaphyseal (n = 2)] positions were treated with CT-guided RF ablation. Therapy was performed under conscious sedation with a seven-array expandable RF electrode for 8-10 min at 80-110 °C and a power of 90-110 W. The patients were discharged home with instructions. A brief pain inventory (BPI) score was calculated before and after (1 day, 4 weeks, 6 months and 1 year) treatment. All procedures were technically successful. Primary clinical success was 91.3% (21 of 23 patients), regardless of the lesions' locations. The BPI score was dramatically reduced after the procedure, and the decrease in BPI score was significant (P < 0.001, paired t-test; n - 1 = 22) for all periods during follow-up. Two patients had persistent pain after 1 month and were treated successfully with a second procedure (secondary success rate 100%). No immediate or delayed complications were observed. CT-guided RF ablation is safe and highly effective for the treatment of osteoid osteomas, even in technically difficult positions. (orig.)

  16. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by the HammerCloud, to automatic exclusion from production or analysis activities.

  17. QCI Common

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There are many common software patterns and utilities for the ORNL Quantum Computing Institute that can and should be shared across projects. Otherwise, we find duplication of code, which adds unwanted complexity. This software product seeks to alleviate that by providing common utilities such as object factories, graph data structures, parameter input mechanisms, etc., for other software products within the ORNL Quantum Computing Institute. This work enables pure basic research, has no export controlled utilities, and has no real commercial value.

  18. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    Harald Barreiro Megino, Fernando; Van der Ster, Daniel; Benjamin, Doug; De, Kaushik; Gable, Ian; Paterson, Michael; Taylor, Ryan; Hendrix, Val; Vitillo, Roberto A; Panitkin, Sergey; De Silva, Asoka; Walker, Rod

    2012-01-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R and D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  19. Managing a tier-2 computer centre with a private cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-01-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI

  20. Emphysema Is Common in Lungs of Cystic Fibrosis Lung Transplantation Patients: A Histopathological and Computed Tomography Study.

    Directory of Open Access Journals (Sweden)

    Onno M Mets

    Full Text Available Lung disease in cystic fibrosis (CF) involves excessive inflammation, repetitive infections and development of bronchiectasis. Recently, literature on emphysema in CF has emerged, which might become an increasingly important disease component due to the increased life expectancy. The purpose of this study was to assess the presence and extent of emphysema in end-stage CF lungs. In explanted lungs of 20 CF patients emphysema was semi-quantitatively assessed on histology specimens. Also, emphysema was automatically quantified on pre-transplantation computed tomography (CT) using the percentage of voxels below -950 Hounsfield Units and was visually scored on CT. The relation between emphysema extent, pre-transplantation lung function and age was determined. All CF patients showed emphysema on histological examination: 3/20 (15%) showed mild, 15/20 (75%) moderate and 2/20 (10%) severe emphysema, defined as 0-20% emphysema, 20-50% emphysema and >50% emphysema in residual lung tissue, respectively. Visually, upper lobe bullous emphysema was identified in 13/20 and more diffuse non-bullous emphysema in 18/20. Histology showed a significant correlation to quantified CT emphysema (p = 0.03) and the visual emphysema score (p = 0.001). CT and visual emphysema extent were positively correlated with age (p = 0.045 and p = 0.04, respectively). In conclusion, this study both pathologically and radiologically confirms that emphysema is common in end-stage CF lungs, and is age related. Emphysema might become an increasingly important disease component in the aging CF population.
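
    The CT emphysema measure used in this record, the percentage of lung voxels below -950 Hounsfield Units, is simple to compute once a lung segmentation is available. The sketch below assumes the CT volume (in HU) and a binary lung mask are already loaded as arrays; the names and the synthetic example data are illustrative only.

```python
# Minimal sketch of the CT emphysema index used above: the percentage of
# lung voxels below -950 HU. Assumes the CT volume (in Hounsfield units)
# and a binary lung mask are already available as NumPy arrays.
import numpy as np

def emphysema_index(ct_hu, lung_mask, threshold_hu=-950.0):
    """Percentage of lung voxels with attenuation below the threshold."""
    lung_voxels = ct_hu[lung_mask > 0]
    if lung_voxels.size == 0:
        raise ValueError("Empty lung mask")
    return 100.0 * np.mean(lung_voxels < threshold_hu)

# Example with a synthetic volume: mostly normal lung tissue around -850 HU,
# with a small emphysematous region near -980 HU.
rng = np.random.default_rng(2)
ct = rng.normal(-850.0, 40.0, size=(64, 64, 64))
ct[:16, :16, :16] = rng.normal(-980.0, 10.0, size=(16, 16, 16))
mask = np.ones_like(ct, dtype=bool)
print(f"emphysema index: {emphysema_index(ct, mask):.1f}%")
```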

  1. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    International Nuclear Information System (INIS)

    Park, Justin C; Li, Jonathan G; Liu, Chihray; Lu, Bo; Zhang, Hao; Chen, Yunmei; Fan, Qiyong

    2015-01-01

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and ‘well’ solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) standard FDK algorithm, (2) conventional total variation (CTV) based algorithm, (3) prior image constrained compressed sensing (PICCS) algorithm, and (4) motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the

  2. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography.

    Science.gov (United States)

    Park, Justin C; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G; Liu, Chihray; Lu, Bo

    2015-12-07

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm 'the common mask guided image reconstruction' (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and 'well' solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) standard FDK algorithm, (2) conventional total variation (CTV) based algorithm, (3) prior image constrained compressed sensing (PICCS) algorithm, and (4) motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the algorithm

  3. An R package to compute commonality coefficients in the multiple regression case: an introduction to the package and a practical example.

    Science.gov (United States)

    Nimon, Kim; Lewis, Mitzi; Kane, Richard; Haynes, R Michael

    2008-05-01

    Multiple regression is a widely used technique for data analysis in social and behavioral research. The complexity of interpreting such results increases when correlated predictor variables are involved. Commonality analysis provides a method of determining the variance accounted for by respective predictor variables and is especially useful in the presence of correlated predictors. However, computing commonality coefficients is laborious. To make commonality analysis accessible to more researchers, a program was developed to automate the calculation of unique and common elements in commonality analysis, using the statistical package R. The program is described, and a heuristic example using data from the Holzinger and Swineford (1939) study, readily available in the MBESS R package, is presented.
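
    For the two-predictor case, the unique and common variance components that commonality analysis reports follow directly from the R-squared values of the full and reduced regressions. The record above describes an R package; the sketch below works through the same decomposition in Python as a minimal illustration, with assumed variable names and synthetic data.

```python
# Minimal sketch of commonality analysis for two predictors: unique and
# common variance components derived from R^2 of full and reduced models.
# Written in Python for illustration; the record above describes an R package.
import numpy as np
from sklearn.linear_model import LinearRegression

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit."""
    return LinearRegression().fit(X, y).score(X, y)

def commonality_two_predictors(x1, x2, y):
    """Unique contributions of x1 and x2 and their common (shared) component."""
    X_full = np.column_stack([x1, x2])
    r2_full = r_squared(X_full, y)
    r2_x1 = r_squared(x1.reshape(-1, 1), y)
    r2_x2 = r_squared(x2.reshape(-1, 1), y)
    unique_x1 = r2_full - r2_x2
    unique_x2 = r2_full - r2_x1
    common = r2_x1 + r2_x2 - r2_full
    return {"unique_x1": unique_x1, "unique_x2": unique_x2, "common": common}

# Example with correlated predictors (where commonality analysis is most useful)
rng = np.random.default_rng(3)
x1 = rng.standard_normal(200)
x2 = 0.6 * x1 + 0.8 * rng.standard_normal(200)
y = 1.0 * x1 + 0.5 * x2 + rng.standard_normal(200)
print(commonality_two_predictors(x1, x2, y))
```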

  4. Women in computer science: An interpretative phenomenological analysis exploring common factors contributing to women's selection and persistence in computer science as an academic major

    Science.gov (United States)

    Thackeray, Lynn Roy

    The purpose of this study is to understand the meaning that women make of the social and cultural factors that influence their reasons for entering and remaining in study of computer science. The twenty-first century presents many new challenges in career development and workforce choices for both men and women. Information technology has become the driving force behind many areas of the economy. As this trend continues, it has become essential that U.S. citizens need to pursue a career in technologies, including the computing sciences. Although computer science is a very lucrative profession, many Americans, especially women, are not choosing it as a profession. Recent studies have shown no significant differences in math, technical and science competency between men and women. Therefore, other factors, such as social, cultural, and environmental influences seem to affect women's decisions in choosing an area of study and career choices. A phenomenological method of qualitative research was used in this study, based on interviews of seven female students who are currently enrolled in a post-secondary computer science program. Their narratives provided meaning into the social and cultural environments that contribute to their persistence in their technical studies, as well as identifying barriers and challenges that are faced by female students who choose to study computer science. It is hoped that the data collected from this study may provide recommendations for the recruiting, retention and support for women in computer science departments of U.S. colleges and universities, and thereby increase the numbers of women computer scientists in industry. Keywords: gender access, self-efficacy, culture, stereotypes, computer education, diversity.

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  7. Comparative evaluation of the cadaveric, radiographic and computed tomographic anatomy of the heads of green iguana (Iguana iguana), common tegu (Tupinambis merianae) and bearded dragon (Pogona vitticeps)

    OpenAIRE

    Banzato, Tommaso; Selleri, Paolo; Veladiano, Irene A; Martin, Andrea; Zanetti, Emanuele; Zotti, Alessandro

    2012-01-01

    Abstract Background Radiology and computed tomography are the most commonly available diagnostic tools for the diagnosis of pathologies affecting the head and skull in veterinary practice. Nevertheless, accurate interpretation of radiographic and CT studies requires a thorough knowledge of the gross and the cross-sectional anatomy. Despite the increasing success of reptiles as pets, only a few reports on their normal imaging features are currently available. The aim of this study is to desc...

  8. A comparison of the accuracy of ultrasound and computed tomography in common diagnoses causing acute abdominal pain

    Energy Technology Data Exchange (ETDEWEB)

    Randen, Adrienne van; Stoker, Jaap [Academic Medical Centre, Department of Radiology (suite G1-227), Amsterdam (Netherlands); Lameris, Wytze; Boermeester, Marja A. [Academic Medical Center, Department of Surgery, Amsterdam (Netherlands); Es, H.W. van; Heesewijk, Hans P.M. van [St Antonius Hospital, Department of Radiology, Nieuwegein (Netherlands); Ramshorst, Bert van [St Antonius Hospital, Department of Surgery, Nieuwegein (Netherlands); Hove, Wim ten [Gelre Hospitals, Department of Radiology, Apeldoorn (Netherlands); Bouma, Willem H. [Gelre Hospitals, Department of Surgery, Apeldoorn (Netherlands); Leeuwen, Maarten S. van [University Medical Centre, Department of Radiology, Utrecht (Netherlands); Keulen, Esteban M. van [Tergooi Hospitals, Department of Radiology, Hilversum (Netherlands); Bossuyt, Patrick M. [Academic Medical Center, Department of Clinical Epidemiology, Biostatistics, and Bioinformatics, Amsterdam (Netherlands)

    2011-07-15

    Head-to-head comparison of ultrasound and CT accuracy in common diagnoses causing acute abdominal pain. Consecutive patients with abdominal pain for >2 h and <5 days referred for imaging underwent both US and CT by different radiologists/radiological residents. An expert panel assigned a final diagnosis. Ultrasound and CT sensitivity and predictive values were calculated for frequent final diagnoses. Effect of patient characteristics and observer experience on ultrasound sensitivity was studied. Frequent final diagnoses in the 1,021 patients (mean age 47; 55% female) were appendicitis (284; 28%), diverticulitis (118; 12%) and cholecystitis (52; 5%). The sensitivity of CT in detecting appendicitis and diverticulitis was significantly higher than that of ultrasound: 94% versus 76% (p < 0.01) and 81% versus 61% (p = 0.048), respectively. For cholecystitis, the sensitivity of both was 73% (p = 1.00). Positive predictive values did not differ significantly between ultrasound and CT for these conditions. Ultrasound sensitivity in detecting appendicitis and diverticulitis was not significantly negatively affected by patient characteristics or reader experience. CT misses fewer cases than ultrasound, but both ultrasound and CT can reliably detect common diagnoses causing acute abdominal pain. Ultrasound sensitivity was largely not influenced by patient characteristics and reader experience. (orig.)
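
    The per-diagnosis accuracy measures reported in this record (sensitivity and positive predictive value against the expert-panel reference) reduce to simple ratios over a two-by-two table. The sketch below is a minimal illustration with hypothetical counts; they are roughly consistent with the appendicitis figures quoted above but are not taken from the study.

```python
# Minimal sketch of the per-diagnosis accuracy measures used above:
# sensitivity and positive predictive value against a reference standard.
# The counts below are illustrative, not taken from the study.
def sensitivity(tp, fn):
    """Proportion of reference-positive cases the modality detected."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Proportion of modality-positive cases confirmed by the reference."""
    return tp / (tp + fp)

# Example: hypothetical appendicitis counts for one imaging modality
tp, fp, fn = 267, 20, 17      # illustrative numbers only
print(f"sensitivity: {sensitivity(tp, fn):.2f}")          # 0.94
print(f"PPV: {positive_predictive_value(tp, fp):.2f}")    # 0.93
```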

  9. Optimization of Dose and Image Quality in Full-Field Digital and Computed Radiography Systems for Common Digital Radiographic Examinations

    Directory of Open Access Journals (Sweden)

    Soo-Foon Moey

    2018-01-01

    Full Text Available Introduction: A fine balance of image quality and radiation dose can be achieved by optimization to minimize stochastic and deterministic effects. This study aimed at ensuring that images of acceptable quality for common radiographic examinations in digital imaging were produced without causing harmful effects. Materials and Methods: The study was conducted in three phases. The pre-optimization phase involved ninety physically abled patients aged between 20 and 60 years and weighing between 60 and 80 kilograms for four common digital radiographic examinations. A Kerma X_plus DAP meter was utilized to measure the entrance surface dose (ESD), while the effective dose (ED) was estimated using the CALDose_X 5.0 Monte Carlo software. The second phase, an experimental study, utilized an anthropomorphic phantom (PBU-50) and the Leeds test object TOR CDR for relative comparison of image quality. For the optimization phase, the imaging parameters with acceptable image quality and lowest ESD from the experimental study were related to the patient's body thickness. Image quality was evaluated by two radiologists using modified evaluation criteria score lists. Results: Significant differences were found in image quality for all examinations. However, significant differences in ESD were found for PA chest and AP abdomen only. The ESD for three of the examinations was lower than all published data. Additionally, the ESD and ED obtained for all examinations were lower than those recommended by radiation regulatory bodies. Conclusion: Optimization of image quality and dose was achieved by utilizing an appropriate tube potential, a calibrated automatic exposure control and additional filtration of 0.2 mm copper.

  10. An Experimental and Computational Study of the Gas-Phase Acidities of the Common Amino Acid Amides.

    Science.gov (United States)

    Plummer, Chelsea E; Stover, Michele L; Bokatzian, Samantha S; Davis, John T M; Dixon, David A; Cassady, Carolyn J

    2015-07-30

    Using proton-transfer reactions in a Fourier transform ion cyclotron resonance mass spectrometer and correlated molecular orbital theory at the G3(MP2) level, gas-phase acidities (GAs) and the associated structures for amides corresponding to the common amino acids have been determined for the first time. These values are important because amino acid amides are models for residues in peptides and proteins. For compounds whose most acidic site is the C-terminal amide nitrogen, two ion populations were observed experimentally with GAs that differ by 4-7 kcal/mol. The lower energy, more acidic structure accounts for the majority of the ions formed by electrospray ionization. G3(MP2) calculations predict that the lowest energy anionic conformer has a cis-like orientation of the [-C(═O)NH](-) group whereas the higher energy, less acidic conformer has a trans-like orientation of this group. These two distinct conformers were predicted for compounds with aliphatic, amide, basic, hydroxyl, and thioether side chains. For the most acidic amino acid amides (tyrosine, cysteine, tryptophan, histidine, aspartic acid, and glutamic acid amides) only one conformer was observed experimentally, and its experimental GA correlates with the theoretical GA related to side chain deprotonation.

  11. Computational study of the fibril organization of polyglutamine repeats reveals a common motif identified in beta-helices.

    Science.gov (United States)

    Zanuy, David; Gunasekaran, Kannan; Lesk, Arthur M; Nussinov, Ruth

    2006-04-21

    The formation of fibril aggregates by long polyglutamine sequences is assumed to play a major role in neurodegenerative diseases such as Huntington's disease. Here, we model peptides rich in glutamine through a series of molecular dynamics simulations. Starting from a rigid nanotube-like conformation, we have obtained a new conformational template that shares structural features of a tubular helix and of a beta-helix conformational organization. Our new model can be described as a super-helical arrangement of flat beta-sheet segments linked by planar turns or bends. Interestingly, our comprehensive analysis of the Protein Data Bank reveals that this is a common motif in beta-helices (termed beta-bend), although it has not been identified so far. The motif is based on the alternation of beta-sheet and helical conformation as the protein sequence is followed from the N to the C terminus (beta-alpha(R)-beta-polyPro-beta). We further identify this motif in the ssNMR structure of the protofibril of the amyloidogenic peptide Abeta(1-40). The recurrence of the beta-bend suggests a general mode of connecting long parallel beta-sheet segments that would allow the growth of partially ordered fibril structures. The design allows the peptide backbone to change direction with a minimal loss of main chain hydrogen bonds. The identification of a coherent organization beyond that of the beta-sheet segments in different folds rich in parallel beta-sheets suggests a higher degree of ordered structure in protein fibrils, in agreement with their low solubility and dense molecular packing.

  12. Experience of the WLCG data management system from the first two years of the LHC data taking

    Czech Academy of Sciences Publication Activity Database

    Adamová, Dagmar

    2012-01-01

    Roč. 5, č. 160 (2012), s. 1-10 ISSN 1824-8039. [50th International Winter Meeting on Nuclear Physics. Bormio, 23.01.2012-27.01.2012] R&D Projects: GA MŠk LC07048; GA MŠk LA08015 Institutional support: RVO:61389005 Keywords: grid computing * LHC * data taking Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders http://pos.sissa.it/archive/conferences/160/014/Bormio2012_014.pdf

  13. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects Pierre Auger Observatory (PAO) and Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other groups of users use the local batch system directly. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated mostly to users from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team and a local contact helps only when needed. Users from IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of the MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.
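
    For illustration, submitting work to a torque-based batch system of the kind described usually means handing a small PBS job script to qsub. The sketch below is generic and hypothetical (queue name, resource requests and payload are placeholders, not the MetaCentrum configuration):

```python
import subprocess
import tempfile

# Minimal torque/PBS job script; queue, walltime and payload are placeholders.
JOB_SCRIPT = """#!/bin/bash
#PBS -N example_job
#PBS -q default
#PBS -l nodes=1:ppn=8,walltime=04:00:00
cd "$PBS_O_WORKDIR"
./run_analysis   # hypothetical payload executable
"""

def submit(script_text):
    """Write the script to a file and hand it to qsub (assumes torque is installed)."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as fh:
        fh.write(script_text)
        path = fh.name
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()   # qsub prints the identifier of the new job

if __name__ == "__main__":
    print("submitted job:", submit(JOB_SCRIPT))
```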

  14. Handling Worldwide LHC Computing Grid Critical Service Incidents : The infrastructure and experience behind nearly 5 years of GGUS ALARMs

    CERN Multimedia

    Dimou, M; Dulov, O; Grein, G

    2013-01-01

    In the Worldwide LHC Computing Grid (WLCG) project the Tier centres are of paramount importance for storing and accessing experiment data and for running the batch jobs necessary for experiment production activities. Although Tier2 sites provide a significant fraction of the resources, unavailability of resources at the Tier0 or the Tier1s can seriously harm not only WLCG Operations but also the experiments' workflows and the storage of LHC data, which are very expensive to reproduce. This is why availability requirements for these sites are high and are committed to in the WLCG Memorandum of Understanding (MoU). In this talk we describe the workflow of GGUS ALARMs, the only 24/7 mechanism available to LHC experiment experts for reporting problems with their Critical Services to the Tier0 or the Tier1s. Conclusions and experience gained from the detailed drills performed for each such ALARM over the last 4 years are explained, as is the shift over time in the types of problems encountered. The physical infrastructure put in place to ...

  15. Digital dissection - using contrast-enhanced computed tomography scanning to elucidate hard- and soft-tissue anatomy in the Common Buzzard Buteo buteo.

    Science.gov (United States)

    Lautenschlager, Stephan; Bright, Jen A; Rayfield, Emily J

    2014-04-01

    Gross dissection has a long history as a tool for the study of human or animal soft- and hard-tissue anatomy. However, apart from being a time-consuming and invasive method, dissection is often unsuitable for very small specimens and often cannot capture spatial relationships of the individual soft-tissue structures. The handful of comprehensive studies on avian anatomy using traditional dissection techniques focuses nearly exclusively on domestic birds, whereas raptorial birds, and in particular their cranial soft tissues, are essentially absent from the literature. Here, we digitally dissect, identify, and document the soft-tissue anatomy of the Common Buzzard (Buteo buteo) in detail, using the new approach of contrast-enhanced computed tomography using Lugol's iodine. The architecture of different muscle systems (adductor, depressor, ocular, hyoid, neck musculature), neurovascular, and other soft-tissue structures is three-dimensionally visualised and described in unprecedented detail. The three-dimensional model is further presented as an interactive PDF to facilitate the dissemination and accessibility of anatomical data. Due to the digital nature of the data derived from the computed tomography scanning and segmentation processes, these methods hold the potential for further computational analyses beyond descriptive and illustrative purposes. © 2013 The Authors. Journal of Anatomy published by John Wiley & Sons Ltd on behalf of Anatomical Society.

  16. Computer Security: “Hello World” - Welcome to CERN

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    Welcome to the open, liberal and free academic computing environment at CERN. Thanks to your new (or long-established!) affiliation with CERN, you are eligible for a CERN computing account, which enables you to register your devices: computers, laptops, smartphones, tablets, etc. It provides you with plenty of disk space and an e-mail address. It allows you to create websites, virtual machines and databases on demand.   You can now access most of the computing services provided by the GS and IT departments: Indico, for organising meetings and conferences; EDMS, for the approval of your engineering specifications; TWiki, for collaboration with others; and the WLCG computing grid. “Open, liberal, and free”, however, does not mean that you can do whatever you like. While we try to make your access to CERN's computing facilities as convenient and easy as possible, there are a few limits and boundaries to respect. These boundaries protect both the Organization'...

  17. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...
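
    The point of such an abstraction layer is that the same code path can drive EC2-, OpenStack- or OpenNebula-style endpoints. The sketch below illustrates the general idea using Apache libcloud rather than VMDIRAC itself; credentials, image and size names are placeholders.

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def boot_worker(provider, key, secret, image_name, size_name, **driver_kwargs):
    """Start one virtual machine on the chosen cloud through a provider-neutral API."""
    driver = get_driver(provider)(key, secret, **driver_kwargs)
    image = next(i for i in driver.list_images() if i.name == image_name)
    size = next(s for s in driver.list_sizes() if s.name == size_name)
    return driver.create_node(name="vm-worker", image=image, size=size)

# Hypothetical usage against an EC2-compatible endpoint:
# node = boot_worker(Provider.EC2, "ACCESS_KEY", "SECRET_KEY",
#                    image_name="worker-image", size_name="m4.large",
#                    region="eu-west-1")
```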

  18. Cross-sectional anatomy, computed tomography and magnetic resonance imaging of the head of common dolphin (Delphinus delphis) and striped dolphin (Stenella coeruleoalba).

    Science.gov (United States)

    Alonso-Farré, J M; Gonzalo-Orden, M; Barreiro-Vázquez, J D; Barreiro-Lois, A; André, M; Morell, M; Llarena-Reino, M; Monreal-Pawlowsky, T; Degollada, E

    2015-02-01

    Computed tomography (CT) and low-field magnetic resonance imaging (MRI) were used to scan seven by-caught dolphin cadavers, belonging to two species: four common dolphins (Delphinus delphis) and three striped dolphins (Stenella coeruleoalba). CT and MRI were obtained with the animals in ventral recumbency. After the imaging procedures, six dolphins were frozen at -20°C and sliced in the same position they were examined. Not only CT and MRI scans, but also cross sections of the heads were obtained in three body planes: transverse (slices of 1 cm thickness) in three dolphins, sagittal (5 cm thickness) in two dolphins and dorsal (5 cm thickness) in two dolphins. Relevant anatomical structures were identified and labelled on each cross section, obtaining a comprehensive bi-dimensional topographical anatomy guide of the main features of the common and the striped dolphin head. Furthermore, the anatomical cross sections were compared with their corresponding CT and MRI images, allowing an imaging identification of most of the anatomical features. CT scans produced an excellent definition of the bony and air-filled structures, while MRI allowed us to successfully identify most of the soft tissue structures in the dolphin's head. This paper provides a detailed anatomical description of the head structures of common and striped dolphins and compares anatomical cross sections with CT and MRI scans, becoming a reference guide for the interpretation of imaging studies. © 2014 Blackwell Verlag GmbH.

  19. Automated quantification of pulmonary emphysema from computed tomography scans: comparison of variation and correlation of common measures in a large cohort

    Science.gov (United States)

    Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.

    2010-03-01

    The purpose of this work was to retrospectively investigate the variation of standard indices of pulmonary emphysema from helical computed tomographic (CT) scans as related to inspiration differences over a 1 year interval and determine the strength of the relationship between these measures in a large cohort. 626 patients who had 2 scans taken at an interval of 9 months to 15 months (μ: 381 days, σ: 31 days) were selected for this work. All scans were acquired at a 1.25mm slice thickness using a low dose protocol. For each scan, the emphysema index (EI), fractal dimension (FD), mean lung density (MLD), and 15th percentile of the histogram (HIST) were computed. The absolute and relative changes for each measure were computed and the empirical 95% confidence interval was reported both in non-normalized and normalized scales. Spearman correlation coefficients were computed between the relative change in each measure and relative change in inspiration between each scan-pair, as well as between each pair-wise combination of the four measures. EI varied on a range of -10.5 to 10.5 on a non-normalized scale and -15 to 15 on a normalized scale, with FD and MLD showing slightly larger but comparable spreads, and HIST having a much larger variation. MLD was found to show the strongest correlation to inspiration change (r = 0.85). It is concluded that the emphysema index and fractal dimension have the least variability overall of the commonly used measures of emphysema and that they offer the most unique quantification of emphysema relative to each other.
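
    The four densitometric measures compared above are simple functions of the lung voxel values in Hounsfield units. A minimal sketch of how three of them are typically computed from a segmented lung volume is given below; the -950 HU threshold is a common choice in the literature, not necessarily the one used in this study, and the fractal dimension is omitted because it requires a multi-threshold fit.

```python
import numpy as np

def emphysema_measures(lung_hu, threshold=-950):
    """Standard densitometric indices from segmented lung voxels in Hounsfield units."""
    lung_hu = np.asarray(lung_hu, dtype=float)
    ei = 100.0 * np.mean(lung_hu < threshold)   # emphysema index: % of voxels below threshold
    mld = lung_hu.mean()                        # mean lung density
    hist15 = np.percentile(lung_hu, 15)         # 15th percentile of the HU histogram
    return ei, mld, hist15

# Per-patient relative changes between the two scans can then be correlated with
# the relative change in inspiration, e.g. with
# scipy.stats.spearmanr(relative_change_mld, relative_change_inspiration).
```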

  20. Comparative evaluation of the cadaveric, radiographic and computed tomographic anatomy of the heads of green iguana (Iguana iguana), common tegu (Tupinambis merianae) and bearded dragon (Pogona vitticeps)

    Science.gov (United States)

    2012-01-01

    Background Radiology and computed tomography are the most commonly available diagnostic tools for the diagnosis of pathologies affecting the head and skull in veterinary practice. Nevertheless, accurate interpretation of radiographic and CT studies requires a thorough knowledge of the gross and the cross-sectional anatomy. Despite the increasing success of reptiles as pets, only a few reports on their normal imaging features are currently available. The aim of this study is to describe the normal cadaveric, radiographic and computed tomographic features of the heads of the green iguana, tegu and bearded dragon. Results 6 adult green iguanas, 4 tegus, 3 bearded dragons and the adult cadavers of 4 green iguanas, 4 tegus and 4 bearded dragons were included in the study. 2 cadavers were dissected following a stratigraphic approach and 2 cadavers were cross-sectioned for each species. These latter specimens were stored in a freezer (−20°C) until completely frozen. Transversal sections at 5 mm intervals were obtained by means of an electric band-saw. Each section was cleaned and photographed on both sides. Radiographs of the head of each subject were obtained. Pre- and post-contrast computed tomographic studies of the head were performed on all the live animals. CT images were displayed in both bone and soft tissue windows. Individual anatomic structures were first recognised and labelled on the anatomic images and then matched on radiographs and CT images. Radiographic and CT images of the skull provided good detail of the bony structures in all species. In CT, contrast medium injection enabled good detail of the soft tissues to be obtained in the iguana whereas only the eye was clearly distinguishable from the remaining soft tissues in both the tegu and the bearded dragon. Conclusions The results provide an atlas of the normal anatomical and in vivo radiographic and computed tomographic features of the heads of lizards, and this may be useful in interpreting any

  1. Comparative evaluation of the cadaveric, radiographic and computed tomographic anatomy of the heads of green iguana (Iguana iguana), common tegu (Tupinambis merianae) and bearded dragon (Pogona vitticeps)

    Directory of Open Access Journals (Sweden)

    Banzato Tommaso

    2012-05-01

    Full Text Available Abstract Background Radiology and computed tomography are the most commonly available diagnostic tools for the diagnosis of pathologies affecting the head and skull in veterinary practice. Nevertheless, accurate interpretation of radiographic and CT studies requires a thorough knowledge of the gross and the cross-sectional anatomy. Despite the increasing success of reptiles as pets, only a few reports on their normal imaging features are currently available. The aim of this study is to describe the normal cadaveric, radiographic and computed tomographic features of the heads of the green iguana, tegu and bearded dragon. Results 6 adult green iguanas, 4 tegus, 3 bearded dragons and the adult cadavers of 4 green iguanas, 4 tegus and 4 bearded dragons were included in the study. 2 cadavers were dissected following a stratigraphic approach and 2 cadavers were cross-sectioned for each species. These latter specimens were stored in a freezer (−20°C) until completely frozen. Transversal sections at 5 mm intervals were obtained by means of an electric band-saw. Each section was cleaned and photographed on both sides. Radiographs of the head of each subject were obtained. Pre- and post-contrast computed tomographic studies of the head were performed on all the live animals. CT images were displayed in both bone and soft tissue windows. Individual anatomic structures were first recognised and labelled on the anatomic images and then matched on radiographs and CT images. Radiographic and CT images of the skull provided good detail of the bony structures in all species. In CT, contrast medium injection enabled good detail of the soft tissues to be obtained in the iguana whereas only the eye was clearly distinguishable from the remaining soft tissues in both the tegu and the bearded dragon. Conclusions The results provide an atlas of the normal anatomical and in vivo radiographic and computed tomographic features of the heads of lizards, and this may be

  2. Comparative evaluation of the cadaveric, radiographic and computed tomographic anatomy of the heads of green iguana (Iguana iguana), common tegu (Tupinambis merianae) and bearded dragon (Pogona vitticeps).

    Science.gov (United States)

    Banzato, Tommaso; Selleri, Paolo; Veladiano, Irene A; Martin, Andrea; Zanetti, Emanuele; Zotti, Alessandro

    2012-05-11

    Radiology and computed tomography are the most commonly available diagnostic tools for the diagnosis of pathologies affecting the head and skull in veterinary practice. Nevertheless, accurate interpretation of radiographic and CT studies requires a thorough knowledge of the gross and the cross-sectional anatomy. Despite the increasing success of reptiles as pets, only a few reports on their normal imaging features are currently available. The aim of this study is to describe the normal cadaveric, radiographic and computed tomographic features of the heads of the green iguana, tegu and bearded dragon. 6 adult green iguanas, 4 tegus, 3 bearded dragons and the adult cadavers of 4 green iguanas, 4 tegus and 4 bearded dragons were included in the study. 2 cadavers were dissected following a stratigraphic approach and 2 cadavers were cross-sectioned for each species. These latter specimens were stored in a freezer (-20°C) until completely frozen. Transversal sections at 5 mm intervals were obtained by means of an electric band-saw. Each section was cleaned and photographed on both sides. Radiographs of the head of each subject were obtained. Pre- and post-contrast computed tomographic studies of the head were performed on all the live animals. CT images were displayed in both bone and soft tissue windows. Individual anatomic structures were first recognised and labelled on the anatomic images and then matched on radiographs and CT images. Radiographic and CT images of the skull provided good detail of the bony structures in all species. In CT, contrast medium injection enabled good detail of the soft tissues to be obtained in the iguana whereas only the eye was clearly distinguishable from the remaining soft tissues in both the tegu and the bearded dragon. The results provide an atlas of the normal anatomical and in vivo radiographic and computed tomographic features of the heads of lizards, and this may be useful in interpreting any imaging modality involving these

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  4. Research on Classification-Based Teaching of Common Computer Elective Courses

    Institute of Scientific and Technical Information of China (English)

    曾显峰; 张钰莎

    2012-01-01

    Through an example-based analysis of the current model of common computer elective courses at university, this paper proposes classification-based teaching oriented towards the students' majors. The computer foundation courses are divided into content modules, different majors selectively study these modules, follow-up elective courses are organised into major-oriented course groups, and flexible teaching and assessment methods are established, so that the common computer courses better serve each major.

  5. ATLAS grid compute cluster with virtualized service nodes

    International Nuclear Information System (INIS)

    Mejia, J; Stonjek, S; Kluth, S

    2010-01-01

    The ATLAS Computing Grid consists of several hundred compute clusters distributed around the world as part of the Worldwide LHC Computing Grid (WLCG). The Grid middleware and the ATLAS software, which have to be installed on each site, often require a certain Linux distribution and sometimes even a specific version thereof. On the other hand, mostly due to maintenance reasons, computer centres install the same operating system and version on all computers. This might lead to problems with the Grid middleware if the local version is different from the one for which it has been developed. At RZG we partly solved this conflict by using virtualization technology for the service nodes. We will present the setup used at RZG and show how it helped to solve the problems described above. In addition we will illustrate the additional advantages gained by the above setup.

  6. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks have evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  7. ATLAS and LHC computing on CRAY

    CERN Document Server

    Haug, Sigve; The ATLAS collaboration

    2016-01-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large CRAY systems at the Swiss National Supercomputing Centre CSCS. These systems do not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potentials due to size and multidisciplinary usage and potential gains due to economy at scale. Technical solutions, performance, expected return and future plans are discussed.

  8. ATLAS and LHC computing on CRAY

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00297774; The ATLAS collaboration; Haug, Sigve

    2017-01-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems do not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potentials due to size and multidisciplinary usage and potential gains due to economy at scale. Technical solutions, performance, expected return and future plans are discussed.

  9. Overview of the ATLAS distributed computing system

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total more than 300 PB of data is distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever growing LHC luminosity in future runs new developments are underway to even more efficiently use opportunistic resources such as HPCs and utilize new technologies. This presentation will review and explain the outline and the performance of the ATLAS distributed computing system and give an outlook to new workflow and data management ideas for the beginning of the LHC Run 3.

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  11. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact and to address issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  13. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    Science.gov (United States)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain Computer Interface (BCI) development can be a challenge for robotic, prosthesis and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted with the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of the kernel linear discriminant analysis (KLDA) method to the weighted features transfers the data into a higher dimension, where the RBF kernel produces better-discriminated data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III data set IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields improvements of 8.9% and 14.19% in accuracy and robustness, respectively. For all subjects, it is concluded that mapping the CSP features into a higher dimension with the RBF and using the GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
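
    To make the processing chain concrete, the sketch below implements the core of a two-class CSP feature extractor (log-variance features) followed by an RBF-kernel SVM from scikit-learn. It is a generic illustration of the CSP-plus-SVM idea, not the authors' full FBCSP/SLVQ/KLDA/GRBF pipeline, and the trial arrays in the usage comment are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(trials_a, trials_b, n_pairs=3):
    """CSP spatial filters from two classes of band-passed EEG trials.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: ca w = lambda (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # most discriminative filters
    return vecs[:, picks].T

def csp_features(trials, filters):
    """Normalized log-variance of the spatially filtered trials (the usual CSP feature)."""
    proj = np.einsum('fc,ncs->nfs', filters, trials)
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Hypothetical usage with band-passed motor-imagery trials X_a and X_b:
# W = csp_filters(X_a, X_b)
# X = np.vstack([csp_features(X_a, W), csp_features(X_b, W)])
# y = np.r_[np.zeros(len(X_a)), np.ones(len(X_b))]
# clf = SVC(kernel='rbf', C=1.0, gamma='scale').fit(X, y)
```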

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  16. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  17. Future Computing Platforms for Science in a Power Constrained Era

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Eulisse, Giulio; Elmer, Peter; Knight, Robert

    2015-01-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. We evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG). (paper)
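
    Performance-per-watt itself is just the ratio of measured throughput to measured power draw, so once both quantities are available a trivial helper is enough to rank candidate platforms; the function names below are hypothetical and no benchmark figures are implied.

```python
def performance_per_watt(events_per_second, average_power_watts):
    """Throughput divided by power draw: the cost-efficiency metric discussed above."""
    return events_per_second / average_power_watts

def rank_platforms(measurements):
    """measurements: dict mapping platform name -> (events per second, watts)."""
    return sorted(measurements,
                  key=lambda name: performance_per_watt(*measurements[name]),
                  reverse=True)
```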

  18. Sensitivity analysis on the effect of software-induced common cause failure probability in the computer-based reactor trip system unavailability

    International Nuclear Information System (INIS)

    Kamyab, Shahabeddin; Nematollahi, Mohammadreza; Shafiee, Golnoush

    2013-01-01

    Highlights: ► Importance and sensitivity analyses have been performed for a digitized reactor trip system. ► The results show acceptable trip unavailability for software failure probabilities below 1E-4. ► However, the value of the Fussell–Vesely importance indicates that software common cause failure is still risk significant. ► Diversity and effective testing are found to be beneficial in reducing the software contribution. - Abstract: The reactor trip system has been digitized in advanced nuclear power plants, since the programmable nature of computer based systems has a number of advantages over non-programmable systems. However, software is still vulnerable to common cause failure (CCF). Residual software faults represent a CCF concern, which threatens the achieved improvements. This study attempts to assess the effectiveness of so-called defensive strategies against software CCF with respect to reliability. Sensitivity analysis has been performed by re-quantifying the models upon changing the software failure probability. Importance measures have then been estimated in order to reveal the specific contribution of software CCF to the trip failure probability. The results reveal the importance and effectiveness of signal and software diversity as applicable strategies to ameliorate inefficiencies due to software CCF in the reactor trip system (RTS). No significant change has been observed in the RTS failure probability for basic software CCF probabilities greater than 1 × 10^-4. However, the related Fussell–Vesely importance remains greater than 0.005 for lower values. The study concludes that the risk associated with software-based systems is a multi-variate function, which requires trade-offs among these variables in more precise and comprehensive studies
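
    The Fussell-Vesely importance quoted above is the fraction of the system unavailability contributed by cut sets containing a given basic event; with a minimal cut set representation it can be re-evaluated for each assumed software CCF probability. The sketch below is generic, and the cut sets and probabilities are placeholders rather than the model used in the study.

```python
import numpy as np

def top_unavailability(cut_sets, probs):
    """Rare-event approximation: sum of the minimal cut set probabilities."""
    return sum(np.prod([probs[e] for e in cs]) for cs in cut_sets)

def fussell_vesely(event, cut_sets, probs):
    """Fraction of the unavailability coming from cut sets that contain `event`."""
    total = top_unavailability(cut_sets, probs)
    containing = [cs for cs in cut_sets if event in cs]
    return top_unavailability(containing, probs) / total

# Placeholder model: two hardware cut sets plus one software-CCF cut set.
CUT_SETS = [("hw_A", "hw_B"), ("hw_C",), ("sw_ccf",)]
for p_sw in (1e-3, 1e-4, 1e-5):
    probs = {"hw_A": 1e-3, "hw_B": 2e-3, "hw_C": 1e-6, "sw_ccf": p_sw}
    print(f"P(sw_ccf)={p_sw:.0e}  FV={fussell_vesely('sw_ccf', CUT_SETS, probs):.3f}")
```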

  19. EVALUATION OF BONE MINERALIZATION BY COMPUTED TOMOGRAPHY IN WILD AND CAPTIVE EUROPEAN COMMON SPADEFOOTS (PELOBATES FUSCUS), IN RELATION TO EXPOSURE TO ULTRAVIOLET B RADIATION AND DIETARY SUPPLEMENTS.

    Science.gov (United States)

    van Zijll Langhout, Martine; Struijk, Richard P J H; Könning, Tessa; van Zuilen, Dick; Horvath, Katalin; van Bolhuis, Hester; Maarschalkerweerd, Roelof; Verstappen, Frank

    2017-09-01

    Captive rearing programs have been initiated to save the European common spadefoot (Pelobates fuscus), a toad species in the family of Pelobatidae, from extinction in The Netherlands. Evaluating whether this species needs ultraviolet B (UVB) radiation and/or dietary supplementation for healthy bone development is crucial for its captive management and related conservation efforts. The bone mineralization in the femurs and the thickest part of the parietal bone of the skulls of European common spadefoots (n = 51) was measured in Hounsfield units (HUs) by computed tomography. One group, containing adults (n = 8) and juveniles (n = 13), was reared at ARTIS Amsterdam Royal Zoo without UVB exposure. During their terrestrial lifetime, these specimens received a vitamin-mineral supplement. Another group, containing adults (n = 8) and juveniles (n = 10), was reared and kept in an outdoor breeding facility in Münster, Germany, with permanent access to natural UVB light, without vitamin-mineral supplementation. The HUs in the ARTIS and Münster specimens were compared with those in wild specimens (n = 12). No significant difference was found between the HUs in the femurs of both ARTIS and Münster adults and wild adults (P = 0.537; P = 0.181). The HUs in the skulls of both captive-adult groups were significantly higher than in the skulls of wild specimens (P = 0.020; P = 0.005). The HUs in the femurs of the adult ARTIS animals were significantly higher than the HUs in the femurs of the adult Münster animals (P = 0.007). The absence of UVB radiation did not seem to have a negative effect on the bone development in the terrestrial stage. This suggests that this nocturnal, subterrestrial amphibian was able to extract sufficient vitamin D3 from its diet and did not rely heavily on photobiosynthesis through UVB exposure.

  20. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  4. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  5. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  6. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  7. Computer Security: Geneva, Suisse Romande and beyond

    CERN Multimedia

    Computer Security Team

    2014-01-01

    To ensure good computer security, it is essential for us to keep in close contact and collaboration with a multitude of official and unofficial, national and international bodies, agencies, associations and organisations in order to discuss best practices, to learn about the most recent (and, at times, still unpublished) vulnerabilities, and to handle jointly any security incident. A network of peers - in particular a network of trusted peers - can provide important intelligence about new vulnerabilities or ongoing attacks much earlier than information published in the media. In this article, we would like to introduce a few of the official peers we usually deal with.*   Directly relevant for CERN are SWITCH, our partner for networking in Switzerland, and our contacts within the WLCG, i.e. the European Grid Infrastructure (EGI), and the U.S. Open Science Grid (OSG). All three are essential partners when discussing security implementations and resolving security incidents. SWITCH, in...

  8. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    International Nuclear Information System (INIS)

    Ballestrero, S; Lee, C J; Batraneanu, S M; Scannicchio, D A; Brasolin, F; Contescu, C; Girolamo, A Di; Astigarraga, M E Pozo; Twomey, M S; Zaytsev, A

    2014-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
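
    For illustration, booting groups of identical virtual machines through an OpenStack management layer of this kind comes down to a few SDK calls. The sketch below uses the generic openstacksdk client with placeholder cloud, image, flavour and network identifiers; it is not the platform's own tooling.

```python
import openstack

def launch_workers(cloud_name, count, image_id, flavor_id, network_id):
    """Boot `count` identical virtual machines on an OpenStack cloud."""
    conn = openstack.connect(cloud=cloud_name)   # credentials come from clouds.yaml
    servers = []
    for i in range(count):
        server = conn.compute.create_server(
            name=f"mc-worker-{i:03d}",
            image_id=image_id,
            flavor_id=flavor_id,
            networks=[{"uuid": network_id}],
        )
        servers.append(conn.compute.wait_for_server(server))
    return servers

# Hypothetical usage:
# launch_workers("p1-cloud", 10, image_id="worker-image-uuid",
#                flavor_id="large-flavor-uuid", network_id="private-net-uuid")
```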

  9. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    Science.gov (United States)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  10. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  11. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  12. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  13. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are being completed and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of the 2011 data being processed at the sites. MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month (Figure 1). The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB (Figure 2); Figure 3 shows the volume of data moved between CMS sites in the last six months. Tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office (Figure 2: number of events per month for 2012): Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  16. Computation for LHC experiments: a worldwide computing grid

    International Nuclear Information System (INIS)

    Fairouz, Malek

    2010-01-01

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network involving the distribution and sharing of tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of 4 tiers. Tier 0 is the computing center at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier 1 centers. A Tier 1 is typically a national center; it is responsible for keeping a copy of the raw data and for processing it in order to extract physically meaningful quantities, and for transferring the results to the 150 Tier 2 centers. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of simulations. Tier 3 centers are at the level of the laboratories; they provide a complementary and local resource to the Tier 2s for data analysis. (A.C.)

  17. Application of a common spatial pattern-based algorithm for an fNIRS-based motor imagery brain-computer interface.

    Science.gov (United States)

    Zhang, Shen; Zheng, Yanchun; Wang, Daifa; Wang, Ling; Ma, Jianai; Zhang, Jing; Xu, Weihao; Li, Deyu; Zhang, Dan

    2017-08-10

    Motor imagery is one of the most investigated paradigms in the field of brain-computer interfaces (BCIs). The present study explored the feasibility of applying a common spatial pattern (CSP)-based algorithm for a functional near-infrared spectroscopy (fNIRS)-based motor imagery BCI. Ten participants performed kinesthetic imagery of their left- and right-hand movements while 20-channel fNIRS signals were recorded over the motor cortex. The CSP method was implemented to obtain the spatial filters specific for both imagery tasks. The mean, slope, and variance of the CSP filtered signals were taken as features for BCI classification. Results showed that the CSP-based algorithm outperformed two representative channel-wise methods for classifying the two imagery statuses using either data from all channels or averaged data from imagery responsive channels only (oxygenated hemoglobin: CSP-based: 75.3±13.1%; all-channel: 52.3±5.3%; averaged: 64.8±13.2%; deoxygenated hemoglobin: CSP-based: 72.3±13.0%; all-channel: 48.8±8.2%; averaged: 63.3±13.3%). Furthermore, the effectiveness of the CSP method was also observed for the motor execution data to a lesser extent. A partial correlation analysis revealed significant independent contributions from all three types of features, including the often-ignored variance feature. To our knowledge, this is the first study demonstrating the effectiveness of the CSP method for fNIRS-based motor imagery BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
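
    The CSP step described above can be made concrete with a short sketch. The following is a minimal illustration, not the authors' code: it estimates CSP spatial filters from two classes of (trial x channel x sample) arrays via a generalized eigenvalue problem and then extracts the mean, slope and variance features mentioned in the abstract. Array shapes, the number of filter pairs and the synthetic data are assumptions made only for the example.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=2):
            """Common spatial patterns for two classes of (trial, channel, sample) data.

            Returns 2*n_pairs spatial filters (channels x filters): the extreme
            eigenvectors of the generalized eigenvalue problem Ca w = lam (Ca + Cb) w.
            """
            def mean_cov(trials):
                covs = [np.cov(t) for t in trials]          # channel x channel, per trial
                return np.mean(covs, axis=0)

            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            vals, vecs = eigh(ca, ca + cb)                   # eigenvalues sorted ascending
            order = np.argsort(vals)
            picks = np.r_[order[:n_pairs], order[-n_pairs:]] # most discriminative filters
            return vecs[:, picks]                            # shape: channels x 2*n_pairs

        def csp_features(trial, filters, dt=1.0):
            """Mean, slope and variance of each CSP-filtered time course (one trial)."""
            z = filters.T @ trial                            # filters x samples
            t = np.arange(z.shape[1]) * dt
            mean = z.mean(axis=1)
            slope = np.array([np.polyfit(t, zi, 1)[0] for zi in z])
            var = z.var(axis=1)
            return np.concatenate([mean, slope, var])

        # Synthetic 20-channel, fNIRS-like trials (10 trials per imagery class).
        rng = np.random.default_rng(0)
        left = rng.standard_normal((10, 20, 200))
        right = rng.standard_normal((10, 20, 200))
        W = csp_filters(left, right)
        features = np.array([csp_features(t, W) for t in np.concatenate([left, right])])
        print(features.shape)   # (20, 12): 4 filters x 3 feature types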

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  19. Optical Computing

    OpenAIRE

    Woods, Damien; Naughton, Thomas J.

    2008-01-01

    We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...

  20. Common Courses for Common Purposes:

    DEFF Research Database (Denmark)

    Schaub Jr, Gary John

    2014-01-01

    (PME)? I suggest three alternative paths that increased cooperation in PME at the level of the command and staff course could take: a Nordic Defence College, standardized national command and staff courses, and a core curriculum of common courses for common purposes. I conclude with a discussion of how...

  1. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration; Velikhov, Vasily; Konoplich, Rostislav

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. It is important to note that Grid computing resources are becoming strictly limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One possibility to address the shortfall of computing resources is the use of institutes' computer clusters, commercial computing resources and supercomputers. To perform precision measurements of the Higgs boson properties under these conditions, it is also essential to have effective instruments to simulate kinematic distributions of signal events. In this talk we give a brief description of the modern distribution reconstruction method called Morphing and perform a few efficiency tests to demonstrate its potential. These studies have been performed on the WLCG and at the Kurchatov Institute’s Data Processing Center, including its Tier-1 Grid site and supercomputer. We also analyze the CPU efficienc...

  2. Creative Commons

    DEFF Research Database (Denmark)

    Jensen, Lone

    2006-01-01

    A Creative Commons licence gives an author the possibility of offering a work under an alternative licensing arrangement, situated at one of several steps on a scale between the extremes "All rights reserved" and "No rights reserved". The result is a "Some rights reserved" licence.

  3. Science commons

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    SCP: Creative Commons licensing for open access publishing, Open Access Law journal-author agreements for converting journals to open access, and the Scholar's Copyright Addendum Engine for retaining rights to self-archive in meaningful formats and locations for future re-use. More than 250 science and technology journals already publish under Creative Commons licensing while 35 law journals utilize the Open Access Law agreements. The Addendum Engine is a new tool created in partnership with SPARC and U.S. universities. View John Wilbanks's biography

  4. Common approach to common interests

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-06-01

    In referring to issues confronting the energy field in this region and options to be exercised in the future, I would like to mention the fundamental condition of the utmost importance. That can be summed up as follows: any subject in energy area can never be solved by one country alone, given the geographical and geopolitical characteristics intrinsically possessed by energy. So, a regional approach is needed and it is especially necessary for the main players in the region to jointly address problems common to them. Though it may be a matter to be pursued in the distant future, I am personally dreaming a 'Common Energy Market for Northeast Asia,' in which member countries' interests are adjusted so that the market can be integrated and the region can become a most economically efficient market, thus formulating an effective power to encounter the outside. It should be noted that Europe needed forty years to integrate its market as the unified common market. It is necessary for us to follow a number of steps over the period to eventually materialize our common market concept, too. Now is the time for us to take a first step to lay the foundation for our descendants to enjoy prosperity from such a common market.

  5. Common envelope evolution

    NARCIS (Netherlands)

    Taam, Ronald E.; Ricker, Paul M.

    2010-01-01

    The common envelope phase of binary star evolution plays a central role in many evolutionary pathways leading to the formation of compact objects in short period systems. Using three dimensional hydrodynamical computations, we review the major features of this evolutionary phase, focusing on the

  6. Making the Common Good Common

    Science.gov (United States)

    Chase, Barbara

    2011-01-01

    How are independent schools to be useful to the wider world? Beyond their common commitment to educate their students for meaningful lives in service of the greater good, can they educate a broader constituency and, thus, share their resources and skills more broadly? Their answers to this question will be shaped by their independence. Any…

  7. ATLAS Distributed Computing: Its Central Services core

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration

    2018-01-01

    The ATLAS Distributed Computing (ADC) Project is responsible for the off-line processing of data produced by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. It facilitates data and workload management for ATLAS computing on the Worldwide LHC Computing Grid (WLCG). ADC Central Services operations (CSops) is a vital part of ADC, responsible for the deployment and configuration of services needed by ATLAS computing and the operation of those services on CERN IT infrastructure, providing knowledge of CERN IT services to ATLAS service managers and developers, and supporting them in case of issues. Currently this entails the management of thirty-seven different OpenStack projects, with more than five thousand cores allocated for these virtual machines, as well as overseeing the distribution of twenty-nine petabytes of storage space in EOS for ATLAS. As the LHC begins to get ready for the next long shut-down, which will bring in many new upgrades to allow for more data to be captured by the on-line syste...

  8. Computed tomography dosimeter utilizing a radiochromic film and an optical common-mode rejection: characterization and calibration of the GafChromic XRCT film

    International Nuclear Information System (INIS)

    Ohuchi, H.; Abe, M.

    2008-01-01

    Gafchromic XRCT radiochromic film is a self-developing, high-sensitivity radiochromic film product which can be used for the assessment of delivered radiation doses in applications such as computed tomography (CT) dosimetry. The film automatically changes color upon irradiation, from amber to dark greenish-black, depending on the level of exposure. The absorption spectra of Gafchromic XRCT radiochromic film, as measured with reflectance spectrophotometry, have been investigated to analyse the dosimetry characteristics of the film. The results show two main absorption peaks produced by irradiation, located at around 630 nm and 580 nm. We employed a commercially available optical flatbed scanner for digitization of the film, and image analysis software to determine the response of the XRCT films to ionizing radiation. Two dose-response curves as a function of delivered dose, ranging from 1.069 to 119.7 mGy for tube voltages of 80, 100, and 120 kV X-ray beams and for films scanned 24 hrs after exposure, are obtained. One represents the net optical density obtained with the conventional analysis using only the red component, and the other shows the net reduced OD obtained with the optical CMR scheme, which we developed, using the red and green components. The measured ODs obtained with the optical CMR scheme show good consistency among the four samples, and all values follow a second-order polynomial fit down to below 1 mGy, while those obtained with the conventional analysis exhibited a large discrepancy among the four samples and did not follow a second-order polynomial fit below 1 mGy. This result, combined with the film's energy independence over the 80 kV to 120 kV X-ray energy range, provides a unique enhancement in dosimetric measurement capabilities, such as the acquisition of high-spatial-resolution, calibrated radiation dose profiles, over currently available dosimetry films for CT applications. (author)
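
    The record does not give the exact formula of the optical common-mode rejection (CMR) scheme, so the sketch below only illustrates the general idea under one stated assumption: that the green channel, which is much less sensitive to the main 630 nm absorption change than the red channel, is used as a reference whose optical density is subtracted to cancel common-mode scanner and film non-uniformities. All numbers and names are invented for the example.

        import numpy as np

        def net_od(before, after):
            """Net optical density from reflectance (pixel-value) images of the same
            film region scanned before and after exposure."""
            return np.log10(before.astype(float) / after.astype(float))

        def net_od_cmr(red_before, red_after, green_before, green_after):
            """Illustrative common-mode-rejection estimate (an assumption, not the
            published formula): subtract the green-channel response, which mostly
            carries common-mode scanner/film variations, from the red-channel one."""
            return net_od(red_before, red_after) - net_od(green_before, green_after)

        # Tiny synthetic example: a uniform dose response plus a common-mode artefact.
        rng = np.random.default_rng(1)
        artefact = 1.0 + 0.05 * rng.standard_normal((4, 4))   # shared by both channels
        red_b, green_b = np.full((4, 4), 200.0), np.full((4, 4), 180.0)
        red_a = red_b * 0.7 * artefact      # real dose response only in the red channel
        green_a = green_b * artefact        # green sees only the common-mode part
        print(net_od_cmr(red_b, red_a, green_b, green_a).round(3))  # ~ -log10(0.7) everywhere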

  9. Algorithms for computing parsimonious evolutionary scenarios for genome evolution, the last universal common ancestor and dominance of horizontal gene transfer in the evolution of prokaryotes

    Directory of Open Access Journals (Sweden)

    Galperin Michael Y

    2003-01-01

    Full Text Available Abstract Background Comparative analysis of sequenced genomes reveals numerous instances of apparent horizontal gene transfer (HGT), at least in prokaryotes, and indicates that lineage-specific gene loss might have been even more common in evolution. This complicates the notion of a species tree, which needs to be re-interpreted as a prevailing evolutionary trend, rather than the full depiction of evolution, and makes reconstruction of ancestral genomes a non-trivial task. Results We addressed the problem of constructing parsimonious scenarios for individual sets of orthologous genes given a species tree. The orthologous sets were taken from the database of Clusters of Orthologous Groups of proteins (COGs). We show that the phyletic patterns (patterns of presence-absence in completely sequenced genomes) of almost 90% of the COGs are inconsistent with the hypothetical species tree. Algorithms were developed to reconcile the phyletic patterns with the species tree by postulating gene loss, COG emergence and HGT (the latter two classes of events were collectively treated as gene gains). We prove that each of these algorithms produces a parsimonious evolutionary scenario, which can be represented as a mapping of loss and gain events on the species tree. The distribution of the evolutionary events among the tree nodes substantially depends on the underlying assumptions of the reconciliation algorithm, e.g. whether or not independent gene gains (gain after loss after gain) are permitted. Biological considerations suggest that, on average, gene loss might be a more likely event than gene gain. Therefore different gain penalties were used and the resulting series of reconstructed gene sets for the last universal common ancestor (LUCA) of the extant life forms were analysed. The number of genes in the reconstructed LUCA gene sets grows as the gain penalty increases. However, qualitative examination of the LUCA versions reconstructed with different gain penalties
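
    A gain/loss parsimony reconciliation of the kind discussed above can be sketched as a small Sankoff-style dynamic programme over a rooted species tree. The tree, the phyletic pattern and the penalty values below are illustrative only, with the tunable gain penalty corresponding to the gain penalties explored in the article.

        # Sankoff-style small parsimony for presence/absence of one gene family on a
        # rooted species tree. States: 0 = absent, 1 = present. A 0->1 change on a
        # branch is a gain (COG emergence or HGT, penalty `gain`), 1->0 is a loss
        # (penalty `loss`). Minimising the total penalty over internal labellings
        # gives one parsimonious gain/loss scenario for the phyletic pattern.
        INF = float("inf")

        def min_cost(tree, leaf_states, gain=2.0, loss=1.0):
            """tree: dict node -> (left, right) for internal nodes; leaves absent.
            leaf_states: dict leaf -> 0/1 (the phyletic pattern).
            Returns cost[node][state]: cheapest subtree cost if node has `state`."""
            cost = {}

            def branch(parent_state, child):
                # Cheapest cost of `child` subtree plus the penalty on its branch.
                return min(cost[child][s] +
                           (gain if (parent_state, s) == (0, 1) else
                            loss if (parent_state, s) == (1, 0) else 0.0)
                           for s in (0, 1))

            def down(node):
                if node in leaf_states:                      # leaf: state is observed
                    cost[node] = [INF, INF]
                    cost[node][leaf_states[node]] = 0.0
                    return
                left, right = tree[node]
                down(left), down(right)
                cost[node] = [branch(s, left) + branch(s, right) for s in (0, 1)]

            down("root")
            return cost

        # Gene present in A and C, absent in B and D, on the tree ((A,B),(C,D)).
        tree = {"root": ("n1", "n2"), "n1": ("A", "B"), "n2": ("C", "D")}
        pattern = {"A": 1, "B": 0, "C": 1, "D": 0}
        c = min_cost(tree, pattern, gain=2.0)
        print(min(c["root"]))   # cheapest scenario cost for this gain penalty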

  10. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.; Kuznetsov, V. [Cornell U.; Magini, N. [Fermilab; Repečka, A. [Vilnius U.; Vaandering, E. [Fermilab

    2017-11-22

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but they are of crucial importance for a better understanding of how CMS operated successfully, and for reaching an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote to disk at the WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced by the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
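
    As an illustration of the MapReduce processing mentioned above, the following Hadoop-Streaming-style mapper/reducer pair (plain Python reading stdin and writing stdout) counts how many replica records exist per site and dataset in a JSON-lines dump. The field names 'node' and 'dataset' are assumptions for the example, not the real CMS monitoring schema.

        #!/usr/bin/env python3
        """Hadoop Streaming style sketch: count dataset replicas per WLCG site.

        Intended to be run as a -mapper / -reducer pair over JSON-lines monitoring
        dumps; field names ('node', 'dataset') are illustrative assumptions.
        """
        import json
        import sys
        from itertools import groupby

        def mapper(stream):
            for line in stream:
                try:
                    rec = json.loads(line)
                except ValueError:
                    continue                   # skip malformed monitoring records
                print(f"{rec['node']}\t{rec['dataset']}\t1")

        def reducer(stream):
            rows = (line.rstrip("\n").split("\t") for line in stream)
            # Hadoop delivers reducer input sorted by key, so groupby is sufficient.
            for (site, dataset), group in groupby(rows, key=lambda r: (r[0], r[1])):
                print(f"{site}\t{dataset}\t{sum(int(r[2]) for r in group)}")

        if __name__ == "__main__":
            (mapper if sys.argv[1:] == ["map"] else reducer)(sys.stdin)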

  11. Evolution of the ATLAS Distributed Computing during the LHC long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2013-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  12. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  13. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... Computed Tomography (CT) - Sinuses Computed tomography (CT) of the sinuses ... CT of the Sinuses? What is CT (Computed Tomography) of the Sinuses? Computed tomography, more commonly known ...

  14. An automated meta-monitoring mobile application and front-end interface for the ATLAS computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Quadt, Arnulf [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany)

    2016-07-01

    Efficient administration of computing centres requires advanced tools for monitoring the infrastructure and a front-end interface to it. Large-scale distributed systems forming a global grid infrastructure, like the Worldwide LHC Computing Grid (WLCG) and ATLAS computing, already offer many web pages and information sources indicating the status of the services, systems and user jobs at grid sites. A meta-monitoring mobile application which automatically collects this information could give every administrator a sophisticated and flexible interface to the infrastructure. We describe such a solution: the MadFace mobile application developed at Goettingen. It is a HappyFace-compatible mobile application with a user-friendly interface. It also makes it feasible to automatically investigate status and problems from different sources, and provides access to administration roles for non-experts.

  15. The Tragedy of the Commons

    Science.gov (United States)

    Short, Daniel

    2016-01-01

    The tragedy of the commons is one of the principal tenets of ecology. Recent developments in experiential computer-based simulation of the tragedy of the commons are described. A virtual learning environment is developed using the popular video game "Minecraft". The virtual learning environment is used to experience first-hand depletion…

  16. Computation for LHC experiments: a worldwide computing grid; Le calcul scientifique des experiences LHC: une grille de production mondiale

    Energy Technology Data Exchange (ETDEWEB)

    Fairouz, Malek [Universite Joseph-Fourier, LPSC, CNRS-IN2P3, Grenoble I, 38 (France)

    2010-08-15

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 octets per second and a recording capacity of a few tens of 10^15 octets each year. In order to meet this challenge, a computing network involving the distribution and sharing of tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of 4 tiers. Tier 0 is the computing center at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier 1 centers. A Tier 1 is typically a national center; it is responsible for keeping a copy of the raw data and for processing it in order to extract physically meaningful quantities, and for transferring the results to the 150 Tier 2 centers. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of simulations. Tier 3 centers are at the level of the laboratories; they provide a complementary and local resource to the Tier 2s for data analysis. (A.C.)

  17. High-throughput landslide modelling using computational grids

    Science.gov (United States)

    Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.

    2012-04-01

    physicists and geographical scientists are collaborating to develop methods for providing simple and effective access to landslide models and associated simulation data. Particle physicists have valuable experience in dealing with data complexity and management due to the scale of data generated by particle accelerators such as the Large Hadron Collider (LHC). The LHC generates tens of petabytes of data every year which is stored and analysed using the Worldwide LHC Computing Grid (WLCG). Tools and concepts from the WLCG are being used to drive the development of a Software-as-a-Service (SaaS) platform to provide access to hosted landslide simulation software and data. It contains advanced data management features and allows landslide simulations to be run on the WLCG, dramatically reducing simulation runtimes by parallel execution. The simulations are accessed using a web page through which users can enter and browse input data, submit jobs and visualise results. Replication of the data ensures a local copy can be accessed should a connection to the platform be unavailable. The platform does not know the details of the simulation software it runs, so it is therefore possible to use it to run alternative models at similar scales. This creates the opportunity for activities such as model sensitivity analysis and performance comparison at scales that are impractical using standalone software.

  18. Unified Monitoring Architecture for IT and Grid Services

    Science.gov (United States)

    Aimar, A.; Aguado Corman, A.; Andrade, P.; Belov, S.; Delgado Fernandez, J.; Garrido Bear, B.; Georgiou, M.; Karavakis, E.; Magnoni, L.; Rama Ballesteros, R.; Riahi, H.; Rodriguez Martinez, J.; Saiz, P.; Zolnai, D.

    2017-10-01

    This paper provides a detailed overview of the Unified Monitoring Architecture (UMA) that aims at merging the monitoring of the CERN IT data centres and the WLCG monitoring using common and widely-adopted open source technologies such as Flume, Elasticsearch, Hadoop, Spark, Kibana, Grafana and Zeppelin. It provides insights and details on the lessons learned, explaining the work performed in order to monitor the CERN IT data centres and the WLCG computing activities such as the job processing, data access and transfers, and the status of sites and services.
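
    As a small illustration of the kind of event flow such an architecture handles, the sketch below formats a single (hypothetical) transfer-monitoring event and indexes it into Elasticsearch through the generic document API. The index name, field names and endpoint are assumptions, and the real UMA pipeline transports events through Flume rather than direct HTTP calls from producers.

        import json
        from datetime import datetime, timezone
        from urllib.request import Request, urlopen

        # Illustrative only: index name, field names and endpoint are assumptions.
        ES_URL = "http://localhost:9200/wlcg-transfers/_doc"

        def ship_event(src_site, dst_site, bytes_moved, status):
            """Index a single, hypothetical transfer-monitoring event."""
            event = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "src_site": src_site,
                "dst_site": dst_site,
                "bytes": bytes_moved,
                "status": status,
            }
            req = Request(ES_URL, data=json.dumps(event).encode(),
                          headers={"Content-Type": "application/json"})
            with urlopen(req) as resp:            # POST, because a body is supplied
                return json.load(resp)

        # Example call (requires a reachable Elasticsearch instance):
        # ship_event("T1_ES_PIC", "T2_ES_IFCA", 2_000_000_000, "done")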

  19. Maintaining Traceability in an Evolving Distributed Computing Environment

    Science.gov (United States)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store the information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their
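
    The who/what/where/when requirement can be made concrete as a log-record structure. The sketch below is illustrative only (field names and format are not a WLCG/EGI standard): it defines the minimum security events listed above and emits one JSON record per event.

        import json
        from dataclasses import dataclass, asdict
        from datetime import datetime, timezone
        from enum import Enum

        class SecurityEvent(str, Enum):
            CONNECT = "connect"
            AUTHENTICATE = "authenticate"
            AUTHORIZE = "authorize"          # includes identity changes
            DISCONNECT = "disconnect"

        @dataclass
        class TraceRecord:
            """Minimum traceability information: who, what, where and when."""
            timestamp: str                   # when  (UTC, ISO 8601)
            identity: str                    # who   (e.g. certificate DN or VO identity)
            event: SecurityEvent             # what
            service: str                     # where (service instance)
            detail: str = ""                 # e.g. pilot job ID, transfer URL, VM ID

        def log_event(identity, event, service, detail=""):
            rec = TraceRecord(datetime.now(timezone.utc).isoformat(),
                              identity, event, service, detail)
            print(json.dumps(asdict(rec)))   # in practice: an append-only, retained store

        log_event("/DC=ch/DC=cern/CN=pilot-robot", SecurityEvent.AUTHENTICATE,
                  "ce01.example.org", detail="pilot job 1234")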

  20. The common ancestry of life

    Directory of Open Access Journals (Sweden)

    Wolf Yuri I

    2010-11-01

    Full Text Available Abstract Background It is common belief that all cellular life forms on earth have a common origin. This view is supported by the universality of the genetic code and the universal conservation of multiple genes, particularly those that encode key components of the translation system. A remarkable recent study claims to provide a formal, homology independent test of the Universal Common Ancestry hypothesis by comparing the ability of a common-ancestry model and a multiple-ancestry model to predict sequences of universally conserved proteins. Results We devised a computational experiment on a concatenated alignment of universally conserved proteins which shows that the purported demonstration of the universal common ancestry is a trivial consequence of significant sequence similarity between the analyzed proteins. The nature and origin of this similarity are irrelevant for the prediction of "common ancestry" by the model-comparison approach. Thus, homology (common origin of the compared proteins) remains an inference from sequence similarity rather than an independent property demonstrated by the likelihood analysis. Conclusion A formal demonstration of the Universal Common Ancestry hypothesis has not been achieved and is unlikely to be feasible in principle. Nevertheless, the evidence in support of this hypothesis provided by comparative genomics is overwhelming. Reviewers: This article was reviewed by William Martin, Ivan Iossifov (nominated by Andrey Rzhetsky) and Arcady Mushegian. For the complete reviews, see the Reviewers' Report section.

  1. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe as well the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
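
    The following sketch is not the VMDIRAC API; it only illustrates the kind of thin, uniform abstraction that lets one interware instantiate, monitor and stop virtual machines on several cloud back-ends through a common interface. Class and method names are invented, and the fake back-end stands in for real EC2/OpenNebula/OpenStack/CloudStack drivers.

        from abc import ABC, abstractmethod

        class CloudEndpoint(ABC):
            """Minimal common interface an interware needs from any cloud back-end."""

            @abstractmethod
            def start_vm(self, image, flavour, contextualisation): ...
            @abstractmethod
            def status(self, vm_id): ...
            @abstractmethod
            def stop_vm(self, vm_id): ...

        class FakeOpenStackEndpoint(CloudEndpoint):
            """Stand-in back-end; a real one would call the OpenStack (or EC2, ...) API."""
            def __init__(self):
                self._vms, self._next = {}, 0
            def start_vm(self, image, flavour, contextualisation):
                self._next += 1
                self._vms[self._next] = "running"
                return self._next
            def status(self, vm_id):
                return self._vms.get(vm_id, "unknown")
            def stop_vm(self, vm_id):
                self._vms[vm_id] = "halted"

        def run_payloads(endpoints, n_vms, image="cernvm", flavour="m1.large"):
            """Instantiate, monitor and (eventually) stop VMs across several clouds."""
            booked = [(ep, ep.start_vm(image, flavour, {"site": "Cloud-Site"}))
                      for ep in endpoints for _ in range(n_vms)]
            return [(type(ep).__name__, vm, ep.status(vm)) for ep, vm in booked]

        print(run_payloads([FakeOpenStackEndpoint(), FakeOpenStackEndpoint()], n_vms=2))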

  2. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) - it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe as well the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  3. Cloud Computing

    CERN Document Server

    Baun, Christian; Nimis, Jens; Tai, Stefan

    2011-01-01

    Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d

  4. Common Readout System in ALICE

    CERN Document Server

    Jubin, Mitra

    2016-01-01

    The ALICE experiment at the CERN Large Hadron Collider is going for a major physics upgrade in 2018. This upgrade is necessary for getting high statistics and high precision measurement for probing into rare physics channels needed to understand the dynamics of the condensed phase of QCD. The high interaction rate and the large event size in the upgraded detectors will result in an experimental data flow traffic of about 1 TB/s from the detectors to the on-line computing system. A dedicated Common Readout Unit (CRU) is proposed for data concentration, multiplexing, and trigger distribution. CRU, as common interface unit, handles timing, data and control signals between on-detector systems and online-offline computing system. An overview of the CRU architecture is presented in this manuscript.

  5. Common Readout System in ALICE

    CERN Document Server

    Jubin, Mitra

    2017-01-01

    The ALICE experiment at the CERN Large Hadron Collider is going for a major physics upgrade in 2018. This upgrade is necessary for getting high statistics and high precision measurement for probing into rare physics channels needed to understand the dynamics of the condensed phase of QCD. The high interaction rate and the large event size in the upgraded detectors will result in an experimental data flow traffic of about 1 TB/s from the detectors to the on-line computing system. A dedicated Common Readout Unit (CRU) is proposed for data concentration, multiplexing, and trigger distribution. CRU, as common interface unit, handles timing, data and control signals between on-detector systems and online-offline computing system. An overview of the CRU architecture is presented in this manuscript.

  6. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... are the limitations of CT of the Sinuses? What is CT (Computed Tomography) of the Sinuses? Computed ... nasal cavity by small openings. What are some common uses of the procedure? CT ...

  7. CERN database services for the LHC computing grid

    Energy Technology Data Exchange (ETDEWEB)

    Girone, M [CERN IT Department, CH-1211 Geneva 23 (Switzerland)], E-mail: maria.girone@cern.ch

    2008-07-15

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  8. CERN database services for the LHC computing grid

    International Nuclear Information System (INIS)

    Girone, M

    2008-01-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed

  9. Availability measurement of grid services from the perspective of a scientific computing centre

    International Nuclear Information System (INIS)

    Marten, H; Koenig, T

    2011-01-01

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de-facto-standard 'IT Infrastructure Library (ITIL)' was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence the customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault tolerant and error correcting design features are reducing the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.
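
    One elementary calculation behind such guidelines can be sketched as follows: per-component steady-state availability A = MTBF / (MTBF + MTTR), a product for components that are all required (series), and a binomial sum for k-of-n redundant instances. The numbers and the service composition in the example are purely illustrative.

        from math import comb, prod

        def availability(mtbf_hours, mttr_hours):
            """Steady-state availability of one component."""
            return mtbf_hours / (mtbf_hours + mttr_hours)

        def series(avails):
            """Service that needs every component to be up at the same time."""
            return prod(avails)

        def k_of_n(a, k, n):
            """At least k of n identical, independent redundant instances must be up."""
            return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

        # Illustrative numbers only: a grid service made of two single components in
        # series plus a 1-of-2 redundant pair of front-end servers.
        single = availability(mtbf_hours=1000, mttr_hours=8)      # ~0.9921
        redundant_pair = k_of_n(single, k=1, n=2)                  # ~0.99994
        print(round(series([single, single, redundant_pair]), 4))  # overall availability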

  10. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

    Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why the terms are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly; it is very hard even for professionals to keep updated. Computer people do not

  11. Common Misconceptions about Cholesterol

    Science.gov (United States)

    How much do you ... are some common misconceptions — and the truth. High cholesterol isn’t a concern for children. High cholesterol ...

  12. How Common Is PTSD?

    Science.gov (United States)

    How Common Is PTSD? This section is for Veterans, General Public, ...

  13. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... Children's (Pediatric) CT (Computed Tomography) Pediatric computed tomography (CT) is a fast, painless exam that uses special ... the limitations of Children's CT? What is Children's CT? Computed tomography, more commonly known as a CT or CAT ...

  14. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... Children's (Pediatric) CT (Computed Tomography) Pediatric computed tomography (CT) ... are the limitations of Children's CT? What is Children's CT? Computed tomography, more commonly known as a ...

  15. Common cause failure analysis methodology for complex systems

    International Nuclear Information System (INIS)

    Wagner, D.P.; Cate, C.L.; Fussell, J.B.

    1977-01-01

    Common cause failure analysis, also called common mode failure analysis, is an integral part of a complex system reliability analysis. This paper extends existing methods of computer aided common cause failure analysis by allowing analysis of the complex systems often encountered in practice. The methods presented here aid in identifying potential common cause failures and also address quantitative common cause failure analysis
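
    One common computer-aided step in such an analysis is to scan the minimal cut sets of a fault tree for cut sets whose basic events all share a susceptibility (same location, same maintenance crew, same environment), since a single shared root cause could then defeat the redundancy. The sketch below illustrates this idea with invented events and attributes; it is not the method of the paper.

        # Scan minimal cut sets for potential common cause failures: a cut set whose
        # basic events all share a susceptibility can be defeated by one root cause.
        SUSCEPTIBILITIES = {
            "PUMP_A_FAILS":  {"location": "room-101", "crew": "night"},
            "PUMP_B_FAILS":  {"location": "room-101", "crew": "day"},
            "VALVE_1_STUCK": {"location": "room-205", "crew": "night"},
        }

        MINIMAL_CUT_SETS = [
            {"PUMP_A_FAILS", "PUMP_B_FAILS"},     # shared location -> candidate CCF
            {"PUMP_A_FAILS", "VALVE_1_STUCK"},    # shared crew     -> candidate CCF
            {"PUMP_B_FAILS", "VALVE_1_STUCK"},    # nothing shared
        ]

        def common_cause_candidates(cut_sets, attrs):
            """Yield (cut set, attribute, shared value) for every fully shared attribute."""
            for cut in cut_sets:
                events = [attrs[e] for e in cut]
                for key in set().union(*(a.keys() for a in events)):
                    values = {a.get(key) for a in events}
                    if len(values) == 1 and None not in values:
                        yield sorted(cut), key, values.pop()

        for cut, key, value in common_cause_candidates(MINIMAL_CUT_SETS, SUSCEPTIBILITIES):
            print(f"{cut}: all events share {key}={value}")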

  16. Common Law and Un-common Sense

    OpenAIRE

    Ballard, Roger

    2000-01-01

    This paper examines the practical and conceptual differences which arise when juries are invited to apply their common sense in assessing reasonable behaviour in the midst of an ethnically plural society. The author explores the conundrums which the increasing salience of ethnic pluralism has now begun to pose in legal terms, most especially with respect to organisation of system for the equitable administration and delivery of justice in the context of an increasingly heterogeneous society. ...

  17. The common good

    OpenAIRE

    Argandoña, Antonio

    2011-01-01

    The concept of the common good occupied a relevant place in classical social, political and economic philosophy. After losing ground in the Modern age, it has recently reappeared, although with different and sometimes confusing meanings. This paper is the draft of a chapter of a Handbook; it explains the meaning of common good in the Aristotelian-Thomistic philosophy and in the Social Doctrine of the Catholic Church; why the common good is relevant; and how it is different from the other uses...

  18. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2013-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of ATLAS experiment the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of Sim@P1 project dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and using it to run the large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  19. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2014-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of ATLAS experiment the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of Sim@P1 project dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and using it to run the large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  20. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... Computed Tomography (CT) - Head Computed tomography (CT) of the head uses special x-ray ... What is CT Scanning of the Head? Computed tomography, more commonly known as a CT or CAT ...

  1. Efektivitas Instagram Common Grounds

    OpenAIRE

    Wifalin, Michelle

    2016-01-01

    The effectiveness of the Common Grounds Instagram account is the research question addressed in this study. Instagram effectiveness is measured using the Customer Response Index (CRI), in which respondents are measured at various levels, from awareness, comprehension and interest through to intentions and action. These levels of response are used to measure the effectiveness of the Common Grounds Instagram account. The theories used to support this research are marketing Public Relations theory, advertising theory, effecti...

  2. Common Variable Immunodeficiency (CVID)

    Science.gov (United States)


  3. Genomic Data Commons launches

    Science.gov (United States)

    The Genomic Data Commons (GDC), a unified data system that promotes sharing of genomic and clinical data between researchers, launched today with a visit from Vice President Joe Biden to the operations center at the University of Chicago.

  4. Common Mental Health Issues

    Science.gov (United States)

    Stock, Susan R.; Levine, Heidi

    2016-01-01

    This chapter provides an overview of common student mental health issues and approaches for student affairs practitioners who are working with students with mental illness, and ways to support the overall mental health of students on campus.

  5. Five Common Glaucoma Tests

    Science.gov (United States)

    ... year or two after age 35. A Comprehensive Glaucoma Exam To be safe and accurate, five factors ...

  6. Common symptoms during pregnancy

    Science.gov (United States)

    ... keep your gums healthy Swelling, Varicose Veins, and Hemorrhoids Swelling in your legs is common. You may ... In your rectum, veins that swell are called hemorrhoids. To reduce swelling: Raise your legs and rest ...

  7. The Common Good

    DEFF Research Database (Denmark)

    Feldt, Liv Egholm

    At present voluntary and philanthropic organisations are experiencing significant public attention and academic discussions about their role in society. Central to the debate is on one side the question of how they contribute to “the common good”, and on the other the question of how they can avoid...... and concepts continuously over time have blurred the different sectors and “polluted” contemporary definitions of the “common good”. The analysis shows that “the common good” is not an autonomous concept owned or developed by specific spheres of society. The analysis stresses that historically, “the common...... good” has always been a contested concept. It is established through messy and blurred heterogeneity of knowledge, purposes and goal achievements originating from a multitude of scientific, religious, political and civil society spheres contested not only in terms of words and definitions but also...

  8. Childhood Obesity: Common Misconceptions

    Science.gov (United States)

    Everyone, it ... for less than 1% of the cases of childhood obesity. Yes, hypothyroidism (a deficit in thyroid secretion) and ...

  9. Common Childhood Orthopedic Conditions

    Science.gov (United States)

    ... pain. Toe Walking Toe walking is common among toddlers as they learn to walk, especially during the ...

  10. Review of quantum computation

    International Nuclear Information System (INIS)

    Lloyd, S.

    1992-01-01

    Digital computers are machines that can be programmed to perform logical and arithmetical operations. Contemporary digital computers are ''universal,'' in the sense that a program that runs on one computer can, if properly compiled, run on any other computer that has access to enough memory space and time. Any one universal computer can simulate the operation of any other; and the set of tasks that any such machine can perform is common to all universal machines. Since Bennett's discovery that computation can be carried out in a non-dissipative fashion, a number of Hamiltonian quantum-mechanical systems have been proposed whose time-evolutions over discrete intervals are equivalent to those of specific universal computers. The first quantum-mechanical treatment of computers was given by Benioff, who exhibited a Hamiltonian system with a basis whose members corresponded to the logical states of a Turing machine. In order to make the Hamiltonian local, in the sense that its structure depended only on the part of the computation being performed at that time, Benioff found it necessary to make the Hamiltonian time-dependent. Feynman discovered a way to make the computational Hamiltonian both local and time-independent by incorporating the direction of computation in the initial condition. In Feynman's quantum computer, the program is a carefully prepared wave packet that propagates through different computational states. Deutsch presented a quantum computer that exploits the possibility of existing in a superposition of computational states to perform tasks that a classical computer cannot, such as generating purely random numbers, and carrying out superpositions of computations as a method of parallel processing. In this paper, we show that such computers, by virtue of their common function, possess a common form for their quantum dynamics

  11. Right-sided duplication of the inferior vena cava and the common iliac vein: hidden findings in spiral computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, D.R.; Friedrich, M. [Krankenhaus am Urban (Germany). Abt. fuer Roentgendiagnostik und Nuklearmedizin; Andresen, R. [Staedtisches Krankenhaus Zehlendorf, Behring (Germany). Abt. fuer Roentgendiagnostik und Nuklearmedizin

    1998-05-01

    Duplications of the inferior vena cava (IVC) are rare variants of the abdominal vessels and are normally located on both sides of the abdominal aorta. The rare case of a right-sided infrarenal duplication of the IVC with involvement of the common iliac vein is reported. Details of the embryology are presented for the understanding of this IVC variant. Spiral CT with multiplanar reconstructions makes it possible to define the vascular morphology and to differentiate it from lymphoma. (orig.)

  12. Spanish ATLAS Tier-1 &Tier-2 perspective on computing over the next years

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration

    2018-01-01

    Since the beginning of the WLCG Project the Spanish ATLAS computer centres have contributed reliable and stable resources as well as personnel to the ATLAS Collaboration. Our contribution to the ATLAS Tier2s and Tier1s computing resources (disk and CPUs) in the last 10 years has been around 5%, even though the Spanish contribution to the ATLAS detector construction as well as the number of authors are both close to 3%. In 2015 an international advisory committee recommended revising our contribution according to our participation in the ATLAS experiment. In this scenario, we are optimising the federation of three sites located in Barcelona, Madrid and Valencia, taking into account that the ATLAS collaboration has developed workflows and tools to use all the resources available to it flexibly, so that the tiered structure is gradually fading. In this contribution, we would like to show the evolution and technical updates in the ATLAS Spanish Federated Tier2 and Tier1. Some developments w...

  13. Computer Lexis and Terminology

    Directory of Open Access Journals (Sweden)

    Gintautas Grigas

    2011-04-01

    Full Text Available The computer has become a widely used tool in everyday work and at home. Every computer user sees texts on its screen containing many words naming new concepts. Those words come from the terminology used by specialists. A common vocabulary between computer terminology and the lexis of everyday language comes into existence. The article deals with the part of computer terminology that passes into everyday usage and with the influence of ordinary language on computer terminology. The relation between English and Lithuanian computer terminology and the construction and pronunciation of acronyms are discussed as well.

  14. Algorithms for solving common fixed point problems

    CERN Document Server

    Zaslavski, Alexander J

    2018-01-01

    This book details approximate solutions to common fixed point problems and convex feasibility problems in the presence of perturbations. Convex feasibility problems search for a common point of a finite collection of subsets in a Hilbert space; common fixed point problems pursue a common fixed point of a finite collection of self-mappings in a Hilbert space. A variety of algorithms are considered in this book for solving both types of problems, the study of which has fueled a rapidly growing area of research. This monograph is timely and highlights the numerous applications to engineering, computed tomography, and radiation therapy planning. Totaling eight chapters, this book begins with an introduction to foundational material and moves on to examine iterative methods in metric spaces. The dynamic string-averaging methods for common fixed point problems in normed space are analyzed in Chapter 3. Dynamic string methods, for common fixed point problems in a metric space are introduced and discussed in Chapter ...
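
    The convex feasibility problems mentioned above are often introduced through the method of alternating projections, of which the book's string-averaging schemes are far-reaching generalizations. The sketch below is only a minimal illustration of that basic idea; the two sets (a ball and a half-space) and all tolerances are illustrative choices, not taken from the book.

      import numpy as np

      def project_ball(x, center, radius):
          """Project x onto the closed ball {y : ||y - center|| <= radius}."""
          d = x - center
          n = np.linalg.norm(d)
          return x if n <= radius else center + radius * d / n

      def project_halfspace(x, a, b):
          """Project x onto the half-space {y : a.y <= b}."""
          viol = a @ x - b
          return x if viol <= 0 else x - viol * a / (a @ a)

      def alternating_projections(x0, steps=200, tol=1e-9):
          """Approximate a common point of a ball and a half-space."""
          center, radius = np.array([0.0, 0.0]), 1.0
          a, b = np.array([1.0, 1.0]), 0.5
          x = np.asarray(x0, dtype=float)
          for _ in range(steps):
              x_new = project_halfspace(project_ball(x, center, radius), a, b)
              if np.linalg.norm(x_new - x) < tol:
                  break
              x = x_new
          return x

      print(alternating_projections([3.0, 2.0]))  # lands (approximately) in the intersection of the two sets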

  15. Common Ground and Delegation

    DEFF Research Database (Denmark)

    Dobrajska, Magdalena; Foss, Nicolai Juul; Lyngsie, Jacob

    preconditions of increasing delegation. We argue that key HR practices, namely hiring, training and job-rotation, are associated with delegation of decision-making authority. These practices assist in the creation of shared knowledge conditions between managers and employees. In turn, such a "common ground" influences the confidence with which managers delegate decision authority to employees, as managers improve their knowledge of the educational background, firm-specific knowledge, and perhaps even the possible actions of those to whom they delegate such authority. To test these ideas, we match a large-scale questionnaire survey with unique population-wide employer-employee data. We find evidence of a direct and positive influence of hiring decisions (proxied by common educational background), and the training and job rotation of employees on delegation. Moreover, we find a positive interaction between common...

  16. Towards common technical standards

    International Nuclear Information System (INIS)

    Rahmat, H.; Suardi, A.R.

    1993-01-01

    In 1989, PETRONAS launched its Total Quality Management (TQM) program. In the same year the decision was taken by the PETRONAS Management to introduce common technical standards group wide. These standards apply to the design, construction, operation and maintenance of all PETRONAS installations in the upstream, downstream and petrochemical sectors. The introduction of common company standards is seen as part of an overall technical management system, which is an integral part of Total Quality Management. The Engineering and Safety Unit in the PETRONAS Central Office in Kuala Lumpur has been charged with the task of putting in place a set of technical standards throughout PETRONAS and its operating units

  17. COMMON FISCAL POLICY

    Directory of Open Access Journals (Sweden)

    Gabriel Mursa

    2014-08-01

    Full Text Available The purpose of this article is to demonstrate that a common fiscal policy, designed to support the euro currency, has some significant drawbacks. The greatest danger is the possibility of leveling the tax burden in all countries. This leveling of the tax is to the disadvantage of countries in Eastern Europe, in principle countries poorly endowed with capital, that use a lax fiscal policy (Romania, Bulgaria, etc.) to attract foreign investment from rich countries of the European Union. In addition, a common fiscal policy can lead to a higher degree of centralization of budgetary expenditures in the European Union.

  18. Common Privacy Myths

    Science.gov (United States)

    ... the common myths: Health information cannot be faxed – FALSE Your information may be shared between healthcare providers by faxing ... E-mail cannot be used to transmit health information – FALSE E-mail can be used to transmit information, ...

  19. Common Breastfeeding Challenges

    Science.gov (United States)

    ... Common breastfeeding challenges Breastfeeding can be ...

  20. Common mistakes of investors

    Directory of Open Access Journals (Sweden)

    Yuen Wai Pong Raymond

    2012-09-01

    Full Text Available Behavioral finance is an actively discussed topic in academic and investment circles. The main reason is that behavioral finance challenges the validity of a cornerstone of modern financial theory: the rationality of investors. In this paper, the common irrational behaviors of investors are discussed.

  1. Common tester platform concept.

    Energy Technology Data Exchange (ETDEWEB)

    Hurst, Michael James

    2008-05-01

    This report summarizes the results of a case study on the doctrine of a common tester platform, a concept of a standardized platform that is applicable across the broad spectrum of testing requirements throughout the various stages of a weapons program, as well as across the various weapons programs. The common tester concept strives to define an affordable, next-generation design that will meet testing requirements with the flexibility to grow and expand, supporting the initial development stages of a weapons program through to the final production and surveillance stages. This report discusses a concept that combines key leveraging technologies and operational concepts with prototype tester-development experiences and practical lessons gleaned from past weapons programs.

  2. Common-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest, M.; Bowerman, Paul N.

    1989-01-01

    CROSSER, a cumulative-binomial computer program and one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556) are used independently of one another. The program finds the point of equality between the reliability of a system and the common reliability of its components. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.
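
    The quantity described above, the component reliability at which a k-out-of-n system is exactly as reliable as one of its (common-reliability) components, can be reproduced with a few lines of code. This is not the CROSSER program itself (which, as noted, is written in C); it is a minimal Python sketch of the same cumulative-binomial calculation, with the 2-out-of-3 example chosen purely for illustration.

      from math import comb

      def k_of_n_reliability(p, k, n):
          """Probability that at least k of n components work, each with reliability p."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

      def crossover_reliability(k, n, lo=1e-6, hi=1 - 1e-6, tol=1e-10):
          """Bisection for the component reliability p at which system reliability equals p."""
          f = lambda p: k_of_n_reliability(p, k, n) - p
          assert f(lo) * f(hi) < 0, "no sign change on the bracket"
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if f(lo) * f(mid) <= 0:
                  hi = mid
              else:
                  lo = mid
          return 0.5 * (lo + hi)

      print(k_of_n_reliability(0.9, 2, 3))   # 0.972 for a 2-out-of-3 system
      print(crossover_reliability(2, 3))     # ~0.5: above this the system beats a single component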

  3. Common anorectal disorders.

    Science.gov (United States)

    Foxx-Orenstein, Amy E; Umar, Sarah B; Crowell, Michael D

    2014-05-01

    Anorectal disorders result in many visits to healthcare specialists. These disorders range from benign conditions such as hemorrhoids to more serious conditions such as malignancy; thus, it is important for the clinician to be familiar with these disorders as well as to know how to conduct an appropriate history and physical examination. This article reviews the most common anorectal disorders, including hemorrhoids, anal fissures, fecal incontinence, proctalgia fugax, excessive perineal descent, and pruritus ani, and provides guidelines on comprehensive evaluation and management.

  4. Common sense codified

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    At CERN, people of more than a hundred different nationalities and hundreds of different professions work together towards a common goal. The new Code of Conduct is a tool that has been designed to help us keep our workplace pleasant and productive through common standards of behaviour. Its basic principle is mutual respect and common sense. This is only natural, but not trivial… The Director-General announced it in his speech at the beginning of the year, and the Bulletin wrote about it immediately afterwards. "It" is the new Code of Conduct, the document that lists our Organization's values and describes the basic standards of behaviour that we should both adopt and expect from others. "The Code of Conduct is not going to establish new rights or new obligations," explains Anne-Sylvie Catherin, Head of the Human Resources Department (HR). "But what it will do is provide a framework for our existing rights and obligations." The aim of a co...

  5. Common primary headaches in pregnancy

    Directory of Open Access Journals (Sweden)

    Anuradha Mitra

    2015-01-01

    Full Text Available Headache is a very common problem in pregnancy. Evaluation of a complaint of headache requires categorizing it as primary or secondary. Migrainous headaches are known to be influenced by fluctuations of estrogen levels, with high levels improving and low levels worsening the symptoms. Tension-type headaches (TTHs) are the most common and usually less severe type of headache, with a female-to-male ratio of 3:1. Women known to have primary headache before conception who present with a headache that is different from their usual headache, or women not known to have primary headache before conception who present with new-onset headache during pregnancy, need neurologic assessment for a potential secondary cause of their headache. In addition to a proper history and physical examination, both non-contrast computed tomography (CT) and magnetic resonance imaging (MRI) are considered safe to perform in pregnant women when indicated. Both abortive and prophylactic treatment should include non-pharmacologic tools and the judicious use of drugs that are safe for mother and fetus.

  6. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... the limitations of CT Scanning of the Head? What is CT Scanning of the Head? Computed tomography, ... than regular radiographs (x-rays). top of page What are some common uses of the procedure? CT ...

  7. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... of the Head? Computed tomography, more commonly known as a CT or CAT scan, is a diagnostic ... white on the x-ray; soft tissue, such as organs like the heart or liver, shows up ...

  8. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... of the Sinuses? Computed tomography, more commonly known as a CT or CAT scan, is a diagnostic ... white on the x-ray; soft tissue, such as organs like the heart or liver, shows up ...

  9. Common Vestibular Disorders

    Directory of Open Access Journals (Sweden)

    Dimitrios G. Balatsouras

    2017-01-01

    Full Text Available The three most common vestibular diseases, benign paroxysmal positional vertigo (BPPV), Meniere's disease (MD) and vestibular neuritis (VN), are presented in this paper. BPPV, which is the most common peripheral vestibular disorder, can be defined as transient vertigo induced by a rapid head position change, associated with a characteristic paroxysmal positional nystagmus. Canalolithiasis of the posterior semicircular canal is considered the most convincing theory of its pathogenesis, and the development of appropriate therapeutic maneuvers has resulted in its effective treatment. However, involvement of the horizontal or the anterior canal has been found at a significant rate, and the recognition and treatment of these variants have completed the clinical picture of the disease. MD is a chronic condition characterized by episodic attacks of vertigo, fluctuating hearing loss, tinnitus, aural pressure and a progressive loss of audiovestibular functions. Presence of endolymphatic hydrops on postmortem examination is its pathologic correlate. MD continues to be a diagnostic and therapeutic challenge. Patients with the disease range from minimally symptomatic, highly functional individuals to severely affected, disabled patients. Current management strategies are designed to control the acute and recurrent vestibulopathy but offer minimal remedy for the progressive cochlear dysfunction. VN is the most common cause of acute spontaneous vertigo, attributed to acute unilateral loss of vestibular function. Key signs and symptoms are an acute onset of spinning vertigo, postural imbalance and nausea, as well as a horizontal rotatory nystagmus beating towards the non-affected side, a pathological head-impulse test and no evidence of central vestibular or ocular motor dysfunction. Vestibular neuritis preferentially involves the superior vestibular labyrinth and its afferents. Symptomatic medication is indicated only during the acute phase to relieve the vertigo and nausea.

  10. Common Influence Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Mamoulis, Nikos; Karras, Panagiotis

    2008-01-01

    We identify and formalize a novel join operator for two spatial pointsets P and Q. The common influence join (CIJ) returns the pairs of points (p,q),p isin P,q isin Q, such that there exists a location in space, being closer to p than to any other point in P and at the same time closer to q than ......-demand, is very efficient in practice, incurring only slightly higher I/O cost than the theoretical lower bound cost for the problem....

  11. English for common entrance

    CERN Document Server

    Kossuth, Kornel

    2013-01-01

    Succeed in the exam with this revision guide, designed specifically for the brand new Common Entrance English syllabus. It breaks down the content into manageable and straightforward chunks with easy-to-use, step-by-step instructions that should take away the fear of CE and guide you through all aspects of the exam. - Gives you step-by-step guidance on how to recognise various types of comprehension questions and answer them. - Shows you how to write creatively as well as for a purpose for the section B questions. - Reinforces and consolidates learning with tips, guidance and exercises through

  12. Building the common

    DEFF Research Database (Denmark)

    Agustin, Oscar Garcia

    document, A Common Immigration Policy for Europe: Principles, actions and tools (2008) as a part of the Hague Programme (2004) on actions against terrorism, organised crime and migration and asylum management, and influenced by the renewed Lisbon Strategy (2005-2010) for growth and jobs. My aim is to explore...... policy in the European Union is constructed and the categories and themes that are discussed. I will also look at the discourse strategies to show the linguistic representations of the social actors who are excluded from or included in such representations. I will analyse a European Commission’s policy

  13. Managing common marital stresses.

    Science.gov (United States)

    Martin, A C; Starling, B P

    1989-10-01

    Marital conflict and divorce are problems of great magnitude in our society, and nurse practitioners are frequently asked by patients to address marital problems in clinical practice. "Family life cycle theory" provides a framework for understanding the common stresses of marital life and for developing nursing strategies to improve marital satisfaction. If unaddressed, marital difficulties have serious adverse consequences for a couple's health, leading to greater dysfunction and a decline in overall wellness. This article focuses on identifying couples in crisis, assisting them to achieve pre-crisis equilibrium or an even higher level of functioning, and providing appropriate referral if complex relationship problems exist.

  14. Applications of computer algebra

    CERN Document Server

    1985-01-01

    Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed form...
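
    A small example makes the contrast with purely numerical processing concrete. SymPy is used here as a freely available stand-in; it is not one of the systems named above, and the particular expressions are chosen only for illustration.

      import sympy as sp

      x = sp.symbols('x')

      # Symbolic, not numeric, results: closed forms rather than floating-point numbers.
      integral = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))   # sqrt(pi)
      expansion = sp.series(sp.sin(x) / x, x, 0, 8)                # Taylor series about 0
      roots = sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x)                # [2, 3]

      print(integral)
      print(expansion)
      print(roots)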

  15. Quantum computing

    International Nuclear Information System (INIS)

    Steane, Andrew

    1998-01-01

    The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from
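
    In the standard notation (added here for orientation, not quoted from the record), the entanglement referred to above is carried by states such as the EPR-Bell pair

      |\Phi^{+}\rangle \;=\; \tfrac{1}{\sqrt{2}}\big( |00\rangle + |11\rangle \big),

    whose measurement correlations cannot be reproduced by any local classical model, and a register of n qubits generally occupies a superposition \sum_{x=0}^{2^{n}-1} c_{x}\,|x\rangle with 2^{n} complex amplitudes; it is this entangled structure that the protocols listed above (cryptography, teleportation, error correction, fast computation) exploit as a resource.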

  16. Quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Steane, Andrew [Department of Atomic and Laser Physics, University of Oxford, Clarendon Laboratory, Oxford (United Kingdom)

    1998-02-01

    The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from

  17. Common Sense Biblical Hermeneutics

    Directory of Open Access Journals (Sweden)

    Michael B. Mangini

    2014-12-01

    Full Text Available Since the noetics of moderate realism provide a firm foundation upon which to build a hermeneutic of common sense, in the first part of his paper the author adopts Thomas Howe’s argument that the noetical aspect of moderate realism is a necessary condition for correct, universally valid biblical interpretation, but he adds, “insofar as it gives us hope in discovering the true meaning of a given passage.” In the second part, the author relies on John Deely’s work to show how semiotics may help interpreters go beyond meaning and seek the significance of the persons, places, events, ideas, etc., of which the meaning of the text has presented as objects to be interpreted. It is in significance that the unity of Scripture is found. The chief aim is what every passage of the Bible signifies. Considered as a genus, Scripture is composed of many parts/species that are ordered to a chief aim. This is the structure of common sense hermeneutics; therefore in the third part the author restates Peter Redpath’s exposition of Aristotle and St. Thomas’s ontology of the one and the many and analogously applies it to the question of how an exegete can discern the proper significance and faithfully interpret the word of God.

  18. True and common balsams

    Directory of Open Access Journals (Sweden)

    Dayana L. Custódio

    2012-08-01

    Full Text Available Balsams have been used since ancient times, due to their therapeutic and healing properties; in the perfume industry, they are used as fixatives, and in the cosmetics industry and in cookery, they are used as preservatives and aromatizers. They are generally defined as vegetable material with highly aromatic properties that supposedly have the ability to heal diseases, not only of the body, but also of the soul. When viewed according to this concept, many substances can be considered balsams. A more modern concept is based on their chemical composition and origin: a secretion or exudate of plants that contain cinnamic and benzoic acids, and their derivatives, in their composition. The most common naturally-occurring balsams (i.e. true balsams) are the Benzoins, Liquid Storaque and the Balsams of Tolu and Peru. Many other aromatic exudates, such as Copaiba Oil and Canada Balsam, are wrongly called balsam. These usually belong to other classes of natural products, such as essential oils, resins and oleoresins. Despite the understanding of some plants, many plants are still called balsams. This article presents a chemical and pharmacological review of the most common balsams.

  19. Challenging data and workload management in CMS Computing with network-aware systems

    Science.gov (United States)

    D, Bonacorsi; T, Wildish

    2014-06-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess area of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth on demand concepts. In this paper, we will review the work done in CMS on this, and the next steps.

  20. Challenging data and workload management in CMS Computing with network-aware systems

    International Nuclear Information System (INIS)

    Bonacorsi D; Wildish T

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess area of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth on demand concepts. In this paper, we will review the work done in CMS on this, and the next steps.

  1. Challenging data and workload management in CMS Computing with network-aware systems

    CERN Document Server

    Wildish, Anthony

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess area of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidth on demand concepts. In this paper, we will ...

  2. Challenging Data Management in CMS Computing with Network-aware Systems

    CERN Document Server

    Bonacorsi, Daniele

    2013-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess area of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidt...

  3. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...
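
    The measurement idea described above, distinguishing memory that is private to a process from memory shared between the processes of a multiprocessor job, can be illustrated with a few lines of Linux-only Python. This is not ATLAS PerfMon or the "MemoryMonitor" tool mentioned in the record; it is a minimal sketch that reads the kernel's per-process accounting, where Pss (proportional set size) divides shared pages among the processes sharing them.

      import os

      def memory_kb(pid="self"):
          """Return resident (Rss) and proportional (Pss) memory in kB for a process,
          read from /proc; requires Linux."""
          totals = {"Rss": 0, "Pss": 0}
          path = f"/proc/{pid}/smaps_rollup"          # available on kernels >= 4.14
          if not os.path.exists(path):
              path = f"/proc/{pid}/smaps"             # fall back to summing per-mapping entries
          with open(path) as f:
              for line in f:
                  key = line.split(":")[0]
                  if key in totals:
                      totals[key] += int(line.split()[1])   # values are reported in kB
          return totals

      print(memory_kb())   # e.g. {'Rss': 10240, 'Pss': 6144}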

  4. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...

  5. Computer information systems framework

    International Nuclear Information System (INIS)

    Shahabuddin, S.

    1989-01-01

    Management information systems (MIS) is a commonly used term in the computer profession. New information technology has caused management to expect more from computers. The process of supplying information follows a well-defined procedure. An MIS should be capable of providing usable information to the various areas and levels of an organization. MIS is different from data processing. MIS and the business hierarchy provide a good framework for many organizations that use computers. (A.B.)

  6. Disscusion on the common

    Directory of Open Access Journals (Sweden)

    Antonio Negri

    2011-01-01

    Full Text Available In this interview, taken shortly after the launch of the Italian translation of Commonwealth, Antonio Negri, besides discussing details of his collaboration with Michael Hardt, addresses the most important topics of the book, which could otherwise remain unclear to readers. He gives a wide range of answers to questions on, for example, the importance of revising and revitalizing seventeenth-century categories, what it means to be a communist today, and the elaboration of the thesis of real subsumption. He also stresses the significance of the struggle over the common and the processes of its institutionalization for contemporary revolutionary politics, and faces criticism of the conception of immaterial and biopolitical labour.

  7. CPL: Common Pipeline Library

    Science.gov (United States)

    ESO CPL Development Team

    2014-02-01

    The Common Pipeline Library (CPL) is a set of ISO-C libraries that provide a comprehensive, efficient and robust software toolkit to create automated astronomical data reduction pipelines. Though initially developed as a standardized way to build VLT instrument pipelines, the CPL may be more generally applied to any similar application. The code also provides a variety of general purpose image- and signal-processing functions, making it an excellent framework for the creation of more generic data handling packages. The CPL handles low-level data types (images, tables, matrices, strings, property lists, etc.) and medium-level data access methods (a simple data abstraction layer for FITS files). It also provides table organization and manipulation, keyword/value handling and management, and support for dynamic loading of recipe modules using programs such as EsoRex (ascl:1504.003).

  8. CMS results in the Combined Computing Readiness Challenge CCRC'08

    International Nuclear Information System (INIS)

    Bonacorsi, D.; Bauerdick, L.

    2009-01-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the Computing infrastructure for LHC data taking. Another set of major CMS tests called Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - was also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at a sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, and with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed
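
    As a back-of-the-envelope check of the transfer figures quoted above (the arithmetic is added here and is not part of the record):

      600\ \mathrm{MB/s} \times 86\,400\ \mathrm{s/day} \approx 51.8\ \mathrm{TB/day}, \qquad 7 \times 51.8\ \mathrm{TB} \approx 0.36\ \mathrm{PB},

    so the week of sustained export from CERN accounts for roughly a tenth of the more than 3.6 PB that CMS moved across all links during May 2008.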

  9. Common Superficial Bursitis.

    Science.gov (United States)

    Khodaee, Morteza

    2017-02-15

    Superficial bursitis most often occurs in the olecranon and prepatellar bursae. Less common locations are the superficial infrapatellar and subcutaneous (superficial) calcaneal bursae. Chronic microtrauma (e.g., kneeling on the prepatellar bursa) is the most common cause of superficial bursitis. Other causes include acute trauma/hemorrhage, inflammatory disorders such as gout or rheumatoid arthritis, and infection (septic bursitis). Diagnosis is usually based on clinical presentation, with a particular focus on signs of septic bursitis. Ultrasonography can help distinguish bursitis from cellulitis. Blood testing (white blood cell count, inflammatory markers) and magnetic resonance imaging can help distinguish infectious from noninfectious causes. If infection is suspected, bursal aspiration should be performed and fluid examined using Gram stain, crystal analysis, glucose measurement, blood cell count, and culture. Management depends on the type of bursitis. Acute traumatic/hemorrhagic bursitis is treated conservatively with ice, elevation, rest, and analgesics; aspiration may shorten the duration of symptoms. Chronic microtraumatic bursitis should be treated conservatively, and the underlying cause addressed. Bursal aspiration of microtraumatic bursitis is generally not recommended because of the risk of iatrogenic septic bursitis. Although intrabursal corticosteroid injections are sometimes used to treat microtraumatic bursitis, high-quality evidence demonstrating any benefit is unavailable. Chronic inflammatory bursitis (e.g., gout, rheumatoid arthritis) is treated by addressing the underlying condition, and intrabursal corticosteroid injections are often used. For septic bursitis, antibiotics effective against Staphylococcus aureus are generally the initial treatment, with surgery reserved for bursitis not responsive to antibiotics or for recurrent cases. Outpatient antibiotics may be considered in those who are not acutely ill; patients who are acutely ill

  10. Common Lisp a gentle introduction to symbolic computation

    CERN Document Server

    Touretzky, David S

    2013-01-01

    This highly accessible introduction to Lisp is suitable both for novices approaching their first programming language and experienced programmers interested in exploring a key tool for artificial intelligence research. The text offers clear, reader-friendly explanations of such essential concepts as cons cell structures, evaluation rules, programs as data, and recursive and applicative programming styles. The treatment incorporates several innovative instructional devices, such as the use of function boxes in the first two chapters to visually distinguish functions from data, use of evaltrace

  11. CERN's common Unix and X terminal environment

    International Nuclear Information System (INIS)

    Cass, Tony

    1996-01-01

    The Desktop Infrastructure Group of CERN's Computing and Networks Division has developed a Common Unix and X Terminal Environment to ease the migration to Unix-based interactive computing. The CUTE architecture relies on a distributed filesystem - currently Transarc's AFS - to enable essentially interchangeable client workstations to access both home directory and program files transparently. Additionally, we provide a suite of programs to configure workstations for CUTE and to ensure continued compatibility. This paper describes the different components and the development of the CUTE architecture. (author)

  12. Computational force, mass, and energy

    International Nuclear Information System (INIS)

    Numrich, R.W.

    1997-01-01

    This paper describes a correspondence between computational quantities commonly used to report computer performance measurements and mechanical quantities from classical Newtonian mechanics. It defines a set of three fundamental computational quantities that are sufficient to establish a system of computational measurement. From these quantities, it defines derived computational quantities that have analogous physical counterparts. These computational quantities obey three laws of motion in computational space. The solutions to the equations of motion, with appropriate boundary conditions, determine the computational mass of the computer. Computational forces, with magnitudes specific to each instruction and to each computer, overcome the inertia represented by this mass. The paper suggests normalizing the computational mass scale by picking the mass of a register on the CRAY-1 as the standard unit of mass

  13. APME launches common method

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    A common approach for carrying out ecological balances for commodity thermoplastics is due to be launched by the Association of Plastics Manufacturers in Europe (APME; Brussels) and its affiliate, The European Centre for Plastics in the Environment (PWMI) this week. The methodology report is the latest stage of a program started in 1990 that aims to describe all operations up to the production of polymer powder or granules at the plant gate. Information gathered will be made freely available to companies considering the use of polymers. An industry task force, headed by PWMI executive director Vince Matthews, has gathered information on the plastics production processes from oil to granule, and an independent panel of specialists, chaired by Ian Boustead of the U.K.'s Open University, devised the methodology and analysis. The methodology report stresses the need to define the system being analyzed and discusses how complex chemical processes can be analyzed in terms of consumption of fuels, energy, and raw materials, as well as solid, liquid, and gaseous emissions

  14. Reformulating the commons

    Directory of Open Access Journals (Sweden)

    Ostrom Elinor

    2002-01-01

    Full Text Available The western hemisphere is richly endowed with a diversity of natural resource systems that are governed by complex local and national institutional arrangements that have not, until recently, been well understood. While many local communities that possess a high degree of autonomy to govern local resources have been highly successful over long periods of time, others fail to take action to prevent overuse and degradation of forests, inshore fisheries, and other natural resources. The conventional theory used to predict and explain how local users will relate to resources that they share makes a uniform prediction that users themselves will be unable to extricate themselves from the tragedy of the commons. Using this theoretical view of the world, there is no variance in the performance of self-organized groups. In theory, there are no self-organized groups. Empirical evidence tells us, however, that considerable variance in performance exists and many more local users self-organize and are more successful than is consistent with the conventional theory. Parts of a new theory are presented here.

  15. Neural computation and the computational theory of cognition.

    Science.gov (United States)

    Piccinini, Gualtiero; Bahar, Sonya

    2013-04-01

    We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism-neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation. Copyright © 2012 Cognitive Science Society, Inc.

  16. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We identify a number of common features in programming that seem conspicuously absent from the literature on biomolecular computing; to partially redress this absence, we introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways...

  17. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  18. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    International Nuclear Information System (INIS)

    Campana, S

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R and D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  19. Threads of common knowledge.

    Science.gov (United States)

    Icamina, P

    1993-04-01

    Indigenous knowledge is examined as it is affected by development and scientific exploration. The indigenous culture of shamanism, which originated in northern and southeast Asia, is a "political and religious technique for managing societies through rituals, myths, and world views." There is respect for the natural environment and community life as a social common good. This world view is still practiced by many in Latin America and in Colombia specifically. Colombian shamanism has an environmental accounting system, but the Brazilian government has established its own system of land tenure and political representation which does not adequately represent shamanism. In 1992 a conference was held in the Philippines by the International Institute for Rural Reconstruction and IDRC on sustainable development and indigenous knowledge. The link between the two is necessary. Unfortunately, there are already examples in the Philippines of loss of traditional crop diversity after the introduction of modern farming techniques and new crop varieties. An attempt was made to collect species, but without proper identification. Opposition was expressed to the preservation of wilderness preserves; the desire was to allow indigenous people to maintain their homeland and use their time-tested sustainable resource management strategies. Property rights were also discussed during the conference. Of particular concern was the protection of knowledge rights about biological diversity or pharmaceutical properties of indigenous plant species. The original owners and keepers of the knowledge must retain access and control. The research gaps were identified and found to be expansive. Reference was made to a study of Mexican Indian children who knew 138 plant species while non-Indian children knew only 37. Sometimes there is conflict of interest where foresters prefer timber forests and farmers desire fuelwood supplies and fodder and grazing land, which is provided by shrubland. Information

  20. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... Site Index A-Z Children's (Pediatric) CT (Computed Tomography) Pediatric computed tomography (CT) is a fast, painless exam that uses ... of Children's CT? What is Children's CT? Computed tomography, more commonly known as a CT or CAT ...

  1. Experts' views on digital competence: commonalities and differences

    NARCIS (Netherlands)

    Janssen, José; Stoyanov, Slavi; Ferrari, Anusca; Punie, Yves; Pannekeet, Kees; Sloep, Peter

    2013-01-01

    Janssen, J., Stoyanov, S., Ferrari, A., Punie, Y., Pannekeet, K., & Sloep, P. B. (2013). Experts’ views on digital competence: commonalities and differences. Computers & Education, 68, 473–481. doi:10.1016/j.compedu.2013.06.008

  2. Structures for common-cause failure analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1981-01-01

    Common-cause failure methodology and terminology have been reviewed and structured to provide a systematical basis for addressing and developing models and methods for quantification. The structure is based on (1) a specific set of definitions, (2) categories based on the way faults are attributable to a common cause, and (3) classes based on the time of entry and the time of elimination of the faults. The failure events are then characterized by their likelihood or frequency and the average residence time. The structure provides a basis for selecting computational models, collecting and evaluating data and assessing the importance of various failure types, and for developing effective defences against common-cause failure. The relationships of this and several other structures are described

  3. Computer group

    International Nuclear Information System (INIS)

    Bauer, H.; Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schati, C.; Schmidt, A.; Schwind, D.; Weber, G.

    1983-01-01

    The computer group has been reorganized to take charge of the general-purpose computers DEC10 and VAX and the computer network (Dataswitch, DECnet, IBM - connections to GSI and IPP, preparation for Datex-P). (orig.)

  4. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  5. Upgrading and Expanding Lustre Storage for use with the WLCG

    Science.gov (United States)

    Traynor, D. P.; Froy, T. S.; Walker, C. J.

    2017-10-01

    The Queen Mary University of London Grid site’s Lustre file system has recently undergone a major upgrade from version 1.8 to the most recent 2.8 release, and a capacity increase to over 3 PB. Lustre is an open source, POSIX compatible, clustered file system presented to the Grid using the StoRM Storage Resource Manager. The motivation and benefits of upgrading, including hardware and software choices, are discussed. The testing, performance improvements and data migration procedure are outlined, as are the source code modifications needed for StoRM compatibility. Benchmarks and real world performance are presented and future plans discussed.

  6. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  7. Common Sleep Problems (For Teens)

    Science.gov (United States)

    ... have emotional problems, like depression. What Happens During Sleep? You don't notice it, of course, but ...

  8. 6 Common Cancers - Skin Cancer

    Science.gov (United States)

    ... Skin cancer is the most common form of cancer ...

  9. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  10. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....

  11. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)
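
    The paper's exact model is not reproduced here, but a minimal beta-factor-style sketch conveys the idea of splitting each train's failure probability into an independent part and a common-cause part (all parameter values are illustrative):

    ```python
    # Hedged sketch of a beta-factor style calculation for a redundant train system.
    # Not taken from Fleming (1974); parameter names and values are illustrative only.

    def system_failure_probability(q_total: float, beta: float, n_trains: int) -> float:
        """Probability that all n redundant trains fail, splitting each train's total
        failure probability q_total into an independent part and a common-cause part."""
        q_independent = (1.0 - beta) * q_total   # a train fails on its own
        q_common = beta * q_total                # one event fails all trains together
        # Either every train fails independently, or a single common-cause event takes
        # the whole redundant system out.
        return q_independent ** n_trains + q_common

    if __name__ == "__main__":
        # Illustrative numbers for a 2-train diesel-generator system.
        print(system_failure_probability(q_total=1e-2, beta=0.1, n_trains=2))
        # Compare with the purely independent-failure assumption:
        print((1e-2) ** 2)
    ```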

  12. Quantum Computing

    OpenAIRE

    Scarani, Valerio

    1998-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  13. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  14. Indirection and computer security.

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Michael J.

    2011-09-01

    The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.
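
    A toy illustration of the point (not taken from the report) contrasts two layers of indirection for the same feature: resolving a caller-supplied name with getattr() exposes every attribute of an object, while an explicit dispatch table bounds what the indirection can reach:

    ```python
    # Illustrative sketch: the same feature behind two different layers of indirection.
    class AdminService:
        def status(self) -> str:
            return "all systems nominal"

        def reload(self) -> str:
            return "configuration reloaded"

        def _drop_all_data(self) -> str:      # not meant to be remotely callable
            return "database wiped"

    def handle_unsafe(service: AdminService, command: str) -> str:
        # Unconstrained indirection: "_drop_all_data" (or "__class__") is reachable.
        return getattr(service, command)()

    DISPATCH = {"status": AdminService.status, "reload": AdminService.reload}

    def handle_safe(service: AdminService, command: str) -> str:
        # Explicit, closed dispatch table: the indirection can only reach intended actions.
        action = DISPATCH.get(command)
        if action is None:
            raise ValueError(f"unknown command: {command!r}")
        return action(service)

    svc = AdminService()
    print(handle_safe(svc, "status"))             # ok
    print(handle_unsafe(svc, "_drop_all_data"))   # the unconstrained indirection lets this through
    ```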

  15. Otwarty model licencjonowania Creative Commons

    OpenAIRE

    Tarkowski, Alek

    2007-01-01

    The paper presents a family of Creative Commons licenses (which form nowadays one of the basic legal tools used in the Open Access movement), as well as a genesis of the licenses – inspired by Open Software Licenses and the concept of commons. Then legal tools such as individual Creative Commons licenses are discussed as well as how to use them, with a special emphasis on practical applications in science and education. The author discusses also his research results on scientific publishers a...

  16. Five Theses on the Common

    Directory of Open Access Journals (Sweden)

    Gigi Roggero

    2011-01-01

    Full Text Available I present five theses on the common within the context of the transformations of capitalist social relations as well as their contemporary global crisis. My framework involves "cognitive capitalism," new processes of class composition, and the production of living knowledge and subjectivity. The commons is often discussed today in reference to the privatization and commodification of "common goods." This suggests a naturalistic and conservative image of the common, unhooked from the relations of production. I distinguish between commons and the common: the first model is related to Karl Polanyi, the second to Karl Marx. As elaborated in the postoperaista debate, the common assumes an antagonistic double status: it is both the plane of the autonomy of living labor and it is subjected to capitalist "capture." Consequently, what is at stake is not the conservation of "commons," but rather the production of the common and its organization into new institutions that would take us beyond the exhausted dialectic between public and private.

  17. Proceedings of the second workshop of LHC Computing Grid, LCG-France

    International Nuclear Information System (INIS)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin

    2007-03-01

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity for exchanges of information between the French and foreign site representatives on one side and delegates of the experiments on the other. The event highlighted the place of the LHC computing task within the frame of the worldwide W-LCG project, the ongoing actions and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computing in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier 1; 4. The sites of Tier-2s and Tier-3s; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Pointing the net infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks, Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users, while the tasks of tightening the links between the sites and the experiments were definitely achieved. The IN2P3 leadership expressed

  18. Open Data as a New Commons

    DEFF Research Database (Denmark)

    Morelli, Nicola; Mulder, Ingrid; Concilio, Grazia

    2017-01-01

    and environmental opportunities around them and government choices. Developing means for enabling citizens to harness the opportunities coming from the use of this new resource thus offers a substantial promise of social innovation. This means that open data is (still) virtually a new resource that could...... become a new commons with the engagement of interested and active communities. The condition for open data becoming a new common is that citizens become aware of the potential of this resource, that they use it for creating new services and that new practices and infrastructures are defined, that would......An increasing computing capability is raising the opportunities to use a large amount of publicly available data for creating new applications and a new generation of public services. But while it is easy to find some early examples of services concerning control systems (e.g. traffic, meteo...

  19. Motivating Contributions for Home Computer Security

    Science.gov (United States)

    Wash, Richard L.

    2009-01-01

    Recently, malicious computer users have been compromising computers en masse and combining them to form coordinated botnets. The rise of botnets has brought the problem of home computers to the forefront of security. Home computer users commonly have insecure systems; these users do not have the knowledge, experience, and skills necessary to…

  20. Computational Medicine

    DEFF Research Database (Denmark)

    Nygaard, Jens Vinge

    2017-01-01

    The Health Technology Program at Aarhus University applies computational biology to investigate the heterogeneity of tumours.

  1. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  2. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  3. Facts about the Common Cold

    Science.gov (United States)

    ... different viruses. Rhinovirus is the most common cause, accounting for 10 to 40 percent of colds. Other common cold viruses include coronavirus and ...

  4. Wave-equation Migration Velocity Analysis Using Plane-wave Common Image Gathers

    KAUST Repository

    Guo, Bowen; Schuster, Gerard T.

    2017-01-01

    Wave-equation migration velocity analysis (WEMVA) based on subsurface-offset, angle domain or time-lag common image gathers (CIGs) requires significant computational and memory resources because it computes higher dimensional migration images

  5. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... is Children's CT? Computed tomography, more commonly known as a CT or CAT scan, is a diagnostic ... is used to evaluate: complications from infections such as pneumonia a tumor that arises in the lung ...

  6. Quantum computers and quantum computations

    International Nuclear Information System (INIS)

    Valiev, Kamil' A

    2005-01-01

    This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)

  7. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  8. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  9. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  10. Spatial Computation

    Science.gov (United States)

    2003-12-01

    Computation and today’s microprocessors with the approach to operating system architecture, and the controversy between microkernels and monolithic kernels... Both Spatial Computation and microkernels break away a relatively monolithic architecture into individual lightweight pieces, well specialized... for their particular functionality. Spatial Computation removes global signals and control, in the same way microkernels remove the global address

  11. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn
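
    As a point of reference for the tridiagonal-systems topic (a sketch for orientation only, not code from the book), the serial Thomas algorithm below is the baseline that vectorized and parallel solvers such as cyclic reduction are usually compared against:

    ```python
    # Serial Thomas algorithm for a tridiagonal system Ax = d.
    # a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused).
    def thomas(a, b, c, d):
        n = len(d)
        cp = [0.0] * n
        dp = [0.0] * n
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # 4x4 diagonally dominant example system.
    print(thomas(a=[0, -1, -1, -1], b=[4, 4, 4, 4], c=[-1, -1, -1, 0], d=[5, 5, 5, 5]))
    ```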

  12. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  13. Quantum computation

    International Nuclear Information System (INIS)

    Deutsch, D.

    1992-01-01

    As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into computers in the future in order to give their components functionality. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)

  14. Computer software.

    Science.gov (United States)

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  15. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  16. Governing of common cause failures

    International Nuclear Information System (INIS)

    Bock, H.W.

    1998-01-01

    The agreed strategy is to govern common cause failures by the application of diversity, to assure that the overall plant safety objectives are met even when a common cause failure of a system with all its redundant trains is assumed. The presented strategy aims at the application of functional diversity without the implementation of equipment diversity. The focus is on the design criteria which have to be met in the design of independent systems so that the time-correlated failure of such independent systems due to a common cause can be excluded deterministically. (author)

  17. Physical computation and cognitive science

    CERN Document Server

    Fresco, Nir

    2014-01-01

    This book presents a study of digital computation in contemporary cognitive science. Digital computation is a highly ambiguous concept, as there is no common core definition for it in cognitive science. Since this concept plays a central role in cognitive theory, an adequate cognitive explanation requires an explicit account of digital computation. More specifically, it requires an account of how digital computation is implemented in physical systems. The main challenge is to deliver an account encompassing the multiple types of existing models of computation without ending up in pancomputationalism, that is, the view that every physical system is a digital computing system. This book shows that only two accounts, among the ones examined by the author, are adequate for explaining physical computation. One of them is the instructional information processing account, which is developed here for the first time.   “This book provides a thorough and timely analysis of differing accounts of computation while adv...

  18. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning across computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute an x86-64 machine code, and recommends th...

  19. Introduction to computer networking

    CERN Document Server

    Robertazzi, Thomas G

    2017-01-01

    This book gives a broad look at both fundamental networking technology and new areas that support it and use it. It is a concise introduction to the most prominent, recent technological topics in computer networking. Topics include network technology such as wired and wireless networks, enabling technologies such as data centers, software defined networking, cloud and grid computing and applications such as networks on chips, space networking and network security. The accessible writing style and non-mathematical treatment makes this a useful book for the student, network and communications engineer, computer scientist and IT professional. • Features a concise, accessible treatment of computer networking, focusing on new technological topics; • Provides non-mathematical introduction to networks in their most common forms today; • Includes new developments in switching, optical networks, WiFi, Bluetooth, LTE, 5G, and quantum cryptography.

  20. The illusion of common ground

    DEFF Research Database (Denmark)

    Cowley, Stephen; Harvey, Matthew

    2016-01-01

    When people talk about “common ground”, they invoke shared experiences, convictions, and emotions. In the language sciences, however, ‘common ground’ also has a technical sense. Many taking a representational view of language and cognition seek to explain that everyday feeling in terms of how...... isolated individuals “use” language to communicate. Autonomous cognitive agents are said to use words to communicate inner thoughts and experiences; in such a framework, ‘common ground’ describes a body of information that people allegedly share, hold common, and use to reason about how intentions have......, together with concerted bodily (and vocal) activity, serve to organize, regulate and coordinate both attention and the verbal and non-verbal activity that it gives rise to. Since wordings are normative, they can be used to develop skills for making cultural sense of environments and other peoples’ doings...

  1. NIH Common Data Elements Repository

    Data.gov (United States)

    U.S. Department of Health & Human Services — The NIH Common Data Elements (CDE) Repository has been designed to provide access to structured human and machine-readable definitions of data elements that have...

  2. 6 Common Cancers - Colorectal Cancer

    Science.gov (United States)

    ... Cancer of the colon (large intestine) or rectum ...

  3. 6 Common Cancers - Lung Cancer

    Science.gov (United States)

    ... Lung cancer causes more deaths than the next three ...

  4. Common High Blood Pressure Myths

    Science.gov (United States)

    ... Knowing the facts ... This content was last reviewed October 2016. ...

  5. Common mode and coupled failure

    International Nuclear Information System (INIS)

    Taylor, J.R.

    1975-10-01

    Based on examples and data from Abnormal Occurrence Reports for nuclear reactors, a classification of common mode or coupled failures is given, and some simple statistical models are investigated. (author)

  6. Common Systems Integration Lab (CSIL)

    Data.gov (United States)

    Federal Laboratory Consortium — The Common Systems Integration Lab (CSIL) supports the PMA-209 Air Combat Electronics Program Office. CSIL also supports development, test, integration and life cycle...

  7. 6 Common Cancers - Breast Cancer

    Science.gov (United States)

    ... Breast cancer is a malignant (cancerous) growth that ...

  8. Communication, timing, and common learning

    Czech Academy of Sciences Publication Activity Database

    Steiner, Jakub; Stewart, C.

    2011-01-01

    Vol. 146, No. 1 (2011), pp. 230-247. ISSN 0022-0531. Institutional research plan: CEZ:AV0Z70850503. Keywords: common knowledge * learning * communication. Subject RIV: AH - Economics. Impact factor: 1.235, year: 2011

  9. Computational Design of Urban Layouts

    KAUST Repository

    Wonka, Peter

    2015-10-07

    A fundamental challenge in computational design is to compute layouts by arranging a set of shapes. In this talk I will present recent urban modeling projects with applications in computer graphics, urban planning, and architecture. The talk will look at different scales of urban modeling (streets, floorplans, parcels). A common challenge in all these modeling problems are functional and aesthetic constraints that should be respected. The talk also highlights interesting links to geometry processing problems, such as field design and quad meshing.

  10. Philosophy vs the common sense

    OpenAIRE

    V. V. Chernyshov

    2017-01-01

    The paper deals with the antinomy of philosophy and the common sense. Philosophy emerges as a way of specifically human knowledge, whose purpose is the analysis of the reality of subjective experience. The study reveals that in order to alienate philosophy from the common sense it was essential to revise the understanding of wisdom. The new, philosophical interpretation of wisdom – offered by Pythagoras – has laid the foundation of any future philosophy. Thus, philosophy emerges, alienating itself...

  11. Sustainability of common pool resources

    OpenAIRE

    Timilsina, Raja Rajendra; Kotani, Koji; Kamijo, Yoshio

    2017-01-01

    Sustainability has become a key issue in managing natural resources together with growing concerns for capitalism, environmental and resource problems. We hypothesize that the ongoing modernization of competitive societies, which we refer to as "capitalism," affects human nature for utilizing common pool resources, thus compromising sustainability. To test this hypothesis, we design and implement a set of dynamic common pool resource games and experiments in the following two types of Nepales...

  12. Whose commons are mobilities spaces?

    DEFF Research Database (Denmark)

    Freudendal-Pedersen, Malene

    2015-01-01

    for cyclists and cycling to be given greater consideration in broader societal understandings of the common good. I argue that this is in fact not the case. Rather the specific project identities that are nurtured by Copenhagen’s cycling community inhibit it from advocating publicly or aggressively...... for a vision of the common good that gives cyclists greater and more protected access to the city’s mobility spaces...

  13. Casuistry as common law morality.

    Science.gov (United States)

    Paulo, Norbert

    2015-12-01

    This article elaborates on the relation between ethical casuistry and common law reasoning. Despite the frequent talk of casuistry as common law morality, remarks on this issue largely remain at the purely metaphorical level. The article outlines and scrutinizes Albert Jonsen and Stephen Toulmin's version of casuistry and its basic elements. Drawing lessons for casuistry from common law reasoning, it is argued that one generally has to be faithful to ethical paradigms. There are, however, limitations for the binding force of paradigms. The most important limitations--the possibilities of overruling and distinguishing paradigm norms--are similar in common law and in casuistry, or so it is argued. These limitations explain why casuistry is not necessarily overly conservative and conventional, which is one line of criticism to which casuists can now better respond. Another line of criticism has it that the very reasoning from case to case is extremely unclear in casuistry. I suggest a certain model of analogical reasoning to address this critique. All my suggestions to understand and to enhance casuistry make use of common law reasoning whilst remaining faithful to Jonsen and Toulmin's main ideas and commitments. Further developed along these lines, casuistry can appropriately be called "common law morality."

  14. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten
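
    Since the text singles out Reverse Polish Notation, a small illustrative evaluator (not from the book) shows the stack discipline it relies on:

    ```python
    # Minimal Reverse Polish Notation evaluator; the operator set and whitespace
    # tokenization are deliberately simple.
    import operator

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

    def eval_rpn(expression: str) -> float:
        stack = []
        for token in expression.split():
            if token in OPS:
                right = stack.pop()       # operands come off the stack in reverse order
                left = stack.pop()
                stack.append(OPS[token](left, right))
            else:
                stack.append(float(token))
        if len(stack) != 1:
            raise ValueError("malformed RPN expression")
        return stack[0]

    print(eval_rpn("3 4 + 2 *"))   # (3 + 4) * 2 = 14.0
    ```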

  15. A Case for Data Commons: Toward Data Science as a Service.

    Science.gov (United States)

    Grossman, Robert L; Heath, Allison; Murphy, Mark; Patterson, Maria; Wells, Walt

    2016-01-01

    Data commons collocate data, storage, and computing infrastructure with core services and commonly used tools and applications for managing, analyzing, and sharing data to create an interoperable resource for the research community. An architecture for data commons is described, as well as some lessons learned from operating several large-scale data commons.

  16. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  17. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
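
    The ChemAxon implementations discussed in the paper are not public, but the task itself can be illustrated with the open-source RDKit toolkit (assuming RDKit is installed; the molecules and timeout are arbitrary examples, not data from the paper):

    ```python
    # Sketch of a maximum common substructure query with RDKit's FindMCS.
    from rdkit import Chem
    from rdkit.Chem import rdFMCS

    mols = [
        Chem.MolFromSmiles("c1ccccc1CCN"),      # phenethylamine
        Chem.MolFromSmiles("c1ccccc1CC(=O)O"),  # phenylacetic acid
    ]

    # A timeout keeps the (NP-hard) search within practical running-time constraints.
    result = rdFMCS.FindMCS(mols, timeout=10)
    print(result.numAtoms, result.numBonds, result.smartsString)
    ```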

  18. Towards a global service registry for the world-wide LHC computing grid

    International Nuclear Information System (INIS)

    Field, Laurence; Pradillo, Maria Alandes; Girolamo, Alessandro Di

    2014-01-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisation's own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisation's configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages

  19. Towards a Global Service Registry for the World-Wide LHC Computing Grid

    Science.gov (United States)

    Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro

    2014-06-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisation's own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisation's configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the
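
    A minimal sketch of the kind of consistency check such a registry enables, comparing registered against available services; the endpoint names and record layout are invented for illustration and do not reflect the Global Service Registry's actual schema or API:

    ```python
    # Illustrative consistency check between "registered" (what should be there) and
    # "available" (what is there).  All records below are invented examples.
    registered = {
        "srm://se01.example.org": {"type": "SRM", "vo": "atlas"},
        "cream://ce01.example.org": {"type": "CE", "vo": "atlas"},
    }
    available = {
        "srm://se01.example.org": {"type": "SRM", "vo": "atlas"},
        "xrootd://xr01.example.org": {"type": "XROOTD", "vo": "atlas"},
    }

    missing = sorted(set(registered) - set(available))      # registered but not published
    unexpected = sorted(set(available) - set(registered))   # published but never registered

    for endpoint in missing:
        print(f"WARNING: registered service not found in information system: {endpoint}")
    for endpoint in unexpected:
        print(f"WARNING: unregistered service published by information system: {endpoint}")
    ```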

  20. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is however preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
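
    ATLAS's PerfMon and MemoryMonitor tooling is not reproduced here; as a generic illustration of the underlying idea, the CPU time and peak resident memory of a child workflow step can be sampled from the operating system with Python's standard resource module on Unix (the child command is an arbitrary placeholder):

    ```python
    # Generic sketch (Unix only): sample CPU time and peak RSS of a child workflow step.
    # Linux reports ru_maxrss in kilobytes.
    import resource
    import subprocess

    # Placeholder "workflow step": any child process would do.
    subprocess.run(["python3", "-c", "print(sum(i * i for i in range(10**6)))"], check=True)

    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    print(f"CPU time (user+sys): {usage.ru_utime + usage.ru_stime:.2f} s")
    print(f"Peak resident memory: {usage.ru_maxrss / 1024:.1f} MB")
    ```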

  1. Computational biomechanics

    International Nuclear Information System (INIS)

    Ethier, C.R.

    2004-01-01

    Computational biomechanics is a fast-growing field that integrates modern biological techniques and computer modelling to solve problems of medical and biological interest. Modelling of blood flow in the large arteries is the best-known application of computational biomechanics, but there are many others. Described here is work being carried out in the laboratory on the modelling of blood flow in the coronary arteries and on the transport of viral particles in the eye. (author)

  2. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    to understand the computer as a material like any other material we would use for design, like wood, aluminum, or plastic. That as soon as the computer forms a composition with other materials it becomes just as approachable and inspiring as other smart materials. I present a series of investigations of what...... Computational Composite, and Telltale). Through the investigations, I show how the computer can be understood as a material and how it partakes in a new strand of materials whose expressions come to be in context. I uncover some of their essential material properties and potential expressions. I develop a way...

  3. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
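
    To make the "sampling when only the shape of the posterior is known" point concrete, here is a minimal random-walk Metropolis sketch (illustrative, not taken from the book) targeting an unnormalised one-dimensional density:

    ```python
    # Minimal random-walk Metropolis sampler: only the *unnormalised* shape of the
    # target density is needed.
    import math
    import random

    def unnormalised_posterior(theta: float) -> float:
        # Proportional to exp(-(theta - 1)^2 / 2); the normalising constant is irrelevant.
        return math.exp(-0.5 * (theta - 1.0) ** 2)

    def metropolis(n_samples: int, step: float = 1.0, start: float = 0.0):
        samples, theta = [], start
        for _ in range(n_samples):
            proposal = theta + random.gauss(0.0, step)
            accept_prob = min(1.0, unnormalised_posterior(proposal) / unnormalised_posterior(theta))
            if random.random() < accept_prob:
                theta = proposal
            samples.append(theta)
        return samples

    draws = metropolis(20_000)
    burned = draws[5_000:]                                  # discard burn-in
    print("posterior mean ~", sum(burned) / len(burned))    # should be close to 1.0
    ```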

  4. Computer science II essentials

    CERN Document Server

    Raus, Randall

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Computer Science II includes organization of a computer, memory and input/output, coding, data structures, and program development. Also included is an overview of the most commonly

  5. Philosophy vs the common sense

    Directory of Open Access Journals (Sweden)

    V. V. Chernyshov

    2017-01-01

    Full Text Available The paper deals with the antinomy of philosophy and the common sense. Philosophy emerges as a way of specifically human knowledge, whose purpose is the analysis of the reality of subjective experience. The study reveals that in order to alienate philosophy from the common sense it was essential to revise the understanding of wisdom. The new, philosophical interpretation of wisdom – offered by Pythagoras – has laid the foundation of any future philosophy. Thus, philosophy emerges, alienating itself from the common sense, which refers to the common or collective experience. Moreover, the study examines the role that emotions, conformity and conventionality play with respect to the common sense. Next the author focuses on the role of philosophical intuition, guided by the principles of rationality, nonconformity and scepticism, which the author regards as the foundation stones of any sound philosophy. The common sense, described as deeply rooted in the world of human emotions, aims at empathy, whereas the purpose of philosophy is to provide rational means of knowledge. Philosophy therefore relies on thinking, making permanent efforts to check and recheck the data of its own experience. Thus, the first task of philosophical thinking appears to be overcoming the suggestion of the common sense, which aims at social empathy, whereas philosophical intuition aims at independent thinking, the analysis of subjective experience. The study describes the fundamental principles of the common sense, on the one hand, and those of philosophy, on the other. The author arrives at the conclusion that the common sense is unable to exceed the limits of sensual experience. Even where it apparently rises to some form of «spiritual unity», it cannot avoid referring to the data of commonly shared sensual experience; philosophy, meanwhile, goes beyond sensuality, creating a discourse able to alienate itself from it, and to make its rational

  6. [Common household traditional Chinese medicines].

    Science.gov (United States)

    Zhang, Shu-Yuan; Li, Mei; Fu, Dan; Liu, Yang; Wang, Hui; Tan, Wei

    2016-02-01

    With the enhancement in the awareness of self-diagnosis among residents, it's very common for each family to keep common medicines ready for unexpected needs. Meanwhile, with the popularization of traditional Chinese medicine knowledge, the proportion of common traditional Chinese medicines kept in residents' homes has become higher than that of western medicines year by year. To investigate this, both pre-research and a closed questionnaire survey were carried out among residents of Chaoyang District, Beijing, excluding residents with a medical background. Based on the resulting data, an analysis was made to define the role and influence on residents' quality of life and to give suggestions for relevant departments to improve traditional Chinese medicine popularization and promote the traditional Chinese medicine market. Copyright© by the Chinese Pharmaceutical Association.

  7. Governing for the Common Good.

    Science.gov (United States)

    Ruger, Jennifer Prah

    2015-12-01

    The proper object of global health governance (GHG) should be the common good, ensuring that all people have the opportunity to flourish. A well-organized global society that promotes the common good is to everyone's advantage. Enabling people to flourish includes enabling their ability to be healthy. Thus, we must assess health governance by its effectiveness in enhancing health capabilities. Current GHG fails to support human flourishing, diminishes health capabilities and thus does not serve the common good. The provincial globalism theory of health governance proposes a Global Health Constitution and an accompanying Global Institute of Health and Medicine that together propose to transform health governance. Multiple lines of empirical research suggest that these institutions would be effective, offering the most promising path to a healthier, more just world.

  8. The Messiness of Common Good

    DEFF Research Database (Denmark)

    Feldt, Liv Egholm

    Civil society and its philanthropic and voluntary organisations are currently experiencing public and political attention and demands to safeguard society’s ‘common good’ through social cohesion and as providers of welfare services. This has raised the question by both practitioners and researchers...... that a distinction between the non-civil and the civil is more fruitful, if we want to understand the past, present and future messiness in place in defining the common good. Based on an ethnographic case analysis of a Danish corporate foundation between 1920 and 2014 the paper shows how philanthropic gift......-giving concepts, practices and operational forms throughout history have played a significant role in defining the common good and its future avenues. Through an analytical attitude based on microhistory, conceptual history and the sociology of translation it shows that civil society’s institutional logic always...

  9. UMTS Common Channel Sensitivity Analysis

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António; Santos, Frederico

    2006-01-01

    and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....

  10. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general-purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (Stream), and the new framework, OpenCL, that tries to unify the GPGPU computing models.
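
    As a small illustration of the data-parallel model the article describes, the vector addition below is written against NVIDIA's CUDA model through the Numba compiler; it assumes a CUDA-capable GPU and a Numba installation and is not code from the article:

    ```python
    # Sketch of a data-parallel vector addition in the CUDA model via Numba.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)                 # global thread index
        if i < out.size:                 # guard against the last partial block
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.ones(n, dtype=np.float32)
    b = 2 * np.ones(n, dtype=np.float32)
    out = np.zeros(n, dtype=np.float32)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks, threads_per_block](a, b, out)   # Numba handles host<->device copies
    print(out[:3])                                     # [3. 3. 3.]
    ```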

  11. Quantum Computing

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 5; Issue 9. Quantum Computing - Building Blocks of a Quantum Computer. C S Vijay Vishal Gupta. General Article Volume 5 Issue 9 September 2000 pp 69-81. Fulltext. Click here to view fulltext PDF. Permanent link:

  12. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  13. Quantum Computing

    Indian Academy of Sciences (India)

    In the first part of this article, we had looked at how quantum physics can be harnessed to make the building blocks of a quantum computer. In this concluding part, we look at algorithms which can exploit the power of this computational device, and some practical difficulties in building such a device. Quantum Algorithms.

  14. Quantum computing

    OpenAIRE

    Burba, M.; Lapitskaya, T.

    2017-01-01

    This article gives an elementary introduction to quantum computing. It is a draft for a book chapter of the "Handbook of Nature-Inspired and Innovative Computing", Eds. A. Zomaya, G.J. Milburn, J. Dongarra, D. Bader, R. Brent, M. Eshaghian-Wilner, F. Seredynski (Springer, Berlin Heidelberg New York, 2006).

  15. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  16. Recent trends in grid computing

    International Nuclear Information System (INIS)

    Miura, Kenichi

    2004-01-01

    Grid computing is a technology which allows uniform and transparent access to geographically dispersed computational resources, such as computers, databases, experimental and observational equipment etc., via high-speed, high-bandwidth networking. The commonly used analogy is that of the electrical power grid, whereby household electricity is made available from outlets on the wall, and little thought needs to be given to where the electricity is generated and how it is transmitted. The usage of grid also includes distributed parallel computing, high-throughput computing, data intensive computing (data grid) and collaborative computing. This paper reviews the historical background, software structure, current status and on-going grid projects, including applications of grid technology to nuclear fusion research. (author)

  17. A SURVEY ON UBIQUITOUS COMPUTING

    Directory of Open Access Journals (Sweden)

    Vishal Meshram

    2016-01-01

    Full Text Available This work presents a survey of ubiquitous computing research, which is the emerging domain that embeds communication technologies in day-to-day life activities. This research paper provides a classification of the research areas within the ubiquitous computing paradigm. In this paper, we present common architecture principles of ubiquitous systems and analyze important aspects of context-aware ubiquitous systems. In addition, this research work presents a novel architecture for ubiquitous computing systems and a survey of sensors needed for applications in ubiquitous computing. The goals of this research work are three-fold: (i) to serve as a guideline for researchers who are new to ubiquitous computing and want to contribute to this research area, (ii) to provide a novel system architecture for ubiquitous computing systems, and (iii) to provide further research directions into quality-of-service assurance of ubiquitous computing.

  18. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  19. Computability theory

    CERN Document Server

    Weber, Rebecca

    2012-01-01

    What can we compute--even with unlimited resources? Is everything within reach? Or are computations necessarily drastically limited, not just in practice, but theoretically? These questions are at the heart of computability theory. The goal of this book is to give the reader a firm grounding in the fundamentals of computability theory and an overview of currently active areas of research, such as reverse mathematics and algorithmic randomness. Turing machines and partial recursive functions are explored in detail, and vital tools and concepts including coding, uniformity, and diagonalization are described explicitly. From there the material continues with universal machines, the halting problem, parametrization and the recursion theorem, and thence to computability for sets, enumerability, and Turing reduction and degrees. A few more advanced topics round out the book before the chapter on areas of research. The text is designed to be self-contained, with an entire chapter of preliminary material including re...
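
    The halting problem and diagonalization mentioned above can be illustrated with a short sketch. The following Python fragment is our own illustrative toy, not taken from the book: it shows the classic contradiction argument that no total, computable predicate `halts` can exist, since the self-referential program `diagonal` would defeat it.

    ```python
    # Illustrative sketch of the halting-problem diagonalization argument.
    # 'halts' is a hypothetical oracle; it cannot actually be implemented.

    def halts(program, argument):
        """Hypothetical total predicate: True iff program(argument) halts."""
        raise NotImplementedError("No such total, computable predicate exists.")

    def diagonal(program):
        """Self-referential program used in the contradiction."""
        if halts(program, program):
            while True:          # loop forever if 'program(program)' would halt
                pass
        return "halted"          # halt if 'program(program)' would loop

    # Feeding 'diagonal' to itself yields the contradiction:
    #   halts(diagonal, diagonal) == True  implies diagonal(diagonal) loops forever,
    #   halts(diagonal, diagonal) == False implies diagonal(diagonal) halts.
    # Either way the assumed oracle answers incorrectly, so it cannot exist.
    ```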

  20. Computational Streetscapes

    Directory of Open Access Journals (Sweden)

    Paul M. Torrens

    2016-09-01

    Streetscapes have long been of interest in many fields. Recently, there has been a resurgence of attention to streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, more data, and more analysis of streetscape phenomena than ever before, from more vantage points. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber

  1. Five Common Cancers in Iran

    NARCIS (Netherlands)

    Kolandoozan, Shadi; Sadjadi, Alireza; Radmard, Amir Reza; Khademi, Hooman

    Iran, as a developing nation, is in epidemiological transition from communicable to non-communicable diseases. Although cancer is the third leading cause of death in Iran, its mortality has been on the rise during recent decades. This mini-review was carried out to provide a general viewpoint on common cancers

  2. Experiments on common property management

    NARCIS (Netherlands)

    van Soest, D.P.; Shogren, J.F.

    2013-01-01

    Common property resources are (renewable) natural resources where current excessive extraction reduces future resource availability, and the use of which is de facto restricted to a specific set of agents, such as inhabitants of a village or members of a community; think of community-owned forests,

  3. The Parody of the Commons

    Directory of Open Access Journals (Sweden)

    Vasilis Kostakis

    2013-08-01

    This essay builds on the idea that Commons-based peer production is a social advancement within capitalism, but one with various post-capitalistic aspects in need of protection, enforcement, stimulation and connection with progressive social movements. We use theory and examples to claim that peer-to-peer economic relations can be undermined in the long run, distorted by the extra-economic means of a political context designed to keep profit-driven relations of production in power. This subversion can arguably become a state policy, and the subsequent outcome is the full absorption of the Commons, as well as of the underpinning peer-to-peer relations, into the dominant mode of production. To tackle this threat, we argue in favour of a certain working agenda for Commons-based communities. Such an agenda should aim at enforcing the circulation of the Commons. Any useful social transformation will therefore be meaningful if the people themselves decide and apply policies for their own benefit, optimally with the support of a sovereign partner state. If peer production is to become dominant, it has to control capital accumulation with the aim of marginalising and eventually transcending capitalism.

  4. Parents' common pitfalls of discipline.

    Science.gov (United States)

    Witoonchart, Chatree; Fangsa-ard, Thitiporn; Chaoaree, Supamit; Ketumarn, Panom; Kaewpornsawan, Titawee; Phatthrayuttawat, Sucheera

    2005-11-01

    Problems of discipline are common among parents and may result from the parents' pitfalls in disciplining their children. The objective was to find common pitfalls of parents in disciplining their children. Parents of children aged 60-72 months in Bangkok-Noi district, Bangkok, were selected by random sampling; a total of 1,947 children aged 60-72 months were recruited. Parents of these children were interviewed with a questionnaire designed to probe problems in child rearing, and three hundred and fifty questionnaires were used for data analysis. Parents had high concerns about problems in disciplining their children and needed support from professional personnel. They had limited knowledge and held many mistaken attitudes towards discipline. Common pitfalls were problems in (1) limit setting, (2) reward and punishment, and (3) supervision of children's TV watching and bedtime routines. In conclusion, parents of children aged 60-72 months in Bangkok-Noi district, Bangkok, had several common pitfalls in disciplining their children, involving attitude, knowledge and practice.

  5. The Common Vision. Reviews: Books.

    Science.gov (United States)

    Chattin-McNichols, John

    1998-01-01

    Reviews Marshak's book describing the work of educators Maria Montessori, Rudolf Steiner, Aurobindo Ghose, and Inayat Khan. Maintains that the book gives clear, concise information on each educator and presents a common vision for children and their education; also maintains that it gives theoretical and practical information and discusses…

  6. Twenty-First Century Diseases: Commonly Rare and Rarely Common?

    Science.gov (United States)

    Daunert, Sylvia; Sittampalam, Gurusingham Sitta; Goldschmidt-Clermont, Pascal J

    2017-09-20

    Alzheimer's drugs are failing at a rate of 99.6%, and the success rate for drugs designed to help patients with this form of dementia is 47 times lower than for drugs designed to help patients with cancers ( www.scientificamerican.com/article/why-alzheimer-s-drugs-keep-failing/2014 ). How can it be so difficult to produce a valuable drug for Alzheimer's disease? Each human has a unique genetic and epigenetic makeup, endowing individuals with a highly unique complement of genes, polymorphisms, mutations, RNAs, proteins, lipids, and complex sugars, resulting in a distinct genome, proteome, metabolome, and also microbiome identity. This editorial takes into account the uniqueness of each individual and of the surrounding environment, and stresses the point that a more accurate definition of a "common" disorder could be simply the amalgamation of a myriad of "rare" diseases. These rare diseases are grouped together because they share a rather constant complement of common features and, indeed, generally respond to empirically developed treatments, consistently leading to a positive outcome. We make the case that it is highly unlikely that such treatments, despite their statistical success measured with large cohorts using standardized clinical research, will be effective on all patients until we increase the depth and fidelity of our understanding of the individual "rare" diseases that are grouped together in the "buckets" of common illnesses. Antioxid. Redox Signal. 27, 511-516.

  7. Common sense and the common morality in theory and practice.

    Science.gov (United States)

    Daly, Patrick

    2014-06-01

    The unfinished nature of Beauchamp and Childress's account of the common morality after 34 years and seven editions raises questions about what is lacking, specifically in the way they carry out their project, more generally in the presuppositions of the classical liberal tradition on which they rely. Their wide-ranging review of ethical theories has not provided a method by which to move beyond a hypothetical approach to justification or, on a practical level regarding values conflict, beyond a questionable appeal to consensus. My major purpose in this paper is to introduce the thought of Bernard Lonergan as offering a way toward such a methodological breakthrough. In the first section, I consider Beauchamp and Childress's defense of their theory of the common morality. In the second, I relate a persisting vacillation in their argument regarding the relative importance of reason and experience to a similar tension in classical liberal theory. In the third, I consider aspects of Lonergan's generalized empirical method as a way to address problems that surface in the first two sections of the paper: (1) the structural relation of reason and experience in human action; and (2) the importance of theory for practice in terms of what Lonergan calls "common sense" and "general bias."

  8. COMPUTATIONAL THINKING

    Directory of Open Access Journals (Sweden)

    Evgeniy K. Khenner

    2016-01-01

    The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, which has been actively discussed over the last decade in the foreign scientific and educational literature, and to substantiate its importance, practical utility and right to a place in Russian education. Methods. The research is based on the analysis of foreign studies of the phenomenon of computational thinking and the ways of its formation in the process of education, and on comparing the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept «computational thinking» is analyzed from the point of view of intuitive understanding as well as its scientific and applied aspects. It is shown how computational thinking has evolved alongside the development of computer hardware and software. The practice-oriented interpretation of computational thinking, which is dominant among educators, is described along with some ways of its formation. It is shown that computational thinking is a metasubject result of general education as well as one of its tools. From the point of view of the author, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described; this process is connected with the evolution of computer and information technologies as well as with the increase in the number of tasks for whose effective solution computational thinking is required. The author substantiates the claim that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. New metasubject result of education associated with

  9. Advanced computer-based training

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H D; Martin, H D

    1987-05-01

    The paper presents new techniques of computer-based training for personnel of nuclear power plants. Training on full-scope simulators is further enhanced by the use of dedicated computer-based equipment. An interactive communication system runs on a personal computer linked to a video disc; a part-task simulator runs on 32-bit process computers and comes in two versions: as a functional trainer or as an on-line predictor with an interactive learning system (OPAL), which may be well-tailored to a specific nuclear power plant. The common goal of both developments is the optimization of the cost-benefit ratio for training and equipment.

  10. Advanced computer-based training

    International Nuclear Information System (INIS)

    Fischer, H.D.; Martin, H.D.

    1987-01-01

    The paper presents new techniques of computer-based training for personnel of nuclear power plants. Training on full-scope simulators is further enhanced by the use of dedicated computer-based equipment. An interactive communication system runs on a personal computer linked to a video disc; a part-task simulator runs on 32-bit process computers and comes in two versions: as a functional trainer or as an on-line predictor with an interactive learning system (OPAL), which may be well-tailored to a specific nuclear power plant. The common goal of both developments is the optimization of the cost-benefit ratio for training and equipment. (orig.) [de

  11. Universal computer interfaces

    CERN Document Server

    Dheere, RFBM

    1988-01-01

    Presents a survey of the latest developments in the field of the universal computer interface, resulting from a study of the world patent literature. Illustrating the state of the art today, the book ranges from basic interface structure, through parameters and common characteristics, to the most important industrial bus realizations. Recent technical enhancements are also included, with special emphasis devoted to the universal interface adapter circuit. Comprehensively indexed.

  12. Proceedings of the second workshop of LHC Computing Grid, LCG-France; ACTES, 2e colloque LCG-France

    Energy Technology Data Exchange (ETDEWEB)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin (eds.) [Laboratoire de Physique Corpusculaire Clermont-Ferrand, Campus des Cezeaux, 24, avenue des Landais, Clermont-Ferrand (France)

    2007-03-15

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity for exchanges of information between the French and foreign site representatives on one side and delegates of the experiments on the other. The event highlighted the place of LHC computing in France within the framework of the worldwide WLCG project, the actions under way, and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computing in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of the Tier-1; 4. The Tier-2 and Tier-3 sites; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Pointing the network infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users, while the tasks of tightening the links between the sites and the experiments were definitely achieved. The IN2P3

  13. Computer interfacing

    CERN Document Server

    Dixey, Graham

    1994-01-01

    This book explains how computers interact with the world around them and therefore how to make them a useful tool. Topics covered include descriptions of all the components that make up a computer, principles of data exchange, interaction with peripherals, serial communication, input devices, recording methods, computer-controlled motors, and printers.In an informative and straightforward manner, Graham Dixey describes how to turn what might seem an incomprehensible 'black box' PC into a powerful and enjoyable tool that can help you in all areas of your work and leisure. With plenty of handy

  14. Computational physics

    CERN Document Server

    Newman, Mark

    2013-01-01

    A complete introduction to the field of computational physics, with examples and exercises in the Python programming language. Computers play a central role in virtually every major physics discovery today, from astrophysics and particle physics to biophysics and condensed matter. This book explains the fundamentals of computational physics and describes in simple terms the techniques that every physicist should know, such as finite difference methods, numerical quadrature, and the fast Fourier transform. The book offers a complete introduction to the topic at the undergraduate level, and is also suitable for the advanced student or researcher who wants to learn the foundational elements of this important field.
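
    As a concrete illustration of two of the techniques the book names (numerical quadrature and the fast Fourier transform), here is a minimal Python/NumPy sketch. It is our own example, not taken from the book, and the function names are ours.

    ```python
    import numpy as np

    def trapezoid(f, a, b, n=1000):
        """Composite trapezoidal rule for integrating f on [a, b] with n panels."""
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

    # Quadrature check: the integral of sin(x) on [0, pi] is exactly 2.
    print(trapezoid(np.sin, 0.0, np.pi))            # ~2.0, error O(h^2)

    # FFT check: a pure 5 Hz tone sampled at 100 Hz shows a single spectral peak.
    fs = 100.0
    t = np.arange(0, 1, 1 / fs)
    signal = np.sin(2 * np.pi * 5 * t)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    print(freqs[np.argmax(spectrum)])               # ~5.0 Hz
    ```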

  15. Computational physics

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-01-15

    Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October.

  16. Computational Viscoelasticity

    CERN Document Server

    Marques, Severino P C

    2012-01-01

    This text is a guide to solving problems in which viscoelasticity is present using existing commercial computational codes. The book gives information on the codes' structure and use, data preparation, and output interpretation and verification. The first part of the book introduces the reader to the subject and provides the models, equations and notation to be used in the computational applications. The second part presents the most important computational techniques: finite element formulation, boundary element formulation, and the solution of viscoelastic problems with Abaqus.

  17. Optical computing.

    Science.gov (United States)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  18. Computational physics

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October

  19. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot...... encompass human concepts of subjective experience and intersubjective meaningful communication, which prevents it from being genuinely transdisciplinary. (3) Philosophically, it does not sufficiently accept the deep ontological differences between various paradigms such as von Foerster’s second-order...

  20. Security in Computer Applications

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    Computer security has been an increasing concern for IT professionals for a number of years, yet despite all the efforts, computer systems and networks remain highly vulnerable to attacks of different kinds. Design flaws and security bugs in the underlying software are among the main reasons for this. This lecture addresses the following question: how to create secure software? The lecture starts with a definition of computer security and an explanation of why it is so difficult to achieve. It then introduces the main security principles (like least-privilege, or defense-in-depth) and discusses security in different phases of the software development cycle. The emphasis is put on the implementation part: most common pitfalls and security bugs are listed, followed by advice on best practice for security development. The last part of the lecture covers some miscellaneous issues like the use of cryptography, rules for networking applications, and social engineering threats. This lecture was first given on Thursd...

  1. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

    Common nearly best linear estimates of the location and scale parameters of the normal and logistic distributions, based on complete samples, are considered. Here, the population from which the samples are drawn is either a normal or a logistic population, or a fusion of both distributions, and the estimates are computed ...

  2. Sustainability of common pool resources.

    Science.gov (United States)

    Timilsina, Raja Rajendra; Kotani, Koji; Kamijo, Yoshio

    2017-01-01

    Sustainability has become a key issue in managing natural resources together with growing concerns for capitalism, environmental and resource problems. We hypothesize that the ongoing modernization of competitive societies, which we refer to as "capitalism," affects human nature for utilizing common pool resources, thus compromising sustainability. To test this hypothesis, we design and implement a set of dynamic common pool resource games and experiments in the following two types of Nepalese areas: (i) rural (non-capitalistic) and (ii) urban (capitalistic) areas. We find that a proportion of prosocial individuals in urban areas is lower than that in rural areas, and urban residents deplete resources more quickly than rural residents. The composition of proself and prosocial individuals in a group and the degree of capitalism are crucial in that an increase in prosocial members in a group and the rural dummy positively affect resource sustainability by 65% and 63%, respectively. Overall, this paper shows that when societies move toward more capitalistic environments, the sustainability of common pool resources tends to decrease with the changes in individual preferences, social norms, customs and views to others through human interactions. This result implies that individuals may be losing their coordination abilities for social dilemmas of resource sustainability in capitalistic societies.

  3. Common morality and moral reform.

    Science.gov (United States)

    Wallace, K A

    2009-01-01

    The idea of moral reform requires that morality be more than a description of what people do value, for there has to be some measure against which to assess progress. Otherwise, any change is not reform, but simply difference. Therefore, I discuss moral reform in relation to two prescriptive approaches to common morality, which I distinguish as the foundational and the pragmatic. A foundational approach to common morality (e.g., Bernard Gert's) suggests that there is no reform of morality, but of beliefs, values, customs, and practices so as to conform with an unchanging, foundational morality. If, however, there were revision in its foundation (e.g., in rationality), then reform in morality itself would be possible. On a pragmatic view, on the other hand, common morality is relative to human flourishing, and its justification consists in its effectiveness in promoting flourishing. Morality is dependent on what in fact does promote human flourishing and therefore, could be reformed. However, a pragmatic approach, which appears more open to the possibility of moral reform, would need a more robust account of norms by which reform is measured.

  4. George Combe and common sense.

    Science.gov (United States)

    Dyde, Sean

    2015-06-01

    This article examines the history of two fields of enquiry in late eighteenth- and early nineteenth-century Scotland: the rise and fall of the common sense school of philosophy and phrenology as presented in the works of George Combe. Although many previous historians have construed these histories as separate, indeed sometimes incommensurate, I propose that their paths were intertwined to a greater extent than has previously been given credit. The philosophy of common sense was a response to problems raised by Enlightenment thinkers, particularly David Hume, and spurred a theory of the mind and its mode of study. In order to succeed, or even to be considered a rival of these established understandings, phrenologists adapted their arguments for the sake of engaging in philosophical dispute. I argue that this debate contributed to the relative success of these groups: phrenology as a well-known historical subject, common sense now largely forgotten. Moreover, this history seeks to question the place of phrenology within the sciences of mind in nineteenth-century Britain.

  5. Common Ground Between Three Cultures

    Directory of Open Access Journals (Sweden)

    Gloria Dunnivan

    2009-12-01

    The Triwizard program with Israel brought together students from three different communities: an Israeli Arab school, an Israeli Jewish school, and an American public school with few Jews and even fewer Muslims. The two Israeli groups met in Israel to find common ground and overcome their differences through dialogue and understanding. They communicated with the American school via technology such as video-conferencing, Skype, and emails. The program culminated with a visit to the U.S. The goal of the program was to embark upon a process that would bring about intercultural awareness and acceptance at the subjective level, guiding all involved to develop empathy and an insider's view of the other's culture. It was an attempt to have a group of Israeli high school students and a group of Arab Israeli students who had a fearful, distrustful perception of each other find common ground and become friends. Triwizard was designed to have participants begin a dialogue about issues, beliefs, and emotions, based on the premise that the cross-cultural training strategies that are effective in changing knowledge are those that engage the emotions and actively develop empathy and an insider's view of another culture, focused on what people have in common. Participants learned that they could become friends despite their cultural differences.

  6. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword. Preface. Computing Paradigms: Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading. Cloud Computing Fundamentals: Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact...

  7. Personal Computers.

    Science.gov (United States)

    Toong, Hoo-min D.; Gupta, Amar

    1982-01-01

    Describes the hardware, software, applications, and current proliferation of personal computers (microcomputers). Includes discussions of microprocessors, memory, output (including printers), application programs, the microcomputer industry, and major microcomputer manufacturers (Apple, Radio Shack, Commodore, and IBM). (JN)

  8. Computational Literacy

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Robering, Klaus

    2016-01-01

    In recent years, there has been a growing interest in and recognition of the importance of Computational Literacy, a skill generally considered to be necessary for success in the 21st century. While much research has concentrated on requirements, tools, and teaching methodologies for the acquisition of Computational Literacy at basic educational levels, focus on higher levels of education has been much less prominent. The present paper considers the case of courses for higher education programs within the Humanities. A model is proposed which conceives of Computational Literacy as a layered...

  9. Computing Religion

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Braxton, Donald M.; Upal, Afzal

    2012-01-01

    The computational approach has become an invaluable tool in many fields that are directly relevant to research in religious phenomena. Yet the use of computational tools is almost absent in the study of religion. Given that religion is a cluster of interrelated phenomena and that research...... concerning these phenomena should strive for multilevel analysis, this article argues that the computational approach offers new methodological and theoretical opportunities to the study of religion. We argue that the computational approach offers 1.) an intermediary step between any theoretical construct...... and its targeted empirical space and 2.) a new kind of data which allows the researcher to observe abstract constructs, estimate likely outcomes, and optimize empirical designs. Because sophisticated multilevel research is a collaborative project we also seek to introduce to scholars of religion some...

  10. Computational Controversy

    NARCIS (Netherlands)

    Timmermans, Benjamin; Kuhn, Tobias; Beelen, Kaspar; Aroyo, Lora

    2017-01-01

    Climate change, vaccination, abortion, Trump: Many topics are surrounded by fierce controversies. The nature of such heated debates and their elements have been studied extensively in the social science literature. More recently, various computational approaches to controversy analysis have

  11. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    The emergence of supercomputers led to the use of computer simulation as an .... Scientific and engineering applications (e.g., TeraGrid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical Layer.

  12. Computer tomographs

    International Nuclear Information System (INIS)

    Niedzwiedzki, M.

    1982-01-01

    Physical foundations and developments in transmission and emission computer tomography are presented. On the basis of the available literature and private communications, a comparison is made of the various transmission tomographs. A new technique of emission computer tomography (ECT), previously unknown in Poland, is described. Two ECT methods, positron emission tomography and single photon emission tomography, are evaluated. (author)

  13. Computational sustainability

    CERN Document Server

    Kersting, Kristian; Morik, Katharina

    2016-01-01

    The book at hand gives an overview of the state-of-the-art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.

  14. Computing farms

    International Nuclear Information System (INIS)

    Yeh, G.P.

    2000-01-01

    High-energy physics, nuclear physics, space sciences, and many other fields have large challenges in computing. In recent years, PCs have achieved performance comparable to the high-end UNIX workstations, at a small fraction of the price. We review the development and broad applications of commodity PCs as the solution to CPU needs, and look forward to the important and exciting future of large-scale PC computing

  15. Computational chemistry

    Science.gov (United States)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials, with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  16. Frustration: A common user experience

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2010-01-01

    In the present study, 21 users self-reported their frustrating experiences during an average of 1.72 hours of computer use. As in the previous studies, the amount of time lost due to frustrating experiences was disturbing. The users spent 16% of their time trying to fix encountered problems and another 11% of their time redoing lost work. Thus, the frustrating experiences accounted for a total of 27% of the time. This main finding is exacerbated by several supplementary findings. For example, the users were unable to fix 26% of the experienced problems, and they rated that the problems recurred with a median...

  17. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  18. Computational creativity

    Directory of Open Access Journals (Sweden)

    López de Mántaras Badia, Ramon

    2013-12-01

    New technologies, and in particular artificial intelligence, are drastically changing the nature of creative processes. Computers are playing very significant roles in creative activities such as music, architecture, fine arts, and science. Indeed, the computer is already a canvas, a brush, a musical instrument, and so on. However, we believe that we must aim at more ambitious relations between computers and creativity. Rather than just seeing the computer as a tool to help human creators, we could see it as a creative entity in its own right. This view has triggered a new subfield of Artificial Intelligence called Computational Creativity. This article addresses the question of the possibility of achieving computational creativity through some examples of computer programs capable of replicating some aspects of creative behavior in the fields of music and science.

  19. COMMON APPROACH ON WASTE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    ANDREESCU Nicoleta Alina

    2017-05-01

    The world population has doubled since the 1960s, now reaching 7 billion, and it is estimated that it will continue growing. While in more advanced economies the population is starting to grow old and even decline in numbers, in less developed countries population numbers are growing fast. Across the world, ecosystems are exposed to critical levels of pollution in ever more complex combinations. Human activities, population growth and shifting consumption patterns are the main factors at the base of this ever-growing burden on our environment. Globalization means that the consumption and production patterns of a country or a region contribute to pressures on the environment in totally different parts of the world. With the rise of environmental problems, the search for solutions also began, including methods and actions aimed at protecting the environment and leading to a better correlation between economic growth and the environment. The common goal of these endeavours by participating states was to come up with medium- and long-term regulations that would lead to successfully solving environmental issues. In this paper, we analyze the way in which countries began collaborating at an international level in the 1970s in order to come up with a common policy that would have a positive impact on the environment. The European Union has come up with its own common policy, a policy that each member state must implement. In this context, Romania has developed its National Strategy for Waste Management, a program that Romania intends to use to reduce the quantity of waste and dispose of it better.

  20. Modeling Common-Sense Decisions

    Science.gov (United States)

    Zak, Michail

    This paper presents a methodology for the efficient synthesis of a dynamical model simulating a common-sense decision-making process. The approach is based upon an extension of physics' First Principles that includes the behavior of living systems. The new architecture consists of motor dynamics, simulating the actual behavior of the object, and mental dynamics, representing the evolution of the corresponding knowledge base and incorporating it in the form of information flows into the motor dynamics. The autonomy of the decision-making process is achieved by a feedback from mental to motor dynamics. This feedback replaces unavailable external information by an internal knowledge base stored in the mental model in the form of probability distributions. A toy sketch of such a motor/mental feedback loop is given below.
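
    The following Python toy is our own caricature of the motor/mental split described above, not the paper's model: a "mental" preference distribution, learned from observed outcomes, feeds back into the "motor" choice of the next action. The reward rule for the environment is likewise a made-up assumption.

    ```python
    import random

    # Toy motor/mental loop: the mental model is a preference distribution over
    # actions, updated from observed rewards, and it feeds back into the motor
    # step that actually picks the next action.

    actions = ["left", "right"]
    mental = {a: 1.0 for a in actions}          # unnormalized preference weights

    def motor_step(mental):
        """Pick an action according to the current mental distribution."""
        total = sum(mental.values())
        weights = [mental[a] / total for a in actions]
        return random.choices(actions, weights=weights)[0]

    def mental_step(mental, action, reward):
        """Update the knowledge base (preference weights) from the outcome."""
        mental[action] += reward

    for _ in range(100):
        a = motor_step(mental)
        # Hypothetical environment: 'right' pays off more often than 'left'.
        r = 1.0 if (a == "right" and random.random() < 0.8) else 0.0
        mental_step(mental, a, r)

    print(mental)   # the feedback loop shifts preference toward the better action
    ```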

  1. The common European flexicurity principles

    DEFF Research Database (Denmark)

    Mailand, Mikkel

    2010-01-01

    This article analyses the decision-making process underlying the adoption of common EU flexicurity principles. Supporters of the initiative succeeded in convincing the sceptics one by one; the change of government in France and the last-minute support of the European social partner organizations...... were instrumental in this regard. However, the critics succeeded in weakening the initially strong focus on the transition from job security to employment security and the divisions between insiders and outsiders in the labour market. In contrast to some decision-making on the European Employment...

  2. Common blocks for ASQS(12)

    Directory of Open Access Journals (Sweden)

    Lorenzo Milazzo

    1997-05-01

    An ASQS(v) is a particular Steiner system featuring a set X of v vertices and two separate families of blocks, B and G, whose elements have cardinality 4 and 6 respectively. It has the property that any three vertices of X belong either to a B-block or to a G-block. The parameter cb is the number of common blocks in two separate ASQSs, both defined on the same set of vertices X. In this paper it is shown that cb ≤ 29 for any pair of ASQS(12)s.

  3. The ATLAS distributed analysis system

    OpenAIRE

    Legger, F.

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During...

  4. Scientific computer simulation review

    International Nuclear Information System (INIS)

    Kaizer, Joshua S.; Heller, A. Kevin; Oberkampf, William L.

    2015-01-01

    Before the results of a scientific computer simulation are used for any purpose, it should be determined if those results can be trusted. Answering that question of trust is the domain of scientific computer simulation review. There is limited literature that focuses on simulation review, and most is specific to the review of a particular type of simulation. This work is intended to provide a foundation for a common understanding of simulation review. This is accomplished through three contributions. First, scientific computer simulation review is formally defined. This definition identifies the scope of simulation review and provides the boundaries of the review process. Second, maturity assessment theory is developed. This development clarifies the concepts of maturity criteria, maturity assessment sets, and maturity assessment frameworks, which are essential for performing simulation review. Finally, simulation review is described as the application of a maturity assessment framework. This is illustrated through evaluating a simulation review performed by the U.S. Nuclear Regulatory Commission. In making these contributions, this work provides a means for a more objective assessment of a simulation’s trustworthiness and takes the next step in establishing scientific computer simulation review as its own field. - Highlights: • We define scientific computer simulation review. • We develop maturity assessment theory. • We formally define a maturity assessment framework. • We describe simulation review as the application of a maturity framework. • We provide an example of a simulation review using a maturity framework

  5. Urban ambiances as common ground?

    Directory of Open Access Journals (Sweden)

    Jean-Paul Thibaud

    2014-07-01

    The aim of this paper is to point out various arguments which question ambiance as a common ground of everyday urban experience. Such a project involves four major points. First, we have to move beyond the exclusively practical aspects of everyday life and bring the sensory to the forefront. Under such conditions, sensory cultures emerge where feeling and acting come together. Second, we must put common experience into perspective by initiating a dual dynamics of socialising the sensory and sensitising social life. Ambiances involve a complex web comprised of an 'existential' dimension (empathy with the ambient world), a 'contextual' dimension (degree of presence in the situation), and an 'interactional' dimension (forms of sociability expressed in the tonality). Third, we have to initiate a political ecology of ambiances in order to better understand how ambiances deal with fundamental design and planning issues. Far from being neutral, the notion of ambiance appears to be bound up with the socio-aesthetic strategies underpinning changes to the sensory urban environment of the future. Fourth, we have to question what in situ experience is all about. Three major research pointers enable us to address this issue: the embodiment of situated experiences, the porous nature of sensory spaces, and the sensory efficiency of the built environment. Ambiances sensitize urban design as well as social lifeforms.

  6. Common questions about infectious mononucleosis.

    Science.gov (United States)

    Womack, Jason; Jimenez, Marissa

    2015-03-15

    Epstein-Barr is a ubiquitous virus that infects 95% of the world population at some point in life. Although Epstein-Barr virus (EBV) infections are often asymptomatic, some patients present with the clinical syndrome of infectious mononucleosis (IM). The syndrome most commonly occurs between 15 and 24 years of age. It should be suspected in patients presenting with sore throat, fever, tonsillar enlargement, fatigue, lymphadenopathy, pharyngeal inflammation, and palatal petechiae. A heterophile antibody test is the best initial test for diagnosis of EBV infection, with 71% to 90% accuracy for diagnosing IM. However, the test has a 25% false-negative rate in the first week of illness. IM is unlikely if the lymphocyte count is less than 4,000 mm3. The presence of EBV-specific immunoglobulin M antibodies confirms infection, but the test is more costly and results take longer than the heterophile antibody test. Symptomatic relief is the mainstay of treatment. Glucocorticoids and antivirals do not reduce the length or severity of illness. Splenic rupture is an uncommon complication of IM. Because physical activity within the first three weeks of illness may increase the risk of splenic rupture, athletic participation is not recommended during this time. Children are at the highest risk of airway obstruction, which is the most common cause of hospitalization from IM. Patients with immunosuppression are more likely to have fulminant EBV infection.

  7. DNA/SNLA commonality program

    International Nuclear Information System (INIS)

    Keller, D.V.; Watts, A.J.; Rice, D.A.; Powe, J.; Beezhold, W.

    1980-01-01

    The purpose of the Commonality program, initiated by DNA in 1978, was to evaluate e-beam material testing procedures and techniques by comparing material stress and spall data from various US and UK e-beam facilities and experimenters. As part of this joint DNA/SNL/UK Commonality effort, Sandia and Ktech used four different electron-beam machines to investigate various aspects of e-beam energy deposition in three materials. The deposition duration and the deposition profiles were varied, and the resulting stresses were measured. The materials studied were: (1) a low-Z material (Al), (2) a high-Z material (Ta), and (3) a typical porous material, a cermet. Aluminium and tantalum were irradiated using the DNA Blackjack 3 accelerator (60 ns pulse width), the DNA Blackjack 3' accelerator (30 ns pulse width), and the SNLA REHYD accelerator (100 ns pulse width). Propagating stresses were measured using x-cut quartz gauges, carbon gauges, and laser interferometry techniques. Data to determine the influence of deposition duration were obtained over a wide range of energy loadings. The cermet material was studied using the SNLA REHYD and HERMES II accelerators. The e-beam from REHYD generated propagating stresses which were monitored with quartz gauges as a function of sample thickness and energy loadings. The HERMES II accelerator was used to uniformly heat the cermet to determine the Grueneisen parameter and identify the incipient spall condition. Results of these experiments are presented

  8. Multiparty Computations

    DEFF Research Database (Denmark)

    Dziembowski, Stefan

    In this thesis we study the problem of doing Verifiable Secret Sharing (VSS) and Multiparty Computations in a model where private channels between the players and a broadcast channel are available. The adversary is active, adaptive and has unbounded computing power. The thesis is based on two...... Up to a polynomial-time black-box reduction, the complexity of adaptively secure VSS is the same as that of ordinary secret sharing (SS), where security is only required against a passive, static adversary. Previously, such a connection was only known for linear secret sharing and VSS schemes. We then show...... here and discuss other problems caused by the adaptiveness. All protocols in the thesis are formally specified and the proofs of their security are given. [1] Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, and Tal Rabin. Efficient multiparty computations with dishonest minority...
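
    For readers unfamiliar with the secret-sharing primitive referred to above, the following is a minimal Python sketch of additive secret sharing over a prime field. It is our own illustration of ordinary, passively secure SS, not the thesis's VSS protocols, and the modulus is an arbitrary choice.

    ```python
    import secrets

    P = 2**61 - 1  # a prime modulus; all arithmetic is done in the field Z_P

    def share(secret, n):
        """Split 'secret' into n additive shares that sum to it modulo P."""
        shares = [secrets.randbelow(P) for _ in range(n - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        """Recover the secret by summing all shares modulo P."""
        return sum(shares) % P

    s = 123456789
    shs = share(s, 5)
    assert reconstruct(shs) == s
    # Any n-1 shares are uniformly random and reveal nothing about the secret;
    # verifiability against an active adversary (VSS) needs extra machinery.
    ```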

  9. Scientific computing

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...
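
    The questions of attainable accuracy and of measuring the quality of computational results, which the volume emphasizes, can be illustrated with a short sketch. This is our own example, not the book's; the book itself works in C++, Fortran and MATLAB, while Python is used here for brevity.

    ```python
    import math

    # Estimate the machine epsilon of double precision: the smallest eps such
    # that 1 + eps is still distinguishable from 1 in floating-point arithmetic.
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    print(eps)                       # ~2.22e-16 for IEEE 754 doubles

    # Measure the error of a forward-difference derivative of exp at 0 (exact
    # value 1). Total error = truncation O(h) + rounding O(eps/h), so the best
    # attainable accuracy is roughly sqrt(eps), not eps itself.
    for h in (1e-4, 1e-8, 1e-12):
        approx = (math.exp(h) - math.exp(0.0)) / h
        print(h, abs(approx - 1.0))  # error is smallest near h ~ sqrt(eps)
    ```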

  10. Computational Psychiatry

    Science.gov (United States)

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically-realistic modeling bridging cellular and synaptic mechanisms with behavior, model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  11. Joint Service Common Operating Environment (COE) Common Geographic Information System functional requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meitzler, W.D.

    1992-06-01

    In the context of this document and COE, the Geographic Information Systems (GIS) are decision support systems involving the integration of spatially referenced data in a problem solving environment. They are digital computer systems for capturing, processing, managing, displaying, modeling, and analyzing geographically referenced spatial data which are described by attribute data and location. The ability to perform spatial analysis and the ability to combine two or more data sets to create new spatial information differentiates a GIS from other computer mapping systems. While the CCGIS allows for data editing and input, its primary purpose is not to prepare data, but rather to manipulate, analyze, and clarify it. The CCGIS defined herein provides GIS services and resources including the spatial and map related functionality common to all subsystems contained within the COE suite of C4I systems. The CCGIS, which is an integral component of the COE concept, relies on the other COE standard components to provide the definition for other support computing services required.
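
    As a toy illustration of the kind of spatial analysis a GIS layers on top of plain computer mapping (for example, testing whether a geographically referenced point falls inside a region), a minimal ray-casting point-in-polygon test is sketched below; it is purely illustrative and is not part of the CCGIS.

        def point_in_polygon(x, y, polygon):
            """Ray-casting test: `polygon` is a list of (x, y) vertices in order."""
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                # Does the horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        region = [(0, 0), (4, 0), (4, 3), (0, 3)]    # a simple rectangular zone
        print(point_in_polygon(2, 1, region))        # True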

  12. Computational artifacts

    DEFF Research Database (Denmark)

    Schmidt, Kjeld; Bansler, Jørgen P.

    2016-01-01

    The key concern of CSCW research is that of understanding computing technologies in the social context of their use, that is, as integral features of our practices and our lives, and to think of their design and implementation under that perspective. However, the question of the nature of that which is actually integrated in our practices is often discussed in confusing ways, if at all. The article aims to try to clarify the issue and in doing so revisits and reconsiders the notion of 'computational artifact'.

  13. Computer security

    CERN Document Server

    Gollmann, Dieter

    2011-01-01

    A completely up-to-date resource on computer security Assuming no previous experience in the field of computer security, this must-have book walks you through the many essential aspects of this vast topic, from the newest advances in software and technology to the most recent information on Web applications security. This new edition includes sections on Windows NT, CORBA, and Java and discusses cross-site scripting and JavaScript hacking as well as SQL injection. Serving as a helpful introduction, this self-study guide is a wonderful starting point for examining the variety of competing sec
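
    As a concrete taste of one topic the book covers (SQL injection), the snippet below contrasts unsafe string formatting with a parameterized query using Python's standard sqlite3 module; it is a generic illustration, not code from the book.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

        user_input = "alice' OR '1'='1"   # a classic injection attempt

        # Unsafe: attacker-controlled text becomes part of the SQL statement.
        # query = "SELECT role FROM users WHERE name = '%s'" % user_input

        # Safe: the driver binds the value, so it is treated as data, not SQL.
        rows = conn.execute("SELECT role FROM users WHERE name = ?",
                            (user_input,)).fetchall()
        print(rows)   # [] -- the injection string matches no user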

  14. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope is hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  15. Computational Logistics

    DEFF Research Database (Denmark)

    Pacino, Dario; Voss, Stefan; Jensen, Rune Møller

    2013-01-01

    This book constitutes the refereed proceedings of the 4th International Conference on Computational Logistics, ICCL 2013, held in Copenhagen, Denmark, in September 2013. The 19 papers presented in this volume were carefully reviewed and selected for inclusion in the book. They are organized in topical sections named: maritime shipping, road transport, vehicle routing problems, aviation applications, and logistics and supply chain management.

  16. Computational Logistics

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 4th International Conference on Computational Logistics, ICCL 2013, held in Copenhagen, Denmark, in September 2013. The 19 papers presented in this volume were carefully reviewed and selected for inclusion in the book. They are organized in topical sections named: maritime shipping, road transport, vehicle routing problems, aviation applications, and logistics and supply chain management.

  17. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  18. Computer busses

    CERN Document Server

    Buchanan, William

    2000-01-01

    As more and more equipment is interface- or 'bus'-driven, either by the use of controllers or directly from PCs, the question of which bus to use is becoming increasingly important, both in industry and in the office. 'Computer Busses' has been designed to help choose the best type of bus for the particular application. There are several books which cover individual busses, but none which provide a complete guide to computer busses. The author provides a basic theory of busses and draws examples and applications from real bus case studies. Busses are analysed using a top-down approach, helping ...

  19. Reconfigurable Computing

    CERN Document Server

    Cardoso, Joao MP

    2011-01-01

    As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable Computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the postfabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a comp

  20. Common Β- Thalassaemia Mutations in

    Directory of Open Access Journals (Sweden)

    P Azarfam

    2005-01-01

    Full Text Available Introduction: β-Thalassaemia was first described by Thomas Cooley as Cooley's anaemia in 1925. The β-thalassaemias are hereditary autosomal disorders with decreased or absent β-globin chain synthesis. The most common genetic defects in β-thalassaemias are caused by point mutations, micro-deletions or insertions within the β-globin gene. Material and Methods: In this research, 142 blood samples (64 from the children's hospital of Tabriz, 15 from the Shahid Gazi hospital of Tabriz, 18 from Urumia and 45 from the Aliasghar hospital of Ardebil) were taken from thalassaemic patients who had previously been diagnosed. Then 117 non-familial samples were selected. The DNA of the lymphocytes of the blood samples was extracted by the boiling and Proteinase K-SDS procedure, and mutations were detected by ARMS-PCR methods. Results: The samples were screened for the eleven most common mutations, most of which are Mediterranean mutations: IVS-I-110 (G-A), IVS-I-1 (G-A), IVS-I-5 (G-C), Frameshift Codon 44 (-C), Codon 5 (-CT), IVS-I-6 (T-C), IVS-I-25 (-25 bp del), Frameshift 8.9 (+G), IVS-II-1 (G-A), Codon 39 (C-T) and Codon 30 (G-C). The results showed that Frameshift 8.9 (+G), IVS-I-110 (G-A), IVS-II-1 (G-A), IVS-I-5 (G-C), IVS-I-1 (G-A), Frameshift Codon 44 (-C), Codon 5 (-CT), IVS-I-6 (T-C) and IVS-I-25 (-25 bp del), with frequencies of 29.9%, 25.47%, 17.83%, 7.00%, 6.36%, 6.63%, 3.8%, 2.5% and 0.63% respectively, represented the most common mutations in north-west Iran. No mutations in Codon 39 (C-T) or Codon 30 (G-C) were detected. Conclusion: The frequency of these mutations in patients from north-west Iran appears to differ from that in other regions such as Turkey, Pakistan, Lebanon and the Fars province of Iran. The pattern of mutations in this region is more or less the same as in the Mediterranean region, but different from south-west Asia and East Asia.

  1. Research on cloud computing solutions

    Directory of Open Access Journals (Sweden)

    Liudvikas Kaklauskas

    2015-07-01

    Full Text Available Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang adapted six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, hybrid cloud and community cloud. The most common and well-known deployment model is the public cloud. A private cloud is suited for sensitive data, where the customer is dependent on a certain degree of security. According to the different types of services offered, cloud computing can be considered to consist of three layers (service models): IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service). The main cloud computing solutions are web applications, data hosting, virtualization, database clusters and terminal services. The advantage of cloud computing is the ability to virtualize and share resources among different applications with the objective of better server utilization; without a clustering solution, a service may fail the moment the server crashes. DOI: 10.15181/csat.v2i2.914

  2. Harvesting NASA's Common Metadata Repository

    Science.gov (United States)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers, along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge it into their own existing metadata repositories. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.
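
    A minimal harvesting loop might look like the sketch below, which pages through what is assumed here to be the public CMR collection-search endpoint using the requests library; the exact URL, parameters and paging limits should be checked against the current CMR API documentation, and a real harvester would also track revision dates to pick up updates and deletions.

        import requests

        # Assumed public search endpoint; confirm against the CMR documentation.
        CMR_URL = "https://cmr.earthdata.nasa.gov/search/collections.json"

        def harvest_collections(page_size=100, max_pages=5):
            """Yield collection metadata records, one page at a time."""
            for page in range(1, max_pages + 1):
                resp = requests.get(CMR_URL,
                                    params={"page_size": page_size, "page_num": page},
                                    timeout=30)
                resp.raise_for_status()
                entries = resp.json().get("feed", {}).get("entry", [])
                if not entries:
                    break
                for record in entries:
                    yield record

        for rec in harvest_collections(page_size=10, max_pages=1):
            print(rec.get("id"), rec.get("title"))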

  3. Why is migraine so common?

    Science.gov (United States)

    Edmeads, J

    1998-08-01

    Migraine is clearly a very common biological disorder, but this knowledge has not been sufficient as yet to ensure completely effective treatment strategies. There appears to be a discrepancy between what migraine patients desire as the outcome of consultations and what doctors think patients want. Patients seem, from Packard's selective study (11), to want explanation and reassurance before they get pain relief, whereas doctors view pain relief as the most important aim of management. It is possible that doctors still have underlying assumptions about psychological elements of migraine which color their perceptions of their patients. Communicating the relevance of scientific progress in migraine to neurologists and PCPs is an important challenge, as is calling attention to the patient's expectations from treatment. To be effective in improving education in this area, perhaps we should first ascertain the level of knowledge about the biology and treatment of headache among general neurologists.

  4. Practical advantages of evolutionary computation

    Science.gov (United States)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.
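
    To make the self-adapting search concrete, here is a minimal (1+1) evolution strategy with a log-normally self-adapted mutation step on a toy objective; it is a generic textbook sketch, not an algorithm taken from this paper.

        import math, random

        def sphere(x):
            """Toy objective: minimize the sum of squares."""
            return sum(v * v for v in x)

        def one_plus_one_es(dim=5, iters=2000, seed=1):
            rng = random.Random(seed)
            x = [rng.uniform(-5, 5) for _ in range(dim)]
            sigma = 1.0                      # mutation step size, adapted on the fly
            fx = sphere(x)
            for _ in range(iters):
                # Log-normal self-adaptation of the step size, then mutate.
                child_sigma = sigma * math.exp(0.2 * rng.gauss(0, 1))
                child = [v + child_sigma * rng.gauss(0, 1) for v in x]
                fc = sphere(child)
                if fc <= fx:                 # keep the child only if it is no worse
                    x, fx, sigma = child, fc, child_sigma
            return x, fx

        best, value = one_plus_one_es()
        print(round(value, 6))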

  5. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). The book illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics, with emphasis on algorithmic advances that will allow re-application in other...
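
    One of the computations alluded to, comparing symmetric positive-definite matrices such as diffusion tensors, can be illustrated with the affine-invariant geodesic distance d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F. The SciPy-based sketch below is a generic illustration, not code from the book.

        import numpy as np
        from scipy.linalg import sqrtm, logm

        def spd_geodesic_distance(A, B):
            """Affine-invariant Riemannian distance between SPD matrices A and B."""
            A_inv_sqrt = np.linalg.inv(sqrtm(A))
            middle = A_inv_sqrt @ B @ A_inv_sqrt
            # Round-off can leave a negligible imaginary part; discard it.
            return np.linalg.norm(np.real(logm(middle)), "fro")

        A = np.array([[2.0, 0.3], [0.3, 1.0]])
        B = np.array([[1.5, 0.0], [0.0, 0.8]])
        print(spd_geodesic_distance(A, B))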

  6. From computer to brain foundations of computational neuroscience

    CERN Document Server

    Lytton, William W

    2002-01-01

    Biology undergraduates, medical students and life-science graduate students often have limited mathematical skills. Similarly, physics, math and engineering students have little patience for the detailed facts that make up much of biological knowledge. Teaching computational neuroscience as an integrated discipline requires that both groups be brought forward onto common ground. This book does this by making ancillary material available in an appendix and providing basic explanations without becoming bogged down in unnecessary details. The book will be suitable for undergraduates and beginning graduate students taking a computational neuroscience course and also to anyone with an interest in the uses of the computer in modeling the nervous system.

  7. Statistical Computing

    Indian Academy of Sciences (India)

    inference and finite population sampling. Sudhakar Kunte. Elements of statistical computing are discussed in this series. ... which captain gets an option to decide whether to field first or bat first ... may of course not be fair, in the sense that the team which wins ... describe two methods of drawing a random number between 0.

  8. Computational biology

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Computation via biological devices has been the subject of close scrutiny since von Neumann’s early work some 60 years ago. In spite of the many relevant works in this field, the notion of programming biological devices seems to be, at best, ill-defined. While many devices are claimed or proved t...

  9. Computing News

    CERN Multimedia

    McCubbin, N

    2001-01-01

    We are still five years from the first LHC data, so we have plenty of time to get the computing into shape, don't we? Well, yes and no: there is time, but there's an awful lot to do! The recently-completed CERN Review of LHC Computing gives the flavour of the LHC computing challenge. The hardware scale for each of the LHC experiments is millions of 'SpecInt95' (SI95) units of cpu power and tens of PetaBytes of data storage. PCs today are about 20-30SI95, and expected to be about 100 SI95 by 2005, so it's a lot of PCs. This hardware will be distributed across several 'Regional Centres' of various sizes, connected by high-speed networks. How to realise this in an orderly and timely fashion is now being discussed in earnest by CERN, Funding Agencies, and the LHC experiments. Mixed in with this is, of course, the GRID concept...but that's a topic for another day! Of course hardware, networks and the GRID constitute just one part of the computing. Most of the ATLAS effort is spent on software development. What we ...
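
    As a back-of-the-envelope illustration of the scale quoted (the totals below are assumptions read off the article's rough figures, not official estimates):

        # Rough, assumed figures: a few million SI95 in total, ~100 SI95 per PC in 2005.
        total_si95_needed = 5e6      # assumed "millions of SpecInt95 units"
        si95_per_pc_2005 = 100
        print(int(total_si95_needed / si95_per_pc_2005), "PCs of 2005 vintage")
        # -> 50000 PCs, before any allowance for inefficiency or redundancy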

  10. Quantum Computation

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 16; Issue 9. Quantum Computation - Particle and Wave Aspects of Algorithms. Apoorva Patel. General Article Volume 16 Issue 9 September 2011 pp 821-835. Fulltext. Click here to view fulltext PDF. Permanent link:

  11. Cloud computing.

    Science.gov (United States)

    Wink, Diane M

    2012-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes how cloud computing can be used in nursing education.

  12. Computer Recreations.

    Science.gov (United States)

    Dewdney, A. K.

    1988-01-01

    Describes the creation of the computer program "BOUNCE," designed to simulate a weighted piston coming into equilibrium with a cloud of bouncing balls. The model follows the ideal gas law. Utilizes the critical event technique to create the model. Discusses another program, "BOOM," which simulates a chain reaction. (CW)

  13. [Grid computing

    CERN Multimedia

    Wolinsky, H

    2003-01-01

    "Turn on a water spigot, and it's like tapping a bottomless barrel of water. Ditto for electricity: Flip the switch, and the supply is endless. But computing is another matter. Even with the Internet revolution enabling us to connect in new ways, we are still limited to self-contained systems running locally stored software, limited by corporate, institutional and geographic boundaries" (1 page).

  14. Computational Finance

    DEFF Research Database (Denmark)

    Rasmussen, Lykke

    One of the major challenges in today's post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these derivatives have been determined using bump-and-revalue, but due to the increasing magnitude of these computations does...
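
    To illustrate the bump-and-revalue approach mentioned above, here is a central-difference delta for a plain Black-Scholes call; the model and numbers are a generic textbook example, not one of the complex products the thesis targets.

        import math

        def norm_cdf(x):
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        def bs_call(S, K, T, r, sigma):
            """Black-Scholes price of a European call."""
            d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
            d2 = d1 - sigma * math.sqrt(T)
            return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

        def delta_bump_and_revalue(S, K, T, r, sigma, h=1e-4):
            """Central finite difference: reprice with the spot bumped up and down."""
            up = bs_call(S + h, K, T, r, sigma)
            down = bs_call(S - h, K, T, r, sigma)
            return (up - down) / (2 * h)

        print(delta_bump_and_revalue(S=100, K=100, T=1.0, r=0.02, sigma=0.2))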

  15. Optical Computing

    Indian Academy of Sciences (India)

    Optical computing technology is, in general, developing in two directions. One approach is ... current support in many places, with private companies as well as governments in several countries encouraging such research work. For example, much ... which enables more information to be carried and data to be processed.

  16. Data mining in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ruxandra-Ştefania PETRE

    2012-10-01

    Full Text Available This paper describes how data mining is used in cloud computing. Data mining is used for extracting potentially useful information from raw data. The integration of data mining techniques into normal day-to-day activities has become commonplace. Every day people are confronted with targeted advertising, and data mining techniques help businesses to become more efficient by reducing costs. Data mining techniques and applications are very much needed in the cloud computing paradigm. The implementation of data mining techniques through cloud computing will allow users to retrieve meaningful information from a virtually integrated data warehouse that reduces the costs of infrastructure and storage.

  17. Computable Frames in Computable Banach Spaces

    Directory of Open Access Journals (Sweden)

    S.K. Kaushik

    2016-06-01

    Full Text Available We develop some parts of frame theory in Banach spaces from the point of view of Computable Analysis. We define a computable M-basis and use it to construct a computable Banach space of scalar-valued sequences. Computable Xd-frames and computable Banach frames are also defined, and computable versions of sufficient conditions for their existence are obtained.

  18. Sustainable models of audiovisual commons

    Directory of Open Access Journals (Sweden)

    Mayo Fuster Morell

    2013-03-01

    Full Text Available This paper addresses an emerging phenomenon characterized by continuous change and experimentation: the collaborative commons creation of audiovisual content online. The analysis focuses on models of sustainability of collaborative online creation, paying particular attention to the use of different forms of advertising. This article is an excerpt of a larger investigation whose unit of analysis is cases of Online Creation Communities that take the Catalan territory as their central node of activity. Across 22 selected cases, the methodology combines quantitative analysis, through a questionnaire delivered to all cases, and qualitative analysis through face-to-face interviews conducted in 8 of the cases studied. The research, whose conclusions we summarize in this article, leads us to conclude that the sustainability of the projects depends largely on relationships of trust and interdependence between different voluntary agents, on non-monetary contributions and remunerations, and on resources and infrastructure of free use. All together this leads us to understand that this is and will be a very important area for the future of audiovisual content and its sustainability, which will imply changes in the policies that govern it.

  19. Longest Common Extensions via Fingerprinting

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Kristensen, Jesper

    2012-01-01

    The LCE problem can be solved in linear space with constant query time and a preprocessing of sorting complexity. There are two known approaches achieving these bounds, which use nearest common ancestors and range minimum queries, respectively. However, in practice a much simpler approach with linear query time, no extra space and no preprocessing achieves significantly better average case performance. We show a new algorithm, Fingerprint_k, which for a parameter k, 1 ≤ k ≤ [log n], on a string of length n and alphabet size σ, gives O(k n^(1/k)) query time using O(k n) space and O(k n + sort(n,σ)) preprocessing time, where sort(n,σ) is the time it takes to sort n numbers from σ. Though this solution is asymptotically strictly worse than the asymptotically best previously known algorithms, it outperforms them in practice in the average case and is almost as fast as the simple linear time algorithm. On worst...
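
    The flavour of a fingerprint-based LCE query can be conveyed by the much simpler (Monte Carlo) sketch below: precompute Karp-Rabin prefix hashes, then binary-search for the longest match whose fingerprints agree. This is only a generic illustration of the idea, not the Fingerprint_k algorithm of the paper.

        class FingerprintLCE:
            """Answers LCE(i, j) queries on a string via Karp-Rabin fingerprints.

            Monte Carlo: equal fingerprints are taken to mean equal substrings.
            """
            def __init__(self, s, base=256, mod=(1 << 61) - 1):
                self.n, self.mod, self.base = len(s), mod, base
                self.h = [0] * (self.n + 1)      # prefix hashes
                self.p = [1] * (self.n + 1)      # base powers
                for i, c in enumerate(s):
                    self.h[i + 1] = (self.h[i] * base + ord(c)) % mod
                    self.p[i + 1] = (self.p[i] * base) % mod

            def _hash(self, i, length):
                return (self.h[i + length] - self.h[i] * self.p[length]) % self.mod

            def lce(self, i, j):
                lo, hi = 0, self.n - max(i, j)   # feasible match lengths
                while lo < hi:
                    mid = (lo + hi + 1) // 2
                    if self._hash(i, mid) == self._hash(j, mid):
                        lo = mid
                    else:
                        hi = mid - 1
                return lo

        lce = FingerprintLCE("abracadabra")
        print(lce.lce(0, 7))   # "abracadabra" vs "abra" -> 4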

  20. Common bus multinode sensor system

    International Nuclear Information System (INIS)

    Kelly, T.F.; Naviasky, E.H.; Evans, W.P.; Jefferies, D.W.; Smith, J.R.

    1988-01-01

    This patent describes a nuclear power plant including a common bus multinode sensor system for sensors in the nuclear power plant, each sensor producing a sensor signal. The system consists of: a power supply providing power; a communication cable coupled to the power supply; plural remote sensor units coupled between the cable and one or more sensors, and comprising: a direct current power supply, connected to the cable and converting the power on the cable into direct current; an analog-to-digital converter connected to the direct current power supply; an oscillator reference; a filter; and an integrated circuit sensor interface connected to the direct current power supply, the analog-to-digital converter, the oscillator crystal and the filter, the interface comprising: a counter receiving a frequency designation word from external to the interface; a phase-frequency comparator connected to the counter; an oscillator connected to the oscillator reference; a timing counter connected to the oscillator, the phase/frequency comparator and the analog-to-digital converter; an analog multiplexer connectable to the sensors and the analog-to-digital converter, and connected to the timing counter; a shift register operatively connected to the timing counter and the analog-to-digital converter; an encoder connected to the shift register and connectable to the filter; and a voltage controlled oscillator connected to the filter and the cable

  1. Common hyperspectral image database design

    Science.gov (United States)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces the Common Hyperspectral Image Database (CHIDB), built with a demand-oriented database design method, which comprehensively brings together ground-based spectra, standardized hyperspectral cubes and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; some data mining ideas and functions were incorporated into CHIDB to make it more suitable for service in agricultural, geological and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are all stored in SQL Server 2008 for efficient search, query and update, and some advanced spectral image data processing technologies are used, such as parallel processing in C#. Finally, an application case in the agricultural disease detection area is presented.

  2. Longest common extensions in trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gawrychowski, Pawel; Gørtz, Inge Li

    2016-01-01

    to trees and suggest a few applications of LCE in trees to tries and XML databases. Given a labeled and rooted tree T of size n, the goal is to preprocess T into a compact data structure that support the following LCE queries between subpaths and subtrees in T. Let v1, v2, w1, and w2 be nodes of T...... such that w1 and w2 are descendants of v1 and v2 respectively. - LCEPP(v1, w1, v2, w2): (path-path LCE) return the longest common prefix of the paths v1 ~→ w1 and v2 ~→ w2. - LCEPT(v1, w1, v2): (path-tree LCE) return maximal path-path LCE of the path v1 ~→ w1 and any path from v2 to a descendant leaf. - LCETT......(v1, v2): (tree-tree LCE) return a maximal path-path LCE of any pair of paths from v1 and v2 to descendant leaves. We present the first non-trivial bounds for supporting these queries. For LCEPP queries, we present a linear-space solution with O(log* n) query time. For LCEPT queries, we present...

  3. Longest Common Extensions in Trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gawrychowski, Pawel; Gørtz, Inge Li

    2015-01-01

    to trees and suggest a few applications of LCE in trees to tries and XML databases. Given a labeled and rooted tree T of size n, the goal is to preprocess T into a compact data structure that support the following LCE queries between subpaths and subtrees in T. Let v1, v2, w1, and w2 be nodes of T...... such that w1 and w2 are descendants of v1 and v2 respectively. - LCEPP(v1, w1, v2, w2): (path-path LCE) return the longest common prefix of the paths v1 ~→ w1 and v2 ~→ w2. - LCEPT(v1, w1, v2): (path-tree LCE) return maximal path-path LCE of the path v1 ~→ w1 and any path from v2 to a descendant leaf. - LCETT......(v1, v2): (tree-tree LCE) return a maximal path-path LCE of any pair of paths from v1 and v2 to descendant leaves. We present the first non-trivial bounds for supporting these queries. For LCEPP queries, we present a linear-space solution with O(log* n) query time. For LCEPT queries, we present...

  4. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation, generally applicable in practice and based on differences in epistemic strategies and scopes.

  5. Algebraic computing

    International Nuclear Information System (INIS)

    MacCallum, M.A.H.

    1990-01-01

    The implementation of a new computer algebra system is time consuming: designers of general purpose algebra systems usually say it takes about 50 man-years to create a mature and fully functional system. Hence the range of available systems and their capabilities changes little between one general relativity meeting and the next, despite which there have been significant changes in the period since the last report. The introductory remarks aim to give a brief survey of the capabilities of the principal available systems and highlight one or two trends. A reference to the most recent full survey of computer algebra in relativity and brief descriptions of Maple, REDUCE, SHEEP and other applications are given. (author)

  6. [Computer program "PANCREAS"].

    Science.gov (United States)

    Jakubowicz, J; Jankowski, M; Szomański, B; Switka, S; Zagórowicz, E; Pertkiewicz, M; Szczygieł, B

    1998-01-01

    Contemporary computer technology allows precise and fast analysis of large databases. Widespread and common use depends on appropriate, user-friendly software, which is usually lacking in specialized medical applications. The aim of this work was to develop an integrated system designed to store, explore and analyze data of patients treated for pancreatic cancer. For that purpose the database administration system MS Visual FoxPro 3.0 was used and a special application, conforming to the ISO 9000 series, was developed. The system works under MS Windows 95, with the possibility of easy adaptation to MS Windows 3.11 or MS Windows NT through the graphic user interface. The system stores personal data, laboratory results, visual and histological analyses and information on treatment course and complications. The system archives these data and enables the preparation of reports according to individual and statistical needs. Help and security settings also allow someone not familiar with computer science to work with the system.

  7. Computational Controversy

    OpenAIRE

    Timmermans, Benjamin; Kuhn, Tobias; Beelen, Kaspar; Aroyo, Lora

    2017-01-01

    Climate change, vaccination, abortion, Trump: Many topics are surrounded by fierce controversies. The nature of such heated debates and their elements have been studied extensively in the social science literature. More recently, various computational approaches to controversy analysis have appeared, using new data sources such as Wikipedia, which help us now better understand these phenomena. However, compared to what social sciences have discovered about such debates, the existing computati...

  8. Computed tomography

    International Nuclear Information System (INIS)

    Andre, M.; Resnick, D.

    1988-01-01

    Computed tomography (CT) has matured into a reliable and prominent tool for study of the musculoskeletal system. When it was introduced in 1973, it was unique in many ways and posed a challenge to interpretation. It is in these unique features, however, that its advantages lie in comparison with conventional techniques. These advantages will be described in a spectrum of important applications in orthopedics and rheumatology.

  9. Computed radiography

    International Nuclear Information System (INIS)

    Pupchek, G.

    2004-01-01

    Computed radiography (CR) is an image acquisition process that is used to create digital, 2-dimensional radiographs. CR employs a photostimulable phosphor-based imaging plate, replacing the standard x-ray film and intensifying screen combination. Conventional radiographic exposure equipment is used with no modification required to the existing system. CR can transform an analog x-ray department into a digital one and eliminates the need for chemicals, water, darkrooms and film processor headaches. (author)

  10. Computational universes

    International Nuclear Information System (INIS)

    Svozil, Karl

    2005-01-01

    Suspicions that the world might be some sort of a machine or algorithm existing 'in the mind' of some symbolic number cruncher have lingered from antiquity. Although popular at times, the most radical forms of this idea never reached mainstream. Modern developments in physics and computer science have lent support to the thesis, but empirical evidence is needed before it can begin to replace our contemporary world view

  11. Common Questions About Chronic Prostatitis.

    Science.gov (United States)

    Holt, James D; Garrett, W Allan; McCurry, Tyler K; Teichman, Joel M H

    2016-02-15

    Chronic prostatitis is relatively common, with a lifetime prevalence of 1.8% to 8.2%. Risk factors include conditions that facilitate introduction of bacteria into the urethra and prostate (which also predispose the patient to urinary tract infections) and conditions that can lead to chronic neuropathic pain. Chronic prostatitis must be differentiated from other causes of chronic pelvic pain, such as interstitial cystitis/bladder pain syndrome and pelvic floor dysfunction; prostate and bladder cancers; benign prostatic hyperplasia; urolithiasis; and other causes of dysuria, urinary frequency, and nocturia. The National Institutes of Health divides prostatitis into four syndromes: acute bacterial prostatitis, chronic bacterial prostatitis (CBP), chronic nonbacterial prostatitis (CNP)/chronic pelvic pain syndrome (CPPS), and asymptomatic inflammatory prostatitis. CBP and CNP/CPPS both lead to pelvic pain and lower urinary tract symptoms. CBP presents as recurrent urinary tract infections with the same organism identified on repeated cultures; it responds to a prolonged course of an antibiotic that adequately penetrates the prostate, if the urine culture suggests sensitivity. If four to six weeks of antibiotic therapy is effective but symptoms recur, another course may be prescribed, perhaps in combination with alpha blockers or nonopioid analgesics. CNP/CPPS, accounting for more than 90% of chronic prostatitis cases, presents as prostatic pain lasting at least three months without consistent culture results. Weak evidence supports the use of alpha blockers, pain medications, and a four- to six-week course of antibiotics for the treatment of CNP/CPPS. Patients may also be referred to a psychologist experienced in managing chronic pain. Experts on this condition recommend a combination of treatments tailored to the patient's phenotypic presentation. Urology referral should be considered when appropriate treatment is ineffective. Additional treatments include pelvic

  12. Coordinating towards a Common Good

    Science.gov (United States)

    Santos, Francisco C.; Pacheco, Jorge M.

    2010-09-01

    Throughout their life, humans often engage in collective endeavors ranging from family-related issues to global warming. In all cases, the tragedy of the commons threatens the possibility of reaching the optimal solution associated with global cooperation, a scenario predicted by theory and demonstrated by many experiments. Using the toolbox of evolutionary game theory, I will address two important aspects of evolutionary dynamics that have been neglected so far in the context of public goods games and evolution of cooperation. On one hand, the fact that often there is a threshold above which a public good is reached [1, 2]. On the other hand, the fact that individuals often participate in several games, related to their social context and pattern of social ties, defined by a social network [3, 4, 5]. In the first case, the existence of a threshold above which collective action is materialized dictates a rich pattern of evolutionary dynamics where the direction of natural selection can be inverted compared to standard expectations. Scenarios of defector dominance, pure coordination or coexistence may arise simultaneously. Both finite and infinite population models are analyzed. In networked games, cooperation blooms whenever the act of contributing is more important than the effort contributed. In particular, the heterogeneous nature of social networks naturally induces a symmetry breaking of the dilemmas of cooperation, as contributions made by cooperators may become contingent on the social context in which the individual is embedded. This diversity in context provides an advantage to cooperators, which is particularly strong when both wealth and social ties follow a power-law distribution, providing clues on the self-organization of social communities. Finally, in both situations, it can be shown that individuals no longer play a defection dominance dilemma, but effectively engage in a general N-person coordination game. Even if locally defection may seem
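
    One common way to formalize the threshold public goods game described here is sketched below; the payoff structure follows the standard N-person threshold model, and the parameter values are arbitrary illustrations, not figures from the talk.

        def threshold_pgg_payoffs(n_cooperators, group_size, threshold,
                                  benefit=1.0, cost=0.1):
            """Payoffs in an N-person threshold public goods game.

            The public good (per-capita `benefit`) is produced only if at least
            `threshold` of the `group_size` players cooperate; cooperators always
            pay `cost`.
            """
            produced = n_cooperators >= threshold
            payoff_defector = benefit if produced else 0.0
            payoff_cooperator = payoff_defector - cost
            return payoff_cooperator, payoff_defector

        for k in range(6):
            pc, pd = threshold_pgg_payoffs(k, group_size=5, threshold=3)
            print(k, round(pc, 2), round(pd, 2))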

  13. Designing the Microbial Research Commons

    Energy Technology Data Exchange (ETDEWEB)

    Uhlir, Paul F. [Board on Research Data and Information Policy and Global Affairs, Washington, DC (United States)

    2011-10-01

    Recent decades have witnessed an ever-increasing range and volume of digital data. All elements of the pillars of science--whether observation, experiment, or theory and modeling--are being transformed by the continuous cycle of generation, dissemination, and use of factual information. This is even more so in terms of the re-using and re-purposing of digital scientific data beyond the original intent of the data collectors, often with dramatic results. We all know about the potential benefits and impacts of digital data, but we are also aware of the barriers, the challenges in maximizing the access, and use of such data. There is thus a need to think about how a data infrastructure can enhance capabilities for finding, using, and integrating information to accelerate discovery and innovation. How can we best implement an accessible, interoperable digital environment so that the data can be repeatedly used by a wide variety of users in different settings and with different applications? With this objective: to use the microbial communities and microbial data, literature, and the research materials themselves as a test case, the Board on Research Data and Information held an International Symposium on Designing the Microbial Research Commons at the National Academy of Sciences in Washington, DC on 8-9 October 2009. The symposium addressed topics such as models to lower the transaction costs and support access to and use of microbiological materials and digital resources from the perspective of publicly funded research, public-private interactions, and developing country concerns. The overall goal of the symposium was to stimulate more research and implementation of improved legal and institutional models for publicly funded research in microbiology.

  14. SEAL: Common Core Libraries and Services for LHC Applications

    CERN Document Server

    Generowicz, J; Moneta, L; Roiser, S; Marino, M; Tuura, L A

    2003-01-01

    The CERN LHC experiments have begun the LHC Computing Grid project in 2001. One of the project's aims is to develop common software infrastructure based on a development vision shared by the participating experiments. The SEAL project will provide common foundation libraries, services and utilities identified by the project's architecture blueprint report. This requires a broad range of functionality that no individual package suitably covers. SEAL thus selects external and experiment-developed packages, integrates them in a coherent whole, develops new code for missing functionality, and provides support to the experiments. We describe the set of basic components identified by the LHC Computing Grid project and thought to be sufficient for development of higher level framework components and specializations. Examples of such components are a plug-in manager, an object dictionary, object whiteboards, an incident or event manager. We present the design and implementation of some of these components and the und...
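
    To give a feel for one of the components listed (a plug-in manager), the sketch below shows a deliberately tiny registry in Python; the class and method names are invented for illustration and this is not SEAL's actual C++ interface.

        class PluginManager:
            """Minimal plug-in registry: components register factories by name."""
            def __init__(self):
                self._factories = {}

            def register(self, name, factory):
                self._factories[name] = factory

            def create(self, name, *args, **kwargs):
                try:
                    return self._factories[name](*args, **kwargs)
                except KeyError:
                    raise LookupError(f"no plug-in registered under {name!r}") from None

        manager = PluginManager()
        manager.register("whiteboard", dict)          # stand-in component factories
        manager.register("incident_log", list)
        board = manager.create("whiteboard")
        board["event"] = 42
        print(board, manager.create("incident_log"))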

  15. Proposals for common definitions of reference points in gynecological brachytherapy

    International Nuclear Information System (INIS)

    Chassagne, D.; Horiot, J.C.

    1977-01-01

    In May 1975 the report of the European Curietherapy Group made the following recommendations for computer dosimetry in gynecology: use of reference points on the lymphatic trapezoid figure (6 points) and on the pelvic wall, all points referring to bony structures; use of critical organ reference points (maximum rectum dose, bladder dose, mean rectal dose); and use of the 6,000 rads reference isodose described by its height, width, and thickness dimensions. These proposals are the basis of a common language in gynecological brachytherapy. [fr]

  16. Development of a common data model for scientific simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, J. [Los Alamos National Lab., NM (United States); Butler, D.M. [Limit Point Systems, Inc. (United States); Matarazzo, C.; Miller, M. [Lawrence Livermore National Lab., CA (United States); Schoof, L. [Sandia National Lab., Albuquerque, NM (United States)

    1999-06-01

    The problem of sharing data among scientific simulation models is a difficult and persistent one. Computational scientists employ an enormous variety of discrete approximations in modeling physical processes on computers. Problems occur when models based on different representations are required to exchange data with one another, or with some other software package. Within the DOE's Accelerated Strategic Computing Initiative (ASCI), a cross-disciplinary group called the Data Models and Formats (DMF) group has been working to develop a common data model. The current model comprises several layers of increasing semantic complexity. One of these layers is an abstract model based on set theory and topology called the fiber bundle kernel (FBK). This layer provides the flexibility needed to describe a wide range of mesh-approximated functions as well as other entities. This paper briefly describes the ASCI common data model, its mathematical basis, and ASCI prototype development. These prototypes include an object-oriented data management library developed at Los Alamos called the Common Data Model Library or CDMlib, the Vector Bundle API from the Lawrence Livermore Laboratory, and the DMF API from Sandia National Laboratory.

  17. Common modelling approaches for training simulators for nuclear power plants

    International Nuclear Information System (INIS)

    1990-02-01

    Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: ''To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements''. Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, and the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of models and hence the type of computer complex that was required, and the criteria for fidelity and verification, and that they were therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to consider only a few aspects of training simulators. This report reflects these limitations, and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs

  18. Approximate solutions of common fixed-point problems

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book presents results on the convergence behavior of algorithms which are known as vital tools for solving convex feasibility problems and common fixed point problems. The main goal for us in dealing with a known computational error is to find what approximate solution can be obtained and how many iterates one needs to find it. According to known results, these algorithms should converge to a solution. In this exposition, these algorithms are studied taking into account computational errors, which remain present in practice. In this case the convergence to a solution does not take place. We show that our algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Beginning with an introduction, this monograph moves on to study: · dynamic string-averaging methods for common fixed point problems in a Hilbert space · dynamic string methods for common fixed point problems in a metric space · dynamic string-averaging version of the proximal...
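
    As a minimal illustration of the kind of iterative method analyzed, consider plain alternating projections for a two-set convex feasibility problem, with a small bounded perturbation standing in for computational error. This is a generic example, not one of the book's dynamic string-averaging algorithms.

        import random

        def project_ball(x, center=(0.0, 0.0), radius=2.0):
            """Euclidean projection onto a disc."""
            dx, dy = x[0] - center[0], x[1] - center[1]
            d = (dx * dx + dy * dy) ** 0.5
            if d <= radius:
                return x
            return (center[0] + dx * radius / d, center[1] + dy * radius / d)

        def project_halfplane(x):
            """Projection onto the half-plane {(u, v): u + v >= 1}."""
            gap = 1.0 - (x[0] + x[1])
            if gap <= 0:
                return x
            return (x[0] + gap / 2.0, x[1] + gap / 2.0)

        def alternating_projections(x, iters=100, error=1e-3, seed=0):
            rng = random.Random(seed)
            for _ in range(iters):
                x = project_halfplane(project_ball(x))
                # Bounded computational error injected at every step.
                x = (x[0] + rng.uniform(-error, error),
                     x[1] + rng.uniform(-error, error))
            return x

        print(alternating_projections((5.0, -4.0)))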

  19. Customizable computing

    CERN Document Server

    Chen, Yu-Ting; Gill, Michael; Reinman, Glenn; Xiao, Bingjun

    2015-01-01

    Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of the research community and industry. The large energy efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, where one can adapt the architecture to the workload. In this Synthesis lecture, we present an overview and introduction of the recent developments on energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory

  20. Adaptively detecting changes in Autonomic Grid Computing

    KAUST Repository

    Zhang, Xiangliang; Germain, Cé cile; Sebag, Michè le

    2010-01-01

    Detecting the changes is the common issue in many application fields due to the non-stationary distribution of the applicative data, e.g., sensor network signals, web logs and gridrunning logs. Toward Autonomic Grid Computing, adaptively detecting

  1. An Electrical Analog Computer for Poets

    Science.gov (United States)

    Bruels, Mark C.

    1972-01-01

    Nonphysics majors are presented with a direct current experiment beyond Ohm's law and the series and parallel laws. This involves construction of an analog computer from common rheostats and student-assembled voltmeters. (Author/TS)

  2. EU grid computing effort takes on malaria

    CERN Multimedia

    Lawrence, Stacy

    2006-01-01

    Malaria is the world's most common parasitic infection, affecting more than 500 million people annually and killing more than 1 million. In order to help combat malaria, CERN has launched a grid computing effort (1 page)

  3. Computed tomography

    International Nuclear Information System (INIS)

    Wells, P.; Davis, J.; Morgan, M.

    1994-01-01

    X-ray or gamma-ray transmission computed tomography (CT) is a powerful non-destructive evaluation (NDE) technique that produces two-dimensional cross-sectional images of an object without the need to physically section it. CT is also known by the acronym CAT, for computerised axial tomography. This review article presents a brief historical perspective on CT, its current status and the underlying physics. The mathematical fundamentals of computed tomography are developed for the simplest transmission CT modality. A description of CT scanner instrumentation is provided with an emphasis on radiation sources and systems. Examples of CT images are shown indicating the range of materials that can be scanned and the spatial and contrast resolutions that may be achieved. Attention is also given to the occurrence, interpretation and minimisation of various image artefacts that may arise. A final brief section is devoted to the principles and potential of a range of more recently developed tomographic modalities including diffraction CT, positron emission CT and seismic tomography. 57 refs., 2 tabs., 14 figs

  4. Computing Services and Assured Computing

    Science.gov (United States)

    2006-05-01

    Briefing-slide excerpt: the computing services described support the warfighters' ability to execute the mission, running IT systems that provide medical care, pay the warfighters and manage maintenance. The environment spans 1,400 applications, 18 facilities, 180 software vendors, 18,000+ copies of executive software products, and virtually every type of mainframe and...

  5. Computational neuroscience

    CERN Document Server

    Blackwell, Kim L

    2014-01-01

    Progress in Molecular Biology and Translational Science provides a forum for discussion of new discoveries, approaches, and ideas in molecular biology. It contains contributions from leaders in their fields and abundant references. This volume brings together different aspects of, and approaches to, molecular and multi-scale modeling, with applications to a diverse range of neurological diseases. Mathematical and computational modeling offers a powerful approach for examining the interaction between molecular pathways and ionic channels in producing neuron electrical activity. It is well accepted that non-linear interactions among diverse ionic channels can produce unexpected neuron behavior and hinder a deep understanding of how ion channel mutations bring about abnormal behavior and disease. Interactions with the diverse signaling pathways activated by G protein coupled receptors or calcium influx adds an additional level of complexity. Modeling is an approach to integrate myriad data sources into a cohesiv...

  6. Social Computing

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    The past decade has witnessed a momentous transformation in the way people interact with each other. Content is now co-produced, shared, classified, and rated by millions of people, while attention has become the ephemeral and valuable resource that everyone seeks to acquire. This talk will describe how social attention determines the production and consumption of content within both the scientific community and social media, how its dynamics can be used to predict the future and the role that social media plays in setting the public agenda. About the speaker Bernardo Huberman is a Senior HP Fellow and Director of the Social Computing Lab at Hewlett Packard Laboratories. He received his Ph.D. in Physics from the University of Pennsylvania, and is currently a Consulting Professor in the Department of Applied Physics at Stanford University. He originally worked in condensed matter physics, ranging from superionic conductors to two-dimensional superfluids, and made contributions to the theory of critical p...

  7. computer networks

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
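
    For readers unfamiliar with the mechanism being modelled, a classic token bucket policer looks roughly like the sketch below; the rate and capacity values are arbitrary, and this illustrates the standard TB algorithm rather than the paper's dynamic-systems model.

        class TokenBucket:
            """Classic token bucket: tokens accrue at `rate` per second up to
            `capacity`; a packet conforms if enough tokens are available."""
            def __init__(self, rate, capacity):
                self.rate = rate
                self.capacity = capacity
                self.tokens = capacity
                self.last = 0.0

            def allow(self, packet_size, now):
                # Refill tokens for the time elapsed since the last packet.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if packet_size <= self.tokens:
                    self.tokens -= packet_size
                    return True          # conforming packet: forward it
                return False             # non-conforming: drop or mark it

        tb = TokenBucket(rate=1000.0, capacity=1500.0)   # bytes/s and bytes
        for t, size in [(0.0, 1000), (0.1, 1000), (2.0, 1000)]:
            print(t, tb.allow(size, t))   # True, False, True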

  8. Computer Tree

    Directory of Open Access Journals (Sweden)

    Onur AĞAOĞLU

    2014-12-01

    Full Text Available It is crucial that gifted and talented students be supported by educational methods matched to their interests and skills. The science and arts centres (gifted centres) provide the Supportive Education Program for these students with an interdisciplinary perspective. In line with the program, an ICT lesson entitled "Computer Tree" serves for identifying learner readiness levels and defining the basic conceptual framework. A language teacher also contributes to the process, since it caters for the creative function of the basic linguistic skills. The teaching technique is applied to students aged 9-11. The lesson introduces an evaluation process covering the basic knowledge, skills and interests of the target group. Furthermore, it includes an observation process by way of peer assessment. The lesson is considered to be a good sample of planning for any subject, for the unpredicted convergence of visual and technical abilities with linguistic abilities.

  9. Science for common entrance physics : answers

    CERN Document Server

    Pickering, W R

    2015-01-01

    This book contains answers to all exercises featured in the accompanying textbook Science for Common Entrance: Physics, which covers every Level 1 and 2 topic in the ISEB 13+ Physics Common Entrance exam syllabus. - Clean, clear layout for easy marking. - Includes examples of high-scoring answers with diagrams and workings. - Suitable for ISEB 13+ Common Entrance exams taken from Autumn 2017 onwards. Also available to purchase from the Galore Park website www.galorepark.co.uk: - Science for Common Entrance: Physics. - Science for Common Entrance: Biology. - Science for Common En

  10. Linking computers for science

    CERN Multimedia

    2005-01-01

    After the success of SETI@home, many other scientists have found computer power donated by the public to be a valuable resource - and sometimes the only possibility to achieve their goals. In July, representatives of several “public resource computing” projects came to CERN to discuss technical issues and R&D activities on the common computing platform they are using, BOINC. This photograph shows the LHC@home screen-saver which uses the BOINC platform: the dots represent protons and the position of the status bar indicates the progress of the calculations. This summer, CERN hosted the first “pangalactic workshop” on BOINC (Berkeley Open Infrastructure for Network Computing). BOINC is modelled on SETI@home, which millions of people have downloaded to help search for signs of extraterrestrial intelligence in radio-astronomical data. BOINC provides a general-purpose framework for scientists to adapt their software to, so that the public can install and run it. An important part of BOINC is managing the...

  11. Computed tomography

    International Nuclear Information System (INIS)

    Boyd, D.P.

    1989-01-01

    This paper reports on computed tomographic (CT) scanning, which has improved computer-assisted imaging modalities for radiologic diagnosis. The advantage of this modality is its ability to image thin cross-sectional planes of the body, thus uncovering density information in three dimensions without tissue superposition problems. Because this enables vastly superior imaging of soft tissues in the brain and body, CT scanning was immediately successful and continues to grow in importance as improvements are made in speed, resolution, and cost efficiency. CT scanners are used for general purposes, and the more advanced machines are generally preferred in large hospitals, where volume and variety of usage justify the cost. For imaging in the abdomen, a scanner with a rapid speed is preferred because peristalsis, involuntary motion of the diaphragm, and even cardiac motion are present and can significantly degrade image quality. When contrast media is used in imaging to demonstrate scanner, immediate review of images, and multiformat hardcopy production. A second console is reserved for the radiologist to read images and perform the several types of image analysis that are available. Since CT images contain quantitative information in terms of density values and contours of organs, quantitation of volumes, areas, and masses is possible. This is accomplished with region-of-interest methods, which involve the electronic outlining of the selected region on the television display monitor with a trackball-controlled cursor. In addition, various image-processing options, such as edge enhancement (for viewing fine details of edges) or smoothing filters (for enhancing the detectability of low-contrast lesions) are useful tools.
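
    A minimal numpy sketch of the region-of-interest quantitation described above; the CT slice, pixel spacing and circular ROI are synthetic stand-ins, not data from the paper.

      import numpy as np

      # Synthetic CT slice in Hounsfield units and an illustrative pixel spacing (mm).
      hu = np.random.normal(40, 10, size=(512, 512))
      pixel_spacing_mm = (0.7, 0.7)

      # Circular region of interest around an arbitrary centre; in practice the
      # outline comes from the trackball-controlled cursor on the display console.
      yy, xx = np.mgrid[0:512, 0:512]
      roi = (yy - 256) ** 2 + (xx - 300) ** 2 <= 40 ** 2

      # Quantitation from the density values inside the ROI.
      mean_hu = hu[roi].mean()
      area_mm2 = roi.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]
      print(f"ROI mean density: {mean_hu:.1f} HU, area: {area_mm2:.0f} mm^2")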

  12. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is to have the computation carried out by a great number of distributed computers, rather than the local computer ...

  13. Virtualization and cloud computing in dentistry.

    Science.gov (United States)

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.
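
    As an illustration of the client-side encryption step mentioned above (not tied to any particular dental storage product or to the article), a minimal sketch using the Python cryptography library; the record content and key handling are placeholders.

      from cryptography.fernet import Fernet

      # Illustrative only: a symmetric key that would normally live in a key store,
      # never alongside the encrypted records.
      key = Fernet.generate_key()
      cipher = Fernet(key)

      # Encrypt a (hypothetical) exported patient record before it leaves the office,
      # so the cloud provider only ever stores ciphertext.
      record = b"<patientRecord id='hypothetical'>...</patientRecord>"
      ciphertext = cipher.encrypt(record)

      # Decryption on retrieval is the inverse operation.
      plaintext = cipher.decrypt(ciphertext)
      assert plaintext == record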

  14. Engineering applications of soft computing

    CERN Document Server

    Díaz-Cortés, Margarita-Arimatea; Rojas, Raúl

    2017-01-01

    This book bridges the gap between Soft Computing techniques and their applications to complex engineering problems. In each chapter we endeavor to explain the basic ideas behind the proposed applications in an accessible format for readers who may not possess a background in some of the fields. Therefore, engineers or practitioners who are not familiar with Soft Computing methods will appreciate that the techniques discussed go beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. At the same time, the book will show members of the Soft Computing community how engineering problems are now being solved and handled with the help of intelligent approaches. Highlighting new applications and implementations of Soft Computing approaches in various engineering contexts, the book is divided into 12 chapters. Further, it has been structured so that each chapter can be read independently of the others.

  15. Preschool Cookbook of Computer Programming Topics

    Science.gov (United States)

    Morgado, Leonel; Cruz, Maria; Kahn, Ken

    2010-01-01

    A common problem in using computer programming for education in general, not simply as a technical skill, is that children and teachers find themselves constrained by what is possible given their limited expertise in computer programming techniques. This is particularly noticeable at the preliterate level, where constructs tend to be limited to…

  16. Mathematics and Computer Science: The Interplay

    OpenAIRE

    Madhavan, Veni CE

    2005-01-01

    Mathematics has been an important intellectual preoccupation of man for a long time. Computer science as a formal discipline is about seven decades young. However, one thing in common between all users and producers of mathematical thought is the almost involuntary use of computing. In this article, we bring to fore the many close connections and parallels between the two sciences of mathematics and computing. We show that, unlike in the other branches of human inquiry where mathematics is me...

  17. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of computer-system simulator is software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  18. LHCb Data Management: consistency, integrity and coherence of data

    CERN Document Server

    Bargiotti, Marianne

    2007-01-01

    The Large Hadron Collider (LHC) at CERN will start operating in 2007. The LHCb experiment is preparing for real data handling and analysis via a series of data challenges and production exercises. The aim of these activities is to demonstrate the readiness of the computing infrastructure based on WLCG (Worldwide LHC Computing Grid) technologies, to validate the computing model and to provide useful samples of data for detector and physics studies. DIRAC (Distributed Infrastructure with Remote Agent Control) is the gateway to WLCG. The DIRAC Data Management System (DMS) relies on both WLCG Data Management services (LCG File Catalogues, Storage Resource Managers and File Transfer Service) and LHCb-specific components (Bookkeeping Metadata File Catalogue). Although the DIRAC DMS has been extensively used over the past years and has proved to achieve a high degree of maturity and reliability, the complexity of both the DMS and its interactions with numerous WLCG components as well as the instability of facilit...
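
    A minimal, generic sketch (not DIRAC code) of the kind of catalogue-versus-storage consistency check this abstract is concerned with; the catalogue dump and storage listing below are illustrative in-memory mappings, not real LHCb data.

      # Hypothetical dump: logical file name (LFN) -> sites expected to hold a replica
      file_catalogue = {
          "/lhcb/data/run1/file_001.dst": {"CERN", "CNAF"},
          "/lhcb/data/run1/file_002.dst": {"CERN"},
          "/lhcb/data/run1/file_003.dst": {"PIC"},
      }
      # Hypothetical listing obtained directly from the storage elements
      storage_listing = {
          "/lhcb/data/run1/file_001.dst": {"CERN"},      # replica missing at CNAF
          "/lhcb/data/run1/file_002.dst": {"CERN"},
          "/lhcb/data/run1/file_004.dst": {"CERN"},      # dark data: on disk, not catalogued
      }

      registered = set(file_catalogue)
      on_storage = set(storage_listing)

      dark_data = on_storage - registered        # physically present, unknown to the catalogue
      lost_files = registered - on_storage       # catalogued but gone from storage
      missing_replicas = {
          lfn: file_catalogue[lfn] - storage_listing[lfn]
          for lfn in registered & on_storage
          if file_catalogue[lfn] - storage_listing[lfn]
      }

      print("dark data:", sorted(dark_data))
      print("lost files:", sorted(lost_files))
      print("missing replicas:", missing_replicas)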

  19. Common Core: Teaching Optimum Topic Exploration (TOTE)

    Science.gov (United States)

    Karge, Belinda Dunnick; Moore, Roxane Kushner

    2015-01-01

    The Common Core has become a household term and yet many educators do not understand what it means. This article explains the historical perspectives of the Common Core and gives guidance to teachers in application of Teaching Optimum Topic Exploration (TOTE) necessary for full implementation of the Common Core State Standards. An effective…

  20. A School for the Common Good

    Science.gov (United States)

    Baines, Lawrence; Foster, Hal

    2006-01-01

    This article examines the history and the concept of the common school from the Common School Movement reformers of the 1850s to the present. These reformers envisioned schools that were to be tuition free and open to everyone, places where rich and poor met and learned together on equal terms. Central to the concept of the common school is its…

  1. 49 CFR 1185.5 - Common control.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Common control. 1185.5 Section 1185.5... OF TRANSPORTATION RULES OF PRACTICE INTERLOCKING OFFICERS § 1185.5 Common control. It shall not be... carriers if such carriers are operated under common control or management either: (a) Pursuant to approval...

  2. Simplifying the ELA Common Core; Demystifying Curriculum

    Science.gov (United States)

    Schmoker, Mike; Jago, Carol

    2013-01-01

    The English Language Arts (ELA) Common Core State Standards ([CCSS], 2010) could have a transformational effect on American education. Though the process seems daunting, one can begin immediately integrating the essence of the ELA Common Core in every subject area. This article shows how one could implement the Common Core and create coherent,…

  3. Common Frame of Reference and social justice

    NARCIS (Netherlands)

    Hesselink, M.W.; Satyanarayana, R.

    2009-01-01

    The article "Common Frame of Reference and Social Justice" by Martijn W. Hesselink evaluates the Draft Common Frame of Reference (DCFR) of social justice. It discusses the important areas, namely a common frame of Reference in a broad sense, social justice and contract law, private law and

  4. Learning Commons in Academic Libraries: Discussing Themes in the Literature from 2001 to the Present

    Science.gov (United States)

    Blummer, Barbara; Kenton, Jeffrey M.

    2017-01-01

    Although the term lacks a standard definition, learning commons represent academic library spaces that provide computer and library resources as well as a range of academic services that support learners and learning. Learning commons have been equated to a laboratory for creating knowledge and staffed with librarians that serve as facilitators of…

  5. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    ATLAS sites are now able to store unique or primary copies of the datasets. ATLAS Distributed Computing is further evolving to speed up request processing by introducing network awareness, using machine learning and optimisation of the latencies during the execution of the full chain of tasks. The Event Service, a new workflow and job execution engine, is designed around check-pointing at the level of event processing to use opportunistic resources more efficiently. ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the statistical significance of the Monte Carlo samples, leading to better physics results. The exploitation of opportunistic resources was at an early stage throughout 2015, at the level of 10% of the total ATLAS computing power, but in the next few years it is expected to deliver much more. In addition, demonstrating the ability to use an opportunistic resource can lead to securing ATLAS allocations on the facility, so the importance of this work goes beyond merely the initial CPU cycles gained. In this paper, we give an overview and compare the performance, development effort, flexibility and robustness of the various approaches.
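
    A minimal, generic sketch (not the ATLAS Event Service itself) of event-level check-pointing on a preemptible resource: progress is recorded after each event so a restarted job resumes where it stopped. The file names, event range and per-event payload are illustrative.

      import json, os

      CHECKPOINT = "checkpoint.json"

      def load_checkpoint() -> int:
          """Return the index of the next unprocessed event (0 if starting fresh)."""
          if os.path.exists(CHECKPOINT):
              with open(CHECKPOINT) as f:
                  return json.load(f)["next_event"]
          return 0

      def save_checkpoint(next_event: int) -> None:
          with open(CHECKPOINT, "w") as f:
              json.dump({"next_event": next_event}, f)

      def process_event(i: int) -> None:
          pass  # placeholder for the real per-event workload

      n_events = 1000                      # illustrative event range for this job
      start = load_checkpoint()
      for i in range(start, n_events):
          process_event(i)
          save_checkpoint(i + 1)           # if the node is preempted here, nothing is
                                           # lost beyond the event currently in flight
      print("processed events", start, "to", n_events - 1)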

  6. Computer Refurbishment

    International Nuclear Information System (INIS)

    Ichiyen, Norman; Chan, Dominic; Thompson, Paul

    2004-01-01

    The major activity for the 18-month refurbishment outage at the Point Lepreau Generating Station is the replacement of all 380 fuel channel and calandria tube assemblies and the lower portion of connecting feeder pipes. New Brunswick Power would also take advantage of this outage to conduct a number of repairs, replacements, inspections and upgrades (such as rewinding or replacing the generator, replacement of shutdown system trip computers, replacement of certain valves and expansion joints, inspection of systems not normally accessible, etc.). This would allow for an additional 25 to 30 years of operation. Among the systems to be replaced are the PDCs for both shutdown systems. Assessments have been completed for both the SDS1 and SDS2 PDCs, and it has been decided to replace the SDS2 PDCs with the same hardware and software approach that has been used successfully for the Wolsong 2, 3, and 4 and the Qinshan 1 and 2 SDS2 PDCs. For SDS1, it has been decided to use the same software development methodology that was used successfully for the Wolsong and Qinshan units, called the I A, and to use a new hardware platform in order to ensure successful operation for the 25-30 year station operating life. The selected supplier is Triconex, which uses a triple modular redundant architecture that will enhance the robustness/fault tolerance of the design with respect to equipment failures.
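
    A minimal sketch of the two-out-of-three majority voting that a triple modular redundant (TMR) architecture like the one described relies on; the channel readings and tolerance are illustrative, not plant data.

      def tmr_vote(a: float, b: float, c: float, tolerance: float) -> float:
          """Two-out-of-three vote: return a value on which at least two of the
          three redundant channels agree (within `tolerance`); raise if all three
          disagree, which would demand operator attention."""
          if abs(a - b) <= tolerance:
              return (a + b) / 2
          if abs(a - c) <= tolerance:
              return (a + c) / 2
          if abs(b - c) <= tolerance:
              return (b + c) / 2
          raise RuntimeError("all three channels disagree")

      # Illustrative: channel B has drifted; the vote masks the single failure.
      print(tmr_vote(10.02, 11.9, 10.05, tolerance=0.1))   # -> ~10.035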

  7. Illustrated computer tomography

    International Nuclear Information System (INIS)

    Takahashi, S.

    1983-01-01

    This book provides the following information: basic aspects of computed tomography; atlas of computed tomography of the normal adult; clinical application of computed tomography; and radiotherapy planning and computed tomography

  8. Space Use in the Commons: Evaluating a Flexible Library Environment

    Directory of Open Access Journals (Sweden)

    Andrew D. Asher

    2017-06-01

    Full Text Available Objective – This article evaluates the usage and user experience of the Herman B Wells Library’s Learning Commons, a newly renovated technology and learning centre that provides services and spaces tailored to undergraduates’ academic needs at Indiana University Bloomington (IUB). Methods – A mixed-method research protocol combining time-lapse photography, unobtrusive observation, and random-sample surveys was employed to construct and visualize a representative usage and activity profile for the Learning Commons space. Results – Usage of the Learning Commons by particular student groups varied considerably from expectations based on student enrollments. In particular, business, first and second year students, and international students used the Learning Commons to a higher degree than expected, while humanities students used it to a much lower degree. While users were satisfied with the services provided and the overall atmosphere of the space, they also experienced the negative effects of insufficient space and facilities due to the space often operating at or near its capacity. Demand for collaboration rooms and computer workstations was particularly high, while additional evidence suggests that the Learning Commons furniture mix may not adequately match users’ needs. Conclusions – This study presents a unique approach to space use evaluation that enables researchers to collect and visualize representative observational data. This study demonstrates a model for quickly and reliably assessing space use for open-plan and learning-centred academic environments and for evaluating how well these learning spaces fulfill their institutional mission.

  9. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  10. Identification and authentication. Common biometric methods review

    OpenAIRE

    Lysak, A.

    2012-01-01

    Major biometric methods used for identification and authentication purposes in modern computing systems are considered in the article. Basic classification, application areas and key differences are given.

  11. Common cause failures of reactor pressure components

    International Nuclear Information System (INIS)

    Mankamo, T.

    1978-01-01

    The common cause failure is defined as a multiple failure event due to a common cause. The existence of common failure causes may ruin the potential advantages of applying redundancy for reliability improvement. Examples relevant to large mechanical components are presented. Preventive measures against common cause failures, such as physical separation, equipment diversity, quality assurance, and feedback from experience are discussed. Despite the large number of potential interdependencies, the analysis of common cause failures can be done within the framework of conventional reliability analysis, utilizing, for example, the method of deriving minimal cut sets from a system fault tree. Tools for the description and evaluation of dependencies between components are discussed: these include the model of conditional failure causes that are common to many components, and evaluation of the reliability of redundant components subjected to a common load. (author)
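
    A minimal numerical sketch of why common cause failures can ruin the advantages of redundancy, using the standard beta-factor model (not taken from the paper); the failure probability and beta values below are illustrative.

      def one_out_of_two_unavailability(q: float, beta: float) -> float:
          """Failure probability of a 1-out-of-2 redundant pair under the
          beta-factor model: a fraction `beta` of each component's failure
          probability `q` is attributed to a shared cause that fails both at once."""
          independent = ((1 - beta) * q) ** 2   # both components fail independently
          common_cause = beta * q               # single shared-cause event
          return independent + common_cause

      q = 1e-3
      print(f"no common cause : {one_out_of_two_unavailability(q, 0.0):.2e}")  # ~1e-6
      print(f"beta = 10%      : {one_out_of_two_unavailability(q, 0.1):.2e}")  # ~1e-4, dominated by the common cause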

  12. Cloud Computing Fundamentals

    Science.gov (United States)

    Furht, Borko

    In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.

  13. Computer games and software engineering

    CERN Document Server

    Cooper, Kendra M L

    2015-01-01

    Computer games represent a significant software application domain for innovative research in software engineering techniques and technologies. Game developers, whether focusing on entertainment-market opportunities or game-based applications in non-entertainment domains, thus share a common interest with software engineers and developers on how to best engineer game software. Featuring contributions from leading experts in software engineering, the book provides a comprehensive introduction to computer game software development that includes its history as well as emerging research on the inte

  14. Unconventional Quantum Computing Devices

    OpenAIRE

    Lloyd, Seth

    2000-01-01

    This paper investigates a variety of unconventional quantum computation devices, including fermionic quantum computers and computers that exploit nonlinear quantum mechanics. It is shown that unconventional quantum computing devices can in principle compute some quantities more rapidly than `conventional' quantum computers.

  15. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr). Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala

  16. Computed tomographic findings of intracranial pyogenic abscess

    International Nuclear Information System (INIS)

    Kim, S. J.; Suh, J. H.; Park, C. Y.; Lee, K. C.; Chung, S. S.

    1982-01-01

    The early diagnosis and effective treatment of brain abscess pose a difficult clinical problem. With the advent of computed tomography, however, it appears that mortality due to intracranial abscess has significantly diminished. 54 cases of intracranial pyogenic abscess are presented; etiologic factors and computed tomographic findings are analyzed, and the following results are obtained. 1. The common etiologic factors are otitis media, previous operation, and head trauma, in order of frequency. 2. The most common initial computed tomographic finding of brain abscess is ring contrast enhancement with surrounding brain edema. 3. The most characteristic pattern of ring contrast enhancement is a smooth, thin-walled ring. 4. Most abscesses showing thick, irregular ring enhancement are associated with cyanotic heart disease or poor operation. 5. The most common finding in epidural and subdural empyema is a crescentic radiolucent area with thin-walled contrast enhancement and no surrounding brain edema, over the convexity of the brain.

  17. From Computer Forensics to Forensic Computing: Investigators Investigate, Scientists Associate

    OpenAIRE

    Dewald, Andreas; Freiling, Felix C.

    2014-01-01

    This paper draws a comparison of fundamental theories in traditional forensic science and the state of the art in current computer forensics, thereby identifying a certain disproportion between the perception of central aspects in common theory and the digital forensics reality. We propose a separation of what is currently demanded of practitioners in digital forensics into a rigorous scientific part on the one hand, and a more general methodology of searching and seizing digital evidence an...

  18. AMRITA -- A computational facility

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, J.E. [California Inst. of Tech., CA (US); Quirk, J.J.

    1998-02-23

    Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

  19. Common-image gathers using the excitation amplitude imaging condition

    KAUST Repository

    Kalita, Mahesh

    2016-06-06

    Common-image gathers (CIGs) are extensively used in migration velocity analysis. Any defocused events in the subsurface offset domain, or equivalently nonflat events in angle-domain CIGs, indicate that the migration velocities need to be revised. However, CIGs from wave-equation methods such as reverse time migration are often expensive to compute, especially in 3D. Using the excitation amplitude imaging condition, which simplifies the forward-propagated source wavefield, we have managed to extract extended images for space and time lags in conjunction with prestack reverse time migration. The extended images tend to be cleaner, and the memory cost/disk storage is substantially reduced because we do not need to store the source wavefield. In addition, by avoiding the crosscorrelation calculation, we reduce the computational cost. These features are demonstrated on a linear v(z) model, a two-layer velocity model, and the Marmousi model.
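
    A hedged sketch of the idea in formulas, written in one common convention for the excitation-amplitude imaging condition; the exact lag convention used in the paper may differ.

      % Source wavefield reduced to its excitation amplitude and excitation time:
      \[
      A(\mathbf{x}) = \max_t \lvert u_s(\mathbf{x},t)\rvert ,\qquad
      t_{\mathrm{ex}}(\mathbf{x}) = \arg\max_t \lvert u_s(\mathbf{x},t)\rvert .
      \]
      % Zero-lag image (no crosscorrelation over all times, no stored source wavefield),
      % and its extension to a space lag h and time lag tau for CIG analysis:
      \[
      I(\mathbf{x}) \;\approx\; \frac{u_r\bigl(\mathbf{x},\, t_{\mathrm{ex}}(\mathbf{x})\bigr)}{A(\mathbf{x})} ,
      \qquad
      I(\mathbf{x},\mathbf{h},\tau) \;\approx\; \frac{u_r\bigl(\mathbf{x}+\mathbf{h},\, t_{\mathrm{ex}}(\mathbf{x})+\tau\bigr)}{A(\mathbf{x})} .
      \]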

  20. [Common physicochemical characteristics of endogenous hormones-- liberins and statins].

    Science.gov (United States)

    Zamiatnin, A A; Voronina, O L

    1998-01-01

    The common chemical features of oligopeptide releasing hormones and release-inhibiting hormones were investigated with the aid of computer methods. 339 regulatory molecules of this type were extracted from the EROP-Moscow computer databank. They contain from 2 to 47 amino acid residues, and their sequences include short sites which apparently play a decisive role in the interaction with receptors. The analysis of chemical radicals shows that all liberins and statins contain a positively charged group together with either a cyclic amino acid radical or a hydrophobic group. The results of this study indicate that most chemical radicals of these hormones are open for interaction with potential receptors of target cells. The mechanism of hormone-receptor binding and the conceivable role of amino acid and neurotransmitter radicals in the hormonal properties of liberins and statins are discussed.
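
    A minimal sketch of the kind of sequence screening described, checking illustrative peptides for a positively charged residue plus a cyclic or hydrophobic one; the residue classes and sequences here are examples, not the paper's data set.

      # Residue classes used for this toy screen (single-letter amino acid codes).
      POSITIVE = set("KRH")          # positively charged side chains
      CYCLIC = set("FWYHP")          # aromatic / cyclic side chains
      HYDROPHOBIC = set("AVLIMFWP")  # hydrophobic side chains

      def has_liberin_like_features(seq: str) -> bool:
          """True if the peptide carries a positive charge and a cyclic or
          hydrophobic radical, the pattern discussed in the abstract."""
          residues = set(seq.upper())
          return bool(residues & POSITIVE) and bool(residues & (CYCLIC | HYDROPHOBIC))

      # Illustrative peptides: a TRH-like tripeptide and a made-up control.
      for name, seq in [("pGlu-His-Pro (TRH-like)", "QHP"), ("control", "GSSG")]:
          print(name, has_liberin_like_features(seq))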