WorldWideScience

Sample records for wlcg common computing

  1. WLCG Operations portal demo tutorial

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    This is a guided tour of http://wlcg-ops.web.cern.ch/, the Worldwide LHC Computing Grid (WLCG) Operations portal. The portal provides documentation and information about WLCG Operations activities for: system administrators at the WLCG sites; the LHC experiments; and operations coordination people, including Task Forces and Working Groups.

  2. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    Science.gov (United States)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea

    2017-10-01

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.
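
    The validation task described above amounts to reconciling service descriptions coming from independent catalogues. The sketch below is purely illustrative (the record fields, site and endpoint names are invented, and it is not CRIC code): it merges records from two hypothetical sources and flags entries whose descriptions disagree.

```python
# Illustrative sketch only: combine service records from two hypothetical
# information sources and flag inconsistencies, the kind of validation a
# configuration catalogue has to perform. All names and fields are made up.
from dataclasses import dataclass

@dataclass
class ServiceRecord:
    site: str
    endpoint: str
    flavour: str       # e.g. "SRM", "HTCondor-CE"

def merge(source_a: list[ServiceRecord], source_b: list[ServiceRecord]):
    """Merge by (site, endpoint); report records whose flavour disagrees."""
    merged: dict[tuple[str, str], ServiceRecord] = {}
    conflicts: list[tuple[ServiceRecord, ServiceRecord]] = []
    for rec in source_a + source_b:
        key = (rec.site, rec.endpoint)
        if key in merged and merged[key].flavour != rec.flavour:
            conflicts.append((merged[key], rec))
        merged[key] = rec
    return list(merged.values()), conflicts

a = [ServiceRecord("T1_EXAMPLE", "ce01.example.org", "HTCondor-CE")]
b = [ServiceRecord("T1_EXAMPLE", "ce01.example.org", "ARC-CE")]
catalogue, conflicts = merge(a, b)
print(f"{len(catalogue)} services, {len(conflicts)} conflicting descriptions")
```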

  3. CMS Monte Carlo production in the WLCG computing grid

    CERN Document Server

    Hernández, J M; Mohapatra, A; Filippis, N D; Weirdt, S D; Hof, C; Wakefield, S; Guan, W; Khomitch, A; Fanfani, A; Evans, D; Flossdorf, A; Maes, J; van Mulders, P; Villella, I; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Caballero, J; Sanches, J A; Kavka, C; Van Lingen, F; Bacchi, W; Codispoti, G; Elmer, P; Eulisse, G; Lazaridis, C; Kalini, S; Sarkar, S; Hammad, G

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG).

  4. Optimising costs in WLCG operations

    CERN Document Server

    Pradillo, Mar; Flix, Josep; Forti, Alessandra; Sciabà, Andrea

    2015-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the 50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several improvements in the WLCG infrastructure have been implemented during the first long LHC shutdown to prepare for the increasing needs of the experiments during Run2 and beyond. However, constraints in funding will affect not only the computing resources but also the available effort for operations. This paper presents the results of a detailed investigation into the allocation of effort across the different areas of WLCG operations, identifies the most important sources of inefficiency and proposes viable strategies for optimising the operational cost, taking into account the current trends in the evolution of the computing infrastruc...

  5. From the CMS Computing Experience in the WLCG STEP'09 Challenge to the First Data Taking of the LHC Era

    Science.gov (United States)

    Bonacorsi, D.; Gutsche, O.

    The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of the LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with further selected tests in September and October 2009, and emphasized the simultaneous testing of the computing systems of all four LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress tested using the complete range of Tier-1 work-flows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we will report on the different tests performed and present their post-mortem analysis.

  6. Job monitoring on the WLCG scope: Current status and new strategy

    CERN Document Server

    Andreeva, J; Belov, S; Casey, J; Dvorak, F; Gaidioz, B; Karavakis, E; Kodolova, O; Kokoszkiewicz, L; Krenek, A; Lanciotti, E; Maier, J; Mulac, M; Rocha Da Cunha Rodrigues, D F; Rocha, R; Saiz, P; Sidorova, I; Sitera, J; Tikhonenko, E; Vaibhav, K; Vocu, M

    2010-01-01

    Job processing and data transfer are the main computing activities on the WLCG infrastructure. Reliable monitoring of job processing on the WLCG scope is a complicated task due to the complexity of the infrastructure itself and the diversity of the currently used job submission methods. The paper describes the current status of, and the new strategy for, job monitoring on the WLCG scope, covering the primary information sources, the publishing of job status changes, the transport mechanism and visualization.

  7. Web Proxy Auto Discovery for the WLCG

    Science.gov (United States)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. The responses
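
    As a rough illustration of the discovery step described above, the following sketch fetches a PAC file from a WPAD URL and scrapes the proxy entries it advertises. The URL is a hypothetical placeholder and the regular-expression parsing is a simplification; real PAC files are JavaScript (FindProxyForURL) evaluated by the Frontier and CVMFS clients.

```python
# Sketch of WPAD-style proxy discovery (illustrative only). The WPAD URL
# below is a hypothetical placeholder, not necessarily the production
# WLCG endpoint, and PAC parsing is reduced to scraping "PROXY host:port".
import re
import urllib.request

WPAD_URL = "http://wlcg-wpad.example.org/wpad.dat"  # hypothetical endpoint

def discover_proxies(wpad_url: str = WPAD_URL) -> list[str]:
    """Fetch a PAC file and extract the PROXY entries it returns."""
    with urllib.request.urlopen(wpad_url, timeout=10) as resp:
        pac_text = resp.read().decode("utf-8", errors="replace")
    # Match tokens such as 'PROXY squid.example.org:3128'
    return re.findall(r"PROXY\s+([\w.\-]+:\d+)", pac_text)

if __name__ == "__main__":
    try:
        for proxy in discover_proxies():
            print("candidate web proxy:", proxy)
    except OSError as exc:
        print("WPAD lookup failed:", exc)
```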

  8. Web proxy auto discovery for the WLCG

    CERN Document Server

    Dykstra, D; Blumenfeld, B; De Salvo, A; Dewhurst, A; Verguilov, V

    2017-01-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids regis...

  9. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    CERN Document Server

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies, each targeting specific aspects of large-scale distributed data processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities mon...

  10. Next generation WLCG File Transfer Service (FTS)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    LHC experiments at CERN and worldwide utilize WLCG resources and middleware components to perform distributed computing tasks. One of the most important tasks is reliable file replication. It is a complex problem, suffering from transfer failures, disconnections, transfer duplication, server and network overload, differences in storage systems, etc. To address these problems, EMI and gLite have provided the independent File Transfer Service (FTS) and Grid File Access Library (GFAL) tools. Their development started almost a decade ago; in the meantime, requirements in data management have changed, and the old architecture of FTS and GFAL cannot easily support these changes. Technology has also been progressing: FTS and GFAL do not fit into the new paradigms (cloud and messaging, for example). To be able to serve the next stage of LHC data taking (from 2013), we need a new generation of these tools: FTS 3 and GFAL 2. We envision a service requiring minimal configuration, which can dynamically adapt to the...
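
    As a purely illustrative sketch of the reliability problem described above (and not the FTS or GFAL API), the following code shows the kind of retry-and-back-off loop that a file replication service has to automate at scale; the copy step is simulated.

```python
# Illustrative sketch of retry logic for reliable file replication.
# The "transfer" is simulated; none of these calls are FTS/GFAL APIs.
import random
import time

class TransferError(Exception):
    pass

def copy_once(src_url: str, dst_url: str) -> None:
    """Simulated single copy attempt that fails ~30% of the time."""
    if random.random() < 0.3:
        raise TransferError(f"transient failure copying {src_url} -> {dst_url}")

def replicate(src_url: str, dst_url: str, retries: int = 3, backoff_s: float = 1.0) -> int:
    """Copy src to dst with retries and linear backoff; return attempts used."""
    for attempt in range(1, retries + 1):
        try:
            copy_once(src_url, dst_url)
            return attempt                      # success
        except TransferError:
            if attempt == retries:
                raise                           # give up, surface the error
            time.sleep(backoff_s * attempt)     # back off before retrying

if __name__ == "__main__":
    try:
        used = replicate("srm://site-a.example/file", "srm://site-b.example/file")
        print(f"replication succeeded after {used} attempt(s)")
    except TransferError as exc:
        print(f"replication gave up: {exc}")
```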

  11. Providing global WLCG transfer monitoring

    CERN Document Server

    Andreeva, J; Campana, S; Flix, J; Flix, J; Keeble, O; Magini, N; Molnar, Z; Oleynik, D; Petrosyan, A; Ro, G; Saiz, P; Salichos, M; Tuckett, D; Uzhinsky, A; Wildish, T

    2012-01-01

    The WLCG[1] Transfers Dashboard is a monitoring system which aims to provide a global view of WLCG data transfers and to reduce redundancy in monitoring tasks performed by the LHC experiments. The system is designed to work transparently across LHC experiments and across the various technologies used for data transfer. Currently each LHC experiment monitors data transfers via experiment-specific systems but the overall cross-experiment picture is missing. Even for data transfers handled by FTS, which is used by 3 LHC experiments, monitoring tasks such as aggregation of FTS transfer statistics or estimation of transfer latencies are performed by every experiment separately. These tasks could be performed once, centrally, and then served to all experiments via a well-defined set of APIs. In the design and development of the new system, experience accumulated by the LHC experiments in the data management monitoring area is taken into account and a considerable part of the code of the ATLAS DDM Dashboard is being...

  12. GridPP gathers in Geneva for WLCG workshop

    CERN Multimedia

    2007-01-01

    "More than 40 GridPP members descended on CERN last month for the latest WLCG collaboration workshop. While not the first workshop for WLCG operations, it was the largest with over 270 registered participants." (1 page)

  13. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    Science.gov (United States)

    Magnoni, L.; Suthakar, U.; Cordeiro, C.; Georgiou, M.; Andreeva, J.; Khan, A.; Smith, D. R.

    2015-12-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies, each targeting specific aspects of large-scale distributed data processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file formats (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technologies for the serving layer (e.g. Elasticsearch) and a description of a proof-of-concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures.
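
    A minimal sketch of the batch layer described above, assuming PySpark is available; the input path, record fields and output location are illustrative placeholders, not the actual WLCG monitoring schema.

```python
# Minimal batch-layer sketch (assumes PySpark): aggregate transfer volume
# and failure counts per source/destination pair from JSON transfer logs.
# Paths and field names (src_site, dst_site, bytes, status) are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wlcg-transfer-aggregation").getOrCreate()

# Hypothetical input: one JSON record per transfer event.
events = spark.read.json("hdfs:///monitoring/transfers/2015/*.json")

summary = (
    events.groupBy("src_site", "dst_site")
          .agg(F.sum("bytes").alias("bytes_transferred"),
               F.sum(F.when(F.col("status") == "FAILED", 1).otherwise(0))
                .alias("failed_transfers"),
               F.count("*").alias("total_transfers"))
)

# Write the aggregated view for the serving layer to pick up.
summary.write.mode("overwrite").parquet("hdfs:///monitoring/transfer_summary")
```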

  14. Processing of the WLCG monitoring data using NoSQL

    Science.gov (United States)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  15. The WLCG Messaging Service and its Future

    CERN Document Server

    Cons, Lionel

    2012-01-01

    Enterprise messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiments dashboards. The production messaging service currently used by WLCG includes four tightly coupled brokers operated by EGI (running Apache ActiveMQ and designed to host the Grid operational tools such as SAM) as well as two dedicated services for ATLAS-DDM and experiments dashboards (currently also running Apache ActiveMQ). In the future, this service is expected to grow in numbers of applications supported, brokers and technologies. The WLCG Messaging Roadmap identified three areas with room for improvement (security, scalability and availability/reliability) as well as ten practical recommendations to address them. This paper describes a messaging service architecture that is in line with these recommendations as well as a software architecture based on reusable components that ease interactions with the messaging service. These two architectures wil...

  16. WLCG-specific special features in GGUS

    CERN Document Server

    Antoni, T; Dimou, M; CERN. Geneva. IT Department

    2010-01-01

    The user and operations support of the EGEE series of projects can be captioned "regional support with central coordination". Its central building block is the GGUS portal, which acts as an entry point for users and support staff. It also serves as an integration platform for the distributed support effort. As WLCG relies heavily on the EGEE infrastructure, it is important that the support infrastructure covers the WLCG use cases of the grid. During the last year several special features have been implemented in the GGUS portal to meet the requirements of the LHC experiments needing to contact the WLCG grid infrastructure, especially their Tier 1 and Tier 2 centres. This paper summarises these special features, with particular focus on the alarm and team tickets and the direct ticket routing, in the context of the overall user and operations support infrastructure. Additionally we will present the management processes for the user support activity, detailing the options which the LHC VOs have to participate in this...

  17. x509-free access to WLCG resources

    Science.gov (United States)

    Short, H.; Manzi, A.; De Notaris, V.; Keeble, O.; Kiryanov, A.; Mikkonen, H.; Tedesco, P.; Wartel, R.

    2017-10-01

    Access to WLCG resources is authenticated using an x509-based PKI infrastructure. Even though HEP users have always been exposed to certificates directly, the development of modern Web Applications by the LHC experiments calls for simplified authentication processes that keep the underlying software unmodified. In this work we will show a solution with the goal of providing access to WLCG resources using the users' home organisation credentials, without the need for user-acquired x509 certificates. In particular, we focus on identity providers within eduGAIN, which interconnects research and education organisations worldwide, and enables the trustworthy exchange of identity-related information. eduGAIN has been integrated at CERN in the SSO infrastructure so that users can authenticate without the need of a CERN account. This solution achieves x509-free access to Grid resources with the help of two services: STS and an online CA. The STS (Security Token Service) allows credential translation from the SAML2 format used by Identity Federations to the VOMS-enabled x509 used by most of the Grid. The IOTA CA (Identifier-Only Trust Assurance Certification Authority) is responsible for the automatic issuing of short-lived x509 certificates. The IOTA CA deployed at CERN has been accepted by EUGridPMA as the CERN LCG IOTA CA, included in the IGTF trust anchor distribution and installed by the sites in WLCG. We will also describe the first pilot projects which are integrating the solution.

  18. New solutions for large scale functional tests in the WLCG infrastructure with SAM/Nagios: the experiments experience

    CERN Document Server

    Andreeva, J; Di Girolamo, A; Kakkar, A; Litmaath, M; Magini, N; Negri, G; Ramachandran, S; Roiser, S; Saiz, P; Saiz Santos, M D; Sarkar, B; Schovancova, J; Sciabà, A; Wakankar, A

    2012-01-01

    For several years the LHC experiments have relied on the WLCG Service Availability Monitoring framework (SAM) to run functional tests on their distributed computing systems. The SAM tests have become an essential tool to measure the reliability of the Grid infrastructure and to ensure reliable computing operations, both for the sites and the experiments. Recently the old SAM framework was replaced with a completely new system based on Nagios and ActiveMQ to better support the transition to EGI and its more distributed infrastructure support model, and to implement several scalability and functionality enhancements. This required all LHC experiments and the WLCG support teams to migrate their tests, to acquire expertise on the new system, to validate the new availability and reliability computations and to adopt new visualisation tools. In this contribution we describe in detail the current state of the art of functional testing in WLCG: how the experiments use the new SAM/Nagios framework, the advanced functiona...

  19. Real-time statistic analytics for the WLCG Transfers Dashboard with Esper

    CERN Document Server

    Georgiou, Maria Varvara

    2014-01-01

    The WLCG Data Transfer Dashboard monitors the movement of data generated by the LHC experiments between sites and scientists around the world. Currently all the transfer information is recorded in an Oracle database and the statistics are computed on a regular basis using PL/SQL procedures. This is a solid and reliable solution, but it can be improved because the PL/SQL procedures do not scale with the increasing data volume and the re-computation of the statistics may take days or weeks. This project aims to integrate Esper, an open-source in-memory processing engine, into the existing workflow to allow real-time computation and visualization on fresh data and to speed up the generation of all the statistics. This project is part of the evolution of the WLCG monitoring analytics framework.
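
    Esper itself is a Java/EPL engine; the following Python sketch only illustrates the kind of in-memory sliding-window statistic it would compute on fresh transfer events. Event fields, window length and units are illustrative assumptions.

```python
# Toy sliding-window aggregation: per-site transfer throughput over the
# last N seconds, computed on events as they arrive (illustrative only).
import time
from collections import defaultdict, deque

WINDOW_S = 300  # 5-minute sliding window (arbitrary choice)

class ThroughputWindow:
    """Keep recent (timestamp, bytes) events per site and expose a
    windowed throughput figure, continuously updated in memory."""

    def __init__(self, window_s: float = WINDOW_S):
        self.window_s = window_s
        self.events = defaultdict(deque)   # site -> deque of (ts, bytes)

    def add(self, site: str, nbytes: int) -> None:
        now = time.time()
        self.events[site].append((now, nbytes))
        self._expire(site, now)

    def _expire(self, site: str, now: float) -> None:
        q = self.events[site]
        while q and now - q[0][0] > self.window_s:
            q.popleft()

    def throughput_mb_s(self, site: str) -> float:
        self._expire(site, time.time())
        total = sum(nbytes for _, nbytes in self.events[site])
        return total / self.window_s / 1e6

# Usage: feed events as they arrive, read fresh statistics at any time.
w = ThroughputWindow()
w.add("SITE_A", 2 * 10**9)                 # one 2 GB transfer just finished
print(f"SITE_A: {w.throughput_mb_s('SITE_A'):.1f} MB/s over the last 5 min")
```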

  20. Advances in Monitoring of Grid Services in WLCG

    CERN Document Server

    Casey, J; Neilson, I

    2008-01-01

    During 2006, the Worldwide LHC Computing Grid Project (WLCG) constituted several working groups in the area of fabric and application monitoring with the mandate of improving the reliability and availability of the grid infrastructure through improved monitoring of the grid fabric. This paper discusses the work of one of these groups: the "Grid Service Monitoring Working Group". This group has the aim to evaluate the existing monitoring system and create a coherent architecture that would let the existing system run, while increasing the quality and quantity of monitoring information gathered. We describe the stakeholders in this project, and focus in particular on the needs of the site administrators, which were not well satisfied by existing solutions. Several standards for service metric gathering and grid monitoring data exchange, and the place of each in the architecture will be shown. Finally we will describe the use of a Nagios-based prototype deployment for validation of our ideas, and the progress on...

  1. WLCG and IPv6 - the HEPiX IPv6 working group

    Science.gov (United States)

    Campana, S.; Chadwick, K.; Chen, G.; Chudoba, J.; Clarke, P.; Eliáš, M.; Elwell, A.; Fayer, S.; Finnern, T.; Goossens, L.; Grigoras, C.; Hoeft, B.; Kelsey, D. P.; Kouba, T.; López Muñoz, F.; Martelli, E.; Mitchell, M.; Nairz, A.; Ohrenberg, K.; Pfeiffer, A.; Prelz, F.; Qi, F.; Rand, D.; Reale, M.; Rozsa, S.; Sciaba, A.; Voicu, R.; Walker, C. J.; Wildish, T.

    2014-06-01

    The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South American RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). This paper describes the work done by the working group and its future plans.

  2. Experience commissioning the ATLAS distributed data management system on top of the WLCG service

    CERN Document Server

    Campana, S

    2010-01-01

    The ATLAS experiment at CERN developed an automated system for the distribution of simulated and detector data. This system, which partly consists of various ATLAS-specific services, relies strongly on the WLCG infrastructure at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a dedicated effort was put in place to deliver a reliable service for ATLAS data distribution, offering the necessary performance and high availability and accommodating the main use cases. This contribution will describe the various challenges and activities carried out in 2008 for the commissioning of the system, together with the experience of distributing simulated data and detector data. The main commissioning activity was concentrated in two Combined Computing Resource Challenges, in February and May 2008, where it was demonstrated that the WLCG service and the ATLAS system could sustain the peak load of data transfer according to the co...

  3. Evolution of Database Replication Technologies for WLCG

    CERN Document Server

    Baranowski, Zbigniew; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-01-01

    In this article we summarize several years of experience with the database replication technologies used at WLCG and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between the online and offline databases of the LHC experiments.

  4. The production deployment of IPv6 on WLCG

    Science.gov (United States)

    Bernier, J.; Campana, S.; Chadwick, K.; Chudoba, J.; Dewhurst, A.; Eliáš, M.; Fayer, S.; Finnern, T.; Grigoras, C.; Hartmann, T.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Macmahon, E.; Martelli, E.; Millar, A. P.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Voicu, R.; Walker, C. J.; Wildish, T.

    2015-12-01

    The world is rapidly running out of IPv4 addresses; the number of IPv6 end systems connected to the internet is increasing; WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only an IPv6 routable address. The HEPiX IPv6 Working Group has been investigating, testing and planning for dual-stack services on WLCG for several years. Following feedback from our working group, many of the storage technologies in use on WLCG have recently been made IPv6-capable. This paper presents the IPv6 requirements, tests and plans of the LHC experiments together with the tests performed on the group's IPv6 test-bed. This is primarily aimed at IPv6-only worker nodes or VMs accessing several different implementations of a global dual-stack federated storage service. Finally the plans for deployment of production dual-stack WLCG services are presented.

  5. A common language for computer security incidents

    Energy Technology Data Exchange (ETDEWEB)

    John D. Howard; Thomas A Longstaff

    1998-10-01

    Much of the computer security information regularly gathered and disseminated by individuals and organizations cannot currently be combined or compared because a common language has yet to emerge in the field of computer security. A common language consists of terms and taxonomies (principles of classification) which enable the gathering, exchange and comparison of information. This paper presents the results of a project to develop such a common language for computer security incidents. This project results from cooperation between the Security and Networking Research Group at the Sandia National Laboratories, Livermore, CA, and the CERT® Coordination Center at Carnegie Mellon University, Pittsburgh, PA. This Common Language Project was not an effort to develop a comprehensive dictionary of terms used in the field of computer security. Instead, the authors developed a minimum set of high-level terms, along with a structure indicating their relationship (a taxonomy), which can be used to classify and understand computer security incident information. They hope these high-level terms and their structure will gain wide acceptance, be useful, and most importantly, enable the exchange and comparison of computer security incident information. They anticipate, however, that individuals and organizations will continue to use their own terms, which may be more specific both in meaning and use. They designed the common language to enable these lower-level terms to be classified within the common language structure.
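
    The sketch below illustrates the general idea of classifying incident reports against a small shared vocabulary so that reports from different organizations become comparable; the category names are examples only, not the actual term set of the paper.

```python
# Illustrative sketch: represent incident reports with a small shared
# vocabulary so reports from different sites can be aggregated and
# compared. Category names are examples, not the paper's taxonomy.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    PROBE = "probe"
    FLOOD = "flood"
    READ = "read"
    MODIFY = "modify"

class Target(Enum):
    ACCOUNT = "account"
    PROCESS = "process"
    DATA = "data"
    NETWORK = "network"

@dataclass(frozen=True)
class IncidentRecord:
    reporter: str
    action: Action
    target: Target

def count_by_category(records: list[IncidentRecord]) -> dict[tuple[Action, Target], int]:
    """Aggregate reports from many sites into comparable statistics."""
    counts: dict[tuple[Action, Target], int] = {}
    for r in records:
        key = (r.action, r.target)
        counts[key] = counts.get(key, 0) + 1
    return counts

reports = [
    IncidentRecord("site-a", Action.PROBE, Target.NETWORK),
    IncidentRecord("site-b", Action.PROBE, Target.NETWORK),
    IncidentRecord("site-a", Action.READ, Target.DATA),
]
print(count_by_category(reports))
```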

  6. Advances in monitoring of grid services in WLCG

    Energy Technology Data Exchange (ETDEWEB)

    Casey, J; Imamagic, E; Neilson, I [European Organization for Nuclear Research (CERN), CH-1211, Geneve 23 (Switzerland); University of Zagreb University Computing Centre (SRCE), Josipa Marohnica 5, 10000 Zagreb (Croatia); European Organization for Nuclear Research (CERN), CH-1211, Geneve 23 (Switzerland)], E-mail: james.casey@cern.ch, E-mail: eimamagi@srce.hr, E-mail: ian.neilson@cern.ch

    2008-07-15

    During 2006, the Worldwide LHC Computing Grid Project (WLCG) constituted several working groups in the area of fabric and application monitoring with the mandate of improving the reliability and availability of the grid infrastructure through improved monitoring of the grid fabric. This paper discusses the work of one of these groups: the 'Grid Service Monitoring Working Group'. This group has the aim to evaluate the existing monitoring system and create a coherent architecture that would let the existing system run, while increasing the quality and quantity of monitoring information gathered. We describe the stakeholders in this project, and focus in particular on the needs of the site administrators, which were not well satisfied by existing solutions. Several standards for service metric gathering and grid monitoring data exchange, and the place of each in the architecture will be shown. Finally we will describe the use of a Nagios-based prototype deployment for validation of our ideas, and the progress on turning this prototype into a production-ready system.

  7. Sustainable support for WLCG through the EGI distributed infrastructure

    Science.gov (United States)

    Antoni, Torsten; Bozic, Stefan; Reisser, Sabine

    2011-12-01

    Grid computing is now in a transition phase from development in research projects to routine usage in a sustainable infrastructure. This is mirrored in Europe by the transition from the series of EGEE projects to the European Grid Initiative (EGI). EGI aims at establishing a self-sustained grid infrastructure across Europe. The main building blocks of EGI are the national grid initiatives in the participating countries and a central coordinating institution (EGI.eu). The middleware used is provided by consortia outside of EGI. The user communities are also organized separately from EGI. The transition to a self-sustained grid infrastructure is aided by the EGI-InSPIRE project, aiming at reducing the project funding needed to run EGI over the course of its four-year duration. Providing user support in this framework poses new technical and organisational challenges as it has to cross the boundaries of various projects and infrastructures. The EGI user support infrastructure is built around the Global Grid User Support system (GGUS) that was also the basis of user support in EGEE. Utmost care was taken that, during the transition from EGEE to EGI, support services already used in production were not perturbed. A year into the EGI-InSPIRE project, we present in this paper the current status of the user support infrastructure provided by EGI for WLCG, the new features that were needed to match the new infrastructure, and the issues and challenges that occurred during the transition, and we give an outlook on future plans and developments.

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  9. Evaluation of ZFS as an efficient WLCG storage backend

    Science.gov (United States)

    Ebert, M.; Washbrook, A.

    2017-10-01

    A ZFS based software raid system was tested for performance against a hardware raid system providing storage based on the traditional Linux file systems XFS and EXT4. These tests were done for a healthy raid array as well as for a degraded raid array and during the rebuild of a raid array. It was found that ZFS performs better in almost all test scenarios. In addition, distinct features of ZFS were tested for WLCG data storage use, like compression and higher raid levels with triple redundancy information. The long term reliability was observed after converting all production storage servers at the Edinburgh WLCG Tier-2 site to ZFS, resulting in about 1.2PB of ZFS based storage at this site.

  10. WLCG Operations and the First Prolonged LHC Run

    CERN Document Server

    Girone, M; CERN. Geneva. IT Department

    2011-01-01

    By the time of CHEP 2010 we had accumulated just over 6 months’ experience with proton-proton data taking, production and analysis at the LHC. This paper addresses the issues seen from the point of view of the WLCG Service. In particular, it answers the following questions: Did the WLCG service deliver quantitatively and qualitatively? Were the "key performance indicators" a reliable and accurate measure of the service quality? Were the inevitable service issues resolved in a sufficiently rapid fashion? What are the key areas of improvement required not only for long-term sustainable operations, but also to embrace new technologies? It concludes with a summary of our readiness for data taking in the light of real experience.

  11. Geographical failover for the EGEE-WLCG Grid collaboration tools

    OpenAIRE

    Mathieu, Gilles; Aidel, Osman; L'Orphelin, Cyril; Lichwala, Rafal; Pagano, Alfredo; Cavalli, Alessandro

    2016-01-01

    Worldwide grid projects such as EGEE and WLCG need services with high availability, not only for grid usage, but also for associated operations. In particular, tools used for daily activities or operational procedures are considered critical. In this context, the goal of the work done to solve the EGEE failover problem is to propose, implement and document well-established mechanisms and procedures to limit service outages for the operations and monitoring tools used b...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing for a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  13. Deployment of IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.

    2017-10-01

    The fraction of Internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and becoming mature. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6. There is a significant overhead when setting up and maintaining dual-stack machines, so where possible sites would like to upgrade their services directly to IPv6 only. In doing so, they are also expediting the transition process towards its desired completion. While the LHC experiments accept there is a need to move to IPv6, it is currently not directly affecting their work. Sites are unwilling to upgrade if they will be unable to run LHC experiment workflows. This has resulted in a very slow uptake of IPv6 from WLCG sites. For several years the HEPiX IPv6 Working Group has been testing a range of WLCG services to ensure they are IPv6 compliant. Several sites are now running many of their services as dual-stack. The working group, driven by the requirements of the LHC VOs to be able to use IPv6-only opportunistic resources, continues to encourage wider deployment of dual-stack services to make the use of such IPv6-only clients viable. This paper presents the working group’s plan and progress so far to allow sites to deploy IPv6-only CPU resources. This includes making experiment central services dual-stack as well as a number of storage services. The monitoring, accounting and information services that are used by jobs also need to be upgraded. Finally the VO testing that has taken place on hosts connected via IPv6-only is reported.

  14. Towards sustainability: An interoperability outline for a Regional ARC based infrastructure in the WLCG and EGEE infrastructures

    CERN Document Server

    Field, L; Johansson, D; Kleist, J

    2010-01-01

    Interoperability of grid infrastructures is becoming increasingly important with the emergence of large-scale grid infrastructures based on national and regional initiatives. To achieve interoperability of grid infrastructures, the adaptation and bridging of many different systems and services need to be tackled. A grid infrastructure offers services for authentication, authorization, accounting, monitoring and operation, besides the services for handling data and computations. This paper presents an outline of the work done to integrate the Nordic Tier-1 and Tier-2s, which for the compute part are based on the ARC middleware, into the WLCG grid infrastructure co-operated by the EGEE project. In particular, a thorough description of the integration of the compute services is presented.

  15. Making the most of cloud storage - a toolkit for exploitation by WLCG experiments

    Science.gov (United States)

    Alvarez Ayllon, Alejandro; Arsuaga Rios, Maria; Bitzes, Georgios; Furano, Fabrizio; Keeble, Oliver; Manzi, Andrea

    2017-10-01

    Understanding how cloud storage can be effectively used, either standalone or in support of its associated compute, is now an important consideration for WLCG. We report on a suite of extensions to familiar tools targeted at enabling the integration of cloud object stores into traditional grid infrastructures and workflows. Notable updates include support for a number of object store flavours in FTS3, Davix and gfal2, including mitigations for lack of vector reads; the extension of Dynafed to operate as a bridge between grid and cloud domains; protocol translation in FTS3; the implementation of extensions to DPM (also implemented by the dCache project) to allow 3rd party transfers over HTTP. The result is a toolkit which facilitates data movement and access between grid and cloud infrastructures, broadening the range of workflows suitable for cloud. We report on deployment scenarios and prototype experience, explaining how, for example, an Amazon S3 or Azure allocation can be exploited by grid workflows.

  16. Common accounting system for monitoring the ATLAS Distributed Computing resources

    CERN Document Server

    Karavakis, E; The ATLAS collaboration; Campana, S; Gayazov, S; Jezequel, S; Saiz, P; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  17. Deployment of a WLCG network monitoring infrastructure based on the perfSONAR-PS technology

    Science.gov (United States)

    Campana, S.; Brown, A.; Bonacorsi, D.; Capone, V.; De Girolamo, D.; Casani, A. F.; Flix, J.; Forti, A.; Gable, I.; Gutsche, O.; Hesnaux, A.; Liu, S.; Lopez Munoz, F.; Magini, N.; McKee, S.; Mohammed, K.; Rand, D.; Reale, M.; Roiser, S.; Zielinski, M.; Zurawski, J.

    2014-06-01

    The WLCG infrastructure moved from a very rigid network topology, based on the MONARC model, to a more relaxed system, where data movement between regions or countries does not necessarily need to involve T1 centres. While this evolution brought obvious advantages, especially in terms of flexibility for the LHC experiments' data management systems, it also opened the question of how to monitor the increasing number of possible network paths, in order to provide a reliable global network service. The perfSONAR network monitoring system has been evaluated and agreed as a proper solution to cover the WLCG network monitoring use cases: it allows WLCG to plan and execute latency and bandwidth tests between any instrumented endpoints through a central scheduling configuration; it allows archiving of the metrics in a local database; it provides programmatic and web-based interfaces exposing the test results; and it provides a graphical interface for remote management operations. In this contribution we will present our activity to deploy a perfSONAR-based network monitoring infrastructure, in the scope of the WLCG Operations Coordination initiative: we will motivate the main choices we agreed on in terms of configuration and management, describe the additional tools we developed to complement the standard packages and present the status of the deployment, together with the possible future evolution.

  18. How common standards can diminish collective intelligence: a computational study.

    Science.gov (United States)

    Morreau, Michael; Lyon, Aidan

    2016-08-01

    Making good decisions depends on having accurate information - quickly, and in a form in which it can be readily communicated and acted upon. Two features of medical practice can help: deliberation in groups and the use of scores and grades in evaluation. We study the contributions of these features using a multi-agent computer simulation of groups of physicians. One might expect individual differences in members' grading standards to reduce the capacity of the group to discover the facts on which well-informed decisions depend. Observations of the simulated groups suggest on the contrary that this kind of diversity can in fact be conducive to epistemic performance. Sometimes, it is adopting common standards that may be expected to result in poor decisions. © 2016 John Wiley & Sons, Ltd.
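
    A toy sketch of the kind of multi-agent simulation described above: agents observe a noisy signal of an underlying quality, grade it pass/fail against their own thresholds, and the group decides by majority. All parameters (noise level, thresholds, group size) are illustrative assumptions, not the paper's model.

```python
# Toy multi-agent grading simulation (illustrative assumptions throughout).
import random

def simulate(true_quality: float, thresholds: list[float],
             noise_sd: float = 0.2, trials: int = 10_000) -> float:
    """Return how often the majority grade matches the true pass/fail fact,
    where the 'true' grade uses a reference threshold of 0.5."""
    correct = 0
    for _ in range(trials):
        # Each agent draws its own noisy observation and applies its threshold.
        votes = sum(
            1 for t in thresholds
            if random.gauss(true_quality, noise_sd) >= t
        )
        majority_pass = votes > len(thresholds) / 2
        if majority_pass == (true_quality >= 0.5):
            correct += 1
    return correct / trials

common = [0.5] * 7                                  # identical standards
diverse = [0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65]   # spread-out standards

for name, group in [("common", common), ("diverse", diverse)]:
    acc = simulate(true_quality=0.55, thresholds=group)
    print(f"{name} standards: majority correct {acc:.2%} of the time")
```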

  19. Experience of Google's latest deep learning library, TensorFlow, in a large-scale WLCG cluster

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Smith, Joshua Wyatt; Quadt, Arnulf [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany)

    2016-07-01

    The Google Brain team released its second-generation deep learning library, TensorFlow, as an open-source package under the Apache 2.0 license in November 2015. Google had already deployed the first-generation library, DistBelief, in various systems such as Google Search, advertising systems, speech recognition systems, Google Images, Google Maps, Street View, Google Translate and many other products. In addition, many researchers in high energy physics have recently started to understand and use deep learning algorithms in their own research and analysis. We present a first use-case scenario in which TensorFlow creates deep learning models from high-dimensional inputs, such as physics analysis data, in a large-scale WLCG computing cluster. TensorFlow carries out computations using a dataflow model and graph structure on a wide variety of hardware platforms and systems, such as many CPU architectures, GPUs and smartphone platforms. Having a single library that can distribute the computations needed to create a model across the various platforms and systems would significantly simplify the use of deep learning algorithms in high energy physics. We deploy TensorFlow in Docker container environments and present its first use in our grid system.
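
    A minimal sketch of the kind of model the abstract describes, assuming a recent TensorFlow with the tf.keras API (the original work used the 2015/2016 releases); the input dimensionality, layer sizes and the randomly generated data are placeholders for real physics-analysis features.

```python
# Minimal sketch (assumes TensorFlow 2.x with tf.keras): a small
# feed-forward classifier on high-dimensional inputs standing in for
# physics-analysis features. Shapes and data are placeholders.
import numpy as np
import tensorflow as tf

N_FEATURES = 64          # hypothetical number of analysis-level variables
x_train = np.random.rand(10_000, N_FEATURES).astype("float32")
y_train = np.random.randint(0, 2, size=(10_000,))  # signal vs background

# Input shape is inferred from the data on the first call to fit().
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The same script can run unchanged on CPU-only worker nodes or GPU nodes,
# e.g. inside a Docker container on a grid site.
model.fit(x_train, y_train, epochs=3, batch_size=256, validation_split=0.1)
```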

  20. Deployment of 464XLAT (RFC6877) alongside IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Froy, T. S.; Traynor, D. P.; Walker, C. J.

    2017-10-01

    IPv4 is now officially deprecated by the IETF. A significant amount of effort has already been expended by the HEPiX IPv6 Working Group on testing dual-stacked hosts and IPv6-only CPU resources. Dual-stack adds complexity and administrative overhead to sites that may already be starved of resource. This has resulted in a very slow uptake of IPv6 from WLCG sites. 464XLAT (RFC6877) is intended for IPv6 single-stack environments that require the ability to communicate with IPv4-only endpoints. This paper will present a deployment strategy for 464XLAT, operational experiences of using 464XLAT in production at a WLCG site and important information to consider prior to deploying 464XLAT.

  1. Stationary common spatial patterns for brain-computer interfacing

    Science.gov (United States)

    Samek, Wojciech; Vidaurre, Carmen; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-04-01

    Classifying motion intentions in brain-computer interfacing (BCI) is a demanding task as the recorded EEG signal is not only noisy and has limited spatial resolution but it is also intrinsically non-stationary. The non-stationarities in the signal may come from many different sources, for instance, electrode artefacts, muscular activity or changes of task involvement, and often deteriorate classification performance. This is mainly because features extracted by standard methods like common spatial patterns (CSP) are not invariant to variations of the signal properties, thus should also change over time. Although many extensions of CSP were proposed to, for example, reduce the sensitivity to noise or incorporate information from other subjects, none of them tackles the non-stationarity problem directly. In this paper, we propose a method which regularizes CSP towards stationary subspaces (sCSP) and show that this increases classification accuracy, especially for subjects who are hardly able to control a BCI. We compare our method with the state-of-the-art approaches on different datasets, show competitive results and analyse the reasons for the improvement.
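
    For orientation, the sketch below computes ordinary CSP filters via a generalized eigendecomposition of the two class covariance matrices; it does not include the stationary-subspace regularization (sCSP) that the paper proposes. It assumes NumPy/SciPy and uses random placeholder data in place of EEG trials.

```python
# Standard CSP via a generalized eigenproblem (no sCSP regularization).
import numpy as np
from scipy.linalg import eigh

def class_covariance(trials: np.ndarray) -> np.ndarray:
    """Average trace-normalized spatial covariance over trials
    (trials shape: n_trials x channels x samples)."""
    covs = []
    for x in trials:                       # x: (channels, samples)
        c = x @ x.T
        covs.append(c / np.trace(c))
    return np.mean(covs, axis=0)

def csp_filters(trials_a: np.ndarray, trials_b: np.ndarray, n_pairs: int = 3) -> np.ndarray:
    """Return 2*n_pairs spatial filters that maximize variance for one
    class while minimizing it for the other."""
    ca, cb = class_covariance(trials_a), class_covariance(trials_b)
    # Generalized eigenproblem  ca w = lambda (ca + cb) w ; eigenvalues ascend.
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)
    picked = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picked].T            # shape: (2*n_pairs, channels)

# Usage with random placeholder "EEG" data: 20 trials, 22 channels, 500 samples.
rng = np.random.default_rng(0)
W = csp_filters(rng.standard_normal((20, 22, 500)),
                rng.standard_normal((20, 22, 500)))
print(W.shape)  # (6, 22)
```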

  2. 26 CFR 1.584-3 - Computation of common trust fund income.

    Science.gov (United States)

    2010-04-01

    26 CFR 1.584-3 (Title 26, Internal Revenue, Vol. 7, revised as of 2010-04-01): Computation of common trust fund income. The taxable income of the common trust fund shall be computed in the same manner and on... or exchanges of capital assets of the common trust fund are required to be segregated. A common trust...

  3. Common Agency and Computational Complexity : Theory and Experimental Evidence

    NARCIS (Netherlands)

    Kirchsteiger, G.; Prat, A.

    1999-01-01

    In a common agency game, several principals try to influence the behavior of an agent. Common agency games typically have multiple equilibria. One class of equilibria, called truthful, has been identified by Bernheim and Whinston and has found widespread use in the political economy literature. In

  4. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the Message Passing Interface and OpenMP, are taken into account. The properties of the programming methods are experimentally demonstrated in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementation phase was compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. The optimal combination of computing methods in the signal processing domain and the implementation of new, fast routines are proposed as well.
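
    The paper's implementations are in C and DSP code; as a language-neutral illustration of the same idea, the sketch below (in Python) times a batch of FFTs computed serially versus in a small pool of worker processes. Sizes and the worker count are arbitrary, and actual speedups depend on the platform.

```python
# Illustrative comparison (not the paper's C/DSP implementations): time a
# batch of FFTs computed serially versus in a pool of worker processes.
import time
import numpy as np
from multiprocessing import Pool

N_SIGNALS, N_SAMPLES = 128, 1 << 15

def fft_magnitude(signal: np.ndarray) -> np.ndarray:
    return np.abs(np.fft.fft(signal))

if __name__ == "__main__":
    signals = [np.random.rand(N_SAMPLES) for _ in range(N_SIGNALS)]

    t0 = time.perf_counter()
    serial = [fft_magnitude(s) for s in signals]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(processes=4) as pool:       # 4 workers, analogous to 4 cores
        parallel = pool.map(fft_magnitude, signals)
    t_parallel = time.perf_counter() - t0

    assert np.allclose(serial[0], parallel[0])  # same results either way
    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")
```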

  5. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  6. Faster Algorithms for Computing Longest Common Increasing Subsequences

    DEFF Research Database (Denmark)

    Kutz, Martin; Brodal, Gerth Stølting; Kaligosi, Kanela

    2011-01-01

    We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths n and m, where m⩾n, we present an algorithm with an output-dependent expected running time of … and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence. For k⩾3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures. Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an …-time algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small alphabets.
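
    For concreteness, here is the textbook O(nm)-time, O(m)-space dynamic program for the LCIS of two sequences; it is a baseline, not the faster output-dependent algorithm of the record above.

```python
# Classic dynamic program for the longest common increasing subsequence.
def lcis_length(a: list[int], b: list[int]) -> int:
    """Length of a longest common strictly increasing subsequence of a and b."""
    dp = [0] * len(b)          # dp[j] = LCIS length ending exactly at b[j]
    for x in a:
        best = 0               # best dp[j'] seen so far in this row with b[j'] < x
        for j, y in enumerate(b):
            if y == x:
                dp[j] = max(dp[j], best + 1)
            elif y < x:
                best = max(best, dp[j])
    return max(dp, default=0)

print(lcis_length([2, 3, 1, 6, 5, 4, 6], [1, 3, 5, 6]))  # -> 3 (e.g. 3, 5, 6)
```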

  7. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as the sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application
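
    A toy NumPy version of the front-end stages named above (non-uniformity correction, frame integration and threshold detection) is sketched below; the gain and offset maps, integration depth and threshold are arbitrary placeholders, and a real interceptor pipeline is far more elaborate.

```python
import numpy as np

def front_end(frames, gain, offset, n_integrate=4, k_sigma=5.0):
    """Toy IR front-end: NUC -> frame integration -> threshold detection.

    frames: (N, H, W) raw frames, with N divisible by n_integrate
    gain, offset: (H, W) per-pixel non-uniformity correction maps
    """
    corrected = (frames - offset) * gain                        # non-uniformity correction
    n, h, w = corrected.shape
    integrated = corrected.reshape(n // n_integrate, n_integrate, h, w).mean(axis=1)
    mu, sigma = integrated.mean(), integrated.std()
    return integrated > mu + k_sigma * sigma                    # boolean detection maps

# usage with synthetic data containing one bright point target
rng = np.random.default_rng(1)
raw = rng.normal(100.0, 5.0, size=(8, 64, 64))
raw[:, 32, 32] += 60.0
hits = front_end(raw, gain=np.ones((64, 64)), offset=np.full((64, 64), 100.0))
print(hits.sum(), "pixels above threshold")
```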

  8. The "Common Solutions" Strategy of the Experiment Support group at CERN for the LHC Experiments

    CERN Document Server

    Girone, M; Barreiro Megino, F H; Campana, S; Cinquilli, M; Di Girolamo, A; Dimou, M; Giordano, D; Karavakis, E; Kenyon, M J; Kokozkiewicz, L; Lanciotti, E; Litmaath, M; Magini, N; Negri, G; Roiser, S; Saiz, P; Saiz Santos, M D; Schovancova, J; Sciabà, A; Spiga, D; Trentadue, R; Tuckett, D; Valassi, A; Van der Ster, D C; Shiers, J D

    2012-01-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments' computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management m...

  9. Common findings and pseudolesions at computed tomography colonography: pictorial essay

    Energy Technology Data Exchange (ETDEWEB)

    Atzingen, Augusto Castelli von [Clinical Radiology, Universidade Federal de Sao Paulo (UNIFESP), Sao Paulo, SP (Brazil); Tiferes, Dario Ariel; Matsumoto, Carlos Alberto; Nunes, Thiago Franchi; Maia, Marcos Vinicius Alvim Soares [Abdominal Imaging Section, Department of Imaging Diagnosis - Universidade Federal de Sao Paulo (UNIFESP), Sao Paulo, SP (Brazil); D' Ippolito, Giuseppe, E-mail: giuseppe_dr@uol.com.br [Department of Imaging Diagnosis, Universidade Federal de Sao Paulo (UNIFESP), Sao Paulo, SP (Brazil)

    2012-05-15

    Computed tomography colonography is a minimally invasive method for screening for polyps and colorectal cancer, with extremely unusual complications, increasingly used in clinical practice. In the last decade, developments in bowel preparation, imaging, and the training of investigators have led to a significant increase in the method's sensitivity. Image interpretation is accomplished through a combined analysis of two-dimensional source images and several types of three-dimensional renderings, with sensitivity around 96% in the detection of lesions 10 mm or greater in size, when analyzed by experienced radiologists. The present pictorial essay includes examples of diseases and pseudolesions most frequently observed in this type of imaging study. The authors present examples of flat and polypoid lesions, benign and malignant lesions, diverticular disease of the colon, among other conditions, as well as pseudolesions, including those related to inappropriate bowel preparation and misinterpretation. (author)

  10. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2010-01-01

    GoeGrid is a grid resource center located in Goettingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the worldwide LHC computing grid (WLCG). The status and performance of the Tier-2 center will be presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster will be detailed. The benefits are an efficient use of computer and manpower resources. Further interdisciplinary projects include jointly organized courses for students of all fields to support education in grid computing.

  11. Enabling IPv6 at FZU - WLCG Tier2 in Prague

    Science.gov (United States)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek

    2014-06-01

    The usage of the new IPv6 protocol in production is becoming a reality in the HEP community, and the Computing Centre of the Institute of Physics in Prague participates in many IPv6-related activities. Our contribution presents experience with monitoring in the HEPiX distributed IPv6 testbed, which includes 11 remote sites. We use Nagios to check the availability of services and Smokeping for monitoring the network latency. Since it is not always trivial to set up DNS properly in a dual-stack environment, we developed a Nagios plugin for checking whether a domain name is resolvable when using only IP protocol version 6 and only version 4. We will also present local area network monitoring and tuning related to IPv6 performance. One of the most important pieces of software for a grid site is the batch system used for job execution. We will present our experience with configuring and running the Torque batch system in a dual-stack environment. We also discuss the steps needed to run VO-specific jobs in our IPv6 testbed.
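
    The dual-stack resolution check described above can be approximated with the Python standard library alone; the sketch below resolves a name separately over IPv4 (A) and IPv6 (AAAA) and reports Nagios-style exit codes. It illustrates the idea and is not the actual plugin developed at FZU.

```python
import socket
import sys

def resolvable(hostname, family):
    """True if hostname resolves for the given address family (AF_INET or AF_INET6)."""
    try:
        socket.getaddrinfo(hostname, None, family)
        return True
    except socket.gaierror:
        return False

host = sys.argv[1] if len(sys.argv) > 1 else "www.example.org"   # placeholder default
ok_v4 = resolvable(host, socket.AF_INET)
ok_v6 = resolvable(host, socket.AF_INET6)

if ok_v4 and ok_v6:
    print(f"OK - {host} resolves over both IPv4 and IPv6")
    sys.exit(0)
print(f"CRITICAL - {host}: IPv4={'ok' if ok_v4 else 'FAIL'} IPv6={'ok' if ok_v6 else 'FAIL'}")
sys.exit(2)
```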

  12. Deployment of the CMS software on the WLCG Grid

    CERN Document Server

    Behrenhoff, Wolf

    2011-01-01

    The CMS Experiment is taking high energy collision data at CERN. The computing infrastructure used to analyse the data is distributed around the world in a tiered structure. In order to use the 7 Tier-1 sites, the 50 Tier-2 sites and a still growing number of about 30 Tier-3 sites, the CMS software has to be available at those sites. Except for a very few sites, the deployment and removal of CMS software are managed centrally. Since the deployment team has no local accounts at the remote sites, all installation jobs have to be sent as Grid jobs. Via a VOMS role the job has a high priority in the batch system and gains write privileges to the software area. Due to the lack of interactive access, the installation jobs must be very robust against possible failures, in order not to leave a broken software installation. The CMS software is packaged in RPMs that are installed in the software area independently of the host OS. The apt-get tool is used to resolve package dependencies. This paper reports on the recen...

  13. LHCb: LHCb Distributed Computing Operations

    CERN Multimedia

    Stagni, F

    2011-01-01

    The proliferation of tools for monitoring both activities and infrastructure, together with the pressing need for prompt reaction in case of problems impacting data taking, data reconstruction, data reprocessing and user analysis, led to the need to better organize the huge amount of information available. The monitoring system for LHCb Grid Computing relies on many heterogeneous and independent sources of information offering different views for a better understanding of problems, while an operations team and defined procedures have been put in place to handle them. This work summarizes the state of the art of LHCb Grid operations, emphasizing the reasons behind various choices and the tools currently in use to run our daily activities. We highlight the most common problems experienced across years of activity on the WLCG infrastructure, the services with their criticality, the procedures in place, the relevant metrics, and the tools available and the ones still missing.

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been instrumental for site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  15. The Large Hadron Collider and Grid computing

    CERN Document Server

    Geddes, Neil

    2012-01-01

    We present a brief history of the beginnings, development and achievements of the worldwide Large Hadron Collider Computing Grid (wLCG). The wLCG is a huge international endeavour, which is itself embedded within, and directly influences, a much broader computing and information technology landscape. It is often impossible to identify true cause and effect, and they may appear very different from the different perspectives (e.g. information technology industry or academic researcher). This account is no different. It represents a personal view of the developments over the last two decades and is therefore inevitably biased towards those things in which the author has been personally involved.

  16. The Large Hadron Collider and Grid computing.

    Science.gov (United States)

    Geddes, Neil

    2012-02-28

    We present a brief history of the beginnings, development and achievements of the worldwide Large Hadron Collider Computing Grid (wLCG). The wLCG is a huge international endeavour, which is itself embedded within, and directly influences, a much broader computing and information technology landscape. It is often impossible to identify true cause and effect, and they may appear very different from the different perspectives (e.g. information technology industry or academic researcher). This account is no different. It represents a personal view of the developments over the last two decades and is therefore inevitably biased towards those things in which the author has been personally involved.

  17. The “Common Solutions" Strategy of the Experiment Support group at CERN for the LHC Experiments

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing as well as WLCG deployment and operations need to evolve. As part of the activities of the Experiment Support group in CERN’s IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management...

  18. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Goettingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2011-01-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the worldwide LHC computing grid (WLCG). The status and performance of the Tier-2 center are presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and manpower resources.

  19. Cultural Commonalities and Differences in Spatial Problem-Solving: A Computational Analysis

    Science.gov (United States)

    Lovett, Andrew; Forbus, Kenneth

    2011-01-01

    A fundamental question in human cognition is how people reason about space. We use a computational model to explore cross-cultural commonalities and differences in spatial cognition. Our model is based upon two hypotheses: (1) the structure-mapping model of analogy can explain the visual comparisons used in spatial reasoning; and (2) qualitative,…

  20. MONTHLY VARIATION IN SPERM MOTILITY IN COMMON CARP ASSESSED USING COMPUTER-ASSISTED SPERM ANALYSIS (CASA)

    Science.gov (United States)

    Sperm motility variables from the milt of the common carp Cyprinus carpio were assessed using a computer-assisted sperm analysis (CASA) system across several months (March-August 1992) known to encompass the natural spawning period. Two-year-old pond-raised males obtained each mo...

  1. 5 CFR Appendix A to Subpart H of... - Information on Computing Certain Common Deductions From Back Pay Awards

    Science.gov (United States)

    2010-01-01

    Appendix A to Subpart H of Part 550, Information on Computing Certain Common Deductions From Back Pay Awards (5 CFR, Administrative Personnel, 2010): ... interest or applying any offset or deduction. (c) Social Security (OASDI) and Medicare taxes: Compute the...

  2. Common Postmortem Computed Tomography Findings Following Atraumatic Death: Differentiation between Normal Postmortem Changes and Pathologic Lesions

    OpenAIRE

    Ishida, Masanori; Gonoi, Wataru; Okuma, Hidemi; Shirota, Go; Shintani, Yukako; Abe, Hiroyuki; Takazawa, Yutaka; Fukayama, Masashi; Ohtomo, Kuni

    2015-01-01

    Computed tomography (CT) is widely used in postmortem investigations as an adjunct to the traditional autopsy in forensic medicine. To date, several studies have described postmortem CT findings as being caused by normal postmortem changes. However, on interpretation, postmortem CT findings that are seemingly due to normal postmortem changes initially, may not have been mere postmortem artifacts. In this pictorial essay, we describe the common postmortem CT findings in cases of atraumatic in-...

  3. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    Science.gov (United States)

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of Computer vision syndrome. Information was collected from Medline, Embase & National Library of Medicine over the last 30 years up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with Computer vision syndrome present to a variety of different specialists, including General Practitioners, Neurologists, Stroke physicians and Ophthalmologists. While the condition is common, there is a poor awareness in the public and among health professionals. Recognising this condition in the clinic or in emergency situations like the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of Computer vision syndrome and education of health professionals is vital. Preventive strategies should form part of work place ergonomics routinely. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  4. SSVEP recognition using common feature analysis in brain-computer interface.

    Science.gov (United States)

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-04-15

    Canonical correlation analysis (CCA) has been successfully applied to steady-state visual evoked potential (SSVEP) recognition for brain-computer interface (BCI) application. Although the CCA method outperforms the traditional power spectral density analysis through multi-channel detection, it requires additionally pre-constructed reference signals of sine-cosine waves. It is likely to encounter overfitting in using a short time window since the reference signals include no features from training data. We consider that a group of electroencephalogram (EEG) data trials recorded at a certain stimulus frequency on a same subject should share some common features that may bear the real SSVEP characteristics. This study therefore proposes a common feature analysis (CFA)-based method to exploit the latent common features as natural reference signals in using correlation analysis for SSVEP recognition. Good performance of the CFA method for SSVEP recognition is validated with EEG data recorded from ten healthy subjects, in contrast to CCA and a multiway extension of CCA (MCCA). Experimental results indicate that the CFA method significantly outperformed the CCA and the MCCA methods for SSVEP recognition in using a short time window (i.e., less than 1s). The superiority of the proposed CFA method suggests it is promising for the development of a real-time SSVEP-based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.
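
    For context, the baseline CCA detector that the proposed CFA method is compared against can be sketched in a few lines: the stimulus frequency whose sine/cosine reference set yields the highest canonical correlation with the EEG window is taken as the recognized target. This shows the standard CCA approach only, not the CFA method of the paper, and the sampling rate and harmonic count are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    """Maximum canonical correlation between an EEG window and sine/cosine references at freq.

    eeg: (n_samples, n_channels) band-passed EEG segment.
    """
    t = np.arange(eeg.shape[0]) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)
    cca = CCA(n_components=1)
    cca.fit(eeg, Y)
    u, v = cca.transform(eeg, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def recognize(eeg, candidate_freqs, fs=250):
    """Pick the stimulus frequency with the largest canonical correlation."""
    return max(candidate_freqs, key=lambda f: cca_score(eeg, f, fs))
```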

  5. A Human-Computer Collaborative Approach to Identifying Common Data Elements in Clinical Trial Eligibility Criteria

    Science.gov (United States)

    Luo, Zhihui; Miotto, Riccardo; Weng, Chunhua

    2012-01-01

    Objective: To identify Common Data Elements (CDEs) in eligibility criteria of multiple clinical trials studying the same disease using a human-computer collaborative approach. Design: A set of free-text eligibility criteria from clinical trials on two representative diseases, breast cancer and cardiovascular diseases, was sampled to identify disease-specific eligibility criteria CDEs. In the proposed approach, a semantic annotator is used to recognize Unified Medical Language System (UMLS) terms within the eligibility criteria text. The Apriori algorithm is applied to mine frequent disease-specific UMLS terms, which are then filtered by a list of preferred UMLS semantic types, grouped by similarity based on the Dice coefficient, and, finally, manually reviewed. Measurements: Standard precision, recall, and F-score of the CDEs recommended by the proposed approach were measured with respect to manually identified CDEs. Results: Average precision and recall of the recommended CDEs for the two diseases were 0.823 and 0.797, respectively, leading to an average F-score of 0.810. In addition, the machine-powered CDEs covered 80% of the cardiovascular CDEs published by The American Heart Association and assigned by human experts. Conclusion: It is feasible and effort-saving to use a human-computer collaborative approach to augment domain experts for identifying disease-specific CDEs from free-text clinical trial eligibility criteria. PMID:22846169
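
    One step of the pipeline above, grouping frequent terms by similarity based on the Dice coefficient, is easy to make concrete. The sketch below uses token-set Dice similarity and a simple greedy grouping; the greedy strategy and the threshold are assumptions for illustration, not the authors' exact procedure.

```python
def dice(a, b):
    """Dice coefficient between two token sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def group_terms(terms, threshold=0.6):
    """Greedy grouping: attach a term to the first group whose representative is similar enough."""
    groups = []
    for term in terms:
        tokens = term.lower().split()
        for group in groups:
            if dice(tokens, group[0].lower().split()) >= threshold:
                group.append(term)
                break
        else:
            groups.append([term])
    return groups

criteria_terms = ["ejection fraction", "left ventricular ejection fraction", "serum creatinine"]
print(group_terms(criteria_terms))
# [['ejection fraction', 'left ventricular ejection fraction'], ['serum creatinine']]
```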

  6. A common currency for the computation of motivational values in the human striatum

    Science.gov (United States)

    Li, Yansong; Dreher, Jean-Claude

    2015-01-01

    Reward comparison in the brain is thought to be achieved through the use of a ‘common currency’, implying that reward value representations are computed on a unique scale in the same brain regions regardless of the reward type. Although such a mechanism has been identified in the ventro-medial prefrontal cortex and ventral striatum in the context of decision-making, it is less clear whether it similarly applies to non-choice situations. To answer this question, we scanned 38 participants with fMRI while they were presented with single cues predicting either monetary or erotic rewards, without the need to make a decision. The ventral striatum was the main brain structure to respond to both cues while showing increasing activity with increasing expected reward intensity. Most importantly, the relative response of the striatum to monetary vs erotic cues was correlated with the relative motivational value of these rewards as inferred from reaction times. Similar correlations were observed in a fronto-parietal network known to be involved in attentional focus and motor readiness. Together, our results suggest that striatal reward value signals not only obey to a common currency mechanism in the absence of choice but may also serve as an input to adjust motivated behaviour accordingly. PMID:24837478

  7. Common spatial pattern patches - an optimized filter ensemble for adaptive brain-computer interfaces.

    Science.gov (United States)

    Sannelli, Claudia; Vidaurre, Carmen; Müller, Klaus-Robert; Blankertz, Benjamin

    2010-01-01

    Laplacian filters are commonly used in Brain-Computer Interfacing (BCI). When data from only a few channels are available, or when, as at the beginning of an experiment, no previous data from the same user are available, complex features cannot be used. In this case, band power features calculated from Laplacian-filtered channels represent an easy, robust and general feature to control a BCI, since their calculation does not involve any class information. For the same reason, the performance obtained with Laplacian features is poor in comparison to subject-specific optimized spatial filters, such as Common Spatial Patterns (CSP) analysis, which, on the other hand, can be used only in a later phase of the experiment, since it requires a considerable amount of training data in order to reach stable and good performance. This drawback is particularly evident for poorly performing BCI users, whose data are highly non-stationary and contain little class-relevant information. Therefore, Laplacian filtering is preferred to CSP, e.g., in the initial period of co-adaptive calibration, a novel BCI paradigm designed to alleviate the problem of BCI illiteracy. In fact, in the co-adaptive calibration design the experiment starts with a subject-independent classifier, and simple features are needed in order to obtain a fast adaptation of the classifier to the newly acquired user data. Here, the use of an ensemble of local CSP patches (CSPP) is proposed, which can be considered a compromise between Laplacians and CSP: CSPP needs less data and fewer channels than CSP, while being superior to Laplacian filtering. This property is shown to be particularly useful for the co-adaptive calibration design and is demonstrated on off-line data from a previous co-adaptive BCI study.
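
    For reference, the subject-specific CSP baseline mentioned above amounts to a generalized eigendecomposition of the two class-wise covariance matrices; a minimal version is sketched below (standard CSP with log band-power features, not the proposed CSPP patches).

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(class1, class2, n_pairs=3):
    """Standard Common Spatial Patterns.

    class1, class2: arrays of shape (n_trials, n_channels, n_samples) with band-passed EEG.
    Returns the n_pairs most discriminative spatial filters per side, as rows.
    """
    def mean_cov(trials):
        covs = [np.cov(trial) for trial in trials]          # channel covariance per trial
        c = np.mean(covs, axis=0)
        return c / np.trace(c)

    c1, c2 = mean_cov(class1), mean_cov(class2)
    # generalized eigenvalue problem: c1 w = lambda (c1 + c2) w
    eigvals, eigvecs = eigh(c1, c1 + c2)
    order = np.argsort(eigvals)
    keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, keep].T

def csp_features(trial, filters):
    """Log band-power of the spatially filtered trial, the usual CSP feature vector."""
    projected = filters @ trial
    return np.log(np.var(projected, axis=1))
```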

  8. Monkeys and humans share a common computation for face/voice integration.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2011-09-01

    Full Text Available Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.

  9. Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Science.gov (United States)

    Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.

    2011-01-01

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576

  10. A survey of common habits of computer users as indicators of ...

    African Journals Online (AJOL)

    Yomi

    2012-01-31

    Jan 31, 2012 ... Hand-washing was observed by only 13.5% of computer users examined. Other unhealthy practices found among computer users included eating (52.1), drinking (56), coughing, .... Washing hands before/after contact with a computer ..... Bacterial contamination of hospital bed-control handsets in a surgical.

  11. Computational Investigation of a Boundary-Layer Ingesting Propulsion System for the Common Research Model

    Science.gov (United States)

    Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven

    2016-01-01

    The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  12. Computational Investigation of a Boundary-Layer Ingestion Propulsion System for the Common Research Model

    Science.gov (United States)

    Blumenthal, Brennan

    2016-01-01

    This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  13. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    CERN Document Server

    Molina-Perez, Jorge Amando

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator on duty at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is explo...

  14. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    Energy Technology Data Exchange (ETDEWEB)

    Molina-Perez, J. [UC, San Diego; Bonacorsi, D. [Bologna U.; Gutsche, O. [Fermilab; Sciaba, A. [CERN; Flix, J. [Madrid, CIEMAT; Kreuzer, P. [CERN; Fajardo, E. [Andes U., Bogota; Boccali, T. [INFN, Pisa; Klute, M. [MIT; Gomes, D. [Rio de Janeiro State U.; Kaselis, R. [Vilnius U.; Du, R. [Beijing, Inst. High Energy Phys.; Magini, N. [CERN; Butenas, I. [Vilnius U.; Wang, W. [Beijing, Inst. High Energy Phys.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS, the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  15. A survey of common habits of computer users as indicators of ...

    African Journals Online (AJOL)

    Hygiene has been recognized as an infection control strategy and the extent of the problems of environmental contamination largely depends on personal hygiene. With the development of several computer applications in recent times, the uses of computer systems have greatly expanded. And with the previous history of ...

  16. A survey of common habits of computer users as indicators of ...

    African Journals Online (AJOL)

    Yomi

    2012-01-31

    internet centers by earlier investigators, who have demonstrated that computer use in schools aids effective teaching and improves students' learning capabilities, enhances competent treatment of patients in hospitals, and ...

  17. The LHC Computing Grid in the starting blocks

    CERN Multimedia

    Danielle Amy Venton

    2010-01-01

    As the Large Hadron Collider ramps up operations and breaks world records, it is an exciting time for everyone at CERN. To get the computing perspective, the Bulletin this week caught up with Ian Bird, leader of the Worldwide LHC Computing Grid (WLCG). He is confident that everything is ready for the first data.   The metallic globe illustrating the Worldwide LHC Computing GRID (WLCG) in the CERN Computing Centre. The Worldwide LHC Computing Grid (WLCG) collaboration has been in place since 2001 and for the past several years it has continually run the workloads for the experiments as part of their preparations for LHC data taking. So far, the numerous and massive simulations of the full chain of reconstruction and analysis software could only be carried out using Monte Carlo simulated data. Now, for the first time, the system is starting to work with real data and with many simultaneous users accessing them from all around the world. “During the 2009 large-scale computing challenge (...

  18. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    Science.gov (United States)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
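
    As an illustration of the "commonly used APIs like EC2" mentioned above, a client could request a virtual machine from such an EC2-compatible endpoint roughly as follows. This is a hedged sketch using boto3; the endpoint URL, credentials and image ID are placeholders, and which EC2 calls a given OpenNebula installation actually exposes depends on its configuration.

```python
import boto3

# hypothetical EC2-compatible endpoint exposed by the private cloud
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org:4567",   # placeholder URL
    aws_access_key_id="USER_KEY",                    # placeholder credentials
    aws_secret_access_key="USER_SECRET",
    region_name="default",
)

# boot one worker-node image; ImageId/InstanceType must match what the site publishes
response = ec2.run_instances(ImageId="ami-00000001", InstanceType="m1.small",
                             MinCount=1, MaxCount=1)
print(response["Instances"][0]["InstanceId"])
```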

  19. Thoracic Syndesmophytes Commonly Occur in the Absence of Lumbar Syndesmophytes in Ankylosing Spondylitis: A Computed Tomography Study.

    Science.gov (United States)

    Tan, Sovira; Yao, Lawrence; Ward, Michael M

    2017-12-01

    To determine the extent of thoracic involvement with syndesmophytes in ankylosing spondylitis (AS) relative to lumbar involvement. We performed thoracolumbar spine computed tomography (CT) and lumbar radiography on 18 patients. We quantitated syndesmophytes in 11 intervertebral disc spaces and related these to the presence of syndesmophytes on lumbar radiographs. Syndesmophytes were slightly more common in the thoracic than in the lumbar spine and bridging was significantly more common. Thoracic syndesmophytes were universally present in patients without visible lumbar syndesmophytes on either radiographs or CT. Syndesmophytes predominate in the thoracic spine. Lumbar radiographs underestimate the degree of thoracic involvement.

  20. CloudLCA: finding the lowest common ancestor in metagenome analysis using cloud computing.

    Science.gov (United States)

    Zhao, Guoguang; Bu, Dechao; Liu, Changning; Li, Jing; Yang, Jian; Liu, Zhiyong; Zhao, Yi; Chen, Runsheng

    2012-02-01

    Estimating taxonomic content constitutes a key problem in metagenomic sequencing data analysis. However, extracting such content from high-throughput next-generation sequencing data is very time-consuming with the currently available software. Here, we present CloudLCA, a parallel LCA algorithm that significantly improves the efficiency of determining taxonomic composition in metagenomic data analysis. Results show that CloudLCA (1) has a running time that grows nearly linearly with dataset size, (2) displays linear speedup as the number of processors grows, especially for large datasets, and (3) reaches a speed of nearly 215 million reads per minute on a cluster with ten thin nodes. In comparison with MEGAN, a well-known metagenome analyzer, CloudLCA is up to 5 times faster, and its peak memory usage is approximately 18.5% that of MEGAN, running on a fat node. CloudLCA can be run on one multiprocessor node or on a cluster. It is expected to become part of MEGAN to accelerate the analysis of reads, generating the same output as MEGAN, which can be imported directly into MEGAN to complete the subsequent analysis. Moreover, CloudLCA is a universal solution for finding the lowest common ancestor, and it can be applied in other fields requiring an LCA algorithm.
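
    The core operation behind CloudLCA, finding the lowest common ancestor of a set of taxa in a taxonomy tree, can be written compactly. The sketch below is the plain sequential computation on a child-to-parent map, not the parallel implementation described in the abstract.

```python
def lowest_common_ancestor(taxa, parent):
    """LCA of several taxon IDs given a child -> parent mapping (the root maps to None)."""
    def path_to_root(t):
        path = []
        while t is not None:
            path.append(t)
            t = parent.get(t)
        return path

    paths = [path_to_root(t) for t in taxa]
    shared = set(paths[0]).intersection(*paths[1:])
    # the first shared node on a root-ward path is the deepest common ancestor
    return next(node for node in paths[0] if node in shared)

# toy taxonomy: 1 = root, 2 = Bacteria, 3 = Proteobacteria, 4 = E. coli, 5 = Salmonella
parent = {1: None, 2: 1, 3: 2, 4: 3, 5: 3}
print(lowest_common_ancestor([4, 5], parent))   # -> 3
```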

  1. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots, over 100 PB of storage space on disk or tape. Monitoring of status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by the HammerCloud, to automatic exclusion from production or analysis activities.

  2. Can a stepwise steady flow computational fluid dynamics model reproduce unsteady particulate matter separation for common unit operations?

    Science.gov (United States)

    Pathapati, Subbu-Srikanth; Sansalone, John J

    2011-07-01

    Computational fluid dynamics (CFD) is emerging as a model for resolving the fate of particulate matter (PM) by unit operations subject to rainfall-runoff loadings. However, compared to steady flow CFD models, there are greater computational requirements for unsteady hydrodynamics and PM loading models. This study therefore examines whether a stepwise steady flow CFD model can reproduce PM separation by common unit operations loaded by unsteady flow and PM loadings, thereby reducing computational effort. Utilizing monitored unit operation data from unsteady events as a metric, this study compares the two CFD modeling approaches for a hydrodynamic separator (HS), a primary clarifier (PC) tank, and a volumetric clarifying filtration system (VCF). Results indicate that while unsteady CFD models reproduce PM separation for each unit operation, stepwise steady CFD models result in significant deviation for the HS and PC models as compared to monitored data, overestimating the physical size of each unit required to reproduce the monitored PM separation results. In contrast, the stepwise steady flow approach reproduces PM separation by the VCF, a combined gravitational sedimentation and media filtration unit operation that provides attenuation of turbulent energy and flow velocity.

  3. Variability of hemodynamic parameters using the common viscosity assumption in a computational fluid dynamics analysis of intracranial aneurysms.

    Science.gov (United States)

    Suzuki, Takashi; Takao, Hiroyuki; Suzuki, Takamasa; Suzuki, Tomoaki; Masuda, Shunsuke; Dahmani, Chihebeddine; Watanabe, Mitsuyoshi; Mamori, Hiroya; Ishibashi, Toshihiro; Yamamoto, Hideki; Yamamoto, Makoto; Murayama, Yuichi

    2017-01-01

    In most simulations of intracranial aneurysm hemodynamics, blood is assumed to be a Newtonian fluid. However, it is a non-Newtonian fluid, and its viscosity profile differs among individuals. Therefore, the common viscosity assumption may not be valid for all patients. This study aims to test the suitability of the common viscosity assumption. Blood viscosity datasets were obtained from two healthy volunteers. Three simulations were performed for three different-sized aneurysms: two using non-Newtonian models based on the measured values and one using a Newtonian model. The parameters proposed to predict aneurysmal rupture obtained using the non-Newtonian models were compared with those obtained using the Newtonian model. The largest difference (25%) in the normalized wall shear stress (NWSS) was observed in the smallest aneurysm. Comparing the two non-Newtonian models, their NWSS ratios relative to the Newtonian model differed by 17.3%. Irrespective of the aneurysm size, computational fluid dynamics simulations with either the common Newtonian or a non-Newtonian viscosity assumption could lead to values of hemodynamic parameters such as NWSS that differ from those of the patient-specific viscosity model.
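
    To make the Newtonian versus non-Newtonian contrast concrete, the sketch below evaluates a Carreau-type shear-thinning viscosity against a constant Newtonian value over a range of shear rates. The Carreau parameters used are commonly quoted literature values for blood, not the volunteer-specific profiles measured in this study.

```python
import numpy as np

def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau model (Pa*s); parameter values are commonly cited for blood, used here illustratively."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

newtonian_mu = 0.0035                                # a typical constant-viscosity assumption, Pa*s
for gamma_dot in [1.0, 10.0, 100.0, 1000.0]:         # shear rates in 1/s
    ratio = carreau_viscosity(gamma_dot) / newtonian_mu
    print(f"shear rate {gamma_dot:7.1f} 1/s: Carreau/Newtonian viscosity ratio = {ratio:.2f}")
```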

  4. Managing a tier-2 computer centre with a private cloud infrastructure

    Science.gov (United States)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable for smaller sites as well. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  5. EndoCAS navigator platform: a common platform for computer and robotic assistance in minimally invasive surgery.

    Science.gov (United States)

    Megali, Giuseppe; Ferrari, Vincenzo; Freschi, Cinzia; Morabito, Bruno; Cavallo, Filippo; Turini, Giuseppe; Troia, Elena; Cappelli, Carla; Pietrabissa, Andrea; Tonet, Oliver; Cuschieri, Alfred; Dario, Paolo; Mosca, Franco

    2008-09-01

    Computer-assisted surgery (CAS) systems are currently used in only a few surgical specialties: ear, nose and throat (ENT), neurosurgery and orthopaedics. Almost all of these systems have been developed as dedicated platforms and work on rigid anatomical structures. The development of augmented reality systems for intra-abdominal organs remains problematic because of the anatomical complexity of the human peritoneal cavity and especially because of the deformability of its organs. The aim of the present work was to develop and implement a highly modular platform (targeted for minimally invasive laparoscopic surgery) generally suitable for CAS, and to produce a prototype for demonstration of its potential clinical application and use in laparoscopic surgery. In this paper we outline details of a platform integrating several aspects of CAS and medical robotics into a modular open architecture: the EndoCAS navigator platform, which integrates all the functionalities necessary for provision of computer-based assistance to surgeons during all the management phases (diagnostic work-up, planning and intervention). A specific application for computer-assisted laparoscopic procedures has been developed on the basic modules of the platform. The system provides capabilities for three-dimensional (3D) surface model generation, 3D visualization, intra-operative registration, surgical guidance and robotic assistance during laparoscopic surgery. The description of specific modules and an account of the initial clinical experience with the system are reported. We developed a common platform for computer assisted surgery and implemented a system for intraoperative laparoscopic navigation. The preliminary clinical trials and feedback from the surgeons on its use in laparoscopic surgery have been positive, although experience has been limited to date. We have successfully developed a system for computer-assisted technologies for use in laparoscopic surgery and demonstrated, by early

  6. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    Science.gov (United States)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo

    2015-12-01

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators had developed several compress-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternatively updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and ‘well’ solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) standard FDK algorithm, (2) conventional total variation (CTV) based algorithm, (3) prior image constrained compressed sensing (PICCS) algorithm, and (4) motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the
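
    The decomposition described above (phase-specific moving components plus a phase-independent static component, updated alternately) can be illustrated with a toy least-squares version. The sketch below is only a schematic of the alternating idea under assumed notation; it omits the common-mask constraint and the regularization that the actual c-MGIR algorithm relies on.

```python
import numpy as np

# Assumed form: each phase volume x_p = s + m_p, with s common to all phases and m_p
# phase-specific, fitted to phase projections b_p = A_p x_p by alternating least squares.
rng = np.random.default_rng(0)
n_vox, n_proj, n_phases = 50, 30, 4
A = [rng.normal(size=(n_proj, n_vox)) for _ in range(n_phases)]       # per-phase system matrices
x_true = [np.linspace(0, 1, n_vox) + 0.1 * rng.normal(size=n_vox) for _ in range(n_phases)]
b = [A[p] @ x_true[p] for p in range(n_phases)]

s = np.zeros(n_vox)                                  # common (static) component
m = [np.zeros(n_vox) for _ in range(n_phases)]       # per-phase (moving) components

for _ in range(20):
    # update the static part using all projections, with the moving parts fixed
    A_all = np.vstack(A)
    r_all = np.concatenate([b[p] - A[p] @ m[p] for p in range(n_phases)])
    s, *_ = np.linalg.lstsq(A_all, r_all, rcond=None)
    # update each moving part from its own phase, with the static part fixed
    for p in range(n_phases):
        m[p], *_ = np.linalg.lstsq(A[p], b[p] - A[p] @ s, rcond=None)
```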

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  9. Women in computer science: An interpretative phenomenological analysis exploring common factors contributing to women's selection and persistence in computer science as an academic major

    Science.gov (United States)

    Thackeray, Lynn Roy

    The purpose of this study is to understand the meaning that women make of the social and cultural factors that influence their reasons for entering and remaining in study of computer science. The twenty-first century presents many new challenges in career development and workforce choices for both men and women. Information technology has become the driving force behind many areas of the economy. As this trend continues, it has become essential that U.S. citizens need to pursue a career in technologies, including the computing sciences. Although computer science is a very lucrative profession, many Americans, especially women, are not choosing it as a profession. Recent studies have shown no significant differences in math, technical and science competency between men and women. Therefore, other factors, such as social, cultural, and environmental influences seem to affect women's decisions in choosing an area of study and career choices. A phenomenological method of qualitative research was used in this study, based on interviews of seven female students who are currently enrolled in a post-secondary computer science program. Their narratives provided meaning into the social and cultural environments that contribute to their persistence in their technical studies, as well as identifying barriers and challenges that are faced by female students who choose to study computer science. It is hoped that the data collected from this study may provide recommendations for the recruiting, retention and support for women in computer science departments of U.S. colleges and universities, and thereby increase the numbers of women computer scientists in industry. Keywords: gender access, self-efficacy, culture, stereotypes, computer education, diversity.

  10. Power-aware applications for scientific cluster and distributed computing

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Grosso, Paola; Hillegas, Curtis; Holzman, Burt; Klous, Sander; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    The aggregate power use of computing hardware is an important cost factor in scientific cluster and distributed computing systems. The Worldwide LHC Computing Grid (WLCG) is a major example of such a distributed computing system, used primarily for high throughput computing (HTC) applications. It has a computing capacity and power consumption rivaling that of the largest supercomputers. The computing capacity required from this system is also expected to grow over the next decade. Optimizing the power utilization and cost of such systems is thus of great interest. A number of trends currently underway will provide new opportunities for power-aware optimizations. We discuss how power-aware software applications and scheduling might be used to reduce power consumption, both as autonomous entities and as part of a (globally) distributed system. As concrete examples of computing centers we provide information on the large HEP-focused Tier-1 at FNAL, and the Tigress High Performance Computing Center at Princeton U...
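    As a hedged illustration of the kind of power-aware scheduling discussed above, the sketch below ranks candidate worker nodes by a throughput-per-watt figure of merit and picks the best node that still has free job slots. The node names, benchmark numbers and power figures are hypothetical placeholders, not measurements from the centres named in the record.

```python
# Minimal sketch of a power-aware scheduling decision: prefer the node with the
# best throughput-per-watt that still has free job slots.  All numbers below are
# hypothetical placeholders, not real benchmark or power measurements.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    events_per_sec: float   # measured throughput for a reference workload
    power_watts: float      # wall power drawn under that workload
    free_slots: int

    @property
    def perf_per_watt(self) -> float:
        return self.events_per_sec / self.power_watts

def pick_node(nodes):
    candidates = [n for n in nodes if n.free_slots > 0]
    return max(candidates, key=lambda n: n.perf_per_watt) if candidates else None

nodes = [
    Node("xeon-01", events_per_sec=120.0, power_watts=350.0, free_slots=4),
    Node("arm-01",  events_per_sec=60.0,  power_watts=120.0, free_slots=8),
    Node("xeon-02", events_per_sec=110.0, power_watts=340.0, free_slots=0),
]
best = pick_node(nodes)
print(best.name, round(best.perf_per_watt, 3))   # arm-01 0.5
```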

  11. An R package to compute commonality coefficients in the multiple regression case: an introduction to the package and a practical example.

    Science.gov (United States)

    Nimon, Kim; Lewis, Mitzi; Kane, Richard; Haynes, R Michael

    2008-05-01

    Multiple regression is a widely used technique for data analysis in social and behavioral research. The complexity of interpreting such results increases when correlated predictor variables are involved. Commonality analysis provides a method of determining the variance accounted for by respective predictor variables and is especially useful in the presence of correlated predictors. However, computing commonality coefficients is laborious. To make commonality analysis accessible to more researchers, a program was developed to automate the calculation of unique and common elements in commonality analysis, using the statistical package R. The program is described, and a heuristic example using data from the Holzinger and Swineford (1939) study, readily available in the MBESS R package, is presented.
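    The record above describes an R program; as a language-neutral illustration of what commonality analysis computes, the Python sketch below decomposes the regression R² for the simple two-predictor case into two unique components and one common component. The variable names and synthetic data are made up for the example, and the actual package handles the general multi-predictor case.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def commonality_two_predictors(x1, x2, y):
    """Unique and common variance components for two (possibly correlated) predictors."""
    r2_full = r_squared(np.column_stack([x1, x2]), y)
    r2_1 = r_squared(x1[:, None], y)
    r2_2 = r_squared(x2[:, None], y)
    return {
        "unique_x1": r2_full - r2_2,
        "unique_x2": r2_full - r2_1,
        "common":    r2_1 + r2_2 - r2_full,
        "total_r2":  r2_full,
    }

# synthetic, correlated predictors for illustration only
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=200)
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=200)
print(commonality_two_predictors(x1, x2, y))
```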

  12. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other groups of users use the local batch system directly. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing this geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team and a local contact helps only when needed. Users from IoP have exclusive access to only a part of these two clusters and benefit from higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a capacity of more than 12000 cores in total.

  13. Handling Worldwide LHC Computing Grid Critical Service Incidents : The infrastructure and experience behind nearly 5 years of GGUS ALARMs

    CERN Multimedia

    Dimou, M; Dulov, O; Grein, G

    2013-01-01

    In the Worldwide LHC Computing Grid (WLCG) project the Tier centres are of paramount importance for storing and accessing experiment data and for running the batch jobs necessary for experiment production activities. Although Tier2 sites provide a significant fraction of the resources, a non-availability of resources at the Tier0 or the Tier1s can seriously harm not only WLCG Operations but also the experiments' workflows and the storage of LHC data, which are very expensive to reproduce. This is why availability requirements for these sites are high and committed to in the WLCG Memorandum of Understanding (MoU). In this talk we describe the workflow of GGUS ALARMs, the only 24/7 mechanism available to LHC experiment experts for reporting problems with their Critical Services to the Tier0 or the Tier1s. Conclusions and experience gained from the detailed drills performed for each such ALARM over the last 4 years are presented, along with how the types of problems encountered have shifted over time. The physical infrastructure put in place to ...

  14. A comparison of the accuracy of ultrasound and computed tomography in common diagnoses causing acute abdominal pain

    Energy Technology Data Exchange (ETDEWEB)

    Randen, Adrienne van; Stoker, Jaap [Academic Medical Centre, Department of Radiology (suite G1-227), Amsterdam (Netherlands); Lameris, Wytze; Boermeester, Marja A. [Academic Medical Center, Department of Surgery, Amsterdam (Netherlands); Es, H.W. van; Heesewijk, Hans P.M. van [St Antonius Hospital, Department of Radiology, Nieuwegein (Netherlands); Ramshorst, Bert van [St Antonius Hospital, Department of Surgery, Nieuwegein (Netherlands); Hove, Wim ten [Gelre Hospitals, Department of Radiology, Apeldoorn (Netherlands); Bouma, Willem H. [Gelre Hospitals, Department of Surgery, Apeldoorn (Netherlands); Leeuwen, Maarten S. van [University Medical Centre, Department of Radiology, Utrecht (Netherlands); Keulen, Esteban M. van [Tergooi Hospitals, Department of Radiology, Hilversum (Netherlands); Bossuyt, Patrick M. [Academic Medical Center, Department of Clinical Epidemiology, Biostatistics, and Bioinformatics, Amsterdam (Netherlands)

    2011-07-15

    Head-to-head comparison of ultrasound and CT accuracy in common diagnoses causing acute abdominal pain. Consecutive patients with abdominal pain for >2 h and <5 days referred for imaging underwent both US and CT by different radiologists/radiological residents. An expert panel assigned a final diagnosis. Ultrasound and CT sensitivity and predictive values were calculated for frequent final diagnoses. Effect of patient characteristics and observer experience on ultrasound sensitivity was studied. Frequent final diagnoses in the 1,021 patients (mean age 47; 55% female) were appendicitis (284; 28%), diverticulitis (118; 12%) and cholecystitis (52; 5%). The sensitivity of CT in detecting appendicitis and diverticulitis was significantly higher than that of ultrasound: 94% versus 76% (p < 0.01) and 81% versus 61% (p = 0.048), respectively. For cholecystitis, the sensitivity of both was 73% (p = 1.00). Positive predictive values did not differ significantly between ultrasound and CT for these conditions. Ultrasound sensitivity in detecting appendicitis and diverticulitis was not significantly negatively affected by patient characteristics or reader experience. CT misses fewer cases than ultrasound, but both ultrasound and CT can reliably detect common diagnoses causing acute abdominal pain. Ultrasound sensitivity was largely not influenced by patient characteristics and reader experience. (orig.)
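    For readers unfamiliar with the accuracy measures quoted above, the short sketch below shows how sensitivity and positive predictive value are obtained from a 2×2 contingency table. The counts used are invented for illustration only; they are not the counts from this study.

```python
# Sensitivity and positive predictive value (PPV) from a 2x2 table.
# The counts below are invented for illustration; they are NOT the study's data.
def sensitivity(tp, fn):
    return tp / (tp + fn)   # fraction of true cases correctly detected

def ppv(tp, fp):
    return tp / (tp + fp)   # fraction of positive calls that are true cases

tp, fp, fn = 90, 10, 15     # hypothetical imaging results vs. final diagnosis
print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # 0.86
print(f"PPV         = {ppv(tp, fp):.2f}")           # 0.90
```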

  15. Computer Security: “Hello World” - Welcome to CERN

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    Welcome to the open, liberal and free academic computing environment at CERN. Thanks to your new (or long-established!) affiliation with CERN, you are eligible for a CERN computing account, which enables you to register your devices: computers, laptops, smartphones, tablets, etc. It provides you with plenty of disk space and an e-mail address. It allows you to create websites, virtual machines and databases on demand.   You can now access most of the computing services provided by the GS and IT departments: Indico, for organising meetings and conferences; EDMS, for the approval of your engineering specifications; TWiki, for collaboration with others; and the WLCG computing grid. “Open, liberal, and free”, however, does not mean that you can do whatever you like. While we try to make your access to CERN's computing facilities as convenient and easy as possible, there are a few limits and boundaries to respect. These boundaries protect both the Organization'...

  16. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated into LHCb Distributed Computing. LHCb uses its specific DIRAC extension (LHCbDirac) as the interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack): it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...
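    VMDIRAC drives the cloud endpoints through its own connectors; purely as a loose, hedged illustration of the underlying idea of one interface over several cloud APIs, the sketch below uses Apache libcloud to start a virtual machine on an EC2-style endpoint. The credentials, region and node name are placeholders, constructor arguments vary with provider and libcloud version, and this is not VMDIRAC code.

```python
# Illustrative only: one abstraction layer (Apache libcloud) over several cloud APIs,
# loosely analogous to what VMDIRAC provides for DIRAC.  Credentials are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Driver = get_driver(Provider.EC2)      # could equally be an OpenStack or other driver
conn = Driver("ACCESS_KEY_ID", "SECRET_KEY", region="us-east-1")

sizes = conn.list_sizes()              # flavours offered by the endpoint
images = conn.list_images()            # in practice one would look up a specific image ID
node = conn.create_node(name="vm-worker-01", size=sizes[0], image=images[0])
print(node.id, node.state)
```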

  17. Computer simulation of solitary waves in a common or auxetic elastic rod with both quadratic and cubic nonlinearities

    Energy Technology Data Exchange (ETDEWEB)

    Bui Dinh, T. [Institute of Physics, University of Zielona Gora, ul. Prof. A. Szafrana 4a, 65-516 Zielona Gora (Poland); Vinh University, 182 Duong Le Duan, Nghe An (Viet Nam); Long, V. Cao [Institute of Physics, University of Zielona Gora, ul. Prof. A. Szafrana 4a, 65-516 Zielona Gora (Poland); Xuan, K. Dinh [Vinh University, 182 Duong Le Duan, Nghe An (Viet Nam); Wojciechowski, K.W. [Institute of Molecular Physics, Polish Academy of Sciences, ul. Smoluchowskiego 17, 60-179 Poznan (Poland)

    2012-07-15

    Results of numerical simulations are presented for the propagation of solitary waves in an elastic rod of positive or negative Poisson's ratio, i.e. of a common or auxetic material. The splitting of various initial pulses during propagation into a sequence of solitary waves is considered within a model which contains both quadratic and cubic nonlinear terms. The obtained results are compared with some exact analytic solutions, called solitons, which leads to the conclusion that the solitons describe well the more complicated wave fields obtained in the numerical simulations. This is because the analytic solutions reflect a complete balance between the various orders of nonlinearity and dispersion. Collisions between some of the obtained solitary waves are also presented. (Copyright © 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
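    As a hedged numerical illustration of wave models containing both quadratic and cubic nonlinear terms, the sketch below integrates a generic Gardner-type equation u_t + a u u_x + b u² u_x + u_xxx = 0 with a simple split-step pseudospectral scheme. The coefficients, grid and initial pulse are arbitrary choices for the example; this is not the authors' rod model or their numerical method.

```python
# Generic split-step pseudospectral integrator for a Gardner-type equation
#   u_t + a*u*u_x + b*u^2*u_x + u_xxx = 0,
# which carries both a quadratic (a) and a cubic (b) nonlinear term.
# Illustrative sketch only; all parameters are assumed values.
import numpy as np

N, L = 512, 100.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)          # spectral wavenumbers
dealias = np.abs(k) < (2.0 / 3.0) * np.abs(k).max()   # 2/3-rule anti-aliasing mask

a, b = 6.0, 1.0            # quadratic / cubic coefficients (assumed)
dt, steps = 1e-4, 20000    # time step and number of steps (t_final = 2.0)

u = 0.8 * np.exp(-((x - L / 2.0) ** 2) / 4.0)         # smooth initial pulse

lin = np.exp(1j * k ** 3 * dt)                        # exact step for u_t = -u_xxx
for _ in range(steps):
    # nonlinear part u_t = -(a*u + b*u^2)*u_x, advanced with explicit Euler
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    u = u - dt * (a * u + b * u ** 2) * ux
    # linear (dispersive) part advanced exactly in Fourier space
    u = np.real(np.fft.ifft(lin * dealias * np.fft.fft(u)))

print("initial max:", 0.8, " final max:", round(float(u.max()), 3))
```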

  18. Optimization of Dose and Image Quality in Full-Field and Computed Radiography Systems for Common Digital Radiographic Examinations

    Directory of Open Access Journals (Sweden)

    Soo-Foon Moey

    2018-01-01

    Full Text Available Introduction: A fine balance of image quality and radiation dose can be achieved by optimization to minimize stochastic and deterministic effects. This study aimed to ensure that images of acceptable quality for common radiographic examinations in digital imaging were produced without causing harmful effects. Materials and Methods: The study was conducted in three phases. The pre-optimization phase involved ninety physically abled patients aged between 20 and 60 years and weighing between 60 and 80 kilograms, for four common digital radiographic examinations. A Kerma X_plus DAP meter was utilized to measure the entrance surface dose (ESD), while the effective dose (ED) was estimated using the CALDose_X 5.0 Monte Carlo software. The second phase, an experimental study, utilized an anthropomorphic phantom (PBU-50) and the Leeds test object TOR CDR for relative comparison of image quality. For the optimization phase, the imaging parameters with acceptable image quality and lowest ESD from the experimental study were related to the patient's body thickness. Image quality was evaluated by two radiologists using modified evaluation criteria score lists. Results: Significant differences were found in image quality for all examinations. However, significant differences in ESD were found for PA chest and AP abdomen only. The ESD for three of the examinations was lower than all published data. Additionally, the ESD and ED obtained for all examinations were lower than those recommended by radiation regulatory bodies. Conclusion: Optimization of image quality and dose was achieved by utilizing an appropriate tube potential, calibrated automatic exposure control and additional filtration of 0.2 mm copper.

  19. An Experimental and Computational Study of the Gas-Phase Acidities of the Common Amino Acid Amides.

    Science.gov (United States)

    Plummer, Chelsea E; Stover, Michele L; Bokatzian, Samantha S; Davis, John T M; Dixon, David A; Cassady, Carolyn J

    2015-07-30

    Using proton-transfer reactions in a Fourier transform ion cyclotron resonance mass spectrometer and correlated molecular orbital theory at the G3(MP2) level, gas-phase acidities (GAs) and the associated structures for amides corresponding to the common amino acids have been determined for the first time. These values are important because amino acid amides are models for residues in peptides and proteins. For compounds whose most acidic site is the C-terminal amide nitrogen, two ion populations were observed experimentally with GAs that differ by 4-7 kcal/mol. The lower energy, more acidic structure accounts for the majority of the ions formed by electrospray ionization. G3(MP2) calculations predict that the lowest energy anionic conformer has a cis-like orientation of the [-C(═O)NH](-) group whereas the higher energy, less acidic conformer has a trans-like orientation of this group. These two distinct conformers were predicted for compounds with aliphatic, amide, basic, hydroxyl, and thioether side chains. For the most acidic amino acid amides (tyrosine, cysteine, tryptophan, histidine, aspartic acid, and glutamic acid amides) only one conformer was observed experimentally, and its experimental GA correlates with the theoretical GA related to side chain deprotonation.
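    For readers outside the field, the quantity compared above can be stated explicitly: the gas-phase acidity of a species HA is the Gibbs free energy change of its gas-phase deprotonation, so a smaller ΔG corresponds to a stronger acid. This is the standard definition, not a formula specific to this paper:

```latex
\mathrm{HA}_{(g)} \;\longrightarrow\; \mathrm{A}^{-}_{(g)} + \mathrm{H}^{+}_{(g)},
\qquad
\Delta G_{\mathrm{acid}} = G(\mathrm{A}^{-}) + G(\mathrm{H}^{+}) - G(\mathrm{HA})
```

    The enthalpic analogue, ΔH_acid, is defined in the same way from the enthalpies of the three species.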

  20. Signature of the WLCG Memorandum of Understanding between Norway and CERN by Ole Henrik Ellestad, Director of the Research Council of Norway and the Chief Scientific Officer J. Engelen with C. Eck, S. Foffano and B. Jacobsen on 13 December 2007.

    CERN Multimedia

    Maximilien Brice

    2007-01-01

    Signature of the WLCG Memorandum of Understanding between Norway and CERN by Ole Henrik Ellestad, Director of the Research Council of Norway and the Chief Scientific Officer J. Engelen with C. Eck, S. Foffano and B. Jacobsen on 13 December 2007.

  1. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks have evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  2. ATLAS and LHC computing on CRAY

    CERN Document Server

    Haug, Sigve; The ATLAS collaboration

    2016-01-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large CRAY systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potentials due to size and multidisciplinary usage and potential gains due to economy at scale. Technical solutions, performance, expected return and future plans are discussed.

  3. ATLAS and LHC computing on CRAY

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00297774; The ATLAS collaboration; Haug, Sigve

    2017-01-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems do not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potentials due to size and multidisciplinary usage and potential gains due to economy at scale. Technical solutions, performance, expected return and future plans are discussed.

  4. ATLAS and LHC computing on CRAY

    Science.gov (United States)

    Sciacca, F. G.; Haug, S.; ATLAS Collaboration

    2017-10-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems do not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potentials due to size and multidisciplinary usage and potential gains due to economy at scale. Technical solutions, performance, expected return and future plans are discussed.

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  8. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  9. Digital dissection - using contrast-enhanced computed tomography scanning to elucidate hard- and soft-tissue anatomy in the Common Buzzard Buteo buteo.

    Science.gov (United States)

    Lautenschlager, Stephan; Bright, Jen A; Rayfield, Emily J

    2014-04-01

    Gross dissection has a long history as a tool for the study of human or animal soft- and hard-tissue anatomy. However, apart from being a time-consuming and invasive method, dissection is often unsuitable for very small specimens and often cannot capture spatial relationships of the individual soft-tissue structures. The handful of comprehensive studies on avian anatomy using traditional dissection techniques focuses nearly exclusively on domestic birds, whereas raptorial birds, and in particular their cranial soft tissues, are essentially absent from the literature. Here, we digitally dissect, identify, and document the soft-tissue anatomy of the Common Buzzard (Buteo buteo) in detail, using the new approach of contrast-enhanced computed tomography with Lugol's iodine. The architecture of different muscle systems (adductor, depressor, ocular, hyoid, neck musculature), neurovascular, and other soft-tissue structures is three-dimensionally visualised and described in unprecedented detail. The three-dimensional model is further presented as an interactive PDF to facilitate the dissemination and accessibility of anatomical data. Due to the digital nature of the data derived from the computed tomography scanning and segmentation processes, these methods hold the potential for further computational analyses beyond descriptive and illustrative purposes. © 2013 The Authors. Journal of Anatomy published by John Wiley & Sons Ltd on behalf of Anatomical Society.

  10. Automated quantification of pulmonary emphysema from computed tomography scans: comparison of variation and correlation of common measures in a large cohort

    Science.gov (United States)

    Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.

    2010-03-01

    The purpose of this work was to retrospectively investigate the variation of standard indices of pulmonary emphysema from helical computed tomographic (CT) scans as related to inspiration differences over a 1 year interval, and to determine the strength of the relationship between these measures in a large cohort. 626 patients who had 2 scans taken at an interval of 9 months to 15 months (μ: 381 days, σ: 31 days) were selected for this work. All scans were acquired at a 1.25 mm slice thickness using a low dose protocol. For each scan, the emphysema index (EI), fractal dimension (FD), mean lung density (MLD), and 15th percentile of the histogram (HIST) were computed. The absolute and relative changes for each measure were computed and the empirical 95% confidence interval was reported on both non-normalized and normalized scales. Spearman correlation coefficients were computed between the relative change in each measure and the relative change in inspiration between each scan pair, as well as between each pair-wise combination of the four measures. EI varied over a range of -10.5 to 10.5 on a non-normalized scale and -15 to 15 on a normalized scale, with FD and MLD showing slightly larger but comparable spreads, and HIST having a much larger variation. MLD was found to show the strongest correlation with inspiration change (r=0.85). Overall, the emphysema index and fractal dimension have the least variability of the commonly used measures of emphysema and offer the most unique quantification of emphysema relative to each other.
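    As a hedged sketch of how three of the four density-based measures above are commonly defined, the snippet below computes EI, MLD and HIST from an array of lung-voxel Hounsfield units. The −950 HU threshold is a commonly used value and may differ from the one used in this study; the fractal dimension is omitted because it requires a cluster-size analysis of the low-attenuation regions.

```python
import numpy as np

def density_measures(hu, threshold=-950.0, percentile=15):
    """EI, MLD and HIST from lung-voxel Hounsfield units (threshold is an assumption)."""
    hu = np.asarray(hu, dtype=float)
    ei = 100.0 * np.mean(hu < threshold)         # emphysema index: % voxels below threshold
    mld = float(hu.mean())                       # mean lung density
    hist = float(np.percentile(hu, percentile))  # HU value at the 15th percentile
    return ei, mld, hist

# toy example with synthetic voxel values (not patient data)
rng = np.random.default_rng(1)
voxels = rng.normal(loc=-860.0, scale=60.0, size=100_000)
print(density_measures(voxels))
```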

  11. Comparative evaluation of the cadaveric, radiographic and computed tomographic anatomy of the heads of green iguana (Iguana iguana), common tegu (Tupinambis merianae) and bearded dragon (Pogona vitticeps).

    Science.gov (United States)

    Banzato, Tommaso; Selleri, Paolo; Veladiano, Irene A; Martin, Andrea; Zanetti, Emanuele; Zotti, Alessandro

    2012-05-11

    Radiology and computed tomography are the most commonly available diagnostic tools for the diagnosis of pathologies affecting the head and skull in veterinary practice. Nevertheless, accurate interpretation of radiographic and CT studies requires a thorough knowledge of the gross and the cross-sectional anatomy. Despite the increasing success of reptiles as pets, only a few reports over their normal imaging features are currently available. The aim of this study is to describe the normal cadaveric, radiographic and computed tomographic features of the heads of the green iguana, tegu and bearded dragon. 6 adult green iguanas, 4 tegus, 3 bearded dragons, and, the adult cadavers of: 4 green iguana, 4 tegu, 4 bearded dragon were included in the study. 2 cadavers were dissected following a stratigraphic approach and 2 cadavers were cross-sectioned for each species. These latter specimens were stored in a freezer (-20°C) until completely frozen. Transversal sections at 5 mm intervals were obtained by means of an electric band-saw. Each section was cleaned and photographed on both sides. Radiographs of the head of each subject were obtained. Pre- and post- contrast computed tomographic studies of the head were performed on all the live animals. CT images were displayed in both bone and soft tissue windows. Individual anatomic structures were first recognised and labelled on the anatomic images and then matched on radiographs and CT images. Radiographic and CT images of the skull provided good detail of the bony structures in all species. In CT contrast medium injection enabled good detail of the soft tissues to be obtained in the iguana whereas only the eye was clearly distinguishable from the remaining soft tissues in both the tegu and the bearded dragon. The results provide an atlas of the normal anatomical and in vivo radiographic and computed tomographic features of the heads of lizards, and this may be useful in interpreting any imaging modality involving these

  12. Comparative evaluation of the cadaveric, radiographic and computed tomographic anatomy of the heads of green iguana (Iguana iguana), common tegu (Tupinambis merianae) and bearded dragon (Pogona vitticeps)

    Directory of Open Access Journals (Sweden)

    Banzato Tommaso

    2012-05-01

    Full Text Available Abstract. Background: Radiology and computed tomography are the most commonly available diagnostic tools for the diagnosis of pathologies affecting the head and skull in veterinary practice. Nevertheless, accurate interpretation of radiographic and CT studies requires a thorough knowledge of the gross and the cross-sectional anatomy. Despite the increasing success of reptiles as pets, only a few reports on their normal imaging features are currently available. The aim of this study is to describe the normal cadaveric, radiographic and computed tomographic features of the heads of the green iguana, tegu and bearded dragon. Results: 6 adult green iguanas, 4 tegus, 3 bearded dragons, and the adult cadavers of 4 green iguanas, 4 tegus and 4 bearded dragons were included in the study. 2 cadavers were dissected following a stratigraphic approach and 2 cadavers were cross-sectioned for each species. These latter specimens were stored in a freezer (−20°C) until completely frozen. Transversal sections at 5 mm intervals were obtained by means of an electric band-saw. Each section was cleaned and photographed on both sides. Radiographs of the head of each subject were obtained. Pre- and post-contrast computed tomographic studies of the head were performed on all the live animals. CT images were displayed in both bone and soft tissue windows. Individual anatomic structures were first recognised and labelled on the anatomic images and then matched on radiographs and CT images. Radiographic and CT images of the skull provided good detail of the bony structures in all species. In CT, contrast medium injection enabled good detail of the soft tissues to be obtained in the iguana, whereas only the eye was clearly distinguishable from the remaining soft tissues in both the tegu and the bearded dragon. Conclusions: The results provide an atlas of the normal anatomical and in vivo radiographic and computed tomographic features of the heads of lizards, and this may be

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  14. Cross-sectional anatomy, computed tomography and magnetic resonance imaging of the head of common dolphin (Delphinus delphis) and striped dolphin (Stenella coeruleoalba).

    Science.gov (United States)

    Alonso-Farré, J M; Gonzalo-Orden, M; Barreiro-Vázquez, J D; Barreiro-Lois, A; André, M; Morell, M; Llarena-Reino, M; Monreal-Pawlowsky, T; Degollada, E

    2015-02-01

    Computed tomography (CT) and low-field magnetic resonance imaging (MRI) were used to scan seven by-caught dolphin cadavers, belonging to two species: four common dolphins (Delphinus delphis) and three striped dolphins (Stenella coeruleoalba). CT and MRI were obtained with the animals in ventral recumbency. After the imaging procedures, six dolphins were frozen at -20°C and sliced in the same position they were examined. Not only CT and MRI scans, but also cross sections of the heads were obtained in three body planes: transverse (slices of 1 cm thickness) in three dolphins, sagittal (5 cm thickness) in two dolphins and dorsal (5 cm thickness) in two dolphins. Relevant anatomical structures were identified and labelled on each cross section, obtaining a comprehensive bi-dimensional topographical anatomy guide of the main features of the common and the striped dolphin head. Furthermore, the anatomical cross sections were compared with their corresponding CT and MRI images, allowing an imaging identification of most of the anatomical features. CT scans produced an excellent definition of the bony and air-filled structures, while MRI allowed us to successfully identify most of the soft tissue structures in the dolphin's head. This paper provides a detailed anatomical description of the head structures of common and striped dolphins and compares anatomical cross sections with CT and MRI scans, becoming a reference guide for the interpretation of imaging studies. © 2014 Blackwell Verlag GmbH.

  15. Cross-sectional anatomy, computed tomography and magnetic resonance imaging of the thoracic region of common dolphin (Delphinus delphis) and striped dolphin (Stenella coeruleoalba).

    Science.gov (United States)

    Alonso-Farré, J M; Gonzalo-Orden, M; Barreiro-Vázquez, J D; Ajenjo, J M; Barreiro-Lois, A; Llarena-Reino, M; Degollada, E

    2014-06-01

    The aim of this study was to provide a detailed anatomical description of the thoracic region features in normal common (Delphinus delphis) and striped dolphins (Stenella coeruleoalba) and to compare anatomical cross-sections with computed tomography (CT) and magnetic resonance imaging (MRI) scans. CT and MRI were used to scan 7 very fresh by-caught dolphin cadavers: four common and three striped dolphins. Diagnostic images were obtained from dolphins in ventral recumbency, and after the examinations, six dolphins were frozen (-20°C) and sliced in the same position. As well as CT and MRI scans, cross-sections were obtained in the three body planes: transverse (slices of 1 cm thickness), sagittal (5 cm thickness) and dorsal (5 cm thickness). Relevant anatomical features of the thoracic region were identified and labelled on each section, obtaining a complete bi-dimensional atlas. Furthermore, we compared CT and MRI scans with anatomical cross-sections, and results provided a complete reference guide for the interpretation of imaging studies of common and striped dolphin's thoracic structures. © 2013 Blackwell Verlag GmbH.

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  17. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  18. Computational Fluid Dynamic Analyses for the High-Lift Common Research Model Using the USM3D and FUN3D Flow Solvers

    Science.gov (United States)

    Rivers, Melissa; Hunter, Craig; Vatsa, Veer

    2017-01-01

    Two Navier-Stokes codes were used to compute flow over the High-Lift Common Research Model (HL-CRM) in preparation for a wind tunnel test to be performed at the NASA Langley Research Center 14-by-22-Foot Subsonic Tunnel in fiscal year 2018. Both flight and wind tunnel conditions were simulated by the two codes at set Mach numbers and Reynolds numbers over a full angle-of-attack range for three configurations: cruise, landing and takeoff. Force curves, drag polars and surface pressure contour comparisons are shown for the two codes. The lift and drag curves compare well for the cruise configuration up to 10 degrees angle of attack but not as well for the other two configurations. The drag polars compare reasonably well for all three configurations. The surface pressure contours compare well for some of the conditions modeled but not as well for others.
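    For reference, the lift, drag and surface-pressure comparisons mentioned above are usually reported as the standard non-dimensional coefficients below; this is the textbook definition, not a statement about the specific post-processing used by USM3D or FUN3D.

```latex
q_\infty = \tfrac{1}{2}\,\rho_\infty V_\infty^{2},\qquad
C_L = \frac{L}{q_\infty S_{\mathrm{ref}}},\qquad
C_D = \frac{D}{q_\infty S_{\mathrm{ref}}},\qquad
C_p = \frac{p - p_\infty}{q_\infty}
```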

  19. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
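    A minimal sketch of the performance-per-watt figure of merit used above: divide a measured throughput by the average wall power over the same interval. The numbers are hypothetical placeholders, not benchmark results from this study.

```python
# Performance-per-watt from a measured throughput and average power draw.
# The figures below are hypothetical placeholders, not results from this paper.
def events_per_joule(events: int, seconds: float, avg_watts: float) -> float:
    throughput = events / seconds          # events per second
    return throughput / avg_watts          # events/second/watt = events per joule

print(round(events_per_joule(events=36_000, seconds=600.0, avg_watts=95.0), 4))   # 0.6316
```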

  20. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  1. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  5. Computer Security: Geneva, Suisse Romande and beyond

    CERN Multimedia

    Computer Security Team

    2014-01-01

    To ensure good computer security, it is essential for us to keep in close contact and collaboration with a multitude of official and unofficial, national and international bodies, agencies, associations and organisations in order to discuss best practices, to learn about the most recent (and, at times, still unpublished) vulnerabilities, and to handle jointly any security incident. A network of peers - in particular a network of trusted peers - can provide important intelligence about new vulnerabilities or ongoing attacks much earlier than information published in the media. In this article, we would like to introduce a few of the official peers we usually deal with.*   Directly relevant for CERN are SWITCH, our partner for networking in Switzerland, and our contacts within the WLCG, i.e. the European Grid Infrastructure (EGI), and the U.S. Open Science Grid (OSG). All three are essential partners when discussing security implementations and resolving security incidents. SWITCH, in...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  8. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  9. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and of regular computing shifts, monitoring the services and infrastructure as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  10. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  11. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization and conversion to RAW format, after which the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and its components are now also deployed at CERN, adding to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  13. Common Cold

    Science.gov (United States)

    ... nose, coughing - everyone knows the symptoms of the common cold. It is probably the most common illness. In ... avoid colds. There is no cure for the common cold. For relief, try Getting plenty of rest Drinking ...

  14. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration; Velikhov, Vasily; Konoplich, Rostislav

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. It is important to note that Grid computing resources are becoming strictly limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One of the possibilities to address the shortfall of computing resources is the use of computer institutes' clusters, commercial computing resources and supercomputers. To perform precision measurements of the Higgs boson properties under these conditions, it is also highly desirable to have effective instruments to simulate kinematic distributions of signal events. In this talk we give a brief description of the modern distribution reconstruction method called Morphing and perform a few efficiency tests to demonstrate its potential. These studies have been performed on the WLCG and at the Kurchatov Institute’s Data Processing Center, including its Tier-1 Grid site and supercomputer. We also analyze the CPU efficienc...
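    As a deliberately simplified illustration of the idea behind template "morphing" — building the distribution at an intermediate parameter value as a weighted combination of benchmark samples — the sketch below linearly interpolates between two binned templates. The method described in the record uses coupling-dependent weights over many benchmark samples; the two-template linear form and the numbers here are assumptions made only for the example.

```python
import numpy as np

def morph(h_bench0, h_bench1, theta):
    """Toy morphing: distribution at parameter theta as a linear mix of two benchmarks."""
    h_bench0 = np.asarray(h_bench0, dtype=float)
    h_bench1 = np.asarray(h_bench1, dtype=float)
    return (1.0 - theta) * h_bench0 + theta * h_bench1

h0 = np.array([120.0, 300.0, 220.0, 80.0])   # hypothetical binned kinematic distribution, theta = 0
h1 = np.array([90.0, 260.0, 280.0, 120.0])   # hypothetical benchmark at theta = 1
print(morph(h0, h1, theta=0.4))              # interpolated template at theta = 0.4
```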

  15. Police Districts, CommonPlaces-The data set is a point feature consisting of 830 common place points representing Spillman CAD common places. It was created to maintain the Spillman computer aided dispatch system for the sheriff office., Published in 2004, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Police Districts dataset current as of 2004. CommonPlaces-The data set is a point feature consisting of 830 common place points representing Spillman CAD common...

  16. Common Warts

    Science.gov (United States)

    ... from spreading Common warts Symptoms & causes Diagnosis & treatment ...

  17. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.; Kuznetsov, V. [Cornell U.; Magini, N. [Fermilab; Repečka, A. [Vilnius U.; Vaandering, E. [Fermilab

    2017-11-22

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but they are of crucial importance for a better understanding of how CMS achieved successful operations and for reaching an adequate and adaptive model of CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
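    As a hedged illustration of the kind of processing described above, the sketch below counts dataset replicas per WLCG tier from monitoring metadata on HDFS. The file path, the field names and the use of PySpark (as a stand-in for the MapReduce applications mentioned in the abstract) are assumptions for illustration only.

```python
# Hypothetical sketch: counting dataset replicas per WLCG Tier from monitoring
# metadata stored on HDFS. Paths and field names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cms-replica-counts").getOrCreate()

# Assumed JSON records such as {"dataset": "...", "site": "T1_DE_KIT", "size_bytes": 123}
replicas = spark.read.json("hdfs:///cms/monitoring/replicas/*.json")

replicas_per_tier = (
    replicas
    .withColumn("tier", F.split("site", "_").getItem(0))   # "T1_DE_KIT" -> "T1"
    .groupBy("dataset", "tier")
    .agg(F.countDistinct("site").alias("n_replicas"),
         F.sum("size_bytes").alias("bytes_on_disk"))
)

replicas_per_tier.orderBy(F.desc("bytes_on_disk")).show(20, truncate=False)
```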

  18. Exploiting analytics techniques in CMS computing monitoring

    Science.gov (United States)

    Bonacorsi, D.; Kuznetsov, V.; Magini, N.; Repečka, A.; Vaandering, E.

    2017-10-01

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but they are of crucial importance for a better understanding of how CMS achieved successful operations and for reaching an adequate and adaptive model of CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.

  19. Evolution of the ATLAS Distributed Computing during the LHC long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2013-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  20. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  1. Science commons

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    SCP: Creative Commons licensing for open access publishing, Open Access Law journal-author agreements for converting journals to open access, and the Scholar's Copyright Addendum Engine for retaining rights to self-archive in meaningful formats and locations for future re-use. More than 250 science and technology journals already publish under Creative Commons licensing while 35 law journals utilize the Open Access Law agreements. The Addendum Engine is a new tool created in partnership with SPARC and U.S. universities. View John Wilbanks's biography

  2. Creative Commons

    DEFF Research Database (Denmark)

    Jensen, Lone

    2006-01-01

    A Creative Commons licence gives an author the possibility of offering a work under an alternative licensing arrangement, positioned at different points on a scale between the extremes "All rights reserved" and "No rights reserved". The result is a "Some rights reserved" licence.

  3. Common approach to common interests

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-06-01

    In referring to issues confronting the energy field in this region and options to be exercised in the future, I would like to mention the fundamental condition of the utmost importance. That can be summed up as follows: any subject in energy area can never be solved by one country alone, given the geographical and geopolitical characteristics intrinsically possessed by energy. So, a regional approach is needed and it is especially necessary for the main players in the region to jointly address problems common to them. Though it may be a matter to be pursued in the distant future, I am personally dreaming a 'Common Energy Market for Northeast Asia,' in which member countries' interests are adjusted so that the market can be integrated and the region can become a most economically efficient market, thus formulating an effective power to encounter the outside. It should be noted that Europe needed forty years to integrate its market as the unified common market. It is necessary for us to follow a number of steps over the period to eventually materialize our common market concept, too. Now is the time for us to take a first step to lay the foundation for our descendants to enjoy prosperity from such a common market.

  4. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... News Physician Resources Professions Site Index A-Z Computed Tomography (CT) - Sinuses Computed tomography (CT) of the sinuses ... of CT of the Sinuses? What is CT (Computed Tomography) of the Sinuses? Computed tomography, more commonly known ...

  5. Computing fundamentals introduction to computers

    CERN Document Server

    Wempen, Faithe

    2014-01-01

    The absolute beginner's guide to learning basic computer skills Computing Fundamentals, Introduction to Computers gets you up to speed on basic computing skills, showing you everything you need to know to conquer entry-level computing courses. Written by a Microsoft Office Master Instructor, this useful guide walks you step-by-step through the most important concepts and skills you need to be proficient on the computer, using nontechnical, easy-to-understand language. You'll start at the very beginning, getting acquainted with the actual, physical machine, then progress through the most common

  6. Communication and common interest.

    Directory of Open Access Journals (Sweden)

    Peter Godfrey-Smith

    Full Text Available Explaining the maintenance of communicative behavior in the face of incentives to deceive, conceal information, or exaggerate is an important problem in behavioral biology. When the interests of agents diverge, some form of signal cost is often seen as essential to maintaining honesty. Here, novel computational methods are used to investigate the role of common interest between the sender and receiver of messages in maintaining cost-free informative signaling in a signaling game. Two measures of common interest are defined. These quantify the divergence between sender and receiver in their preference orderings over acts the receiver might perform in each state of the world. Sampling from a large space of signaling games finds that informative signaling is possible at equilibrium with zero common interest in both senses. Games of this kind are rare, however, and the proportion of games that include at least one equilibrium in which informative signals are used increases monotonically with common interest. Common interest as a predictor of informative signaling also interacts with the extent to which agents' preferences vary with the state of the world. Our findings provide a quantitative description of the relation between common interest and informative signaling, employing exact measures of common interest, information use, and contingency of payoff under environmental variation that may be applied to a wide range of models and empirical systems.
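    To make the idea of a common-interest measure concrete, the sketch below computes one plausible (not necessarily the paper's) measure: the average agreement between the sender's and receiver's preference orderings over the receiver's acts, state by state. The payoff matrices and values are invented for illustration.

```python
# Illustrative sketch (not the paper's exact definition): quantify common
# interest as the average rank agreement between sender and receiver payoffs
# over the receiver's possible acts, state by state.
import numpy as np
from itertools import combinations

def rank_agreement(u, v):
    """Fraction of act pairs ordered the same way by both payoff vectors."""
    pairs = list(combinations(range(len(u)), 2))
    same = sum(1 for i, j in pairs if np.sign(u[i] - u[j]) == np.sign(v[i] - v[j]))
    return same / len(pairs)

def common_interest(sender_payoffs, receiver_payoffs):
    """Average rank agreement across states; payoff arrays are (states x acts)."""
    return np.mean([rank_agreement(s, r)
                    for s, r in zip(sender_payoffs, receiver_payoffs)])

# Example: 3 states, 3 acts, partially aligned preferences.
sender = np.array([[2, 1, 0], [0, 2, 1], [1, 0, 2]])
receiver = np.array([[2, 0, 1], [0, 2, 1], [2, 1, 0]])
print(common_interest(sender, receiver))
```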

  7. Common Sense Computing (Episode 4): Debugging

    Science.gov (United States)

    Simon, Beth; Bouvier, Dennis; Chen, Tzu-Yi; Lewandowski, Gary; McCartney, Robert; Sanders, Kate

    2008-01-01

    We report on responses to a series of four questions designed to identify pre-existing abilities related to debugging and troubleshooting experiences of novice students before they begin programming instruction. The focus of these questions include general troubleshooting, bug location, exploring unfamiliar environments, and describing students'…

  8. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... the Sinuses? What is CT (Computed Tomography) of the Sinuses? Computed tomography, more commonly known as a ... of page What are some common uses of the procedure? CT of the sinuses is primarily used ...

  9. High-throughput landslide modelling using computational grids

    Science.gov (United States)

    Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.

    2012-04-01

    physicists and geographical scientists are collaborating to develop methods for providing simple and effective access to landslide models and associated simulation data. Particle physicists have valuable experience in dealing with data complexity and management due to the scale of data generated by particle accelerators such as the Large Hadron Collider (LHC). The LHC generates tens of petabytes of data every year which is stored and analysed using the Worldwide LHC Computing Grid (WLCG). Tools and concepts from the WLCG are being used to drive the development of a Software-as-a-Service (SaaS) platform to provide access to hosted landslide simulation software and data. It contains advanced data management features and allows landslide simulations to be run on the WLCG, dramatically reducing simulation runtimes by parallel execution. The simulations are accessed using a web page through which users can enter and browse input data, submit jobs and visualise results. Replication of the data ensures a local copy can be accessed should a connection to the platform be unavailable. The platform does not know the details of the simulation software it runs, so it is therefore possible to use it to run alternative models at similar scales. This creates the opportunity for activities such as model sensitivity analysis and performance comparison at scales that are impractical using standalone software.
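    The parameter-sweep pattern that such a platform enables can be sketched as follows. A local process pool stands in for WLCG job submission, and the slope-stability model, its parameters and the instability threshold are purely illustrative assumptions.

```python
# Minimal sketch of running many independent landslide simulations in parallel.
# A local process pool stands in for grid submission; the model is a placeholder.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_simulation(cohesion_kpa, rainfall_mm_per_day):
    # Placeholder for invoking the hosted slope-stability model.
    factor_of_safety = 1.0 + 0.02 * cohesion_kpa - 0.005 * rainfall_mm_per_day
    return {"cohesion": cohesion_kpa, "rainfall": rainfall_mm_per_day,
            "factor_of_safety": round(factor_of_safety, 3)}

if __name__ == "__main__":
    sweep = list(product(range(5, 26, 5), range(0, 101, 20)))  # 5 x 6 = 30 runs
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, *zip(*sweep)))
    unstable = [r for r in results if r["factor_of_safety"] < 1.0]
    print(f"{len(unstable)} of {len(results)} runs predict instability")
```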

  10. Computed Tomography (CT) - Spine

    Science.gov (United States)

    ... Computed Tomography (CT) - Spine Computed tomography (CT) of the spine is a ... the Spine? What is CT Scanning of the Spine? Computed tomography, more commonly known as a CT ...

  11. The Tragedy of the Commons

    Science.gov (United States)

    Short, Daniel

    2016-01-01

    The tragedy of the commons is one of the principal tenets of ecology. Recent developments in experiential computer-based simulation of the tragedy of the commons are described. A virtual learning environment is developed using the popular video game "Minecraft". The virtual learning environment is used to experience first-hand depletion…

  12. Unified Monitoring Architecture for IT and Grid Services

    Science.gov (United States)

    Aimar, A.; Aguado Corman, A.; Andrade, P.; Belov, S.; Delgado Fernandez, J.; Garrido Bear, B.; Georgiou, M.; Karavakis, E.; Magnoni, L.; Rama Ballesteros, R.; Riahi, H.; Rodriguez Martinez, J.; Saiz, P.; Zolnai, D.

    2017-10-01

    This paper provides a detailed overview of the Unified Monitoring Architecture (UMA) that aims at merging the monitoring of the CERN IT data centres and the WLCG monitoring using common and widely-adopted open source technologies such as Flume, Elasticsearch, Hadoop, Spark, Kibana, Grafana and Zeppelin. It provides insights and details on the lessons learned, explaining the work performed in order to monitor the CERN IT data centres and the WLCG computing activities such as the job processing, data access and transfers, and the status of sites and services.
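    A minimal sketch of the "unified" idea follows: heterogeneous monitoring events (data-centre metrics, WLCG job records) are normalised into one common schema before being shipped to the transport and storage layers (Flume, Elasticsearch, HDFS). All field names are assumptions, not the actual UMA schema.

```python
# Illustrative normalisation of heterogeneous monitoring events into one
# common schema. Field names are assumptions only.
from datetime import datetime, timezone

def normalise_datacentre_metric(raw):
    return {
        "timestamp": raw["ts"],
        "producer": "datacentre",
        "host": raw["hostname"],
        "metric": raw["name"],
        "value": float(raw["value"]),
    }

def normalise_wlcg_job(raw):
    return {
        "timestamp": raw["end_time"],
        "producer": "wlcg_jobs",
        "host": raw["site"],
        "metric": "job_" + raw["status"].lower(),
        "value": 1.0,
    }

now = datetime.now(timezone.utc).isoformat()
events = [
    normalise_datacentre_metric({"ts": now, "hostname": "node042",
                                 "name": "cpu_load", "value": "3.2"}),
    normalise_wlcg_job({"end_time": now, "site": "T2_CH_CERN", "status": "FAILED"}),
]
for e in events:
    print(e)  # in production these would be forwarded to the storage/analytics layer
```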

  13. Maintaining Traceability in an Evolving Distributed Computing Environment

    Science.gov (United States)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs) bringing with it new requirements for their
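    The sketch below illustrates the kind of traceability record discussed above, capturing who, what, where and when for each security-relevant event. The field names and the JSON-lines sink are illustrative assumptions, not a format mandated by WLCG, EGI or OSG.

```python
# Illustrative traceability record: who, what, where, when for each event.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

EVENT_TYPES = {"connect", "authenticate", "authorize", "identity_change", "disconnect"}

@dataclass
class TraceRecord:
    timestamp: str          # when
    service_instance: str   # where
    event: str              # what
    subject_dn: str         # who (digital identity of the user)
    source_ip: str
    detail: str = ""

def log_event(event, subject_dn, service_instance, source_ip, detail="", sink="trace.log"):
    assert event in EVENT_TYPES, f"unknown event type: {event}"
    rec = TraceRecord(datetime.now(timezone.utc).isoformat(), service_instance,
                      event, subject_dn, source_ip, detail)
    with open(sink, "a") as fh:
        fh.write(json.dumps(asdict(rec)) + "\n")

log_event("authenticate", "/DC=ch/DC=cern/CN=Jane Doe", "storage-frontend-01",
          "192.0.2.10", detail="X.509 proxy accepted")
```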

  14. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
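    The instantiate/monitor/manage cycle that such a cloud extension has to cover across heterogeneous providers can be sketched with a small abstract interface, shown below. The CloudEndpoint interface and the in-memory endpoint are illustrative only; real code would delegate to the EC2, OpenNebula, OpenStack or CloudStack client libraries.

```python
# Abstract sketch of the VM lifecycle across heterogeneous cloud providers.
# The interface and the fake endpoint are illustrative assumptions.
from abc import ABC, abstractmethod
import uuid

class CloudEndpoint(ABC):
    @abstractmethod
    def start_vm(self, image, flavour, contextualisation): ...
    @abstractmethod
    def status(self, vm_id): ...
    @abstractmethod
    def stop_vm(self, vm_id): ...

class FakeEndpoint(CloudEndpoint):
    """In-memory stand-in used only to exercise the interface."""
    def __init__(self):
        self._vms = {}
    def start_vm(self, image, flavour, contextualisation):
        vm_id = str(uuid.uuid4())
        self._vms[vm_id] = "RUNNING"
        return vm_id
    def status(self, vm_id):
        return self._vms.get(vm_id, "UNKNOWN")
    def stop_vm(self, vm_id):
        self._vms[vm_id] = "HALTED"

endpoint = FakeEndpoint()
vm = endpoint.start_vm("cernvm", "m1.large", {"role": "worker", "site": "Cloud.Example.org"})
print(endpoint.status(vm))   # RUNNING
endpoint.stop_vm(vm)
```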

  15. Computer Viruses. Technology Update.

    Science.gov (United States)

    Ponder, Tim, Comp.; Ropog, Marty, Comp.; Keating, Joseph, Comp.

    This document provides general information on computer viruses, how to help protect a computer network from them, measures to take if a computer becomes infected. Highlights include the origins of computer viruses; virus contraction; a description of some common virus types (File Virus, Boot Sector/Partition Table Viruses, Trojan Horses, and…

  16. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... are the limitations of CT of the Sinuses? What is CT (Computed Tomography) of the Sinuses? Computed ... nasal cavity by small openings. top of page What are some common uses of the procedure? CT ...

  17. Availability measurement of grid services from the perspective of a scientific computing centre

    Science.gov (United States)

    Marten, H.; Koenig, T.

    2011-12-01

    The Karlsruhe Institute of Technology (KIT) is the merger of Forschungszentrum Karlsruhe and the Technical University Karlsruhe. The Steinbuch Centre for Computing (SCC) was one of the first new organizational units of KIT, combining the former Institute for Scientific Computing of Forschungszentrum Karlsruhe and the Computing Centre of the University. IT service management according to the worldwide de-facto-standard "IT Infrastructure Library (ITIL)" [1] was chosen by SCC as a strategic element to support the merging of the two existing computing centres located at a distance of about 10 km. The availability and reliability of IT services directly influence the customer satisfaction as well as the reputation of the service provider, and unscheduled loss of availability due to hardware or software failures may even result in severe consequences like data loss. Fault tolerant and error correcting design features are reducing the risk of IT component failures and help to improve the delivered availability. The ITIL process controlling the respective design is called Availability Management [1]. This paper discusses Availability Management regarding grid services delivered to WLCG and provides a few elementary guidelines for availability measurements and calculations of services consisting of arbitrary numbers of components.
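    In the spirit of the elementary guidelines mentioned above, the availability of a composite service can be estimated from its components: components in series must all be up, while redundant (parallel) replicas fail only if all of them fail. The numbers in the sketch below are invented for illustration.

```python
# Elementary availability arithmetic for composite services (illustrative numbers).
from math import prod

def availability_series(components):
    """Service is up only if every component is up."""
    return prod(components)

def availability_parallel(replicas):
    """Service is up if at least one replica is up."""
    return 1.0 - prod(1.0 - a for a in replicas)

storage = availability_parallel([0.98, 0.98])          # two redundant storage heads
frontend = 0.995
batch = 0.99
grid_service = availability_series([frontend, storage, batch])
print(f"estimated service availability: {grid_service:.4f}")
```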

  18. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... of the Head? What is CT Scanning of the Head? Computed tomography, more commonly known as a ... of page What are some common uses of the procedure? CT scanning of the head is typically ...

  19. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

    Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why the terms are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly; it is very hard even for professionals to keep updated. Computer people do not

  20. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... News Physician Resources Professions Site Index A-Z Children's (Pediatric) CT (Computed Tomography) Pediatric computed tomography (CT) ... are the limitations of Children's CT? What is Children's CT? Computed tomography, more commonly known as a ...

  1. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2014-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of the Sim@P1 project dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and using it to run the large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  2. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2013-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of the Sim@P1 project dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and using it to run the large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  3. Common Readout System in ALICE

    CERN Document Server

    Jubin, Mitra

    2016-01-01

    The ALICE experiment at the CERN Large Hadron Collider is preparing for a major physics upgrade in 2018. This upgrade is necessary to obtain the high-statistics, high-precision measurements of rare physics channels needed to understand the dynamics of the condensed phase of QCD. The high interaction rate and the large event size in the upgraded detectors will result in an experimental data flow traffic of about 1 TB/s from the detectors to the on-line computing system. A dedicated Common Readout Unit (CRU) is proposed for data concentration, multiplexing, and trigger distribution. The CRU, as a common interface unit, handles timing, data and control signals between on-detector systems and the online-offline computing system. An overview of the CRU architecture is presented in this manuscript.

  4. Common Control System Vulnerability

    Energy Technology Data Exchange (ETDEWEB)

    Trent Nelson

    2005-12-01

    The Control Systems Security Program and other programs within the Idaho National Laboratory have discovered a vulnerability common to control systems in all sectors that allows an attacker to penetrate most control systems, spoof the operator, and gain full control of targeted system elements. This vulnerability has been identified on several systems that have been evaluated at INL, and in each case a 100% success rate of completing the attack paths that lead to full system compromise was observed. Since these systems are employed in multiple critical infrastructure sectors, this vulnerability is deemed common to control systems in all sectors. Modern control systems architectures can be considered analogous to today's information networks, and as such are usually approached by attackers using a common attack methodology to penetrate deeper and deeper into the network. This approach often is composed of several phases, including gaining access to the control network, reconnaissance, profiling of vulnerabilities, launching attacks, escalating privilege, maintaining access, and obscuring or removing information that indicates that an intruder was on the system. With irrefutable proof that an external attack can lead to a compromise of a computing resource on the organization's business local area network (LAN), access to the control network is usually considered the first phase in the attack plan. Once the attacker gains access to the control network through direct connections and/or the business LAN, the second phase of reconnaissance begins with traffic analysis within the control domain. Thus, the communications between the workstations and the field device controllers can be monitored and evaluated, allowing an attacker to capture, analyze, and evaluate the commands sent among the control equipment. Through manipulation of the communication protocols of control systems (a process generally referred to as "reverse engineering"), an

  5. Common Career Technical Core: Common Standards, Common Vision for CTE

    Science.gov (United States)

    Green, Kimberly

    2012-01-01

    This article provides an overview of the National Association of State Directors of Career Technical Education Consortium's (NASDCTEc) Common Career Technical Core (CCTC), a state-led initiative that was created to ensure that career and technical education (CTE) programs are consistent and high quality across the United States. Forty-two states,…

  6. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... the limitations of CT of the Sinuses? What is CT (Computed Tomography) of the Sinuses? Computed tomography, more commonly known as a ... top of page What are some common uses of the procedure? CT of the sinuses is primarily ...

  7. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... News Physician Resources Professions Site Index A-Z Computed Tomography (CT) - Head Computed tomography (CT) of the head uses special x-ray ... Head? What is CT Scanning of the Head? Computed tomography, more commonly known as a CT or CAT ...

  8. Review of quantum computation

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, S.

    1992-12-01

    Digital computers are machines that can be programmed to perform logical and arithmetical operations. Contemporary digital computers are "universal," in the sense that a program that runs on one computer can, if properly compiled, run on any other computer that has access to enough memory space and time. Any one universal computer can simulate the operation of any other; and the set of tasks that any such machine can perform is common to all universal machines. Since Bennett's discovery that computation can be carried out in a non-dissipative fashion, a number of Hamiltonian quantum-mechanical systems have been proposed whose time-evolutions over discrete intervals are equivalent to those of specific universal computers. The first quantum-mechanical treatment of computers was given by Benioff, who exhibited a Hamiltonian system with a basis whose members corresponded to the logical states of a Turing machine. In order to make the Hamiltonian local, in the sense that its structure depended only on the part of the computation being performed at that time, Benioff found it necessary to make the Hamiltonian time-dependent. Feynman discovered a way to make the computational Hamiltonian both local and time-independent by incorporating the direction of computation in the initial condition. In Feynman's quantum computer, the program is a carefully prepared wave packet that propagates through different computational states. Deutsch presented a quantum computer that exploits the possibility of existing in a superposition of computational states to perform tasks that a classical computer cannot, such as generating purely random numbers, and carrying out superpositions of computations as a method of parallel processing. In this paper, we show that such computers, by virtue of their common function, possess a common form for their quantum dynamics.

  9. Common Childhood Orthopedic Conditions

    Science.gov (United States)

    ... Common Childhood Orthopedic Conditions What's in this article? Flatfeet Toe Walking ...

  10. To Realize the Commons

    NARCIS (Netherlands)

    Dolphijn, R.

    2016-01-01

    In this contribution to the Common Conflict [www.onlineopen.org/commonconflict] virtual roundtable, Rick Dolphijn emphasizes that the commons is not a humanist concept but much more a materialist concept. He argues that  the commons depends upon the creation of new assemblages: it is the accidental

  11. Computational Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Westbrook, C K; Mizobuchi, Y; Poinsot, T J; Smith, P J; Warnatz, J

    2004-08-26

    Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel and homogeneous charge, compression ignition engines, surface and catalytic combustion, pulse combustion, and detonations are described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

  12. Tragedy of the Commons

    OpenAIRE

    Nørgaard, Jørgen

    2009-01-01

    The title refers to an article from 1968 by Garrett Hardin, using the metaphor of the common grazing land in villages in old times. These 'Commons' were free for people in the community to use for grazing some sheep. This system was based on a certain social solidarity and ethic. With an individualistic and selfish attitude this would collapse, since each single citizen could benefit from putting more sheep on the common, which would eventually collapse by overgrazing. The metaphor is ...

  13. Tragedy of the Commons

    DEFF Research Database (Denmark)

    Nørgaard, Jørgen

    The title refers to an article from 1968 by Garrett Hardin, using the metaphor of the common grazing land in villages in old times. These 'Commons' were free for people in the community to use for grazing some sheep. This system was based on a certain social solidarity and ethic. With an individualistic and selfish attitude this would collapse, since each single citizen could benefit from putting more sheep on the common, which would eventually collapse by overgrazing. The metaphor is applied to our common planet, and our ability to build up institutions, economics and ethics, geared for sharing...

  14. Efektivitas Instagram Common Grounds

    OpenAIRE

    Wifalin, Michelle

    2016-01-01

    The effectiveness of the Common Grounds Instagram account is the research question addressed in this study. Instagram effectiveness is measured using the Customer Response Index (CRI), in which respondents are assessed at several levels, ranging from awareness, comprehension and interest to intentions and action. These levels of response are used to measure the effectiveness of the Common Grounds Instagram account. The theories used to support this research are marketing Public Relations theory, advertising theory, effecti...

  15. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...
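    As an illustration of breaking resource consumption down into software domains, the sketch below aggregates per-job CPU and memory samples per domain. The sample format and the domain names are assumptions, not the actual PerfMon or MemoryMonitor output.

```python
# Illustrative aggregation of per-job resource samples into software domains.
from collections import defaultdict

samples = [
    {"domain": "Tracking",      "cpu_s": 412.0, "rss_mb": 1450},
    {"domain": "Calorimetry",   "cpu_s": 230.5, "rss_mb": 1100},
    {"domain": "Tracking",      "cpu_s": 398.7, "rss_mb": 1510},
    {"domain": "EventBuilding", "cpu_s": 55.2,  "rss_mb": 600},
]

totals = defaultdict(lambda: {"cpu_s": 0.0, "max_rss_mb": 0, "jobs": 0})
for s in samples:
    t = totals[s["domain"]]
    t["cpu_s"] += s["cpu_s"]
    t["max_rss_mb"] = max(t["max_rss_mb"], s["rss_mb"])
    t["jobs"] += 1

for domain, t in sorted(totals.items(), key=lambda kv: -kv[1]["cpu_s"]):
    print(f"{domain:14s} cpu={t['cpu_s']:8.1f}s  peak_rss={t['max_rss_mb']}MB  jobs={t['jobs']}")
```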

  16. Challenging Data Management in CMS Computing with Network-aware Systems

    CERN Document Server

    Bonacorsi, Daniele

    2013-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess area of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidt...

  17. Challenging data and workload management in CMS Computing with network-aware systems

    CERN Document Server

    Wildish, Anthony

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess area of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidth on demand concepts. In this paper, we will ...

  18. Challenging data and workload management in CMS Computing with network-aware systems

    Science.gov (United States)

    D, Bonacorsi; T, Wildish

    2014-06-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess area of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth on demand concepts. In this paper, we will review the work done in CMS on this, and the next steps.

  19. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...

  20. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... the limitations of CT Scanning of the Head? What is CT Scanning of the Head? Computed tomography, ... than regular radiographs (x-rays). top of page What are some common uses of the procedure? CT ...

  1. Applications of computer algebra

    CERN Document Server

    1985-01-01

    Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed for...

  2. MA Common Tern Census

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The official State census period for common terns was June 1-10. The survey was conducted on June 4 by Biologist Healey, Biotech Springfield, and Maintenance...

  3. Common Knowledge on Networks

    CERN Document Server

    Liddell, Torrin M

    2015-01-01

    Common knowledge of intentions is crucial to basic social tasks ranging from cooperative hunting to oligopoly collusion, riots, revolutions, and the evolution of social norms and human culture. Yet little is known about how common knowledge leaves a trace on the dynamics of a social network. Here we show how an individual's network properties, primarily local clustering and betweenness centrality, provide strong signals of the ability to successfully participate in common knowledge tasks. These signals are distinct from those expected when practices are contagious, or when people use less-sophisticated heuristics that do not yield true coordination. This makes it possible to infer decision rules from observation. We also find that tasks that require common knowledge can yield significant inequalities in success, in contrast to the relative equality that results when practices spread by contagion alone.

  4. Common Mental Health Issues

    Science.gov (United States)

    Stock, Susan R.; Levine, Heidi

    2016-01-01

    This chapter provides an overview of common student mental health issues and approaches for student affairs practitioners who are working with students with mental illness, and ways to support the overall mental health of students on campus.

  5. Five Common Glaucoma Tests

    Science.gov (United States)

    ... Five Common Glaucoma Tests ... year or two after age 35. A Comprehensive Glaucoma Exam To be safe and accurate, five factors ...

  6. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... are the limitations of CT of the Sinuses? What is CT (Computed Tomography) of the Sinuses? Computed tomography, more commonly known as a CT or CAT scan, is a diagnostic medical test that, like traditional x-rays, produces multiple images or pictures of the inside of ...

  7. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... What are the limitations of Children's CT? What is Children's CT? Computed tomography, more commonly known as ... top of page What are some common uses of the procedure? CT is used to help diagnose ...

  8. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available ... risks? What are the limitations of Children's CT? What is Children's CT? Computed tomography, more commonly known ... newborns, infants and older children. top of page What are some common uses of the procedure? CT ...

  9. Common Culture Cabaret

    OpenAIRE

    Campbell, David; Durden, Mark; Brown, Ian

    2016-01-01

    Common Culture’s solo exhibition ‘Cabaret’ includes the major newly commissioned moving image work Vent, made in response to the unruly traditions of popular entertainment. Commissioned for MAC, Birmingham as the main work in Common Culture’s solo exhibition Cabaret, the multi-channel video installation explores the seductive allure of the entertainment industry, a space where individual creative ambition is processed as formulaic spectacle. Vent examines popular culture’s obsessive fascinati...

  10. Common Vestibular Disorders

    OpenAIRE

    Balatsouras, Dimitrios G

    2017-01-01

    The three most common vestibular diseases, benign paroxysmal positional vertigo (BPPV), Meniere's disease (MD) and vestibular neuritis (VN), are presented in this paper. BPPV, which is the most common peripheral vestibular disorder, can be defined as transient vertigo induced by a rapid head position change, associated with a characteristic paroxysmal positional nystagmus. Canalolithiasis of the posterior semicircular canal is considered the most convincing theory of its pathogenesis and the ...

  11. Personal computer viruses

    Energy Technology Data Exchange (ETDEWEB)

    Cremonesi, C.; Martella, G. (Milan Univ. (Italy). Dipt. di Scienza dell' Informazione)

    1991-01-01

    This article reveals the origin and nature of what is known as the 'computer virus'. For illustrative purposes, the most common types of computer viruses are described and classified; their contagion and damage mechanisms are analyzed. Then techniques are presented to assist wary users in detecting and removing viruses, as well as in protecting their computer systems from becoming contaminated.

  12. The Common Geometry Module (CGM).

    Energy Technology Data Exchange (ETDEWEB)

    Tautges, Timothy James

    2004-12-01

    The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  13. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  14. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... limitations of CT Scanning of the Head? What is CT Scanning of the Head? Computed tomography, more commonly known as a ... top of page What are some common uses of the procedure? CT scanning of the head is ...

  15. The Common Good

    DEFF Research Database (Denmark)

    Feldt, Liv Egholm

    being "polluted" by the state and market logic and maintain their distinctness rooted in civil society´s values and logics. Through a historical case analysis of the Egmont Foundation from Denmark (a corporate philanthropic foundation from 1920), the paper shows how concrete gift-giving practices...... and concepts continuously over time have blurred the different sectors and “polluted” contemporary definitions of the “common good”. The analysis shows that “the common good” is not an autonomous concept owned or developed by specific spheres of society. The analysis stresses that historically, “the common...... good” has always been a contested concept. It is established through messy and blurred heterogeneity of knowledge, purposes and goal achievements originating from a multitude of scientific, religious, political and civil society spheres contested not only in terms of words and definitions but also...

  16. COMMON FISCAL POLICY

    Directory of Open Access Journals (Sweden)

    Gabriel Mursa

    2014-08-01

    Full Text Available The purpose of this article is to demonstrate that a common fiscal policy, designed to support the euro currency, has some significant drawbacks. The greatest danger is the possibility of leveling the tax burden in all countries. This leveling of the tax is to the disadvantage of countries in Eastern Europe, in principle countries poorly endowed with capital, that use a lax fiscal policy (Romania, Bulgaria, etc.) to attract foreign investment from rich countries of the European Union. In addition, a common fiscal policy can lead to a higher degree of centralization of budgetary expenditures in the European Union.

  17. Optimal blind quantum computation.

    Science.gov (United States)

    Mantri, Atul; Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2013-12-06

    Blind quantum computation allows a client with limited quantum capabilities to interact with a remote quantum computer to perform an arbitrary quantum computation, while keeping the description of that computation hidden from the remote quantum computer. While a number of protocols have been proposed in recent years, little is currently understood about the resources necessary to accomplish the task. Here, we present general techniques for upper and lower bounding the quantum communication necessary to perform blind quantum computation, and use these techniques to establish concrete bounds for common choices of the client's quantum capabilities. Our results show that the universal blind quantum computation protocol of Broadbent, Fitzsimons, and Kashefi, comes within a factor of 8/3 of optimal when the client is restricted to preparing single qubits. However, we describe a generalization of this protocol which requires exponentially less quantum communication when the client has a more sophisticated device.

  18. Programming in biomolecular computation

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Our goal is to provide a top-down approach to biomolecular computation. In spite of widespread discussion about connections between biology and computation, one question seems notable by its absence: Where are the programs? We identify a number of common features in programming that seem conspicuously absent from the literature on biomolecular computing; to partially redress this absence, we introduce a model of computation that is evidently programmable, by programs reminiscent of low-level computer machine code; and at the same time biologically plausible: its functioning is defined by a single and relatively small set of chemical-like reaction rules. Further properties: the model is stored-program: programs are the same as data, so programs are not only executable, but are also compilable and interpretable. It is universal: all computable functions can be computed (in natural ways...

  19. AN ALGORITHM FOR ASSEMBLING A COMMON IMAGE OF VLSI LAYOUT

    Directory of Open Access Journals (Sweden)

    Y. Y. Lankevich

    2015-01-01

    Full Text Available We consider the problem of assembling a common image of a VLSI layout. The common image is composed of frames obtained by electron-microscope photography. The large number of frames requires a lot of computation for positioning each frame inside the common image. Employing graphics processing units enables acceleration of the computations. We have implemented algorithms and programs for assembling a common image of a VLSI layout. A specific feature of this work is the use of CUDA capabilities to reduce computation time. Experimental results show the efficiency of the proposed programs.
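    The core positioning step in such an assembly can be sketched as an FFT-based cross-correlation: find where a frame best matches the partially assembled mosaic. NumPy on the CPU is used here purely for illustration; the paper offloads this kind of computation to CUDA-capable GPUs.

```python
# Illustrative frame positioning via FFT-based cross-correlation (CPU/NumPy).
import numpy as np

def best_offset(mosaic, frame):
    """Return (row, col) of the frame's best match inside the mosaic."""
    F_mosaic = np.fft.rfft2(mosaic)
    F_frame = np.fft.rfft2(frame, s=mosaic.shape)
    corr = np.fft.irfft2(F_mosaic * np.conj(F_frame), s=mosaic.shape)
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
mosaic = rng.random((512, 512))
frame = mosaic[100:228, 300:428].copy()   # crop with a known offset
print(best_offset(mosaic, frame))         # expected: (100, 300)
```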

  20. Common Tests for Arrhythmia

    Science.gov (United States)

    ... help your doctor diagnose an arrhythmia. View an animation of arrhythmia. Common Tests for Arrhythmia Holter monitor ( ...

  1. 'Crossing a Bare Common'

    DEFF Research Database (Denmark)

    Balle, Søren Hattesen

    2006-01-01

    than not described as a ‘sublime rhetoric’. From the stock of rhetorical tropes the most favoured by Emerson and picked out as the trademark of his rhetorical sublimity critics mention in particular his use of hyperbole, chiasmus and metalepsis. Common to all three tropes is said to be their ability...

  2. Sequential Common Agency

    NARCIS (Netherlands)

    Prat, A.; Rustichini, A.

    1998-01-01

    In a common agency game a set of principals promises monetary transfers to an agent which depend on the action he will take. The agent then chooses the action, and is paid the corresponding transfers. Principals announce their transfers simultaneously. This game has many equilibria; Bernheim and

  3. Common mistakes of investors

    Directory of Open Access Journals (Sweden)

    Yuen Wai Pong Raymond

    2012-09-01

    Full Text Available Behavioral finance is an actively discussed topic in the academic and investment circle. The main reason is that behavioral finance challenges the validity of a cornerstone of modern financial theory: the rationality of investors. In this paper, the common irrational behaviors of investors are discussed.

  4. Common eye emergencies

    African Journals Online (AJOL)

    2007-10-11

    Oct 11, 2007 ... Common eye emergencies may present as an acute red eye, sudden visual loss or acute ocular trauma. Most eye emergencies will require referral to an ophthalmologist after initial basic examination and primary management. A relevant history of onset and symptoms of the current problem must be ...

  5. Common Dermatoses of Infancy

    OpenAIRE

    Gora, Irv

    1986-01-01

    Within the pediatric population of their practices, family physicians frequently encounter infants with skin rashes. This article discusses several of the more common rashes of infancy: atopic dermatitis, cradle cap, diaper dermatitis and miliaria. Etiology, clinical picture and possible approaches to treatment are presented.

  6. Common dermatoses of infancy.

    Science.gov (United States)

    Gora, I

    1986-09-01

    Within the pediatric population of their practices, family physicians frequently encounter infants with skin rashes. This article discusses several of the more common rashes of infancy: atopic dermatitis, cradle cap, diaper dermatitis and miliaria. Etiology, clinical picture and possible approaches to treatment are presented.

  7. A Culture in Common.

    Science.gov (United States)

    Ravitch, Diane

    1991-01-01

    If public schools abandon their historic common school mission to promote racial and ethnic separatism, they will forfeit their claim to public support. If they remain true to their historic role, the public schools will rightfully serve as a bulwark against ethnic chauvinism and counter the forces of social fragmentation by instilling democratic…

  8. Common Magnets, Unexpected Polarities

    Science.gov (United States)

    Olson, Mark

    2013-01-01

    In this paper, I discuss a "misconception" in magnetism so simple and pervasive as to be typically unnoticed. That magnets have poles might be considered one of the more straightforward notions in introductory physics. However, the magnets common to students' experiences are likely different from those presented in educational…

  9. Computers and Computer Cultures.

    Science.gov (United States)

    Papert, Seymour

    1981-01-01

    Instruction using computers is viewed as different from most other approaches to education, by allowing more than right or wrong answers, by providing models for systematic procedures, by shifting the boundary between formal and concrete processes, and by influencing the development of thinking in many new ways. (MP)

  10. Common tester platform concept.

    Energy Technology Data Exchange (ETDEWEB)

    Hurst, Michael James

    2008-05-01

    This report summarizes the results of a case study on the doctrine of a common tester platform, a concept of a standardized platform that can be applicable across the broad spectrum of testing requirements throughout the various stages of a weapons program, as well as across the various weapons programs. The common tester concept strives to define an affordable, next-generation design that will meet testing requirements with the flexibility to grow and expand; supporting the initial development stages of a weapons program through to the final production and surveillance stages. This report discusses a concept investigating key leveraging technologies and operational concepts, combined with prototype tester-development experiences and practical lessons gleaned from past weapons programs.

  11. Cloud Computing

    Indian Academy of Sciences (India)

    IAS Admin

    2014-03-01

    Mar 1, 2014 ... group of computers connected to the Internet in a cloud-like boundary (Box 1). In essence, computing is transitioning from an era of users owning computers to one in which users do not own computers but have access to computing hardware and software maintained by providers. Users access the ...

  12. Management of common fractures.

    Science.gov (United States)

    Walker, Jennie

    2013-02-01

    The incidence of fractures increases with advancing age partly due to the presence of multiple comorbidities and increased risk of falls. Common fracture sites in older people include femoral neck, distal radius and vertebral bodies. Nurses have an important role in caring for older patients who have sustained fractures, not only to maximise function and recovery, but as part of a team to minimise the morbidity and mortality associated with fractures in this group.

  13. Common sense codified

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    At CERN, people of more than a hundred different nationalities and hundreds of different professions work together towards a common goal. The new Code of Conduct is a tool that has been designed to help us keep our workplace pleasant and productive through common standards of behaviour. Its basic principle is mutual respect and common sense. This is only natural, but not trivial… The Director-General announced it in his speech at the beginning of the year, and the Bulletin wrote about it immediately afterwards. "It" is the new Code of Conduct, the document that lists our Organization's values and describes the basic standards of behaviour that we should both adopt and expect from others. "The Code of Conduct is not going to establish new rights or new obligations," explains Anne-Sylvie Catherin, Head of the Human Resources Department (HR). "But what it will do is provide a framework for our existing rights and obligations." The aim of a co...

  14. Common HEP UNIX Environment

    Science.gov (United States)

    Taddei, Arnaud

    After it had been decided to design a common user environment for UNIX platforms among HEP laboratories, a joint project between DESY and CERN was started. The project consists of two phases: 1. Provide a common user environment at shell level; 2. Provide a common user environment at graphical level (X11). Phase 1 is in production at DESY and at CERN, as well as at PISA and RAL. It has been developed around the scripts originally designed at DESY Zeuthen, improved and extended in a two-month project at CERN with a contribution from DESY Hamburg. It consists of a set of files which customize the environment for the six main shells (sh, csh, ksh, bash, tcsh, zsh) on the main platforms (AIX, HP-UX, IRIX, SunOS, Solaris 2, OSF/1, ULTRIX, etc.), and it is divided into several "sociological" levels: HEP, site, machine, cluster, group of users and user, some of which are optional. The second phase is under design and a first proposal has been published. A first version of phase 2 already exists for AIX and Solaris, and it should be available for all other platforms by the time of the conference. This is a major collective work between several HEP laboratories involved in the HEPiX-scripts and HEPiX-X11 working groups.
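
    The layered, "sociological" customisation described above can be illustrated with a small sketch. This is only a didactic analogue in Python with invented settings; the actual HEPiX work consists of shell scripts per shell and platform, not Python code.

    ```python
    # Minimal sketch of the layered-customisation idea: settings defined at a
    # more specific "sociological" level override those from more general
    # levels. The level names follow the record; the settings are invented.
    from collections import ChainMap

    LEVELS = ["hep", "site", "machine", "cluster", "group", "user"]  # general -> specific

    def resolve_environment(per_level: dict[str, dict[str, str]]) -> dict[str, str]:
        """Merge level dictionaries so that more specific levels win."""
        # ChainMap looks up keys left to right, so put the most specific level first.
        maps = [per_level.get(level, {}) for level in reversed(LEVELS)]
        return dict(ChainMap(*maps))

    if __name__ == "__main__":
        example = {
            "hep":  {"PRINTER": "hep-default", "EDITOR": "vi"},
            "site": {"PRINTER": "cern-b513"},
            "user": {"EDITOR": "emacs"},
        }
        print(resolve_environment(example))
        # {'PRINTER': 'cern-b513', 'EDITOR': 'emacs'}
    ```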

  15. 'Historicising common sense'.

    Science.gov (United States)

    Millstone, Noah

    2012-12-01

    This essay is an expanded set of comments on the social psychology papers written for the special issue on History and Social Psychology. It considers what social psychology, and particularly the theory of social representations, might offer historians working on similar problems, and what historical methods might offer social psychology. The social history of thinking has been a major theme in twentieth and twenty-first century historical writing, represented most recently by the genre of 'cultural history'. Cultural history and the theory of social representations have common ancestors in early twentieth-century social science. Nevertheless, the two lines of research have developed in different ways and are better seen as complementary than similar. The theory of social representations usefully foregrounds issues, like social division and change over time, that cultural history relegates to the background. But for historians, the theory of social representations seems oddly fixated on comparing the thought styles associated with positivist science and 'common sense'. Using historical analysis, this essay tries to dissect the core opposition 'science : common sense' and argues for a more flexible approach to comparing modes of thought.

  16. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available Pediatric computed tomography (CT) is a fast, painless exam that uses ... Computed tomography, more commonly known as a CT or CAT ...

  17. Upgrading and Expanding Lustre Storage for use with the WLCG

    Science.gov (United States)

    Traynor, D. P.; Froy, T. S.; Walker, C. J.

    2017-10-01

    The Queen Mary University of London Grid site’s Lustre file system has recently undergone a major upgrade from version 1.8 to the most recent 2.8 release, and a capacity increase to over 3 PB. Lustre is an open source, POSIX compatible, clustered file system presented to the Grid using the StoRM Storage Resource Manager. The motivation for and benefits of upgrading, including hardware and software choices, are discussed. The testing, performance improvements and data migration procedure are outlined, as are the source code modifications needed for StoRM compatibility. Benchmarks and real-world performance are presented and future plans discussed.

  18. Common Lisp a gentle introduction to symbolic computation

    CERN Document Server

    Touretzky, David S

    2013-01-01

    This highly accessible introduction to Lisp is suitable both for novices approaching their first programming language and experienced programmers interested in exploring a key tool for artificial intelligence research. The text offers clear, reader-friendly explanations of such essential concepts as cons cell structures, evaluation rules, programs as data, and recursive and applicative programming styles. The treatment incorporates several innovative instructional devices, such as the use of function boxes in the first two chapters to visually distinguish functions from data, use of evaltrace

  19. Verifying the Absence of Common Runtime Errors in Computer Programs

    Science.gov (United States)

    1981-06-01

    the language designer another way to test a design, in addition to the usual ways based on experience with other languages and difficulty of...simplified verification conditions is a special skill that one must learn in order to use the verifier. In the process of analyzing a VC, one notes...Discovery of Linear Restraints Among Variables of a Program, Proceedings of the Fifth ACM Symposium on Principles of Programming Languages, January

  20. Quantum Computation

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 16, Issue 9. Quantum Computation - Particle and Wave Aspects of Algorithms. Apoorva Patel. General Article ... Keywords: Boolean logic; computation; computational complexity; digital language; Hilbert space; qubit; superposition; Feynman.

  1. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  2. Common Vestibular Disorders

    Directory of Open Access Journals (Sweden)

    Dimitrios G. Balatsouras

    2017-01-01

    Full Text Available The three most common vestibular diseases, benign paroxysmal positional vertigo (BPPV), Meniere's disease (MD) and vestibular neuritis (VN), are presented in this paper. BPPV, which is the most common peripheral vestibular disorder, can be defined as transient vertigo induced by a rapid head position change, associated with a characteristic paroxysmal positional nystagmus. Canalolithiasis of the posterior semicircular canal is considered the most convincing theory of its pathogenesis and the development of appropriate therapeutic maneuvers resulted in its effective treatment. However, involvement of the horizontal or the anterior canal has been found at a significant rate and the recognition and treatment of these variants completed the clinical picture of the disease. MD is a chronic condition characterized by episodic attacks of vertigo, fluctuating hearing loss, tinnitus, aural pressure and a progressive loss of audiovestibular functions. Presence of endolymphatic hydrops on postmortem examination is its pathologic correlate. MD continues to be a diagnostic and therapeutic challenge. Patients with the disease range from minimally symptomatic, highly functional individuals to severely affected, disabled patients. Current management strategies are designed to control the acute and recurrent vestibulopathy but offer minimal remedy for the progressive cochlear dysfunction. VN is the most common cause of acute spontaneous vertigo, attributed to acute unilateral loss of vestibular function. Key signs and symptoms are an acute onset of spinning vertigo, postural imbalance and nausea as well as a horizontal rotatory nystagmus beating towards the non-affected side, a pathological head-impulse test and no evidence for central vestibular or ocular motor dysfunction. Vestibular neuritis preferentially involves the superior vestibular labyrinth and its afferents. Symptomatic medication is indicated only during the acute phase to relieve the vertigo and nausea.

  3. English for common entrance

    CERN Document Server

    Kossuth, Kornel

    2013-01-01

    Succeed in the exam with this revision guide, designed specifically for the brand new Common Entrance English syllabus. It breaks down the content into manageable and straightforward chunks with easy-to-use, step-by-step instructions that should take away the fear of CE and guide you through all aspects of the exam. - Gives you step-by-step guidance on how to recognise various types of comprehension questions and answer them. - Shows you how to write creatively as well as for a purpose for the section B questions. - Reinforces and consolidates learning with tips, guidance and exercises through

  4. Common Ground and Delegation

    DEFF Research Database (Denmark)

    Dobrajska, Magdalena; Foss, Nicolai Juul; Lyngsie, Jacob

    Much recent research suggests that firms need to increase their level of delegation to better cope with, for example, the challenges introduced by dynamic rapid environments and the need to engage more with external knowledge sources. However, there is less insight into the organizational...... preconditions of increasing delegation. We argue that key HR practices, namely hiring, training and job-rotation, are associated with delegation of decision-making authority. These practices assist in the creation of shared knowledge conditions between managers and employees. In turn, such a "common ground"...

  5. Common Sense Biblical Hermeneutics

    Directory of Open Access Journals (Sweden)

    Michael B. Mangini

    2014-12-01

    Full Text Available Since the noetics of moderate realism provide a firm foundation upon which to build a hermeneutic of common sense, in the first part of his paper the author adopts Thomas Howe’s argument that the noetical aspect of moderate realism is a necessary condition for correct, universally valid biblical interpretation, but he adds, “insofar as it gives us hope in discovering the true meaning of a given passage.” In the second part, the author relies on John Deely’s work to show how semiotics may help interpreters go beyond meaning and seek the significance of the persons, places, events, ideas, etc., of which the meaning of the text has presented as objects to be interpreted. It is in significance that the unity of Scripture is found. The chief aim is what every passage of the Bible signifies. Considered as a genus, Scripture is composed of many parts/species that are ordered to a chief aim. This is the structure of common sense hermeneutics; therefore in the third part the author restates Peter Redpath’s exposition of Aristotle and St. Thomas’s ontology of the one and the many and analogously applies it to the question of how an exegete can discern the proper significance and faithfully interpret the word of God.

  6. True and common balsams

    Directory of Open Access Journals (Sweden)

    Dayana L. Custódio

    2012-08-01

    Full Text Available Balsams have been used since ancient times, due to their therapeutic and healing properties; in the perfume industry, they are used as fixatives, and in the cosmetics industry and in cookery, they are used as preservatives and aromatizers. They are generally defined as vegetable material with highly aromatic properties that supposedly have the ability to heal diseases, not only of the body, but also of the soul. When viewed according to this concept, many substances can be considered balsams. A more modern concept is based on its chemical composition and origin: a secretion or exudate of plants that contain cinnamic and benzoic acids, and their derivatives, in their composition. The most common naturally-occurring balsams (i.e., true balsams) are the Benzoins, Liquid Storaque and the Balsams of Tolu and Peru. Many other aromatic exudates, such as Copaiba Oil and Canada Balsam, are wrongly called balsam. These usually belong to other classes of natural products, such as essential oils, resins and oleoresins. Despite the understanding of some plants, many plants are still called balsams. This article presents a chemical and pharmacological review of the most common balsams.

  7. Developing a Science Commons for Geosciences

    Science.gov (United States)

    Lenhardt, W. C.; Lander, H.

    2016-12-01

    Many scientific communities, recognizing the research possibilities inherent in data sets, have created domain specific archives such as the Incorporated Research Institutions for Seismology (iris.edu) and ClinicalTrials.gov. Though this is an important step forward, most scientists, including geoscientists, also use a variety of software tools and at least some amount of computation to conduct their research. While the archives make it simpler for scientists to locate the required data, provisioning disk space, compute resources, and network bandwidth can still require significant efforts. This challenge exists despite the wealth of resources available to researchers, namely lab IT resources, institutional IT resources, national compute resources (XSEDE, OSG), private clouds, public clouds, and the development of cyberinfrastructure technologies meant to facilitate use of those resources. Further tasks include obtaining and installing required tools for analysis and visualization. If the research effort is a collaboration or involves certain types of data, then the partners may well have additional non-scientific tasks such as securing the data and developing secure sharing methods for the data. These requirements motivate our investigations into the "Science Commons". This paper will present a working definition of a science commons, compare and contrast examples of existing science commons, and describe a project based at RENCI to implement a science commons for risk analytics. We will then explore what a similar tool might look like for the geosciences.

  8. 26 CFR 1.584-2 - Income of participants in common trust fund.

    Science.gov (United States)

    2010-04-01

    Income of participants in common trust fund (26 CFR 1.584-2, 2010). (a) Each participant in a common trust fund is required to include in computing its ... ordinary taxable income or the ordinary net loss of the common trust fund, computed as provided in § 1.584...

  9. [Halitosis. A common problem].

    Science.gov (United States)

    Laine, M L; Slot, D E; Danser, M M

    2011-12-01

    Halitosis is a frequently occurring problem, the cause of which is generally to be found in the mouth. The challenge for oral health care providers is to diagnose it correctly and treat it effectively. Differential diagnosis is of great importance in making a distinction between halitosis which originates in the mouth and which does not originate in the mouth. Oral halitosis can be treated effectively by good oral health care. Plaque accumulation on the tongue is the most common cause of oral halitosis. Tongue cleansing, possibly in combination with a specific mouth wash, is consequently recommended as an element of oral hygiene care. Other oral health problems, such as periodontal disease, caries and ill-fitting removable dentures should be treated adequately to eliminate these problems as potential causes of halitosis.

  10. Challenging some common beliefs

    Directory of Open Access Journals (Sweden)

    Arndt Broder

    2008-03-01

    Full Text Available The authors review their own empirical work inspired by the adaptive toolbox metaphor. The review examines factors influencing strategy selection and execution in multi-attribute inference tasks (e.g., information costs, time pressure, memory retrieval, dynamic environments, stimulus formats, intelligence). An emergent theme is the re-evaluation of contingency model claims about the elevated cognitive costs of compensatory in comparison with non-compensatory strategies. Contrary to common assertions about the impact of cognitive complexity, the empirical data suggest that manipulated variables exert their influence at the meta-level of deciding how to decide (i.e., which strategy to select) rather than at the level of strategy execution. An alternative conceptualisation of strategy selection, namely threshold adjustment in an evidence accumulation model, is also discussed and the difficulty in distinguishing empirically between these metaphors is acknowledged.

  11. Quaternion Common Spatial Patterns.

    Science.gov (United States)

    Enshaeifar, S; Took, C Cheong; Park, C; Mandic, D P

    2017-08-01

    A novel quaternion-valued common spatial patterns (QCSP) algorithm is introduced to model co-channel coupling of multi-dimensional processes. To cater for the generality of quaternion-valued non-circular data, we propose a generalized QCSP (G-QCSP) which incorporates the information on power difference between the real and imaginary parts of data channels. As an application, we demonstrate how G-QCSP can be used to provide high classification rates, even at a signal-to-noise ratio (SNR) as low as -10 dB. To illustrate the usefulness of our method in EEG analysis, we employ G-QCSP to extract features for discriminating between imagery left and right hand movements. The classification accuracy using these features is 70%. Furthermore, the proposed method is used to distinguish between Parkinson's disease (PD) patients and healthy control subjects, providing an accuracy of 87%.
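
    For readers unfamiliar with the baseline that QCSP generalises, the following is a brief, hedged sketch of ordinary real-valued Common Spatial Patterns via a generalised eigendecomposition of the two class covariance matrices. It is not the authors' quaternion-valued G-QCSP; the data shapes and helper names are assumptions made for the example.

    ```python
    # Sketch of ordinary (real-valued) Common Spatial Patterns, the classical
    # algorithm that QCSP/G-QCSP generalise to quaternion-valued signals.
    # Didactic illustration only, not the authors' implementation.
    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a: np.ndarray, trials_b: np.ndarray, n_pairs: int = 2) -> np.ndarray:
        """trials_*: arrays of shape (n_trials, n_channels, n_samples).
        Returns 2*n_pairs spatial filters (rows) maximising the variance ratio
        between the two classes."""
        def mean_cov(trials):
            covs = [np.cov(t) for t in trials]     # channel x channel per trial
            return np.mean(covs, axis=0)

        ca, cb = mean_cov(trials_a), mean_cov(trials_b)
        # Generalised eigenvalue problem: ca w = lambda (ca + cb) w
        eigvals, eigvecs = eigh(ca, ca + cb)
        order = np.argsort(eigvals)                 # ascending eigenvalues
        picked = np.concatenate([order[:n_pairs], order[-n_pairs:]])
        return eigvecs[:, picked].T

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        a = rng.normal(size=(20, 8, 256)) * np.array([3] + [1] * 7)[None, :, None]
        b = rng.normal(size=(20, 8, 256)) * np.array([1] * 7 + [3])[None, :, None]
        W = csp_filters(a, b)
        print(W.shape)  # (4, 8): four spatial filters over eight channels
    ```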

  12. Common Influence Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Mamoulis, Nikos; Karras, Panagiotis

    2008-01-01

    We identify and formalize a novel join operator for two spatial pointsets P and Q. The common influence join (CIJ) returns the pairs of points (p,q), p ∈ P, q ∈ Q, such that there exists a location in space, being closer to p than to any other point in P and at the same time closer to q than...... to any other point in Q. In contrast to existing join operators between pointsets (i.e., ε-distance joins and k-closest pairs), CIJ is parameter-free, providing a natural join result that finds application in marketing and decision support. We propose algorithms for the efficient evaluation of CIJ......-demand, is very efficient in practice, incurring only slightly higher I/O cost than the theoretical lower bound cost for the problem....
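
    The CIJ definition can be illustrated with a deliberately naive approximation: sample candidate locations on a grid and pair each location's nearest P point with its nearest Q point. This sketch only approximates the underlying Voronoi-cell-intersection semantics and is unrelated to the efficient index-based algorithms proposed in the record; all names and parameters are invented.

    ```python
    # Naive, didactic approximation of the common influence join (CIJ):
    # each sampled location contributes the pair (nearest P point, nearest Q point).
    import numpy as np

    def approx_cij(P: np.ndarray, Q: np.ndarray, grid: int = 200) -> set[tuple[int, int]]:
        """P, Q: arrays of shape (n, 2). Returns index pairs (i, j) such that some
        sampled location is closer to P[i] than to any other P point and closer
        to Q[j] than to any other Q point."""
        lo = np.minimum(P.min(0), Q.min(0))
        hi = np.maximum(P.max(0), Q.max(0))
        xs = np.linspace(lo[0], hi[0], grid)
        ys = np.linspace(lo[1], hi[1], grid)
        pts = np.array([(x, y) for x in xs for y in ys])          # candidate locations
        near_p = np.argmin(((pts[:, None, :] - P[None]) ** 2).sum(-1), axis=1)
        near_q = np.argmin(((pts[:, None, :] - Q[None]) ** 2).sum(-1), axis=1)
        return set(zip(near_p.tolist(), near_q.tolist()))

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        P, Q = rng.random((10, 2)), rng.random((12, 2))
        print(len(approx_cij(P, Q)), "approximate CIJ pairs")
    ```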

  13. 'Crossing a Bare Common'

    DEFF Research Database (Denmark)

    Balle, Søren Hattesen

    2006-01-01

    and thinking about the sublime. To put it very simply, this means to present a version of the sublime in which transcendence is transposed ‘into a naturalistic key’ – to use Thomas Weiskel’s apt phrase. Admittedly, this is hardly a new, nor a very original way of situating Emerson’s sublime. Here I follow...... Emerson critics to construe his exorbitant rhetorical style as somehow perfectly conducive to achieving the sense of transcendence that his effort to convey an American’s experience of the sublime would have to involve. Characteristically, Emerson’s highly charged eloquence on the sublime is more often...... than not described as a ‘sublime rhetoric’. From the stock of rhetorical tropes the most favoured by Emerson and picked out as the trademark of his rhetorical sublimity critics mention in particular his use of hyperbole, chiasmus and metalepsis. Common to all three tropes is said to be their ability...

  14. Common Superficial Bursitis.

    Science.gov (United States)

    Khodaee, Morteza

    2017-02-15

    Superficial bursitis most often occurs in the olecranon and prepatellar bursae. Less common locations are the superficial infrapatellar and subcutaneous (superficial) calcaneal bursae. Chronic microtrauma (e.g., kneeling on the prepatellar bursa) is the most common cause of superficial bursitis. Other causes include acute trauma/hemorrhage, inflammatory disorders such as gout or rheumatoid arthritis, and infection (septic bursitis). Diagnosis is usually based on clinical presentation, with a particular focus on signs of septic bursitis. Ultrasonography can help distinguish bursitis from cellulitis. Blood testing (white blood cell count, inflammatory markers) and magnetic resonance imaging can help distinguish infectious from noninfectious causes. If infection is suspected, bursal aspiration should be performed and fluid examined using Gram stain, crystal analysis, glucose measurement, blood cell count, and culture. Management depends on the type of bursitis. Acute traumatic/hemorrhagic bursitis is treated conservatively with ice, elevation, rest, and analgesics; aspiration may shorten the duration of symptoms. Chronic microtraumatic bursitis should be treated conservatively, and the underlying cause addressed. Bursal aspiration of microtraumatic bursitis is generally not recommended because of the risk of iatrogenic septic bursitis. Although intrabursal corticosteroid injections are sometimes used to treat microtraumatic bursitis, high-quality evidence demonstrating any benefit is unavailable. Chronic inflammatory bursitis (e.g., gout, rheumatoid arthritis) is treated by addressing the underlying condition, and intrabursal corticosteroid injections are often used. For septic bursitis, antibiotics effective against Staphylococcus aureus are generally the initial treatment, with surgery reserved for bursitis not responsive to antibiotics or for recurrent cases. Outpatient antibiotics may be considered in those who are not acutely ill; patients who are acutely ill

  15. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  16. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....

  17. Quantum computing

    OpenAIRE

    Traub, Joseph F.

    2014-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  18. (UnCommonly Connected

    Directory of Open Access Journals (Sweden)

    Emily M. Hodge

    2016-11-01

    Full Text Available As states continue to implement the Common Core State Standards (CCSS), state educational agencies (SEAs) are providing professional development and curricular resources to help districts and teachers understand the standards. However, little is known about the resources SEAs endorse, the states and/or organizations sponsoring these resources, and how states and organizations are connected. This study investigates the secondary English/language arts resources provided by 51 SEAs (2,023 resources sponsored by 51 SEAs and 262 intermediary organizations). Social network analysis of states and sponsoring organizations revealed a core-periphery network in which certain states and organizations were frequently named as the sponsors of resources, while other organizations were named as resource sponsors by only one state. SEAs are providing a variety of types of resources, including professional development, curriculum guidelines, articles, and instructional aids. This study offers insight into the most influential actors providing CCSS resources at the state level, as well as how SEAs are supporting instructional capacity through the resources they provide for teachers.

  19. Building the common

    DEFF Research Database (Denmark)

    Agustin, Oscar Garcia

    In opposition to positivism the so called postpositivism reject the emphasis on the empirical truth and proposes an interpretative approach to the social world (Fischer, 1993). Policy analysis begins to address the sense-making constructions and the competing discourses on social meanings whilst ...... on migration as positive for economy (and demography) and its realistic acceptation (the immigration flows will not decrease) is partly based on its reduction to an economic (as legal) or security (as illegal) issue that can be managed with appropriate means....... a wider social approach based on Maarten Hayer’s (1995) discourse analysis of policy making and the more linguistic one, specifically Ruth Wodak’s (Reisigl & Wodak, 2001; Wodak & Weiss, 2005) Historic Discourse Approach (HDA). Thus I will be able to identify which discursive structure on immigration...... document, A Common Immigration Policy for Europe: Principles, actions and tools (2008) as a part of Hague Programme (2004) on actions against terrorism, organised crime and migration and asylum management and influenced by the renewed Lisbon Strategy (2005-2010) for growth and jobs. My aim is to explore...

  20. Indirection and computer security.

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Michael J.

    2011-09-01

    The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

  1. Urban green commons: Insights on urban common property systems

    NARCIS (Netherlands)

    Colding, J.; Barthel, S.; Bendt, P.; Snep, R.P.H.; Knaap, van der W.G.M.; Ernstson, H.

    2013-01-01

    The aim of this paper is to shed new light on urban common property systems. We deal with urban commons in relation to urban green-space management, referring to them as urban green commons. Applying a property-rights analytic perspective, we synthesize information on urban green commons from three

  2. Experts' views on digital competence: commonalities and differences

    NARCIS (Netherlands)

    Janssen, José; Stoyanov, Slavi; Ferrari, Anusca; Punie, Yves; Pannekeet, Kees; Sloep, Peter

    2013-01-01

    Janssen, J., Stoyanov, S., Ferrari, A., Punie, Y., Pannekeet, K., & Sloep, P. B. (2013). Experts’ views on digital competence: commonalities and differences. Computers & Education, 68, 473–481. doi:10.1016/j.compedu.2013.06.008

  3. Adjusting the fairshare policy to prevent computing power loss

    Science.gov (United States)

    Dal Pra, Stefano

    2017-10-01

    On a typical WLCG site providing batch access to computing resources according to a fairshare policy, the idle time lapse after a job ends and before a new one begins on a given slot is negligible if compared to the duration of typical jobs. The overall amount of these intervals over a time window increases with the size of the cluster and the inverse of job duration and can be considered equivalent to an average number of unavailable slots over that time window. This value has been investigated for the Tier-1 at CNAF, and observed to occasionally grow and reach more than 10% of the roughly 20,000 available computing slots. Analysis reveals that this happens when a sustained rate of short jobs is submitted to the cluster and dispatched by the batch system. Because of how the default fairshare policy works, it increases the dynamic priority of those users mostly submitting short jobs, since they are not accumulating runtime, and will dispatch more of their jobs at the next round, thus worsening the situation until the submission flow ends. To address this problem, the default behaviour of the fairshare has been altered by adding a correcting term to the default formula for the dynamic priority. The LSF batch system, currently adopted at CNAF, provides a way to define its value by invoking a C function, which returns it for each user in the cluster. The correcting term works by rounding up to a minimum defined runtime the most recently done jobs. Doing so, each short job looks almost like a regular one and the dynamic priority settles to a proper value. The net effect is a reduction of the dispatching rate of short jobs and, consequently, the average number of available slots greatly improves. Furthermore, a potential starvation problem, actually observed at least once, is also prevented. After describing short jobs and reporting about their impact on the cluster, possible workarounds are discussed and the selected solution is motivated. Details on the
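
    The correcting term can be illustrated with a small sketch of the idea: round the runtime of recently finished jobs up to a minimum before it enters the dynamic-priority formula. The real deployment supplies this value to LSF through a C plugin; the toy priority formula and the 1800-second floor below are assumptions for illustration only.

    ```python
    # Simplified sketch of the fairshare correction described above: when
    # accumulating the recent runtime that lowers a user's dynamic priority,
    # round each recently finished job up to a minimum duration so that a burst
    # of very short jobs "costs" roughly as much as regular jobs would.
    # The priority formula and the floor value are illustrative assumptions,
    # not the actual LSF formula or the CNAF configuration.

    MIN_RUNTIME_S = 1800.0   # assumed floor applied to recently finished jobs

    def corrected_runtime(recent_job_runtimes_s: list[float]) -> float:
        """Total recent runtime with each short job rounded up to the floor."""
        return sum(max(r, MIN_RUNTIME_S) for r in recent_job_runtimes_s)

    def dynamic_priority(share: float, recent_job_runtimes_s: list[float],
                         decay: float = 1e-5) -> float:
        """Toy fairshare priority: higher share raises it, accumulated
        (corrected) runtime lowers it."""
        return share / (1.0 + decay * corrected_runtime(recent_job_runtimes_s))

    if __name__ == "__main__":
        short_burst = [30.0] * 200      # 200 jobs of 30 s each
        regular = [3600.0] * 2          # 2 one-hour jobs
        print(dynamic_priority(1.0, short_burst))  # penalised as if jobs were longer
        print(dynamic_priority(1.0, regular))
    ```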

  4. Computational dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Siebert, B.R.L.; Thomas, R.H.

    1996-01-01

    The paper presents a definition of the term "Computational Dosimetry" that is interpreted as the sub-discipline of computational physics which is devoted to radiation metrology. It is shown that computational dosimetry is more than a mere collection of computational methods. Computational simulations directed at basic understanding and modelling are important tools provided by computational dosimetry, while another very important application is the support that it can give to the design, optimization and analysis of experiments. However, the primary task of computational dosimetry is to reduce the variance in the determination of absorbed dose (and its related quantities), for example in the disciplines of radiological protection and radiation therapy. In this paper emphasis is given to the discussion of potential pitfalls in the applications of computational dosimetry and recommendations are given for their avoidance. The need for comparison of calculated and experimental data whenever possible is strongly stressed.

  5. Quantum computing

    OpenAIRE

    Li, Shu-shen; Long, Gui-Lu; Bai, Feng-Shan; Feng, Song-Lin; Zheng, Hou-Zhi

    2001-01-01

    Quantum computing is a quickly growing research field. This article introduces the basic concepts of quantum computing, recent developments in quantum searching, and decoherence in a possible quantum dot realization.

  6. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  7. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot en...... cybernetics and Maturana and Varela’s theory of autopoiesis, which are both erroneously taken to support info-computationalism....

  8. Cognitive Computing

    OpenAIRE

    2015-01-01

    "Cognitive Computing" has initiated a new era in computer science. Cognitive computers are not rigidly programmed computers anymore, but they learn from their interactions with humans, from the environment and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assist medical doctors in diagnosis and th...

  9. Computable models

    CERN Document Server

    Turner, Raymond

    2009-01-01

    Computational models can be found everywhere in present day science and engineering. In providing a logical framework and foundation for the specification and design of specification languages, Raymond Turner uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al

  10. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  11. Computational Complexity

    Directory of Open Access Journals (Sweden)

    J. A. Tenreiro Machado

    2017-02-01

    Full Text Available Complex systems (CS involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...

  12. Optical Computing

    Indian Academy of Sciences (India)

    tal computers are still some years away; however, a number of devices that can ultimately lead to real optical computers have already been manufactured, including optical logic gates, optical switches, optical interconnections, and optical memory. The most likely near-term optical computer will really be a hybrid composed ...

  13. Quantum Computing

    Indian Academy of Sciences (India)

    In the early 1980s Richard Feynman noted that quantum systems cannot be efficiently simulated on a classical computer. Till then the accepted view was that any reasonable model of computation can be efficiently simulated on a classical computer. Hence, this observation led to a lot of rethinking about the basic ...

  14. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  15. Cloud Computing

    Indian Academy of Sciences (India)

    Keywords: cloud computing; services on a cloud; cloud types; computing utility; risks in using cloud computing. Author: V Rajaraman, Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore 560 012, India. Resonance – Journal of Science Education, Vol. 22, Issue 11.

  16. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  17. Motivating Contributions for Home Computer Security

    Science.gov (United States)

    Wash, Richard L.

    2009-01-01

    Recently, malicious computer users have been compromising computers en masse and combining them to form coordinated botnets. The rise of botnets has brought the problem of home computers to the forefront of security. Home computer users commonly have insecure systems; these users do not have the knowledge, experience, and skills necessary to…

  18. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available Computed tomography, more commonly known as a CT or CAT scan, is a diagnostic medical test that, like traditional x-rays, produces multiple images or pictures of the inside of ...

  19. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  20. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  1. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  2. inheritance of resistance to common bacterial blight in common

    African Journals Online (AJOL)

    Prof. Adipala Ekwamu

    INHERITANCE OF RESISTANCE TO COMMON BACTERIAL BLIGHT IN COMMON BEAN. B.Y.E. CHATAIKA, J.M. ... common bacterial blight caused by Xanthomonas axonopodis pv. phaseoli (Xap). Effective breeding for resistance ... (2004) reported great genetic diversity and co-evolution for Xap across geographic ...

  3. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute x86-64 machine code, and recommends th...

  4. Introduction to computer networking

    CERN Document Server

    Robertazzi, Thomas G

    2017-01-01

    This book gives a broad look at both fundamental networking technology and new areas that support it and use it. It is a concise introduction to the most prominent, recent technological topics in computer networking. Topics include network technology such as wired and wireless networks, enabling technologies such as data centers, software defined networking, cloud and grid computing, and applications such as networks on chips, space networking and network security. The accessible writing style and non-mathematical treatment make this a useful book for the student, network and communications engineer, computer scientist and IT professional. • Features a concise, accessible treatment of computer networking, focusing on new technological topics; • Provides a non-mathematical introduction to networks in their most common forms today; • Includes new developments in switching, optical networks, WiFi, Bluetooth, LTE, 5G, and quantum cryptography.

  5. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is, however, preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
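
    As a rough illustration of the per-domain breakdown and plotting mentioned above, the following sketch aggregates invented CPU and memory numbers per software domain and writes a bar chart with matplotlib. It is not the ATLAS PerfMon/MemoryMonitor pipeline; the domain names and figures are placeholders.

    ```python
    # Minimal sketch of the kind of per-domain aggregation and plotting the
    # record alludes to ("plots generated using Python visualization libraries").
    # Domain names and numbers are invented; the real pipeline reads PerfMon
    # and MemoryMonitor output from production jobs.
    import matplotlib
    matplotlib.use("Agg")                      # write files, no display needed
    import matplotlib.pyplot as plt

    samples = [                                 # (software domain, CPU seconds, RSS in MB)
        ("InnerDetector", 420.0, 850.0),
        ("Calorimeter", 310.0, 640.0),
        ("MuonSpectrometer", 150.0, 300.0),
        ("EventFilter", 90.0, 210.0),
    ]

    domains = [d for d, _, _ in samples]
    cpu = [c for _, c, _ in samples]
    rss = [m for _, _, m in samples]

    fig, (ax_cpu, ax_mem) = plt.subplots(1, 2, figsize=(10, 4))
    ax_cpu.bar(domains, cpu)
    ax_cpu.set_ylabel("CPU time [s]")
    ax_cpu.tick_params(axis="x", rotation=45)
    ax_mem.bar(domains, rss)
    ax_mem.set_ylabel("RSS [MB]")
    ax_mem.tick_params(axis="x", rotation=45)
    fig.tight_layout()
    fig.savefig("domain_resources.png")        # page-ready plot per workflow
    ```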

  6. Towards a Global Service Registry for the World-Wide LHC Computing Grid

    Science.gov (United States)

    Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro

    2014-06-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisation's own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisation's configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the
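
    The registered-versus-available comparison at the heart of the consistency checks can be sketched as follows. The service records and field names are invented for illustration and do not reflect the Global Service Registry's actual data model or API.

    ```python
    # Didactic sketch of the registered-vs-available comparison the record
    # describes. Endpoints and attributes are invented; the real Global Service
    # Registry aggregates GOCDB/OIM/BDII-style sources and uses GLUE 2.0 internally.

    def consistency_report(registered: dict[str, dict], available: dict[str, dict]) -> dict:
        """Keys are service endpoints; values are per-source attribute dicts."""
        reg, avail = set(registered), set(available)
        mismatched = {
            ep: (registered[ep], available[ep])
            for ep in reg & avail
            if registered[ep] != available[ep]
        }
        return {
            "missing": sorted(reg - avail),        # registered but not publishing
            "unregistered": sorted(avail - reg),   # publishing but not registered
            "mismatched": mismatched,              # attribute disagreements
        }

    if __name__ == "__main__":
        registered = {"srm://se.example.org": {"type": "SRM", "version": "2.2"}}
        available = {"srm://se.example.org": {"type": "SRM", "version": "2.2"},
                     "cream://ce.example.org": {"type": "CREAM-CE"}}
        print(consistency_report(registered, available))
    ```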

  7. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  8. COMMON LANGUAGE VERSUS SPECIALIZED LANGUAGE

    OpenAIRE

    Mariana Coancă

    2011-01-01

    This paper deals with the presentation of the common language and the specialized one. We also highlighted the relations and the differences between them. The specialized language is a vector of specialized knowledge, but sometimes it contains units from the common language. The common language is unmarked and it is based on the daily non-specialized exchange. The specialized languages are different from the common languages, regarding their usage and the information they convey. The communic...

  9. Computational Design of Urban Layouts

    KAUST Repository

    Wonka, Peter

    2015-10-07

    A fundamental challenge in computational design is to compute layouts by arranging a set of shapes. In this talk I will present recent urban modeling projects with applications in computer graphics, urban planning, and architecture. The talk will look at different scales of urban modeling (streets, floorplans, parcels). A common challenge in all these modeling problems is to respect functional and aesthetic constraints. The talk also highlights interesting links to geometry processing problems, such as field design and quad meshing.

  10. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease with which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  11. Quantum Computing

    Science.gov (United States)

    Steffen, Matthias

    Solving computational problems requires resources such as time, memory, and space. In the classical model of computation, computational complexity theory has categorized problems according to how difficult it is to solve them as the problem size increases. Remarkably, a quantum computer could solve certain problems using fundamentally fewer resources compared to a conventional computer, and therefore has garnered significant attention. Yet because of the delicate nature of entangled quantum states, the construction of a quantum computer poses an enormous challenge for experimental and theoretical scientists across multi-disciplinary areas including physics, engineering, materials science, and mathematics. While the field of quantum computing still has a long way to grow before reaching full maturity, state-of-the-art experiments on the order of 10 qubits are beginning to reach a fascinating stage at which they can no longer be emulated using even the fastest supercomputer. This raises the hope that small quantum computer demonstrations could be capable of approximately simulating or solving problems that also have practical applications. In this talk I will review the concepts behind quantum computing, and focus on the status of superconducting qubits which includes steps towards quantum error correction and quantum simulations.

  12. Biological computation

    CERN Document Server

    Lamm, Ehud

    2011-01-01

    Introduction and Biological Background; Biological Computation; The Influence of Biology on Mathematics - Historical Examples; Biological Introduction; Models and Simulations; Cellular Automata; Biological Background; The Game of Life; General Definition of Cellular Automata; One-Dimensional Automata; Examples of Cellular Automata; Comparison with a Continuous Mathematical Model; Computational Universality; Self-Replication; Pseudo Code; Evolutionary Computation; Evolutionary Biology and Evolutionary Computation; Genetic Algorithms; Example Applications; Analysis of the Behavior of Genetic Algorithms; Lamarckian Evolution; Genet

  13. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    to understand the computer as a material like any other material we would use for design, like wood, aluminum, or plastic. That as soon as the computer forms a composition with other materials it becomes just as approachable and inspiring as other smart materials. I present a series of investigations of what...... Computational Composite, and Telltale). Through the investigations, I show how the computer can be understood as a material and how it partakes in a new strand of materials whose expressions come to be in context. I uncover some of their essential material properties and potential expressions. I develop a way...

  14. Common Visual Pattern Discovery via Directed Graph.

    Science.gov (United States)

    Wang, Chen; Ma, Kai-Kuang

    2014-03-01

    A directed graph (or digraph) approach is proposed in this paper for identifying all the visual objects commonly present in the two images under comparison. As a model, the directed graph is superior to the undirected graph, since there are two link weights with opposite orientations associated with each link of the graph. However, it inevitably raises two main challenges: 1) how to compute the two link weights for each link and 2) how to extract the subgraph from the digraph. For 1), a novel n-ranking process for computing the generalized median and a Gaussian link-weight mapping function are developed that together map the established undirected graph to the digraph. To achieve this graph mapping, the proposed process and function are applied to each vertex independently to compute its directed link weight, not only considering the influences exerted by its immediately adjacent neighboring vertices (in terms of their link-weight values), but also offering other desirable merits, i.e., link-weight enhancement and computational complexity reduction. For 2), an evolutionary iterative process for solving a non-cooperative game is exploited to handle the non-symmetric weighted adjacency matrix. The two stages above are conducted for each assumed scale-change factor, experimented over a range of possible values, one factor at a time. If there is a match on the scale-change factor under experiment, the common visual patterns with the same scale-change factor are extracted. If more than one pattern is extracted, the proposed topological splitting method is able to further differentiate among them, provided that the visual objects are sufficiently far apart from each other. Extensive simulation results have clearly demonstrated the superior performance achieved by the proposed digraph approach, compared with that of the undirected graph approach.
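
    The "evolutionary iterative process" mentioned in this record belongs to the broad family of replicator-style dynamics on an affinity matrix of candidate correspondences. The sketch below is a generic member of that family, not the authors' n-ranking/Gaussian link-weight formulation, and the toy affinity matrix is made up.

        # Simplified illustration: replicator dynamics on an affinity matrix of
        # candidate feature correspondences; a coherent cluster attracts the mass.
        import numpy as np

        def replicator_cluster(W, iters=200, tol=1e-9):
            """Return a support weight vector concentrating on a coherent cluster."""
            n = W.shape[0]
            x = np.full(n, 1.0 / n)              # start from the uniform distribution
            for _ in range(iters):
                Wx = W @ x
                x_new = x * Wx / (x @ Wx)        # replicator update (keeps x on the simplex)
                if np.linalg.norm(x_new - x, 1) < tol:
                    return x_new
                x = x_new
            return x

        # Toy affinities: correspondences 0-2 are mutually consistent, 3 is an outlier.
        W = np.array([[0.0, 0.9, 0.8, 0.1],
                      [0.9, 0.0, 0.7, 0.1],
                      [0.8, 0.7, 0.0, 0.2],
                      [0.1, 0.1, 0.2, 0.0]])
        print(replicator_cluster(W).round(3))    # mass concentrates on indices 0-2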

  15. Computer science II essentials

    CERN Document Server

    Raus, Randall

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Computer Science II includes organization of a computer, memory and input/output, coding, data structures, and program development. Also included is an overview of the most commonly

  16. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  17. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  18. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...

  19. Computational Deception

    NARCIS (Netherlands)

    Nijholt, Antinus; Acosta, P.S.; Cravo, P.

    2010-01-01

    In the future our daily life interactions with other people, with computers, robots and smart environments will be recorded and interpreted by computers or embedded intelligence in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behaviour, and our

  20. Computational astrophysics

    Science.gov (United States)

    Miller, Richard H.

    1987-01-01

    Astronomy is an area of applied physics in which unusually beautiful objects challenge the imagination to explain observed phenomena in terms of known laws of physics. It is a field that has stimulated the development of physical laws and of mathematical and computational methods. Current computational applications are discussed in terms of stellar and galactic evolution, galactic dynamics, and particle motions.

  1. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  2. Computational Streetscapes

    Directory of Open Access Journals (Sweden)

    Paul M. Torrens

    2016-09-01

    Full Text Available Streetscapes have presented a long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber
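
    The steering routines and collision avoidance surveyed in this review can be illustrated with a generic single-agent step (seek a goal, push away from nearby agents). The parameters below are arbitrary and the routine is not taken from any specific model discussed in the paper.

        # Generic pedestrian steering step: goal seeking plus simple local avoidance.
        import numpy as np

        def steer(pos, goal, neighbors, dt=0.1, max_speed=1.4, avoid_radius=1.0):
            """One steering step: seek the goal, push away from nearby agents."""
            desired = goal - pos
            desired = max_speed * desired / (np.linalg.norm(desired) + 1e-9)
            avoidance = np.zeros(2)
            for other in neighbors:
                offset = pos - other
                dist = np.linalg.norm(offset)
                if 0 < dist < avoid_radius:
                    avoidance += offset / dist * (avoid_radius - dist)  # stronger when closer
            velocity = desired + avoidance
            speed = np.linalg.norm(velocity)
            if speed > max_speed:
                velocity *= max_speed / speed
            return pos + velocity * dt, velocity

        new_pos, vel = steer(np.array([0.0, 0.0]), goal=np.array([5.0, 0.0]),
                             neighbors=[np.array([0.5, 0.1])])
        print(new_pos.round(3), vel.round(3))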

  3. Proceedings of the second workshop of LHC Computing Grid, LCG-France; ACTES, 2e colloque LCG-France

    Energy Technology Data Exchange (ETDEWEB)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin (eds.) [Laboratoire de Physique Corpusculaire Clermont-Ferrand, Campus des Cezeaux, 24, avenue des Landais, Clermont-Ferrand (France)

    2007-03-15

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity for exchanges of information between the French and foreign site representatives on one side and delegates of the experiments on the other. The event helped clarify the place of the LHC computing task within the frame of the W-LCG worldwide project, the ongoing actions and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computing in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier 1; 4. The Tier-2 and Tier-3 sites; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Pointing the net infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks, Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users and that the goal of tightening the links between the sites and the experiments had definitely been achieved. The IN2P3

  4. Competence across Europe: Highest Common Factor or Lowest Common Denominator?

    Science.gov (United States)

    Winterton, Jonathan

    2009-01-01

    Purpose: The purpose of this article is to explore diversity in competence models across Europe and consider the extent to which there is sufficient common ground for a common European approach to underpin the European Qualifications Framework. Design/methodology/approach: The paper uses a literature review and interviews with policy makers.…

  5. How Common is Common Use Facilities at Airports

    Science.gov (United States)

    Barbeau, Addison D.

    This study looked at common use airports across the country and at the implementation of common use facilities at airports. Common use consists of several elements that may be installed at an airport. One of the elements is the self-service kiosks that allow passengers a faster check-in process, therefore moving them more quickly within the airport. Another element is signage and the incorporation of each airline's logo. Another aspect of common use is an airport regaining control of terminal gates by reducing the number of gates that are exclusively leased to a specific air carrier. This research focused on the current state of common use facilities across the United States and examined the advantages and disadvantages of this approach. The research entailed interviews with personnel at a wide range of airports and found that each airport is in a different stage of implementation; some have fully implemented the common use concept while others are in the beginning stages of implementation. The questions were tailored to determine what the advantages and disadvantages are of a common use facility. The most common advantages reported included flexibility and cost. In the common use system the airport reserves the right to move any airline to a different gate at any time for any reason. In turn, this helps reduce gate delays at that facility. For the airports that were interviewed no major disadvantages were reported. One downside of common use facilities for the airports involved is the major capital cost that is required to move to a common use system.

  6. Five Theses on the Common

    Directory of Open Access Journals (Sweden)

    Gigi Roggero

    2011-01-01

    Full Text Available I present five theses on the common within the context of the transformations of capitalist social relations as well as their contemporary global crisis. My framework involves ‘‘cognitive capitalism,’’ new processes of class composition, and the production of living knowledge and subjectivity. The commons is often discussed today in reference to the privatization and commodification of ‘‘common goods.’’ This suggests a naturalistic and conservative image of the common, unhooked from the relations of production. I distinguish between commons and the common: the first model is related to Karl Polanyi, the second to Karl Marx. As elaborated in the postoperaista debate, the common assumes an antagonistic double status: it is both the plane of the autonomy of living labor and it is subjected to capitalist ‘‘capture.’’ Consequently, what is at stake is not the conservation of ‘‘commons,’’ but rather the production of the common and its organization into new institutions that would take us beyond the exhausted dialectic between public and private.

  7. Common problems in endurance athletes

    National Research Council Canada - National Science Library

    Cosca, DD

    2007-01-01

    .... Common overuse injuries in runners and other endurance athletes include patellofemoral pain syndrome, iliotibial band friction syndrome, medial tibial stress syndrome, Achilles tendinopathy, plantar...

  8. Chromatin computation.

    Directory of Open Access Journals (Sweden)

    Barbara Bryant

    Full Text Available In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this "chromatin computer" to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal--and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines.
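
    As a toy illustration of the read-write-rule idea in this record (not the paper's actual rule set), the sketch below rewrites marks on a string of nucleosomes using a single hypothetical local rule that acts on adjacent positions.

        # Toy "chromatin computer": local read-write rules over a tape of nucleosome marks.
        def apply_rules(tape, rules, passes=5):
            """Repeatedly scan adjacent nucleosome pairs and apply matching rules."""
            tape = list(tape)
            for _ in range(passes):
                changed = False
                for i in range(len(tape) - 1):
                    key = (tape[i], tape[i + 1])
                    if key in rules:
                        tape[i], tape[i + 1] = rules[key]
                        changed = True
                if not changed:
                    break
            return tape

        # Hypothetical rule: an "A" mark propagates rightward over unmodified "u" positions.
        rules = {("A", "u"): ("A", "A")}
        print(apply_rules(["A", "u", "u", "u"], rules))   # ['A', 'A', 'A', 'A']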

  9. Compute Canada: Advancing Computational Research

    Science.gov (United States)

    Baldwin, Susan

    2012-02-01

    High Performance Computing (HPC) is redefining the way that research is done. Compute Canada's HPC infrastructure provides a national platform that enables Canadian researchers to compete on an international scale, attracts top talent to Canadian universities and broadens the scope of research.

  10. Computational physics

    CERN Document Server

    Newman, Mark

    2013-01-01

    A complete introduction to the field of computational physics, with examples and exercises in the Python programming language. Computers play a central role in virtually every major physics discovery today, from astrophysics and particle physics to biophysics and condensed matter. This book explains the fundamentals of computational physics and describes in simple terms the techniques that every physicist should know, such as finite difference methods, numerical quadrature, and the fast Fourier transform. The book offers a complete introduction to the topic at the undergraduate level, and is also suitable for the advanced student or researcher who wants to learn the foundational elements of this important field.

  11. Computer interfacing

    CERN Document Server

    Dixey, Graham

    1994-01-01

    This book explains how computers interact with the world around them and therefore how to make them a useful tool. Topics covered include descriptions of all the components that make up a computer, principles of data exchange, interaction with peripherals, serial communication, input devices, recording methods, computer-controlled motors, and printers.In an informative and straightforward manner, Graham Dixey describes how to turn what might seem an incomprehensible 'black box' PC into a powerful and enjoyable tool that can help you in all areas of your work and leisure. With plenty of handy

  12. Computing methods

    CERN Document Server

    Berezin, I S

    1965-01-01

    Computing Methods, Volume 2 is a five-chapter text that presents the numerical methods of solving sets of several mathematical equations. This volume includes computation sets of linear algebraic equations, high degree equations and transcendental equations, numerical methods of finding eigenvalues, and approximate methods of solving ordinary differential equations, partial differential equations and integral equations. The book is intended as a text-book for students in mechanical mathematical and physics-mathematical faculties specializing in computer mathematics and persons interested in the

  13. Computational Viscoelasticity

    CERN Document Server

    Marques, Severino P C

    2012-01-01

    This text is a guide to solving problems in which viscoelasticity is present using existing commercial computational codes. The book gives information on the codes' structure and use, data preparation, and output interpretation and verification. The first part of the book introduces the reader to the subject and provides the models, equations and notation to be used in the computational applications. The second part shows the most important computational techniques: finite element formulation and boundary element formulation, and presents the solutions of viscoelastic problems with Abaqus.

  14. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword; Preface; Computing Paradigms; Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading; Cloud Computing Fundamentals; Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact

  15. Malheur - Common Carp Movement Control

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Invasive common carp Cyprinus carpio were introduced into the Harney Basin in the 1920’s and were recognized as a problem in Malheur Lake in 1952. The common carp...

  16. Knowledge production, agriculture and commons

    NARCIS (Netherlands)

    Basu, S.

    2016-01-01

    Keywords: Knowledge Production; Agrarian Research; Research Networks; Research Policy; (non)-instrumentality; CBPP; Commons; GCP; Drought; Sahbhagi Dhan; India. Knowledge Production, Agriculture and Commons: The Case of Generation Challenge Programme. Soutrik

  17. Common Ash (Fraxinus excelsior L.)

    NARCIS (Netherlands)

    Douglas, G.C.; Pliura, A.; Dufour, J.; Mertens, P.; Jacques, D.; Buiteveld, J.

    2013-01-01

    Common ash (Fraxinus excelsior L.) has an extensive natural distribution across Europe and extends as far east as the Volga river and south into northern Iran. Country statistics and national programmes show that common ash has major economic and ecological importance in many countries. Genetic

  18. Universal computer interfaces

    CERN Document Server

    Dheere, RFBM

    1988-01-01

    Presents a survey of the latest developments in the field of the universal computer interface, resulting from a study of the world patent literature. Illustrating the state of the art today, the book ranges from basic interface structure, through parameters and common characteristics, to the most important industrial bus realizations. Recent technical enhancements are also included, with special emphasis devoted to the universal interface adapter circuit. Comprehensively indexed.

  19. Computational Literacy

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Robering, Klaus

    2016-01-01

    In recent years, there has been a growing interest in and recognition of the importance of Computational Literacy, a skill generally considered to be necessary for success in the 21st century. While much research has concentrated on requirements, tools, and teaching methodologies for the acquisition of Computational Literacy at basic educational levels, focus on higher levels of education has been much less prominent. The present paper considers the case of courses for higher education programs within the Humanities. A model is proposed which conceives of Computational Literacy as a layered

  20. Computing Religion

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Braxton, Donald M.; Upal, Afzal

    2012-01-01

    The computational approach has become an invaluable tool in many fields that are directly relevant to research in religious phenomena. Yet the use of computational tools is almost absent in the study of religion. Given that religion is a cluster of interrelated phenomena and that research concerning these phenomena should strive for multilevel analysis, this article argues that the computational approach offers new methodological and theoretical opportunities to the study of religion. We argue that the computational approach offers 1) an intermediary step between any theoretical construct and its targeted empirical space and 2) a new kind of data which allows the researcher to observe abstract constructs, estimate likely outcomes, and optimize empirical designs. Because sophisticated multilevel research is a collaborative project, we also seek to introduce to scholars of religion some

  1. Nonparametric Regression with Common Shocks

    Directory of Open Access Journals (Sweden)

    Eduardo A. Souza-Rodrigues

    2016-09-01

    Full Text Available This paper considers a nonparametric regression model for cross-sectional data in the presence of common shocks. Common shocks are allowed to be very general in nature; they do not need to be finite dimensional with a known (small) number of factors. I investigate the properties of the Nadaraya-Watson kernel estimator and determine how general the common shocks can be while still obtaining meaningful kernel estimates. Restrictions on the common shocks are necessary because kernel estimators typically manipulate conditional densities, and conditional densities do not necessarily exist in the present case. By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.
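
    For readers unfamiliar with the estimator under study, a standard Nadaraya-Watson kernel regression is sketched below on synthetic data. The paper's actual contribution (conditions on the common shocks and the asymptotic results) is not reproduced here.

        # Standard Nadaraya-Watson estimator with a Gaussian kernel, on toy data.
        import numpy as np

        def nadaraya_watson(x_grid, x, y, bandwidth):
            """Estimate E[y | x] on x_grid as a kernel-weighted average of y."""
            u = (x_grid[:, None] - x[None, :]) / bandwidth
            weights = np.exp(-0.5 * u ** 2)
            return (weights @ y) / weights.sum(axis=1)

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, 300)
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 300)
        grid = np.linspace(0, 1, 5)
        print(nadaraya_watson(grid, x, y, bandwidth=0.05).round(2))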

  2. Security in Computer Applications

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    Computer security has been an increasing concern for IT professionals for a number of years, yet despite all the efforts, computer systems and networks remain highly vulnerable to attacks of different kinds. Design flaws and security bugs in the underlying software are among the main reasons for this. This lecture addresses the following question: how to create secure software? The lecture starts with a definition of computer security and an explanation of why it is so difficult to achieve. It then introduces the main security principles (like least-privilege, or defense-in-depth) and discusses security in different phases of the software development cycle. The emphasis is put on the implementation part: most common pitfalls and security bugs are listed, followed by advice on best practice for security development. The last part of the lecture covers some miscellaneous issues like the use of cryptography, rules for networking applications, and social engineering threats. This lecture was first given on Thursd...

  3. Project plan for computing

    CERN Document Server

    Harvey, J

    1998-01-01

    The LHCB Computing Project covers both on- and off-line activities. Nine sub-projects are identified, six of which correspond to specific applications, such as Reconstruction, DAQ etc., one takes charge of developing components that can be classed as of common interest to the various applications, and two which take responsibility for supporting the software development environment and computing infrastructure respectively. A Steering Group, comprising the convenors of the nine subprojects and the overall Computing Co-ordinator, is responsible for project management and planning. The planning assumes four life-cycle phases; preparation, implementation, commissioning and operation. A global planning chart showing the timescales of each phase is included. A more detailed chart for the planning of the introduction of Object Technologies is also described. Manpower requirements are given for each sub-project in terms of task description and FTEs needed. The evolution of these requirements with time is also given....

  4. Personal computers on Ethernet

    Science.gov (United States)

    Kao, R.

    1988-01-01

    Many researchers in the Division have projects which require transferring large files between their personal computers (PC) and VAX computers in the Laboratory for Oceans Computing Facility (LOCF). Since Ethernet local area network provides high speed communication channels which make file transfers (among other capabilities) practical, a network plan was assembled to connect IBM and IBM compatible PC's to Ethernet for participating personnel. The design employs ThinWire Ethernet technology. A simplified configuration diagram is shown. A DEC multiport repeater (DEMPR) is used for connection of ThinWire Ethernet segments. One port of DEMPR is connected to a H4000 transceiver and the transceiver is clamped onto the Goddard Ethernet backbone coaxial cable so that the PC's can be optionally on the SPAN network. All these common elements were successfully installed and tested.

  5. COMPUTERS HAZARDS

    Directory of Open Access Journals (Sweden)

    Andrzej Augustynek

    2007-01-01

    Full Text Available In June 2006, over 12.6 million Polish users of the Web were registered. On average, each of them spent 21 hours and 37 minutes monthly browsing the Web. That is why the psychological aspects of computer utilization have become an urgent research subject. The results of research into the development of the Polish information society, carried out at AGH University of Science and Technology under the leadership of Leslaw H. Haber from 2000 until the present time, indicate the emergence of dynamic changes in the ways computers are used and in their circumstances. One of the interesting regularities has been the inversely proportional relation between the level of computer skills and the frequency of Web utilization. It has been found that in 2005, compared to 2000, the following changes occurred: a significant drop in the number of students who never used computers and the Web; a remarkable increase in computer knowledge and skills (particularly pronounced in the case of first-year students); a decreasing gap in computer skills between students of the first and the third year, and between male and female students; and a declining popularity of computer games. It has also been demonstrated that the hazard of computer screen addiction was highest in the case of unemployed youth outside the school system. As much as 12% of this group of young people were addicted to computers. The large amount of leisure time that these youths enjoyed induced them to excessive utilization of the Web. Polish housewives are another population group at risk of addiction to the Web. The duration of long Web chats carried out by younger and younger youths has been another matter of concern. Since the phenomenon of computer addiction is relatively new, no specific therapy methods have been developed. In general, the therapy applied to computer addiction syndrome is similar to the techniques applied in cases of alcohol or gambling addiction. Individual and group

  6. Computational sustainability

    CERN Document Server

    Kersting, Kristian; Morik, Katharina

    2016-01-01

    The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.

  7. Computational oncology.

    Science.gov (United States)

    Lefor, Alan T

    2011-08-01

    Oncology research has traditionally been conducted using techniques from the biological sciences. The new field of computational oncology has forged a new relationship between the physical sciences and oncology to further advance research. By applying physics and mathematics to oncologic problems, new insights will emerge into the pathogenesis and treatment of malignancies. One major area of investigation in computational oncology centers around the acquisition and analysis of data, using improved computing hardware and software. Large databases of cellular pathways are being analyzed to understand the interrelationship among complex biological processes. Computer-aided detection is being applied to the analysis of routine imaging data including mammography and chest imaging to improve the accuracy and detection rate for population screening. The second major area of investigation uses computers to construct sophisticated mathematical models of individual cancer cells as well as larger systems using partial differential equations. These models are further refined with clinically available information to more accurately reflect living systems. One of the major obstacles in the partnership between physical scientists and the oncology community is communications. Standard ways to convey information must be developed. Future progress in computational oncology will depend on close collaboration between clinicians and investigators to further the understanding of cancer using these new approaches.
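
    This record mentions PDE-based models of tumor growth. Below is a minimal, generic 1-D reaction-diffusion (Fisher-KPP type) growth sketch with arbitrary coefficients and periodic boundaries; it is not a model taken from the article.

        # Minimal 1-D reaction-diffusion sketch of logistic tumor-cell invasion.
        import numpy as np

        D, r, K = 0.01, 1.0, 1.0          # diffusion, growth rate, carrying capacity
        dx, dt, steps = 0.05, 0.01, 2000
        x = np.arange(0, 5, dx)
        u = np.where(x < 0.5, 0.5, 0.0)   # initial tumor cell density

        for _ in range(steps):
            # discrete Laplacian with periodic boundary (np.roll wraps around)
            lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
            u = u + dt * (D * lap + r * u * (1 - u / K))

        print(f"invaded fraction: {(u > 0.1).mean():.2f}")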

  8. Computer viruses

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, F.B.

    1986-01-01

    This thesis investigates a recently discovered vulnerability in computer systems which opens the possibility that a single individual with an average user's knowledge could cause widespread damage to information residing in computer networks. This vulnerability is due to a transitive integrity-corrupting mechanism called a computer virus, which causes corrupted information to spread from program to program. Experiments have shown that a virus can spread at an alarmingly rapid rate from user to user, from system to system, and from network to network, even when the best available security techniques are properly used. Formal definitions of self-replication, evolution, viruses, and protection mechanisms are used to prove that any system that allows sharing, general functionality, and transitivity of information flow cannot completely prevent viral attack. Computational aspects of viruses are examined, and several undecidable problems are shown. It is demonstrated that a virus may evolve so as to generate any computable sequence. Protection mechanisms are explored, and designs of computer networks that prevent both illicit modification and dissemination of information are given. Administration and protection of information networks based on partial orderings are examined, and provably correct automated administrative assistance is introduced.

  9. Chromatin Computation

    Science.gov (United States)

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109

  10. Asthma and Chronic Obstructive Pulmonary Disease Common Genes, Common Environments?

    NARCIS (Netherlands)

    Postma, Dirkje S.; Kerkhof, Marjan; Boezen, H. Marike; Koppelman, Gerard H.

    2011-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) show similarities and substantial differences. The Dutch hypothesis stipulated that asthma and COPD have common genetic and environmental risk factors (allergens, infections, smoking), which ultimately lead to clinical disease depending on the

  11. The Role of Grid Computing Technologies in Cloud Computing

    Science.gov (United States)

    Villegas, David; Rodero, Ivan; Fong, Liana; Bobroff, Norman; Liu, Yanbin; Parashar, Manish; Sadjadi, S. Masoud

    The fields of Grid, Utility and Cloud Computing have a set of common objectives in harnessing shared resources to optimally meet a great variety of demands cost-effectively and in a timely manner. Since Grid Computing started its technological journey about a decade earlier than Cloud Computing, the Cloud can benefit from the technologies and experience of the Grid in building an infrastructure for distributed computing. Our comparison of Grid and Cloud starts with their basic characteristics and interaction models with clients, resource consumers and providers. Then the similarities and differences in architectural layers and key usage patterns are examined. This is followed by an in-depth look at the technologies and best practices that have applicability from Grid to Cloud computing, including scheduling, service orientation, security, data management, monitoring, interoperability, simulation and autonomic support. Finally, we offer insights on how these techniques will help solve the current challenges faced by Cloud computing.

  12. Epistemologi Common Sense Abad XX

    Directory of Open Access Journals (Sweden)

    Abbas Hamami Mintaredja

    2007-12-01

    Full Text Available The presence of G.E. Moore (1873-1956) undoubtedly brought a new wave of thought, one that changed the development of English philosophical thinking towards analytic philosophy and neo-realism. Moore deconstructed Bradley's idealism and revived the English philosophy of common sense. Common sense is a belief in the direct apprehension of material things; it is important for solving daily-life problems. Common sense epistemology is specifically Moore's epistemology. It separates subjects from objects distinctly: a subject sees factual objects in direct experience and thereby obtains sense data. Apprehending sense data directly involves conscious activity, and the result of this activity is true and necessary knowledge. Moore's common sense epistemology is based on Aristotelian epistemology. It influenced the later philosophies of Russell and Ayer in England, and of Ayn Rand in America. Russell perceived common sense as an inference rule for daily experience based on instinct. This differed from Ayer, who developed his philosophy based on verification: common sense is an understanding of a given object that is directly observed. Ayn Rand in America developed her epistemology based on objective objects as real material things. The truth of knowledge is a priori; it is based on truisms, like Moore's epistemology.

  13. Algorithms on ensemble quantum computers.

    Science.gov (United States)

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of the Toffoli and σ_z^(1/4) gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.

  14. Computer methods in electric network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Saver, P.; Hajj, I.; Pai, M.; Trick, T.

    1983-06-01

    The computational algorithms utilized in power system analysis have more than just a minor overlap with those used in electronic circuit computer aided design. This paper describes the computer methods that are common to both areas and highlights the differences in application through brief examples. Recognizing this commonality has stimulated the exchange of useful techniques in both areas and has the potential of fostering new approaches to electric network analysis through the interchange of ideas.
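
    As a concrete instance of the shared computational kernel this record refers to, the sketch below sets up and solves the nodal equations of a small, made-up resistive network; the component values are arbitrary.

        # Nodal analysis of a toy resistive network: build the nodal conductance
        # matrix and solve G V = I, the linear-algebra kernel common to power-system
        # analysis and circuit CAD.
        import numpy as np

        # Two non-reference nodes; conductances in siemens.
        g10, g20, g12 = 1.0, 0.5, 2.0        # node1-ground, node2-ground, node1-node2
        G = np.array([[g10 + g12, -g12],
                      [-g12, g20 + g12]])    # nodal conductance matrix
        I = np.array([1.0, 0.0])             # 1 A injected at node 1
        V = np.linalg.solve(G, I)            # node voltages
        print(V.round(3))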

  15. Vaccines for the common cold.

    Science.gov (United States)

    Simancas-Racines, Daniel; Franco, Juan Va; Guerra, Claudia V; Felix, Maria L; Hidalgo, Ricardo; Martinez-Zapata, Maria José

    2017-05-18

    The common cold is a spontaneously remitting infection of the upper respiratory tract, characterised by a runny nose, nasal congestion, sneezing, cough, malaise, sore throat, and fever (usually common cold worldwide is related to its ubiquitousness rather than its severity. The development of vaccines for the common cold has been difficult because of antigenic variability of the common cold virus and the indistinguishable multiple other viruses and even bacteria acting as infective agents. There is uncertainty regarding the efficacy and safety of interventions for preventing the common cold in healthy people. This is an update of a Cochrane review first published in 2011 and previously updated in 2013. To assess the clinical effectiveness and safety of vaccines for preventing the common cold in healthy people. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (September 2016), MEDLINE (1948 to September 2016), Embase (1974 to September 2016), CINAHL (1981 to September 2016), and LILACS (1982 to September 2016). We also searched three trials registers for ongoing studies and four websites for additional trials (February 2017). We included no language or date restrictions. Randomised controlled trials (RCTs) of any virus vaccines compared with placebo to prevent the common cold in healthy people. Two review authors independently evaluated methodological quality and extracted trial data. We resolved disagreements by discussion or by consulting a third review author. We found no additional RCTs for inclusion in this update. This review includes one RCT dating from the 1960s with an overall high risk of bias. The RCT included 2307 healthy participants, all of whom were included in analyses. This trial compared the effect of an adenovirus vaccine against placebo. No statistically significant difference in common cold incidence was found: there were 13 (1.14%) events in 1139 participants in the vaccines group and 14 (1.19%) events in 1168

  16. Learning Commons in Academic Libraries

    Directory of Open Access Journals (Sweden)

    Larisa González Martínez

    2015-01-01

    Full Text Available Like all human creations, institutions transform and evolve over time. Libraries, too, have changed to respond to the needs of their users. Academic libraries' physical spaces are one of the aspects that have been transformed; an example is the Learning Commons (spaces for collaborative work in academic libraries). The main purpose of this paper is to expose the characteristics of the Learning Commons model with a brief account of the history of planning and construction of academic libraries. This paper also aims to present the manner in which a Learning Commons has been implemented at the library of Instituto Tecnológico y de Estudios Superiores de Monterrey (ITESM), Campus Monterrey, in Mexico.

  17. Computational creativity

    Directory of Open Access Journals (Sweden)

    López de Mántaras Badia, Ramon

    2013-12-01

    Full Text Available New technologies, and in particular artificial intelligence, are drastically changing the nature of creative processes. Computers are playing very significant roles in creative activities such as music, architecture, fine arts, and science. Indeed, the computer is already a canvas, a brush, a musical instrument, and so on. However, we believe that we must aim at more ambitious relations between computers and creativity. Rather than just seeing the computer as a tool to help human creators, we could see it as a creative entity in its own right. This view has triggered a new subfield of Artificial Intelligence called Computational Creativity. This article addresses the question of the possibility of achieving computational creativity through some examples of computer programs capable of replicating some aspects of creative behavior in the fields of music and science.

  18. Frustration: A common user experience

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2010-01-01

    The use of computer applications can be a frustrating experience. This study replicates previous studies of the amount of time users – involuntarily – spend trying to diagnose and recover from problems they encounter while using computer applications such as web browsers, email, and text processing. In the present study, 21 users self-reported their frustrating experiences during an average of 1.72 hours of computer use. As in the previous studies, the amount of time lost due to frustrating experiences was disturbing. The users spent 16% of their time trying to fix encountered problems and another 11% of their time redoing lost work. Thus, the frustrating experiences accounted for a total of 27% of the time. This main finding is exacerbated by several supplementary findings. For example, the users were unable to fix 26% of the experienced problems, and they rated that the problems recurred with a median

  19. Common Effects Methodology for Pesticides

    Science.gov (United States)

    EPA is exploring how to build on the substantial high quality science developed under both OPP programs to develop additional tools and approaches to support a consistent and common set of effects characterization methods using best available information.

  20. 6 Common Cancers - Colorectal Cancer

    Science.gov (United States)

    Colorectal Cancer: Cancer of the colon (large intestine) or rectum ( ...

  1. 6 Common Cancers - Prostate Cancer

    Science.gov (United States)

    Prostate Cancer: The prostate gland is a walnut-sized structure that makes ...

  2. NIH Common Data Elements Repository

    Data.gov (United States)

    U.S. Department of Health & Human Services — The NIH Common Data Elements (CDE) Repository has been designed to provide access to structured human and machine-readable definitions of data elements that have...

  3. The drama of the commons

    National Research Council Canada - National Science Library

    Ostrom, Elinor

    2002-01-01

    ... to determine whether or not the many dramas of the commons end happily. In this book, leaders in the field review the evidence from several disciplines and many lines of research and present a state-of-the-art assessment...

  4. 6 Common Cancers - Breast Cancer

    Science.gov (United States)

    Breast Cancer: Breast cancer is a malignant (cancerous) growth that ...

  5. Communication, timing, and common learning

    Czech Academy of Sciences Publication Activity Database

    Steiner, Jakub; Stewart, C.

    2011-01-01

    Vol. 146, No. 1 (2011), pp. 230-247, ISSN 0022-0531. Institutional research plan: CEZ:AV0Z70850503. Keywords: common knowledge; learning; communication. Subject RIV: AH - Economics. Impact factor: 1.235, year: 2011

  6. 6 Common Cancers - Lung Cancer

    Science.gov (United States)

    Lung Cancer: Lung cancer causes more deaths than the next three ...

  7. Facts about the Common Cold

    Science.gov (United States)

    ... different viruses. Rhinovirus is the most common cause, accounting for 10 to 40 percent of colds. Other ...

  8. Computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Goudreau, G.L.

    1993-03-01

    The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

  9. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
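
    The production heuristics evaluated in this record are not described in enough detail to reproduce here. The sketch below instead illustrates a generic textbook route to a common substructure (build the modular product of two graphs and greedily grow a clique in it) on toy adjacency matrices with no chemical labels.

        # Generic maximum common induced subgraph sketch via the modular product graph.
        from itertools import product

        def modular_product(adj1, adj2):
            """Vertices are pairs (i, j); edges require consistent (non)adjacency."""
            nodes = list(product(range(len(adj1)), range(len(adj2))))
            edges = {v: set() for v in nodes}
            for (i, j), (k, l) in product(nodes, nodes):
                if i != k and j != l and adj1[i][k] == adj2[j][l]:
                    edges[(i, j)].add((k, l))
            return edges

        def greedy_clique(edges):
            """Grow a clique greedily, starting from the highest-degree vertices."""
            clique = []
            for v in sorted(edges, key=lambda v: len(edges[v]), reverse=True):
                if all(v in edges[u] for u in clique):
                    clique.append(v)
            return clique

        # Toy graphs: a triangle, and a triangle with one pendant vertex.
        A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
        B = [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
        mapping = greedy_clique(modular_product(A, B))
        print(mapping)   # pairs (vertex in A, vertex in B) of a common substructure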

  10. Separating common from distinctive variation.

    Science.gov (United States)

    van der Kloet, Frans M; Sebastián-León, Patricia; Conesa, Ana; Smilde, Age K; Westerhuis, Johan A

    2016-06-06

    Joint and individual variation explained (JIVE), distinct and common simultaneous component analysis (DISCO) and O2-PLS, a two-block (X-Y) latent variable regression method with an integral OSC filter, can all be used for the integrated analysis of multiple data sets and decompose them into three terms: a low(er)-rank approximation capturing common variation across data sets, low(er)-rank approximations for structured variation distinctive for each data set, and residual noise. In this paper these three methods are compared with respect to their mathematical properties and their respective ways of defining common and distinctive variation. The methods are all applied to simulated data and to mRNA and miRNA data sets from glioblastoma multiforme (GBM) brain tumors to examine their overlap and differences. When the common variation is abundant, all methods are able to find the correct solution. With real data, however, complexities in the data are treated differently by the three methods. All three methods have their own approach to estimating common and distinctive variation, with specific strengths and weaknesses. Due to their orthogonality properties and the algorithms they use, their views of the data differ slightly. By assuming orthogonality between common and distinctive variation, true natural or biological phenomena that may not be orthogonal at all might be misinterpreted.
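
    A single-pass sketch of the shared idea (not the published JIVE, DISCO or O2-PLS algorithms, which estimate ranks and iterate more carefully) can be written with plain SVDs; the ranks, block sizes and variable names below are illustrative assumptions.

```python
import numpy as np

def common_distinct(X1, X2, r_common=1, r_dist=1):
    """One-pass sketch: common part from the SVD of the stacked blocks,
    distinctive parts from per-block SVDs of what is left over."""
    joint = np.vstack([X1, X2])                        # both blocks measured on the same samples (columns)
    _, _, Vt = np.linalg.svd(joint, full_matrices=False)
    V_c = Vt[:r_common]                                # common column subspace
    C1, C2 = X1 @ V_c.T @ V_c, X2 @ V_c.T @ V_c        # common variation per block
    def distinct(X, C):
        U, s, Vt = np.linalg.svd(X - C, full_matrices=False)
        return (U[:, :r_dist] * s[:r_dist]) @ Vt[:r_dist]
    D1, D2 = distinct(X1, C1), distinct(X2, C2)
    return (C1, D1, X1 - C1 - D1), (C2, D2, X2 - C2 - D2)

rng = np.random.default_rng(0)
common = rng.normal(size=50)                           # score pattern shared by both blocks
X1 = np.outer(rng.normal(size=20), common) + 0.1 * rng.normal(size=(20, 50))
X2 = np.outer(rng.normal(size=30), common) + 0.1 * rng.normal(size=(30, 50))
(C1, D1, E1), (C2, D2, E2) = common_distinct(X1, X2)
print(np.linalg.norm(C1) / np.linalg.norm(X1))         # close to 1: most of X1 is common variation
```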

  11. Quantum computers.

    Science.gov (United States)

    Ladd, T D; Jelezko, F; Laflamme, R; Nakamura, Y; Monroe, C; O'Brien, J L

    2010-03-04

    Over the past several decades, quantum information science has emerged to seek answers to the question: can we gain some advantage by storing, transmitting and processing information encoded in systems that exhibit unique quantum properties? Today it is understood that the answer is yes, and many research groups around the world are working towards the highly ambitious technological goal of building a quantum computer, which would dramatically improve computational power for particular tasks. A number of physical systems, spanning much of modern physics, are being developed for quantum computation. However, it remains unclear which technology, if any, will ultimately prove successful. Here we describe the latest developments for each of the leading approaches and explain the major challenges for the future.

  12. Computational Psychiatry

    Science.gov (United States)

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  13. Computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Raboin, P J

    1998-01-01

    The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for "Springback Predictability" and with the Federal Aviation Administration (FAA) for the "Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris." In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

  14. The Common Information of N Dependent Random Variables

    CERN Document Server

    Liu, Wei; Chen, Biao

    2010-01-01

    This paper generalizes Wyner's definition of common information of a pair of random variables to that of $N$ random variables. We prove coding theorems that show the same operational meanings for the common information of two random variables generalize to that of $N$ random variables. As a byproduct of our proof, we show that the Gray-Wyner source coding network can be generalized to $N$ source sequences with $N$ decoders. We also establish a monotone property of Wyner's common information which is in contrast to other notions of the common information, specifically Shannon's mutual information and Gács and Körner's common randomness. Examples of the computation of Wyner's common information of $N$ random variables are also given.
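
    For reference, Wyner's common information generalized to $N$ variables, which is the quantity studied here, can be written as follows (notation assumed; $W$ is an auxiliary random variable):

$$ C(X_1,\ldots,X_N) \;=\; \inf_{P_{W \mid X_1 \cdots X_N}} I(X_1,\ldots,X_N;\,W) \quad\text{subject to } X_1,\ldots,X_N \text{ being conditionally independent given } W. $$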

  15. MIMO Common Feedback Method for Multicast H-ARQ Transmission

    Science.gov (United States)

    Jung, Young-Ho

    An orthogonal sequence based MIMO common feedback method for multicast hybrid automatic-repeat-request (H-ARQ) transmission is presented. The proposed method can obtain more diversity gain proportional to the number of transmit antennas than the conventional on-off keying (OOK) based common feedback method. The ACK/NACK detection performance gain of the proposed scheme over the OOK based method is verified by analysis and computer simulation results.

  16. Reconfigurable Computing

    CERN Document Server

    Cardoso, Joao MP

    2011-01-01

    As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable Computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the postfabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a comp

  17. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  18. Computational artifacts

    DEFF Research Database (Denmark)

    Schmidt, Kjeld; Bansler, Jørgen P.

    2016-01-01

    The key concern of CSCW research is that of understanding computing technologies in the social context of their use, that is, as integral features of our practices and our lives, and to think of their design and implementation under that perspective. However, the question of the nature...... of that which is actually integrated in our practices is often discussed in confusing ways, if at all. The article aims to try to clarify the issue and in doing so revisits and reconsiders the notion of ‘computational artifact’....

  19. Computer busses

    CERN Document Server

    Buchanan, William

    2000-01-01

    As more and more equipment is interface- or 'bus'-driven, either by the use of controllers or directly from PCs, the question of which bus to use is becoming increasingly important both in industry and in the office. 'Computer Busses' has been designed to help choose the best type of bus for the particular application. There are several books which cover individual busses, but none which provide a complete guide to computer busses. The author provides a basic theory of busses and draws examples and applications from real bus case studies. Busses are analysed using a top-down approach, helpin

  20. Computer viruses

    Science.gov (United States)

    Denning, Peter J.

    1988-01-01

    The worm, Trojan horse, bacterium, and virus are destructive programs that attack information stored in a computer's memory. Virus programs, which propagate by incorporating copies of themselves into other programs, are a growing menace in the late-1980s world of unprotected, networked workstations and personal computers. Limited immunity is offered by memory protection hardware, digitally authenticated object programs, and antibody programs that kill specific viruses. Additional immunity can be gained from the practice of digital hygiene, primarily the refusal to use software from untrusted sources. Full immunity requires attention in a social dimension, the accountability of programmers.

  1. Computer security

    CERN Document Server

    Gollmann, Dieter

    2011-01-01

    A completely up-to-date resource on computer security Assuming no previous experience in the field of computer security, this must-have book walks you through the many essential aspects of this vast topic, from the newest advances in software and technology to the most recent information on Web applications security. This new edition includes sections on Windows NT, CORBA, and Java and discusses cross-site scripting and JavaScript hacking as well as SQL injection. Serving as a helpful introduction, this self-study guide is a wonderful starting point for examining the variety of competing sec

  2. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope is hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  3. Common Randomness Principles of Secrecy

    Science.gov (United States)

    Tyagi, Himanshu

    2013-01-01

    This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…

  4. Lab Inputs for Common Micros.

    Science.gov (United States)

    Tinker, Robert

    1984-01-01

    The game paddle inputs of Apple microcomputers provide a simple way to get laboratory measurements into the computer. Discusses these game paddles and the necessary interface software. Includes schematics for Apple built-in paddle electronics, TRS-80 game paddle I/O, Commodore circuit for user port, and bus interface for Sinclair/Timex, Commodore,…

  5. Research on cloud computing solutions

    Directory of Open Access Journals (Sweden)

    Liudvikas Kaklauskas

    2015-07-01

    Full Text Available Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang adapted six phases of computing paradigms, from dummy terminals/mainframes, to PCs, networking computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, hybrid cloud and community cloud. The most common and well-known deployment model is the Public Cloud. A Private Cloud is suited for sensitive data, where the customer is dependent on a certain degree of security. According to the different types of services offered, cloud computing can be considered to consist of three layers (service models): IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service). Main cloud computing solutions: web applications, data hosting, virtualization, database clusters and terminal services. The advantage of cloud computing is the ability to virtualize and share resources among different applications with the objective of better server utilization; without a clustering solution, a service may fail at the moment the server crashes. DOI: 10.15181/csat.v2i2.914

  6. Induced artificial androgenesis in common tench, Tinca tinca (L.), using common carp and common bream eggs

    Directory of Open Access Journals (Sweden)

    Dariusz Kucharczyk

    2014-03-01

    Full Text Available This study presents artificial induction using tench eggs, Tinca tinca (L.), of androgenetic origin. The oocytes taken from common bream, Abramis brama (L.), and common carp, Cyprinus carpio L., were genetically inactivated using UV irradiation and then inseminated using tench spermatozoa. Androgenetic origin (haploid or diploid embryos) was checked using a recessive colour (blond) and morphological markers. The percentage of hatched embryos in all experimental groups was much lower than in the control groups. All haploid embryos showed morphological abnormalities, which were recorded as haploid syndrome (stunted body, poorly formed retina, etc.). The optimal dose of UV irradiation of common bream and common carp eggs was 3456 J m–2. At this dose, almost 100% of haploid embryos were produced at a hatching rate of over 6%. Lower UV-ray doses resulted in abnormal embryo development. The highest yield of tench androgenesis (about 2%) was noted when eggs were exposed to thermal shock 30 min after egg activation.

  7. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). Key features: illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics; emphasis on algorithmic advances that will allow re-application in other...

  8. Garlic for the common cold.

    Science.gov (United States)

    Lissiman, Elizabeth; Bhasale, Alice L; Cohen, Marc

    2014-11-11

    Background Garlic is alleged to have antimicrobial and antiviral properties that relieve the common cold, among other beneficial effects. There is widespread usage of garlic supplements. The common cold is associated with significant morbidity and economic consequences. On average, children have six to eight colds per year and adults have two to four. Objectives To determine whether garlic (Allium sativum) is effective for the prevention or treatment of the common cold, when compared to placebo, no treatment or other treatments. Search methods We searched CENTRAL (2014, Issue 7), OLDMEDLINE (1950 to 1965), MEDLINE (January 1966 to July week 5, 2014), EMBASE (1974 to August 2014) and AMED (1985 to August 2014). Selection criteria Randomised controlled trials of common cold prevention and treatment comparing garlic with placebo, no treatment or standard treatment. Data collection and analysis Two review authors independently reviewed and selected trials from searches, assessed and rated study quality and extracted relevant data. Main results In this updated review, we identified eight trials as potentially relevant from our searches. Again, only one trial met the inclusion criteria. This trial randomly assigned 146 participants to either a garlic supplement (with 180 mg of allicin content) or a placebo (once daily) for 12 weeks. The trial reported 24 occurrences of the common cold in the garlic intervention group compared with 65 in the placebo group (P < 0.001), resulting in fewer days of illness in the garlic group compared with the placebo group (111 versus 366). The number of days to recovery from an occurrence of the common cold was similar in both groups (4.63 versus 5.63). Only one trial met the inclusion criteria, therefore limited conclusions can be drawn. The trial relied on self reported episodes of the common cold but was of reasonable quality in terms of randomisation and allocation concealment. Adverse effects included rash and odour. Authors' conclusions There is insufficient clinical trial evidence

  9. Susceptibility of common urinary isolates to the commonly used ...

    African Journals Online (AJOL)

    ... amoxycillin and cefuroxime but were either moderately or highly sensitive to the quinolones and nitrofurantoin. We conclude that majority of the antimicrobial agents that are commonly used to treat UTIs in the hospitals are no longer effective. Therefore, the development and strict management of antimicrobial policy, and ...

  10. Cloud computing.

    Science.gov (United States)

    Wink, Diane M

    2012-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes how cloud computing can be used in nursing education.

  11. Quantum Computers

    Science.gov (United States)

    2010-03-04

    be required. In 2001, a breakthrough known as the KLM (Knill–Laflamme–Milburn) scheme showed that scalable quantum computing is possible using only ... and single-photon detection to induce interactions nondeterministically. In the past five years, the KLM scheme has moved from a mathematical proof

  12. Computational Logistics

    DEFF Research Database (Denmark)

    Pacino, Dario; Voss, Stefan; Jensen, Rune Møller

    2013-01-01

    This book constitutes the refereed proceedings of the 4th International Conference on Computational Logistics, ICCL 2013, held in Copenhagen, Denmark, in September 2013. The 19 papers presented in this volume were carefully reviewed and selected for inclusion in the book. They are organized...... in topical sections named: maritime shipping, road transport, vehicle routing problems, aviation applications, and logistics and supply chain management....

  13. Computational Logistics

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 4th International Conference on Computational Logistics, ICCL 2013, held in Copenhagen, Denmark, in September 2013. The 19 papers presented in this volume were carefully reviewed and selected for inclusion in the book. They are organized...... in topical sections named: maritime shipping, road transport, vehicle routing problems, aviation applications, and logistics and supply chain management....

  14. Quantum Computation

    Indian Academy of Sciences (India)

    can be represented using only n = log2 N bits, which is an exponential reduction in the required resources compared to the situation where every value is represented by a different physical state. Mathematically this structure is known as a 'tensor product', and I will refer to a similar break-up of computational algorithms ...

  15. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    global view and optimize the configuration and the use of all the computers, particularly high performance servers wherever they are. An enterprise grid is ... attributes: business model, architecture, resource management, security model, programming model, and applications [6]. Business Model. Grid is formed by 'not for ...

  16. Computing News

    CERN Multimedia

    McCubbin, N

    2001-01-01

    We are still five years from the first LHC data, so we have plenty of time to get the computing into shape, don't we? Well, yes and no: there is time, but there's an awful lot to do! The recently-completed CERN Review of LHC Computing gives the flavour of the LHC computing challenge. The hardware scale for each of the LHC experiments is millions of 'SpecInt95' (SI95) units of cpu power and tens of PetaBytes of data storage. PCs today are about 20-30SI95, and expected to be about 100 SI95 by 2005, so it's a lot of PCs. This hardware will be distributed across several 'Regional Centres' of various sizes, connected by high-speed networks. How to realise this in an orderly and timely fashion is now being discussed in earnest by CERN, Funding Agencies, and the LHC experiments. Mixed in with this is, of course, the GRID concept...but that's a topic for another day! Of course hardware, networks and the GRID constitute just one part of the computing. Most of the ATLAS effort is spent on software development. What we ...

  17. [Grid computing]

    CERN Multimedia

    Wolinsky, H

    2003-01-01

    "Turn on a water spigot, and it's like tapping a bottomless barrel of water. Ditto for electricity: Flip the switch, and the supply is endless. But computing is another matter. Even with the Internet revolution enabling us to connect in new ways, we are still limited to self-contained systems running locally stored software, limited by corporate, institutional and geographic boundaries" (1 page).

  18. Quantum Computing

    Indian Academy of Sciences (India)

    start-up company at IIT Mumbai. Part 1. Building Blocks of Quantum Computers, Resonance, ..... by modeling the errors caused by decoherence. The interaction of a quantum system with the environment obstructs the unitary evolution of the system and causes dissipation of information, reducing coherence of information.

  19. Computational biology

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Computation via biological devices has been the subject of close scrutiny since von Neumann’s early work some 60 years ago. In spite of the many relevant works in this field, the notion of programming biological devices seems to be, at best, ill-defined. While many devices are claimed or proved t...

  20. Statistical Computing

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 4; Issue 10. Statistical Computing - Understanding Randomness and Random Numbers. Sudhakar Kunte. Series Article Volume 4 Issue 10 October 1999 pp 16-21. Fulltext. Click here to view fulltext PDF. Permanent link:

  1. Data mining in Cloud Computing

    OpenAIRE

    Ruxandra-Ştefania PETRE

    2012-01-01

    This paper describes how data mining is used in cloud computing. Data Mining is used for extracting potentially useful information from raw data. The integration of data mining techniques into normal day-to-day activities has become commonplace. Every day people are confronted with targeted advertising, and data mining techniques help businesses to become more efficient by reducing costs. Data mining techniques and applications are very much needed in the cloud computing paradigm. The implem...

  2. Vibrio chromosomes share common history

    Directory of Open Access Journals (Sweden)

    Gevers Dirk

    2010-05-01

    Full Text Available Abstract Background While most gamma proteobacteria have a single circular chromosome, Vibrionales have two circular chromosomes. Horizontal gene transfer is common among Vibrios, and in light of this genetic mobility, it is an open question to what extent the two chromosomes themselves share a common history since their formation. Results Single copy genes from each chromosome (142 genes from chromosome I and 42 genes from chromosome II were identified from 19 sequenced Vibrionales genomes and their phylogenetic comparison suggests consistent phylogenies for each chromosome. Additionally, study of the gene organization and phylogeny of the respective origins of replication confirmed the shared history. Conclusions Thus, while elements within the chromosomes may have experienced significant genetic mobility, the backbones share a common history. This allows conclusions based on multilocus sequence analysis (MLSA for one chromosome to be applied equally to both chromosomes.

  3. Philosophy vs the common sense

    Directory of Open Access Journals (Sweden)

    V. V. Chernyshov

    2017-01-01

    Full Text Available The paper deals with the antinomy of philosophy and common sense. Philosophy emerges as a specifically human way of knowing, whose purpose is the analysis of the reality of subjective experience. The study reveals that in order to alienate philosophy from common sense it was essential to revise the understanding of wisdom. The new, philosophical interpretation of wisdom – offered by Pythagoras – has laid the foundation of any future philosophy. Thus, philosophy emerges by alienating itself from common sense, which refers to common or collective experience. Moreover, the study examines the role that emotions, conformity and conventionality play with respect to common sense. Next the author focuses on the role of philosophical intuition, guided by principles of rationality, nonconformity and scepticism, which the author holds to be the foundation stones of any sound philosophy. Common sense, described as deeply rooted in the world of human emotions, aims at empathy, whereas the purpose of philosophy is to provide rational means of knowledge. Therefore, philosophy uses thinking, making permanent efforts to check and recheck the data of its own experience. Thus, the first task of philosophical thinking is to overcome the suggestion of common sense, which aims at social empathy, whereas philosophical intuition aims at independent thinking, the analysis of subjective experience. The study describes the fundamental principles of common sense, on the one hand, and those of philosophy, on the other. The author arrives at the conclusion that common sense is unable to exceed the limits of sensual experience. Even where it apparently rises to a form of «spiritual unity», it cannot avoid referring to the data of commonly shared sensual experience; philosophy, meanwhile, goes beyond the sensual, creating a discourse that would be able to alienate from it, and to make its rational

  4. UMTS Common Channel Sensitivity Analysis

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António; Santos, Frederico

    2006-01-01

    The UMTS common transport channels forward access channel (FACH) and the random access channel (RACH) are two of the three fundamental channels for a functional implementation of an UMTS network. Most signaling procedures, such as the registration procedure, make use of these channels...... and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....

  5. From computer to brain foundations of computational neuroscience

    CERN Document Server

    Lytton, William W

    2002-01-01

    Biology undergraduates, medical students and life-science graduate students often have limited mathematical skills. Similarly, physics, math and engineering students have little patience for the detailed facts that make up much of biological knowledge. Teaching computational neuroscience as an integrated discipline requires that both groups be brought forward onto common ground. This book does this by making ancillary material available in an appendix and providing basic explanations without becoming bogged down in unnecessary details. The book will be suitable for undergraduates and beginning graduate students taking a computational neuroscience course and also to anyone with an interest in the uses of the computer in modeling the nervous system.

  6. Duality and Recycling Computing in Quantum Computers

    OpenAIRE

    Long, Gui Lu; Liu, Yang

    2007-01-01

    A quantum computer possesses quantum parallelism and offers great computing power over a classical computer \cite{er1,er2}. As is well known, a moving quantum object passing through a double-slit exhibits particle wave duality. A quantum computer is static and lacks this duality property. The recently proposed duality computer has exploited this particle wave duality property, and it may offer additional computing power \cite{r1}. Simply put, a duality computer is a moving quantum computer pass...

  7. The Messiness of Common Good

    DEFF Research Database (Denmark)

    Feldt, Liv Egholm

    Civil society and its philanthropic and voluntary organisations are currently experiencing public and political attention and demands to safeguard society’s ‘common good’ through social cohesion and as providers of welfare services. This has raised the question by both practitioners and researchers...... alike of whether civil society and its organisations can maintain their specific institutional logic if they are messed up with other logics (state and market). These concerns spring from a sector model that has championed research of civil society. The paper dismisses the sector model and claims...... has been messed up with other logics and that it is this mess that creates contemporary definitions of the common good....

  8. Antivirals for the common cold.

    Science.gov (United States)

    Jefferson, T O; Tyrrell, D

    2001-01-01

    The common cold is a ubiquitous short and usually mild illness for which preventive and treatment interventions have been under development since the mid-40s. As our understanding of the disease has increased, more experimental antivirals have been developed. This review attempts to draw together experimental evidence of the effects of these compounds. To identify, assemble, evaluate and (if possible) synthesise the results of published and unpublished randomised controlled trials of the effects of antivirals to prevent or minimise the impact of the common cold. We searched electronic databases, corresponded with researchers and handsearched the archives of the MRC's Common Cold Unit (CCU). We included original reports of randomised and quasi-randomised trials assessing the effects of antivirals on volunteers artificially infected and in individuals exposed to colds in the community. We included 241 studies assessing the effects of Interferons, interferon-inducers and other antivirals on experimental and naturally occurring common colds, contained in 230 reports. We structured our comparisons by experimental or community setting. Although intranasal interferons have high preventive efficacy against experimental colds (protective efficacy 46%, 37% to 54%) and to a lesser extent against natural colds (protective efficacy 24%, 21% to 27%) and are also significantly more effective than placebo in attenuating the course of experimental colds (WMD 15.90, 13.42 to 18.38), their safety profile makes compliance with their use difficult. For example, prolonged prevention of community colds with interferons causes blood-tinged nasal discharge (OR 4.52, 3.78 to 5.41). Dipyridamole (protective efficacy against natural colds 49%, 30% to 62%), ICI 130,685 (protective efficacy against experimental colds 58%, 35% to 74%), Impulsin (palmitate) (protective efficacy against natural colds 44%, CI 35% to 52%) and Pleconaril (protective efficacy against experimental colds 71%, 15% to

  9. Common Emergencies in Pet Birds.

    Science.gov (United States)

    Stout, Jane D

    2016-05-01

    Treating avian emergencies can be a challenging task. Pet birds often mask signs of illness until they are critically ill and require quick initiation of supportive care with minimal handling to stabilize them. This article introduces the clinician to common avian emergency presentations and details initial therapeutics and diagnostics that can be readily performed in the small-animal emergency room. Common disease presentations covered include respiratory and extrarespiratory causes of dyspnea, gastrointestinal signs, reproductive disease, neurologic disorders, trauma, and toxin exposure. The duration and severity of the avian patient's disease and the clinician's initiation of appropriate therapy often determines clinical outcome. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Amorphous Computing

    Science.gov (United States)

    Sussman, Gerald

    2002-03-01

    Digital computers have always been constructed to behave as precise arrangements of reliable parts, and our techniques for organizing computations depend upon this precision and reliability. Two emerging technologies, however, are beginning to undercut these assumptions about constructing and programming computers. These technologies -- microfabrication and bioengineering -- will make it possible to assemble systems composed of myriad information-processing units at almost no cost, provided: 1) that not all the units need to work correctly; and 2) that there is no need to manufacture precise geometrical arrangements or interconnection patterns among them. Microelectronic mechanical components are becoming so inexpensive to manufacture that we can anticipate combining logic circuits, microsensors, actuators, and communications devices integrated on the same chip to produce particles that could be mixed with bulk materials, such as paints, gels, and concrete. Imagine coating bridges or buildings with smart paint that can sense and report on traffic and wind loads and monitor structural integrity of the bridge. A smart paint coating on a wall could sense vibrations, monitor the premises for intruders, or cancel noise. Even more striking, there has been such astounding progress in understanding the biochemical mechanisms in individual cells, that it appears we'll be able to harness these mechanisms to construct digital-logic circuits. Imagine a discipline of cellular engineering that could tailor-make biological cells that function as sensors and actuators, as programmable delivery vehicles for pharmaceuticals, as chemical factories for the assembly of nanoscale structures. Fabricating such systems seems to be within our reach, even if it is not yet within our grasp. Fabrication, however, is only part of the story. We can envision producing vast quantities of individual computing elements, whether microfabricated particles, engineered cells, or macromolecular computing

  11. Longest common extensions in trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gawrychowski, Pawel; Gørtz, Inge Li

    2016-01-01

    The longest common extension (LCE) of two indices in a string is the length of the longest identical substrings starting at these two indices. The LCE problem asks to preprocess a string into a compact data structure that supports fast LCE queries. In this paper we generalize the LCE problem to t...
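
    For concreteness, the classical string form of an LCE query can be sketched as below; this is only the naive O(n)-per-query baseline that the compact data structures of the paper are designed to beat, and the generalization to trees is not shown.

```python
def lce_naive(s, i, j):
    """Length of the longest common prefix of the suffixes s[i:] and s[j:] (naive O(n) query)."""
    k = 0
    while i + k < len(s) and j + k < len(s) and s[i + k] == s[j + k]:
        k += 1
    return k

s = "abracadabra"
print(lce_naive(s, 0, 7))   # suffixes "abracadabra" and "abra" share "abra" -> 4
```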

  12. Common High Blood Pressure Myths

    Science.gov (United States)

    Common High Blood Pressure Myths (updated Dec 8, 2017). Knowing the facts ... health. This content was last reviewed October 2016. ...

  13. Autism: Many Genes, Common Pathways?

    OpenAIRE

    Geschwind, Daniel H.

    2008-01-01

    Autism is a heterogeneous neurodevelopmental syndrome with a complex genetic etiology. It is still not clear whether autism comprises a vast collection of different disorders akin to intellectual disability or a few disorders sharing common aberrant pathways. Unifying principles among cases of autism are likely to be at the level of brain circuitry in addition to molecular pathways.

  14. Nonparametric Regression with Common Shocks

    National Research Council Canada - National Science Library

    Souza-Rodrigues, Eduardo

    2016-01-01

    .... By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.
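
    The object being analysed is a kernel regression estimator; a minimal sketch of the basic Nadaraya-Watson form is given below for orientation only, without the common-shock structure or the conditional asymptotics that are the paper's actual contribution. The Gaussian kernel, the bandwidth and the simulated data are assumptions for illustration.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h=0.3):
    """Gaussian-kernel estimate of E[y | x] at the points x_eval."""
    d = (x_eval[:, None] - x_train[None, :]) / h       # pairwise scaled distances
    w = np.exp(-0.5 * d ** 2)                          # kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)   # weighted average per evaluation point

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = np.sin(x) + 0.2 * rng.normal(size=200)
grid = np.linspace(-2, 2, 5)
print(np.round(nadaraya_watson(x, y, grid), 2))        # roughly sin(grid)
```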

  15. Experiments on common property management

    NARCIS (Netherlands)

    van Soest, D.P.; Shogren, J.F.

    2013-01-01

    Common property resources are (renewable) natural resources where current excessive extraction reduces future resource availability, and the use of which is de facto restricted to a specific set of agents, such as inhabitants of a village or members of a community; think of community-owned forests,

  16. The common European flexicurity principles

    DEFF Research Database (Denmark)

    Mailand, Mikkel

    2010-01-01

    This article analyses the decision-making process underlying the adoption of common EU flexicurity principles. Supporters of the initiative succeeded in convincing the sceptics one by one; the change of government in France and the last-minute support of the European social partner organizations...

  17. Five Common Cancers in Iran

    NARCIS (Netherlands)

    Kolandoozan, Shadi; Sadjadi, Alireza; Radmard, Amir Reza; Khademi, Hooman

    Iran, as a developing nation, is in epidemiological transition from communicable to non-communicable diseases. Although cancer is the third cause of death in Iran, its mortality has been on the rise during recent decades. This mini-review was carried out to provide a general viewpoint on common cancers

  18. Health Education. Common Curriculum Goals.

    Science.gov (United States)

    Oregon State Dept. of Education, Salem.

    This guide presents the common curriculm goals for health education developed by the Oregon State Department of Education. Four content strands--safe living, stressor/risk-taking management, physical fitness, and nutrition--are a synthesis of the traditional health education and health promotion objectives. Knowledge and skills objectives are…

  19. Common sleep disorders in children.

    Science.gov (United States)

    Carter, Kevin A; Hathaway, Nathanael E; Lettieri, Christine F

    2014-03-01

    Up to 50% of children will experience a sleep problem. Early identification of sleep problems may prevent negative consequences, such as daytime sleepiness, irritability, behavioral problems, learning difficulties, motor vehicle crashes in teenagers, and poor academic performance. Obstructive sleep apnea occurs in 1% to 5% of children. Polysomnography is needed to diagnose the condition because it may not be detected through history and physical examination alone. Adenotonsillectomy is the primary treatment for most children with obstructive sleep apnea. Parasomnias are common in childhood; sleepwalking, sleep talking, confusional arousals, and sleep terrors tend to occur in the first half of the night, whereas nightmares are more common in the second half of the night. Only 4% of parasomnias will persist past adolescence; thus, the best management is parental reassurance and proper safety measures. Behavioral insomnia of childhood is common and is characterized by a learned inability to fall and/or stay asleep. Management begins with consistent implementation of good sleep hygiene practices, and, in some cases, use of extinction techniques may be appropriate. Delayed sleep phase disorder is most common in adolescence, presenting as difficulty falling asleep and awakening at socially acceptable times. Treatment involves good sleep hygiene and a consistent sleep-wake schedule, with nighttime melatonin and/or morning bright light therapy as needed. Diagnosing restless legs syndrome in children can be difficult; management focuses on trigger avoidance and treatment of iron deficiency, if present.

  20. Common Core: Fact vs. Fiction

    Science.gov (United States)

    Greene, Kim

    2012-01-01

    Despite students' interest in informational text, it has played second fiddle in literacy instruction for years. Now, though, nonfiction is getting its turn in the spotlight. The Common Core State Standards require that students become thoughtful consumers of complex, informative texts--taking them beyond the realm of dry textbooks and…

  1. Technology: Technology and Common Sense

    Science.gov (United States)

    Van Horn, Royal

    2004-01-01

    The absence of common sense in the world of technology continues to amaze the author. Things that seem so logical just aren't so for many people. The installation of Voice-over IP (VoIP, with IP standing for Internet Protocol) in many school districts is a good example. Schools have always had trouble with telephones. Many districts don't even…

  2. Separating common from distinctive variation

    NARCIS (Netherlands)

    van der Kloet, F.M.; Sebastián-León, P.; Conesa, A.; Smilde, A.K.; Westerhuis, J.A.

    2016-01-01

    BACKGROUND: Joint and individual variation explained (JIVE), distinct and common simultaneous component analysis (DISCO) and O2-PLS, a two-block (X-Y) latent variable regression method with an integral OSC filter can all be used for the integrated analysis of multiple data sets and decompose them in

  3. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

    Common nearly best linear estimates of location and scale parameters of normal and logistic distributions, which are based on complete samples, are considered. Here, the population from which the samples are drawn is either normal or logistic population or a fusion of both distributions and the estimates are computed ...

  4. Common source identification of images in large databases

    NARCIS (Netherlands)

    Gisolf, F.; Barens, P.; Snel, E.; Malgoezar, A.; Vos, M.; Mieremet, A.; Geradts, Z.

    2014-01-01

    Photo-response non-uniformity noise patterns are a robust way to identify the source of an image. However, identifying a common source of images in a large database may be impractical due to long computation times. In this paper a solution for large volume digital camera identification is proposed,
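
    The matching step underneath such systems can be sketched as follows: extract a noise residual from each image and compare residuals by normalized correlation. The box-filter "denoiser", the flat synthetic scenes and the array sizes are simplifying assumptions; real pipelines estimate a PRNU fingerprint from many images with wavelet or BM3D denoising and score candidates with statistics such as peak-to-correlation energy.

```python
import numpy as np

def noise_residual(img, k=3):
    """Crude noise residual: image minus a k-by-k box-filter mean."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img - smooth / (k * k)

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two synthetic flat-field images from the "same camera" share a multiplicative
# PRNU pattern; the third image carries a different pattern.
rng = np.random.default_rng(1)
prnu_a = 0.02 * rng.normal(size=(64, 64))
prnu_b = 0.02 * rng.normal(size=(64, 64))
img1 = 120.0 * (1 + prnu_a) + rng.normal(size=(64, 64))
img2 = 180.0 * (1 + prnu_a) + rng.normal(size=(64, 64))
img3 = 180.0 * (1 + prnu_b) + rng.normal(size=(64, 64))
r1, r2, r3 = map(noise_residual, (img1, img2, img3))
print(ncc(r1, r2), ncc(r1, r3))   # the same-camera pair correlates much more strongly
```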

  5. Antihistamines for the common cold.

    Science.gov (United States)

    De Sutter, An I M; Saraswat, Avadhesh; van Driel, Mieke L

    2015-11-29

    The common cold is an upper respiratory tract infection, most commonly caused by a rhinovirus. It affects people of all age groups and although in most cases it is self limiting, the common cold still causes significant morbidity. Antihistamines are commonly offered over the counter to relieve symptoms for patients affected by the common cold, however there is not much evidence of their efficacy. To assess the effects of antihistamines on the common cold. We searched CENTRAL (2015, Issue 6), MEDLINE (1948 to July week 4, 2015), EMBASE (2010 to August 2015), CINAHL (1981 to August 2015), LILACS (1982 to August 2015) and Biosis Previews (1985 to August 2015). We selected randomised controlled trials (RCTs) using antihistamines as monotherapy for the common cold. We excluded any studies with combination therapy or using antihistamines in patients with an allergic component in their illness. Two authors independently assessed trial quality and extracted data. We collected adverse effects information from the included trials. We included 18 RCTs, which were reported in 17 publications (one publication reports on two trials) with 4342 participants (of which 212 were children) suffering from the common cold, both naturally occurring and experimentally induced. The interventions consisted of an antihistamine as monotherapy compared with placebo. In adults there was a short-term beneficial effect of antihistamines on severity of overall symptoms: on day one or two of treatment 45% had a beneficial effect with antihistamines versus 38% with placebo (odds ratio (OR) 0.74, 95% confidence interval (CI) 0.60 to 0.92). However, there was no difference between antihistamines and placebo in the mid term (three to four days) to long term (six to 10 days). When evaluating individual symptoms such as nasal congestion, rhinorrhoea and sneezing, there was some beneficial effect of the sedating antihistamines compared to placebo (e.g. rhinorrhoea on day three: mean difference (MD) -0

  6. Twenty-First Century Diseases: Commonly Rare and Rarely Common?

    Science.gov (United States)

    Daunert, Sylvia; Sittampalam, Gurusingham Sitta; Goldschmidt-Clermont, Pascal J

    2017-09-20

    Alzheimer's drugs are failing at a rate of 99.6%, and success rate for drugs designed to help patients with this form of dementia is 47 times less than for drugs designed to help patients with cancers ( www.scientificamerican.com/article/why-alzheimer-s-drugs-keep-failing/2014 ). How can it be so difficult to produce a valuable drug for Alzheimer's disease? Each human has a unique genetic and epigenetic makeup, thus endowing individuals with a highly unique complement of genes, polymorphisms, mutations, RNAs, proteins, lipids, and complex sugars, resulting in distinct genome, proteome, metabolome, and also microbiome identity. This editorial is taking into account the uniqueness of each individual and surrounding environment, and stresses the point that a more accurate definition of a "common" disorder could be simply the amalgamation of a myriad of "rare" diseases. These rare diseases are being grouped together because they share a rather constant complement of common features and, indeed, generally respond to empirically developed treatments, leading to a positive outcome consistently. We make the case that it is highly unlikely that such treatments, despite their statistical success measured with large cohorts using standardized clinical research, will be effective on all patients until we increase the depth and fidelity of our understanding of the individual "rare" diseases that are grouped together in the "buckets" of common illnesses. Antioxid. Redox Signal. 27, 511-516.

  7. Computational Electromagnetics

    CERN Document Server

    Rylander, Thomas; Bondeson, Anders

    2013-01-01

    Computational Electromagnetics is a young and growing discipline, expanding as a result of the steadily increasing demand for software for the design and analysis of electrical devices. This book introduces three of the most popular numerical methods for simulating electromagnetic fields: the finite difference method, the finite element method and the method of moments. In particular it focuses on how these methods are used to obtain valid approximations to the solutions of Maxwell's equations, using, for example, "staggered grids" and "edge elements." The main goal of the book is to make the reader aware of different sources of errors in numerical computations, and also to provide the tools for assessing the accuracy of numerical methods and their solutions. To reach this goal, convergence analysis, extrapolation, von Neumann stability analysis, and dispersion analysis are introduced and used frequently throughout the book. Another major goal of the book is to provide students with enough practical understan...
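
    As a small illustration of the "staggered grid" idea mentioned in the blurb, a normalized one-dimensional FDTD update can be written in a few lines; this sketch is not taken from the book, and the grid size, Courant number and source are arbitrary choices.

```python
import numpy as np

# 1D FDTD on a staggered (Yee) grid in normalized units (c = 1, dz = 1, dt = courant * dz).
n, steps, courant = 200, 400, 0.5
Ex = np.zeros(n)        # E sampled at integer grid points
Hy = np.zeros(n - 1)    # H sampled half a cell away (the staggering)
for t in range(steps):
    Hy += courant * (Ex[1:] - Ex[:-1])            # update H from the spatial difference of E
    Ex[1:-1] += courant * (Hy[1:] - Hy[:-1])      # update E from the spatial difference of H
    Ex[n // 4] += np.exp(-((t - 30) / 10) ** 2)   # soft Gaussian source
print(float(np.abs(Ex).max()))                    # a pulse propagates along the line (PEC walls at the ends)
```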

  8. Computational models of syntactic acquisition.

    Science.gov (United States)

    Yang, Charles

    2012-03-01

    The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics to overcome the learning problem. We conclude with a discussion of how computational models connects with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154 For further resources related to this article, please visit the WIREs website. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Spatial Computation

    Science.gov (United States)

    2003-12-01


  10. Computational Introspection.

    Science.gov (United States)

    1983-02-01

    to a Theory of Formal Reasoning, by Richard Weyhrauch [Weyhrauch, 1978]; A Model for Deliberation, Action and Introspection, by Jon Doyle [Doyle ... program's progress through a task domain was in [Sussman & Stallman, 1975]. In this system, whenever an assumption was made, the supporting facts ... Sussman, G. and Stallman, R. (1975) "Heuristic Techniques in Computer-Aided Circuit Analysis," IEEE Transactions on Circuits and Systems, CAS-22, 857

  11. Computer Game

    Science.gov (United States)

    1992-01-01

    Using NASA studies of advanced lunar exploration and colonization, KDT Industries, Inc. and Wesson International have developed MOONBASE, a computer game. The player, or team commander, must build and operate a lunar base using NASA technology. He has 10 years to explore the surface, select a site and assemble structures brought from Earth into an efficient base. The game was introduced in 1991 by Texas Space Grant Consortium.

  12. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  13. Quantifying Morphological Computation

    Directory of Open Access Journals (Sweden)

    Nihat Ay

    2013-05-01

    Full Text Available The field of embodied intelligence emphasises the importance of the morphology and environment with respect to the behaviour of a cognitive system. The contribution of the morphology to the behaviour, commonly known as morphological computation, is well-recognised in this community. We believe that the field would benefit from a formalisation of this concept as we would like to ask how much the morphology and the environment contribute to an embodied agent's behaviour, or how an embodied agent can maximise the exploitation of its morphology within its environment. In this work we derive two concepts of measuring morphological computation, and we discuss their relation to the Information Bottleneck Method. The first concept asks how much the world contributes to the overall behaviour and the second concept asks how much the agent's action contributes to a behaviour. Various measures are derived from the concepts and validated in two experiments that highlight their strengths and weaknesses.
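
    The measures in question are information-theoretic; as a minimal illustration of the kind of quantity involved, the sketch below computes a conditional mutual information I(X;Y|Z) from a discrete joint distribution. The toy distribution and the identification of X, Y, Z with world, action and behaviour states are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def cond_mi(p_xyz):
    """Conditional mutual information I(X;Y|Z) in nats from a joint pmf p[x, y, z]."""
    p_xyz = p_xyz / p_xyz.sum()
    p_z = p_xyz.sum(axis=(0, 1), keepdims=True)
    p_xz = p_xyz.sum(axis=1, keepdims=True)
    p_yz = p_xyz.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xyz * np.log(p_xyz * p_z / (p_xz * p_yz))
    return float(np.nansum(terms))

# Sanity check: if X and Y are independent given Z, then I(X;Y|Z) = 0.
p = np.zeros((2, 2, 2))
for z in range(2):
    px = np.array([0.5, 0.5]) if z == 0 else np.array([0.9, 0.1])
    py = np.array([0.3, 0.7])
    p[:, :, z] = 0.5 * np.outer(px, py)
print(round(cond_mi(p), 6))   # ~0.0
```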

  14. Computer Spectrometers

    Science.gov (United States)

    Dattani, Nikesh S.

    2017-06-01

    Ideally, the cataloguing of spectroscopic linelists would not demand laborious and expensive experiments. Whatever an experiment might achieve, the same information would be attainable by running a calculation on a computer. Kolos and Wolniewicz were the first to demonstrate that calculations on a computer can outperform even the most sophisticated molecular spectroscopic experiments of the time, when their 1964 calculations of the dissociation energies of H_2 and D_{2} were found to be more than 1 cm^{-1} larger than the best experiments by Gerhard Herzberg, suggesting the experiment violated a strict variational principle. As explained in his Nobel Lecture, it took 5 more years for Herzberg to perform an experiment which caught up to the accuracy of the 1964 calculations. Today, numerical solutions to the Schrödinger equation, supplemented with relativistic and higher-order quantum electrodynamics (QED) corrections can provide ro-vibrational spectra for molecules that we strongly believe to be correct, even in the absence of experimental data. Why do we believe these calculated spectra are correct if we do not have experiments against which to test them? All evidence seen so far suggests that corrections due to gravity or other forces are not needed for a computer simulated QED spectrum of ro-vibrational energy transitions to be correct at the precision of typical spectrometers. Therefore a computer-generated spectrum can be considered to be as good as one coming from a more conventional spectrometer, and this has been shown to be true not just for the H_2 energies back in 1964, but now also for several other molecules. So are we at the stage where we can launch an array of calculations, each with just the atomic number changed in the input file, to reproduce the NIST energy level databases? Not quite. But I will show that for the 6e^- molecule Li_2, we have reproduced the vibrational spacings to within 0.001 cm^{-1} of the experimental spectrum, and I will

  15. Easy identification of generalized common and conserved nested intervals.

    Science.gov (United States)

    de Montgolfier, Fabien; Raffinot, Mathieu; Rusu, Irena

    2014-07-01

    In this article we explain how to easily compute gene clusters, formalized by classical or generalized nested common or conserved intervals, between a set of K genomes represented as K permutations. A b-nested common (resp. conserved) interval I of size |I| is either an interval of size 1 or a common (resp. conserved) interval that contains another b-nested common (resp. conserved) interval of size at least |I|-b. When b=1, this corresponds to the classical notion of nested interval. We exhibit two simple algorithms to output all b-nested common or conserved intervals between K permutations in O(Kn+nocc) time, where nocc is the total number of such intervals. We also explain how to count all b-nested intervals in O(Kn) time. New properties of the family of conserved intervals are proposed to do so.
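
    The paper’s algorithms run in O(Kn + nocc) time; as a hedged baseline for readers unfamiliar with the objects involved (not the authors’ algorithm), the quadratic sketch below enumerates the classical common intervals of two permutations, i.e. index ranges of the first permutation whose elements also form a contiguous block in the second.

        # Naive O(n^2) baseline, not the paper's O(Kn + nocc) algorithm: list the classical
        # common intervals of two permutations p and q of the same elements.
        def common_intervals(p, q):
            pos_in_q = {value: index for index, value in enumerate(q)}
            result = []
            n = len(p)
            for i in range(n):
                lo = hi = pos_in_q[p[i]]
                for j in range(i + 1, n):          # grow the interval p[i..j]
                    lo = min(lo, pos_in_q[p[j]])
                    hi = max(hi, pos_in_q[p[j]])
                    if hi - lo == j - i:           # the same elements are contiguous in q
                        result.append((i, j))
            return result

        print(common_intervals([1, 3, 2, 5, 4], [2, 3, 1, 4, 5]))
        # [(0, 1), (0, 2), (0, 4), (1, 2), (3, 4)]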

  16. The illusion of common ground

    DEFF Research Database (Denmark)

    Cowley, Stephen; Harvey, Matthew

    2016-01-01

    isolated individuals “use” language to communicate. Autonomous cognitive agents are said to use words to communicate inner thoughts and experiences; in such a framework, ‘common ground’ describes a body of information that people allegedly share, hold common, and use to reason about how intentions have...... been made manifest. We object to this view, above all, because it leaves out mechanisms that demonstrably enable people to manage joint activities by doing things together. We present an alternative view of linguistic understanding on which appeal to inner representations is replaced by tracing...... language to synergetic coordination between biological agents who draw on wordings to act within cultural ecosystems. Crucially, human coordination depends on, not just bodies, but also salient patterns of articulatory movement (‘wordings’). These rich patterns function as non-local resources that...

  17. Common skin conditions during pregnancy.

    Science.gov (United States)

    Tunzi, Marc; Gray, Gary R

    2007-01-15

    Common skin conditions during pregnancy generally can be separated into three categories: hormone-related, preexisting, and pregnancy-specific. Normal hormone changes during pregnancy may cause benign skin conditions including striae gravidarum (stretch marks); hyperpigmentation (e.g., melasma); and hair, nail, and vascular changes. Preexisting skin conditions (e.g., atopic dermatitis, psoriasis, fungal infections, cutaneous tumors) may change during pregnancy. Pregnancy-specific skin conditions include pruritic urticarial papules and plaques of pregnancy, prurigo of pregnancy, intrahepatic cholestasis of pregnancy, pemphigoid gestationis, impetigo herpetiformis, and pruritic folliculitis of pregnancy. Pruritic urticarial papules and plaques of pregnancy are the most common of these disorders. Most skin conditions resolve postpartum and only require symptomatic treatment. However, there are specific treatments for some conditions (e.g., melasma, intrahepatic cholestasis of pregnancy, impetigo herpetiformis, pruritic folliculitis of pregnancy). Antepartum surveillance is recommended for patients with intrahepatic cholestasis of pregnancy, impetigo herpetiformis, and pemphigoid gestationis.

  18. Trade, Tragedy, and the Commons

    OpenAIRE

    Copeland, Brian R.; M. Scott Taylor

    2009-01-01

    We develop a theory of resource management where the degree to which countries escape the tragedy of the commons is endogenously determined and explicitly linked to changes in world prices and other possible effects of market integration. We show how changes in world prices can move some countries from de facto open access situations to ones where management replicates that of an unconstrained social planner. Not all countries can follow this path of institutional reform and we identify key c...

  19. A Case for the Commons

    DEFF Research Database (Denmark)

    Kaiser, Brooks; Kourantidou, Melina; Fernandez, Linda

    the regulatory environment so that the crab no longer resides in international waters but is now part of the extended Russian Continental shelf, subject to Russian harvesting regulations. We argue that, as the Russians have maintained a closed, limited experimental fishery for C. Opilio, the positive externality...... define as a clear boon. We delineate and examine this complex story here in order to bring awareness to dimensions of commons management that the literature has yet to address....

  20. Common Perspectives in Qualitative Research.

    Science.gov (United States)

    Flannery, Marie

    2016-07-01

    The primary purpose of this column is to focus on several common core concepts that are foundational to qualitative research. Discussion of these concepts is at an introductory level and is designed to raise awareness and understanding of several conceptual foundations that undergird qualitative research. Because of the variety of qualitative approaches, not all concepts are relevant to every design and tradition. However, foundational aspects were selected for highlighting.

  1. LEADERS AND PROJECTS - COMMON ISSUES

    Directory of Open Access Journals (Sweden)

    A. Vacar

    2016-10-01

    Full Text Available This article is a small part of a longer empirical and practical research effort; it began from the need for models to be followed in organizations and from the way such models can generate the expected behavior in others. Nowadays, projects seem to be the modern way of doing things in organizations because of their advantages. The article presents common issues between leaders and projects, both of them being determinant factors for organizational success.

  2. Common Ground Between Three Cultures

    Directory of Open Access Journals (Sweden)

    Yehuda Peled

    2009-12-01

    Full Text Available The Triwizard program with Israel brought together students from three different communities: an Israeli Arab school, an Israeli Jewish school, and an American public school with few Jews and even fewer Muslims. The two Israeli groups met in Israel to find common ground and overcome their differences through dialogue and understanding. They communicated with the American school via technology such as video-conferencing, Skype, and emails. The program culminated with a visit to the U.S. The goal of the program was to embark upon a process that would bring about intercultural awareness and acceptance at the subjective level, guiding all involved to develop empathy and an insider's view of the other's culture. It was an attempt to have a group of Israeli high school students and a group of Arab Israeli students who had a fearful, distrustful perception of each other find common ground and become friends. TriWizard was designed to have participants begin a dialogue about issues, beliefs, and emotions based on the premise that cross-cultural training strategies that are effective in changing knowledge are those that engage the emotions, and actively develop empathy and an insider's views of another culture focused on what they have in common. Participants learned that they could become friends despite their cultural differences.

  3. Scientific Research: Commodities or Commons?

    Science.gov (United States)

    Vermeir, Koen

    2013-10-01

    Truth is for sale today, some critics claim. The increased commodification of science corrupts it, scientific fraud is rampant and the age-old trust in science is shattered. This cynical view, although gaining in prominence, does not explain very well the surprising motivation and integrity that is still central to the scientific life. Although scientific knowledge becomes more and more treated as a commodity or as a product that is for sale, a central part of academic scientific practice is still organized according to different principles. In this paper, I critically analyze alternative models for understanding the organization of knowledge, such as the idea of the scientific commons and the gift economy of science. After weighing the diverse positive and negative aspects of free market economies of science and gift economies of science, a commons structured as a gift economy seems best suited to preserve and take advantage of the specific character of scientific knowledge. Furthermore, commons and gift economies promote the rich social texture that is important for supporting central norms of science. Some of these basic norms might break down if the gift character of science is lost. To conclude, I consider the possibility and desirability of hybrid economies of academic science, which combine aspects of gift economies and free market economies. The aim of this paper is to gain a better understanding of these deeper structural challenges faced by science policy. Such theoretical reflections should eventually assist us in formulating new policy guidelines.

  4. Multiparty Computations

    DEFF Research Database (Denmark)

    Dziembowski, Stefan

    papers [1,2]. In [1] we assume that the adversary can corrupt any set from a given adversary structure. In this setting we study a problem of doing efficient VSS and MPC given an access to a secret sharing scheme (SS). For all adversary structures where VSS is possible at all, we show that, up...... here and discuss other problems caused by the adaptiveness. All protocols in the thesis are formally specified and the proofs of their security are given. [1]Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, and Tal Rabin. Efficient multiparty computations with dishonest minority...

  5. Customizable computing

    CERN Document Server

    Chen, Yu-Ting; Gill, Michael; Reinman, Glenn; Xiao, Bingjun

    2015-01-01

    Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of the research community and industry. The large energy efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, where one can adapt the architecture to the workload. In this Synthesis lecture, we present an overview and introduction of the recent developments on energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory

  6. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  7. Cloud Computing

    Science.gov (United States)

    2010-04-29

    campaigning to make it true. Richard Stallman, founder of the GNU project and the Free Software Foundation, quoted in The Guardian, September 29, 2008... Richard Stallman, known for his advocacy of “free software”, thinks cloud computing is a trap for users—if applications and data are managed “in the cloud

  8. Computational Finance

    DEFF Research Database (Denmark)

    Rasmussen, Lykke

    One of the major challenges in today's post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these derivatives have been determined using bump-and-revalue, but due to the increasing magnitude of these computations does...... this get increasingly difficult on available hardware. In this paper three alternative methods for evaluating derivatives are compared: the complex-step derivative approximation, the algorithmic forward mode and the algorithmic backward mode. These are applied to the price of the Credit Value Adjustment
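
    Of the three methods mentioned, the complex-step derivative approximation is the simplest to sketch: for a real-analytic function f, the derivative follows from a single evaluation at a complex point, f'(x) ≈ Im(f(x + ih))/h, with no subtractive cancellation. The toy pricing function below is an illustrative stand-in, not the thesis’s CVA computation.

        # Complex-step derivative approximation: f'(x) ~ Im(f(x + i*h)) / h for tiny h.
        # The "pricing" function here is a made-up smooth function of the spot, used only
        # to show the mechanics; it is not the Credit Value Adjustment from the thesis.
        import cmath

        def complex_step_derivative(f, x, h=1e-20):
            return f(complex(x, h)).imag / h

        def example_price(s):
            return s * cmath.exp(-0.05) + cmath.sqrt(s)

        spot = 100.0
        numerical = complex_step_derivative(example_price, spot)
        analytic = cmath.exp(-0.05).real + 0.5 / spot ** 0.5
        print(numerical, analytic)   # the two values agree to machine precision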

  9. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available What is Children's CT? Computed tomography, more commonly known as a CT or CAT scan, is a diagnostic medical test that, like traditional x-rays, produces multiple images or pictures of the inside of ...

  10. Joint Service Common Operating Environment (COE) Common Geographic Information System functional requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meitzler, W.D.

    1992-06-01

    In the context of this document and COE, the Geographic Information Systems (GIS) are decision support systems involving the integration of spatially referenced data in a problem solving environment. They are digital computer systems for capturing, processing, managing, displaying, modeling, and analyzing geographically referenced spatial data which are described by attribute data and location. The ability to perform spatial analysis and the ability to combine two or more data sets to create new spatial information differentiates a GIS from other computer mapping systems. While the CCGIS allows for data editing and input, its primary purpose is not to prepare data, but rather to manipulate, analyze, and clarify it. The CCGIS defined herein provides GIS services and resources including the spatial and map related functionality common to all subsystems contained within the COE suite of C4I systems. The CCGIS, which is an integral component of the COE concept, relies on the other COE standard components to provide the definition for other support computing services required.

  11. Computational crystallization.

    Science.gov (United States)

    Altan, Irem; Charbonneau, Patrick; Snell, Edward H

    2016-07-15

    Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Zinc for the common cold.

    Science.gov (United States)

    Singh, Meenu; Das, Rashmi R

    2013-06-18

    The common cold is one of the most widespread illnesses and is a leading cause of visits to the doctor and absenteeism from school and work. Trials conducted in high-income countries since 1984 investigating the role of zinc for the common cold symptoms have had mixed results. Inadequate treatment masking and reduced bioavailability of zinc from some formulations have been cited as influencing results. To assess whether zinc (irrespective of the zinc salt or formulation used) is efficacious in reducing the incidence, severity and duration of common cold symptoms. In addition, we aimed to identify potential sources of heterogeneity in results obtained and to assess their clinical significance. In this updated review, we searched CENTRAL (2012, Issue 12), MEDLINE (1966 to January week 2, 2013), EMBASE (1974 to January 2013), CINAHL (1981 to January 2013), Web of Science (1985 to January 2013), LILACS (1982 to January 2013), WHO ICTRP and clinicaltrials.gov. Randomised, double-blind, placebo-controlled trials using zinc for at least five consecutive days to treat, or for at least five months to prevent the common cold. Two review authors independently extracted data and assessed trial quality. Five trials were identified in the updated searches in January 2013 and two of them did not meet our inclusion criteria. We included 16 therapeutic trials (1387 participants) and two preventive trials (394 participants). Intake of zinc was associated with a significant reduction in the duration (days) (mean difference (MD) -1.03, 95% confidence interval (CI) -1.72 to -0.34) (P = 0.003) (I² statistic = 89%) but not the severity of common cold symptoms (MD -1.06, 95% CI -2.36 to 0.23) (P = 0.11) (I² statistic = 84%). The proportion of participants who were symptomatic after seven days of treatment was significantly smaller (odds ratio (OR) 0.45, 95% CI 0.20 to 1.00) (P = 0.05) than in the control group (I² statistic = 75%). The incidence rate ratio (IRR) of developing a
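
    The review quotes I² heterogeneity values alongside the pooled effects; as a generic illustration of how I² is derived from Cochran's Q in a fixed-effect, inverse-variance meta-analysis (the study effects and variances below are invented, not the review's data), see the sketch below.

        # Generic illustration of the I^2 heterogeneity statistic: I^2 = max(0, (Q - df)/Q) * 100,
        # with Q from a fixed-effect inverse-variance pooling. The inputs are invented numbers,
        # not the trial data summarised in the review.
        def i_squared(effects, variances):
            weights = [1.0 / v for v in variances]
            pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
            q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
            df = len(effects) - 1
            return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

        effects = [-1.5, -0.2, -1.8, -0.4]     # hypothetical mean differences in days
        variances = [0.04, 0.05, 0.03, 0.06]   # hypothetical within-study variances
        print(f"I^2 = {i_squared(effects, variances):.1f}%")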

  13. Treatment of the common cold.

    Science.gov (United States)

    Supiyaphun, Pakpoom; Kerekhanjananarong, Virachai; Saengpanich, Supinda; Cutchavaree, Amnuay

    2003-06-01

    Common colds are usually treated by the patients themselves with over-the-counter (OTC) cold medications. Many cough and cold remedies are available and sold freely without prescription. The authors conducted a study to compare the efficacy, adverse effects, the quality of life (QOL) and the patient's opinion and appreciation on the drugs (POD) between Dayquil/Nyquil and Actifed DM plus paracetamol syrup. In this prospective, investigator-blinded clinical trial, 120 patients, aged between 15 and 60 years old, with common colds within 72 hours, who accepted the trial and gave informed written consent, were randomized into two treatment groups. One patient was excluded due to evidence of bacterial infection. Fifty-nine patients were treated with Dayquil/Nyquil (D/N group), while the other 60 patients had Actifed DM plus paracetamol (ADM/P group) for three days. On day 1 the patient's demographic data (sex, age, body weight, blood pressure, co-existing diseases/conditions, drug use, and allergy to any drugs), the most prominent symptoms and its duration were recorded. All patients were screened for bacterial infection by physical examination, complete blood count and sinus radiographs. The symptoms (nasal obstruction, rhinorrhea, sneezing, cough, sore throat, fever and headache) and signs (injected nasal mucosa, nasal discharge and pharyngeal discharge) were scored, based on 4-point scale (0 to 3), on days 1 and 4. Changing of the symptoms and QOL were recorded on the diary card. The patient's opinion and appreciation on the drugs (POD) was assessed on day 4. The effectiveness (the ability to lessen the symptoms and signs), QOL and POD between two treatments were compared. The demographic data between the two groups were similar. The four most common prominent symptoms of common colds in our series were cough (47.9%), sore throat (26.17%), rhinorrhea (8.4%) and headache (8.4%). However, both treatments were equally effective in lessening the symptoms (P = 0.426) and

  14. COMMON APPROACH ON WASTE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    ANDREESCU Nicoleta Alina

    2017-05-01

    Full Text Available The world population has doubled since the 1960s, now reaching 7 billion, and it is estimated that it will continue growing. While in more advanced economies the population is starting to grow old and even shrink, in less developed countries population numbers are registering fast growth. Across the world, ecosystems are exposed to critical levels of pollution in ever more complex combinations. Human activities, population growth and shifting consumption patterns are the main factors behind this ever-growing burden on our environment. Globalization means that the consumption and production patterns of one country or region contribute to pressures on the environment in totally different parts of the world. With the rise of environmental problems, the search for solutions also began, such as methods and actions aimed at protecting the environment and leading to a better correlation between economic growth and the environment. The common goal of these endeavors by the participating states was to come up with medium- and long-term regulations that would lead to successfully solving environmental issues. In this paper, we have analyzed the way in which countries started collaborating in the 1970s at an international level in order to come up with a common policy that would have a positive impact on the environment. The European Union has come up with its own common policy, a policy that each member state must implement. In this context, Romania has developed its National Strategy for Waste Management, a program that Romania wishes to use to reduce the quantity of waste and better dispose of it.

  15. LHCb Data Management: consistency, integrity and coherence of data

    CERN Document Server

    Bargiotti, Marianne

    2007-01-01

    The Large Hadron Collider (LHC) at CERN will start operating in 2007. The LHCb experiment is preparing for the real data handling and analysis via a series of data challenges and production exercises. The aim of these activities is to demonstrate the readiness of the computing infrastructure based on WLCG (Worldwide LHC Computing Grid) technologies, to validate the computing model and to provide useful samples of data for detector and physics studies. DIRAC (Distributed Infrastructure with Remote Agent Control) is the gateway to WLCG. The Dirac Data Management System (DMS) relies on both WLCG Data Management services (LCG File Catalogues, Storage Resource Managers and File Transfer Service) and LHCb specific components (Bookkeeping Metadata File Catalogue). Although the Dirac DMS has been extensively used over the past years and has proved to achieve a high grade of maturity and reliability, the complexity of both the DMS and its interactions with numerous WLCG components as well as the instability of facilit...
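
    The consistency and integrity checks described here amount, at their core, to comparing what the file catalogues believe exists with what the storage actually holds. The sketch below is a generic, hypothetical illustration of such a cross-check; the listing inputs are stand-ins and this is not the DIRAC or LCG API.

        # Hypothetical sketch of a catalogue-vs-storage consistency check; the two input
        # lists stand in for dumps of a file catalogue and of a Storage Element namespace.
        # This is not the DIRAC/LCG API.
        def compare_listings(catalogue_lfns, storage_lfns):
            catalogue, storage = set(catalogue_lfns), set(storage_lfns)
            return {
                "dark_data": storage - catalogue,    # on storage but unknown to the catalogue
                "lost_files": catalogue - storage,   # registered replicas missing on storage
                "consistent": catalogue & storage,
            }

        catalogue_dump = ["/lhcb/data/run1/file001", "/lhcb/data/run1/file002"]
        storage_dump = ["/lhcb/data/run1/file002", "/lhcb/data/run1/file003"]
        for category, files in compare_listings(catalogue_dump, storage_dump).items():
            print(category, sorted(files))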

  16. EU grid computing effort takes on malaria

    CERN Multimedia

    Lawrence, Stacy

    2006-01-01

    Malaria is the world's most common parasitic infection, affecting more than 500 million people annually and killing more than 1 million. In order to help combat malaria, CERN has launched a grid computing effort (1 page)

  17. Common questions about infectious mononucleosis.

    Science.gov (United States)

    Womack, Jason; Jimenez, Marissa

    2015-03-15

    Epstein-Barr is a ubiquitous virus that infects 95% of the world population at some point in life. Although Epstein-Barr virus (EBV) infections are often asymptomatic, some patients present with the clinical syndrome of infectious mononucleosis (IM). The syndrome most commonly occurs between 15 and 24 years of age. It should be suspected in patients presenting with sore throat, fever, tonsillar enlargement, fatigue, lymphadenopathy, pharyngeal inflammation, and palatal petechiae. A heterophile antibody test is the best initial test for diagnosis of EBV infection, with 71% to 90% accuracy for diagnosing IM. However, the test has a 25% false-negative rate in the first week of illness. IM is unlikely if the lymphocyte count is less than 4,000 per mm3. The presence of EBV-specific immunoglobulin M antibodies confirms infection, but the test is more costly and results take longer than the heterophile antibody test. Symptomatic relief is the mainstay of treatment. Glucocorticoids and antivirals do not reduce the length or severity of illness. Splenic rupture is an uncommon complication of IM. Because physical activity within the first three weeks of illness may increase the risk of splenic rupture, athletic participation is not recommended during this time. Children are at the highest risk of airway obstruction, which is the most common cause of hospitalization from IM. Patients with immunosuppression are more likely to have fulminant EBV infection.

  18. Common ecology quantifies human insurgency.

    Science.gov (United States)

    Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F

    2009-12-17

    Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour.
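
    The size distributions discussed here are approximate power laws; a standard way to estimate the exponent of such a distribution is the continuous maximum-likelihood estimator alpha = 1 + n / sum(ln(x_i / x_min)). The sketch below applies it to synthetic event sizes only, as a hedged illustration, not to the conflict data analysed in the paper.

        # Standard MLE for a power-law exponent, applied to synthetic data drawn by inverse
        # transform sampling. Illustrative only; not the paper's analysis of conflict data.
        import math
        import random

        def powerlaw_mle(sizes, x_min):
            tail = [x for x in sizes if x >= x_min]
            return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

        random.seed(1)
        alpha_true, x_min = 2.5, 1.0
        sizes = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
                 for _ in range(10000)]
        print(powerlaw_mle(sizes, x_min))   # should land close to 2.5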

  19. Network Governance of the Commons

    Directory of Open Access Journals (Sweden)

    Lars Gunnar Carlsson

    2007-11-01

    Full Text Available The survival of the commons is closely associated with the potential to find ways to strengthen contemporary management systems, making them more responsive to a number of complexities, like the dynamics of ecosystems and related, but often fragmented, institutions. A discussion on the desirability of finding ways to establish so-called cross-scale linkages has recently been vitalised in the literature. In the same vein, concepts like adaptive management, co-management and adaptive co-management have been discussed. In essence, these ways of organizing management incorporate an implicit assumption about the establishment of social networks and is more closely related to network governance and social network theory, than to political administrative hierarchy. However, so far, attempts to incorporate social network analysis (SNA in this literature have been rather few, and not particularly elaborate. In this paper, a framework for such an approach will be presented. The framework provides an analytical skeleton for the understanding of joint management and the establishment of cross-scale linkages. The relationships between structural network properties - like density, centrality and heterogeneity - and innovation in adaptive co-management systems are highlighted as important to consider when crafting institutions for natural resource management. The paper makes a theoretical and methodological contribution to the understanding of co-management, and thereby to the survival of the commons.

  20. Numerical analysis mathematics of scientific computing

    CERN Document Server

    Kincaid, David

    2009-01-01

    This book introduces students with diverse backgrounds to various types of mathematical analysis that are commonly needed in scientific computing. The subject of numerical analysis is treated from a mathematical point of view, offering a complete analysis of methods for scientific computing with appropriate motivations and careful proofs. In an engaging and informal style, the authors demonstrate that many computational procedures and intriguing questions of computer science arise from theorems and proofs. Algorithms are presented in pseudocode, so that students can immediately write computer

  1. Towards the Epidemiological Modeling of Computer Viruses

    Directory of Open Access Journals (Sweden)

    Xiaofan Yang

    2012-01-01

    Full Text Available Epidemic dynamics of computer viruses is an emerging discipline aiming to understand the way that computer viruses spread on networks. This paper is intended to establish a series of rational epidemic models of computer viruses. First, a close inspection of some common characteristics shared by all typical computer viruses clearly reveals the flaws of previous models. Then, a generic epidemic model of viruses, which is named as the SLBS model, is proposed. Finally, diverse generalizations of the SLBS model are suggested. We believe this work opens a door to the full understanding of how computer viruses prevail on the Internet.
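
    As a hedged sketch of what a compartmental model of this kind looks like (the rate constants and flow terms below are illustrative assumptions in the Susceptible-Latent-Breaking-out spirit of the SLBS family, not the paper's exact equations), a simple Euler integration suffices.

        # Illustrative SLBS-style compartmental model: Susceptible -> Latent -> Breaking-out
        # -> Susceptible, with both latent and breaking-out hosts assumed infectious.
        # Rates and terms are assumptions for illustration, not the paper's equations.
        def simulate_slbs(beta=0.4, alpha=0.2, gamma=0.1, dt=0.01, steps=20000):
            s, l, b = 0.99, 0.01, 0.0             # fractions of the host population
            for _ in range(steps):
                new_infections = beta * s * (l + b)
                ds = -new_infections + gamma * b  # cured hosts become susceptible again
                dl = new_infections - alpha * l   # latent hosts eventually break out
                db = alpha * l - gamma * b
                s, l, b = s + ds * dt, l + dl * dt, b + db * dt
            return s, l, b

        print(simulate_slbs())   # approximate long-run (endemic) fractions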

  2. Computational neuroscience

    CERN Document Server

    Blackwell, Kim L

    2014-01-01

    Progress in Molecular Biology and Translational Science provides a forum for discussion of new discoveries, approaches, and ideas in molecular biology. It contains contributions from leaders in their fields and abundant references. This volume brings together different aspects of, and approaches to, molecular and multi-scale modeling, with applications to a diverse range of neurological diseases. Mathematical and computational modeling offers a powerful approach for examining the interaction between molecular pathways and ionic channels in producing neuron electrical activity. It is well accepted that non-linear interactions among diverse ionic channels can produce unexpected neuron behavior and hinder a deep understanding of how ion channel mutations bring about abnormal behavior and disease. Interactions with the diverse signaling pathways activated by G protein coupled receptors or calcium influx adds an additional level of complexity. Modeling is an approach to integrate myriad data sources into a cohesiv...

  3. Social Computing

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    The past decade has witnessed a momentous transformation in the way people interact with each other. Content is now co-produced, shared, classified, and rated by millions of people, while attention has become the ephemeral and valuable resource that everyone seeks to acquire. This talk will describe how social attention determines the production and consumption of content within both the scientific community and social media, how its dynamics can be used to predict the future and the role that social media plays in setting the public agenda. About the speaker Bernardo Huberman is a Senior HP Fellow and Director of the Social Computing Lab at Hewlett Packard Laboratories. He received his Ph.D. in Physics from the University of Pennsylvania, and is currently a Consulting Professor in the Department of Applied Physics at Stanford University. He originally worked in condensed matter physics, ranging from superionic conductors to two-dimensional superfluids, and made contributions to the theory of critical p...

  4. Computer Tree

    Directory of Open Access Journals (Sweden)

    Onur AĞAOĞLU

    2014-12-01

    Full Text Available It is crucial that gifted and talented students be supported by different educational methods suited to their interests and skills. The science and arts centres (gifted centres) provide the Supportive Education Program for these students with an interdisciplinary perspective. In line with the program, an ICT lesson entitled “Computer Tree” serves to identify learner readiness levels and to define the basic conceptual framework. A language teacher also contributes to the process, since the lesson caters for the creative function of the basic linguistic skills. The teaching technique is applied to students aged 9-11. The lesson introduces an evaluation process including basic information, skills, and interests of the target group. Furthermore, it includes an observation process by way of peer assessment. The lesson is considered a good example of planning for any subject, for the unpredicted convergence of visual and technical abilities with linguistic abilities.

  5. computer networks

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
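
    Independently of the paper's dynamic-systems formulation, the Token Bucket policer it models can be sketched in a few lines: tokens accumulate at a configured rate up to the bucket depth, and a packet conforms only if enough tokens are available on arrival. The parameters below are illustrative.

        # Textbook Token Bucket policer (illustrative parameters), not the paper's
        # dynamic model: tokens refill at `rate` up to `capacity`; a packet conforms
        # only if its size can be covered by the tokens available at arrival time.
        class TokenBucket:
            def __init__(self, rate, capacity):
                self.rate = rate            # tokens (e.g. bytes) added per second
                self.capacity = capacity    # bucket depth, i.e. allowed burst size
                self.tokens = capacity
                self.last_time = 0.0

            def conforms(self, packet_size, arrival_time):
                elapsed = arrival_time - self.last_time
                self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
                self.last_time = arrival_time
                if packet_size <= self.tokens:
                    self.tokens -= packet_size
                    return True             # within profile
                return False                # policed (dropped or marked)

        tb = TokenBucket(rate=1000.0, capacity=1500.0)
        for t, size in [(0.0, 1200), (0.1, 1200), (1.5, 1200)]:
            print(t, size, tb.conforms(size, t))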

  6. Virtualization and cloud computing in dentistry.

    Science.gov (United States)

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.

  7. Common Β- Thalassaemia Mutations in

    Directory of Open Access Journals (Sweden)

    P Azarfam

    2005-01-01

    Full Text Available Introduction: β-Thalassaemia was first described by Thomas Cooley as Cooley’s anaemia in 1925. The β-thalassaemias are hereditary autosomal disorders with decreased or absent β-globin chain synthesis. The most common genetic defects in β-thalassaemias are caused by point mutations, micro-deletions or insertions within the β-globin gene. Material and Methods: In this research, 142 blood samples (64 from the children's hospital of Tabriz, 15 from the Shahid Gazi hospital of Tabriz, 18 from Urumia and 45 from the Aliasghar hospital of Ardebil) were taken from thalassaemic patients (who were previously diagnosed). Then 117 non-familial samples were selected. The DNA of the blood lymphocytes was extracted by boiling and the Proteinase K-SDS procedure, and mutations were detected by ARMS-PCR methods. Results: The eleven most common mutations, most of which are Mediterranean mutations, were screened as follows: IVS-I-110 (G-A), IVS-I-1 (G-A), IVS-I-5 (G-C), Frameshift Codon 44 (-C), Codon 5 (-CT), IVS-I-6 (T-C), IVS-I-25 (-25 bp del), Frameshift 8.9 (+G), IVS-II-1 (G-A), Codon 39 (C-T) and Codon 30 (G-C), and the mutations of the samples were defined. The results showed that Frameshift 8.9 (+G), IVS-I-110 (G-A), IVS-II-1 (G-A), IVS-I-5 (G-C), IVS-I-1 (G-A), Frameshift Codon 44 (-C), Codon 5 (-CT), IVS-I-6 (T-C) and IVS-I-25 (-25 bp del), with frequencies of 29.9%, 25.47%, 17.83%, 7.00%, 6.36%, 6.63%, 3.8%, 2.5% and 0.63%, represented the most common mutations in north-west Iran. No mutations in Codon 39 (C-T) and Codon 30 (G-C) were detected. Conclusion: The frequency of these mutations in patients from the north-west of Iran seems to differ from that in other regions such as Turkey, Pakistan, Lebanon and the Fars province of Iran. The pattern of mutations in this region is more or less the same as in the Mediterranean region, but different from South-West Asia and East Asia.

  8. ELEMENTS OF COMPUTER MATHEMATICS,

    Science.gov (United States)

    (*MATHEMATICS, COMPUTERS), (*COMPUTERS, MATHEMATICS), (*NUMERICAL ANALYSIS, COMPUTERS), TABLES (DATA), FUNCTIONS (MATHEMATICS), EQUATIONS, INTEGRALS, DIFFERENTIAL EQUATIONS, POLYNOMIALS, LEAST SQUARES METHOD, HARMONIC ANALYSIS, USSR

  9. Linking computers for science

    CERN Document Server

    2005-01-01

    After the success of SETI@home, many other scientists have found computer power donated by the public to be a valuable resource - and sometimes the only possibility to achieve their goals. In July, representatives of several “public resource computing” projects came to CERN to discuss technical issues and R&D activities on the common computing platform they are using, BOINC. This photograph shows the LHC@home screen-saver which uses the BOINC platform: the dots represent protons and the position of the status bar indicates the progress of the calculations. This summer, CERN hosted the first “pangalactic workshop” on BOINC (Berkeley Open Infrastructure for Network Computing). BOINC is modelled on SETI@home, which millions of people have downloaded to help search for signs of extraterrestrial intelligence in radio-astronomical data. BOINC provides a general-purpose framework for scientists to adapt their software to, so that the public can install and run it. An important part of BOINC is managing the...

  10. Corticosteroids for the common cold.

    Science.gov (United States)

    Hayward, Gail; Thompson, Matthew J; Perera, Rafael; Del Mar, Chris B; Glasziou, Paul P; Heneghan, Carl J

    2015-10-13

    The common cold is a frequent illness, which, although benign and self limiting, results in many consultations to primary care and considerable loss of school or work days. Current symptomatic treatments have limited benefit. Corticosteroids are an effective treatment in other upper respiratory tract infections and their anti-inflammatory effects may also be beneficial in the common cold. This updated review has included one additional study. To compare corticosteroids versus usual care for the common cold on measures of symptom resolution and improvement in children and adults. We searched Cochrane Central Register of Controlled Trials (CENTRAL 2015, Issue 4), which includes the Acute Respiratory Infections (ARI) Group's Specialised Register, the Database of Reviews of Effects (DARE) (2015, Issue 2), NHS Health Economics Database (2015, Issue 2), MEDLINE (1948 to May week 3, 2015) and EMBASE (January 2010 to May 2015). Randomised, double-blind, controlled trials comparing corticosteroids to placebo or to standard clinical management. Two review authors independently extracted data and assessed trial quality. We were unable to perform meta-analysis and instead present a narrative description of the available evidence. We included three trials (353 participants). Two trials compared intranasal corticosteroids to placebo and one trial compared intranasal corticosteroids to usual care; no trials studied oral corticosteroids. In the two placebo-controlled trials, no benefit of intranasal corticosteroids was demonstrated for duration or severity of symptoms. The risk of bias overall was low or unclear in these two trials. In a trial of 54 participants, the mean number of symptomatic days was 10.3 in the placebo group, compared to 10.7 in those using intranasal corticosteroids (P value = 0.72). A second trial of 199 participants reported no significant differences in the duration of symptoms. The single-blind trial in children aged two to 14 years, who were also

  11. Overfishing of the Common Snook

    Directory of Open Access Journals (Sweden)

    Allison Ashcroft

    2012-01-01

    Full Text Available The chief aim of this project is to determine whether the populations of the common snook, Centropomus undecimalis, along the Atlantic and Gulf coasts are being affected by overfishing. This is established by evaluating the intrinsic rate of change for these populations and their carrying capacities. It turns out that the carrying capacity for the Atlantic coast population is approximately one million snook and its intrinsic rate is 0.00621, while the carrying capacity of the Gulf coast population is 2.9 million snook and its intrinsic rate is 0.00165. The decline of both populations is most likely due to overfishing; however, the Gulf coast population of the snook is decreasing at a faster rate than that of the Atlantic.
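
    The quantities reported here are exactly the parameters of a logistic growth model, dN/dt = rN(1 - N/K). The sketch below projects the Atlantic-coast population with the reported values (K ≈ 1.0e6, r ≈ 0.00621 per time step); the constant harvest term is an invented illustration of how removals exceeding surplus production drive a decline.

        # Logistic growth with a constant harvest, using the reported Atlantic-coast
        # parameters; the harvest value is an illustrative assumption, not a study figure.
        def project_population(n0, r, k, harvest, steps):
            n = n0
            for _ in range(steps):
                n = max(n + r * n * (1.0 - n / k) - harvest, 0.0)
            return n

        start = 8.0e5
        final = project_population(n0=start, r=0.00621, k=1.0e6, harvest=2000.0, steps=50)
        print(start, final)   # the harvest exceeds surplus production, so the stock shrinks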

  12. Constructing Common Space From Fiction:

    DEFF Research Database (Denmark)

    van Haeren, Kristen Danielle

    The spaces we inhabit are in essence a construct; each one is a fabricated environment that obtains its significance through cultural, corporeal and experiential understandings that are various, dynamic and ever-changing. Such characteristics make it utterly impossible to depict space through...... and categorization of space result in an assemblage of trivial contents that informs us of nothing other than characteristics of physicality. This fixation with calculation and universal language silences that which surfaces from human experiences and lived-in perception; the interweaving of culture and nature......, of concrete and ephemeral, of self and the world, resulting in a depiction that is abstract and alienating. Modes of representing space must go beyond a standardized language of forms in order to allow it to become the constructed common ground that one orients, identifies and dwells within. The proposal here...

  13. Common questions in veterinary toxicology.

    Science.gov (United States)

    Bates, N; Rawson-Harris, P; Edwards, N

    2015-05-01

    Toxicology is a vast subject. Animals are exposed to numerous drugs, household products, plants, chemicals, pesticides and venomous animals. In addition to the individual toxicity of the various potential poisons, there is also the question of individual response and, more importantly, of species differences in toxicity. This review serves to address some of the common questions asked when dealing with animals with possible poisoning, providing evidence where available. The role of emetics, activated charcoal and lipid infusion in the management of poisoning in animals, the toxic dose of chocolate, grapes and dried fruit in dogs, the use of antidotes in paracetamol poisoning, timing of antidotal therapy in ethylene glycol toxicosis and whether lilies are toxic to dogs are discussed. © 2015 British Small Animal Veterinary Association.

  14. Categorizing entities by common role.

    Science.gov (United States)

    Goldwater, Micah B; Markman, Arthur B

    2011-04-01

    Many categories group together entities that play a common role across situations. For example, guest and host refer to complementary roles in visiting situations and, thus, are role-governed categories (A. B. Markman & Stilwell, Journal of Experimental & Theoretical Artificial Intelligence, 13, 329-358, 2001). However, categorizing an entity by role is one of many possible classification strategies. This article examines factors that promote role-governed categorization over thematic-relation-based categorization (Lin & Murphy, Journal of Experimental Psychology: General, 130, 3-28, 2001). In Experiments 1a and 1b, we demonstrate that the use of novel category labels facilitates role-governed categorization. In Experiments 2a and 2b, we demonstrate that analogical comparison facilitates role-governed categorization. In Experiments 1b and 2b, we show that these facilitatory factors induce a general sensitivity to role information, as opposed to only promoting role-governed categorization on an item-by-item basis.

  15. CLL: Common Leukemia; Uncommon Presentations.

    Science.gov (United States)

    Lad, Deepesh; Malhotra, Pankaj; Varma, Neelam; Sachdeva, Manupdesh Singh; Das, Ashim; Srinivasan, Radhika; Bal, Amanjit; Khadwal, Alka; Prakash, Gaurav; Suri, Vikas; Kumari, Savita; Jain, Sanjay; Varma, Subhash

    2016-09-01

    We report here a series of ten patients with uncommon presentations and associations of chronic lymphocytic leukemia (CLL) not reported hitherto or occasionally reported in literature. The first two cases describe unusual causes of abdominal distension in CLL and unusual sites infiltration by CLL. The next two cases illustrate occurrence of CLL in association with other hematological malignancies. Cases five and six describe unusual infections and their impact on CLL. Cases seven and eight depict associations of rare non-hematological autoimmune conditions with CLL. The last two cases describe transformation at unusual sites. This series of ten cases illustrates how a common leukemia like CLL can present in different forms and how despite so much progress in understanding of this leukemia so little is known of such presentations.

  16. Data challenges in ATLAS computing

    CERN Document Server

    Vaniachine, A

    2003-01-01

    ATLAS computing is steadily progressing towards a highly functional software suite, plus a worldwide computing model which gives all of ATLAS equal access, and equal quality of access, to ATLAS data. A key component in the period before the LHC is a series of Data Challenges of increasing scope and complexity. The goals of the ATLAS Data Challenges are the validation of the computing model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. We are committed to 'common solutions' and look forward to the LHC Computing Grid being the vehicle for providing these in an effective way. In close collaboration between the Grid and Data Challenge communities ATLAS is testing large-scale testbed prototypes around the world, deploying prototype components to integrate and test Grid software in a production environment, and running DC1 production at 39 'tier' centers in 18 countries on four continents.

  17. Engineering applications of soft computing

    CERN Document Server

    Díaz-Cortés, Margarita-Arimatea; Rojas, Raúl

    2017-01-01

    This book bridges the gap between Soft Computing techniques and their applications to complex engineering problems. In each chapter we endeavor to explain the basic ideas behind the proposed applications in an accessible format for readers who may not possess a background in some of the fields. Therefore, engineers or practitioners who are not familiar with Soft Computing methods will appreciate that the techniques discussed go beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. At the same time, the book will show members of the Soft Computing community how engineering problems are now being solved and handled with the help of intelligent approaches. Highlighting new applications and implementations of Soft Computing approaches in various engineering contexts, the book is divided into 12 chapters. Further, it has been structured so that each chapter can be read independently of the others.

  18. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    ATLAS sites are now able to store unique or primary copies of the datasets. ATLAS Distributed Computing is further evolving to speed up request processing by introducing network awareness, using machine learning and optimisation of the latencies during the execution of the full chain of tasks. The Event Service, a new workflow and job execution engine, is designed around check-pointing at the level of event processing to use opportunistic resources more efficiently. ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The exploitation of opportunistic resources was at an early stage throughout 2015, at the level of 10% of the total ATLAS computing power, but in the next few years it is expected to deliver much more. In addition, demonstrating the ability to use an opportunistic resource can lead to securing ATLAS allocations on the facility, hence the importance of this work goes beyond merely the initial CPU cycles gained. In this paper, we give an overview and compare the performance, development effort, flexibility and robustness of the various approaches.

  19. Safe Computing: An Overview of Viruses.

    Science.gov (United States)

    Wodarz, Nan

    2001-01-01

    A computer virus is a program that replicates itself, in conjunction with an additional program that can harm a computer system. Common viruses include boot-sector, macro, companion, overwriting, and multipartite. Viruses can be fast, slow, stealthy, and polymorphic. Anti-virus products are described. (MLH)

  20. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of simulator of computer systems is software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  1. Task-Driven Computing

    National Research Council Canada - National Science Library

    Wang, Zhenyu

    2000-01-01

    .... They will want to use the resources to perform computing tasks. Today's computing infrastructure does not support this model of computing very well because computers interact with users in terms of low level abstractions...

  2. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  3. The Common Sense of Copying

    Directory of Open Access Journals (Sweden)

    Daniel M. Stamm

    2010-09-01

    Full Text Available This essay provides a survey of two very significant phases in the history of Japanese education: (1) the founding of the modern system (1872-1890), with a focus on the pedagogical practices acquired from the United States during that period, and (2) Japan’s performance on international tests of mathematics achievement. The first relies primarily on Benjamin Duke’s recently published book The History of Modern Japanese Education: Constructing the National School System, 1872-1890, and the second on a detailed comparison of ERA mathematics test scores of Japan and Singapore over a thirty-year period. These two aspects provide clear evidence that, contrary to the assertions of some scholars, it is quite possible to transfer the practices in use in one culture to another, with great success. Noting the irony of the abandonment by the U.S. of the principles that have served Japan so well for almost 140 years, I suggest that we exercise the “Common Sense of Copying” ourselves.

  4. Longest Common Extensions via Fingerprinting

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Kristensen, Jesper

    2012-01-01

    query time, no extra space and no preprocessing achieves significantly better average case performance. We show a new algorithm, Fingerprint k , which for a parameter k, 1 ≤ k ≤ [log n], on a string of length n and alphabet size σ, gives O(k n1/k) query time using O(k n) space and O(k n + sort......(n,σ)) preprocessing time, where sort(n,σ) is the time it takes to sort n numbers from σ. Though this solution is asymptotically strictly worse than the asymptotically best previously known algorithms, it outperforms them in practice in average case and is almost as fast as the simple linear time algorithm. On worst....... The LCE problem can be solved in linear space with constant query time and a preprocessing of sorting complexity. There are two known approaches achieving these bounds, which use nearest common ancestors and range minimum queries, respectively. However, in practice a much simpler approach with linear...
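
    In the same spirit as the fingerprinting approach discussed here (though not the paper's Fingerprint k structure or its bounds), an LCE query can be answered from precomputed Karp-Rabin prefix hashes by binary-searching the longest length whose fingerprints agree; matches are then only correct with high probability, as with any Karp-Rabin scheme.

        # Fingerprint-flavoured LCE sketch: Karp-Rabin prefix hashes plus a binary search
        # over the match length. This is a generic illustration, not the Fingerprint_k
        # data structure from the paper, and hash equality is only probabilistic.
        MOD = (1 << 61) - 1
        BASE = 256

        def build_prefix_hashes(s):
            h, pw = [0] * (len(s) + 1), [1] * (len(s) + 1)
            for i, c in enumerate(s):
                h[i + 1] = (h[i] * BASE + ord(c)) % MOD
                pw[i + 1] = (pw[i] * BASE) % MOD
            return h, pw

        def substring_hash(h, pw, i, length):
            return (h[i + length] - h[i] * pw[length]) % MOD

        def lce(s, h, pw, i, j):
            lo, hi = 0, len(s) - max(i, j)       # maximum possible extension
            while lo < hi:
                mid = (lo + hi + 1) // 2
                if substring_hash(h, pw, i, mid) == substring_hash(h, pw, j, mid):
                    lo = mid
                else:
                    hi = mid - 1
            return lo

        text = "abracadabra"
        h, pw = build_prefix_hashes(text)
        print(lce(text, h, pw, 0, 7))   # 4, the length of the shared prefix "abra"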

  5. Longest Common Extensions in Trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gawrychowski, Pawel; Gørtz, Inge Li

    2015-01-01

    to trees and suggest a few applications of LCE in trees to tries and XML databases. Given a labeled and rooted tree T of size n, the goal is to preprocess T into a compact data structure that support the following LCE queries between subpaths and subtrees in T. Let v1, v2, w1, and w2 be nodes of T...... such that w1 and w2 are descendants of v1 and v2 respectively. - LCEPP(v1, w1, v2, w2): (path-path LCE) return the longest common prefix of the paths v1 ~→ w1 and v2 ~→ w2. - LCEPT(v1, w1, v2): (path-tree LCE) return maximal path-path LCE of the path v1 ~→ w1 and any path from v2 to a descendant leaf. - LCETT......(v1, v2): (tree-tree LCE) return a maximal path-path LCE of any pair of paths from v1 and v2 to descendant leaves. We present the first non-trivial bounds for supporting these queries. For LCEPP queries, we present a linear-space solution with O(log* n) query time. For LCEPT queries, we present...

  6. Longest common extensions in trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gawrychowski, Pawel; Gørtz, Inge Li

    2016-01-01

    to trees and suggest a few applications of LCE in trees to tries and XML databases. Given a labeled and rooted tree T of size n, the goal is to preprocess T into a compact data structure that support the following LCE queries between subpaths and subtrees in T. Let v1, v2, w1, and w2 be nodes of T...... such that w1 and w2 are descendants of v1 and v2 respectively. - LCEPP(v1, w1, v2, w2): (path-path LCE) return the longest common prefix of the paths v1 ~→ w1 and v2 ~→ w2. - LCEPT(v1, w1, v2): (path-tree LCE) return maximal path-path LCE of the path v1 ~→ w1 and any path from v2 to a descendant leaf. - LCETT......(v1, v2): (tree-tree LCE) return a maximal path-path LCE of any pair of paths from v1 and v2 to descendant leaves. We present the first non-trivial bounds for supporting these queries. For LCEPP queries, we present a linear-space solution with O(log* n) query time. For LCEPT queries, we present...

  7. Microscopy of Common Nail Cosmetics.

    Science.gov (United States)

    Zimmer, Katelyn A; Clay, Tiffany; Vidal, Claudia I; Chaudhry, Sofia; Hurley, Maria Y

    2017-11-01

    Nail clipping specimens are commonly submitted for the microscopic evaluation of nail disease; however, there may be missing clinical history regarding nail polish or other adornments present on the nail at the time of specimen retrieval. For this study, 6 types of nail cosmetics were chosen and applied to the nail plate of a volunteer. After a period of at least 24 hours, the nail plates with adornments and a control nail plate were clipped and placed in formalin. Specimens were processed using a standard nail protocol. All of the specimens, except the sticker appliqué, survived the fixation process. The glitter nail polish was the only specimen found to be polarizable. None of the specimens that survived fixation were found to be PAS-positive. Cosmetic nail enhancements are easily differentiated from the nail plate microscopically; nail cosmetics appear as a distinct layer of inorganic material lying atop the nail plate. There were 2 main microscopic patterns noted on the specimens: those with 2 layers and those with 3 layers.

  8. Computational technologies advanced topics

    CERN Document Server

    Vabishchevich, Petr N

    2015-01-01

    This book discusses the numerical solution of applied problems on parallel computing systems. Nowadays, engineering and scientific computations are carried out on parallel computing systems, which provide parallel data processing on a few computing nodes. In constructing computational algorithms, mathematical problems are separated into relatively independent subproblems so that each can be solved on a single computing node.

  9. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr). Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala

  10. Designing the Microbial Research Commons

    Energy Technology Data Exchange (ETDEWEB)

    Uhlir, Paul F. [Board on Research Data and Information Policy and Global Affairs, Washington, DC (United States)

    2011-10-01

    Recent decades have witnessed an ever-increasing range and volume of digital data. All elements of the pillars of science--whether observation, experiment, or theory and modeling--are being transformed by the continuous cycle of generation, dissemination, and use of factual information. This is even more so in terms of the re-using and re-purposing of digital scientific data beyond the original intent of the data collectors, often with dramatic results. We all know about the potential benefits and impacts of digital data, but we are also aware of the barriers, the challenges in maximizing the access, and use of such data. There is thus a need to think about how a data infrastructure can enhance capabilities for finding, using, and integrating information to accelerate discovery and innovation. How can we best implement an accessible, interoperable digital environment so that the data can be repeatedly used by a wide variety of users in different settings and with different applications? With this objective: to use the microbial communities and microbial data, literature, and the research materials themselves as a test case, the Board on Research Data and Information held an International Symposium on Designing the Microbial Research Commons at the National Academy of Sciences in Washington, DC on 8-9 October 2009. The symposium addressed topics such as models to lower the transaction costs and support access to and use of microbiological materials and digital resources from the perspective of publicly funded research, public-private interactions, and developing country concerns. The overall goal of the symposium was to stimulate more research and implementation of improved legal and institutional models for publicly funded research in microbiology.

  11. Coordinating towards a Common Good

    Science.gov (United States)

    Santos, Francisco C.; Pacheco, Jorge M.

    2010-09-01

    Throughout their life, humans often engage in collective endeavors ranging from family-related issues to global warming. In all cases, the tragedy of the commons threatens the possibility of reaching the optimal solution associated with global cooperation, a scenario predicted by theory and demonstrated by many experiments. Using the toolbox of evolutionary game theory, I will address two important aspects of evolutionary dynamics that have been neglected so far in the context of public goods games and evolution of cooperation. On one hand, the fact that often there is a threshold above which a public good is reached [1, 2]. On the other hand, the fact that individuals often participate in several games, related to their social context and pattern of social ties, defined by a social network [3, 4, 5]. In the first case, the existence of a threshold above which collective action is materialized dictates a rich pattern of evolutionary dynamics where the direction of natural selection can be inverted compared to standard expectations. Scenarios of defector dominance, pure coordination or coexistence may arise simultaneously. Both finite and infinite population models are analyzed. In networked games, cooperation blooms whenever the act of contributing is more important than the effort contributed. In particular, the heterogeneous nature of social networks naturally induces a symmetry breaking of the dilemmas of cooperation, as contributions made by cooperators may become contingent on the social context in which the individual is embedded. This diversity in context provides an advantage to cooperators, which is particularly strong when both wealth and social ties follow a power-law distribution, providing clues on the self-organization of social communities. Finally, in both situations, it can be shown that individuals no longer play a defection dominance dilemma, but effectively engage in a general N-person coordination game. Even if locally defection may seem
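
    A minimal sketch of the threshold structure described above, assuming a simple N-person public goods game in which a benefit b is produced only when at least M of the N group members pay a cost c. The parameter names and payoff form are illustrative assumptions, not the talk's exact model.

    # Threshold (collective-risk) public goods game: below the threshold M,
    # cooperators pay the cost c for nothing; at or above it, everyone receives
    # the benefit b and defectors free-ride. This is what turns the interaction
    # into an N-person coordination game rather than pure defector dominance.
    def payoffs(num_cooperators: int, N: int = 6, M: int = 3, c: float = 1.0, b: float = 4.0):
        """Return (cooperator payoff, defector payoff) for a group with the given number of cooperators."""
        assert 0 <= num_cooperators <= N
        benefit = b if num_cooperators >= M else 0.0
        return benefit - c, benefit

    for k in range(0, 7):
        pc, pd = payoffs(k)
        print(f"{k} cooperators: cooperator {pc:+.1f}, defector {pd:+.1f}")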

  12. Common aspirations of world women.

    Science.gov (United States)

    Huang, B

    1996-02-01

    The comments of the Director of Foreign Affairs for the China State Family Planning Commission reflect satisfaction with the achievements at the Fourth World Conference on Women held in Beijing. It is posited that the historic documents from the conference reflect the common aspirations of all women in the world for equality, development, and peace. The conference's focus on social development for women has been translated in China into a "vigorous" IEC campaign. China is developing integrated approaches to family planning in rural areas. The approach aims to help rural women to become economically independent before achieving equality within the family and society. A National Conference on Integrated Programs was held in Sichuan province. Examples of integrated programs in Sichuan, Jilin, and Jiangsu were described for conference participants. The example is given of how poor rural women in Deyang Prefecture, Sichuan province, have received credit for income generation and access to skill development and literacy classes. Continuous economic and social development are important for achieving "poverty eradication and the liberation of women." Sustainable development involves use of resources, environmental protection, the reasonable change in consumption patterns, and transitional changes in modes of production. The concept of reproductive health means Chinese family planning workers must meet higher standards. Future plans include intensifying the IEC program in meeting the comprehensive biological, psychological, and social reproductive health needs of women. Respect must be given to the fertility intentions and reproductive rights of wives and husbands. "In China, voluntary choice of childbearing should be guided by the fertility policy formulated by the government." Training of family planning workers should be intensified to include training in public health, reproductive theory, contraception, and the techniques of interpersonal communication. Some provinces

  13. Multi-Valued Computer Algebra

    OpenAIRE

    Faure, Christèle; Davenport, James,; Naciri, Hanane

    2000-01-01

    One of the main strengths of computer algebra is being able to solve a family of problems with one computation. In order to express not only one problem but a family of problems, one introduces some symbols which are in fact the parameters common to all the problems of the family. The user must be able to understand in which way these parameters affect the result when he looks at the answer. Otherwise it may lead to completely wrong calculations, which when used for numerical applications bri...
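
    A tiny numeric illustration of the point above about parameters: the tempting "simplification" sqrt(x**2) = x is valid only for non-negative x, so an answer computed once for a symbolic parameter must keep such assumptions visible to the user.

    # For a negative parameter value, sqrt(x**2) equals abs(x), not x, so a result
    # derived under the implicit assumption x >= 0 gives wrong numbers here.
    import math

    for x in (3.0, -3.0):
        print(x, math.sqrt(x**2), math.sqrt(x**2) == x)
    # x =  3.0: sqrt(x**2) == 3.0, equal to x
    # x = -3.0: sqrt(x**2) == 3.0, NOT equal to x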

  14. Computer games and software engineering

    CERN Document Server

    Cooper, Kendra M L

    2015-01-01

    Computer games represent a significant software application domain for innovative research in software engineering techniques and technologies. Game developers, whether focusing on entertainment-market opportunities or game-based applications in non-entertainment domains, thus share a common interest with software engineers and developers on how to best engineer game software. Featuring contributions from leading experts in software engineering, the book provides a comprehensive introduction to computer game software development that includes its history as well as emerging research on the inte

  15. Approximate solutions of common fixed-point problems

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book presents results on the convergence behavior of algorithms which are known as vital tools for solving convex feasibility problems and common fixed point problems. The main goal for us in dealing with a known computational error is to find what approximate solution can be obtained and how many iterates one needs to find it. According to known results, these algorithms should converge to a solution. In this exposition, these algorithms are studied, taking into account computational errors which remain consistent in practice. In this case the convergence to a solution does not take place. We show that our algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Beginning with an introduction, this monograph moves on to study: · dynamic string-averaging methods for common fixed point problems in a Hilbert space · dynamic string methods for common fixed point problems in a metric space · dynamic string-averaging version of the proximal...
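
    A hedged sketch of the phenomenon the monograph studies: an averaged-projection iteration for a common fixed point of two maps, perturbed at every step by a bounded computational error. The toy example below (projections onto two overlapping intervals) is an illustration only, not the book's dynamic string-averaging algorithm.

    # With an error bound of err at every step, the iterate does not converge
    # exactly, but it settles within roughly err of a common fixed point
    # (a point of the intersection [1, 2]).
    import random

    def proj(x, lo, hi):
        """Projection of x onto the interval [lo, hi]."""
        return min(max(x, lo), hi)

    def iterate(x0, steps=200, err=1e-3):
        x = x0
        for _ in range(steps):
            p1 = proj(x, 0.0, 2.0)   # fixed points: [0, 2]
            p2 = proj(x, 1.0, 3.0)   # fixed points: [1, 3]
            x = 0.5 * (p1 + p2) + random.uniform(-err, err)  # bounded computational error
        return x

    print(iterate(10.0))  # close to 2.0, i.e. within about err of the intersection [1, 2]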

  16. Common Sense Initiative’s Recommendation on Cathode Ray Tube (CRT) Glass-to-Glass

    Science.gov (United States)

    From 1994 through 1998, EPA’s Common Sense Initiative (CSI) Computers and Electronics Subcommittee (CES) formed a workgroup to examine regulatory barriers to pollution prevention and electronic waste recycling.

  17. Where the Cloud Meets the Commons

    Science.gov (United States)

    Ipri, Tom

    2011-01-01

    Changes presented by cloud computing--shared computing services, applications, and storage available to end users via the Internet--have the potential to seriously alter how libraries provide services, not only remotely, but also within the physical library, specifically concerning challenges facing the typical desktop computing experience.…

  18. Common peroneal nerve entrapment with the communication ...

    African Journals Online (AJOL)

    Sciatic nerve divides into tibial nerve and common peroneal nerve at the level of superior angle of popliteal fossa and variations in its branching pattern are common. The most common nerve entrapment syndrome in the lower limbs is common peroneal nerve entrapment at fibular head. Invariably it can also be trapped in ...

  19. Common Cause Abduction : Its Scope and Limits

    NARCIS (Netherlands)

    Dziurosz-Serafinowicz, Patryk

    2012-01-01

    Patryk Dziurosz-Serafinowicz, Common Cause Abduction: Its Scope and Limits. This article aims to analyze the scope and limits of common cause abduction, which is a version of explanatory abduction based on Hans Reichenbach's Principle of the Common Cause. First, it is argued that common cause abduction

  20. AMRITA -- A computational facility

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, J.E. [California Inst. of Tech., CA (US); Quirk, J.J.

    1998-02-23

    Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems, generates the plots shown; outputs the LATEX to typeset the notes; performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

  1. CytoMCS: A Multiple Maximum Common Subgraph Detection Tool for Cytoscape

    DEFF Research Database (Denmark)

    Larsen, Simon; Baumbach, Jan

    2017-01-01

    Comparative analysis of biological networks is a major problem in computational integrative systems biology. By computing the maximum common edge subgraph between a set of networks, one is able to detect conserved substructures between them and quantify their topological similarity. To aid...
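
    To make the objective concrete, below is a brute-force maximum common edge subgraph count for two tiny graphs. It enumerates every injective node mapping, which is feasible only for a handful of nodes; CytoMCS itself uses heuristics for large networks, and the graphs here are made up for the example.

    # Count the largest number of edges of graph 1 that can be preserved by some
    # injective mapping of its nodes into graph 2 (a naive maximum common edge
    # subgraph score for two small graphs).
    from itertools import permutations

    def max_common_edges(nodes1, edges1, nodes2, edges2):
        e2 = {frozenset(e) for e in edges2}
        best = 0
        for image in permutations(nodes2, len(nodes1)):
            m = dict(zip(nodes1, image))
            kept = sum(1 for u, v in edges1 if frozenset((m[u], m[v])) in e2)
            best = max(best, kept)
        return best

    triangle = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
    path = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
    print(max_common_edges(*triangle, *path))  # 2: at most two triangle edges fit into the path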

  2. Universal quantum computation with little entanglement.

    Science.gov (United States)

    Van den Nest, Maarten

    2013-02-08

    We show that universal quantum computation can be achieved in the standard pure-state circuit model while the entanglement entropy of every bipartition is small in each step of the computation. The entanglement entropy required for large-scale quantum computation even tends to zero. Moreover we show that the same conclusion applies to many entanglement measures commonly used in the literature. This includes e.g., the geometric measure, localizable entanglement, multipartite concurrence, squashed entanglement, witness-based measures, and more generally any entanglement measure which is continuous in a certain natural sense. These results demonstrate that many entanglement measures are unsuitable tools to assess the power of quantum computers.
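
    As a worked illustration of the quantity discussed above, the sketch below computes the entanglement entropy of the bipartition of a two-qubit pure state from the singular values (Schmidt coefficients) of its coefficient matrix. The example states are standard; the helper function is written just for this illustration.

    # Von Neumann entropy (in bits) of either half of a 2-qubit pure state.
    import numpy as np

    def bipartite_entropy(state: np.ndarray) -> float:
        coeffs = state.reshape(2, 2)                      # amplitudes as a 2x2 matrix
        s = np.linalg.svd(coeffs, compute_uv=False)       # Schmidt decomposition
        p = s**2
        p = p[p > 1e-12]                                  # drop zero terms before the log
        return float(-(p * np.log2(p)).sum())

    product = np.array([1.0, 0.0, 0.0, 0.0])              # |00>: no entanglement
    bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)  # (|00> + |11>)/sqrt(2)
    print(bipartite_entropy(product))  # 0.0
    print(bipartite_entropy(bell))     # 1.0 bit: maximally entangled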

  3. Adaptive Learning in the Tragedy of the Commons

    Directory of Open Access Journals (Sweden)

    Julian Andrés García

    2010-07-01

    Full Text Available The joint utilisation of a commonly owned resource often causes the resource to be overused; this is known as the tragedy of the commons. This paper analyses the effects of adaptive learning in such situations using genetic programming. In a game-theoretical approach, the situation involves not only the strategic interaction among players, but also the dynamics of a changing environment linked strongly to the players' actions and payoffs. The results of an analytical game are used to formulate a simulation game for the commons; then a series of computational experiments is conducted, obtaining evolved game strategies that are examined in comparison with those predicted by the analytical model. The obtained results are similar to those predicted by classic game theory, but do not always lead to a tragedy.
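
    A minimal commons simulation in the spirit of the game described above: a renewable resource is harvested each round and regrows on what remains, so restrained harvesting sustains it while greedy harvesting collapses it. The growth rate and harvest levels are illustrative assumptions, not the paper's genetic-programming model.

    # Each round the agents harvest, then the remaining stock regrows by a fixed rate.
    def run(harvest_per_agent, agents=5, stock=100.0, growth=0.25, rounds=40):
        for _ in range(rounds):
            take = min(stock, harvest_per_agent * agents)
            stock = (stock - take) * (1.0 + growth)
        return stock

    print(run(harvest_per_agent=3.0))  # restrained use: the stock survives (and grows)
    print(run(harvest_per_agent=8.0))  # greedy use: the stock collapses to zero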

  4. Do collective actions clear common air?

    Energy Technology Data Exchange (ETDEWEB)

    Aakvik, A.; Tjoetta, S. [Univ. of Bergen (Norway). Dept. of Economics

    2007-07-01

    Success in managing global public goods and commons is important for future welfare. Examples of global public goods include global warming, maintenance of international macroeconomic stability, international trade rules, international political stability, humanitarian assistance, and knowledge. The list is far from exhaustive. Institutions to manage international public goods include international environmental agreements such as the Kyoto Protocol, the World Bank, the International Monetary Fund, the World Trade Organization, and the United Nations. Public goods crossing national jurisdictional borders add a dimension to Samuelson's (1954) general theory of public goods. Under current international law, obligations may be imposed only on a sovereign state with its consent. Hence, multinational institutions and international agreements often have weak, or even lack, explicit control and enforcement mechanisms. Compliance with agreements is often hard to control and verify, and moreover there is seldom an explicit sanction mechanism in these agreements. With this in mind, it is reasonable to question whether these institutions work. In this paper the authors address this question by evaluating specific international environmental agreements which share many of the same characteristics as most of the institutions managing global public goods and commons. They consider the effects of voluntary international environmental protocols on emissions using the 1985 Helsinki Protocol and the 1994 Oslo Protocol on the reduction of sulfur oxides. These protocols are voluntary in the sense that they lack a control and enforcement mechanism to handle non-compliance. This lack of control and enforcement raises the question whether these voluntary protocols have an effect or whether they merely codify what countries would have done anyway. The analysis utilizes panel data from 30 European countries for the period 1960-2002. The authors divided these countries into ...

  5. Cranial computed tomography in pediatrics

    Energy Technology Data Exchange (ETDEWEB)

    Boltshauser, E. (Zuerich Univ. (Switzerland). Kinderklinik)

    1984-01-01

    This paper deals mainly with methodological aspects (such as sedation and intravenous and intrathecal application of contrast media) and with common difficulties in the interpretation of computed tomography images. The indications for cranial CT are discussed with respect to probable therapeutic consequences and expected diagnostic yield. In the view of the author, CT is, as a rule, not required in assessing chronic headache, generalised epileptic convulsions, non-specific mental retardation and cerebral palsy.

  6. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; NA NA NA Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  7. Further computer appreciation

    CERN Document Server

    Fry, T F

    2014-01-01

    Further Computer Appreciation is a comprehensive cover of the principles and aspects in computer appreciation. The book starts by describing the development of computers from the first to the third computer generations, to the development of processors and storage systems, up to the present position of computers and future trends. The text tackles the basic elements, concepts and functions of digital computers, computer arithmetic, input media and devices, and computer output. The basic central processor functions, data storage and the organization of data by classification of computer files,

  8. Democratizing Computer Science

    Science.gov (United States)

    Margolis, Jane; Goode, Joanna; Ryoo, Jean J.

    2015-01-01

    Computer science programs are too often identified with a narrow stratum of the student population, often white or Asian boys who have access to computers at home. But because computers play such a huge role in our world today, all students can benefit from the study of computer science and the opportunity to build skills related to computing. The…

  9. Information and computation

    OpenAIRE

    Vogler, Walter

    2000-01-01

    In this chapter, concepts related to information and computation are reviewed in the context of human computation. A brief introduction to information theory and different types of computation is given. Two examples of human computation systems, online social networks and Wikipedia, are used to illustrate how these can be described and compared in terms of information and computation.

  10. Characterizing Computers And Predicting Computing Times

    Science.gov (United States)

    Saavedra-Barrera, Rafael H.

    1991-01-01

    Improved method for evaluation and comparison of computers running same or different FORTRAN programs devised. Enables one to predict time necessary to run given "benchmark" or other standard program on given computer, in scalar mode and without optimization of codes generated by compiler. Such "benchmark" running times are principal measures used to characterize performances of computers; of interest to designers, manufacturers, programmers, and users.
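
    A hedged sketch of the prediction idea in this record: characterize the machine by the cost of each abstract operation, characterize the program by how many of each operation it performs, and estimate the running time as the dot product of the two vectors. The operation names and numbers below are invented for illustration.

    # Machine characterization: nanoseconds per abstract operation (made-up figures).
    machine_ns_per_op = {"fp_add": 1.5, "fp_mul": 2.0, "mem_load": 4.0, "branch": 1.0}
    # Program characterization: how many of each operation one run performs (made-up figures).
    program_op_counts = {"fp_add": 2_000_000, "fp_mul": 1_500_000, "mem_load": 3_000_000, "branch": 500_000}

    predicted_ns = sum(count * machine_ns_per_op[op] for op, count in program_op_counts.items())
    print(f"predicted running time: {predicted_ns / 1e9:.4f} s")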

  11. Pattern Of Refractive Errors Among Computer Users In A Nigerian ...

    African Journals Online (AJOL)

    Background: Refractive error is a common cause of ocular morbidity. Computer use is associated with eye strain which may be due to refractive errors. Objective: To ascertain the prevalence and pattern of refractive errors among computer users. Method: A cross-sectional survey of 560 computer users in Enugu urban, ...

  12. Science for common entrance physics : answers

    CERN Document Server

    Pickering, W R

    2015-01-01

    This book contains answers to all exercises featured in the accompanying textbook Science for Common Entrance: Physics , which covers every Level 1 and 2 topic in the ISEB 13+ Physics Common Entrance exam syllabus. - Clean, clear layout for easy marking. - Includes examples of high-scoring answers with diagrams and workings. - Suitable for ISEB 13+ Mathematics Common Entrance exams taken from Autumn 2017 onwards. Also available to purchase from the Galore Park website www.galorepark.co.uk :. - Science for Common Entrance: Physics. - Science for Common Entrance: Biology. - Science for Common En

  13. CALL and Less Commonly Taught Languages--Still a Way to Go

    Science.gov (United States)

    Ward, Monica

    2016-01-01

    Many Computer Assisted Language Learning (CALL) innovations mainly apply to the Most Commonly Taught Languages (MCTLs), especially English. Recent manifestations of CALL for MCTLs such as corpora, Mobile Assisted Language Learning (MALL) and Massively Open Online Courses (MOOCs) are found less frequently in the world of Less Commonly Taught…

  14. Common families across test series—how many do we need?

    Science.gov (United States)

    G.R. Johnson

    2004-01-01

    In order to compare families that are planted on different sites, many forest tree breeding programs include common families in their different series of trials. Computer simulation was used to examine how many common families were needed in each series of progeny trials in order to reliably compare families across series. Average gain and its associated variation...

  15. Computers and the landscape

    Science.gov (United States)

    Gary H. Elsner

    1979-01-01

    Computers can analyze and help to plan the visual aspects of large wildland landscapes. This paper categorizes and explains current computer methods available. It also contains a futuristic dialogue between a landscape architect and a computer.

  16. Computer Viruses: An Overview.

    Science.gov (United States)

    Marmion, Dan

    1990-01-01

    Discusses the early history and current proliferation of computer viruses that occur on Macintosh and DOS personal computers, mentions virus detection programs, and offers suggestions for how libraries can protect themselves and their users from damage by computer viruses. (LRW)

  17. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  18. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  19. Soft computing in computer and information science

    CERN Document Server

    Fray, Imed; Pejaś, Jerzy

    2015-01-01

    This book presents a carefully selected and reviewed collection of papers presented during the 19th Advanced Computer Systems conference ACS-2014. The Advanced Computer Systems conference concentrated from its beginning on methods and algorithms of artificial intelligence. In later years, new areas of interest emerged concerning technical informatics related to soft computing and more technological aspects of computer science, such as multimedia and computer graphics, software engineering, web systems, information security and safety, and project management. These topics are represented in the present book under the categories Artificial Intelligence, Design of Information and Multimedia Systems, Information Technology Security and Software Technologies.

  20. More powerful biomolecular computers

    OpenAIRE

    Blasiak, Janusz; Krasinski, Tadeusz; Poplawski, Tomasz; Sakowski, Sebastian

    2011-01-01

    Biomolecular computers, along with quantum computers, may be a future alternative to traditional, silicon-based computers. The main advantages of biomolecular computers are massively parallel processing of data, expanded capacity for storing information, and compatibility with living organisms (first attempts at using biomolecular computers in cancer therapy, through blocking of improper genetic information, are described in Benenson et al. (2004)). However, biomolecular computers have several drawbacks...

  1. Computer Virus and Trends

    OpenAIRE

    Tutut Handayani; Soenarto Usna,Drs.MMSI

    2004-01-01

    Since its first appearance in the mid-1980s, the computer virus has invited various controversies that last to this day. Along with the development of computer systems technology, computer viruses have found new ways to spread through a variety of existing communications media. This paper discusses several topics related to computer viruses, namely: the definition and history of computer viruses; the basics of computer viruses; the current state of computer viruses; and ...

  2. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    The book at hand explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real-world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  3. Validity of two methods to assess computer use: Self-report by questionnaire and computer use software

    NARCIS (Netherlands)

    Douwes, M.; Kraker, H.de; Blatter, B.M.

    2007-01-01

    A long duration of computer use is known to be positively associated with Work Related Upper Extremity Disorders (WRUED). Self-report by questionnaire is commonly used to assess a worker's duration of computer use. The aim of the present study was to assess the validity of self-report and computer

  4. Computing technology in the 1980's. [computers

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  5. Common cold - how to treat at home

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/patientinstructions/000466.htm Common cold - how to treat at home. ... Antibiotics are almost never needed to treat a common cold. Acetaminophen (Tylenol) and ibuprofen (Advil, Motrin) help lower ...

  6. Shoulder Pain and Common Shoulder Problems

    Science.gov (United States)

    Shoulder Pain and Common Shoulder Problems. What most people call the shoulder is really several joints that ... article explains some of the common causes of shoulder pain, as well as some general treatment options. Your ...

  7. What Are Some Common Signs of Pregnancy?

    Science.gov (United States)

    ... some common complications of pregnancy? What is a high-risk pregnancy? What infections can affect pregnancy? What is labor? ...

  8. How Are Pelvic Floor Disorders Commonly Treated?

    Science.gov (United States)

    How are pelvic floor disorders commonly treated? Many women do not need ... Treatment: Nonsurgical treatments commonly used for PFDs include: Pelvic floor muscle training (PFMT). Also called Kegel (pronounced KEY- ...

  9. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Full Text Available Cloud computing is, and will continue to be, a new way of providing Internet services and computing. This approach builds on many existing services, such as the Internet, grid computing, and Web services. As a system, cloud computing aims to provide on-demand services that are more acceptable in terms of price and infrastructure. It is essentially the transition from the computer to a service offered to consumers as a product delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics it offers. It is a theoretical paper. Keywords: Cloud computing, QoS, quality of cloud computing

  10. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
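
    A hedged sketch of the routing idea in this record: when a link in the first data communications network is marked defective, the message is routed over the second, independent network instead. The adjacency maps and the BFS helper are illustrative assumptions, not the patent's actual mechanism.

    # Breadth-first search for a path in an adjacency map, skipping defective links.
    from collections import deque

    def bfs_path(adj, src, dst, bad_links=frozenset()):
        prev, queue = {src: None}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:                      # reconstruct the path back to src
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return path[::-1]
            for v in adj.get(u, []):
                if v not in prev and frozenset((u, v)) not in bad_links:
                    prev[v] = u
                    queue.append(v)
        return None

    net1 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # first network: chain 0-1-2-3
    net2 = {0: [2], 1: [2], 2: [0, 1, 3], 3: [2]}   # second, independent network
    defective = {frozenset((1, 2))}                  # failed link detected in net1

    route = bfs_path(net1, 0, 3, defective) or bfs_path(net2, 0, 3)
    print(route)  # [0, 2, 3]: traffic falls back to the second network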

  11. Quantum computational supremacy

    Science.gov (United States)

    Harrow, Aram W.; Montanaro, Ashley

    2017-09-01

    The field of quantum algorithms aims to find ways to speed up the solution of computational problems by using a quantum computer. A key milestone in this field will be when a universal quantum computer performs a computational task that is beyond the capability of any classical computer, an event known as quantum supremacy. This would be easier to achieve experimentally than full-scale quantum computing, but involves new theoretical challenges. Here we present the leading proposals to achieve quantum supremacy, and discuss how we can reliably compare the power of a classical computer to the power of a quantum computer.

  12. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    .... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  13. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  14. Common Badging and Access Control System (CBACS)

    Science.gov (United States)

    Dischinger, Portia

    2005-01-01

    This slide presentation presents NASA's Common Badging and Access Control System. NASA began a Smart Card implementation in January 2004. Following site surveys, it was determined that NASA's badging and access control systems required upgrades to common infrastructure in order to provide flexibility, usability, and return on investment prior to a smart card implementation. Common Badging and Access Control System (CBACS) provides the common infrastructure from which FIPS-201 compliant processes, systems, and credentials can be developed and used.

  15. Space Use in the Commons: Evaluating a Flexible Library Environment

    Directory of Open Access Journals (Sweden)

    Andrew D. Asher

    2017-06-01

    Full Text Available Abstract Objective – This article evaluates the usage and user experience of the Herman B Wells Library’s Learning Commons, a newly renovated technology and learning centre that provides services and spaces tailored to undergraduates’ academic needs at Indiana University Bloomington (IUB. Methods – A mixed-method research protocol combining time-lapse photography, unobtrusive observation, and random-sample surveys was employed to construct and visualize a representative usage and activity profile for the Learning Commons space. Results – Usage of the Learning Commons by particular student groups varied considerably from expectations based on student enrollments. In particular, business, first and second year students, and international students used the Learning Commons to a higher degree than expected, while humanities students used it to a much lower degree. While users were satisfied with the services provided and the overall atmosphere of the space, they also experienced the negative effects of insufficient space and facilities due to the space often operating at or near its capacity. Demand for collaboration rooms and computer workstations was particularly high, while additional evidence suggests that the Learning Commons furniture mix may not adequately match users’ needs. Conclusions – This study presents a unique approach to space use evaluation that enables researchers to collect and visualize representative observational data. This study demonstrates a model for quickly and reliably assessing space use for open-plan and learning-centred academic environments and for evaluating how well these learning spaces fulfill their institutional mission.

  16. Common Core: Teaching Optimum Topic Exploration (TOTE)

    Science.gov (United States)

    Karge, Belinda Dunnick; Moore, Roxane Kushner

    2015-01-01

    The Common Core has become a household term and yet many educators do not understand what it means. This article explains the historical perspectives of the Common Core and gives guidance to teachers in application of Teaching Optimum Topic Exploration (TOTE) necessary for full implementation of the Common Core State Standards. An effective…

  17. THE COMMON COLD—FACT AND FANCY

    Science.gov (United States)

    Jawetz, E.; Talbot, J. C.

    1950-01-01

    A great deal of folklore, superstition and emotional reaction is attached to the common cold, but established objective information is quite limited. The evidence concerning etiology, epidemiology, physiology, prevention and treatment of the common cold is briefly summarized and critically evaluated. There is disappointing lack of real progress in any of these aspects of the common cold problem. PMID:14778003

  18. Simplifying the ELA Common Core; Demystifying Curriculum

    Science.gov (United States)

    Schmoker, Mike; Jago, Carol

    2013-01-01

    The English Language Arts (ELA) Common Core State Standards ([CCSS], 2010) could have a transformational effect on American education. Though the process seems daunting, one can begin immediately integrating the essence of the ELA Common Core in every subject area. This article shows how one could implement the Common Core and create coherent,…

  19. Common Frame of Reference and social justice

    NARCIS (Netherlands)

    Hesselink, M.W.; Satyanarayana, R.

    2009-01-01

    The article "Common Frame of Reference and Social Justice" by Martijn W. Hesselink evaluates the Draft Common Frame of Reference (DCFR) of social justice. It discusses the important areas, namely a common frame of Reference in a broad sense, social justice and contract law, private law and

  20. 49 CFR 1185.5 - Common control.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 8 (2010-10-01). ... OF TRANSPORTATION, RULES OF PRACTICE, INTERLOCKING OFFICERS, § 1185.5 Common control. It shall not be ... carriers if such carriers are operated under common control or management either: (a) Pursuant to approval ...