WorldWideScience

Sample records for intel cluster tools

  1. Implementation of High-Order Multireference Coupled-Cluster Methods on Intel Many Integrated Core Architecture.

    Science.gov (United States)

    Aprà, E; Kowalski, K

    2016-03-08

    In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.
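
    The task-reordering idea mentioned above can be illustrated independently of the chemistry: sorting work items by estimated cost and greedily assigning each to the least-loaded worker tends to even out the load across devices. The sketch below is a generic longest-processing-time-first heuristic in Python; the task costs and worker count are made-up illustrative values, not taken from the MRCCSD(T) implementation.

```python
import heapq

def lpt_schedule(task_costs, n_workers):
    """Greedy longest-processing-time-first assignment.

    Sorting tasks by decreasing cost before assignment is the kind of
    reordering that improves load balance in a noniterative triples step.
    """
    # Min-heap of (accumulated_load, worker_id)
    heap = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for tid, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, w = heapq.heappop(heap)        # least-loaded worker so far
        assignment[w].append(tid)
        heapq.heappush(heap, (load + cost, w))
    return assignment

# Hypothetical per-task costs (e.g., sizes of individual tensor contractions)
costs = [9.0, 1.0, 7.5, 3.2, 6.8, 2.1, 4.4, 5.0]
print(lpt_schedule(costs, n_workers=3))
```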

  2. Lawrence Livermore National Laboratory selects Intel Itanium 2 processors for world's most powerful Linux cluster

    CERN Multimedia

    2003-01-01

    "Intel Corporation, system manufacturer California Digital and the University of California at Lawrence Livermore National Laboratory (LLNL) today announced they are building one of the world's most powerful supercomputers. The supercomputer project, codenamed "Thunder," uses nearly 4,000 Intel® Itanium® 2 processors... is expected to be complete in January 2004" (1 page).

  3. Performance Evaluation of Multithreaded Geant4 Simulations Using an Intel Xeon Phi Cluster

    Directory of Open Access Journals (Sweden)

    P. Schweitzer

    2015-01-01

The objective of this study is to evaluate the performance of Intel Xeon Phi hardware accelerators for Geant4 simulations, especially for multithreaded applications. We present the complete methodology to guide users through the compilation of their Geant4 applications on Phi processors. Then, we propose a series of benchmarks to compare the performance of Xeon CPUs and Phi processors for a Geant4 example dedicated to the simulation of electron dose point kernels, the TestEm12 example. First, we compare a distributed execution of a sequential version of the Geant4 example on both architectures before evaluating the multithreaded version of the Geant4 example. While Phi processors demonstrated their ability to accelerate computing time (up to a factor of 3.83) when distributing sequential Geant4 simulations, we do not reach the same level of speedup when considering the multithreaded version of the Geant4 example.

  4. New compilers speed up applications for Intel-based systems; Intel Compilers pave the way for Intel's Hyper-threading technology

    CERN Multimedia

    2002-01-01

    "Intel Corporation today introduced updated tools to help software developers optimize applications for Intel's expanding family of architectures with key innovations such as Intel's Hyper Threading Technology (1 page).

  5. Unlock performance secrets of next-gen Intel hardware

    CERN Multimedia

    CERN. Geneva

    2015-01-01

Intel® Xeon Phi Product. About the speaker: Zakhar is a software architect in the Intel SSG group. His current role is Parallel Studio architect, with a focus on SIMD vector parallelism assistance tools. Before that he worked as the Intel Advisor XE software architect and as a software development team lead. Before joining Intel he was...

  6. Theorem Proving in Intel Hardware Design

    Science.gov (United States)

    O'Leary, John

    2009-01-01

For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.

  7. Accessing Intel FPGAs for Acceleration

    CERN Multimedia

    CERN. Geneva

    2018-01-01

In this presentation, we will discuss the latest tools and products from Intel that enable FPGAs to be deployed as accelerators. We will first talk about the Acceleration Stack for Intel Xeon CPU with FPGAs, which makes it easy to create, verify, and execute functions on the Intel Programmable Acceleration Card in a data center. We will then talk about the OpenCL flow, which allows parallel software developers to create FPGA systems and deploy them using the OpenCL standard. Next, we will talk about the Intel High-Level Synthesis compiler, which can convert C++ code into custom RTL code optimized for Intel FPGAs. Lastly, we will focus on the task of running machine learning inference on the FPGA leveraging some of the tools we discussed. About the speaker: Karl Qi is Sr. Staff Applications Engineer, Technical Training. He has been with the Customer Training department at Altera/Intel for 8 years. Most recently, he is responsible for all training content relating to High-Level Design tools, including the OpenCL...

  8. Parallel solution of the time-dependent Ginzburg-Landau equations and other experiences using BlockComm-Chameleon and PCN on the IBM SP, Intel iPSC/860, and clusters of workstations

    International Nuclear Information System (INIS)

    Coskun, E.

    1995-09-01

Time-dependent Ginzburg-Landau (TDGL) equations are considered for modeling a thin-film, finite-size superconductor placed under a magnetic field. The problem then leads to the use of so-called natural boundary conditions. The computational domain is partitioned into subdomains and bond variables are used in obtaining the corresponding discrete system of equations. An efficient time-differencing method based on the forward Euler method is developed. Finally, a variable-strength magnetic field resulting in a vortex motion in Type II high-Tc superconducting films is introduced. The authors tackled the problem using two different state-of-the-art parallel computing tools: BlockComm/Chameleon and PCN. They had access to two high-performance distributed memory supercomputers: the Intel iPSC/860 and the IBM SP1. They also tested the codes using a cluster of Sun Sparc workstations as a parallel computing environment.
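
    As a rough illustration of the forward-Euler time stepping described above, the sketch below integrates a simplified, field-free Ginzburg-Landau equation for a complex order parameter on a periodic 2-D grid with NumPy. It omits the vector potential, bond variables and natural boundary conditions used in the actual work; the grid size, time step and coefficients are illustrative assumptions only.

```python
import numpy as np

def laplacian(psi, h):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
            np.roll(psi, 1, 1) + np.roll(psi, -1, 1) - 4.0 * psi) / h**2

def step(psi, dt, h):
    """One forward-Euler step of d(psi)/dt = lap(psi) + psi - |psi|^2 psi."""
    return psi + dt * (laplacian(psi, h) + psi - np.abs(psi)**2 * psi)

rng = np.random.default_rng(0)
h, dt = 0.5, 0.01                      # grid spacing and (small) time step
psi = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
for _ in range(1000):
    psi = step(psi, dt, h)
print("mean |psi|^2 =", float(np.mean(np.abs(psi)**2)))   # relaxes toward ~1
```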

  9. Scientific Computing and Apple's Intel Transition

    CERN Document Server

    CERN. Geneva

    2006-01-01

Intel's published processor roadmap and how it may affect the future of personal and scientific computing. About the speaker: Eric Albert is Senior Software Engineer in Apple's Core Technologies group. During Mac OS X's transition to Intel processors he has worked on almost every part of the operating system, from the OS kernel and compiler tools to appli...

  10. Intel Galileo essentials

    CERN Document Server

    Grimmett, Richard

    2015-01-01

    This book is for anyone who has ever been curious about using the Intel Galileo to create electronics projects. Some programming background is useful, but if you know how to use a personal computer, with the aid of the step-by-step instructions in this book, you can construct complex electronics projects that use the Intel Galileo.

  11. Cluster development in the SA tooling industry

    Directory of Open Access Journals (Sweden)

    Von Leipzig, Konrad

    2015-11-01

This paper explores the concept of clustering in general, analysing research and experiences in different countries and regions, and summarising factors leading to success or contributing to failure of specific cluster initiatives. Based on this, requirements for the establishment of clusters are summarised. Next, initiatives especially in the South African tool and die making (TDM) industry are considered. Through a benchmarking approach, the strengths and weaknesses of individual local tool rooms are analysed, and conclusions are drawn particularly about South African characteristics of the industry. From these results, and from structured interviews with individual tool room owners, difficulties in the establishment of a South African tooling cluster are explored, and specific areas of concern are pointed out.

  12. Home automation with Intel Galileo

    CERN Document Server

    Dundar, Onur

    2015-01-01

    This book is for anyone who wants to learn Intel Galileo for home automation and cross-platform software development. No knowledge of programming with Intel Galileo is assumed, but knowledge of the C programming language is essential.

  13. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  14. Windows for Intel Macs

    CERN Document Server

    Ogasawara, Todd

    2008-01-01

Even the most devoted Mac OS X user may need to use Windows XP, or may just be curious about XP and its applications. This Short Cut is a concise guide for OS X users who need to quickly get comfortable and become productive with Windows XP basics on their Macs. It covers security, networking, and applications. Mac users can easily install and use Windows thanks to Boot Camp and Parallels Desktop for Mac. Boot Camp lets an Intel-based Mac install and boot Windows XP on its own hard drive partition. Parallels Desktop for Mac uses virtualization technology to run Windows XP (or other operating systems)...

  15. Design tool for offshore wind farm clusters

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Giebel, Gregor; Waldl, Igor

    2015-01-01

The software includes wind farm wake models, energy yield models, inter-array and long cable and grid component models, grid code compliance and ancillary services models. The common score for evaluation in order to compare different layouts is the levelized cost of energy (LCoE). The tool is developed by members of the European Energy Research Alliance (EERA) and a number of industrial partners. The approach has been to develop a robust, efficient, easy-to-use and flexible tool which integrates software relevant for planning offshore wind farms and wind farm clusters and supports the user with a clear optimization work flow. The integrated DTOC software is developed within the project using open interface standards and is now available as the commercial software product Wind&Economy.

  16. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  17. Cluster-based DBMS Management Tool with High-Availability

    Directory of Open Access Journals (Sweden)

    Jae-Woo Chang

    2005-02-01

Management tools for monitoring and managing cluster-based DBMSs have received little study. We therefore design and implement a cluster-based DBMS management tool with high availability that monitors the status of nodes in a cluster system as well as the status of DBMS instances in a node. The tool enables users to recognize a single virtual system image and provides them with the status of all the nodes and resources in the system through a graphical user interface (GUI). By using a load balancer, our management tool can increase the performance of a cluster-based DBMS and overcome the limitations of existing parallel DBMSs.

  18. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the...

  19. BioCluster: Tool for Identification and Clustering of Enterobacteriaceae Based on Biochemical Data

    Directory of Open Access Journals (Sweden)

    Ahmed Abdullah

    2015-06-01

Presumptive identification of different Enterobacteriaceae species is routinely achieved based on biochemical properties. Traditional practice includes manual comparison of each biochemical property of the unknown sample with known reference samples and inference of its identity based on the maximum similarity pattern with the known samples. This process is labor-intensive, time-consuming, error-prone, and subjective. Therefore, automation of sorting and similarity calculation would be advantageous. Here we present a MATLAB-based graphical user interface (GUI) tool named BioCluster. This tool was designed for automated clustering and identification of Enterobacteriaceae based on biochemical test results. In this tool, we used two types of algorithms, i.e., traditional hierarchical clustering (HC) and the Improved Hierarchical Clustering (IHC), a modified algorithm that was developed specifically for the clustering and identification of Enterobacteriaceae species. IHC takes into account the variability in results of 1–47 biochemical tests within the Enterobacteriaceae family. This tool also provides different options to optimize the clustering in a user-friendly way. Using computer-generated synthetic data and some real data, we have demonstrated that BioCluster has high accuracy in clustering and identifying enterobacterial species based on biochemical test data. This tool can be freely downloaded at http://microbialgen.du.ac.bd/biocluster/.
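
    To make the clustering step concrete, the sketch below applies ordinary hierarchical clustering (the "HC" variant mentioned above, not the modified IHC algorithm) to binary biochemical test profiles with SciPy; the isolate names and test results are invented for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: isolates, columns: positive/negative biochemical test results (invented)
isolates = ["unknown-1", "E. coli ref", "K. pneumoniae ref", "P. mirabilis ref"]
profiles = np.array([
    [1, 1, 0, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 1, 0, 0, 1],
    [0, 0, 1, 0, 1, 0, 1, 1],
], dtype=bool)

# Jaccard distance is a reasonable choice for presence/absence test patterns
dist = pdist(profiles, metric="jaccard")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
for name, lab in zip(isolates, labels):
    print(f"{name}: cluster {lab}")
```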

  20. Network Analysis Tools: from biological networks to clusters and pathways.

    Science.gov (United States)

    Brohée, Sylvain; Faust, Karoline; Lima-Mendez, Gipsi; Vanderstocken, Gilles; van Helden, Jacques

    2008-01-01

    Network Analysis Tools (NeAT) is a suite of computer tools that integrate various algorithms for the analysis of biological networks: comparison between graphs, between clusters, or between graphs and clusters; network randomization; analysis of degree distribution; network-based clustering and path finding. The tools are interconnected to enable a stepwise analysis of the network through a complete analytical workflow. In this protocol, we present a typical case of utilization, where the tasks above are combined to decipher a protein-protein interaction network retrieved from the STRING database. The results returned by NeAT are typically subnetworks, networks enriched with additional information (i.e., clusters or paths) or tables displaying statistics. Typical networks comprising several thousands of nodes and arcs can be analyzed within a few minutes. The complete protocol can be read and executed in approximately 1 h.
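
    The kinds of operations NeAT chains together (clustering a protein-protein interaction graph and finding paths in it) can be mimicked on a toy graph with NetworkX, as in the hedged sketch below; the protein names and edges are fabricated examples, not STRING data.

```python
import networkx as nx
from networkx.algorithms import community

# Toy protein-protein interaction network (invented edges)
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),       # one dense module
    ("D", "E"), ("E", "F"), ("D", "F"),       # another dense module
    ("C", "D"),                               # bridge between modules
])

# Network-based clustering: detect densely connected modules
modules = community.greedy_modularity_communities(G)
print("clusters:", [sorted(m) for m in modules])

# Path finding between two proteins of interest
print("path A -> F:", nx.shortest_path(G, "A", "F"))
```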

  1. Intel Xeon Phi coprocessor high performance programming

    CERN Document Server

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  2. CATCHprofiles: Clustering and Alignment Tool for ChIP Profiles

    DEFF Research Database (Denmark)

    G. G. Nielsen, Fiona; Galschiøt Markus, Kasper; Møllegaard Friborg, Rune

    2012-01-01

To analyse ChIP-profiling data and detect potentially meaningful patterns, the areas of enrichment must be aligned and clustered, which is an algorithmically and computationally challenging task. We have developed CATCHprofiles, a novel tool for exhaustive pattern detection in ChIP profiling data. CATCHprofiles is built upon a computationally efficient implementation for the exhaustive alignment and hierarchical clustering of ChIP profiling data. The tool features a graphical interface for examination and browsing of the clustering results. CATCHprofiles requires no prior knowledge about functional sites and detects known binding patterns, making it an invaluable tool for explorative research based on ChIP profiling data. CATCHprofiles and the CATCH algorithm run on all platforms and are available for free through the CATCH website: http://catch.cmbi.ru.nl/. User support is available by subscribing to the mailing list catch-users@bioinformatics.org.

  3. Protein Alignment on the Intel Xeon Phi Coprocessor

    OpenAIRE

    Ramstad, Jorun

    2015-01-01

There is an increasing need for sensitive, high-performance sequence alignment tools. With the growing databases of scientifically analyzed protein sequences, more compute power is necessary. Specialized architectures arise, and a transition from serial to specialized implementations is required. This thesis is a study of whether Intel's 60-core Xeon Phi coprocessor is a suitable architecture for the implementation of a sequence alignment tool. The performance relative to existing tools is eval...

  4. Time-efficient simulations of tight-binding electronic structures with Intel Xeon Phi™ many-core processors

    Science.gov (United States)

    Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam

    2016-12-01

Modelling of multi-million atomic semiconductor structures is important as it not only predicts properties of physically realizable novel materials, but can also accelerate advanced device designs. This work elaborates a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s∗ tight-binding approach to describe multi-million atomic structures and simulate electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on the latest clusters of Intel Xeon Phi™ coprocessors. A review of a recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be open to the public to establish a sound framework for nanoelectronics modelling with advanced HPC clusters of a many-core base. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development to researchers in the field of computational nanoelectronics.

  5. Efficient Implementation of Many-body Quantum Chemical Methods on the Intel Xeon Phi Coprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Apra, Edoardo; Klemm, Michael; Kowalski, Karol

    2014-12-01

    This paper presents the implementation and performance of the highly accurate CCSD(T) quantum chemistry method on the Intel Xeon Phi coprocessor within the context of the NWChem computational chemistry package. The widespread use of highly correlated methods in electronic structure calculations is contingent upon the interplay between advances in theory and the possibility of utilizing the ever-growing computer power of emerging heterogeneous architectures. We discuss the design decisions of our implementation as well as the optimizations applied to the compute kernels and data transfers between host and coprocessor. We show the feasibility of adopting the Intel Many Integrated Core Architecture and the Intel Xeon Phi coprocessor for developing efficient computational chemistry modeling tools. Remarkable scalability is demonstrated by benchmarks. Our solution scales up to a total of 62560 cores with the concurrent utilization of Intel Xeon processors and Intel Xeon Phi coprocessors.

  6. Design tool for offshore wind farm cluster planning

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Madsen, Peter Hauge; Giebel, Gregor

    2015-01-01

In the framework of the FP7 project EERA DTOC: Design Tool for Offshore wind farm Cluster, a new software supporting the planning of offshore wind farms was developed, based on state-of-the-art approaches from large scale wind potential to economic benchmarking. The model portfolio includes WAsP, FUGA, WRF, Net-Op, LCoE model, CorWind, FarmFlow, EeFarm and grid code compliance calculations. The development is done by members from the European Energy Research Alliance (EERA) and guided by several industrial partners. A commercial spin-off from the project is the tool 'Wind & Economy'. Several tests were performed with the software. The calculations include the smoothing effect on produced energy between wind farms located in different regional wind zones and the short time scales relevant for assessing balancing power. The grid code compliance was tested for several cases and the results...
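
    The levelized cost of energy used as the common score in both DTOC records has a compact definition: discounted lifetime costs divided by discounted lifetime energy production. The helper below implements that textbook formula in Python; the cost and yield figures in the example are placeholders, not project results.

```python
def lcoe(costs, energy, discount_rate):
    """Levelized cost of energy.

    costs[t] and energy[t] are the cost and net energy production in year t
    (year 0 = investment year); both are discounted to present value.
    """
    pv_costs = sum(c / (1.0 + discount_rate) ** t for t, c in enumerate(costs))
    pv_energy = sum(e / (1.0 + discount_rate) ** t for t, e in enumerate(energy))
    return pv_costs / pv_energy

# Placeholder figures: capex in year 0, then flat opex and yield for 20 years
costs = [300e6] + [8e6] * 20          # EUR
energy = [0.0] + [1.2e6] * 20         # MWh
print(f"LCoE = {lcoe(costs, energy, 0.06):.1f} EUR/MWh")
```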

  7. Intel Corporation participates in an Estonian training programme / Raivo Juurak

    Index Scriptorium Estoniae

    Juurak, Raivo, 1949-

    2001-01-01

An information technology training programme in which Intel Corporation, one of the world's largest computer companies, participates was presented at the Ministry of Education. During the training course, subject teachers are taught to use the possibilities of the Internet in their own lessons. The 50-hour courses will be held in all counties.

  8. Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations

    Science.gov (United States)

    Bernaschi, M.; Bisson, M.; Salvadore, F.

    2014-10-01

We present and compare the performance of two many-core architectures, the Nvidia Kepler and the Intel MIC, both in a single system and in a cluster configuration, for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model by using the over-relaxation algorithm. We present data also for a traditional high-end multi-core architecture, the Intel Sandy Bridge. The results show that although on the two Intel architectures it is possible to use basically the same code, the performance of the Intel MIC changes dramatically depending on (apparently) minor details. Another issue is that to obtain reasonable scalability with the Intel Phi coprocessor (Phi is the coprocessor that implements the MIC architecture) in a cluster configuration it is necessary to use the so-called offload mode, which reduces the performance of the single system. As to the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good by using the CPU as a communication co-processor of the GPU. All source codes are provided for inspection and for double-checking the results.
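
    For readers unfamiliar with the benchmark kernel, the over-relaxation move for a classical Heisenberg spin reflects the spin about its local molecular field, which leaves the energy unchanged. The NumPy sketch below shows that single-spin update on a 3-D lattice; it is a plain CPU illustration of the algorithm only, not the GPU/MIC benchmark code, and the lattice size is an arbitrary choice.

```python
import numpy as np

L = 16
rng = np.random.default_rng(1)
# Unit spins on an L^3 lattice: shape (L, L, L, 3)
spins = rng.standard_normal((L, L, L, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)

def local_field(s, x, y, z):
    """Sum of the six nearest-neighbour spins (periodic boundaries, J = 1)."""
    return (s[(x + 1) % L, y, z] + s[(x - 1) % L, y, z] +
            s[x, (y + 1) % L, z] + s[x, (y - 1) % L, z] +
            s[x, y, (z + 1) % L] + s[x, y, (z - 1) % L])

def overrelax(s, x, y, z):
    """Reflect the spin about its local field: s' = 2 (s.h / h.h) h - s."""
    h = local_field(s, x, y, z)
    old = s[x, y, z]
    s[x, y, z] = 2.0 * np.dot(old, h) / np.dot(h, h) * h - old

overrelax(spins, 3, 4, 5)
print(np.linalg.norm(spins[3, 4, 5]))   # still a unit vector
```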

  9. Towards Porting a Real-World Seismological Application to the Intel MIC Architecture

    OpenAIRE

    V. Weinberg

    2014-01-01

    This whitepaper aims to discuss first experiences with porting an MPI-based real-world geophysical application to the new Intel Many Integrated Core (MIC) architecture. The selected code SeisSol is an application written in Fortran that can be used to simulate earthquake rupture and radiating seismic wave propagation in complex 3-D heterogeneous materials. The PRACE prototype cluster EURORA at CINECA, Italy, was accessed to analyse the MPI-performance of SeisSol on Intel Xeon Phi on both sing...

  10. Clustering and Flow Conservation Monitoring Tool for Software Defined Networks

    Directory of Open Access Journals (Sweden)

    Jesús Antonio Puente Fernández

    2018-04-01

Prediction systems present challenges on two fronts: the relation between video quality and observed session features on the one hand, and dynamic changes in video quality on the other. Software Defined Networks (SDN) is a new concept of network architecture that provides the separation of the control plane (controller) and the data plane (switches) in network devices. Due to the existence of the southbound interface, it is possible to deploy monitoring tools to obtain the network status and retrieve a collection of statistics. Therefore, achieving the most accurate statistics depends on a strategy of monitoring and information requests to network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure the traffic flow in SDN networks. The algorithm is based on grouping network switches into clusters according to their number of ports in order to apply different monitoring techniques. Such grouping avoids monitoring queries to network switches with common characteristics and thus omits redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving the network traffic and preventing switch overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and a comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar values while decreasing the number of queries to the switches.

  11. Clustering and Flow Conservation Monitoring Tool for Software Defined Networks.

    Science.gov (United States)

    Puente Fernández, Jesús Antonio; García Villalba, Luis Javier; Kim, Tai-Hoon

    2018-04-03

Prediction systems present some challenges on two fronts: the relation between video quality and observed session features on the one hand, and dynamic changes in video quality on the other. Software Defined Networks (SDN) is a new concept of network architecture that provides the separation of the control plane (controller) and the data plane (switches) in network devices. Due to the existence of the southbound interface, it is possible to deploy monitoring tools to obtain the network status and retrieve a collection of statistics. Therefore, achieving the most accurate statistics depends on a strategy of monitoring and information requests to network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure the traffic flow in SDN networks. The algorithm is based on grouping network switches into clusters according to their number of ports in order to apply different monitoring techniques. Such grouping avoids monitoring queries to network switches with common characteristics and thus omits redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving the network traffic and preventing switch overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and a comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar values while decreasing the number of queries to the switches.
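
    The grouping step described in this record and the previous one boils down to putting switches with a similar number of ports into the same cluster so they can share a polling policy. The short sketch below shows one naive way to do that in Python; the switch inventory and the bucketing rule (grouping by port-count ranges) are purely illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical switch inventory: datapath id -> number of active ports
switches = {"s1": 4, "s2": 4, "s3": 8, "s4": 24, "s5": 26, "s6": 8}

def port_bucket(n_ports, width=8):
    """Bucket switches by port-count range (0-7, 8-15, 16-23, ...)."""
    return n_ports // width

clusters = defaultdict(list)
for dpid, ports in switches.items():
    clusters[port_bucket(ports)].append(dpid)

# One statistics request per cluster representative instead of per switch
for bucket, members in sorted(clusters.items()):
    representative = members[0]
    print(f"bucket {bucket}: poll {representative}, reuse stats for {members}")
```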

  12. XCluSim: a visual analytics tool for interactively comparing multiple clustering results of bioinformatics data

    Science.gov (United States)

    2015-01-01

Background: Though cluster analysis has become a routine analytic task for bioinformatics research, it is still arduous for researchers to assess the quality of a clustering result. To select the best clustering method and its parameters for a dataset, researchers have to run multiple clustering algorithms and compare them. However, such a comparison task with multiple clustering results is cognitively demanding and laborious. Results: In this paper, we present XCluSim, a visual analytics tool that enables users to interactively compare multiple clustering results based on the Visual Information Seeking Mantra. We build a taxonomy for categorizing existing techniques of clustering results visualization in terms of the Gestalt principles of grouping. Using the taxonomy, we choose the most appropriate interactive visualizations for presenting individual clustering results from different types of clustering algorithms. The efficacy of XCluSim is shown through case studies with a bioinformatician. Conclusions: Compared to other relevant tools, XCluSim enables users to compare multiple clustering results in a more scalable manner. Moreover, XCluSim supports diverse clustering algorithms and dedicated visualizations and interactions for different types of clustering results, allowing more effective exploration of details on demand. Through case studies with a bioinformatics researcher, we received positive feedback on the functionalities of XCluSim, including its ability to help identify stably clustered items across multiple clustering results. PMID:26328893
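
    A quantitative counterpart to the visual comparison XCluSim performs is to score the agreement between pairs of clustering results, for example with the adjusted Rand index. The sketch below does this with scikit-learn for three made-up label vectors; it illustrates comparing multiple clustering results in general and is not part of XCluSim itself.

```python
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

# Three hypothetical clustering results over the same ten items
results = {
    "kmeans_k3":    [0, 0, 1, 1, 1, 2, 2, 2, 0, 1],
    "hierarchical": [0, 0, 1, 1, 1, 2, 2, 1, 0, 1],
    "spectral":     [2, 2, 0, 0, 0, 1, 1, 1, 2, 0],
}

# Pairwise agreement (1.0 = identical partitions up to relabelling)
for (name_a, a), (name_b, b) in combinations(results.items(), 2):
    print(f"{name_a:>12} vs {name_b:<12} ARI = {adjusted_rand_score(a, b):.2f}")
```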

  13. Educational clusters as a tool of public policy in the market of educational services

    OpenAIRE

    M. I. Vorona

    2016-01-01

Due to new challenges, the implementation of cluster technology can be considered one of the innovative and promising tools for raising the competitiveness of national and regional economies. The cluster approach can be used in different areas, but most attention in this article has been paid to the problems of industrial clusters. At the same time, educational clusters remain poorly implemented in practice and, consequently, they are less studied theoretically. The aim of the arti...

  14. Effective SIMD Vectorization for Intel Xeon Phi Coprocessors

    OpenAIRE

    Tian, Xinmin; Saito, Hideki; Preis, Serguei V.; Garcia, Eric N.; Kozhukhov, Sergey S.; Masten, Matt; Cherkasov, Aleksei G.; Panchenko, Nikolay

    2015-01-01

    Efficiently exploiting SIMD vector units is one of the most important aspects in achieving high performance of the application code running on Intel Xeon Phi coprocessors. In this paper, we present several effective SIMD vectorization techniques such as less-than-full-vector loop vectorization, Intel MIC specific alignment optimization, and small matrix transpose/multiplication 2D vectorization implemented in the Intel C/C++ and Fortran production compilers for Intel Xeon Phi coprocessors. A ...

  15. Game-Based Experiential Learning in Online Management Information Systems Classes Using Intel's IT Manager 3

    Science.gov (United States)

    Bliemel, Michael; Ali-Hassan, Hossam

    2014-01-01

    For several years, we used Intel's flash-based game "IT Manager 3: Unseen Forces" as an experiential learning tool, where students had to act as a manager making real-time prioritization decisions about repairing computer problems, training and upgrading systems with better technologies as well as managing increasing numbers of technical…

  16. Cluster Analysis as an Analytical Tool of Population Policy

    Directory of Open Access Journals (Sweden)

    Oksana Mikhaylovna Shubat

    2017-12-01

The predicted negative trends in Russian demography (falling birth rates, population decline) actualize the need to strengthen measures of family and population policy. Our research purpose is to identify groups of Russian regions with similar characteristics in the family sphere using cluster analysis. The findings should make an important contribution to the field of family policy. We used hierarchical cluster analysis based on the Ward method and the Euclidean distance for segmentation of Russian regions. Clustering is based on four variables, which allowed assessing the family institution in each region. The authors used data of the Federal State Statistics Service from 2010 to 2015. Clustering and profiling of each segment allowed forming a model of Russian regions depending on the features of the family institution in these regions. The authors revealed four clusters grouping regions with similar problems in the family sphere. This segmentation makes it possible to develop the most relevant family policy measures for each group of regions. Thus, the analysis has shown a high degree of differentiation of the family institution across the regions. This suggests that a unified approach to solving population problems is far from effective. To achieve greater results in the implementation of family policy, a differentiated approach is needed. Methods of multidimensional data classification can be successfully applied as a relevant analytical toolkit. Further research could develop the adaptation of multidimensional classification methods to the analysis of population problems in Russian regions. In particular, the algorithms of nonparametric cluster analysis may be of relevance in future studies.
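
    A minimal version of the segmentation described above (Ward linkage, Euclidean distance, four indicators, four clusters) can be reproduced with SciPy as sketched below; the regional indicator matrix here is random placeholder data, not Federal State Statistics Service figures.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
# 80 hypothetical regions x 4 family-sphere indicators (placeholder data)
X = rng.standard_normal((80, 4))

# Standardize so no indicator dominates the Euclidean distance
X = (X - X.mean(axis=0)) / X.std(axis=0)

Z = linkage(X, method="ward", metric="euclidean")    # Ward + Euclidean
labels = fcluster(Z, t=4, criterion="maxclust")      # cut into 4 clusters
print("regions per cluster:", np.bincount(labels)[1:])
```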

  17. CERN welcomes Intel Science Fair winners

    CERN Multimedia

    Katarina Anthony

    2012-01-01

    This June, CERN welcomed twelve gifted young scientists aged 15-18 for a week-long visit of the Laboratory. These talented students were the winners of a special award co-funded by CERN and Intel, given yearly at the Intel International Science and Engineering Fair (ISEF).   The CERN award winners at the Intel ISEF 2012 Special Awards Ceremony. © Society for Science & the Public (SSP). The CERN award was set up back in 2009 as an opportunity to bring some of the best and brightest young minds to the Laboratory. The award winners are selected from among 1,500 talented students participating in ISEF – the world's largest pre-university science competition, in which students compete for more than €3 million in awards. “CERN gave an award – which was obviously this trip – to students studying physics, maths, electrical engineering and computer science,” says Benjamin Craig Bartlett, 17, from South Carolina, USA, wh...

  18. Cluster analysis as a prediction tool for pregnancy outcomes.

    Science.gov (United States)

    Banjari, Ines; Kenjerić, Daniela; Šolić, Krešimir; Mandić, Milena L

    2015-03-01

Considering the specific physiological changes during gestation and thinking of pregnancy as a "critical window", classification of pregnant women at early pregnancy can be considered crucial. The paper demonstrates the use of a method based on an approach from intelligent data mining, cluster analysis. Cluster analysis is a statistical method which makes it possible to group individuals based on sets of identifying variables. The method was chosen in order to determine the possibility of classifying pregnant women at early pregnancy and to analyze unknown correlations between different variables so that certain outcomes could be predicted. 222 pregnant women from two general obstetric offices were recruited. The main focus was set on the characteristics of these pregnant women: their age, pre-pregnancy body mass index (BMI) and haemoglobin value. Cluster analysis gained a 94.1% classification accuracy rate with three branches or groups of pregnant women showing statistically significant correlations with pregnancy outcomes. The results show that pregnant women of both older age and higher pre-pregnancy BMI have a significantly higher incidence of delivering a baby of higher birth weight but gain significantly less weight during pregnancy. Their babies are also longer, and these women have a significantly higher probability of complications during pregnancy (gestosis) and a higher probability of induced or caesarean delivery. We can conclude that the cluster analysis method can appropriately classify pregnant women at early pregnancy to predict certain outcomes.

  19. Intel Xeon Phi accelerated Weather Research and Forecasting (WRF) Goddard microphysics scheme

    Science.gov (United States)

    Mielikainen, J.; Huang, B.; Huang, A. H.-L.

    2014-12-01

The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF development is done in collaboration around the globe, and the model is used by academic atmospheric scientists, weather forecasters at operational centers, and others. The WRF contains several physics components, of which the most time consuming is the microphysics. One microphysics scheme is the Goddard cloud microphysics scheme, a sophisticated cloud microphysics scheme in the WRF model. The Goddard microphysics scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the Goddard scheme code. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs rather than just kernels as a GPU does. The MIC coprocessor supports all important Intel development tools, so the development environment is one familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the Goddard microphysics scheme on the Xeon Phi 7120P by a factor of 4.7×. In addition, the optimizations reduced the Goddard microphysics scheme's share of the total WRF processing time from 20.0% to 7.5%. Furthermore, the same optimizations...

  20. Experience with Intel's Many Integrated Core Architecture in ATLAS Software

    CERN Document Server

    Fleischmann, S; The ATLAS collaboration; Lavrijsen, W; Neumann, M; Vitillo, R

    2014-01-01

Intel recently released the first commercial boards of its Many Integrated Core (MIC) Architecture. MIC is Intel's solution for the domain of throughput computing, currently dominated by general purpose programming on graphics processors (GPGPU). MIC allows the use of the more familiar x86 programming model and supports standard technologies such as OpenMP, MPI, and Intel's Threading Building Blocks. This should make it possible to develop for both throughput and latency devices using a single code base.

  1. Experience with Intel's Many Integrated Core Architecture in ATLAS Software

    CERN Document Server

    Fleischmann, S; The ATLAS collaboration; Lavrijsen, W; Neumann, M; Vitillo, R

    2013-01-01

Intel recently released the first commercial boards of its Many Integrated Core (MIC) Architecture. MIC is Intel's solution for the domain of throughput computing, currently dominated by general purpose programming on graphics processors (GPGPU). MIC allows the use of the more familiar x86 programming model and supports standard technologies such as OpenMP, MPI, and Intel's Threading Building Blocks. This should make it possible to develop for both throughput and latency devices using a single code base.

  2. Optically-Selected Cluster Catalogs As a Precision Cosmology Tool

    Energy Technology Data Exchange (ETDEWEB)

    Rozo, Eduardo; /Ohio State U. /Chicago U. /KICP, Chicago; Wechsler, Risa H.; /KICP, Chicago /KIPAC, Menlo Park; Koester, Benjamin P.; /Michigan U. /Chicago U., Astron.; Evrard, August E.; McKay, Timothy A.; /Michigan U.

    2007-03-26

We introduce a framework for describing the halo selection function of optical cluster finders. We treat the problem as being separable into a term that describes the intrinsic galaxy content of a halo (the Halo Occupation Distribution, or HOD) and a term that captures the effects of projection and selection by the particular cluster finding algorithm. Using mock galaxy catalogs tuned to reproduce the luminosity dependent correlation function and the empirical color-density relation measured in the SDSS, we characterize the maxBCG algorithm applied by Koester et al. to the SDSS galaxy catalog. We define and calibrate measures of completeness and purity for this algorithm, and demonstrate successful recovery of the underlying cosmology and HOD when applied to the mock catalogs. We identify principal components (combinations of cosmology and HOD parameters) that are recovered by survey counts as a function of richness, and demonstrate that percent-level accuracies are possible in the first two components, if the selection function can be understood to approximately 15% accuracy.
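
    The completeness and purity measures referred to above admit simple operational definitions: the fraction of true halos matched to a detected cluster, and the fraction of detected clusters matched to a true halo. The sketch below computes both from precomputed match flags; the matching criterion itself (membership, centering, richness thresholds) is left out, and the arrays shown are invented.

```python
import numpy as np

def completeness(halo_matched):
    """Fraction of (mock) halos that were recovered by the cluster finder."""
    return np.asarray(halo_matched, dtype=bool).mean()

def purity(cluster_matched):
    """Fraction of detected clusters that correspond to a real halo."""
    return np.asarray(cluster_matched, dtype=bool).mean()

# Invented match flags, e.g. from matching a cluster catalog against a mock
halo_matched = [True, True, False, True, True, True, False, True]
cluster_matched = [True, True, True, False, True, True]
print(f"completeness = {completeness(halo_matched):.2f}")
print(f"purity       = {purity(cluster_matched):.2f}")
```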

  3. Cluster Flow: A user-friendly bioinformatics workflow tool [version 1; referees: 3 approved

    Directory of Open Access Journals (Sweden)

    Philip Ewels

    2016-12-01

Pipeline tools are becoming increasingly important within the field of bioinformatics. Using a pipeline manager to manage and run workflows comprised of multiple tools reduces workload and makes analysis results more reproducible. Existing tools require significant work to install and get running, typically needing pipeline scripts to be written from scratch before running any analysis. We present Cluster Flow, a simple and flexible bioinformatics pipeline tool designed to be quick and easy to install. Cluster Flow comes with 40 modules for common NGS processing steps, ready to work out of the box. Pipelines are assembled using these modules with a simple syntax that can be easily modified as required. Core helper functions automate many common NGS procedures, making running pipelines simple. Cluster Flow is available under a GNU GPLv3 license on GitHub. Documentation, examples and an online demo are available at http://clusterflow.io.

  4. Clusters of galaxies as tools in observational cosmology : results from x-ray analysis

    International Nuclear Information System (INIS)

    Weratschnig, J.M.

    2009-01-01

Clusters of galaxies are the largest gravitationally bound structures in the universe. They can be used as ideal tools to study large scale structure formation (e.g. when studying merger clusters) and provide highly interesting environments to analyse several characteristic interaction processes (like ram pressure stripping of galaxies, magnetic fields). In this dissertation thesis, we have studied several clusters of galaxies using X-ray observations. To obtain scientific results, we have applied different data reduction and analysis methods. With a combination of morphological and spectral analysis, the merger cluster Abell 514 was studied in much detail. It has a highly interesting morphology and shows signs of an ongoing merger as well as a shock. Using a new method to detect substructure, we have analysed several clusters to determine whether any substructure is present in the X-ray image. This hints towards real structure in the distribution of the intra-cluster medium (ICM) and is evidence for ongoing mergers. The results from this analysis are extensively used with the cluster of galaxies Abell S1136. Here, we study the ICM distribution and compare its structure with the spatial distribution of star forming galaxies. Cluster magnetic fields are another important topic of this thesis. They can be studied in radio observations, which can be put into relation with results from X-ray observations. Using observational data from several clusters, we could support the theory that cluster magnetic fields are frozen into the ICM. (author)

  5. Trusted Computing Technologies, Intel Trusted Execution Technology.

    Energy Technology Data Exchange (ETDEWEB)

    Guise, Max Joseph; Wendt, Jeremy Daniel

    2011-01-01

    We describe the current state-of-the-art in Trusted Computing Technologies - focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release, and unauthorized alteration: Unauthorized users should not access the sensitive input and sensitive output data, nor be able to alter them; the computation contains intermediate data with the same requirements, and executes algorithms that the unauthorized should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network system components increase the risk that sensitive input data, computation, and output data may be compromised.

  6. Radiation Failures in Intel 14nm Microprocessors

    Science.gov (United States)

    Bossev, Dobrin P.; Duncan, Adam R.; Gadlage, Matthew J.; Roach, Austin H.; Kay, Matthew J.; Szabo, Carl; Berger, Tammy J.; York, Darin A.; Williams, Aaron; LaBel, K.

    2016-01-01

In this study the 14 nm Intel Broadwell 5th generation Core series 5005U-i3 and 5200U-i5 processors were mounted on Dell Inspiron laptops, MSI Cubi and Gigabyte Brix barebones and tested with Windows 8 and CentOS 7 at idle. Heavy-ion-induced hard and catastrophic failures do not appear to be related to the Intel 14 nm Tri-Gate FinFET process. They originate from a small (9 μm × 140 μm) area on the 32 nm planar PCH die, not the CPU as initially speculated. The hard failures seem to be due to an SEE but the exact physical mechanism has yet to be identified. Some possibilities include latch-ups, charge or ion trapping or implantation, ion channels, or a combination of those (in biased conditions). The mechanism of the catastrophic failures seems related to the presence of electric power (1.05 V core voltage). The 1064 nm laser mimics ionizing radiation and induces soft and hard failures as a direct result of electron-hole pair production, not heat. The 14 nm FinFET processes continue to look promising for space radiation environments.

  7. An INTEL 8080 microprocessor development system

    International Nuclear Information System (INIS)

    Horne, P.J.

    1977-01-01

The INTEL 8080 has become one of the two most widely used microprocessors at CERN, the other being the MOTOROLA 6800. Even though this is the case, there have been, to date, only rudimentary facilities available for aiding the development of application programs for this microprocessor. An ideal development system is one which has a sophisticated editing and filing system, an assembler/compiler, and access to the microprocessor application. In many instances access to a PROM programmer is also required, as the application may utilize only PROMs for program storage. With these thoughts in mind, an INTEL 8080 microprocessor development system was implemented in the Proton Synchrotron (PS) Division. This system utilizes a PDP 11/45 as the editing and file-handling machine, and an MSC 8/MOD 80 microcomputer for assembling, PROM programming and debugging user programs at run time. The two machines are linked by an existing CAMAC crate system which will also provide the means of access to microprocessor applications in CAMAC and the interface of the development system to any other application. (Auth.)

  8. Investigating the Use of the Intel Xeon Phi for Event Reconstruction

    Science.gov (United States)

    Sherman, Keegan; Gilfoyle, Gerard

    2014-09-01

The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e. matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than the host core.
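
    For readers unfamiliar with the algorithm being ported, the sketch below shows the standard linear Kalman filter predict/update cycle with small NumPy matrices. It illustrates only the linear algebra (the matrix products and inversion mentioned above); the two-state constant-velocity model is an illustrative toy, much smaller than the 5x5 matrices cited, and is not the CLAS12 track model or the MIC-intrinsics code.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D constant-velocity model: state = [position, velocity]
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                   # we only measure position
Q = 1e-3 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:               # noisy position measurements
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print("estimated position, velocity:", x)
```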

  9. Effective SIMD Vectorization for Intel Xeon Phi Coprocessors

    Directory of Open Access Journals (Sweden)

    Xinmin Tian

    2015-01-01

Efficiently exploiting SIMD vector units is one of the most important aspects in achieving high performance of the application code running on Intel Xeon Phi coprocessors. In this paper, we present several effective SIMD vectorization techniques such as less-than-full-vector loop vectorization, Intel MIC specific alignment optimization, and small matrix transpose/multiplication 2D vectorization implemented in the Intel C/C++ and Fortran production compilers for Intel Xeon Phi coprocessors. A set of workloads from several application domains is employed to conduct the performance study of our SIMD vectorization techniques. The performance results show that we achieved up to 12.5x performance gain on the Intel Xeon Phi coprocessor. We also demonstrate a 2000x performance speedup from the seamless integration of SIMD vectorization and parallelization.

  10. Performance tuning Weather Research and Forecasting (WRF) Goddard longwave radiative transfer scheme on Intel Xeon Phi

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2015-10-01

The next-generation mesoscale numerical weather prediction system, the Weather Research and Forecasting (WRF) model, is designed for dual use in forecasting and research. WRF offers multiple physics options that can be combined in any way. One of the physics options is radiance computation. The major source of energy for the earth's climate is solar radiation, so it is imperative to accurately model the horizontal and vertical distribution of the heating. The Goddard solar radiative transfer model includes the absorption due to water vapor, ozone, oxygen, carbon dioxide, clouds and aerosols. The model computes the interactions among the absorption and scattering by clouds, aerosols, molecules and the surface. Finally, fluxes are integrated over the entire longwave spectrum. In this paper, we present our results of optimizing the Goddard longwave radiative transfer scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The optimizations improved the performance of the original Goddard longwave radiative transfer scheme on the Xeon Phi 7120P by a factor of 2.2x. Furthermore, the same optimizations improved the performance of the Goddard longwave radiative transfer scheme on a dual socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 2.1x compared to the original Goddard longwave radiative transfer scheme code.

  11. Clustering Algorithm As A Planning Support Tool For Rural Electrification Optimization

    Directory of Open Access Journals (Sweden)

    Ronaldo Pornillosa Parreno Jr

    2015-08-01

In this study a clustering algorithm was developed to optimize electrification plans by screening and grouping potential customers to be supplied with electricity. The algorithm provides a different approach to the clustering problem which combines conceptual and distance-based clustering algorithms to analyze potential clusters, using a spanning tree with the shortest possible edge weight and creating final cluster trees based on a test of inconsistency for the edges. The clustering criteria consist of a commonly used distance measure with the addition of household information as a basis for the ability-to-pay (ATP) value. The combination of these two parameters resulted in more significant and realistic clusters, since the distance measure alone could not capture the effect of household characteristics in screening the most sensible groupings of households. In addition, the implications of varying geographical features were incorporated in the algorithm by using a routing index across the locations of the households. This new approach of connecting the households in an area was applied in an actual case study of one village or barangay that was not yet energized. The results of the clustering algorithm generated cluster trees which could become the theoretical basis for power utilities to plan the initial network arrangement of electrification. Scenario analysis conducted on the two strategies of clustering the households provided different alternatives for the optimization of the cost of electrification. Furthermore, the benefits associated with the two strategies formulated from the two scenarios were evaluated using the benefit-cost ratio (BC) to determine which is more economically advantageous. The results of the study showed that the clustering algorithm proved to be effective in solving the electrification optimization problem and serves its purpose as a planning support tool which can facilitate electrification in rural areas and achieve cost-effectiveness.
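
    The spanning-tree-plus-inconsistency idea described above can be approximated with off-the-shelf SciPy routines: single-linkage clustering is equivalent to building the minimum spanning tree of the points, and fcluster can cut links that fail an inconsistency test. The sketch below uses random household coordinates and an arbitrary inconsistency threshold; it ignores the ability-to-pay term and the routing index used in the actual algorithm.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
# Hypothetical household locations (x, y) in two loose settlements
households = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(30, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(25, 2)),
])

# Single linkage on Euclidean distances == minimum-spanning-tree clustering
Z = linkage(pdist(households), method="single")

# Cut links that are inconsistent with nearby link heights (threshold arbitrary)
labels = fcluster(Z, t=1.15, criterion="inconsistent", depth=2)
print("number of clusters:", labels.max())
print("households per cluster:", np.bincount(labels)[1:])
```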

  12. Modulated modularity clustering as an exploratory tool for functional genomic inference.

    Directory of Open Access Journals (Sweden)

    Eric A Stone

    2009-05-01

    Full Text Available In recent years, the advent of high-throughput assays, coupled with their diminishing cost, has facilitated a systems approach to biology. As a consequence, massive amounts of data are currently being generated, requiring efficient methodology aimed at the reduction of scale. Whole-genome transcriptional profiling is a standard component of systems-level analyses, and to reduce scale and improve inference clustering genes is common. Since clustering is often the first step toward generating hypotheses, cluster quality is critical. Conversely, because the validation of cluster-driven hypotheses is indirect, it is critical that quality clusters not be obtained by subjective means. In this paper, we present a new objective-based clustering method and demonstrate that it yields high-quality results. Our method, modulated modularity clustering (MMC), seeks community structure in graphical data. MMC modulates the connection strengths of edges in a weighted graph to maximize an objective function (called modularity) that quantifies community structure. The result of this maximization is a clustering through which tightly-connected groups of vertices emerge. Our application is to systems genetics, and we quantitatively compare MMC both to the hierarchical clustering method most commonly employed and to three popular spectral clustering approaches. We further validate MMC through analyses of human and Drosophila melanogaster expression data, demonstrating that the clusters we obtain are biologically meaningful. We show MMC to be effective and suitable for large-scale applications. In light of these features, we advocate MMC as a standard tool for exploration and hypothesis generation.
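    For reference, the modularity objective that MMC-style methods maximize can be evaluated for a given partition as follows; this is the standard weighted Newman-Girvan formula, not the MMC code itself.

```cpp
#include <vector>

// Weighted Newman-Girvan modularity Q of a given partition: the quantity that
// modularity-based clustering seeks to maximize. W is a symmetric weight
// matrix and comm[i] is the community label of node i. Illustrative only.
double modularity(const std::vector<std::vector<double>>& W,
                  const std::vector<int>& comm)
{
    const int n = static_cast<int>(W.size());
    double m2 = 0.0;                         // sum of all weights = 2m
    std::vector<double> deg(n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) { deg[i] += W[i][j]; m2 += W[i][j]; }

    double q = 0.0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (comm[i] == comm[j])
                q += W[i][j] - deg[i] * deg[j] / m2;
    return q / m2;
}
```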

  13. Topological clustering as a tool for planning water quality monitoring in water distribution networks

    DEFF Research Database (Denmark)

    Kirstein, Jonas Kjeld; Albrechtsen, Hans-Jørgen; Rygaard, Martin

    2015-01-01

    (1) identify steady clusters for a part of the network where an actual contamination has occurred; (2) analyze this event by the use of mesh diagrams; and (3) analyze the use of mesh diagrams as a decision support tool for planning water quality monitoring. Initially, the network model was divided...... into strongly and weakly connected clusters for selected time periods and mesh diagrams were used for analysing cluster connections in the Nørrebro district. Here, areas of particular interest for water quality monitoring were identified by including user information about consumption rates and consumers...... particularly sensitive towards water quality deterioration. The analysis revealed sampling locations within steady clusters, which increased the samples' comparability over time. Furthermore, the method provided a simplified overview of water movement in complex distribution networks, and could assist......

  14. On the blind use of statistical tools in the analysis of globular cluster stars

    Science.gov (United States)

    D'Antona, Francesca; Caloi, Vittoria; Tailo, Marco

    2018-04-01

    As with most data analysis methods, the Bayesian method must be handled with care. We show that its application to determine stellar evolution parameters within globular clusters can lead to paradoxical results if used without the necessary precautions. This is a cautionary tale on the use of statistical tools for big data analysis.

  15. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on Xeon Phi 7120P by a factor of 1.3x.

  16. Peac – A set of tools to quickly enable Proof on a cluster

    International Nuclear Information System (INIS)

    Ganis, G; Vala, M

    2012-01-01

    With the advent of the analysis phase of LHC data processing, interest in Proof technology has considerably increased. While setting up a simple Proof cluster for basic usage is reasonably straightforward, exploiting the several new functionalities added in recent times may be complicated. Peac, standing for Proof Enabled Analysis Cluster, is a set of tools aiming to facilitate the setup and management of a Proof cluster. Peac is based on the experience gained by setting up Proof for the Alice analysis facilities. It allows one to easily build and configure Root and the additional software needed on the cluster, and may serve as a distributor of binaries via Xrootd. Peac uses Proof-On-Demand (PoD) for resource management (start and stop of daemons). Finally, Peac sets up and configures dataset management (using the Afdsmgrd daemon), as well as cluster monitoring (machine status and Proof query summaries) using MonAlisa. In this respect, a MonAlisa page has been dedicated to Peac users, so that a cluster managed by Peac can be automatically monitored. In this paper we present and describe the status and main components of Peac and show details about its usage.

  17. Analysis of Intel IA-64 Processor Support for Secure Systems

    National Research Council Canada - National Science Library

    Unalmis, Bugra

    2001-01-01

    .... Systems could be constructed for which serious security threats would be eliminated. This thesis explores the Intel IA-64 processor's hardware support and its relationship to software for building a secure system...

  18. MILC staggered conjugate gradient performance on Intel KNL

    OpenAIRE

    DeTar, Carleton; Doerfler, Douglas; Gottlieb, Steven; Jha, Ashish; Kalamkar, Dhiraj; Li, Ruizi; Toussaint, Doug

    2016-01-01

    We review our work done to optimize the staggered conjugate gradient (CG) algorithm in the MILC code for use with the Intel Knights Landing (KNL) architecture. KNL is the second generation Intel Xeon Phi processor. It is capable of massive thread parallelism, data parallelism, and high on-board memory bandwidth and is being adopted in supercomputing centers for scientific research. The CG solver consumes the majority of time in production running, so we have spent most of our effort on it. ...

  19. [Intel random number generator-based true random number generator].

    Science.gov (United States)

    Huang, Feng; Shen, Hong

    2004-09-01

    To establish a true random number generator on the basis of certain Intel chips, random numbers were acquired by programming in Microsoft Visual C++ 6.0 via register reading from the random number generator (RNG) unit of an Intel 815 chipset-based computer with the Intel Security Driver (ISD). We tested the generator with 500 random numbers using the NIST FIPS 140-1 and χ2 R-Squared tests, and the result showed that the random numbers it generated satisfied the demands of independence and uniform distribution. We also compared the random numbers generated by the Intel RNG-based true random number generator and those from a random number table statistically, using the same amount of 7500 random numbers in the same value domain, which showed that the SD, SE and CV of the Intel RNG-based random number generator were less than those of the random number table. The result of the u test of the two CVs revealed no significant difference between the two methods. The Intel RNG-based random number generator can produce high-quality random numbers with good independence and uniform distribution, and solves some problems with random number tables in the acquisition of random numbers.
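    On current Intel CPUs, the closest analogue to reading the 815 chipset RNG through the Intel Security Driver is the RDRAND instruction; the sketch below uses that modern facility and is not the method of the paper.

```cpp
#include <cstdio>
#include <immintrin.h>

// Read a hardware random number on a modern Intel CPU via the RDRAND
// instruction (compile with -mrdrnd on GCC/Clang). This is a present-day
// analogue of the chipset RNG register reads described in the paper,
// not the paper's own method.
bool hardware_random(unsigned int* out)
{
    for (int retry = 0; retry < 10; ++retry)     // RDRAND may transiently fail
        if (_rdrand32_step(out)) return true;
    return false;
}

int main()
{
    unsigned int r = 0;
    if (hardware_random(&r))
        std::printf("%u\n", r);
    return 0;
}
```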

  20. Cluster tool for in situ processing and comprehensive characterization of thin films at high temperatures.

    Science.gov (United States)

    Wenisch, Robert; Lungwitz, Frank; Hanf, Daniel; Heller, Rene; Zscharschuch, Jens; Hübner, René; von Borany, Johannes; Abrasonis, Gintautas; Gemming, Sibylle; Escobar-Galindo, Ramon; Krause, Matthias

    2018-05-31

    A new cluster tool for in situ real-time processing and depth-resolved compositional, structural and optical characterization of thin films at temperatures from -100 to 800 °C is described. The implemented techniques comprise magnetron sputtering, ion irradiation, Rutherford backscattering spectrometry, Raman spectroscopy and spectroscopic ellipsometry. The capability of the cluster tool is demonstrated for a layer stack MgO / amorphous Si (~60 nm) / Ag (~30 nm), deposited at room temperature and crystallized with partial layer exchange by heating up to 650 °C. Its initial and final composition, stacking order and structure were monitored in situ in real time, and the reaction progress was defined as a function of time and temperature.

  1. Performance of Artificial Intelligence Workloads on the Intel Core 2 Duo Series Desktop Processors

    OpenAIRE

    Abdul Kareem PARCHUR; Kuppangari Krishna RAO; Fazal NOORBASHA; Ram Asaray SINGH

    2010-01-01

    As processor architectures become more advanced, Intel introduced its Intel Core 2 Duo series processors. The performance impact on Intel Core 2 Duo processors is analyzed using SPEC CPU INT 2006 performance numbers. This paper studied the behavior of Artificial Intelligence (AI) benchmarks on Intel Core 2 Duo series processors. Moreover, we estimated the task completion time (TCT) at 1 GHz, 2 GHz and 3 GHz Intel Core 2 Duo series processor frequencies. Our results show the performance scalab...

  2. Revisiting Intel Xeon Phi optimization of Thompson cloud microphysics scheme in Weather Research and Forecasting (WRF) model

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2015-10-01

    The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.

  3. Comparative VME Performance Tests for MEN A20 Intel-L865 and RIO-3 PPC-LynxOS platforms

    CERN Document Server

    Andersen, M; CERN. Geneva. BE Department

    2009-01-01

    This benchmark note presents test results from reading values over VME using different methods and different sizes of data registers, running on two different platforms, Intel-L865 and PPC-LynxOS. We find that the PowerPC is a factor of 3 faster in accessing an array of contiguous VME memory locations. Block transfer and DMA read accesses are also tested and compared with conventional single-access reads.

  4. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance of complex workflows over a large number of nodes and multiple parallel task executions, involving terabytes or petabytes of workflow data or measurement data of the executions. To help identify performance bottlenecks or debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from a genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.

  5. Server consolidation for heterogeneous computer clusters using Colored Petri Nets and CPN Tools

    Directory of Open Access Journals (Sweden)

    Issam Al-Azzoni

    2015-10-01

    Full Text Available In this paper, we present a new approach to server consolidation in heterogeneous computer clusters using Colored Petri Nets (CPNs. Server consolidation aims to reduce energy costs and improve resource utilization by reducing the number of servers necessary to run the existing virtual machines in the cluster. It exploits the emerging technology of live migration which allows migrating virtual machines between servers without stopping their provided services. Server consolidation approaches attempt to find migration plans that aim to minimize the necessary size of the cluster. Our approach finds plans which not only minimize the overall number of used servers, but also minimize the total data migration overhead. The latter objective is not taken into consideration by other approaches and heuristics. We explore the use of CPN Tools in analyzing the state spaces of the CPNs. Since the state space of the CPN model can grow exponentially with the size of the cluster, we examine different techniques to generate and analyze the state space in order to find good plans to server consolidation within acceptable time and computing power.
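    The underlying packing problem can be illustrated with a simple first-fit-decreasing heuristic; unlike the CPN-based plans discussed above, this sketch ignores migration overhead and is purely illustrative.

```cpp
#include <algorithm>
#include <vector>

struct VM     { double cpu; };                      // demand of one virtual machine
struct Server { double capacity; double used = 0.0; };

// First-fit decreasing placement of VMs onto servers; returns how many
// servers end up being used. Migration cost, unlike in the CPN-based
// approach described above, is not modeled here.
int consolidate(std::vector<VM> vms, std::vector<Server>& servers)
{
    std::sort(vms.begin(), vms.end(),
              [](const VM& a, const VM& b) { return a.cpu > b.cpu; });
    for (const VM& vm : vms)
        for (Server& s : servers)
            if (s.used + vm.cpu <= s.capacity) { s.used += vm.cpu; break; }

    int used = 0;
    for (const Server& s : servers)
        if (s.used > 0.0) ++used;
    return used;
}
```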

  6. Cluster analysis as a tool of guests segmentation by the degree of their demand

    Directory of Open Access Journals (Sweden)

    Damijan Mumel

    2002-01-01

    Full Text Available The authors demonstrate the use of cluster analysis in finding out (ascertaining) the homogeneity/heterogeneity of guests as to the degree of their demand. The degree of guests' demand is defined according to the importance of perceived service quality components measured by SERVQUAL, which was adopted and adapted according to the specifics of the health spa industry in Slovenia. The goals of the article are: (a) the identification of the profile of importance of general health spa service quality components, and (b) the identification of groups of guests (segments) according to the degree of their demand in the 1991 research compared with 1999. Cluster analysis serves as a useful tool for guest segmentation since it reveals the existence of important differences in the structure of guests in the year 1991 compared with the year 1999. The results serve as a useful database for management in health spas.

  7. piRNA analysis framework from small RNA-Seq data by a novel cluster prediction tool - PILFER.

    Science.gov (United States)

    Ray, Rishav; Pandey, Priyanka

    2017-12-19

    With the increasing number of studies focusing on PIWI-interacting RNAs (piRNAs), it is now pertinent to develop efficient tools dedicated to piRNA analysis. We have developed a novel cluster prediction tool called PILFER (PIrna cLuster FindER), which can accurately predict piRNA clusters from small RNA sequencing data. PILFER is an open-source, easy-to-use tool, and can be executed even on a personal computer with minimum resources. It uses a sliding-window mechanism, integrating the expression of the reads along with the spatial information to predict the piRNA clusters. We have additionally defined a piRNA analysis pipeline incorporating PILFER to detect and annotate piRNAs and their clusters from raw small RNA sequencing data and implemented it on publicly available data from healthy germline and somatic tissues. We compared PILFER with other existing piRNA cluster prediction tools and found it to be statistically more accurate and superior in many aspects, such as the higher robustness of PILFER clusters and greater memory efficiency. Overall, PILFER provides a fast and accurate solution to piRNA cluster prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
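    A conceptual sketch of a sliding-window cluster scan of the kind described above is shown below; the window length, expression threshold and data layout are assumptions for illustration and do not reproduce the actual PILFER implementation.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Read { long pos; double expr; };   // mapped position and expression level

// Illustrative sliding-window scan: reads sorted by position are reported as
// candidate cluster intervals whenever the summed expression inside a window
// exceeds a threshold. Overlapping candidates would be merged in a real
// pipeline; PILFER combines expression with further spatial criteria.
std::vector<std::pair<long, long>>
find_clusters(const std::vector<Read>& reads, long window, double min_expr)
{
    std::vector<std::pair<long, long>> clusters;
    std::size_t lo = 0;
    double sum = 0.0;
    for (std::size_t hi = 0; hi < reads.size(); ++hi) {
        sum += reads[hi].expr;
        while (reads[hi].pos - reads[lo].pos > window)
            sum -= reads[lo++].expr;                 // shrink window from the left
        if (sum >= min_expr)
            clusters.push_back({reads[lo].pos, reads[hi].pos});
    }
    return clusters;
}
```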

  8. GraphCrunch 2: Software tool for network modeling, alignment and clustering.

    Science.gov (United States)

    Kuchaiev, Oleksii; Stevanović, Aleksandar; Hayes, Wayne; Pržulj, Nataša

    2011-01-19

    Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution, or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarities far larger than any other existing tool. Finally, GraphCrunch 2 implements an

  9. GraphCrunch 2: Software tool for network modeling, alignment and clustering

    Directory of Open Access Journals (Sweden)

    Hayes Wayne

    2011-01-01

    Full Text Available Abstract Background Recent advancements in experimental biotechnology have produced large amounts of protein-protein interaction (PPI) data. The topology of PPI networks is believed to have a strong link to their function. Hence, the abundance of PPI data for many organisms stimulates the development of computational techniques for the modeling, comparison, alignment, and clustering of networks. In addition, finding representative models for PPI networks will improve our understanding of the cell just as a model of gravity has helped us understand planetary motion. To decide if a model is representative, we need quantitative comparisons of model networks to real ones. However, exact network comparison is computationally intractable and therefore several heuristics have been used instead. Some of these heuristics are easily computable "network properties," such as the degree distribution, or the clustering coefficient. An important special case of network comparison is the network alignment problem. Analogous to sequence alignment, this problem asks to find the "best" mapping between regions in two networks. It is expected that network alignment might have as strong an impact on our understanding of biology as sequence alignment has had. Topology-based clustering of nodes in PPI networks is another example of an important network analysis problem that can uncover relationships between interaction patterns and phenotype. Results We introduce the GraphCrunch 2 software tool, which addresses these problems. It is a significant extension of GraphCrunch which implements the most popular random network models and compares them with the data networks with respect to many network properties. Also, GraphCrunch 2 implements the GRAph ALigner algorithm ("GRAAL") for purely topological network alignment. GRAAL can align any pair of networks and exposes large, dense, contiguous regions of topological and functional similarities far larger than any other

  10. Parallelization of particle transport using Intel® TBB

    International Nuclear Information System (INIS)

    Apostolakis, J; Brun, R; Carminati, F; Gheata, A; Wenzel, S; Belogurov, S; Ovcharenko, E

    2014-01-01

    One of the current challenges in HEP computing is the development of particle propagation algorithms capable of efficiently using all performance aspects of modern computing devices. The Geant-Vector project at CERN has recently introduced an approach in this direction. This paper describes the implementation of a similar workflow using the Intel(r) Threading Building Blocks (Intel(r) TBB) library. This approach is intended to overcome the potential bottleneck of having a single dispatcher on many-core architectures and to result in better scalability compared to the initial pthreads-based version.
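    A minimal illustration of the Intel TBB programming model used in such a workflow; the track structure and propagation step below are stand-ins, not the Geant-Vector code.

```cpp
#include <cstddef>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

struct Track { double x, y, z, step; };

// Stand-in for the real transport step applied to one track.
void propagate(Track& t) { t.x += t.step; }

// Process a basket of tracks in parallel with Intel TBB. The library's
// work-stealing scheduler avoids funnelling all work through a single
// dispatcher thread, which can become a bottleneck on many-core hardware.
void process_basket(std::vector<Track>& tracks)
{
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, tracks.size()),
                      [&](const tbb::blocked_range<std::size_t>& r) {
                          for (std::size_t i = r.begin(); i != r.end(); ++i)
                              propagate(tracks[i]);
                      });
}
```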

  11. Intel Legend and CERN would build up high speed Internet

    CERN Multimedia

    2002-01-01

    Intel, Legend and China Education and Research Network jointly announced on the 25th of April that they will be cooperating with each other to build up the new generation high speed internet, over the next three years (1/2 page).

  12. Communication overhead on the Intel iPSC-860 hypercube

    Science.gov (United States)

    Bokhari, Shahid H.

    1990-01-01

    Experiments were conducted on the Intel iPSC-860 hypercube in order to evaluate the overhead of interprocessor communication. It is demonstrated that: (1) contrary to popular belief, the distance between two communicating processors has a significant impact on communication time, (2) edge contention can increase communication time by a factor of more than 7, and (3) node contention has no measurable impact.

  13. Connecting Effective Instruction and Technology. Intel-elebration: Safari.

    Science.gov (United States)

    Burton, Larry D.; Prest, Sharon

    Intel-ebration is an attempt to integrate the following research-based instructional frameworks and strategies: (1) dimensions of learning; (2) multiple intelligences; (3) thematic instruction; (4) cooperative learning; (5) project-based learning; and (6) instructional technology. This paper presents a thematic unit on safari, using the…

  14. Using the Intel Math Kernel Library on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.
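    As a flavour of the library interface, the following is a minimal double-precision matrix multiply through MKL's standard CBLAS entry point; compiler flags and module names on Peregrine itself are site-specific and not shown.

```cpp
#include <vector>
#include <mkl.h>

// C = A * B for small 3x3 row-major matrices using MKL's CBLAS interface.
int main()
{
    const int n = 3;
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n,
                1.0, A.data(), n,
                     B.data(), n,
                0.0, C.data(), n);
    return 0;
}
```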

  15. Internet of Things with Intel Galileo

    CERN Document Server

    de Sousa, Miguel

    2015-01-01

    This book employs an incremental, step-by-step approach to get you familiarized with everything from the basic terms, board components, and development environments to developing real projects. Each project will demonstrate how to use specific board components and tools. Both Galileo and Galileo Gen 2 are covered in this book.

  16. Roofline Analysis in the Intel® Advisor to Deliver Optimized Performance for applications on Intel® Xeon Phi™ Processor

    OpenAIRE

    Koskela, TS; Lobet, M

    2017-01-01

    In this session we show, in two case studies, how the roofline feature of Intel Advisor has been utilized to optimize the performance of kernels of the XGC1 and PICSAR codes in preparation for Intel Knights Landing architecture. The impact of the implemented optimizations and the benefits of using the automatic roofline feature of Intel Advisor to study performance of large applications will be presented. This demonstrates an effective optimization strategy that has enabled these science appl...

  17. Physics development of web-based tools for use in hardware clusters doing lattice physics

    International Nuclear Information System (INIS)

    Dreher, P.; Akers, W.; Chen, J.; Chen, Y.; Watson, C.

    2002-01-01

    Jefferson Lab and MIT are developing a set of web-based tools within the Lattice Hadron Physics Collaboration to allow lattice QCD theorists to treat the computational facilities located at the two sites as a single meta-facility. The prototype Lattice Portal provides researchers the ability to submit jobs to the cluster, browse data caches, and transfer files between cache and off-line storage. The user can view the configuration of the PBS servers and monitor both the status of all batch queues and the jobs in each queue. Work is starting on expanding the present system to include job submissions at the meta-facility level (shared queue), as well as multi-site file transfers and enhanced policy-based data management capabilities

  18. Physics development of web-based tools for use in hardware clusters doing lattice physics

    International Nuclear Information System (INIS)

    Dreher, P.; Akers, Walt; Jian-ping Chen; Chen, Y.; William, A. Watson III

    2001-01-01

    Jefferson Lab and MIT are developing a set of web-based tools within the Lattice Hadron Physics Collaboration to allow lattice QCD theorists to treat the computational facilities located at the two sites as a single meta-facility. The prototype Lattice Portal provides researchers the ability to submit jobs to the cluster, browse data caches, and transfer files between cache and off-line storage. The user can view the configuration of the PBS servers and monitor both the status of all batch queues and the jobs in each queue. Work is starting on expanding the present system to include job submissions at the meta-facility level (shared queue), as well as multi-site file transfers and enhanced policy-based data management capabilities

  19. The concept of cluster-villages as a planning tool in the rural districts of Denmark

    DEFF Research Database (Denmark)

    Laursen, Lea Louise Holst; Møller, Jørgen

    on economies of scale, or the decentralised model based on proximity. In the developments and debate relating to these matters, strategic and visionary planning is back in the municipal arena as the only tool capable of handling the many different challenges facing the municipalities. Between these...... and uses each other's strengths, as well as developing the individual village in addition to the specific potentials of that village. In recent years, rural Denmark has been undergoing a sweeping and very noticeable process of adjustment, and development in municipal service provision plays a particular...... two different positions we see a new possibility for village development, which we call cluster-villages. In order to investigate the potentials and possibilities of the cluster-village concept, the paper will seek to unfold the concept strategically, looking into the benefits of such a concept. Further, the paper seeks...

  20. Physics development of web-based tools for use in hardware clusters doing lattice physics

    Energy Technology Data Exchange (ETDEWEB)

    Dreher, P.; Akers, W.; Chen, J.; Chen, Y.; Watson, C

    2002-03-01

    Jefferson Lab and MIT are developing a set of web-based tools within the Lattice Hadron Physics Collaboration to allow lattice QCD theorists to treat the computational facilities located at the two sites as a single meta-facility. The prototype Lattice Portal provides researchers the ability to submit jobs to the cluster, browse data caches, and transfer files between cache and off-line storage. The user can view the configuration of the PBS servers and monitor both the status of all batch queues and the jobs in each queue. Work is starting on expanding the present system to include job submissions at the meta-facility level (shared queue), as well as multi-site file transfers and enhanced policy-based data management capabilities.

  1. EERA-DTOC Project: Design Tools for Offshore Wind Farm Clusters; Proyecto EERA-DTOC: herramientas para el diseno de clusters de Parques Eolicos Marinos

    Energy Technology Data Exchange (ETDEWEB)

    Palomares, A. M.

    2015-07-01

    In the EERA-DTOC Project an integrated and validated software design tool for the optimization of offshore wind farms and wind farm clusters has been developed. The CIEMAT contribution to this project has changed the view on mesoscale wind forecasting models, which were so far not considered capable of modeling wind-farm-scale phenomena. The ability of the WRF model to simulate the wakes caused by wind turbines on the downwind ones (inter-turbine wakes within a wind farm), as well as the wakes between wind farms within a cluster, has been shown. (Author)

  2. ASPECT: A spectra clustering tool for exploration of large spectral surveys

    Science.gov (United States)

    in der Au, A.; Meusinger, H.; Schalldach, P. F.; Newholm, M.

    2012-11-01

    Context. Analysing the empirical output from large surveys is an important challenge in contemporary science. Difficulties arise, in particular, when the database is huge and the properties of the object types to be selected are poorly constrained a priori. Aims: We present the novel, semi-automated clustering tool ASPECT for analysing voluminous archives of spectra. Methods: The heart of the program is a neural network in the form of a Kohonen self-organizing map. The resulting map is designed as an icon map suitable for the inspection by eye. The visual analysis is supported by the option to blend in individual object properties such as redshift, apparent magnitude, or signal-to-noise ratio. In addition, the package provides several tools for the selection of special spectral types, e.g. local difference maps which reflect the deviations of all spectra from one given input spectrum (real or artificial). Results: ASPECT is able to produce a two-dimensional topological map of a huge number of spectra. The software package enables the user to browse and navigate through a huge data pool and helps them to gain an insight into underlying relationships between the spectra and other physical properties and to get the big picture of the entire data set. We demonstrate the capability of ASPECT by clustering the entire data pool of ~6 × 10^5 spectra from the Data Release 4 of the Sloan Digital Sky Survey (SDSS). To illustrate the results regarding quality and completeness we track objects from existing catalogues of quasars and carbon stars, respectively, and connect the SDSS spectra with morphological information from the GalaxyZoo project. Code is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/547/A115
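    The core of such a Kohonen self-organizing map is a simple competitive update rule; the sketch below shows the standard learning step on flattened spectra and is not taken from the ASPECT code.

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

using Vec = std::vector<double>;

// One Kohonen SOM training step: find the best-matching unit (BMU) for an
// input spectrum and pull nearby map units towards it. 'units' holds the
// weight vector of each map node, 'gridpos' its 2-D coordinate on the map.
void som_step(std::vector<Vec>& units,
              const std::vector<std::pair<int, int>>& gridpos,
              const Vec& spectrum, double lrate, double sigma)
{
    // BMU = unit with the smallest Euclidean distance to the input spectrum.
    std::size_t bmu = 0;
    double best = 1e300;
    for (std::size_t u = 0; u < units.size(); ++u) {
        double d = 0.0;
        for (std::size_t k = 0; k < spectrum.size(); ++k)
            d += (units[u][k] - spectrum[k]) * (units[u][k] - spectrum[k]);
        if (d < best) { best = d; bmu = u; }
    }
    // Gaussian-neighbourhood weighted update of all map units.
    for (std::size_t u = 0; u < units.size(); ++u) {
        const double dx = gridpos[u].first  - gridpos[bmu].first;
        const double dy = gridpos[u].second - gridpos[bmu].second;
        const double h  = std::exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma));
        for (std::size_t k = 0; k < spectrum.size(); ++k)
            units[u][k] += lrate * h * (spectrum[k] - units[u][k]);
    }
}
```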

  3. clusters

    Indian Academy of Sciences (India)

    2017-09-27

    ...clusters were studied using density functional theory (DFT)-GGA of the DMOL3 package. ... In the process of geometric optimization, convergence thresholds were applied...

  4. clusters

    Indian Academy of Sciences (India)

    ...environmental as well as technical problems during fuel gas utilization ... adsorption on some alloys of Pd, namely PdAu and PdAg ... carried out on small neutral and charged Au and Cu clusters ... study of Zanti et al. on Pdn (n = 1–9) clusters.

  5. Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors

    Energy Technology Data Exchange (ETDEWEB)

    Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep

    2014-12-01

    The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.

  6. MILC staggered conjugate gradient performance on Intel KNL

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ruiz [Indiana Univ., Bloomington, IN (United States). Dept. of Physics; Detar, Carleton [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Physics and Astronomy; Doerfler, Douglas W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Gottlieb, Steven [Indiana Univ., Bloomington, IN (United States). Dept. of Physics; Jha, Asish [Intel Corp., Hillsboro, OR (United States). Sofware and Services Group; Kalamkar, Dhiraj [Intel Labs., Bangalore (India). Parallel Computing Lab.; Toussaint, Doug [Univ. of Arizona, Tucson, AZ (United States). Physics Dept.

    2016-11-03

    We review our work done to optimize the staggered conjugate gradient (CG) algorithm in the MILC code for use with the Intel Knights Landing (KNL) architecture. KNL is the second generation Intel Xeon Phi processor. It is capable of massive thread parallelism, data parallelism, and high on-board memory bandwidth and is being adopted in supercomputing centers for scientific research. The CG solver consumes the majority of time in production running, so we have spent most of our effort on it. We compare performance of an MPI+OpenMP baseline version of the MILC code with a version incorporating the QPhiX staggered CG solver, for both one-node and multi-node runs.
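    For reference, the unpreconditioned conjugate gradient iteration that such solvers implement is sketched below for a generic matrix-vector product; in MILC the operator would be the staggered Dirac matrix, which is not reproduced here.

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Plain conjugate gradient for A x = b with a user-supplied matrix-vector
// product; x is assumed to start from zero so the initial residual is b.
// In lattice QCD, 'matvec' would apply the staggered Dirac operator.
void cg(const std::function<void(const Vec&, Vec&)>& matvec,
        const Vec& b, Vec& x, double tol, int maxit)
{
    Vec r = b, p = b, Ap(b.size());
    double rsq = dot(r, r);
    for (int it = 0; it < maxit && std::sqrt(rsq) > tol; ++it) {
        matvec(p, Ap);
        const double alpha = rsq / dot(p, Ap);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        const double rsq_new = dot(r, r);
        const double beta = rsq_new / rsq;
        for (std::size_t i = 0; i < p.size(); ++i) p[i] = r[i] + beta * p[i];
        rsq = rsq_new;
    }
}
```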

  7. Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors

    Science.gov (United States)

    Yi, Hongsuk

    2014-03-01

    Many modern processors are capable of exploiting data-level parallelism through the use of single instruction multiple data (SIMD) execution. The new Intel Xeon Phi coprocessor supports 512-bit vector registers for high performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with the Tersoff potential for covalent-bond solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multi-level parallel computing: we combine tightly coupled thread-level and task-level parallelism with use of the 512-bit vector registers. The simulation results show that the parallel performance of the SIMD implementation on Xeon Phi is clearly superior to that on the x86 CPU architecture.
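    An illustrative vectorized inner loop in the spirit of the paper is sketched below; a simple Lennard-Jones pair term is used for brevity, whereas the Tersoff potential treated in the paper is three-body and considerably more involved.

```cpp
#include <cstddef>
#include <vector>

// Vectorized accumulation of Lennard-Jones pair-force prefactors (force
// divided by distance, as used when resolving force components). The SIMD
// reduction lets wide vector units accumulate partial sums in parallel lanes.
double accumulate_force(const std::vector<double>& r2, double sigma, double eps)
{
    double f = 0.0;
    const double s2 = sigma * sigma;
    #pragma omp simd reduction(+ : f)
    for (std::size_t i = 0; i < r2.size(); ++i) {
        const double sr2 = s2 / r2[i];
        const double sr6 = sr2 * sr2 * sr2;
        f += 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2[i];
    }
    return f;
}
```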

  8. Full cycle trigonometric function on Intel Quartus II Verilog

    Science.gov (United States)

    Mustapha, Muhazam; Zulkarnain, Nur Antasha

    2018-02-01

    This paper discusses an improvement of previous research on hardware-based trigonometric calculations. The tangent function is also implemented to obtain a complete set. The functions have been simulated using Quartus II, and the results are compared to the previous work. The number of bits has also been extended for each trigonometric function. The design is based on RTL due to its resource-efficient nature. At the earlier stage, a technology-independent test bench simulation was conducted on ModelSim due to its convenience in capturing simulation data, so that accuracy information can be obtained. At the second stage, Intel/Altera Quartus II is used to simulate on a technology-dependent platform, particularly the one belonging to Intel/Altera itself. Real data on the number of logic elements used and the propagation delay have also been obtained.

  9. Porting FEASTFLOW to the Intel Xeon Phi: Lessons Learned

    OpenAIRE

    Georgios Goumas

    2014-01-01

    In this paper we report our experiences in porting the FEASTFLOW software infrastructure to the Intel Xeon Phi coprocessor. Our efforts involved both the evaluation of programming models including OpenCL, POSIX threads and OpenMP and typical optimization strategies like parallelization and vectorization. Since the straightforward porting process of the already existing OpenCL version of the code encountered performance problems that require further analysis, we focused our efforts on the impl...

  10. Staggered Dslash Performance on Intel Xeon Phi Architecture

    OpenAIRE

    Li, Ruizi; Gottlieb, Steven

    2014-01-01

    The conjugate gradient (CG) algorithm is among the most essential and time consuming parts of lattice calculations with staggered quarks. We test the performance of CG and dslash, the key step in the CG algorithm, on the Intel Xeon Phi, also known as the Many Integrated Core (MIC) architecture. We try different parallelization strategies using MPI, OpenMP, and the vector processing units (VPUs).

  11. Intel·ligència emocional a maternal

    OpenAIRE

    Missé Cortina, Jordi

    2015-01-01

    Inclusion of emotional intelligence activities in nursery classes A and B to work on the acquisition of values such as self-esteem, respect, tolerance, etc. Practicum for the Psychology program on Educational Psychology.

  12. Intel Many Integrated Core (MIC) architecture optimization strategies for a memory-bound Weather Research and Forecasting (WRF) Goddard microphysics scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Goddard cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The WRF is a widely used weather prediction system in the world, and its development is done collaboratively around the globe. The Goddard microphysics scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Goddard scheme incorporates a large number of improvements. Thus, we have optimized the code of this important part of WRF. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The Intel MIC is capable of executing a full operating system and entire programs, rather than just kernels as GPUs do. The MIC coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved performance of the original code on Xeon Phi 7120P by a factor of 4.7x. Furthermore, the same optimizations improved performance on a dual-socket Intel Xeon E5-2670 system by a factor of 2.8x compared to the original code.

  13. The Whisper Relaxation Sounder onboard Cluster: A Powerful Tool for Space Plasma Diagnosis around the Earth

    International Nuclear Information System (INIS)

    Trotignon, J.G.; Decreau, P.M.E.; Rauch, J.L.; LeGuirriec, E.; Canu, P.; Darrouzet, F.

    2001-01-01

    The WHISPER relaxation sounder onboard the four CLUSTER spacecraft has as its main scientific objectives to monitor the natural waves in the 2 kHz - 80 kHz frequency range and, above all, to determine the total plasma density from the solar wind down to the Earth's plasmasphere. To fulfil these objectives, the WHISPER uses the two long double-sphere antennae of the Electric Field and Wave experiment as transmitting and receiving sensors. In its active working mode, the WHISPER works according to principles that were worked out for topside sounding. A radio wave transmitter sends an almost monochromatic and short wave train. A few milliseconds later, a receiver listens to the surrounding plasma response. Strong and long-lasting echoes are actually received whenever the transmitting frequencies coincide with characteristic plasma frequencies. Provided that these echoes, also called resonances, can be identified, the WHISPER relaxation sounder becomes a reliable and powerful tool for plasma diagnosis. When the transmitter is off, the WHISPER behaves like a passive receiver, allowing natural waves to be monitored. The paper aims mainly at describing the resonance identification process and at highlighting the WHISPER capabilities and performance. (author)
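    In practice, the electron density follows directly from the identified plasma-frequency resonance through the standard relation f_pe[kHz] ≈ 8.98 · sqrt(n_e[cm^-3]); a one-line conversion is sketched below as an illustration.

```cpp
#include <cstdio>

// Electron density from the plasma-frequency resonance identified by a
// relaxation sounder: f_pe[kHz] ~= 8.98 * sqrt(n_e[cm^-3]).
double density_from_fpe_khz(double fpe_khz)
{
    return (fpe_khz / 8.98) * (fpe_khz / 8.98);   // n_e in cm^-3
}

int main()
{
    // Example: a resonance at 30 kHz corresponds to roughly 11 electrons per cm^3.
    std::printf("%.1f cm^-3\n", density_from_fpe_khz(30.0));
    return 0;
}
```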

  14. Performance of Artificial Intelligence Workloads on the Intel Core 2 Duo Series Desktop Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    Full Text Available As processor architectures become more advanced, Intel introduced its Intel Core 2 Duo series processors. The performance impact on Intel Core 2 Duo processors is analyzed using SPEC CPU INT 2006 performance numbers. This paper studied the behavior of Artificial Intelligence (AI) benchmarks on Intel Core 2 Duo series processors. Moreover, we estimated the task completion time (TCT) at 1 GHz, 2 GHz and 3 GHz Intel Core 2 Duo series processor frequencies. Our results show the performance scalability in Intel Core 2 Duo series processors. Even though AI benchmarks have similar execution times, they have dissimilar characteristics, which are identified using principal component analysis and a dendrogram. As the processor frequency increased from 1.8 GHz to 3.167 GHz, the execution time decreased by ~370 sec for AI workloads. In the case of Physics/Quantum Computing programs it was ~940 sec.

  15. Quasi-free experiments as a tool for the study of 6Li cluster structure

    International Nuclear Information System (INIS)

    Lattuada, M.; Riggi, F.; Spitaleri, C.; Vinciguerra, D.

    1984-01-01

    The value of the α-d clustering probability in 6Li deduced from quasi-free experiments may be influenced by the choice of the inter-cluster wave function. Several functional forms usually taken to describe the relative motion of the two clusters have been examined. The effect of the choice of the inter-cluster wave function on the information deduced by analysing quasi-free data in the plane-wave impulse approximation was investigated.

  16. New spectroscopic tool for cluster science: Nonexponential laser fluence dependence of photofragmentation

    International Nuclear Information System (INIS)

    Haberland, H.; Issendorff, B.v.

    1996-01-01

    The photodestruction of Hg7++ and Hg9++ has been measured as a function of photon flux. A polarization-dependent deviation from a purely exponential intensity decrease was observed in both cases. This effect, which in essence is an alignment phenomenon, can be used to characterize dissociating electronic transitions of molecules and clusters. For the clusters studied it is due to a one-dimensional transition dipole moment having a fixed direction within the cluster. The effect is expected to play a role in many photoabsorption experiments where molecule/cluster ionization or fragmentation is studied under high photon fluxes. copyright 1996 The American Physical Society

  17. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    OpenAIRE

    Abdul Kareem PARCHUR; Ram Asaray SINGH

    2012-01-01

    High performance is a critical requirement for all microprocessor manufacturers. The present paper describes the comparison of performance of two main Intel Xeon series processors (Type A: Intel Xeon X5260, X5460, E5450 and L5320 and Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is based on a new family of processors from Intel starting with the Pentium 4 processor. These processors can provide a performance boost for many ke...

  18. OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation

    Science.gov (United States)

    Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun

    2017-11-01

    six different trap symmetries: axially and radially symmetric traps in 3d, circularly symmetric traps in 2d, fully isotropic (spherically symmetric) and fully anisotropic traps in 2d and 3d, as well as 1d traps, where no spatial symmetry is considered. Solution method: We employ the split-step Crank-Nicolson algorithm to discretize the time-dependent GP equation in space and time. The discretized equation is then solved by imaginary- or real-time propagation, employing adequately small space and time steps, to yield the solution of stationary and non-stationary problems, respectively. Reasons for the new version: Previously published Fortran programs [1,2] have now become popular tools [3] for solving the GP equation. These programs have been translated to the C programming language [4] and later extended to the more complex scenario of dipolar atoms [5]. Now virtually all computers have multi-core processors and some have motherboards with more than one physical computer processing unit (CPU), which may increase the number of available CPU cores on a single computer to several tens. The C programs have been adapted to be very fast on such multi-core modern computers using general-purpose graphic processing units (GPGPU) with Nvidia CUDA and computer clusters using Message Passing Interface (MPI) [6]. Nevertheless, previously developed Fortran programs are also commonly used for scientific computation and most of them use a single CPU core at a time in modern multi-core laptops, desktops, and workstations. Unless the Fortran programs are made aware and capable of making efficient use of the available CPU cores, the solution of even a realistic dynamical 1d problem, not to mention the more complicated 2d and 3d problems, could be time consuming using the Fortran programs. Previously, we published auto-parallel Fortran programs [2] suitable for the Intel (but not GNU) compiler for solving the GP equation. Hence, a need for the full OpenMP version of the Fortran programs to

  19. Does the Intel Xeon Phi processor fit HEP workloads?

    Science.gov (United States)

    Nowak, A.; Bitzes, G.; Dotti, A.; Lazzaro, A.; Jarp, S.; Szostek, P.; Valsan, L.; Botezatu, M.; Leduc, J.

    2014-06-01

    This paper summarizes the five years of CERN openlab's efforts focused on the Intel Xeon Phi co-processor, from the time of its inception to public release. We consider the architecture of the device vis a vis the characteristics of HEP software and identify key opportunities for HEP processing, as well as scaling limitations. We report on improvements and speedups linked to parallelization and vectorization on benchmarks involving software frameworks such as Geant4 and ROOT. Finally, we extrapolate current software and hardware trends and project them onto accelerators of the future, with the specifics of offline and online HEP processing in mind.

  20. Does the Intel Xeon Phi processor fit HEP workloads?

    International Nuclear Information System (INIS)

    Nowak, A; Bitzes, G; Dotti, A; Lazzaro, A; Jarp, S; Szostek, P; Valsan, L; Botezatu, M; Leduc, J

    2014-01-01

    This paper summarizes the five years of CERN openlab's efforts focused on the Intel Xeon Phi co-processor, from the time of its inception to public release. We consider the architecture of the device vis a vis the characteristics of HEP software and identify key opportunities for HEP processing, as well as scaling limitations. We report on improvements and speedups linked to parallelization and vectorization on benchmarks involving software frameworks such as Geant4 and ROOT. Finally, we extrapolate current software and hardware trends and project them onto accelerators of the future, with the specifics of offline and online HEP processing in mind.

  1. ICARES: a real-time automated detection tool for clusters of infectious diseases in the Netherlands.

    NARCIS (Netherlands)

    Groeneveld, Geert H; Dalhuijsen, Anton; Kara-Zaïtri, Chakib; Hamilton, Bob; de Waal, Margot W; van Dissel, Jaap T; van Steenbergen, Jim E

    2017-01-01

    Clusters of infectious diseases are frequently detected late. Real-time, detailed information about an evolving cluster and possible associated conditions is essential for local policy makers, travelers planning to visit the area, and the local population. This is currently illustrated in the Zika

  2. The "p"-Median Model as a Tool for Clustering Psychological Data

    Science.gov (United States)

    Kohn, Hans-Friedrich; Steinley, Douglas; Brusco, Michael J.

    2010-01-01

    The "p"-median clustering model represents a combinatorial approach to partition data sets into disjoint, nonhierarchical groups. Object classes are constructed around "exemplars", that is, manifest objects in the data set, with the remaining instances assigned to their closest cluster centers. Effective, state-of-the-art implementations of…

  3. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    Science.gov (United States)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy

  4. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS on Intel Xeon Phi processors

    Directory of Open Access Journals (Sweden)

    H. Wang

    2017-08-01

    Full Text Available The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined
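    The first of the listed optimisations, moving from pure MPI to hybrid MPI+OpenMP, follows the usual pattern sketched below; this is an illustrative C++ rendering of the scheme (GNAQPMS itself is a Fortran code), and the grid size and loop body are placeholders.

```cpp
#include <mpi.h>
#include <vector>

// Hybrid MPI+OpenMP pattern: each MPI rank owns a slab of the global grid,
// and OpenMP threads share the loop over grid columns inside that slab.
int main(int argc, char** argv)
{
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int ncols_global = 1 << 20;
    const int ncols_local  = ncols_global / nranks;   // this rank's slab
    std::vector<double> conc(ncols_local, 1.0);

    #pragma omp parallel for schedule(static)
    for (int c = 0; c < ncols_local; ++c)
        conc[c] *= 0.99;                              // stand-in for chemistry work

    MPI_Finalize();
    return 0;
}
```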

  5. Computer-Based Driving in Dementia Decision Tool With Mail Support: Cluster Randomized Controlled Trial.

    Science.gov (United States)

    Rapoport, Mark J; Zucchero Sarracini, Carla; Kiss, Alex; Lee, Linda; Byszewski, Anna; Seitz, Dallas P; Vrkljan, Brenda; Molnar, Frank; Herrmann, Nathan; Tang-Wai, David F; Frank, Christopher; Henry, Blair; Pimlott, Nicholas; Masellis, Mario; Naglie, Gary

    2018-05-25

    Physicians often find significant challenges in assessing automobile driving in persons with mild cognitive impairment and mild dementia and deciding when to report to transportation administrators. Care must be taken to balance the safety of patients and other road users with potential negative effects of issuing such reports. The aim of this study was to assess whether a computer-based Driving in Dementia Decision Tool (DD-DT) increased appropriate reporting of patients with mild dementia or mild cognitive impairment to transportation administrators. The study used a parallel-group cluster nonblinded randomized controlled trial design to test a multifaceted knowledge translation intervention. The intervention included a computer-based decision support system activated by the physician-user, which provides a recommendation about whether to report patients with mild dementia or mild cognitive impairment to transportation administrators, based on an algorithm derived from earlier work. The intervention also included a mailed educational package and Web-based specialized reporting forms. Specialists and family physicians with expertise in dementia or care of the elderly were stratified by sex and randomized to either use the DD-DT or a control version of the tool that required identical data input as the intervention group, but instead generated a generic reminder about the reporting legislation in Ontario, Canada. The trial ran from September 9, 2014 to January 29, 2016, and the primary outcome was the number of reports made to the transportation administrators concordant with the algorithm. A total of 69 participating physicians were randomized, and 36 of these used the DD-DT; 20 of the 35 randomized to the intervention group used DD-DT with 114 patients, and 16 of the 34 randomized to the control group used it with 103 patients. The proportion of all assessed patients reported to the transportation administrators concordant with recommendation did not differ

  6. Exploring synchrotron radiation capabilities: The ALS-Intel CRADA

    International Nuclear Information System (INIS)

    Gozzo, F.; Cossy-Favre, A.; Padmore, H.

    1997-01-01

    Synchrotron radiation spectroscopy and spectromicroscopy were applied, at the Advanced Light Source, to the analysis of materials and problems of interest to the commercial semiconductor industry. The authors discuss some of the results obtained at the ALS using existing capabilities, in particular the small spot ultra-ESCA instrument on beamline 7.0 and the AMS (Applied Material Science) endstation on beamline 9.3.2. The continuing trend towards smaller feature size and increased performance for semiconductor components has driven the semiconductor industry to invest in the development of sophisticated and complex instrumentation for the characterization of microstructures. Among the crucial milestones established by the Semiconductor Industry Association are the needs for high quality, defect free and extremely clean silicon wafers, very thin gate oxides, lithographies near 0.1 micron and advanced material interconnect structures. The requirements of future generations cannot be met with current industrial technologies. The purpose of the ALS-Intel CRADA (Cooperative Research And Development Agreement) is to explore, compare and improve the utility of synchrotron-based techniques for practical analysis of substrates of interest to semiconductor chip manufacturing. The first phase of the CRADA project consisted in exploring existing ALS capabilities and techniques on some problems of interest. Some of the preliminary results obtained on Intel samples are discussed here

  7. Evaluation of the Intel Westmere-EX server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2011-01-01

    One year after the arrival of the Intel Xeon 7500 systems (“Nehalem-EX”), CERN openlab is presenting a set of benchmark results obtained when running on the new Xeon E7-4870 Processors, representing the “Westmere-EX” family. A modern 4-socket, 40-core system is confronted with the previous generation of expandable (“EX”) platforms, represented by a 4-socket, 32-core Intel Xeon X7560 based system – both being “top of the line” systems. Benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores via Symmetric MultiThreading (SMT), the cache sizes available, the configured memory topology, as well as the power configuration if throughput per watt is to be measured. As in previous activities, we have tried to do a good job of comparing like with like. In a “top of the line” comparison based on the HEPSPEC06 benchmark, the “We...

  8. Global synchronization algorithms for the Intel iPSC/860

    Science.gov (United States)

    Seidel, Steven R.; Davis, Mark A.

    1992-01-01

    In a distributed memory multicomputer that has no global clock, global processor synchronization can only be achieved through software. Global synchronization algorithms are used in tridiagonal systems solvers, CFD codes, sequence comparison algorithms, and sorting algorithms. They are also useful for event simulation, debugging, and for solving mutual exclusion problems. For the Intel iPSC/860 in particular, global synchronization can be used to ensure the most effective use of the communication network for operations such as the shift, where each processor in a one-dimensional array or ring concurrently sends a message to its right (or left) neighbor. Three global synchronization algorithms are considered for the iPSC/860: the gsync() primitive provided by Intel, the PICL primitive sync0(), and a new recursive doubling synchronization (RDS) algorithm. The performance of these algorithms is compared to the performance predicted by communication models of both the long and forced message protocols. Measurements of the cost of shift operations preceded by global synchronization show that the RDS algorithm always synchronizes the nodes more precisely and costs only slightly more than the other two algorithms.
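    The recursive doubling idea is simple enough to sketch generically: in round k, each node exchanges a short token with the node whose identifier differs in bit k, so all nodes are synchronized after log2(p) rounds. The MPI sketch below assumes a power-of-two node count and is not the iPSC/860 implementation itself:

        #include <mpi.h>

        /* Generic recursive doubling synchronization (RDS): after each round the
           set of mutually synchronized nodes doubles. Assumes a power-of-two
           number of ranks. */
        void rds_barrier(MPI_Comm comm)
        {
            int rank, size, token = 0;
            MPI_Comm_rank(comm, &rank);
            MPI_Comm_size(comm, &size);
            for (int mask = 1; mask < size; mask <<= 1) {
                int partner = rank ^ mask;
                MPI_Sendrecv(&token, 1, MPI_INT, partner, 0,
                             &token, 1, MPI_INT, partner, 0,
                             comm, MPI_STATUS_IGNORE);
            }
        }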

  9. Homo-FRET imaging as a tool to quantify protein and lipid clustering.

    Science.gov (United States)

    Bader, Arjen N; Hoetzl, Sandra; Hofman, Erik G; Voortman, Jarno; van Bergen en Henegouwen, Paul M P; van Meer, Gerrit; Gerritsen, Hans C

    2011-02-25

    Homo-FRET, Förster resonance energy transfer between identical fluorophores, can be conveniently measured by observing its effect on the fluorescence anisotropy. This review aims to summarize the possibilities of fluorescence anisotropy imaging techniques to investigate clustering of identical proteins and lipids. Homo-FRET imaging has the ability to determine distances between fluorophores. In addition it can be employed to quantify cluster sizes as well as cluster size distributions. The interpretation of homo-FRET signals is complicated by the fact that both the mutual orientations of the fluorophores and the number of fluorophores per cluster affect the fluorescence anisotropy in a similar way. The properties of the fluorescence probes are very important. Taking these properties into account is critical for the correct interpretation of homo-FRET signals in protein- and lipid-clustering studies. This is exemplified by studies on the clustering of the lipid raft markers GPI and K-ras, as well as for EGF receptor clustering in the plasma membrane. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
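    For reference, homo-FRET is read out through the steady-state fluorescence anisotropy; a standard textbook definition (not specific to this review) in terms of the polarized emission components is

        r = \frac{I_{\parallel} - I_{\perp}}{I_{\parallel} + 2\,I_{\perp}},

    and energy migration between identical fluorophores in a cluster lowers r because the accepting fluorophore's emission dipole is, on average, rotated with respect to the initially excited one.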

  10. Roofline Analysis in the Intel® Advisor to Deliver Optimized Performance for applications on Intel® Xeon Phi™ Processor

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Tuomas S.; Lobet, Mathieu; Deslippe, Jack; Matveev, Zakhar

    2017-05-23

    In this session we show, in two case studies, how the roofline feature of Intel Advisor has been utilized to optimize the performance of kernels of the XGC1 and PICSAR codes in preparation for Intel Knights Landing architecture. The impact of the implemented optimizations and the benefits of using the automatic roofline feature of Intel Advisor to study performance of large applications will be presented. This demonstrates an effective optimization strategy that has enabled these science applications to achieve up to 4.6 times speed-up and prepare for future exascale architectures. Goal/Relevance of Session: The roofline model [1,2] is a powerful tool for analyzing the performance of applications with respect to the theoretical peak achievable on a given computer architecture. It allows one to graphically represent the performance of an application in terms of operational intensity, i.e. the ratio of flops performed and bytes moved from memory, in order to guide optimization efforts. Given the scale and complexity of modern science applications, it can often be a tedious task for the user to perform the analysis on the level of functions or loops to identify where performance gains can be made. With new Intel tools, it is now possible to automate this task, as well as base the estimates of peak performance on measurements rather than vendor specifications. The goal of this session is to demonstrate how the roofline feature of Intel Advisor can be used to balance memory vs. computation related optimization efforts and effectively identify performance bottlenecks. A series of typical optimization techniques: cache blocking, structure refactoring, data alignment, and vectorization illustrated by the kernel cases will be addressed. Description of the codes: XGC1: The XGC1 code [3] is a magnetic fusion Particle-In-Cell code that uses an unstructured mesh for its Poisson solver that allows it to accurately resolve the edge plasma of a magnetic fusion device. After
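    The roofline bound itself is simple arithmetic: attainable performance is the minimum of the machine's peak compute rate and the product of operational intensity and memory bandwidth. A small sketch of that calculation, with placeholder machine numbers rather than measured or vendor figures:

        #include <stdio.h>

        /* Roofline bound in GFLOP/s: min(peak, intensity * bandwidth), where
           operational intensity = flops executed / bytes moved from memory. */
        double roofline_bound(double flops, double bytes,
                              double peak_gflops, double bw_gbytes_s)
        {
            double intensity = flops / bytes;            /* FLOP per byte  */
            double mem_bound = intensity * bw_gbytes_s;  /* memory ceiling */
            return mem_bound < peak_gflops ? mem_bound : peak_gflops;
        }

        int main(void)
        {
            /* Placeholder values: 0.5 FLOP/byte on a 3000 GFLOP/s, 400 GB/s machine. */
            printf("bound = %.1f GFLOP/s\n", roofline_bound(8.0, 16.0, 3000.0, 400.0));
            return 0;
        }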

  11. X ray emission: a tool and a probe for laser - clusters interaction

    International Nuclear Information System (INIS)

    Prigent, Ch.

    2004-12-01

    In intense laser-cluster interaction, the experimental results show a strong energetic coupling between radiation and matter. We have measured absolute X-ray yields and charge state distributions under well controlled conditions as a function of the physical parameters governing the interaction, namely laser intensity, pulse duration, wavelength or polarization state of the laser light, and the size and species of the clusters (Ar, Kr, Xe). We have highlighted, for the first time, a very low intensity threshold in the X-ray production (∼ 2×10^14 W/cm^2 for a pulse duration of 300 fs), which can result from an effect of the dynamical polarisation of clusters in an intense electric field. A weak dependence of the absolute X-ray yields on the wavelength (400 nm / 800 nm) has been found. Moreover, we have observed a saturation of the X-ray emission probability below a critical cluster size. (author)

  12. Automated Parallel Computing Tools for Multicore Machines and Clusters, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to improve productivity of high performance computing for applications on multicore computers and clusters. These machines built from one or more chips...

  13. 75 FR 48338 - Intel Corporation; Analysis of Proposed Consent Order to Aid Public Comment

    Science.gov (United States)

    2010-08-10

    ... product road maps, its compilers, and product benchmarking (Sections VI, VII, and VIII). The Proposed... alleges that Intel's failure to fully disclose the changes it made to its compilers and libraries... benchmarking organizations the effects of its compiler redesign on non-Intel CPUs. Several benchmarking...

  14. Analysis OpenMP performance of AMD and Intel architecture for breaking waves simulation using MPS

    Science.gov (United States)

    Alamsyah, M. N. A.; Utomo, A.; Gunawan, P. H.

    2018-03-01

    Simulation of breaking waves by using the Navier-Stokes equation via the moving particle semi-implicit method (MPS) over a closed domain is given. The results show that parallel computing on a multicore architecture using the OpenMP platform can reduce the computational time to almost half of the serial time. Here, a comparison of two computer architectures (AMD and Intel) is performed. The results show that the Intel architecture performs better than the AMD architecture in CPU time. However, in efficiency, the computer with the AMD architecture is slightly higher than the Intel one. For the simulation with 1512 particles, the CPU times using Intel and AMD are 12662.47 and 28282.30, respectively. Moreover, for the efficiency with the same number of particles, AMD obtains 50.09 % and Intel up to 49.42 %.
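    The efficiency figures quoted here follow the usual definitions, speedup = serial time / parallel time and efficiency = speedup / number of threads; a tiny sketch of that arithmetic with placeholder timings (not the paper's measurements):

        #include <stdio.h>

        int main(void)
        {
            double t_serial = 100.0, t_parallel = 52.0;   /* placeholder timings */
            int threads = 2;                              /* placeholder count   */
            double speedup = t_serial / t_parallel;       /* ~1.92               */
            double efficiency = speedup / threads;        /* ~0.96, i.e. ~96 %   */
            printf("speedup %.2f, efficiency %.1f %%\n", speedup, 100.0 * efficiency);
            return 0;
        }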

  15. Photometric redshifts as a tool for studying the Coma cluster galaxy populations

    Science.gov (United States)

    Adami, C.; Ilbert, O.; Pelló, R.; Cuillandre, J. C.; Durret, F.; Mazure, A.; Picat, J. P.; Ulmer, M. P.

    2008-12-01

    Aims: We apply photometric redshift techniques to an investigation of the Coma cluster galaxy luminosity function (GLF) at faint magnitudes, in particular in the u* band where basically no studies are presently available at these magnitudes. Methods: Cluster members were selected based on probability distribution function from photometric redshift calculations applied to deep u^*, B, V, R, I images covering a region of almost 1 deg2 (completeness limit R ~ 24). In the area covered only by the u* image, the GLF was also derived after a statistical background subtraction. Results: Global and local GLFs in the B, V, R, and I bands obtained with photometric redshift selection are consistent with our previous results based on a statistical background subtraction. The GLF in the u* band shows an increase in the faint end slope towards the outer regions of the cluster. The analysis of the multicolor type spatial distribution reveals that late type galaxies are distributed in clumps in the cluster outskirts, where X-ray substructures are also detected and where the GLF in the u* band is steeper. Conclusions: We can reproduce the GLFs computed with classical statistical subtraction methods by applying a photometric redshift technique. The u* GLF slope is steeper in the cluster outskirts, varying from α ~ -1 in the cluster center to α ~ -2 in the cluster periphery. The concentrations of faint late type galaxies in the cluster outskirts could explain these very steep slopes, assuming a short burst of star formation in these galaxies when entering the cluster. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is also partly based on data products produced at

  16. Cost/Performance Ratio Achieved by Using a Commodity-Based Cluster

    Science.gov (United States)

    Lopez, Isaac

    2001-01-01

    Researchers at the NASA Glenn Research Center acquired a commodity cluster based on Intel Corporation processors to compare its performance with a traditional UNIX cluster in the execution of aeropropulsion applications. Since the cost differential of the clusters was significant, a cost/performance ratio was calculated. After executing a propulsion application on both clusters, the researchers demonstrated a 9.4 cost/performance ratio in favor of the Intel-based cluster. These researchers utilize the Aeroshark cluster as one of the primary testbeds for developing NPSS parallel application codes and system software. The Aeroshark cluster provides 64 Intel Pentium II 400-MHz processors, housed in 32 nodes. Recently, APNASA - a code developed by a Government/industry team for the design and analysis of turbomachinery systems - was used for a simulation on Glenn's Aeroshark cluster.

  17. Photo fragmentation dynamics of small argon clusters and biological molecules: new tools by trapping and vectorial correlation

    International Nuclear Information System (INIS)

    Lepere, V.

    2006-09-01

    The present work concerns the building of a complex set-up whose aim is the investigation of the photo-fragmentation of ionised clusters and biological molecules. This new tool is based on the association of several techniques. Two ion sources are available: clusters produced in a supersonic beam are ionised by 70 eV electrons, while ions of biological interest are produced in an 'electro-spray'. Ro-vibrational cooling is achieved in a 'Zajfman' electrostatic ion trap. The lifetime of ions can also be measured using the trap. Two types of lasers are used to excite the ionised species: the femtosecond laser available at the ELYSE facilities and a nanosecond laser. Both lasers have a repetition rate of 1 kHz. The neutral and ionised fragments are detected in coincidence using a sophisticated detection system allowing the time and localisation of the various fragments to be determined. With such a tool, I was able to investigate in detail the fragmentation dynamics of ionised clusters and bio-molecules. The first experiments deal with the measurement of the lifetime of the Ar2+ dimer II(1/2)u metastable state. The relative population of this state was also determined. The Ar2+ and Ar3+ photo-fragmentation was then studied and the electronic transitions responsible for their dissociation identified. The detailed analysis of our data allowed us to distinguish the various fragmentation mechanisms. Finally, a preliminary investigation of the protonated tryptamine fragmentation is presented. (author)

  18. 75 FR 21353 - Intel Corporation, Fab 20 Division, Including On-Site Leased Workers From Volt Technical...

    Science.gov (United States)

    2010-04-23

    ... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-73,642] Intel Corporation, Fab 20... of Intel Corporation, Fab 20 Division, including on-site leased workers of Volt Technical Resources... Precision, Inc. were employed on-site at the Hillsboro, Oregon location of Intel Corporation, Fab 20...

  19. Clustering of Cochlear Oscillations in Frequency Plateaus as a Tool to Investigate SOAE Generation

    DEFF Research Database (Denmark)

    Epp, Bastian; Wit, Hero; van Dijk, Pim

    2016-01-01

    of coupled oscillators (OAM) [7] are also found in a transmission line model (TLM) which is able to generate realistic SOAEs [2] and if these frequency plateaus can be used to explain the formation of SOAEs. The simulations showed a clustering of oscillators along the simulated basilar membrane. Both the OAM

  20. Homo-FRET Imaging as a tool to quantify protein and lipid clustering

    NARCIS (Netherlands)

    Bader, A.N.; Hoetzl, S.; Hofman, E.G.; Voortman, J.; van Bergen en Henegouwen, P.M.P.; van Meer, G.; Gerritsen, H.C.

    2010-01-01

    Homo-FRET, Förster resonance energy transfer between identical fluorophores, can be conveniently measured by observing its effect on the fluorescence anisotropy. This review aims to summarize the possibilities of fluorescence anisotropy imaging techniques to investigate clustering of identical

  1. Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States); Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-04-20

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable code on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the needs for hybrid computing using CPUs and FPGAs are increasing. It can also significantly reduce the hardware development time as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the OpenCL AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound to the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.

  2. The Multimorbidity Cluster Analysis Tool: Identifying Combinations and Permutations of Multiple Chronic Diseases Using a Record-Level Computational Analysis

    Directory of Open Access Journals (Sweden)

    Kathryn Nicholson

    2017-12-01

    Full Text Available Introduction: Multimorbidity, or the co-occurrence of multiple chronic health conditions within an individual, is an increasingly dominant presence and burden in modern health care systems. To fully capture its complexity, further research is needed to uncover the patterns and consequences of these co-occurring health states. As such, the Multimorbidity Cluster Analysis Tool and the accompanying Multimorbidity Cluster Analysis Toolkit have been created to allow researchers to identify distinct clusters that exist within a sample of participants or patients living with multimorbidity. Development: The Tool and Toolkit were developed at Western University in London, Ontario, Canada. This open-access computational program (JAVA code and executable file) was developed and tested to support an analysis of thousands of individual records and up to 100 disease diagnoses or categories. Application: The computational program can be adapted to the methodological elements of a research project, including type of data, type of chronic disease reporting, measurement of multimorbidity, sample size and research setting. The computational program will identify all existing, and mutually exclusive, combinations and permutations within the dataset. An application of this computational program is provided as an example, in which more than 75,000 individual records and 20 chronic disease categories resulted in the detection of 10,411 unique combinations and 24,647 unique permutations among female and male patients. Discussion: The Tool and Toolkit are now available for use by researchers interested in exploring the complexities of multimorbidity. Their careful use, and the comparison between results, will be valuable additions to the nuanced understanding of multimorbidity.
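    The core of such a record-level analysis is enumerating which disease combinations actually occur. One compact way to do this, sketched below with illustrative field names rather than the Tool's actual data model, is to encode each patient's diagnoses as a bit mask, so that identical masks correspond to identical combinations and can simply be counted:

        #include <stdio.h>
        #include <stdint.h>

        #define NDISEASE 20   /* illustrative number of chronic disease categories */

        /* Encode one patient's diagnoses (flags[i] != 0 means category i present)
           as a bit mask; two patients share a combination iff their masks match. */
        uint32_t combination_mask(const int flags[NDISEASE])
        {
            uint32_t mask = 0;
            for (int i = 0; i < NDISEASE; ++i)
                if (flags[i])
                    mask |= (uint32_t)1 << i;
            return mask;
        }

        int main(void)
        {
            int patient[NDISEASE] = {1, 0, 1};                 /* categories 0 and 2 */
            printf("mask = %u\n", combination_mask(patient));  /* prints 5 */
            return 0;
        }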

  3. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2012-08-01

    Full Text Available High performance is a critical requirement for all microprocessor manufacturers. The present paper describes the comparison of performance in two main Intel Xeon series processors (Type A: Intel Xeon X5260, X5460, E5450 and L5320 and Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is implemented on the basis of a new family of processors from Intel starting with the Pentium 4 processor. These processors can provide a performance boost for many key application areas in the modern generation. The scaling of performance in the two major series of Intel Xeon processors (Type A: Intel Xeon X5260, X5460, E5450 and L5320 and Type B: Intel Xeon X5140, 5130, 5120 and E5310) has been analyzed using the performance numbers of 12 CPU2006 integer benchmarks, which exhibit significant differences in performance. The results and analysis can be used by performance engineers, scientists and developers to better understand the performance scaling in modern generation processors.

  4. A comparison of SuperLU solvers on the intel MIC architecture

    Science.gov (United States)

    Tuncel, Mehmet; Duran, Ahmet; Celebi, M. Serdar; Akaydin, Bora; Topkaya, Figen O.

    2016-10-01

    In many science and engineering applications, problems may result in solving a sparse linear system AX=B. For example, SuperLU_MCDT, a linear solver, was used for the large penta-diagonal matrices for 2D problems and hepta-diagonal matrices for 3D problems coming from an incompressible blood flow simulation (see [1]). It is important to test the status and potential improvements of state-of-the-art solvers on new technologies. In this work, sequential, multithreaded and distributed versions of SuperLU solvers (see [2]) are examined on the Intel Xeon Phi coprocessors using the offload programming model at the EURORA cluster of CINECA in Italy. We consider a portfolio of test matrices containing patterned matrices from UFMM ([3]) and randomly located matrices. This architecture can benefit from high parallelism and large vectors. We find that the sequential SuperLU benefited from up to a 45 % performance improvement from the offload programming, depending on the sparse matrix type and the size of transferred and processed data.

  5. Educational clusters as a tool ofpublic policy on the market of educational services

    Directory of Open Access Journals (Sweden)

    M. I. Vorona

    2016-08-01

    Due to this, the innovative educational cluster has been defined as a voluntary association of geographically close, interacting entities: educational institutions, government, the banking and private sectors, and innovative enterprises/organizations and infrastructure. Such interaction is characterized by the production of competitive educational, cultural and social services, and by the availability of an agreed development strategy aimed at the interests of each participant and of the region in which the cluster is localized.

  6. Parallel variable selection of molecular dynamics clusters as a tool for calculation of spectroscopic properties

    Czech Academy of Sciences Publication Activity Database

    Kessler, Jiří; Dračínský, Martin; Bouř, Petr

    2013-01-01

    Vol. 34, No. 5 (2013), pp. 366-371 ISSN 0192-8651 R&D Projects: GA ČR GAP208/11/0105; GA MŠk(CZ) LH11033 Grant - others: GA MŠk(CZ) LM2010005 Institutional support: RVO:61388963 Keywords: molecular dynamics * clusters * density functional theory * Raman optical activity * NMR Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 3.601, year: 2013

  7. Cluster-transfer reactions with radioactive beams: a spectroscopic tool for neutron-rich nuclei

    CERN Document Server

    AUTHOR|(CDS)2086156; Raabe, Riccardo; Bracco, Angela

    In this thesis work, an exploratory experiment to investigate cluster-transfer reactions with radioactive beams in inverse kinematics is presented. The aim of the experiment was to test the potential of cluster-transfer reactions at the Coulomb barrier as a possible means to perform γ spectroscopy studies of exotic neutron-rich nuclei at medium-high energies and spins. The experiment was performed at ISOLDE (CERN), employing the heavy-ion reaction 98Rb + 7Li at 2.85 MeV/A. Cluster-transfer reaction channels were studied through particle-γ coincidence measurements, using the MINIBALL Ge array coupled to the charged-particle Si detectors T-REX. Sr, Y and Zr neutron-rich nuclei with A ≈ 100 were populated by either triton or α transfer from 7Li to the beam nuclei, and the emitted complementary charged fragment was detected in coincidence with the γ cascade of the residues, after the evaporation of a few neutrons. The measured γ spectra were studied in detail and t...

  8. Performance of a plasma fluid code on the Intel parallel computers

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Drake, J.B.; Leboeuf, J.N.; Liewer, P.

    1992-01-01

    One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 16. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel σ machine gives an improvement factor close to 64 over the single-processor CRAY-2

  9. Performance of a plasma fluid code on the Intel parallel computers

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Drake, J.B.; Leboeuf, J.N.; Liewer, P.

    1992-01-01

    One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 16. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel (sigma) machine gives an improvement factor close to 64 over the single-processor CRAY-2. 12 refs

  10. Performance of a plasma fluid code on the Intel parallel computers

    Science.gov (United States)

    Lynch, V. E.; Carreras, B. A.; Drake, J. B.; Leboeuf, J. N.; Liewer, P.

    1992-01-01

    One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 16. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel (sigma) machine gives an improvement factor close to 64 over the single-processor CRAY-2.

  11. Computation cluster for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S.

    2010-01-01

    Two computation clusters based on the Rocks Clusters 5.1 Linux distribution with Intel Core Duo and Intel Core Quad based computers were built at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was made on a hardware and software basis. Hardware cluster parameters, such as the size of the memory, network speed, CPU speed, number of processors per computation and number of processors in one computer, were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used in finding the weighting functions of neutron ex-core detectors of VVER-440. (authors)

  12. Computation cluster for Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S. [Dep. Of Nuclear Physics and Technology, Faculty of Electrical Engineering and Information, Technology, Slovak Technical University, Ilkovicova 3, 81219 Bratislava (Slovakia)

    2010-07-01

    Two computation clusters based on the Rocks Clusters 5.1 Linux distribution with Intel Core Duo and Intel Core Quad based computers were built at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in Nuclear reactor core simulations. Optimization for computation speed was made on a hardware and software basis. Hardware cluster parameters, such as the size of the memory, network speed, CPU speed, number of processors per computation and number of processors in one computer, were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used in finding the weighting functions of neutron ex-core detectors of VVER-440. (authors)

  13. Applications Performance on NAS Intel Paragon XP/S - 15#

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)

    1994-01-01

    The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February 1993. The i860 XP microprocessor, with an integrated floating point unit and operating in dual-instruction mode, gives a peak performance of 75 million floating point operations (MFLOPS) per second for 64 bit floating point arithmetic. It is used in the Paragon XP/S-15, which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes and its peak performance is 15.6 GFLOPS. Here, we will report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2 and 3, both assembly-coded and Fortran coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single node one-dimensional FFT, a distributed two-dimensional FFT and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compare it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: a. Impact of the operating system: Intel currently uses as a default the OSF/1 AD operating system from the Open Software Foundation. The paging of the Open Software Foundation (OSF) server at 22 MB, done to make more memory available for the application, degrades the performance. We found that when the limit of 26 MB per node out of the 32 MB available is reached, the application is paged out of main memory using virtual memory. When the application starts paging, the performance is considerably reduced. We found that dynamic memory allocation can help application performance under certain circumstances. b. Impact of data cache on the i860/XP: We measured the performance of the BLAS both assembly coded and Fortran

  14. Single event effect testing of the Intel 80386 family and the 80486 microprocessor

    International Nuclear Information System (INIS)

    Moran, A.; LaBel, K.; Gates, M.; Seidleck, C.; McGraw, R.; Broida, M.; Firer, J.; Sprehn, S.

    1996-01-01

    The authors present single event effect test results for the Intel 80386 microprocessor, the 80387 coprocessor, the 82380 peripheral device, and the 80486 microprocessor. Both single event upset and latchup conditions were monitored.

  15. CAMSHIFT Tracker Design Experiments With Intel OpenCV and SAI

    National Research Council Canada - National Science Library

    Francois, Alexandre R

    2004-01-01

    ... (including multi-modal) systems, must be specifically addressed. This report describes design and implementation experiments for CAMSHIFT-based tracking systems using Intel's Open Computer Vision library and SAI...

  16. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    Science.gov (United States)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  17. Performance optimization of Qbox and WEST on Intel Knights Landing

    Science.gov (United States)

    Zheng, Huihuo; Knight, Christopher; Galli, Giulia; Govoni, Marco; Gygi, Francois

    We present the optimization of the electronic structure codes Qbox and WEST targeting the Intel® Xeon Phi™ processor, codenamed Knights Landing (KNL). Qbox is an ab-initio molecular dynamics code based on plane wave density functional theory (DFT) and WEST is a post-DFT code for excited state calculations within many-body perturbation theory. Both Qbox and WEST employ highly scalable algorithms which enable accurate large-scale electronic structure calculations on leadership class supercomputer platforms beyond 100,000 cores, such as Mira and Theta at the Argonne Leadership Computing Facility. In this work, features of the KNL architecture (e.g. hierarchical memory) are explored to achieve higher performance in key algorithms of the Qbox and WEST codes and to develop a road-map for further development targeting next-generation computing architectures. In particular, the optimizations of the Qbox and WEST codes on the KNL platform will target efficient large-scale electronic structure calculations of nanostructured materials exhibiting complex structures and prediction of their electronic and thermal properties for use in solar and thermal energy conversion devices. This work was supported by MICCoM, as part of the Comp. Mats. Sci. Program funded by the U.S. DOE, Office of Sci., BES, MSE Division. This research used resources of the ALCF, which is a DOE Office of Sci. User Facility under Contract DE-AC02-06CH11357.
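    One common way codes exploit KNL's hierarchical memory is to place bandwidth-critical arrays in the on-package MCDRAM through the memkind/hbwmalloc interface. The sketch below is a generic illustration of that idea, not how Qbox or WEST actually manage memory:

        #include <stdlib.h>
        #include <hbwmalloc.h>   /* memkind's high-bandwidth-memory allocator */

        /* Allocate a work array in MCDRAM when high-bandwidth memory is present,
           falling back to ordinary DDR otherwise (generic illustration only). */
        static double *alloc_fast(size_t n)
        {
            if (hbw_check_available() == 0)      /* 0 means HBM is available */
                return (double *)hbw_malloc(n * sizeof(double));
            return (double *)malloc(n * sizeof(double));
        }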

  18. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00014247; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea

    2017-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with detai...

  19. Multi-threaded ATLAS Simulation on Intel Knights Landing Processors

    CERN Document Server

    Farrell, Steven; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles

    2016-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), will be delivered to its users in two phases with the first phase online now and the second phase expected in mid-2016. Cori Phase 2 will be based on the KNL architecture and will contain over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a great use-case for the KNL architecture and supercomputers like Cori. Simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this presentation we will give an overview of the ATLAS simulation application with details on its multi-thr...

  20. Evaluation of the Intel Sandy Bridge-EP server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2012-01-01

    In this paper we report on a set of benchmark results recently obtained by CERN openlab when comparing an 8-core “Sandy Bridge-EP” processor with Intel’s previous microarchitecture, the “Westmere-EP”. The Intel marketing names for these processors are “Xeon E5-2600 processor series” and “Xeon 5600 processor series”, respectively. Both processors are produced in a 32nm process, and both platforms are dual-socket servers. Multiple benchmarks were used to get a good understanding of the performance of the new processor. We used both industry-standard benchmarks, such as SPEC2006, and specific High Energy Physics benchmarks, representing both simulation of physics detectors and data analysis of physics events. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores ...

  1. Evaluation of the Intel Nehalem-EX server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2010-01-01

    In this paper we report on a set of benchmark results recently obtained by the CERN openlab by comparing the 4-socket, 32-core Intel Xeon X7560 server with the previous generation 4-socket server, based on the Xeon X7460 processor. The Xeon X7560 processor represents a major change in many respects, especially the memory sub-system, so it was important to make multiple comparisons. In most benchmarks the two 4-socket servers were compared. It should be underlined that both servers represent the “top of the line” in terms of frequency. However, in some cases, it was important to compare systems that integrated the latest processor features, such as QPI links, Symmetric multithreading and over-clocking via Turbo mode, and in such situations the X7560 server was compared to a dual socket L5520 based system with an identical frequency of 2.26 GHz. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following ...

  2. Effect of Deep Cryogenic treatment on AISI A8 Tool steel & Development of Wear Mechanism maps using Fuzzy Clustering

    Science.gov (United States)

    Pillai, Nandakumar; Karthikeyan, R., Dr.

    2018-04-01

    Tool steels are widely classified according to their constituents and the type of thermal treatments carried out to obtain their properties. Viking, a special-purpose tool steel under the AISI A8 cold-working steel classification, is widely used for heavy-duty blanking and forming operations. The optimum combination of wear resistance and toughness, as well as ease of machinability in the pre-treated condition, makes this material accepted in heavy cutting and non-cutting tool manufacture. Air or vacuum hardening is recommended as the normal treatment procedure to obtain the desired mechanical and tribological properties for steels in this category. In this study, we incorporate a deep cryogenic phase within the conventional treatment cycle, both before and after tempering. Thermal treatments at sub-zero temperatures down to -195°C were conducted using a cryogenic chamber with liquid nitrogen as the medium. The changes in the microstructure and the corresponding improvements in the tribological and physical properties are analyzed. The cryogenic treatment leads to more conversion of retained austenite to martensite and also to the formation of fine secondary carbides. The microstructure is studied using micrographs taken by optical microscopy. The wear tests are conducted on a DUCOM tribometer for different combinations of speed and load at normal temperature. The wear rates and coefficients of friction obtained from these experiments are used to develop wear mechanism maps with the help of fuzzy c-means clustering and probabilistic neural network models. Fuzzy c-means clustering is an effective algorithm to group data of similar patterns. The wear mechanisms obtained from the computationally developed maps are then compared with the SEM photographs taken, and the improvement in properties due to this additional cryogenic treatment is validated.
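    Fuzzy c-means assigns every observation a graded membership in each cluster rather than a hard label; the standard membership update that such a map construction relies on can be sketched generically (this is the textbook formula, not the authors' code):

        #include <math.h>

        /* Standard fuzzy c-means membership update for one data point:
           u_j = 1 / sum_k (d_j / d_k)^(2/(m-1)), where d_j is the distance of the
           point to cluster centre j and m > 1 is the fuzzifier. Assumes all
           distances are nonzero. */
        void fcm_memberships(const double *dist, int nclusters, double m, double *u)
        {
            double expo = 2.0 / (m - 1.0);
            for (int j = 0; j < nclusters; ++j) {
                double sum = 0.0;
                for (int k = 0; k < nclusters; ++k)
                    sum += pow(dist[j] / dist[k], expo);
                u[j] = 1.0 / sum;
            }
        }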

  3. Performance Characterization of Multi-threaded Graph Processing Applications on Intel Many-Integrated-Core Architecture

    OpenAIRE

    Liu, Xu; Chen, Langshi; Firoz, Jesun S.; Qiu, Judy; Jiang, Lei

    2017-01-01

    Intel Xeon Phi many-integrated-core (MIC) architectures usher in a new era of terascale integration. Among emerging killer applications, parallel graph processing has been a critical technique to analyze connected data. In this paper, we empirically evaluate various computing platforms including an Intel Xeon E5 CPU, a Nvidia Geforce GTX1070 GPU and an Xeon Phi 7210 processor codenamed Knights Landing (KNL) in the domain of parallel graph processing. We show that the KNL gains encouraging per...

  4. Adaptation of MPDATA Heterogeneous Stencil Computation to Intel Xeon Phi Coprocessor

    Directory of Open Access Journals (Sweden)

    Lukasz Szustak

    2015-01-01

    Full Text Available The multidimensional positive definite advection transport algorithm (MPDATA) belongs to the group of nonoscillatory forward-in-time algorithms and performs a sequence of stencil computations. MPDATA is one of the major parts of the dynamic core of the EULAG geophysical model. In this work, we outline an approach to adaptation of the 3D MPDATA algorithm to the Intel MIC architecture. In order to utilize available computing resources, we propose the (3 + 1)D decomposition of MPDATA heterogeneous stencil computations. This approach is based on a combination of the loop tiling and fusion techniques. It allows us to ease memory/communication bounds and better exploit the theoretical floating point efficiency of target computing platforms. An important method of improving the efficiency of the (3 + 1)D decomposition is partitioning of available cores/threads into work teams. It permits reducing inter-cache communication overheads. This method also increases opportunities for the efficient distribution of MPDATA computation onto available resources of the Intel MIC architecture, as well as Intel CPUs. We discuss preliminary performance results obtained on two hybrid platforms, containing two CPUs and Intel Xeon Phi. The top-of-the-line Intel Xeon Phi 7120P gives the best performance results, and executes MPDATA almost 2 times faster than two Intel Xeon E5-2697v2 CPUs.
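    Loop tiling of the kind combined with fusion here keeps a small block of the grid resident in cache while work is applied to it, instead of streaming the whole array through memory for every sweep. A generic tiling sketch (the trivial update and array names are illustrative, not MPDATA's stencils):

        #define TILE 16

        /* Generic cache tiling of a 3D sweep: the k and j loops are blocked so a
           TILE x TILE slab stays in cache while the unit-stride i loop streams
           through it. */
        void tiled_sweep(double *a, const double *b, int nx, int ny, int nz)
        {
            for (int kk = 0; kk < nz; kk += TILE)
                for (int jj = 0; jj < ny; jj += TILE)
                    for (int k = kk; k < nz && k < kk + TILE; ++k)
                        for (int j = jj; j < ny && j < jj + TILE; ++j)
                            for (int i = 0; i < nx; ++i) {
                                long idx = ((long)k * ny + j) * nx + i;
                                a[idx] = 0.5 * (a[idx] + b[idx]);  /* stand-in update */
                            }
        }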

  5. Defining and Controlling the Heterogeneity of a Cluster: the Wrekavoc Tool

    OpenAIRE

    Canon , Louis-Claude; Dubuisson , Olivier; Gustedt , Jens; Jeannot , Emmanuel

    2010-01-01

    The experimental validation and testing of solutions designed for heterogeneous environments is challenging. We introduce Wrekavoc as an accurate tool for this purpose: it runs unmodified applications on emulated multisite heterogeneous platforms. Its principal technique consists in downgrading the performance of the platform characteristics in a prescribed way. The platform characteristics include the compute nodes themselves (CPU and memory) and the inte...

  6. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    Science.gov (United States)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate execution of the Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler on the Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. GNU utilities of make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using the Open Multi-Processing (OpenMP) library in an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross compiling with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets of a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.
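    The C wrapper for zeroing large arrays can be pictured with a short sketch; the symbol name and calling convention below assume the common Fortran-to-C convention of a trailing underscore, pass-by-reference arguments and a default 4-byte INTEGER, and this is not the actual iOMP-SWAT source:

        /* Callable from Fortran as "call zero_array(a, n)" under compilers that
           append a trailing underscore and pass arguments by reference.
           OpenMP threads share the work of clearing the array. */
        void zero_array_(double *a, const int *n)
        {
            int len = *n;
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < len; ++i)
                a[i] = 0.0;
        }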

  7. Principal Component and Cluster Analysis as a Tool in the Assessment of Tomato Hybrids and Cultivars

    Directory of Open Access Journals (Sweden)

    G. Evgenidis

    2011-01-01

    Full Text Available Determination of germplasm diversity and genetic relationships among breeding materials is an invaluable aid in crop improvement strategies. This study assessed the breeding value of tomato source material. Two commercial hybrids along with an experimental hybrid and four cultivars were assessed with cluster and principal component analyses based on morphophysiological data, yield and quality, stability of performance, heterosis, and combining abilities. The assessment of the commercial hybrids revealed a related origin and subsequently does not support the identification of promising offspring in their crossing. The assessment of the cultivars discriminated them according to origin and evolutionary and selection effects. On Principal Component 1, the largest group with positive loading included yield components, heterosis, and general and specific combining ability, whereas the largest negative loading was obtained by qualitative and descriptive traits. Principal Component 2 revealed two smaller groups, a positive one with phenotypic traits and a negative one with tolerance to inbreeding. Stability of performance was loaded positively and/or negatively. In conclusion, combining ability, yield components, and heterosis provided a mechanism for ensuring continued improvement in plant selection programs.

  8. Robust segmentation of medical images using competitive Hopfield neural network as a clustering tool

    International Nuclear Information System (INIS)

    Golparvar Roozbahani, R.; Ghassemian, M. H.; Sharafat, A. R.

    2001-01-01

    This paper presents the application of a competitive Hopfield neural network to medical image segmentation. Our proposed approach consists of two steps: 1) translating segmentation of the given medical image into an optimization problem, and 2) solving this problem by a version of the Hopfield network known as the competitive Hopfield neural network. Segmentation is considered as a clustering problem and its validity criterion is based on both intra-set distance and inter-set distance. The algorithm proposed in this paper is based on gray level features only. This leads to near-optimal solutions if both intra-set distance and inter-set distance are considered at the same time. If only one of these distances is considered, the result of the segmentation process by the competitive Hopfield neural network will be far from the optimal solution and incorrect even for very simple cases. Furthermore, the algorithm sometimes arrives at unacceptable states. Both of these problems may be solved by including both intra-set and inter-set distances in the segmentation (optimization) process. The performance of the proposed algorithm is tested on both phantom and real medical images. The promising results and the robustness of the algorithm to system noise show near-optimal solutions.

  9. Magnesium isotopes: a tool to understand self-enrichment in globular clusters

    Science.gov (United States)

    Ventura, P.; D'Antona, F.; Imbriani, G.; Di Criscienzo, M.; Dell'Agli, F.; Tailo, M.

    2018-06-01

    A critical issue in the asymptotic giant branch (AGB) self-enrichment scenario for the formation of multiple populations in globular clusters (GCs) is the inability to reproduce the magnesium isotopic ratios, despite the model in principle can account for the depletion of magnesium. In this work, we analyse how the uncertainties on the various p-capture cross sections affect the results related to the magnesium content of the ejecta of AGB stars. The observed distribution of the magnesium isotopes and of the overall Mg-Al trend in M13 and NGC 6752 are successfully reproduced when the proton-capture rate by 25Mg at the temperatures ˜100 MK, in particular the 25Mg(p, γ)26Alm channel, is enhanced by a factor ˜3 with respect to the most recent experimental determinations. This assumption also allows us to reproduce the full extent of the Mg spread and the Mg-Si anticorrelation observed in NGC 2419. The uncertainties in the rate of the 25Mg(p, γ)26Alm reaction at the temperatures of interest here leave space for our assumption and we suggest that new experimental measurements are needed to settle this problem. We also discuss the competitive model based on the supermassive star nucleosynthesis.

  10. Cluster-cluster clustering

    International Nuclear Information System (INIS)

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C.S.; Yale Univ., New Haven, CT; California Univ., Santa Barbara; Cambridge Univ., England; Sussex Univ., Brighton, England)

    1985-01-01

    The cluster correlation function ξ_c(r) is compared with the particle correlation function, ξ(r), in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales. 30 references

  11. Web-based Quality Control Tool used to validate CERES products on a cluster of Linux servers

    Science.gov (United States)

    Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Mlynczak, P.; Mitrescu, C.; Doelling, D.

    2014-12-01

    There have been a few popular desktop tools used in the Earth Science community to validate science data. Because of the limitations on the capacity of desktop hardware, such as disk space and CPUs, those software tools are not able to display large amounts of data from files. This poster describes an in-house developed web-based tool built on a cluster of Linux servers, which allows users to take advantage of several Linux servers working in parallel to generate hundreds of images in a short period of time. The poster will demonstrate: (1) the hardware and software architecture used to provide high throughput of images; (2) the software structure that can incorporate new products and new requirements quickly; (3) the user interface, including how users can manipulate the data and control how the images are displayed.

  12. Balancing Contention and Synchronization on the Intel Paragon

    Science.gov (United States)

    Bokhari, Shahid H.; Nicol, David M.

    1996-01-01

    The Intel Paragon is a mesh-connected distributed memory parallel computer. It uses an oblivious and deterministic message routing algorithm: this permits us to develop highly optimized schedules for frequently needed communication patterns. The complete exchange is one such pattern. Several approaches are available for carrying it out on the mesh. We study an algorithm developed by Scott. This algorithm assumes that a communication link can carry one message at a time and that a node can only transmit one message at a time. It requires global synchronization to enforce a schedule of transmissions. Unfortunately, global synchronization has substantial overhead on the Paragon. At the same time, the powerful interconnection mechanism of this machine permits 2 or 3 messages to share a communication link with minor overhead. It can also overlap multiple message transmissions from the same node to some extent. We develop a generalization of Scott's algorithm that executes the complete exchange with a prescribed contention. Schedules that incur greater contention require fewer synchronization steps. This permits us to trade off contention against synchronization overhead. We describe the performance of this algorithm and compare it with Scott's original algorithm as well as with a naive algorithm that does not take the interconnection structure into account. The bounded-contention algorithm is always better than Scott's algorithm and outperforms the naive algorithm for all but the smallest message sizes. The naive algorithm fails to work on meshes larger than 12 x 12. These results show that due consideration of processor interconnect and machine performance parameters is necessary to obtain peak performance from the Paragon and its successor mesh machines.

  13. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2017-07-31

    Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multipole Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.
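
    To make the P2P kernel discussed above concrete, here is a minimal, generic direct-interaction sketch of the kind such libraries vectorize and thread. It is not the ExaFMM implementation; the structure-of-arrays layout, the softening constant EPS, and all names are assumptions for illustration.

```cpp
// Illustrative direct particle-to-particle (P2P) potential kernel.
// Outer loop is threaded with OpenMP; inner loop is marked for SIMD so the
// compiler can target wide (e.g. 512-bit) vector units.
#include <cmath>
#include <cstddef>
#include <vector>

struct Particles {
    std::vector<float> x, y, z, m;   // structure-of-arrays layout helps vectorization
    std::vector<float> pot;          // accumulated potential per particle
};

void p2p(Particles& p) {
    const std::size_t n = p.x.size();
    const float EPS = 1e-6f;                       // softening; also absorbs the i == j self term here
    #pragma omp parallel for schedule(static)      // thread-level parallelism over targets
    for (std::size_t i = 0; i < n; ++i) {
        float acc = 0.0f;
        #pragma omp simd reduction(+:acc)          // vectorize the source loop
        for (std::size_t j = 0; j < n; ++j) {
            const float dx = p.x[j] - p.x[i];
            const float dy = p.y[j] - p.y[i];
            const float dz = p.z[j] - p.z[i];
            const float r2 = dx*dx + dy*dy + dz*dz + EPS;
            acc += p.m[j] / std::sqrt(r2);         // 1/r potential contribution
        }
        p.pot[i] = acc;
    }
}
```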

  14. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Yokota, Rio; Keyes, David E.

    2017-01-01

    Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multipole Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.

  15. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Grout, Ray W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-09

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero- and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine-grain parallelism, and extensive use of vendor-supported libraries (Cilk Plus and the Math Kernel Library). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~3 speed-up using the Intel 2013 compiler and ~1.5 using the Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general purpose computational fluid dynamics codes.

  16. Pain management in cancer center inpatients: a cluster randomized trial to evaluate a systematic integrated approach—The Edinburgh Pain Assessment and Management Tool

    OpenAIRE

    Fallon, M; Walker, J; Colvin, L; Rodriguez, A; Murray, G; Sharpe, M

    2018-01-01

    Purpose Pain is suboptimally managed in patients with cancer. We aimed to compare the effect of a policy of adding a clinician-delivered bedside pain assessment and management tool (Edinburgh Pain Assessment and management Tool [EPAT]) to usual care (UC) versus UC alone on pain outcomes. Patients and Methods In a two-arm, parallel group, cluster randomized (1:1) trial, we observed pain outcomes in 19 cancer centers in the United Kingdom and then randomly assigned the centers to eithe...

  17. Predicting the mean cycle time as a function of throughput and product mix for cluster tool workstations using EPT-based aggregate modeling

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Herk, van J.; Rooda, J.E.

    2009-01-01

    Predicting the mean cycle time as a function of throughput and product mix is helpful in making the production planning for cluster tools. To predict the mean cycle time, detailed simulation models may be used. However, detailed models require much development time, and it may not be possible to

  18. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to utilize the Intel® Xeon Phi™ series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms and also offers tips and tricks for actual use, based on the authors' first-hand optimization experience.The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  19. Evaluating the transport layer of the ALFA framework for the Intel® Xeon Phi™ Coprocessor

    Science.gov (United States)

    Santogidis, Aram; Hirstius, Andreas; Lalis, Spyros

    2015-12-01

    The ALFA framework supports the software development of major High Energy Physics experiments. As part of our research effort to optimize the transport layer of ALFA, we focus on profiling its data transfer performance for inter-node communication on the Intel Xeon Phi Coprocessor. In this article we present the collected performance measurements with the related analysis of the results. The optimization opportunities that are discovered, help us to formulate the future plans of enabling high performance data transfer for ALFA on the Intel Xeon Phi architecture.

  20. Using Intel's Knights Landing Processor to Accelerate Global Nested Air Quality Prediction Modeling System (GNAQPMS) Model

    Science.gov (United States)

    Wang, H.; Chen, H.; Chen, X.; Wu, Q.; Wang, Z.

    2016-12-01

    The Global Nested Air Quality Prediction Modeling System for Hg (GNAQPMS-Hg) is a global chemical transport model coupled with a mercury transport module to investigate mercury pollution. In this study, we present our work of porting the GNAQPMS model to the Intel Xeon Phi processor, Knights Landing (KNL), to accelerate the model. KNL is the second-generation product adopting the Many Integrated Core (MIC) architecture. Compared with the first-generation Knights Corner (KNC), KNL has many new hardware features and can be used as a standalone processor as well as a coprocessor alongside other CPUs. Using the Vtune tool, the high-overhead modules in the GNAQPMS model were identified, including the CBMZ gas chemistry, the advection and convection module, and the wet deposition module. These high-overhead modules were accelerated by optimizing the code and using new KNL techniques. The following optimization measures were applied: 1) changing the pure MPI parallel mode to a hybrid MPI/OpenMP parallel mode; 2) vectorizing the code to use the 512-bit wide vector computation units; 3) reducing unnecessary memory accesses and calculation; 4) reducing Thread Local Storage (TLS) for common variables within each OpenMP thread in CBMZ; 5) changing global communication from file writing and reading to MPI functions. After optimization, the performance of GNAQPMS increased greatly on both the CPU and KNL platforms: the single-node test showed that the optimized version achieves a 2.6x speedup on a two-socket CPU platform and a 3.3x speedup on a one-socket KNL platform compared with the baseline version of the code, which means KNL delivers a 1.29x speedup compared with the two-socket CPU platform.
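
    A minimal sketch of the hybrid MPI/OpenMP pattern described in items 1 and 5 above is shown below. It is not GNAQPMS code; the per-cell work, array sizes, and names are placeholders, and the point is only to show MPI ranks combined with OpenMP threads and a collective used in place of file-based communication.

```cpp
// Hybrid MPI + OpenMP skeleton: threads inside each rank, collectives between ranks.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    int provided = 0;
    // Request thread support so OpenMP threads can coexist with MPI.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;                  // placeholder for this rank's grid cells
    std::vector<double> cell(n, 1.0);

    double local = 0.0;
    // Each MPI rank spawns OpenMP threads over its share of the cells.
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < n; ++i)
        local += cell[i] * 0.5;             // stand-in for per-cell chemistry work

    double global = 0.0;
    // Global communication via an MPI collective instead of writing/reading files.
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("global = %f (ranks = %d)\n", global, size);

    MPI_Finalize();
    return 0;
}
```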

  1. HIV-TRACE (Transmission Cluster Engine): a tool for large scale molecular epidemiology of HIV-1 and other rapidly evolving pathogens.

    Science.gov (United States)

    Kosakovsky Pond, Sergei L; Weaver, Steven; Leigh Brown, Andrew J; Wertheim, Joel O

    2018-01-31

    In modern applications of molecular epidemiology, genetic sequence data are routinely used to identify clusters of transmission in rapidly evolving pathogens, most notably HIV-1. Traditional 'shoeleather' epidemiology infers transmission clusters by tracing chains of partners sharing epidemiological connections (e.g., sexual contact). Here, we present a computational tool for identifying a molecular transmission analog of such clusters: HIV-TRACE (TRAnsmission Cluster Engine). HIV-TRACE implements an approach inspired by traditional epidemiology, by identifying chains of partners whose viral genetic relatedness implies direct or indirect epidemiological connections. Molecular transmission clusters are constructed using codon-aware pairwise alignment to a reference sequence followed by pairwise genetic distance estimation among all sequences. This approach is computationally tractable and is capable of identifying HIV-1 transmission clusters in large surveillance databases comprising tens or hundreds of thousands of sequences in near real time, i.e., on the order of minutes to hours. HIV-TRACE is available at www.hivtrace.org and from github.com/veg/hivtrace, along with the accompanying result visualization module from github.com/veg/hivtrace-viz. Importantly, the approach underlying HIV-TRACE is not limited to the study of HIV-1 and can be applied to study outbreaks and epidemics of other rapidly evolving pathogens. © The Author 2018. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
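
    The clustering step described above (link any two sequences whose pairwise distance is below a threshold, then read off connected components) can be sketched as below. This is not HIV-TRACE code: the simple p-distance stands in for its distance measure, sequences are assumed pre-aligned, and all names are illustrative.

```cpp
// Threshold-based transmission clustering via union-find over pairwise distances.
#include <numeric>
#include <string>
#include <vector>

// Proportion of mismatched, non-gap aligned positions (a crude p-distance).
double p_distance(const std::string& a, const std::string& b) {
    int diff = 0, valid = 0;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        if (a[i] == '-' || b[i] == '-') continue;
        ++valid;
        if (a[i] != b[i]) ++diff;
    }
    return valid ? static_cast<double>(diff) / valid : 1.0;
}

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Returns one cluster label per sequence; sequences sharing a label form a cluster.
std::vector<int> cluster(const std::vector<std::string>& seqs, double threshold) {
    const int n = static_cast<int>(seqs.size());
    UnionFind uf(n);
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (p_distance(seqs[i], seqs[j]) <= threshold)
                uf.unite(i, j);                 // genetic closeness implies a link
    std::vector<int> label(n);
    for (int i = 0; i < n; ++i) label[i] = uf.find(i);
    return label;
}
```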

  2. Accelerating the Pace of Protein Functional Annotation With Intel Xeon Phi Coprocessors.

    Science.gov (United States)

    Feinstein, Wei P; Moreno, Juana; Jarrell, Mark; Brylinski, Michal

    2015-06-01

    Intel Xeon Phi is a new addition to the family of powerful parallel accelerators. The range of its potential applications in computationally driven research is broad; however, at present, the repository of scientific codes is still relatively limited. In this study, we describe the development and benchmarking of a parallel version of eFindSite, a structural bioinformatics algorithm for the prediction of ligand-binding sites in proteins. Implemented for the Intel Xeon Phi platform, the parallelization of the structure alignment portion of eFindSite using pragma-based OpenMP brings about the desired performance improvements, which scale well with the number of computing cores. Compared to a serial version, the parallel code runs 11.8 and 10.1 times faster on the CPU and the coprocessor, respectively; when both resources are utilized simultaneously, the speedup is 17.6. For example, ligand-binding predictions for 501 benchmarking proteins are completed in 2.1 hours on a single Stampede node equipped with the Intel Xeon Phi card compared to 3.1 hours without the accelerator and 36.8 hours required by a serial version. In addition to the satisfactory parallel performance, porting existing scientific codes to the Intel Xeon Phi architecture is relatively straightforward with a short development time due to the support of common parallel programming models by the coprocessor. The parallel version of eFindSite is freely available to the academic community at www.brylinski.org/efindsite.

  3. Extension of the AMBER molecular dynamics software to Intel's Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Needham, Perri J.; Bhuiyan, Ashraf; Walker, Ross C.

    2016-04-01

    We present an implementation of explicit solvent particle mesh Ewald (PME) classical molecular dynamics (MD) within the PMEMD molecular dynamics engine, that forms part of the AMBER v14 MD software package, that makes use of Intel Xeon Phi coprocessors by offloading portions of the PME direct summation and neighbor list build to the coprocessor. We refer to this implementation as pmemd MIC offload and in this paper present the technical details of the algorithm, including basic models for MPI and OpenMP configuration, and analyze the resultant performance. The algorithm provides the best performance improvement for large systems (>400,000 atoms), achieving a ∼35% performance improvement for satellite tobacco mosaic virus (1,067,095 atoms) when 2 Intel E5-2697 v2 processors (2 ×12 cores, 30M cache, 2.7 GHz) are coupled to an Intel Xeon Phi coprocessor (Model 7120P-1.238/1.333 GHz, 61 cores). The implementation utilizes a two-fold decomposition strategy: spatial decomposition using an MPI library and thread-based decomposition using OpenMP. We also present compiler optimization settings that improve the performance on Intel Xeon processors, while retaining simulation accuracy.

  4. Newsgroups, Activist Publics, and Corporate Apologia: The Case of Intel and Its Pentium Chip.

    Science.gov (United States)

    Hearit, Keith Michael

    1999-01-01

    Applies J. Grunig's theory of publics to the phenomenon of Internet newsgroups using the case of the flawed Intel Pentium chip. Argues that technology facilitates the rapid movement of publics from the theoretical construct stage to the active stage. Illustrates some of the difficulties companies face in establishing their identity in cyberspace.…

  5. Why K-12 IT Managers and Administrators Are Embracing the Intel-Based Mac

    Science.gov (United States)

    Technology & Learning, 2007

    2007-01-01

    Over the past year, Apple has dramatically increased its share of the school computer marketplace--especially in the category of notebook computers. A recent study conducted by Grunwald Associates and Rockman et al. reports that one of the major reasons for this growth is Apple's introduction of the Intel processor to the entire line of Mac…

  6. Parallel Density-Based Clustering for Discovery of Ionospheric Phenomena

    Science.gov (United States)

    Pankratius, V.; Gowanlock, M.; Blair, D. M.

    2015-12-01

    Ionospheric total electron content maps derived from global networks of dual-frequency GPS receivers can reveal a plethora of ionospheric features in real-time and are key to space weather studies and natural hazard monitoring. However, growing data volumes from expanding sensor networks are making manual exploratory studies challenging. As the community is heading towards Big Data ionospheric science, automation and Computer-Aided Discovery become indispensable tools for scientists. One problem of machine learning methods is that they require domain-specific adaptations in order to be effective and useful for scientists. Addressing this problem, our Computer-Aided Discovery approach allows scientists to express various physical models as well as perturbation ranges for parameters. The search space is explored through an automated system and parallel processing of batched workloads, which finds corresponding matches and similarities in empirical data. We discuss density-based clustering as a particular method we employ in this process. Specifically, we adapt Density-Based Spatial Clustering of Applications with Noise (DBSCAN). This algorithm groups geospatial data points based on density. Clusters of points can be of arbitrary shape, and the number of clusters is not predetermined by the algorithm; only two input parameters need to be specified: (1) a distance threshold, (2) a minimum number of points within that threshold. We discuss an implementation of DBSCAN for batched workloads that is amenable to parallelization on manycore architectures such as Intel's Xeon Phi accelerator with 60+ general-purpose cores. This manycore parallelization can cluster large volumes of ionospheric total electron content data quickly. Potential applications for cluster detection include the visualization, tracing, and examination of traveling ionospheric disturbances or other propagating phenomena. Acknowledgments. We acknowledge support from NSF ACI-1442997 (PI V. Pankratius).
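
    To make the two DBSCAN parameters above concrete (eps, the distance threshold, and minPts, the minimum number of neighbouring points), here is a minimal serial sketch of the standard algorithm. It is an illustration only, not the batched manycore implementation the record describes, and the Point fields are assumed placeholders for TEC map samples.

```cpp
// Minimal serial DBSCAN: returns a cluster id per point, with -1 marking noise.
#include <cmath>
#include <queue>
#include <vector>

struct Point { double lon, lat; };   // hypothetical sample location on a TEC map

static double dist(const Point& a, const Point& b) {
    return std::hypot(a.lon - b.lon, a.lat - b.lat);
}

static std::vector<int> neighbours(const std::vector<Point>& pts, int i, double eps) {
    std::vector<int> out;
    for (int j = 0; j < static_cast<int>(pts.size()); ++j)
        if (j != i && dist(pts[i], pts[j]) <= eps) out.push_back(j);
    return out;
}

std::vector<int> dbscan(const std::vector<Point>& pts, double eps, int minPts) {
    const int n = static_cast<int>(pts.size());
    std::vector<int> label(n, -2);                 // -2 = unvisited, -1 = noise
    int cluster = -1;
    for (int i = 0; i < n; ++i) {
        if (label[i] != -2) continue;
        auto seeds = neighbours(pts, i, eps);
        if (static_cast<int>(seeds.size()) + 1 < minPts) { label[i] = -1; continue; }
        label[i] = ++cluster;                      // i is a core point: start a cluster
        std::queue<int> q;
        for (int s : seeds) q.push(s);
        while (!q.empty()) {
            int j = q.front(); q.pop();
            if (label[j] == -1) label[j] = cluster;            // former noise becomes a border point
            if (label[j] != -2) continue;                      // already assigned
            label[j] = cluster;
            auto nb = neighbours(pts, j, eps);
            if (static_cast<int>(nb.size()) + 1 >= minPts)     // core point: keep expanding
                for (int s : nb) q.push(s);
        }
    }
    return label;
}
```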

  7. Cluster as a Tool to Increase the Competitiveness and Innovative Activity of Enterprises of the Defense Industry Complex

    Directory of Open Access Journals (Sweden)

    Katrina B. Dobrova

    2017-01-01

    Full Text Available Purpose: the main goal of the publication is to make a comprehensive study of the possible application of the cluster approach to improve the competitiveness and innovation activity of enterprises of the defense industry complex. Methods: the methodology of the research is based on the collection and analysis of initial data and information; the article uses a systematic approach to the study of socio-economic processes and phenomena. The research is based on the modern theory of competition and innovation, as well as the modern paradigm of cluster development of the economy. In preparing the study, practical materials from the Corporation “Rostec” were used. Results: the article presents the notion of a cluster and the prospects for using the cluster approach to enhance the competitiveness and innovation activity of enterprises of the military-industrial complex. It is noted that the activation of interaction with the “civil sector” is particularly relevant in the context of the reduction of the state defense order, and the theory and practice of cluster management offer a number of forms of cluster interaction between the enterprises of the defense industry and the civil sector. It is emphasized that the development of cluster mechanisms can solve a number of problems related to the insufficient financial stability of defense industry enterprises in the context of a reduction in the state defense order, low innovation activity, and the lack of developed models of interaction with small innovative enterprises. Ultimately, the use of cluster mechanisms in the development of defense enterprises is intended to enhance the competitiveness of the complex, both nationally and globally. It is stated, however, that the existing clusters are not able to fully solve a number of specific tasks related to the diversification of integrated defense industry structures.

  8. Country clustering applied to the water and sanitation sector: a new tool with potential applications in research and policy.

    Science.gov (United States)

    Onda, Kyle; Crocker, Jonny; Kayser, Georgia Lyn; Bartram, Jamie

    2014-03-01

    The fields of global health and international development commonly cluster countries by geography and income to target resources and describe progress. For any given sector of interest, a range of relevant indicators can serve as a more appropriate basis for classification. We create a new typology of country clusters specific to the water and sanitation (WatSan) sector based on similarities across multiple WatSan-related indicators. After a literature review and consultation with experts in the WatSan sector, nine indicators were selected. Indicator selection was based on relevance to and suggested influence on national water and sanitation service delivery, and to maximize data availability across as many countries as possible. A hierarchical clustering method and a gap statistic analysis were used to group countries into a natural number of relevant clusters. Two stages of clustering resulted in five clusters, representing 156 countries or 6.75 billion people. The five clusters were not well explained by income or geography, and were distinct from existing country clusters used in international development. Analysis of these five clusters revealed that they were more compact and well separated than United Nations and World Bank country clusters. This analysis and resulting country typology suggest that previous geography- or income-based country groupings can be improved upon for applications in the WatSan sector by utilizing globally available WatSan-related indicators. Potential applications include guiding and discussing research, informing policy, improving resource targeting, describing sector progress, and identifying critical knowledge gaps in the WatSan sector. Copyright © 2013 Elsevier GmbH. All rights reserved.

  9. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    Science.gov (United States)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0kHz with less than 20μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966Hz, the maximum frame-rate of the camera, with jitter remaining below 20μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.

  10. Quantum Chemical Calculations Using Accelerators: Migrating Matrix Operations to the NVIDIA Kepler GPU and the Intel Xeon Phi.

    Science.gov (United States)

    Leang, Sarom S; Rendell, Alistair P; Gordon, Mark S

    2014-03-11

    Increasingly, modern computer systems comprise a multicore general-purpose processor augmented with a number of special purpose devices or accelerators connected via an external interface such as a PCI bus. The NVIDIA Kepler Graphical Processing Unit (GPU) and the Intel Phi are two examples of such accelerators. Accelerators offer peak performances that can be well above those of the host processor. How to exploit this heterogeneous environment for legacy application codes is not, however, straightforward. This paper considers how matrix operations in typical quantum chemical calculations can be migrated to the GPU and Phi systems. Double precision general matrix multiply operations are endemic in electronic structure calculations, especially methods that include electron correlation, such as density functional theory, second order perturbation theory, and coupled cluster theory. The use of approaches that automatically determine whether to use the host or an accelerator, based on problem size, is explored, with computations that are occurring on the accelerator and/or the host. For data-transfers over PCI-e, the GPU provides the best overall performance for data sizes up to 4096 MB with consistent upload and download rates between 5-5.6 GB/s and 5.4-6.3 GB/s, respectively. The GPU outperforms the Phi for both square and nonsquare matrix multiplications.
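
    The record mentions approaches that automatically decide between the host and an accelerator based on problem size. A hedged sketch of such a dispatch wrapper is shown below; the crossover threshold and all function names are assumptions, and a naive triple loop stands in for the optimized host and accelerator BLAS calls so the example stays self-contained.

```cpp
// Size-based dispatch for a double-precision matrix multiply (square n x n, row-major).
#include <cstddef>

// Naive triple loop standing in for an optimized host BLAS DGEMM.
void dgemm_host(std::size_t n, const double* A, const double* B, double* C) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            double s = 0.0;
            for (std::size_t k = 0; k < n; ++k) s += A[i*n + k] * B[k*n + j];
            C[i*n + j] = s;
        }
}

// Stand-in for an accelerator-side DGEMM (GPU or coprocessor BLAS); in this
// sketch it simply reuses the host kernel so the code remains runnable as-is.
void dgemm_accel(std::size_t n, const double* A, const double* B, double* C) {
    dgemm_host(n, A, B, C);
}

// Offload only when the O(n^3) work is large enough to amortize PCI-e transfers.
void dgemm_auto(std::size_t n, const double* A, const double* B, double* C) {
    constexpr std::size_t kOffloadThreshold = 2048;   // assumed crossover size, tune per system
    if (n >= kOffloadThreshold) dgemm_accel(n, A, B, C);
    else                        dgemm_host(n, A, B, C);
}
```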

  11. Jets from jets: re-clustering as a tool for large radius jet reconstruction and grooming at the LHC

    Science.gov (United States)

    Nachman, Benjamin; Nef, Pascal; Schwartzman, Ariel; Swiatlowski, Maximilian; Wanotayaroj, Chaowaroj

    2015-02-01

    Jets with a large radius R ≳ 1 and grooming algorithms are widely used to fully capture the decay products of boosted heavy particles at the Large Hadron Collider (LHC). Unlike most discriminating variables used in such studies, the jet radius is usually not optimized for specific physics scenarios. This is because every jet configuration must be calibrated, in situ, to account for detector response and other experimental effects. One solution to enhance the availability of large-R jet configurations used by the LHC experiments is jet re-clustering. Jet re-clustering introduces an intermediate scale r < R, and re-clustered large radius jets have essentially the same jet mass performance as large radius groomed jets. Jet re-clustering has the benefit that no additional large-R calibration is necessary, allowing the re-clustered large radius parameter to be optimized in the context of specific precision measurements or searches for new physics.

  12. Jets from jets: re-clustering as a tool for large radius jet reconstruction and grooming at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Nachman, Benjamin; Nef, Pascal; Schwartzman, Ariel; Swiatlowski, Maximilian [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States); Wanotayaroj, Chaowaroj [Center for High Energy Physics, University of Oregon,1371 E. 13th Ave, Eugene, OR 97403 (United States)

    2015-02-12

    Jets with a large radius R≳1 and grooming algorithms are widely used to fully capture the decay products of boosted heavy particles at the Large Hadron Collider (LHC). Unlike most discriminating variables used in such studies, the jet radius is usually not optimized for specific physics scenarios. This is because every jet configuration must be calibrated, in situ, to account for detector response and other experimental effects. One solution to enhance the availability of large-R jet configurations used by the LHC experiments is jet re-clustering. Jet re-clustering introduces an intermediate scale r < R. We compare re-clustering configurations and show that re-clustered large radius jets have essentially the same jet mass performance as large radius groomed jets. Jet re-clustering has the benefit that no additional large-R calibration is necessary, allowing the re-clustered large radius parameter to be optimized in the context of specific precision measurements or searches for new physics.

  13. Jets from jets: re-clustering as a tool for large radius jet reconstruction and grooming at the LHC

    International Nuclear Information System (INIS)

    Nachman, Benjamin; Nef, Pascal; Schwartzman, Ariel; Swiatlowski, Maximilian; Wanotayaroj, Chaowaroj

    2015-01-01

    Jets with a large radius R≳1 and grooming algorithms are widely used to fully capture the decay products of boosted heavy particles at the Large Hadron Collider (LHC). Unlike most discriminating variables used in such studies, the jet radius is usually not optimized for specific physics scenarios. This is because every jet configuration must be calibrated, in situ, to account for detector response and other experimental effects. One solution to enhance the availability of large-R jet configurations used by the LHC experiments is jet re-clustering. Jet re-clustering introduces an intermediate scale r < R. We compare re-clustering configurations and show that re-clustered large radius jets have essentially the same jet mass performance as large radius groomed jets. Jet re-clustering has the benefit that no additional large-R calibration is necessary, allowing the re-clustered large radius parameter to be optimized in the context of specific precision measurements or searches for new physics.

  14. Evaluation of CHO Benchmarks on the Arria 10 FPGA using Intel FPGA SDK for OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States); Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-05-23

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time as users can evaluate different ideas in a high-level language without deep FPGA domain knowledge. Benchmarking an OpenCL-based framework is an effective way to analyze system performance by studying the execution of the benchmark applications. CHO is a suite of benchmark applications that provides support for OpenCL [1]. The authors presented CHO as an OpenCL port of the CHStone benchmark. Using the Altera OpenCL (AOCL) compiler to synthesize the benchmark applications, they listed the resource usage and performance of each kernel that can be successfully synthesized by the compiler. In this report, we evaluate the resource usage and performance of the CHO benchmark applications using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board that features an Arria 10 FPGA device. The focus of the report is to gain a better understanding of the resource usage and performance of the kernel implementations using Arria 10 FPGA devices compared to Stratix V FPGA devices. In addition, we also gain knowledge about the limitations of the current compiler when it fails to synthesize a benchmark

  15. Blocked All-Pairs Shortest Paths Algorithm on Intel Xeon Phi KNL Processor: A Case Study

    OpenAIRE

    Rucci, Enzo; De Giusti, Armando Eduardo; Naiouf, Marcelo

    2017-01-01

    Manycores are consolidating in the HPC community as a way of improving performance while keeping power efficiency. Knights Landing is the recently released second generation of the Intel Xeon Phi architecture. While optimizing applications on CPUs, GPUs and first-generation Xeon Phis has been largely studied in recent years, the new features in Knights Landing processors require a revision of programming and optimization techniques for these devices. In this work, we selected the Floyd-Warshall algorithm ...
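
    For context, the classic (unblocked) Floyd-Warshall recurrence that the blocked variant in the record reorganizes for cache reuse and vectorization is dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]) over each pivot k. The sketch below is this baseline, not the paper's KNL implementation; the SIMD pragma on the innermost loop marks the usual vectorization target.

```cpp
// Baseline all-pairs shortest paths (Floyd-Warshall) on a dense distance matrix.
#include <algorithm>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<float>>;   // dist[i][j], large value for "no edge"

void floyd_warshall(Matrix& dist) {
    const std::size_t n = dist.size();
    for (std::size_t k = 0; k < n; ++k)            // pivot vertex
        for (std::size_t i = 0; i < n; ++i) {
            const float dik = dist[i][k];          // hoisted: constant across the j loop
            #pragma omp simd                       // the j loop is the vectorizable one
            for (std::size_t j = 0; j < n; ++j)
                dist[i][j] = std::min(dist[i][j], dik + dist[k][j]);
        }
}
```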

  16. Benchmarking Data Analysis and Machine Learning Applications on the Intel KNL Many-Core Processor

    OpenAIRE

    Byun, Chansup; Kepner, Jeremy; Arcand, William; Bestor, David; Bergeron, Bill; Gadepally, Vijay; Houle, Michael; Hubbell, Matthew; Jones, Michael; Klein, Anna; Michaleas, Peter; Milechin, Lauren; Mullen, Julie; Prout, Andrew; Rosa, Antonio

    2017-01-01

    Knights Landing (KNL) is the code name for the second-generation Intel Xeon Phi product family. KNL has generated significant interest in the data analysis and machine learning communities because its new many-core architecture targets both of these workloads. The KNL many-core vector processor design enables it to exploit much higher levels of parallelism. At the Lincoln Laboratory Supercomputing Center (LLSC), the majority of users are running data analysis applications such as MATLAB and O...

  17. Accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) model on Intel Xeon Phi processors

    OpenAIRE

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junming; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-01-01

    The GNAQPMS model is the global version of the Nested Air Quality Prediction Modelling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present our work of porting and optimizing the GNAQPMS model on the second generation Intel Xeon Phi processor codename “Knights Landing” (KNL). Compared with the first generation Xeon Phi coprocessor, KNL introduced many new hardware features such as a boo...

  18. Applying the roofline performance model to the intel xeon phi knights landing processor

    OpenAIRE

    Doerfler, D; Deslippe, J; Williams, S; Oliker, L; Cook, B; Kurth, T; Lobet, M; Malas, T; Vay, JL; Vincenti, H

    2016-01-01

    © Springer International Publishing AG 2016. The Roofline Performance Model is a visually intuitive method used to bound the sustained peak floating-point performance of any given arithmetic kernel on any given processor architecture. In the Roofline, performance is nominally measured in floating-point operations per second as a function of arithmetic intensity (operations per byte of data). In this study we determine the Roofline for the Intel Knights Landing (KNL) processor, determining t...
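
    For reference, the bound the Roofline model places on sustained performance is the following, where I is the arithmetic intensity mentioned in the record, P_peak the peak floating-point rate, and B_peak the peak memory bandwidth:

    $$ P_{\text{attainable}} \;=\; \min\!\bigl(P_{\text{peak}},\; I \cdot B_{\text{peak}}\bigr), \qquad I = \frac{\text{floating-point operations}}{\text{bytes moved to/from memory}} $$

    Kernels with low intensity sit under the bandwidth "roof" (the sloped part), while high-intensity kernels are limited by the compute "roof".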

  19. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™

    OpenAIRE

    Gomes, Jeremias M.; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H.

    2015-01-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP’s irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high perfo...

  20. Performance Engineering for a Medical Imaging Application on the Intel Xeon Phi Accelerator

    OpenAIRE

    Hofmann, Johannes; Treibig, Jan; Hager, Georg; Wellein, Gerhard

    2013-01-01

    We examine the Xeon Phi, which is based on Intel's Many Integrated Cores architecture, for its suitability to run the FDK algorithm--the most commonly used algorithm to perform the 3D image reconstruction in cone-beam computed tomography. We study the challenges of efficiently parallelizing the application and means to enable sensible data sharing between threads despite the lack of a shared last level cache. Apart from parallelization, SIMD vectorization is critical for good performance on t...

  1. DBPQL: A view-oriented query language for the Intel Data Base Processor

    Science.gov (United States)

    Fishwick, P. A.

    1983-01-01

    An interactive query language (DBPQL) for the Intel Data Base Processor (DBP) is defined. DBPQL includes a parser generator package which permits the analyst to easily create and manipulate the query statement syntax and semantics. The prototype language, DBPQL, includes trace and performance commands to aid the analyst when implementing new commands and analyzing the execution characteristics of the DBP. The DBPQL grammar file and associated key procedures are included as an appendix to this report.

  2. Autonomous controller (JCAM 10) for CAMAC crate with 8080 (INTEL) microprocessor

    International Nuclear Information System (INIS)

    Gallice, P.; Mathis, M.

    1975-01-01

    The CAMAC crate autonomous controller JCAM-10 is designed around an INTEL 8080 microprocessor in association with 5K of RAM and 4K of REPROM memory. The concept of the module is described, in which data transfers between CAMAC modules and the memory are optimised both from the software point of view and in execution time. In effect, the JCAM-10 is a microcomputer with a set of 1000 peripheral units represented by the commercially available CAMAC modules

  3. Practical Implementation of Lattice QCD Simulation on Intel Xeon Phi Knights Landing

    OpenAIRE

    Kanamori, Issaku; Matsufuru, Hideo

    2017-01-01

    We investigate the implementation of lattice Quantum Chromodynamics (QCD) code on the Intel Xeon Phi Knights Landing (KNL). The most time-consuming part of numerical simulations of lattice QCD is the solver of linear equations for a large sparse matrix that represents the strong interaction among quarks. To establish widely applicable prescriptions, we examine rather general methods for the SIMD architecture of KNL, such as using intrinsics and manual prefetching, applied to the matrix multiplication an...

  4. Acceleration of Blender Cycles Path-Tracing Engine Using Intel Many Integrated Core Architecture

    OpenAIRE

    Jaroš, Milan; Říha, Lubomír; Strakoš, Petr; Karásek, Tomáš; Vašatová, Alena; Jarošová, Marta; Kozubek, Tomáš

    2015-01-01

    Part 2: Algorithms; This paper describes the acceleration of the most computationally intensive kernels of the Blender rendering engine, Blender Cycles, using Intel Many Integrated Core architecture (MIC). The proposed parallelization, which uses OpenMP technology, also improves the performance of the rendering engine when running on multi-core CPUs and multi-socket servers. Although the GPU acceleration is already implemented in Cycles, its functionality is limited. O...

  5. Cartesian coordinate (X, Y) table for drilling materials by means of an Intel 8051 microcontroller

    Directory of Open Access Journals (Sweden)

    Omar Yesid Flórez-Prada

    2001-01-01

    Full Text Available In our environment we are surrounded by a number of electronic systems that perform automatic operations according to a number of parameters previously programmed by the operator. This paper presents the prototype of a two-coordinate (Cartesian X, Y) table that uses a development system based on the Intel 8051 microcontroller. The system operates by sending the respective control commands to position the tool at different points of the work area of the table; the points are programmed beforehand by the operator through the keyboard. To produce the movements of the (X, Y) table, actuator devices are used that perform a linear movement, displacing the tool by the specified distance.

  6. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS

  7. Implementation of an Agent-Based Parallel Tissue Modelling Framework for the Intel MIC Architecture

    Directory of Open Access Journals (Sweden)

    Maciej Cytowski

    2017-01-01

    Full Text Available Timothy is a novel large-scale modelling framework that allows the simulation of biological processes involving different cellular colonies growing and interacting with a variable environment. Timothy was designed for execution on massively parallel High Performance Computing (HPC) systems. The high parallel scalability of the implementation allows for simulations of up to 10^9 individual cells (i.e., simulations at tissue spatial scales of up to 1 cm^3 in size). With the recent advancements of the Timothy model, it has become critical to ensure an appropriate performance level on emerging HPC architectures. For instance, the introduction of blood vessels supplying nutrients to the tissue is a very important step towards realistic simulations of complex biological processes, but it greatly increased the computational complexity of the model. In this paper, we describe the process of modernization of the application in order to achieve high computational performance on HPC hybrid systems based on the modern Intel® MIC architecture. Experimental results on the Intel Xeon Phi™ coprocessor x100 and the Intel Xeon Phi processor x200 are presented.

  8. Experience with Intel's many integrated core architecture in ATLAS software

    International Nuclear Information System (INIS)

    Fleischmann, S; Neumann, M; Kama, S; Lavrijsen, W; Vitillo, R

    2014-01-01

    Intel recently released the first commercial boards of its Many Integrated Core (MIC) Architecture. MIC is Intel's solution for the domain of throughput computing, currently dominated by general purpose programming on graphics processors (GPGPU). MIC allows the use of the more familiar x86 programming model and supports standard technologies such as OpenMP, MPI, and Intel's Threading Building Blocks (TBB). This should make it possible to develop for both throughput and latency devices using a single code base. In ATLAS Software, track reconstruction has been shown to be a good candidate for throughput computing on GPGPU devices. In addition, the newly proposed offline parallel event-processing framework, GaudiHive, uses TBB for task scheduling. The MIC is thus, in principle, a good fit for this domain. In this paper, we report our experiences of porting to and optimizing ATLAS tracking algorithms for the MIC, comparing the programmability and relative cost/performance of the MIC against those of current GPGPUs and latency-optimized CPUs.

  9. Scaling deep learning workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing

    Energy Technology Data Exchange (ETDEWEB)

    Gawande, Nitin A.; Landwehr, Joshua B.; Daily, Jeffrey A.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.

    2017-08-24

    Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors --- including NVIDIA, Intel, AMD, and IBM --- have architectural road-maps influenced by DL workloads. Furthermore, several vendors have recently advertised new computing products as accelerating large DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path or Cray Aries. Our evaluation consists of a cross section of convolutional neural net workloads: CifarNet, AlexNet, GoogLeNet, and ResNet50 topologies using the Cifar10 and ImageNet datasets. The workloads are vendor-optimized for each architecture. Our analysis indicates that although GPUs provide the highest overall performance, the gap can close for some convolutional networks; and the KNL can be competitive in performance/watt. We find that NVLink facilitates scaling efficiency on GPUs. However, its importance is heavily dependent on neural network architecture. Furthermore, for weak-scaling --- sometimes encouraged by restricted GPU memory --- NVLink is less important.

  10. Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing

    Energy Technology Data Exchange (ETDEWEB)

    Gawande, Nitin A.; Landwehr, Joshua B.; Daily, Jeffrey A.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.

    2017-07-03

    Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors --- including NVIDIA, Intel, AMD and IBM --- have architectural road-maps influenced by DL workloads. Furthermore, several vendors have recently advertised new computing products as accelerating DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path. Our evaluation consists of a cross section of convolutional neural net workloads: CifarNet, CaffeNet, AlexNet and GoogleNet topologies using the Cifar10 and ImageNet datasets. The workloads are vendor optimized for each architecture. GPUs provide the highest overall raw performance. Our analysis indicates that although GPUs provide the highest overall performance, the gap can close for some convolutional networks; and KNL can be competitive when considering performance/watt. Furthermore, NVLink is critical to GPU scaling.

  11. I-HASTREAM : density-based hierarchical clustering of big data streams and its application to big graph analytics tools

    NARCIS (Netherlands)

    Hassani, M.; Spaus, P.; Cuzzocrea, A.; Seidl, T.

    2016-01-01

    Big Data Streams are very popular now, spurred by a plethora of modern applications such as sensor networks, scientific computing tools, Web intelligence, social network analysis and mining tools, and so forth. Here, the main research issue consists in how to effectively and efficiently

  12. MPIGeneNet: Parallel Calculation of Gene Co-Expression Networks on Multicore Clusters.

    Science.gov (United States)

    Gonzalez-Dominguez, Jorge; Martin, Maria J

    2017-10-10

    In this work we present MPIGeneNet, a parallel tool that applies Pearson's correlation and Random Matrix Theory to construct gene co-expression networks. It is based on the state-of-the-art sequential tool RMTGeneNet, which provides networks with high robustness and sensitivity at the expense of relatively long runtimes for large-scale input datasets. MPIGeneNet returns the same results as RMTGeneNet but improves the memory management, reduces the I/O cost, and accelerates the two most computationally demanding steps of co-expression network construction by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on two different systems using three typical input datasets shows that MPIGeneNet is significantly faster than RMTGeneNet. As an example, our tool is up to 175.41 times faster on a cluster with eight nodes, each one containing two 12-core Intel Haswell processors. Source code of MPIGeneNet, as well as a reference manual, are available at https://sourceforge.net/projects/mpigenenet/.
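
    A minimal serial sketch of the first of the two demanding steps mentioned above, filling a gene co-expression matrix with Pearson correlation coefficients, is shown below. It is an illustration, not the MPI-parallel MPIGeneNet code, and it omits the Random Matrix Theory thresholding step.

```cpp
// Pearson correlation between expression profiles, and the full gene-gene matrix.
#include <cmath>
#include <cstddef>
#include <vector>

double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double mx = 0.0, my = 0.0;
    for (std::size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}

// expr[g] holds the expression profile of gene g across all samples.
std::vector<std::vector<double>> correlation_matrix(
        const std::vector<std::vector<double>>& expr) {
    const std::size_t g = expr.size();
    std::vector<std::vector<double>> r(g, std::vector<double>(g, 1.0));
    for (std::size_t i = 0; i < g; ++i)
        for (std::size_t j = i + 1; j < g; ++j)
            r[i][j] = r[j][i] = pearson(expr[i], expr[j]);  // symmetric matrix
    return r;
}
```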

  13. A Cluster Randomized-Controlled Trial of the Impact of the Tools of the Mind Curriculum on Self-Regulation in Canadian Preschoolers.

    Science.gov (United States)

    Solomon, Tracy; Plamondon, Andre; O'Hara, Arland; Finch, Heather; Goco, Geraldine; Chaban, Peter; Huggins, Lorrie; Ferguson, Bruce; Tannock, Rosemary

    2017-01-01

    Early self-regulation predicts school readiness, academic success, and quality of life in adulthood. Its development in the preschool years is rapid and also malleable. Thus, preschool curricula that promote the development of self-regulation may help set children on a more positive developmental trajectory. We conducted a cluster-randomized controlled trial of the Tools of the Mind preschool curriculum, a program that targets self-regulation through imaginative play and self-regulatory language (Tools; clinical trials identifier NCT02462733). Previous research with Tools is limited, with mixed evidence of its effectiveness. Moreover, it is unclear whether it would benefit all preschoolers or primarily those with poorly developed cognitive capacities (e.g., language, executive function, attention). The study goals were to ascertain whether the Tools program leads to greater gains in self-regulation compared to Playing to Learn (YMCA PTL), another play based program that does not target self-regulation specifically, and whether the effects were moderated by children's initial language and hyperactivity/inattention. Two hundred and sixty 3- to 4-year-olds attending 20 largely urban daycares were randomly assigned, at the site level, to receive either Tools or YMCA PTL (the business-as-usual curriculum) for 15 months. We assessed self-regulation at pre-, mid and post intervention, using two executive function tasks, and two questionnaires regarding behavior at home and at school, to capture development in cognitive as well as socio-emotional aspects of self-regulation. Fidelity data showed that only the teachers at the Tools sites implemented Tools, and did so with reasonable success. We found that children who received Tools made greater gains on a behavioral measure of executive function than their YMCA PTL peers, but the difference was significant only for those children whose parents rated them high in hyperactivity/inattention initially. The effect of Tools did

  14. A Cluster Randomized-Controlled Trial of the Impact of the Tools of the Mind Curriculum on Self-Regulation in Canadian Preschoolers

    Directory of Open Access Journals (Sweden)

    Tracy Solomon

    2018-01-01

    Full Text Available Early self-regulation predicts school readiness, academic success, and quality of life in adulthood. Its development in the preschool years is rapid and also malleable. Thus, preschool curricula that promote the development of self-regulation may help set children on a more positive developmental trajectory. We conducted a cluster-randomized controlled trial of the Tools of the Mind preschool curriculum, a program that targets self-regulation through imaginative play and self-regulatory language (Tools; clinical trials identifier NCT02462733). Previous research with Tools is limited, with mixed evidence of its effectiveness. Moreover, it is unclear whether it would benefit all preschoolers or primarily those with poorly developed cognitive capacities (e.g., language, executive function, attention). The study goals were to ascertain whether the Tools program leads to greater gains in self-regulation compared to Playing to Learn (YMCA PTL), another play-based program that does not target self-regulation specifically, and whether the effects were moderated by children’s initial language and hyperactivity/inattention. Two hundred and sixty 3- to 4-year-olds attending 20 largely urban daycares were randomly assigned, at the site level, to receive either Tools or YMCA PTL (the business-as-usual curriculum) for 15 months. We assessed self-regulation at pre-, mid- and post-intervention, using two executive function tasks, and two questionnaires regarding behavior at home and at school, to capture development in cognitive as well as socio-emotional aspects of self-regulation. Fidelity data showed that only the teachers at the Tools sites implemented Tools, and did so with reasonable success. We found that children who received Tools made greater gains on a behavioral measure of executive function than their YMCA PTL peers, but the difference was significant only for those children whose parents rated them high in hyperactivity/inattention initially. The

  15. Cluster ion beam facilities

    International Nuclear Information System (INIS)

    Popok, V.N.; Prasalovich, S.V.; Odzhaev, V.B.; Campbell, E.E.B.

    2001-01-01

    A brief state-of-the-art review in the field of cluster-surface interactions is presented. Ionised cluster beams could become a powerful and versatile tool for the modification and processing of surfaces as an alternative to ion implantation and ion-assisted deposition. The main effects of cluster-surface collisions and possible applications of cluster ion beams are discussed. The outlook for the Cluster Implantation and Deposition Apparatus (CIDA) being developed at Göteborg University is presented.

  16. Fuzzy Clustering

    DEFF Research Database (Denmark)

    Berks, G.; Keyserlingk, Diedrich Graf von; Jantzen, Jan

    2000-01-01

    A symptom is a condition indicating the presence of a disease, especially when regarded as an aid in diagnosis. Symptoms are the smallest units indicating the existence of a disease. A syndrome, on the other hand, is an aggregate, set or cluster of concurrent symptoms which together indicate...... and clustering are the basic concerns in medicine. Classification depends on definitions of the classes and the required degree of participation of the elements in the cases' symptoms. In medicine imprecise conditions are the rule, and therefore fuzzy methods are much more suitable than crisp ones. Fuzzy c-means clustering is an easy and well-established tool, which has been applied in many medical fields. We used c-means fuzzy clustering after feature extraction from an aphasia database. Factor analysis was applied on a correlation matrix of 26 symptoms of language disorders and led to five factors. The factors...
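
    The fuzzy c-means procedure mentioned above alternates between a membership update and a centroid update until convergence. As a rough illustration only (not the authors' implementation), the following C sketch performs those two updates on one-dimensional toy data; the fuzzifier m = 2 and the data values are arbitrary assumptions.

        #include <math.h>
        #include <stdio.h>

        #define N 6      /* number of data points (assumed toy data) */
        #define C 2      /* number of clusters */
        #define M 2.0    /* fuzzifier m > 1 */

        int main(void) {
            double x[N] = {1.0, 1.2, 0.9, 5.0, 5.3, 4.8};   /* toy 1-D data */
            double v[C] = {0.0, 6.0};                        /* initial centroids */
            double u[C][N];

            for (int it = 0; it < 20; it++) {
                /* membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1)) */
                for (int j = 0; j < N; j++)
                    for (int i = 0; i < C; i++) {
                        double dij = fabs(x[j] - v[i]) + 1e-12;
                        double s = 0.0;
                        for (int k = 0; k < C; k++) {
                            double dkj = fabs(x[j] - v[k]) + 1e-12;
                            s += pow(dij / dkj, 2.0 / (M - 1.0));
                        }
                        u[i][j] = 1.0 / s;
                    }
                /* centroid update: v_i = sum_j u_ij^m x_j / sum_j u_ij^m */
                for (int i = 0; i < C; i++) {
                    double num = 0.0, den = 0.0;
                    for (int j = 0; j < N; j++) {
                        double w = pow(u[i][j], M);
                        num += w * x[j];
                        den += w;
                    }
                    v[i] = num / den;
                }
            }
            for (int i = 0; i < C; i++)
                printf("centroid %d = %.3f\n", i, v[i]);
            return 0;
        }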

  17. Clustering biomass-based technologies towards zero emissions - a tool how the Earth's resources can be shifted back to sustainability

    International Nuclear Information System (INIS)

    Gravitis, J.; Pauli, G.

    2001-01-01

    The Zero Emissions Research Initiative (ZERI) was founded on the fundamental concept that, in order to achieve environmentally sustainable development, industries must maximize the use of available raw materials and utilize their own wastes and by-products to the fullest extent possible so as to eliminate all emissions into the air, water and soil. Research focuses on what are considered to be four central components of zero emissions biobased industries: (I) integrated biosystems, (II) materials separation technologies, (III) biorefinery, and (IV) zero emissions systems design. In this way, industries may be organized into clusters within one single system, or in interdependent sets of industries. (authors)

  18. Evaluation of vectorization potential of Graph500 on Intel's Xeon Phi

    OpenAIRE

    Stanic, Milan; Palomar, Oscar; Ratkovic, Ivan; Duric, Milovan; Unsal, Osman; Cristal, Adrian; Valero, Mateo

    2014-01-01

    Graph500 is a data-intensive application for high-performance computing, and it is an increasingly important workload because graphs are a core part of most analytic applications. So far there is no work that examines whether Graph500 is suitable for vectorization, mostly due to a lack of vector memory instructions for irregular memory accesses. The Xeon Phi is a massively parallel processor recently released by Intel with new features such as a wide 512-bit vector unit and vector scatter/gather instru...
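
    The vector gather instructions referred to above let a single instruction load sixteen single-precision values from non-contiguous addresses, which is what irregular graph traversals need. The fragment below is only an illustrative AVX-512 gather sketch (it requires AVX-512-capable hardware and compilation with -mavx512f, and is not taken from the Graph500 code); the table and index values are arbitrary assumptions.

        #include <immintrin.h>
        #include <stdio.h>

        int main(void) {
            float table[64];
            int   idx[16] = {3, 7, 1, 60, 12, 33, 5, 0, 9, 41, 2, 8, 55, 21, 30, 18};
            float out[16];

            for (int i = 0; i < 64; i++) table[i] = (float)i;   /* toy data */

            /* load 16 indices, gather table[idx[i]] in one instruction, store result;
               the scale argument (4) is the element size in bytes */
            __m512i vindex   = _mm512_loadu_si512(idx);
            __m512  gathered = _mm512_i32gather_ps(vindex, table, 4);
            _mm512_storeu_ps(out, gathered);

            for (int i = 0; i < 16; i++) printf("%.0f ", out[i]);
            printf("\n");
            return 0;
        }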

  19. Performance Analysis of an Astrophysical Simulation Code on the Intel Xeon Phi Architecture

    OpenAIRE

    Noormofidi, Vahid; Atlas, Susan R.; Duan, Huaiyu

    2015-01-01

    We have developed the astrophysical simulation code XFLAT to study neutrino oscillations in supernovae. XFLAT is designed to utilize multiple levels of parallelism through MPI, OpenMP, and SIMD instructions (vectorization). It can run on both CPU and Xeon Phi co-processors based on the Intel Many Integrated Core Architecture (MIC). We analyze the performance of XFLAT on configurations with CPU only, Xeon Phi only and both CPU and Xeon Phi. We also investigate the impact of I/O and the multi-n...

  20. Optimizing the MapReduce Framework on Intel Xeon Phi Coprocessor

    OpenAIRE

    Lu, Mian; Zhang, Lei; Huynh, Huynh Phung; Ong, Zhongliang; Liang, Yun; He, Bingsheng; Goh, Rick Siow Mong; Huynh, Richard

    2013-01-01

    With its ease of programming, flexibility and efficiency, MapReduce has become one of the most popular frameworks for building big-data applications. MapReduce was originally designed for distributed computing, and has been extended to various architectures, e.g., multi-core CPUs, GPUs and FPGAs. In this work, we focus on optimizing the MapReduce framework on the Xeon Phi, which is the latest product released by Intel based on the Many Integrated Core Architecture. To the best of our knowledge...

  1. Mashup d'aplicacions basat en un buscador intel·ligent

    OpenAIRE

    Sancho Piqueras, Javier

    2010-01-01

    A mashup of functionalities built around an intelligent search engine, in this case designed for courses, degree programmes, master's programmes, etc. The aim is to bring together several applications under a single purpose, which here is a search engine, while also providing tools for connectivity through web services and social networks.

  2. Profiling CPU-bound workloads on Intel Haswell-EP platforms

    CERN Document Server

    Guerri, Marco; Cristovao, Cordeiro; CERN. Geneva. IT Department

    2017-01-01

    With the increasing adoption of public and private cloud resources to support the demands in terms of computing capacity of the WLCG, the HEP community has begun studying several benchmarking applications aimed at continuously assessing the performance of virtual machines procured from commercial providers. In order to characterise the behaviour of these benchmarks, in-depth profiling activities have been carried out. In this document we outline our experience in profiling one specific application, the ATLAS Kit Validation, in an attempt to explain an unexpected distribution in the performance samples obtained on systems based on Intel Haswell-EP processors.

  3. A new shared-memory programming paradigm for molecular dynamics simulations on the Intel Paragon

    International Nuclear Information System (INIS)

    D'Azevedo, E.F.; Romine, C.H.

    1994-12-01

    This report describes the use of shared memory emulation with DOLIB (Distributed Object Library) to simplify parallel programming on the Intel Paragon. A molecular dynamics application is used as an example to illustrate the use of the DOLIB shared memory library. SOTON-PAR, a parallel molecular dynamics code with explicit message-passing using a Lennard-Jones 6-12 potential, is rewritten using DOLIB primitives. The resulting code has no explicit message primitives and resembles a serial code. The new code can perform dynamic load balancing and achieves better performance than the original parallel code with explicit message-passing

  4. Starpc: a library for communication among tools on a parallel computer cluster. User's and developer's guide to Starpc

    International Nuclear Information System (INIS)

    Takemiya, Hiroshi; Yamagishi, Nobuhiro

    2000-02-01

    We report on an RPC (Remote Procedure Call)-based communication library, Starpc, for a parallel computer cluster. Starpc supports communication between Java Applets and C programs as well as between C programs. Starpc has the following three features. (1) It enables communication between Java Applets and C programs on an arbitrary computer without violating security, even though Java Applets are normally allowed to communicate only with programs on a specific computer (the Web server) because of security restrictions. (2) Diverse network communication protocols are available in Starpc, because it uses the Nexus communication library developed at Argonne National Laboratory. (3) It works on many kinds of computers, including eight parallel computers and four workstation servers. In this report, the usage of Starpc and the development of applications using Starpc are described. (author)

  5. Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme

    Science.gov (United States)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The land-surface model (LSM) is one physics process in the weather research and forecast (WRF) model. The LSM includes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core design that offers efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core, respectively, with an Intel Xeon E5-2670.

  6. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    Cumulus parameterization schemes are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. The schemes all provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfills these purposes in the weather research and forecast (WRF) model. The National Centers for Environmental Prediction (NCEP) has tried to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization capabilities, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core with an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.4x and 17.0x, respectively.

  7. Optimizing zonal advection of the Advanced Research WRF (ARW) dynamics for Intel MIC

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecast (WRF) model is the most widely used community weather forecast and research model in the world. There are two distinct varieties of WRF. The Advanced Research WRF (ARW) is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we use the Intel Many Integrated Core (MIC) architecture to substantially increase the performance of a zonal advection subroutine. It is one of the most time-consuming routines in the ARW dynamics core. Advection advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture. Furthermore, lessons learned from the code optimization process are discussed. The results show that the optimizations improved performance of the original code on a Xeon Phi 5110P by a factor of 2.4x.

  8. Implementation of a 3-D nonlinear MHD [magnetohydrodynamics] calculation on the Intel hypercube

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Drake, J.B.; Hicks, H.R.; Lawkins, W.F.

    1987-01-01

    The optimization of numerical schemes and increasing computer capabilities in the last ten years have improved the efficiency of 3-D nonlinear resistive MHD calculations by about two to three orders of magnitude. However, we are still very limited in performing these types of calculations. Hypercubes have a large number of processors with only local memory and bidirectional links among neighbors. The Intel Hypercube at Oak Ridge has 64 processors with 0.5 megabytes of memory per processor. The multiplicity of processors opens new possibilities for the treatment of such computations. The constraint on time and resources favored the approach of using the existing RSF code which solves as an initial value problem the reduced set of MHD equations for a periodic cylindrical geometry. This code includes minimal physics and geometry, but contains the basic three dimensionality and nonlinear structure of the equations. The code solves the reduced set of MHD equations by Fourier expansion in two angular coordinates and finite differences in the radial one. Due to the continuing interest in these calculations and the likelihood that future supercomputers will take greater advantage of parallelism, the present study was initiated by the ORNL Exploratory Studies Committee and funded entirely by Laboratory Discretionary Funds. The objectives of the study were: to ascertain the suitability of MHD calculation for parallel computation, to design and implement a parallel algorithm to perform the computations, and to evaluate the hypercube, and in particular, ORNL's Intel iPSC, for use in MHD computations

  9. 3-D electromagnetic plasma particle simulations on the Intel Delta parallel computer

    International Nuclear Information System (INIS)

    Wang, J.; Liewer, P.C.

    1994-01-01

    A three-dimensional electromagnetic PIC code has been developed on the 512-node Intel Touchstone Delta MIMD parallel computer. This code is based on the General Concurrent PIC algorithm, which uses a domain decomposition to divide the computation among the processors. The 3D simulation domain can be partitioned into 1-, 2-, or 3-dimensional sub-domains. Particles must be exchanged between processors as they move among the subdomains. The Intel Delta allows one to use this code for very-large-scale simulations (i.e. over 10^8 particles and 10^6 grid cells). The parallel efficiency of this code is measured, and the overall code performance on the Delta is compared with that on Cray supercomputers. It is shown that the code runs with a high parallel efficiency of ≥ 95% for large problems. The particle push time achieved is 115 ns/particle/time step for 162 million particles on 512 nodes. Compared with the performance on a single-processor Cray C90, this represents a speedup factor of 58. The code uses a finite-difference leap-frog method for the field solve, which is significantly more efficient than fast Fourier transforms on parallel computers. The performance of this code on the 128-node Cray T3D will also be discussed
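
    The leap-frog scheme named above staggers velocity and position updates by half a time step, which keeps the explicit particle push cheap and local. A minimal single-particle, one-dimensional sketch in C is given below; it is not the General Concurrent PIC code, and the field value, charge-to-mass ratio, time step and box length are arbitrary assumptions.

        #include <stdio.h>

        int main(void) {
            double x = 0.0, v = 0.0;      /* position, velocity (v held at t - dt/2) */
            double qm = 1.0;              /* charge-to-mass ratio (assumed) */
            double E = 0.5;               /* uniform electric field (assumed) */
            double dt = 0.1, L = 10.0;    /* time step, periodic box length */

            for (int step = 0; step < 100; step++) {
                v += qm * E * dt;         /* kick: v(t+dt/2) = v(t-dt/2) + (q/m) E dt */
                x += v * dt;              /* drift: x(t+dt) = x(t) + v(t+dt/2) dt */
                while (x >= L) x -= L;    /* periodic wrap, as in a periodic plasma box */
                while (x < 0.0) x += L;
            }
            printf("x = %.3f, v = %.3f\n", x, v);
            return 0;
        }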

  10. Clustering of near clusters versus cluster compactness

    International Nuclear Information System (INIS)

    Yu Gao; Yipeng Jing

    1989-01-01

    The clustering properties of near Zwicky clusters are studied by using the two-point angular correlation function. The angular correlation functions for compact and medium compact clusters, for open clusters, and for all near Zwicky clusters are estimated. The results show much stronger clustering for compact and medium compact clusters than for open clusters, and that open clusters have nearly the same clustering strength as galaxies. A detailed study of the compactness-dependence of correlation function strength is worth investigating. (author)

  11. The impact of the carer support needs assessment tool (CSNAT) in community palliative care using a stepped wedge cluster trial.

    Directory of Open Access Journals (Sweden)

    Samar M Aoun

    Full Text Available Family caregiving towards the end-of-life entails considerable emotional, social, financial and physical costs for caregivers. Evidence suggests that good support can improve caregiver psychological outcomes. The primary aim of this study was to investigate the impact of using the carer support needs assessment tool (CSNAT, as an intervention to identify and address support needs in end of life home care, on family caregiver outcomes. A stepped wedge design was used to trial the CSNAT intervention in three bases of Silver Chain Hospice Care in Western Australia, 2012-14. The intervention consisted of at least two visits from nurses (2-3 weeks apart to identify, review and address caregivers' needs. The outcome measures for the intervention and control groups were caregiver strain and distress as measured by the Family Appraisal of Caregiving Questionnaire (FACQ-PC, caregiver mental and physical health as measured by SF-12v2, and caregiver workload as measured by extent of caregiver assistance with activities of daily living, at baseline and follow up. Total recruitment was 620. There was 45% attrition for each group between baseline and follow-up mainly due to patient deaths resulting in 322 caregivers completing the study (233 in the intervention group and 89 in the control group. At follow-up, the intervention group showed significant reduction in caregiver strain relative to controls, p=0.018, d=0.348 (95% CI 0.25 to 0.41. Priority support needs identified by caregivers included knowing what to expect in the future, having time for yourself in the day and dealing with your feelings and worries. Despite the challenges at the clinician, organisational and trial levels, the CSNAT intervention led to an improvement in caregiver strain. Effective implementation of an evidence-informed and caregiver-led tool represents a necessary step towards helping palliative care providers better assess and address caregiver needs, ensuring adequate family

  12. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    Science.gov (United States)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources. This paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, the importance of beginning with POSIX-compliant code, and the need for a smooth transition while maintaining operations. It will focus on the development approach, explaining the software lifecycle. Other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. This paper will also address the testing approach, covering all levels of testing, including development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with Unix servers, will be included.

  13. Mobile-health tool to improve maternal and neonatal health care in Bangladesh: a cluster randomized controlled trial.

    Science.gov (United States)

    Tobe, Ruoyan Gai; Haque, Syed Emdadul; Ikegami, Kiyoko; Mori, Rintaro

    2018-04-16

    In Bangladesh, the targets for the reduction of maternal mortality and the utilization of obstetric services provided by skilled health personnel under Millennium Development Goal 5 remain unmet, and progress in reducing neonatal mortality lags behind the reductions in infant and under-five mortality, so these remain essential issues for achieving the maternal and neonatal health targets of the health-related Sustainable Development Goals (SDGs). As access to appropriate perinatal care is crucial to reduce maternal and neonatal deaths, several mobile platform-based health programs sponsored by donor countries and non-governmental organizations have recently aimed to reduce maternal and child mortality. Good health care is also necessary for development. We therefore designed this implementation research to improve maternal and child health care in support of the SDGs. This cluster randomized trial will be conducted in Lohagora of Narail District and Dhamrai of Dhaka District. Participants are pregnant women in the respective areas. The total sample size is 3,000: in each area, 500 pregnant women will receive the Mother and Child Handbook (MCH) together with mobile-phone messages on health care during pregnancy and antenatal care for about one year. Another 500 in each area will receive health education using only the MCH book. The remaining 1,000 participants (500 in each area) will serve as controls. We randomly assigned intervention and control areas based on the smallest administrative unit (unions) in Bangladesh. Data collection and health education will be provided by trained research officers from February 2017 to August 2018. Each health education session is conducted in the participant's home. The study proposal was reviewed and approved by NCCD, Japan, and the Bangladesh Medical Research Council (BMRC), Bangladesh. The data will be analyzed using STATA and SPSS software. For the improvement of maternal and neonatal care, this community

  14. High-throughput sockets over RDMA for the Intel Xeon Phi coprocessor

    CERN Document Server

    Santogidis, Aram

    2017-01-01

    In this paper we describe the design, implementation and performance of Trans4SCIF, a user-level socket-like transport library for the Intel Xeon Phi coprocessor. Trans4SCIF library is primarily intended for high-throughput applications. It uses RDMA transfers over the native SCIF support, in a way that is transparent for the application, which has the illusion of using conventional stream sockets. We also discuss the integration of Trans4SCIF with the ZeroMQ messaging library, used extensively by several applications running at CERN. We show that this can lead to a substantial, up to 3x, increase of application throughput compared to the default TCP/IP transport option.
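
    For readers unfamiliar with the ZeroMQ messaging library mentioned above, the sketch below shows the conventional stream-like usage that a transport such as Trans4SCIF sits beneath: a minimal PUSH sender over TCP using the plain C API (link with -lzmq). The endpoint address and message are assumptions for illustration; this is not Trans4SCIF code.

        #include <zmq.h>
        #include <string.h>
        #include <stdio.h>

        int main(void) {
            void *ctx  = zmq_ctx_new();
            void *push = zmq_socket(ctx, ZMQ_PUSH);

            /* a matching ZMQ_PULL socket would bind to this endpoint (assumed address) */
            zmq_connect(push, "tcp://127.0.0.1:5555");

            const char *msg = "payload";
            if (zmq_send(push, msg, strlen(msg), 0) == -1)
                perror("zmq_send");

            zmq_close(push);
            zmq_ctx_term(ctx);
            return 0;
        }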

  15. GW Calculations of Materials on the Intel Xeon-Phi Architecture

    Science.gov (United States)

    Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Biller, Ariel; Chelikowsky, James R.; Louie, Steven G.

    Intel Xeon-Phi processors are expected to power a large number of High-Performance Computing (HPC) systems around the United States and the world in the near future. We evaluate the ability of GW and prerequisite Density Functional Theory (DFT) calculations for materials to utilize the Xeon-Phi architecture. We describe the optimization process and the performance improvements achieved. We find that the GW method, like other higher-level many-body methods beyond standard local/semilocal approximations to Kohn-Sham DFT, is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-waves, band-pairs and frequencies. Support provided by the SCIDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-AC02-05CH11231 (LBNL).

  16. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  17. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  18. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  19. Communication overhead on the Intel Paragon, IBM SP2 and Meiko CS-2

    Science.gov (United States)

    Bokhari, Shahid H.

    1995-01-01

    Interprocessor communication overhead is a crucial measure of the power of parallel computing systems; its impact can severely limit the performance of parallel programs. This report presents measurements of communication overhead on three contemporary commercial multicomputer systems: the Intel Paragon, the IBM SP2 and the Meiko CS-2. In each case the time to communicate between processors is presented as a function of message length. The time for global synchronization and memory access is discussed. The performance of these machines in emulating hypercubes and executing random pairwise exchanges is also investigated. It is shown that the interprocessor communication time depends heavily on the specific communication pattern required. These observations contradict the commonly held belief that communication overhead on contemporary machines is independent of the placement of tasks on processors. The information presented in this report permits the evaluation of the efficiency of parallel algorithm implementations against standard baselines.
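
    Point-to-point overhead of the kind measured above is commonly characterized with a ping-pong test: rank 0 sends a message of a given length to rank 1, which echoes it back, and half the round-trip time is taken as the one-way cost. A minimal MPI sketch follows (run with two ranks, e.g. mpirun -np 2); the message length and repetition count are arbitrary assumptions, not the values used in the report.

        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv) {
            const int len = 1024, reps = 1000;   /* message length (bytes), repetitions */
            char *buf = malloc(len);
            int rank;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double t0 = MPI_Wtime();
            for (int i = 0; i < reps; i++) {
                if (rank == 0) {
                    MPI_Send(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    MPI_Send(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            double t1 = MPI_Wtime();

            if (rank == 0)
                printf("one-way time: %.3f us for %d bytes\n",
                       (t1 - t0) / (2.0 * reps) * 1e6, len);

            MPI_Finalize();
            free(buf);
            return 0;
        }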

  20. A performance study of sparse Cholesky factorization on INTEL iPSC/860

    Science.gov (United States)

    Zubair, M.; Ghose, M.

    1992-01-01

    The problem of Cholesky factorization of a sparse matrix has been very well investigated on sequential machines. A number of efficient codes exist for factorizing large unstructured sparse matrices. However, there is a lack of such efficient codes on parallel machines in general, and distributed machines in particular. Some of the issues that are critical to the implementation of sparse Cholesky factorization on a distributed memory parallel machine are ordering, partitioning and mapping, load balancing, and ordering of various tasks within a processor. Here, we focus on the effect of various partitioning schemes on the performance of sparse Cholesky factorization on the Intel iPSC/860. Also, a new partitioning heuristic for structured as well as unstructured sparse matrices is proposed, and its performance is compared with other schemes.

  1. Plasma turbulence calculations on the Intel iPSC/860 hypercube

    International Nuclear Information System (INIS)

    Lynch, V.E.; Ruiter, J.R.

    1990-01-01

    One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A serial algorithm used for plasma turbulence calculations was modified to allocate a radial region to each node. In this way, convolutions at a fixed radius are performed in parallel, and communication is limited to boundary values for each radial region. For a semi-implicit numerical scheme (tridiagonal matrix solver), there is a factor of 3 improvement in efficiency with the Intel iPSC/860 machine using 64 processors over a single-processor Cray-2. For block-tridiagonal matrix cases (fully implicit code), a second parallelization takes place. The Fourier components are distributed among nodes. In each node, the block-tridiagonal matrix is inverted for each of the allocated Fourier components. The algorithm for this second case has not yet been optimized. 10 refs., 4 figs
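
    The semi-implicit scheme above reduces, per Fourier component and radius, to a tridiagonal solve, for which the Thomas algorithm is the standard serial kernel (each node can apply it independently to its allocated components). A generic C sketch follows; the coefficient arrays here are placeholders, not the plasma turbulence equations.

        #include <stdio.h>

        #define N 5

        /* Solve a tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal,
           d = right-hand side (overwritten with the solution). Thomas algorithm, O(N). */
        static void thomas(double *a, double *b, double *c, double *d, int n) {
            for (int i = 1; i < n; i++) {           /* forward elimination */
                double w = a[i] / b[i - 1];
                b[i] -= w * c[i - 1];
                d[i] -= w * d[i - 1];
            }
            d[n - 1] /= b[n - 1];                   /* back substitution */
            for (int i = n - 2; i >= 0; i--)
                d[i] = (d[i] - c[i] * d[i + 1]) / b[i];
        }

        int main(void) {
            /* placeholder coefficients: -1, 2, -1 stencil with unit right-hand side */
            double a[N] = {0, -1, -1, -1, -1};
            double b[N] = {2, 2, 2, 2, 2};
            double c[N] = {-1, -1, -1, -1, 0};
            double d[N] = {1, 1, 1, 1, 1};

            thomas(a, b, c, d, N);
            for (int i = 0; i < N; i++) printf("x[%d] = %.4f\n", i, d[i]);
            return 0;
        }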

  2. Evaluation of the Intel iWarp parallel processor for space flight applications

    Science.gov (United States)

    Hine, Butler P., III; Fong, Terrence W.

    1993-01-01

    The potential of a DARPA-sponsored advanced processor, the Intel iWarp, for use in future SSF Data Management Systems (DMS) upgrades is evaluated through integration into the Ames DMS testbed and applications testing. The iWarp is a distributed, parallel computing system well suited for high performance computing applications such as matrix operations and image processing. The system architecture is modular, supports systolic and message-based computation, and is capable of providing massive computational power in a low-cost, low-power package. As a consequence, the iWarp offers significant potential for advanced space-based computing. This research seeks to determine the iWarp's suitability as a processing device for space missions. In particular, the project focuses on evaluating the ease of integrating the iWarp into the SSF DMS baseline architecture and the iWarp's ability to support computationally stressing applications representative of SSF tasks.

  3. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, while subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation), which have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach, which unifies turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme is the PBL scheme we optimized for the Intel Many Integrated Core (MIC) architecture, which allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature: they included vectorization of the code to utilize the vector units inside each CPU, and memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
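
    Two of the generic optimizations cited above - vectorizing the inner loop and replacing a short-lived intermediate array with a scalar - can be illustrated in isolation. The fragment below is a schematic example in C with OpenMP SIMD (compile with -fopenmp or the Intel equivalent); the arrays and arithmetic stand in for the TEMF scheme's column loops and are not the WRF code.

        #include <stdio.h>

        #define NX 1024

        int main(void) {
            static double flux[NX], temp_in[NX], temp_out[NX];
            for (int i = 0; i < NX; i++) { flux[i] = 0.01 * i; temp_in[i] = 280.0; }

            /* In the unoptimized pattern, an intermediate array "tend" would be written
               and re-read, adding memory traffic. Here the intermediate is a scalar kept
               in a register, and the loop is marked for vectorization. */
            #pragma omp simd
            for (int i = 0; i < NX; i++) {
                double tend = -flux[i] * 0.5;        /* scalarized intermediate */
                temp_out[i] = temp_in[i] + tend;
            }

            printf("temp_out[10] = %.3f\n", temp_out[10]);
            return 0;
        }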

  4. Evaluation of an early detection tool for social-emotional and behavioral problems in toddlers: The Brief Infant Toddler Social and Emotional Assessment - A cluster randomized trial

    Directory of Open Access Journals (Sweden)

    Carter Alice S

    2011-06-01

    Full Text Available Abstract Background: The prevalence of social-emotional and behavioral problems is estimated to be 8 to 9% among preschool children. Effective early detection tools are needed to promote the provision of adequate care at an early stage. The Brief Infant-Toddler Social and Emotional Assessment (BITSEA) was developed for this purpose. This study evaluates the effectiveness of the BITSEA in enhancing the social-emotional and behavioral health of preschool children. Methods and design: A cluster randomized controlled trial is set up in youth health care centers in the larger Rotterdam area in the Netherlands to evaluate the BITSEA. The 31 youth health care centers are randomly allocated to either the control group or the intervention group. The intervention group uses the scores on the BITSEA and cut-off points to evaluate a child's social-emotional and behavioral health and to decide whether or not the child should be referred. The control group provides care as usual, which involves administering a questionnaire that structures the conversation between child health professionals and parents. At a one-year follow-up measurement, the social-emotional and behavioral health of all children included in the study population will be evaluated. Discussion: It is hypothesized that better results, in terms of social-emotional and behavioral health, will be found in the intervention group compared to the control group, due to more adequate early detection, referral and more appropriate and timely care. Trial registration: Current Controlled Trials NTR2035

  5. Prevention of Hospital-Acquired Adverse Drug Reactions in Older People Using Screening Tool of Older Persons' Prescriptions and Screening Tool to Alert to Right Treatment Criteria: A Cluster Randomized Controlled Trial.

    Science.gov (United States)

    O'Connor, Marie N; O'Sullivan, David; Gallagher, Paul F; Eustace, Joseph; Byrne, Stephen; O'Mahony, Denis

    2016-08-01

    To determine whether use of the Screening Tool of Older Persons' Prescriptions (STOPP) and Screening Tool to Alert to Right Treatment (START) criteria reduces incident hospital-acquired adverse drug reactions (ADRs), 28-day medication costs, and median length of hospital stay in older adults admitted with acute illness. Single-blind cluster randomized controlled trial (RCT) of unselected older adults hospitalized over a 13-month period. Tertiary referral hospital in southern Ireland. Consecutively admitted individuals aged 65 and older (N = 732). Single time-point presentation to attending physicians of potentially inappropriate medications according to the STOPP/START criteria. The primary outcome was the proportion of participants experiencing one or more ADRs during the index hospitalization. Secondary outcomes were median length of stay (LOS) and 28-day total medication cost. One or more ADRs occurred in 78 of the 372 control participants (21.0%; median age 78, interquartile range (IQR) 72-84) and in 42 of the 360 intervention participants (11.7%; median age 80, IQR 73-85) (absolute risk reduction = 9.3%, number needed to treat = 11). The median LOS in the hospital was 8 days (IQR 4-14 days) in both groups. At discharge, median medication cost was significantly lower in the intervention group (€73.16, IQR €38.68-121.72) than in the control group (€90.62, IQR €49.38-162.53) (Wilcoxon rank test Z statistic = -3.274). The intervention thus reduced hospital-acquired ADRs and medication costs in acutely ill older adults but did not affect median LOS.

  6. Design of a cluster-randomized trial of electronic health record-based tools to address overweight and obesity in primary care.

    Science.gov (United States)

    Baer, Heather J; Wee, Christina C; DeVito, Katerina; Orav, E John; Frolkis, Joseph P; Williams, Deborah H; Wright, Adam; Bates, David W

    2015-08-01

    Primary care providers often fail to identify patients who are overweight or obese or discuss weight management with them. Electronic health record-based tools may help providers with the assessment and management of overweight and obesity. We describe the design of a trial to examine the effectiveness of electronic health record-based tools for the assessment and management of overweight and obesity among adult primary care patients, as well as the challenges we encountered. We developed several new features within the electronic health record used by primary care practices affiliated with Brigham and Women's Hospital in Boston, MA. These features included (1) reminders to measure height and weight, (2) an alert asking providers to add overweight or obesity to the problem list, (3) reminders with tailored management recommendations, and (4) a Weight Management screen. We then conducted a pragmatic, cluster-randomized controlled trial in 12 primary care practices. We randomized 23 clinical teams ("clinics") within the practices to the intervention group (n = 11) or the control group (n = 12). The new features were activated only for clinics in the intervention group. The intervention was implemented in two phases: the height and weight reminders went live on 15 December 2011 (Phase 1), and all of the other features went live on 11 June 2012 (Phase 2). Study enrollment went from December 2011 through December 2012, and follow-up ended in December 2013. The primary outcomes were 6-month and 12-month weight change among adult patients with body mass index ≥25 who had a visit at one of the primary care clinics during Phase 2. Secondary outcome measures included the proportion of patients with a recorded body mass index in the electronic health record, the proportion of patients with body mass index ≥25 who had a diagnosis of overweight or obesity on the electronic health record problem list, and the proportion of patients with body mass index ≥25 who had

  7. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    Science.gov (United States)

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
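
    The core loop of a photon-migration Monte Carlo code of the kind accelerated above samples a free path from an exponential distribution, moves the photon, and attenuates its weight; it is this loop, repeated over many independent photons, that maps so well onto many-core hardware. The C sketch below is a deliberately stripped-down version (homogeneous medium, isotropic scattering, coefficients chosen arbitrarily) and is not the authors' code.

        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            const double PI = 3.14159265358979323846;
            const double mu_a = 0.1, mu_s = 10.0;      /* absorption/scattering, 1/mm (assumed) */
            const double mu_t = mu_a + mu_s;
            const int nphoton = 10000;
            double absorbed = 0.0;

            srand(1234);
            for (int p = 0; p < nphoton; p++) {
                double x = 0, y = 0, z = 0, w = 1.0;   /* position and photon weight */
                double ux = 0, uy = 0, uz = 1.0;       /* direction cosines */
                while (w > 1e-4) {
                    double s = -log((rand() + 1.0) / (RAND_MAX + 2.0)) / mu_t;  /* free path */
                    x += s * ux; y += s * uy; z += s * uz;
                    absorbed += w * (mu_a / mu_t);     /* deposit the absorbed fraction */
                    w *= mu_s / mu_t;                  /* surviving weight */
                    /* isotropic scattering: draw a new direction uniformly on the sphere */
                    double cost = 2.0 * rand() / (double)RAND_MAX - 1.0;
                    double sint = sqrt(1.0 - cost * cost);
                    double phi = 2.0 * PI * rand() / (double)RAND_MAX;
                    ux = sint * cos(phi); uy = sint * sin(phi); uz = cost;
                }
            }
            printf("mean absorbed weight per photon: %.4f\n", absorbed / nphoton);
            return 0;
        }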

  8. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC, HPCG and four full-scale scientific and engineering applications. We also present a model to predict the performance of HPCG and Cart3D within 5%, and Overflow within 10% accuracy.

  9. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    OpenAIRE

    H. Wang; H. Wang; H. Wang; H. Wang; H. Chen; H. Chen; Q. Wu; Q. Wu; J. Lin; X. Chen; X. Xie; R. Wang; R. Wang; X. Tang; Z. Wang

    2017-01-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (code...

  10. Student Intern Ben Freed Competes as Finalist in Intel STS Competition, Three Other Interns Named Semifinalists | Poster

    Science.gov (United States)

    By Ashley DeVine, Staff Writer Werner H. Kirstin (WHK) student intern Ben Freed was one of 40 finalists to compete in the Intel Science Talent Search (STS) in Washington, DC, in March. “It was seven intense days of interacting with amazing judges and incredibly smart and interesting students. We met President Obama, and then the MIT astronomy lab named minor planets after each

  11. Evaluating the transport layer of the ALFA framework for the Intel® Xeon Phi™ Coprocessor

    OpenAIRE

    Santogidis, Aram; Hirstius, Andreas; Lalis, Spyros

    2015-01-01

    The ALFA framework supports the software development of major High Energy Physics experiments. As part of our research effort to optimize the transport layer of ALFA, we focus on profiling its data transfer performance for inter-node communication on the Intel Xeon Phi Coprocessor. In this article we present the collected performance measurements with the related analysis of the results. The optimization opportunities that are discovered, help us to formulate the future plans of enabling high...

  12. Computationally efficient implementation of sparse-tap FIR adaptive filters with tap-position control on Intel IA-32 processors

    OpenAIRE

    Hirano, Akihiro; Nakayama, Kenji

    2008-01-01

    This paper presents a computationally efficient implementation of sparse-tap FIR adaptive filters with tap-position control on Intel IA-32 processors with single-instruction multiple-data (SIMD) capability. In order to overcome random-order memory access, which prevents vectorization, block-based processing and a re-ordering buffer are introduced. Dynamic register allocation and the use of memory-to-register operations help to maximize the loop-unrolling level. Up to 66 percent speedup ...
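
    As background for the sparse-tap filter above, the dense baseline it improves on is the standard LMS adaptive FIR filter: convolve the input with the current taps, form the error against the desired signal, and nudge every tap along the input. The C sketch below shows only that dense baseline (the tap-position control and SIMD re-ordering buffer of the paper are not reproduced); the filter length, step size and toy signals are assumptions.

        #include <stdio.h>
        #include <math.h>

        #define TAPS 8
        #define NSAMP 2000

        int main(void) {
            double w[TAPS] = {0};                 /* adaptive filter taps */
            double xbuf[TAPS] = {0};              /* delay line of recent inputs */
            const double mu = 0.05;               /* LMS step size (assumed) */

            for (int n = 0; n < NSAMP; n++) {
                double x = sin(0.05 * n);                       /* toy input */
                double d = 0.7 * sin(0.05 * (n - 2));           /* toy desired signal */

                for (int k = TAPS - 1; k > 0; k--) xbuf[k] = xbuf[k - 1];
                xbuf[0] = x;

                double y = 0.0;                                 /* FIR output */
                for (int k = 0; k < TAPS; k++) y += w[k] * xbuf[k];

                double e = d - y;                               /* error */
                for (int k = 0; k < TAPS; k++) w[k] += mu * e * xbuf[k];  /* LMS update */

                if (n == NSAMP - 1) printf("final error: %.5f\n", e);
            }
            return 0;
        }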

  13. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™.

    Science.gov (United States)

    Gomes, Jeremias M; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H

    2015-10-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP's irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations.

  14. Optimizing meridional advection of the Advanced Research WRF (ARW) dynamics for Intel Xeon Phi coprocessor

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    The most widely used community weather forecast and research model in the world is the Weather Research and Forecast (WRF) model. Two distinct varieties of WRF exist. The one we are interested in is the Advanced Research WRF (ARW), an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we optimize a meridional (north-south direction) advection subroutine for the Intel Xeon Phi coprocessor. Advection is one of the most time-consuming routines in the ARW dynamics core. It advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture. Furthermore, lessons learned from the code optimization process are discussed. The results show that the optimizations improved performance of the original code on a Xeon Phi 7120P by a factor of 1.2x.

  15. Software and DVFS Tuning for Performance and Energy-Efficiency on Intel KNL Processors

    Directory of Open Access Journals (Sweden)

    Enrico Calore

    2018-06-01

    Full Text Available Energy consumption of processors and memories is quickly becoming a limiting factor in the deployment of large computing systems. For this reason, it is important to understand the energy performance of these processors and to study strategies allowing their use in the most efficient way. In this work, we focus on the computing and energy performance of the Knights Landing Xeon Phi, the latest Intel many-core architecture processor for HPC applications. We consider the 64-core Xeon Phi 7230 and profile its performance and energy efficiency using both its on-chip MCDRAM and the off-chip DDR4 memory as the main storage for application data. As a benchmark application, we use a lattice Boltzmann code heavily optimized for this architecture and implemented using several different arrangements of the application data in memory (data-layouts, in short). We also assess the dependence of energy consumption on data-layouts, memory configurations (DDR4 or MCDRAM) and the number of threads per core. We finally consider possible trade-offs between computing performance and energy efficiency, tuning the clock frequency of the processor using the Dynamic Voltage and Frequency Scaling (DVFS) technique.
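
    On Linux, DVFS tuning of the kind described above is typically exercised through the cpufreq sysfs interface (or through vendor tools built on it). The C sketch below reads the current frequency of CPU 0 and, with sufficient privileges, caps it to a target value; the target frequency is an arbitrary assumption, the file paths follow the standard cpufreq layout, and the actual mechanism used in the paper may differ.

        #include <stdio.h>

        int main(void) {
            const char *cur_path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq";
            const char *max_path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq";
            long khz = 0;

            FILE *f = fopen(cur_path, "r");
            if (f && fscanf(f, "%ld", &khz) == 1)
                printf("cpu0 current frequency: %ld kHz\n", khz);
            if (f) fclose(f);

            /* Cap the frequency (needs root; 1,300,000 kHz is an arbitrary example value) */
            f = fopen(max_path, "w");
            if (!f) { perror("open scaling_max_freq"); return 1; }
            fprintf(f, "%d\n", 1300000);
            fclose(f);
            return 0;
        }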

  16. Modeling high-temperature superconductors and metallic alloys on the Intel IPSC/860

    Science.gov (United States)

    Geist, G. A.; Peyton, B. W.; Shelton, W. A.; Stocks, G. M.

    Oak Ridge National Laboratory has embarked on several computational Grand Challenges, which require the close cooperation of physicists, mathematicians, and computer scientists. One of these projects is the determination of the material properties of alloys from first principles and, in particular, the electronic structure of high-temperature superconductors. While the present focus of the project is on superconductivity, the approach is general enough to permit study of other properties of metallic alloys such as strength and magnetic properties. This paper describes the progress to date on this project. We include a description of a self-consistent KKR-CPA method, parallelization of the model, and the incorporation of a dynamic load balancing scheme into the algorithm. We also describe the development and performance of a consolidated KKR-CPA code capable of running on CRAYs, workstations, and several parallel computers without source code modification. Performance of this code on the Intel iPSC/860 is also compared to a CRAY 2, CRAY YMP, and several workstations. Finally, some density of state calculations of two perovskite superconductors are given.

  17. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    Science.gov (United States)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup occurs because the Fast Fourier Transform (FFT) routine dominates the computational cost and that routine exhibits less than ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the calculation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  18. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™

    Science.gov (United States)

    Gomes, Jeremias M.; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H.

    2016-01-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP’s irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations. PMID:27298591

  19. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    Science.gov (United States)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser-plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts in deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.
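
    One way the on-package MCDRAM mentioned above is used from application code (when the board is configured in flat mode) is through the memkind library's hbwmalloc interface, which places selected arrays in high-bandwidth memory while everything else stays in DDR. The sketch below is a generic illustration compiled with -lmemkind, not OSIRIS code; the array size is an arbitrary assumption, and on nodes without MCDRAM hbw_malloc falls back or fails depending on the configured policy.

        #include <hbwmalloc.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            const size_t n = 1 << 20;              /* 1M doubles (assumed size) */

            if (hbw_check_available() != 0)
                printf("no high-bandwidth memory detected; hbw_malloc may fall back\n");

            /* place the bandwidth-critical array in MCDRAM */
            double *field = hbw_malloc(n * sizeof(double));
            if (!field) { fprintf(stderr, "hbw_malloc failed\n"); return 1; }

            for (size_t i = 0; i < n; i++) field[i] = 1.0;
            double sum = 0.0;
            for (size_t i = 0; i < n; i++) sum += field[i];
            printf("sum = %.1f\n", sum);

            hbw_free(field);
            return 0;
        }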

  20. Plasma Science and Applications at the Intel Science Fair: A Retrospective

    Science.gov (United States)

    Berry, Lee

    2009-11-01

    For the past five years, the Coalition for Plasma Science (CPS) has presented an award for a plasma project at the Intel International Science and Engineering Fair (ISEF). Eligible projects have ranged from grape-based plasma production in a microwave oven to observation of the effects of viscosity in a fluid model of quark-gluon plasma. Most projects have been aimed at applications, including fusion, thrusters, lighting, materials processing, and GPS improvements. However diagnostics (spectroscopy), technology (magnets), and theory (quark-gluon plasmas) have also been represented. All of the CPS award-winning projects so far have been based on experiments, with two awards going to women students and three to men. Since the award was initiated, both the number and quality of plasma projects has increased. The CPS expects this trend to continue, and looks forward to continuing its work with students who are excited about the possibilities of plasma. You too can share this excitement by judging at the 2010 fair in San Jose on May 11-12.

  1. Implementation of 5-layer thermal diffusion scheme in weather research and forecasting model with Intel Many Integrated Cores

    Science.gov (United States)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2014-10-01

    For weather forecasting and research, the Weather Research and Forecasting (WRF) model has been developed, consisting of several components such as dynamic solvers and physical simulation modules. WRF includes several Land-Surface Models (LSMs). The LSMs use atmospheric information, the radiative and precipitation forcing from the surface layer scheme, the radiation scheme, and the microphysics/convective scheme, together with the land's state variables and land-surface properties, to provide heat and moisture fluxes over land and sea-ice points. The WRF 5-layer thermal diffusion simulation is an LSM based on the MM5 5-layer soil temperature model with an energy budget that includes radiation, sensible, and latent heat flux. The WRF LSMs are very suitable for massively parallel computation as there are no interactions among horizontal grid points. The efficient parallelization and vectorization capabilities of the Intel Many Integrated Core (MIC) architecture allow us to optimize this WRF 5-layer thermal diffusion scheme. In this work, we present the computing performance of this scheme on the Intel MIC architecture. Our results show that the MIC-based optimization improved the performance of the first version of multi-threaded code on Xeon Phi 5110P by a factor of 2.1x. Accordingly, the same CPU-based optimizations improved the performance on Intel Xeon E5-2603 by a factor of 1.6x as compared to the first version of multi-threaded code.
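
    The parallelization pattern implied by "no interactions among horizontal grid points" can be sketched as follows: threads are distributed over independent soil columns and the per-column work is vectorized. The arrays and the explicit diffusion update below are illustrative placeholders, not the actual WRF 5-layer scheme.

        // Sketch: independent soil columns across OpenMP threads, SIMD within a column.
        #include <vector>
        #include <cstddef>

        void soil_step(std::vector<float>& t_new, const std::vector<float>& t_old,
                       const std::vector<float>& forcing, int ncol, int nlayer, float coef)
        {
            #pragma omp parallel for schedule(static)
            for (int c = 0; c < ncol; ++c) {
                const float* told = &t_old[static_cast<std::size_t>(c) * nlayer];
                float* tnew = &t_new[static_cast<std::size_t>(c) * nlayer];
                tnew[0] = told[0] + forcing[c];               // surface energy input
                #pragma omp simd
                for (int k = 1; k < nlayer - 1; ++k)          // explicit diffusion in the column
                    tnew[k] = told[k] + coef * (told[k - 1] - 2.0f * told[k] + told[k + 1]);
                tnew[nlayer - 1] = told[nlayer - 1];          // fixed deep boundary
            }
        }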

  2. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes.

    Science.gov (United States)

    Einkemmer, Lukas

    2017-01-01

    Optimizing the geometry of airfoils for a specific application is an important engineering problem. In this context, genetic algorithms have enjoyed some success as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual socket workstation and a speedup between 3.4 and 3.8 for adding an NVIDIA K80 to a dual socket workstation.
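
    The inner panel-method solve reduces to assembling and factorizing a dense influence-coefficient system for each candidate geometry. The sketch below shows one plausible CPU/Phi-side formulation of that step using an MKL-provided LAPACKE call; the assembly routine and the matrix entries are placeholders, not the paper's implementation (on the GPU side the same solve would map to MAGMA).

        // Sketch of a panel-method inner solve: A * strengths = b via a dense LU solve.
        #include <vector>
        #include <cstdlib>
        #include <lapacke.h>

        // Placeholder geometry: diagonally dominant influence matrix, unit right-hand side.
        void assemble_influence_matrix(std::vector<double>& A, std::vector<double>& b, int n)
        {
            for (int i = 0; i < n; ++i) {
                b[i] = 1.0;
                for (int j = 0; j < n; ++j)
                    A[static_cast<std::size_t>(i) * n + j] =
                        (i == j) ? 2.0 : 1.0 / (1.0 + std::abs(i - j));
            }
        }

        std::vector<double> solve_panel_strengths(int n)
        {
            std::vector<double> A(static_cast<std::size_t>(n) * n), b(n);
            assemble_influence_matrix(A, b, n);
            std::vector<lapack_int> ipiv(n);
            // Solve in place; b is overwritten with the panel strengths.
            lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, n, 1,
                                            A.data(), n, ipiv.data(), b.data(), 1);
            if (info != 0) b.assign(n, 0.0);   // crude failure handling for the sketch
            return b;
        }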

  3. The parallel processing of EGS4 code on distributed memory scalar parallel computer:Intel Paragon XP/S15-256

    Energy Technology Data Exchange (ETDEWEB)

    Takemiya, Hiroshi; Ohta, Hirofumi; Honma, Ichirou

    1996-03-01

    The parallelization of the electromagnetic cascade Monte Carlo simulation code EGS4 on the distributed-memory scalar parallel computer Intel Paragon XP/S15-256 is described. EGS4 has the feature that the calculation time differs considerably from one incident particle to another because of the dynamic generation of secondary particles and the different behavior of each particle. The granularity for parallel processing, the parallel programming model and the algorithm for parallel random number generation are discussed, and two methods, which allocate particles either dynamically or statically, are used to realize high-speed parallel processing of this code. Among the four problems chosen for performance evaluation, the speedup factors for three of them reached nearly 100 with 128 processors. It has been found that when both the calculation time per incident particle and its dispersion are large, it is preferable to use the dynamic particle allocation method, which evens out the load on each processor; when they are small, the static particle allocation method is preferable because it reduces the communication overhead. Moreover, it is pointed out that double-precision variables must be used in the EGS4 code to obtain accurate results. Finally, the workflow of program parallelization is analyzed and tools for parallelization are discussed in the light of the EGS4 experience. (author).
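
    The dynamic particle-allocation idea described above can be sketched as a master-worker loop: one rank hands out incident-particle indices on demand, so long showers do not leave other processors idle. The sketch uses MPI for illustration (the original work used the Paragon's native message passing), and track_history is a stub standing in for the full EGS4 shower simulation.

        // Sketch of dynamic particle allocation: rank 0 deals out chunks on demand.
        #include <mpi.h>

        void track_history(long /*i*/) { /* placeholder for one EGS4 incident-particle history */ }

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const long n_histories = 1000000, chunk = 1000;
            if (rank == 0) {                    // master: deal out work on demand
                long next = 0;
                int active = size - 1;
                while (active > 0) {
                    long dummy;
                    MPI_Status st;
                    MPI_Recv(&dummy, 1, MPI_LONG, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st);
                    long first = (next < n_histories) ? next : -1;   // -1 == stop
                    if (first >= 0) next += chunk; else --active;
                    MPI_Send(&first, 1, MPI_LONG, st.MPI_SOURCE, 1, MPI_COMM_WORLD);
                }
            } else {                            // workers: request a chunk, track it, repeat
                for (;;) {
                    long request = 0, first;
                    MPI_Send(&request, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
                    MPI_Recv(&first, 1, MPI_LONG, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    if (first < 0) break;
                    for (long i = first; i < first + chunk && i < n_histories; ++i)
                        track_history(i);
                }
            }
            MPI_Finalize();
            return 0;
        }

    A static allocation, by contrast, would simply give rank r the histories r, r+size, r+2*size, ..., with no messages exchanged during tracking.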

  4. Cluster fusion algorithm: application to Lennard-Jones clusters

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Solov'yov, Andrey V.; Greiner, Walter

    2006-01-01

    We present a new general theoretical framework for modelling the cluster structure and apply it to description of the Lennard-Jones clusters. Starting from the initial tetrahedral cluster configuration, adding new atoms to the system and absorbing its energy at each step, we find cluster growing paths up to the cluster size of 150 atoms. We demonstrate that in this way all known global minima structures of the Lennard-Jones clusters can be found. Our method provides an efficient tool for the calculation and analysis of atomic cluster structure. With its use we justify the magic number sequence for the clusters of noble gas atoms and compare it with experimental observations. We report the striking correspondence of the peaks in the dependence of the second derivative of the binding energy per atom on cluster size calculated for the chain of the Lennard-Jones clusters based on the icosahedral symmetry...

  5. Cluster fusion algorithm: application to Lennard-Jones clusters

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Solov'yov, Andrey V.; Greiner, Walter

    2008-01-01

    We present a new general theoretical framework for modelling the cluster structure and apply it to description of the Lennard-Jones clusters. Starting from the initial tetrahedral cluster configuration, adding new atoms to the system and absorbing its energy at each step, we find cluster growing paths up to the cluster size of 150 atoms. We demonstrate that in this way all known global minima structures of the Lennard-Jones clusters can be found. Our method provides an efficient tool for the calculation and analysis of atomic cluster structure. With its use we justify the magic number sequence for the clusters of noble gas atoms and compare it with experimental observations. We report the striking correspondence of the peaks in the dependence of the second derivative of the binding energy per atom on cluster size calculated for the chain of the Lennard-Jones clusters based on the icosahedral symmetry...

  6. Thread-level parallelization and optimization of NWChem for the Intel MIC architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); de Jong, Wibe [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-01-01

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65× better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6× better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.
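
    The "straightforward application of OpenMP to the deep loop nests" can be pictured with a generic tensor-contraction schematic like the one below; this is not NWChem source, only an illustration of how collapsing outer loops exposes enough parallelism for many threads per card while the innermost loop is left for SIMD.

        // Schematic OpenMP threading of a contraction loop nest: C[i][j] += sum_k A[i][k]*B[k][j].
        #include <vector>
        #include <cstddef>

        void contract(const std::vector<double>& A, const std::vector<double>& B,
                      std::vector<double>& C, int ni, int nj, int nk)
        {
            #pragma omp parallel for collapse(2) schedule(static)
            for (int i = 0; i < ni; ++i)
                for (int j = 0; j < nj; ++j) {
                    double sum = 0.0;
                    #pragma omp simd reduction(+:sum)
                    for (int k = 0; k < nk; ++k)
                        sum += A[static_cast<std::size_t>(i) * nk + k]
                             * B[static_cast<std::size_t>(k) * nj + j];
                    C[static_cast<std::size_t>(i) * nj + j] += sum;
                }
        }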

  7. Navier-Stokes Aerodynamic Simulation of the V-22 Osprey on the Intel Paragon MPP

    Science.gov (United States)

    Vadyak, Joseph; Shrewsbury, George E.; Narramore, Jim C.; Montry, Gary; Holst, Terry; Kwak, Dochan (Technical Monitor)

    1995-01-01

    The paper will describe the development of a general three-dimensional multiple grid zone Navier-Stokes flowfield simulation program (ENS3D-MPP) designed for efficient execution on the Intel Paragon Massively Parallel Processor (MPP) supercomputer, and the subsequent application of this method to the prediction of the viscous flowfield about the V-22 Osprey tiltrotor vehicle. The flowfield simulation code solves the thin-layer or full Navier-Stokes equations for viscous flow modeling, or the Euler equations for inviscid flow modeling, on a structured multi-zone mesh. In the present paper only viscous simulations will be shown. The governing difference equations are solved using a time marching implicit approximate factorization method with either TVD upwind or central differencing used for the convective terms and central differencing used for the viscous diffusion terms. Steady-state or time-accurate solutions can be calculated. The present paper will focus on steady-state applications, although time-accurate solution analysis is the ultimate goal of this effort. Laminar viscosity is calculated using Sutherland's law and the Baldwin-Lomax two layer algebraic turbulence model is used to compute the eddy viscosity. The simulation method uses an arbitrary block, curvilinear grid topology. An automatic grid adaptation scheme is incorporated which concentrates grid points in high density gradient regions. A variety of user-specified boundary conditions are available. This paper will present the application of the scalable and superscalable versions to the steady-state viscous flow analysis of the V-22 Osprey using a multiple zone global mesh. The mesh consists of a series of sheared Cartesian grid blocks with polar grids embedded within to better simulate the wing tip mounted nacelle. MPP solutions will be shown in comparison to equivalent Cray C-90 results and also in comparison to experimental data. Discussions on meshing considerations, wall clock execution time

  8. Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Williams, Samuel; Jong, Wibe de; Oliker, Leonid

    2014-10-10

    In the multicore era it was possible to exploit the increase in on-chip parallelism by simply running multiple MPI processes per chip. Unfortunately, manycore processors' greatly increased thread- and data-level parallelism coupled with a reduced memory capacity demand an altogether different approach. In this paper we explore augmenting two NWChem modules, triples correction of the CCSD(T) and Fock matrix construction, with OpenMP in order that they might run efficiently on future manycore architectures. As the next NERSC machine will be a self-hosted Intel MIC (Xeon Phi) based supercomputer, we leverage an existing MIC testbed at NERSC to evaluate our experiments. In order to proxy the fact that future MIC machines will not have a host processor, we run all of our experiments in native mode. We found that while straightforward application of OpenMP to the deep loop nests associated with the tensor contractions of CCSD(T) was sufficient in attaining high performance, significant effort was required to safely and efficiently thread the TEXAS integral package when constructing the Fock matrix. Ultimately, our new MPI+OpenMP hybrid implementations attain up to 65x better performance for the triples part of the CCSD(T) due in large part to the fact that the limited on-card memory limits the existing MPI implementation to a single process per card. Additionally, we obtain up to 1.6x better performance on Fock matrix constructions when compared with the best MPI implementations running multiple processes per card.

  9. Privacy-preserving distributed clustering

    NARCIS (Netherlands)

    Erkin, Z.; Veugen, T.; Toft, T.; Lagendijk, R.L.

    2013-01-01

    Clustering is a very important tool in data mining and is widely used in on-line services for medical, financial and social environments. The main goal in clustering is to create sets of similar objects in a data set. The data set to be used for clustering can be owned by a single entity, or in some

  10. X ray emission: a tool and a probe for laser - clusters interaction; L'emission X: un outil et une sonde pour l'interaction laser - agregats

    Energy Technology Data Exchange (ETDEWEB)

    Prigent, Ch

    2004-12-01

    In intense laser-cluster interaction, the experimental results show a strong energetic coupling between radiation and matter. We have measured absolute X-ray yields and charge state distributions under well controlled conditions as a function of the physical parameters governing the interaction, namely laser intensity, pulse duration, wavelength or polarization state of the laser light, and the size and species of the clusters (Ar, Kr, Xe). We have highlighted, for the first time, a very low intensity threshold in the X-ray production (≈2×10^14 W/cm^2 for a pulse duration of 300 fs), which can result from an effect of the dynamical polarisation of clusters in an intense electric field. A weak dependence of the absolute X-ray yields on the wavelength (400 nm / 800 nm) has been found. Moreover, we have observed a saturation of the X-ray emission probability below a critical cluster size. (author)

  11. Cluster headache

    Science.gov (United States)

    Alternative names: Histamine headache; Headache - histamine; Migrainous neuralgia; Headache - cluster; Horton's headache; Vascular headache - cluster. Doctors do not know exactly what causes cluster headaches.

  12. TreeCluster: Massively scalable transmission clustering using phylogenetic trees

    OpenAIRE

    Moshiri, Alexander

    2018-01-01

    Background: The ability to infer transmission clusters from molecular data is critical to designing and evaluating viral control strategies. Viral sequencing datasets are growing rapidly, but standard methods of transmission cluster inference do not scale well beyond thousands of sequences. Results: I present TreeCluster, a cross-platform tool that performs transmission cluster inference on a given phylogenetic tree orders of magnitude faster than existing inference methods and supports multi...

  13. Emmarcar el debat: Lliure expressió contra propietat intel·lectual, els propers cinquanta anys

    Directory of Open Access Journals (Sweden)

    Eben Moglen

    2007-02-01

    Full Text Available

    Prof. Moglen explains and analyses, from a historical perspective, the profound social and legal revolution that results from digital technology when it is applied to every field: software, music and all kinds of creative works. In particular, he explains how digital technology is forcing a substantial modification (disappearance) of intellectual property systems and makes predictions for the near future of the IP markets.

  14. Clustered lot quality assurance sampling: a tool to monitor immunization coverage rapidly during a national yellow fever and polio vaccination campaign in Cameroon, May 2009.

    Science.gov (United States)

    Pezzoli, L; Tchio, R; Dzossa, A D; Ndjomo, S; Takeu, A; Anya, B; Ticha, J; Ronveaux, O; Lewis, R F

    2012-01-01

    We used the clustered lot quality assurance sampling (clustered-LQAS) technique to identify districts with low immunization coverage and guide mop-up actions during the last 4 days of a combined oral polio vaccine (OPV) and yellow fever (YF) vaccination campaign conducted in Cameroon in May 2009. We monitored 17 pre-selected districts at risk for low coverage. We designed LQAS plans to reject districts with YF vaccination coverage LQAS proved to be useful in guiding the campaign vaccination strategy before the completion of the operations.

  15. Weighted Clustering

    DEFF Research Database (Denmark)

    Ackerman, Margareta; Ben-David, Shai; Branzei, Simina

    2012-01-01

    We investigate a natural generalization of the classical clustering problem, considering clustering tasks in which different instances may have different weights. We conduct the first extensive theoretical analysis on the influence of weighted data on standard clustering algorithms in both the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify...

  16. A Fast SVM-Based Tongue’s Colour Classification Aided by k-Means Clustering Identifiers and Colour Attributes as Computer-Assisted Tool for Tongue Diagnosis

    Directory of Open Access Journals (Sweden)

    Nur Diyana Kamarudin

    2017-01-01

    Full Text Available In tongue diagnosis, colour information of tongue body has kept valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement due to the instable lighting condition and naked eye’s ability to capture the exact colour distribution on the tongue especially the tongue with multicolour substance. To overcome this ambiguity, this paper presents a two-stage tongue’s multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters of image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, true rate classification accuracy of the proposed two-stage classification to diagnose red, light red, and deep red tongue colours is 94%. The number of support vectors in SVM is improved by 41.2%, and the execution time for one image is recorded as 48 seconds.

  17. A Fast SVM-Based Tongue's Colour Classification Aided by k-Means Clustering Identifiers and Colour Attributes as Computer-Assisted Tool for Tongue Diagnosis

    Science.gov (United States)

    Ooi, Chia Yee; Kawanabe, Tadaaki; Odaguchi, Hiroshi; Kobayashi, Fuminori

    2017-01-01

    In tongue diagnosis, colour information of tongue body has kept valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement due to the instable lighting condition and naked eye's ability to capture the exact colour distribution on the tongue especially the tongue with multicolour substance. To overcome this ambiguity, this paper presents a two-stage tongue's multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters of image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, true rate classification accuracy of the proposed two-stage classification to diagnose red, light red, and deep red tongue colours is 94%. The number of support vectors in SVM is improved by 41.2%, and the execution time for one image is recorded as 48 seconds. PMID:29065640

  18. A Fast SVM-Based Tongue's Colour Classification Aided by k-Means Clustering Identifiers and Colour Attributes as Computer-Assisted Tool for Tongue Diagnosis.

    Science.gov (United States)

    Kamarudin, Nur Diyana; Ooi, Chia Yee; Kawanabe, Tadaaki; Odaguchi, Hiroshi; Kobayashi, Fuminori

    2017-01-01

    In tongue diagnosis, colour information of tongue body has kept valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement due to the instable lighting condition and naked eye's ability to capture the exact colour distribution on the tongue especially the tongue with multicolour substance. To overcome this ambiguity, this paper presents a two-stage tongue's multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters of image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, true rate classification accuracy of the proposed two-stage classification to diagnose red, light red, and deep red tongue colours is 94%. The number of support vectors in SVM is improved by 41.2%, and the execution time for one image is recorded as 48 seconds.
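
    A minimal sketch of the first stage described above (k-means clustering of tongue-image pixels into four colour groups) is given below. The initial centres and the fixed iteration count are illustrative choices, not the paper's settings, and the SVM second stage is omitted.

        // Sketch: k-means of RGB pixels into 4 clusters (background, deep red, red/light red, transition).
        #include <array>
        #include <vector>
        #include <cstddef>

        using Pixel = std::array<float, 3>;   // R, G, B in [0, 255]

        std::vector<int> kmeans4(const std::vector<Pixel>& px, int iters = 20)
        {
            // Rough, hypothetical initial centres for the four regions.
            std::vector<Pixel> centre = {Pixel{10, 10, 10}, Pixel{120, 20, 30},
                                         Pixel{200, 90, 100}, Pixel{160, 140, 140}};
            std::vector<int> label(px.size(), 0);
            for (int it = 0; it < iters; ++it) {
                // Assignment step: nearest centre in RGB space (squared distance).
                for (std::size_t i = 0; i < px.size(); ++i) {
                    float best = 1e30f;
                    for (int c = 0; c < 4; ++c) {
                        float d = 0.f;
                        for (int k = 0; k < 3; ++k)
                            d += (px[i][k] - centre[c][k]) * (px[i][k] - centre[c][k]);
                        if (d < best) { best = d; label[i] = c; }
                    }
                }
                // Update step: each centre becomes the mean of its members.
                std::vector<Pixel> sum(4, Pixel{0, 0, 0});
                std::vector<int> count(4, 0);
                for (std::size_t i = 0; i < px.size(); ++i) {
                    for (int k = 0; k < 3; ++k) sum[label[i]][k] += px[i][k];
                    ++count[label[i]];
                }
                for (int c = 0; c < 4; ++c)
                    if (count[c] > 0)
                        for (int k = 0; k < 3; ++k) centre[c][k] = sum[c][k] / count[c];
            }
            return label;
        }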

  19. Evaluation of the Single-precision Floatingpoint Vector Add Kernel Using the Intel FPGA SDK for OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States); Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-04-20

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.
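
    To make the two named optimizations concrete, the following is a kernel-only sketch held as OpenCL C source in a C++ raw string: num_simd_work_items requests kernel vectorization of the work-item datapath and num_compute_units duplicates the pipeline. The attribute values 8 and 2 and the work-group size are illustrative, not the configuration evaluated in the report.

        // Hypothetical vector-add kernel source for the Intel FPGA SDK for OpenCL offline compiler.
        const char* vecadd_cl = R"CLC(
        __attribute__((num_compute_units(2)))
        __attribute__((num_simd_work_items(8)))
        __attribute__((reqd_work_group_size(256, 1, 1)))
        __kernel void vecadd(__global const float* restrict a,
                             __global const float* restrict b,
                             __global float* restrict c)
        {
            size_t gid = get_global_id(0);
            c[gid] = a[gid] + b[gid];
        }
        )CLC";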

  20. Efficient sparse matrix-matrix multiplication for computing periodic responses by shooting method on Intel Xeon Phi

    Science.gov (United States)

    Stoykov, S.; Atanassov, E.; Margenov, S.

    2016-10-01

    Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computations of periodic responses and the determination of stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves operations with sparse and dense matrices simultaneously. One of the computationally expensive operations in the method is multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin's plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse-by-dense matrix multiplication algorithm and the one provided by Intel MKL, and it is shown that better algorithms can be developed by considering the properties of the sparse matrix.
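
    For reference, the baseline pattern that such an algorithm specializes is a CSR-times-dense product with threads over sparse rows and SIMD over the dense columns, as in the generic sketch below. The paper's algorithm goes further by exploiting the structure produced by the finite element discretization; this sketch is not that algorithm.

        // Generic sketch: C = A_sparse (CSR) * B (dense, row-major), OpenMP + SIMD.
        #include <vector>
        #include <cstddef>

        void csr_times_dense(const std::vector<int>& row_ptr,    // size nrows+1
                             const std::vector<int>& col_idx,    // size nnz
                             const std::vector<double>& val,     // size nnz
                             const std::vector<double>& B,       // ncols per row, row-major
                             std::vector<double>& C,             // nrows x ncols, row-major
                             int nrows, int ncols)
        {
            #pragma omp parallel for schedule(dynamic, 32)
            for (int i = 0; i < nrows; ++i) {
                double* Ci = &C[static_cast<std::size_t>(i) * ncols];
                for (int k = 0; k < ncols; ++k) Ci[k] = 0.0;
                for (int p = row_ptr[i]; p < row_ptr[i + 1]; ++p) {
                    const double a = val[p];
                    const double* Bj = &B[static_cast<std::size_t>(col_idx[p]) * ncols];
                    // Unit-stride update over a dense row: vector-friendly on Xeon Phi.
                    #pragma omp simd
                    for (int k = 0; k < ncols; ++k)
                        Ci[k] += a * Bj[k];
                }
            }
        }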

  1. A parallel implementation of particle tracking with space charge effects on an INTEL iPSC/860

    International Nuclear Information System (INIS)

    Chang, L.; Bourianoff, G.; Cole, B.; Machida, S.

    1993-05-01

    Particle-tracking simulation is one of the scientific applications that is well-suited to parallel computations. At the Superconducting Super Collider, it has been theoretically and empirically demonstrated that particle tracking on a designed lattice can achieve very high parallel efficiency on a MIMD Intel iPSC/860 machine. The key to such success is the realization that the particles can be tracked independently without considering their interaction. The perfectly parallel nature of particle tracking is broken if the interaction effects between particles are included. The space charge introduces an electromagnetic force that will affect the motion of tracked particles in 3-D space. For accurate modeling of the beam dynamics with space charge effects, one needs to solve three-dimensional Maxwell field equations, usually by a particle-in-cell (PIC) algorithm. This will require each particle to communicate with its neighbor grids to compute the momentum changes at each time step. It is expected that the 3-D PIC method will degrade parallel efficiency of particle-tracking implementation on any parallel computer. In this paper, we describe an efficient scheme for implementing particle tracking with space charge effects on an INTEL iPSC/860 machine. Experimental results show that a parallel efficiency of 75% can be obtained
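
    The step that breaks the perfectly parallel structure can be pictured with a simple charge-deposition loop: particles owned by different processors contribute to the same grid cells, so partial grids must be combined across processors at every time step. The nearest-grid-point weighting below is a simplification for illustration, not the 3-D PIC scheme used in the paper.

        // Sketch: 1-D nearest-grid-point charge deposition; in a distributed PIC code the
        // per-processor rho arrays are subsequently summed (e.g. with a reduction).
        #include <vector>
        #include <cmath>

        void deposit_charge(const std::vector<double>& x,   // particle positions in [0, L)
                            std::vector<double>& rho,        // charge density on ncell cells
                            double q, double L)
        {
            const int ncell = static_cast<int>(rho.size());
            const double dx = L / ncell;
            for (double xi : x) {
                int c = static_cast<int>(std::floor(xi / dx));
                if (c < 0) c = 0;
                if (c >= ncell) c = ncell - 1;
                rho[c] += q / dx;    // neighbouring particles touch the same cell
            }
        }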

  2. Reflective memory recorder upgrade: an opportunity to benchmark PowerPC and Intel architectures for real time

    Science.gov (United States)

    Abuter, Roberto; Tischer, Helmut; Frahm, Robert

    2014-07-01

    Several high frequency loops are required to run the VLTI (Very Large Telescope Interferometer), e.g. for fringe tracking, angle tracking, vibration cancellation, and data capture. All these loops rely on low latency real time computers based on the VME bus, Motorola PowerPC hardware architecture. In this context, one highly demanding application in terms of cycle time, latency and data transfer volume is the VLTI centralized recording facility, the so-called RMN recorder (Reflective Memory Recorder). This application captures and transfers data flowing through the distributed memory of the system in real time. Some of the VLTI data producers are running with frequencies up to 8 KHz. With the evolution from first generation instruments like MIDI, PRIMA, and AMBER, which use one or two baselines, to second generation instruments like MATISSE and GRAVITY, which will use all six baselines simultaneously, the quantity of signals has increased by, at least, a factor of six. This has led to a significant overload of the RMN recorder, which has reached the natural limits imposed by the underlying hardware. At the same time, new, more powerful computers, based on the Intel multicore families of CPUs and PCI buses, have become available. With the purpose of improving the performance of the RMN recorder application and in order to make it capable of coping with the demands of the new generation instruments, a slightly modified implementation has been developed and integrated into an Intel based multicore computer running the VxWorks real time operating system. The core of the application is based on the standard VLT software framework for instruments. The real time task reads from the reflective memory using the onboard DMA access and captured data is transferred to the outside world via a TCP socket on a dedicated Ethernet connection. The diversity of the software and hardware that are involved makes this application suitable as a benchmarking platform.

  3. Cluster management.

    Science.gov (United States)

    Katz, R

    1992-11-01

    Cluster management is a management model that fosters decentralization of management, develops leadership potential of staff, and creates ownership of unit-based goals. Unlike shared governance models, there is no formal structure created by committees and it is less threatening for managers. There are two parts to the cluster management model. One is the formation of cluster groups, consisting of all staff and facilitated by a cluster leader. The cluster groups function for communication and problem-solving. The second part of the cluster management model is the creation of task forces. These task forces are designed to work on short-term goals, usually in response to solving one of the unit's goals. Sometimes the task forces are used for quality improvement or system problems. Clusters are groups of not more than five or six staff members, facilitated by a cluster leader. A cluster is made up of individuals who work the same shift. For example, people with job titles who work days would be in a cluster. There would be registered nurses, licensed practical nurses, nursing assistants, and unit clerks in the cluster. The cluster leader is chosen by the manager based on certain criteria and is trained for this specialized role. The concept of cluster management, criteria for choosing leaders, training for leaders, using cluster groups to solve quality improvement issues, and the learning process necessary for manager support are described.

  4. Isotopic clusters

    International Nuclear Information System (INIS)

    Geraedts, J.M.P.

    1983-01-01

    Spectra of isotopically mixed clusters (dimers of SF6) are calculated as well as transition frequencies. The result leads to speculations about the suitability of the laser-cluster fragmentation process for isotope separation. (Auth.)

  5. Cluster Headache

    Science.gov (United States)

    Unlike migraine and tension headache, cluster headache generally isn't associated with triggers, such as foods, hormonal changes or stress. Once a cluster period begins, however, drinking alcohol ...

  6. Cluster Headache

    OpenAIRE

    Pearce, Iris

    1985-01-01

    Cluster headache is the most severe primary headache with recurrent pain attacks described as worse than giving birth. The aim of this paper was to make an overview of current knowledge on cluster headache with a focus on pathophysiology and treatment. This paper presents hypotheses of cluster headache pathophysiology, current treatment options and possible future therapy approaches. For years, the hypothalamus was regarded as the key structure in cluster headache, but is now thought to be pa...

  7. Categorias Cluster

    OpenAIRE

    Queiroz, Dayane Andrade

    2015-01-01

    In this work we present cluster categories, which were introduced by Aslak Bakke Buan, Robert Marsh, Markus Reineke, Idun Reiten and Gordana Todorov with the aim of categorifying the cluster algebras created in 2002 by Sergey Fomin and Andrei Zelevinsky. The authors above showed in [4] that there is a close relationship between cluster algebras and cluster categories for quivers whose underlying graph is a Dynkin diagram. To this end they developed a tilting theory in the triangulated structure...

  8. Meaningful Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Calapristi, Augustin J.; Crow, Vernon L.; Hetzler, Elizabeth G.; Turner, Alan E.

    2004-05-26

    We present an approach to the disambiguation of cluster labels that capitalizes on the notion of semantic similarity to assign WordNet senses to cluster labels. The approach provides interesting insights on how document clustering can provide the basis for developing a novel approach to word sense disambiguation.

  9. Horticultural cluster

    OpenAIRE

    SHERSTIUK S.V.; POSYLAYEVA K.I.

    2013-01-01

    The article presents theoretical and methodological approaches to the nature and existence of the cluster, and describes how the cluster differs from other kinds of cooperative and integration associations. Scientific and practical recommendations are developed for forming a competitive horticultural cluster.

  10. Photo fragmentation dynamics of small argon clusters and biological molecular: new tools by trapping and vectorial correlation; Dynamique de photofragmentation de petits agregats d'argon et de molecules biologiques: nouvel outil par piegeage et correlation vectorielle

    Energy Technology Data Exchange (ETDEWEB)

    Lepere, V

    2006-09-15

    The present work concerns the building of a complex set-up whose aim is the investigation of the photofragmentation of ionised clusters and biological molecules. This new tool is based on the association of several techniques. Two ion sources are available: clusters produced in a supersonic beam are ionised by 70 eV electrons, while ions of biological interest are produced in an electrospray source. Ro-vibrational cooling is achieved in a 'Zajfman' electrostatic ion trap. The lifetime of the ions can also be measured using the trap. Two types of lasers are used to excite the ionised species: the femtosecond laser available at the ELYSE facility and a nanosecond laser. Both lasers have a repetition rate of 1 kHz. The neutral and ionised fragments are detected in coincidence using a sophisticated detection system that allows the arrival time and position of the various fragments to be determined. With such a tool, I was able to investigate in detail the fragmentation dynamics of ionised clusters and biomolecules. The first experiments deal with the measurement of the lifetime of the Ar₂⁺ dimer II(1/2)u metastable state. The relative population of this state was also determined. The photofragmentation of Ar₂⁺ and Ar₃⁺ was then studied and the electronic transitions responsible for their dissociation were identified. The detailed analysis of our data allowed us to distinguish the various fragmentation mechanisms. Finally, a preliminary investigation of the fragmentation of protonated tryptamine is presented. (author)

  11. Cluster Matters

    DEFF Research Database (Denmark)

    Gulati, Mukesh; Lund-Thomsen, Peter; Suresh, Sangeetha

    2018-01-01

    sell their products successfully in international markets, but there is also an increasingly large consumer base within India. Indeed, Indian industrial clusters have contributed to a substantial part of this growth process, and there are several hundred registered clusters within the country...... of this handbook, which focuses on the role of CSR in MSMEs. Hence we contribute to the literature on CSR in industrial clusters and specifically CSR in Indian industrial clusters by investigating the drivers of CSR in India’s industrial clusters....

  12. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  13. Analysis of the Intel 386 and i486 microprocessors for the Space Station Freedom Data Management System

    Science.gov (United States)

    Liu, Yuan-Kwei

    1991-01-01

    The feasibility is analyzed of upgrading the Intel 386 microprocessor, which has been proposed as the baseline processor for the Space Station Freedom (SSF) Data Management System (DMS), to the more advanced i486 microprocessor. The items compared between the two processors include the instruction set architecture, power consumption, the MIL-STD-883C Class S (Space) qualification schedule, and performance. The advantages of the i486 over the 386 are (1) lower power consumption; and (2) higher floating point performance. The i486 on-chip cache does not have parity check or error detection and correction circuitry. The i486 with on-chip cache disabled, however, has lower integer performance than the 386 without cache, which is the current DMS design choice. Adding cache to the 386/386 DX memory hierarchy appears to be the most beneficial change to the current DMS design at this time.

  14. La responsabilitat davant la intel·ligència artificial en el comerç electrònic

    OpenAIRE

    Martín i Palomas, Elisabet

    2015-01-01

    This thesis examines the effect on liability arising from actions carried out autonomously by systems endowed with artificial intelligence, without the direct participation of any human being, in the areas most directly related to electronic commerce. To this end, the activities carried out by some of the main international e-commerce companies, such as the US group eBay or the Chinese group Alibaba, are analysed. After developing the prin...

  15. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi core is a fraction of that of a Xeon core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  16. Evaluation of an early detection tool for social-emotional and behavioral problems in toddlers: The Brief Infant Toddler Social and Emotional Assessment - A cluster randomized trial

    NARCIS (Netherlands)

    I. Kruizinga (Ingrid); W. Jansen (Wilma); A.S. Carter (Alice); H. Raat (Hein)

    2011-01-01

    textabstractBackground: The prevalence of social-emotional and behavioral problems is estimated to be 8 to 9% among preschool children. Effective early detection tools are needed to promote the provision of adequate care at an early stage. The Brief Infant-Toddler Social and Emotional Assessment

  17. Data Clustering

    Science.gov (United States)

    Wagstaff, Kiri L.

    2012-03-01

    On obtaining a new data set, the researcher is immediately faced with the challenge of obtaining a high-level understanding from the observations. What does a typical item look like? What are the dominant trends? How many distinct groups are included in the data set, and how is each one characterized? Which observable values are common, and which rarely occur? Which items stand out as anomalies or outliers from the rest of the data? This challenge is exacerbated by the steady growth in data set size [11] as new instruments push into new frontiers of parameter space, via improvements in temporal, spatial, and spectral resolution, or by the desire to "fuse" observations from different modalities and instruments into a larger-picture understanding of the same underlying phenomenon. Data clustering algorithms provide a variety of solutions for this task. They can generate summaries, locate outliers, compress data, identify dense or sparse regions of feature space, and build data models. It is useful to note up front that "clusters" in this context refer to groups of items within some descriptive feature space, not (necessarily) to "galaxy clusters" which are dense regions in physical space. The goal of this chapter is to survey a variety of data clustering methods, with an eye toward their applicability to astronomical data analysis. In addition to improving the individual researcher’s understanding of a given data set, clustering has led directly to scientific advances, such as the discovery of new subclasses of stars [14] and gamma-ray bursts (GRBs) [38]. All clustering algorithms seek to identify groups within a data set that reflect some observed, quantifiable structure. Clustering is traditionally an unsupervised approach to data analysis, in the sense that it operates without any direct guidance about which items should be assigned to which clusters. There has been a recent trend in the clustering literature toward supporting semisupervised or constrained

  18. Cluster evolution

    International Nuclear Information System (INIS)

    Schaeffer, R.

    1987-01-01

    The galaxy and cluster luminosity functions are constructed from a model of the mass distribution based on hierarchical clustering at an epoch where the matter distribution is non-linear. These luminosity functions are seen to reproduce the present distribution of objects as can be inferred from the observations. They can be used to deduce the redshift dependence of the cluster distribution and to extrapolate the observations towards the past. The predicted evolution of the cluster distribution is quite strong, although somewhat less rapid than predicted by the linear theory

  19. Les multituds intel·ligents com a generadores de dades massives : la intel·ligència col·lectiva al servei de la innovació social

    Directory of Open Access Journals (Sweden)

    Sanz, Sandra

    2015-06-01

    Full Text Available In recent decades there has been an increase in social mobilisations that are organised, mediated, narrated and coordinated through ICTs. They are examples of smart mobs that take advantage of new communication media to organise themselves. Both because of the number of messages exchanged and generated and because of the interactions they produce, these smart mobs become an object of big data. Analysing them with the possibilities offered by data engineering can help detect constructed ideas as well as shared knowledge arising from collective intelligence. This would favour the reuse of this information to increase the collective's knowledge and contribute to the development of social innovation. For this reason, this article points out the open questions and limitations that these analyses still present and highlights the need to further develop new methods and techniques of analysis.

  20. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models.

    Science.gov (United States)

    Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald

    2011-06-01

    Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for

  1. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR is an efficient tool for metamodelling of nonlinear dynamic models

    Directory of Open Access Journals (Sweden)

    Omholt Stig W

    2011-06-01

    Full Text Available Abstract Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs to variation in features of the trajectories of the state variables (outputs throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR, where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR and ordinary least squares (OLS regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback

  2. SCIPHI - Score-P and Cube Extensions for Intel Xeon Phi

    OpenAIRE

    Feld, Christian; Schlütter, Marc; Saviankou, Pavel; Knobloch, Michael; Mohr, Bernd

    2017-01-01

    The KNL processor offers unique features concerning its memory hierarchy and vectorization capabilities. To improve tool support within these two areas, we present extensions to the Score-P measurement system and the Cube report explorer. KNL introduced a new memory architecture utilizing MCDRAM and DDR. To help the user decide where to place data structures, we record an MCDRAM candidate metric. In addition we track all MCDRAM allocations through the hbwmalloc API, providing memory metr...

  3. OBSERVED SCALING RELATIONS FOR STRONG LENSING CLUSTERS: CONSEQUENCES FOR COSMOLOGY AND CLUSTER ASSEMBLY

    International Nuclear Information System (INIS)

    Comerford, Julia M.; Moustakas, Leonidas A.; Natarajan, Priyamvada

    2010-01-01

    Scaling relations of observed galaxy cluster properties are useful tools for constraining cosmological parameters as well as cluster formation histories. One of the key cosmological parameters, σ 8 , is constrained using observed clusters of galaxies, although current estimates of σ 8 from the scaling relations of dynamically relaxed galaxy clusters are limited by the large scatter in the observed cluster mass-temperature (M-T) relation. With a sample of eight strong lensing clusters at 0.3 8 , but combining the cluster concentration-mass relation with the M-T relation enables the inclusion of unrelaxed clusters as well. Thus, the resultant gains in the accuracy of σ 8 measurements from clusters are twofold: the errors on σ 8 are reduced and the cluster sample size is increased. Therefore, the statistics on σ 8 determination from clusters are greatly improved by the inclusion of unrelaxed clusters. Exploring cluster scaling relations further, we find that the correlation between brightest cluster galaxy (BCG) luminosity and cluster mass offers insight into the assembly histories of clusters. We find preliminary evidence for a steeper BCG luminosity-cluster mass relation for strong lensing clusters than the general cluster population, hinting that strong lensing clusters may have had more active merging histories.

  4. Application of MALDI-TOF MS fingerprinting as a quick tool for identification and clustering of foodborne pathogens isolated from food products.

    Science.gov (United States)

    Elbehiry, Ayman; Marzouk, Eman; Hamada, Mohamed; Al-Dubaib, Musaad; Alyamani, Essam; Moussa, Ihab M; AlRowaidhan, Anhar; Hemeg, Hassan A

    2017-10-01

    Foodborne pathogens can be associated with a wide variety of food products and it is very important to identify them to supply safe food and prevent foodborne infections. Since traditional techniques are time-consuming and laborious, this study was designed for rapid identification and clustering of foodborne pathogens isolated from various restaurants in Al-Qassim region, Kingdom of Saudi Arabia (KSA) using matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS). Sixty-nine bacterial and thirty-two fungal isolates recovered from 80 food samples were used in this study. Preliminary identification was carried out through culture and BD Phoenix™ methods. A confirmatory identification technique was then performed using MALDI-TOF MS. The BD Phoenix results revealed that 97% (67/69 isolates) of bacteria were correctly identified as 75% Enterobacter cloacae, 95.45% Campylobacter jejuni and 100% for Escherichia coli, Salmonella enterica, Staphylococcus aureus, Acinetobacter baumannii, and Klebsiella pneumoniae, while 94.44% (29/32 isolates) of fungi were correctly identified as 77.77% Alternaria alternate, 88.88% Aspergillus niger and 100% for Aspergillus flavus, Penicillium digitatum, Candida albicans and Debaryomyces hansenii. However, all bacterial and fungal isolates were 100% properly identified by MALDI-TOF MS fingerprinting with a score value ≥2.00. A gel view illustrated that the spectral peaks for the identified isolates fluctuate between 3,000 and 10,000 Da. The results of main spectra library (MSP) dendrogram showed that the bacterial and fungal isolates matched with 19 and 9 reference strains stored in the Bruker taxonomy, respectively. Our results indicated that MALDI-TOF MS is a promising technique for fast and accurate identification of foodborne pathogens.

  5. Mobile phones as a health communication tool to improve skilled attendance at delivery in Zanzibar: a cluster-randomised controlled trial.

    Science.gov (United States)

    Lund, S; Hemed, M; Nielsen, B B; Said, A; Said, K; Makungu, M H; Rasch, V

    2012-09-01

    To examine the association between a mobile phone intervention and skilled delivery attendance in a resource-limited setting. Pragmatic cluster-randomised controlled trial with primary healthcare facilities as the unit of randomisation. Primary healthcare facilities in Zanzibar. Two thousand, five hundred and fifty pregnant women (1311 interventions and 1239 controls) who attended antenatal care at one of the selected primary healthcare facilities were included at their first antenatal care visit and followed until 42 days after delivery. All pregnant women were eligible for study participation. Twenty-four primary healthcare facilities in six districts in Zanzibar were allocated by simple randomisation to either mobile phone intervention (n = 12) or standard care (n = 12). The intervention consisted of a short messaging service (SMS) and mobile phone voucher component. Skilled delivery attendance. The mobile phone intervention was associated with an increase in skilled delivery attendance: 60% of the women in the intervention group versus 47% in the control group delivered with skilled attendance. The intervention produced a significant increase in skilled delivery attendance amongst urban women (odds ratio, 5.73; 95% confidence interval, 1.51-21.81), but did not reach rural women. The mobile phone intervention significantly increased skilled delivery attendance amongst women of urban residence. Mobile phone solutions may contribute to the saving of lives of women and their newborns and the achievement of Millennium Development Goals 4 and 5, and should be considered by maternal and child health policy makers in developing countries. © 2012 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2012 RCOG.

  6. Lattice QCD calculations on commodity clusters at DESY

    International Nuclear Information System (INIS)

    Gellrich, A.; Pop, D.; Wegner, P.; Wittig, H.; Hasenbusch, M.; Jansen, K.

    2003-06-01

    Lattice Gauge Theory is an integral part of particle physics that requires high performance computing in the multi-Tflops regime. These requirements are motivated by the rich research program and the physics milestones to be reached by the lattice community. Over the last years the enormous gains in processor performance, memory bandwidth, and external I/O bandwidth for parallel applications have made commodity clusters built from PCs or workstations suitable also for large Lattice Gauge Theory applications. For more than one year two clusters have been operated at the two DESY sites in Hamburg and Zeuthen, consisting of 32 and 16 dual-CPU PCs, respectively, equipped with Intel Pentium 4 Xeon processors. The nodes are interconnected via Myrinet, and Linux was chosen as the operating system. In the course of the projects, benchmark programs for architectural studies were developed. The performance of the Wilson-Dirac operator (also in an even-odd preconditioned version), the inner loop of Lattice QCD (LQCD) algorithms, plays the most important role in classifying the hardware basis to be used. Using the SIMD streaming extensions (SSE/SSE2) of Intel's Pentium 4 Xeon CPUs gives promising results for both the single-CPU and the parallel version. The parallel performance, in addition to the CPU power and the memory throughput, is nevertheless strongly influenced by the behavior of hardware components such as the PC chip-set and the communication interfaces. The paper starts by giving a short explanation of the physics background and the motivation for using PC clusters for Lattice QCD. Subsequently, the concept, implementation, and operating experiences of the two clusters are discussed. Finally, the paper presents benchmark results and discusses comparisons to systems with different hardware components, including Myrinet-, Gigabit-Ethernet-, and Infiniband-based interconnects. (orig.)
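
    The SSE/SSE2 kernels mentioned above boil down to vectorized complex arithmetic on the spinor and gauge-field components. As a hedged illustration only (not the DESY benchmark code), the sketch below shows a double-precision complex multiply written with SSE2 intrinsics, the kind of operation that dominates the inner loop of the Wilson-Dirac operator.

        // Minimal sketch, assuming complex numbers stored as {re, im} pairs.
        // SSE2 is available on any x86-64 compiler without extra flags.
        #include <emmintrin.h>
        #include <cstdio>

        // (ar + i*ai) * (br + i*bi) = (ar*br - ai*bi) + i*(ar*bi + ai*br)
        static inline __m128d complex_mul_sse2(__m128d a, __m128d b)
        {
            __m128d ar = _mm_unpacklo_pd(a, a);          // {ar, ar}
            __m128d ai = _mm_unpackhi_pd(a, a);          // {ai, ai}
            __m128d t1 = _mm_mul_pd(ar, b);              // {ar*br, ar*bi}
            __m128d bs = _mm_shuffle_pd(b, b, 1);        // {bi, br}
            __m128d t2 = _mm_mul_pd(ai, bs);             // {ai*bi, ai*br}
            __m128d sg = _mm_set_pd(0.0, -0.0);          // flip sign of low lane only
            return _mm_add_pd(t1, _mm_xor_pd(t2, sg));   // {ar*br-ai*bi, ar*bi+ai*br}
        }

        int main()
        {
            double a[2] = {1.0, 2.0}, b[2] = {3.0, 4.0}, c[2];
            _mm_storeu_pd(c, complex_mul_sse2(_mm_loadu_pd(a), _mm_loadu_pd(b)));
            std::printf("(%g, %g)\n", c[0], c[1]);       // expects (-5, 10)
            return 0;
        }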

  7. Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Christopher H.; Long, Hai; Sides, Scott; Vaidhynathan, Deepthi; Jones, Wesley

    2015-10-15

    Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion from this study is that the additional cores are of little value to individual task performance--limitations to application parallelism, or resource contention among concurrently running but independent tasks, limits effective utilization of these added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance for procurement of future HPC systems at NREL. First, raw core count must be balanced with available resources, particularly memory bandwidth. Balance-of-system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might come from enabling multiple concurrent jobs per node. Given the right type and size of workload, more may be achieved by doing many slow things at once than by doing fast things in order.
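
    Since the study singles out memory bandwidth as the resource that must keep pace with core count, a quick way to probe this on any node is a STREAM-style triad microbenchmark. The sketch below is a hedged, simplified example; array sizes, thread placement and first-touch NUMA effects all matter on real hardware and are glossed over here.

        // Rough triad-bandwidth probe: c[i] = a[i] + s*b[i] moves 3 doubles per element.
        #include <chrono>
        #include <cstdio>
        #include <vector>
        #include <omp.h>

        int main()
        {
            const long n = 1L << 25;                      // ~256 MB per array
            std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);
            const double s = 3.0;

            auto t0 = std::chrono::steady_clock::now();
            #pragma omp parallel for
            for (long i = 0; i < n; ++i)
                c[i] = a[i] + s * b[i];
            auto t1 = std::chrono::steady_clock::now();

            double sec    = std::chrono::duration<double>(t1 - t0).count();
            double gbytes = 3.0 * n * sizeof(double) / 1e9;
            std::printf("threads=%d  triad ~ %.1f GB/s\n",
                        omp_get_max_threads(), gbytes / sec);
            return 0;
        }

    Running such a probe once per thread count (with hyperthreads on and off) gives a rough picture of where added cores stop paying off on a given node.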

  8. Evaluation of the Intel Xeon Phi Co-processor to accelerate the sensitivity map calculation for PET imaging

    Science.gov (United States)

    Dey, T.; Rodrigue, P.

    2015-07-01

    We aim to evaluate the Intel Xeon Phi coprocessor for acceleration of 3D Positron Emission Tomography (PET) image reconstruction. We focus on the sensitivity map calculation as one computationally intensive part of PET image reconstruction, since it is a promising candidate for acceleration with the Many Integrated Core (MIC) architecture of the Xeon Phi. The computation of the voxels in the field of view (FoV) can be done in parallel, and the 10^3 to 10^4 samples needed to calculate the detection probability of each voxel can take advantage of vectorization. We use the ray tracing kernels of the Embree project to calculate the hit points of the sample rays with the detector, and in a second step the sum of the radiological path, taking attenuation into account, is determined. The core components are implemented using the Intel single instruction multiple data compiler (ISPC) to enable a portable implementation showing efficient vectorization on both the Xeon Phi and the host platform. On the Xeon Phi, the calculation of the radiological path is also implemented in hardware-specific intrinsic instructions (so-called `intrinsics') to allow manually-optimized vectorization. For parallelization, both OpenMP and ISPC tasking (based on pthreads) are evaluated. Our implementation achieved a scalability factor of 0.90 on the Xeon Phi coprocessor (model 5110P) with 60 cores at 1 GHz. Only minor differences were found between parallelization with OpenMP and the ISPC tasking feature. The implementation using intrinsics was found to be about 12% faster than the portable ISPC version. With this version, a speedup of 1.43 was achieved on the Xeon Phi coprocessor compared to the host system (HP SL250s Gen8) equipped with two Xeon (E5-2670) CPUs, with 8 cores at 2.6 to 3.3 GHz each. Using a second Xeon Phi card, the speedup could be further increased to 2.77. No significant differences were found between the results of the different Xeon Phi and the host implementations. The examination
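
    The parallel pattern described above (independent voxels on the outside, a vectorizable sampling loop on the inside) can be written down in a few lines. The following is a hedged sketch only; detection_probability() is a hypothetical stand-in for the Embree ray tracing and radiological path summation, replaced here by a trivial placeholder so the snippet compiles.

        #include <cmath>
        #include <cstdio>
        #include <vector>
        #include <omp.h>

        // Placeholder for the per-ray detection probability (the real code would
        // trace the ray through the detector and accumulate the attenuated path).
        static inline double detection_probability(int voxel, int ray)
        {
            return std::exp(-0.001 * ((voxel % 97) + (ray % 31)));
        }

        std::vector<double> sensitivity_map(int n_voxels, int n_rays)
        {
            std::vector<double> sens(n_voxels, 0.0);
            #pragma omp parallel for schedule(dynamic)     // voxels are independent
            for (int v = 0; v < n_voxels; ++v) {
                double acc = 0.0;
                #pragma omp simd reduction(+:acc)          // 10^3-10^4 samples per voxel
                for (int r = 0; r < n_rays; ++r)
                    acc += detection_probability(v, r);
                sens[v] = acc / n_rays;
            }
            return sens;
        }

        int main()
        {
            auto s = sensitivity_map(100000, 1000);
            std::printf("sens[0] = %f\n", s[0]);
            return 0;
        }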

  9. Evaluation of the Intel Xeon Phi Co-processor to accelerate the sensitivity map calculation for PET imaging

    International Nuclear Information System (INIS)

    Dey, T.; Rodrigue, P.

    2015-01-01

    We aim to evaluate the Intel Xeon Phi coprocessor for acceleration of 3D Positron Emission Tomography (PET) image reconstruction. We focus on the sensitivity map calculation as one computationally intensive part of PET image reconstruction, since it is a promising candidate for acceleration with the Many Integrated Core (MIC) architecture of the Xeon Phi. The computation of the voxels in the field of view (FoV) can be done in parallel, and the 10^3 to 10^4 samples needed to calculate the detection probability of each voxel can take advantage of vectorization. We use the ray tracing kernels of the Embree project to calculate the hit points of the sample rays with the detector, and in a second step the sum of the radiological path, taking attenuation into account, is determined. The core components are implemented using the Intel single instruction multiple data compiler (ISPC) to enable a portable implementation showing efficient vectorization on both the Xeon Phi and the host platform. On the Xeon Phi, the calculation of the radiological path is also implemented in hardware-specific intrinsic instructions (so-called 'intrinsics') to allow manually-optimized vectorization. For parallelization, both OpenMP and ISPC tasking (based on pthreads) are evaluated. Our implementation achieved a scalability factor of 0.90 on the Xeon Phi coprocessor (model 5110P) with 60 cores at 1 GHz. Only minor differences were found between parallelization with OpenMP and the ISPC tasking feature. The implementation using intrinsics was found to be about 12% faster than the portable ISPC version. With this version, a speedup of 1.43 was achieved on the Xeon Phi coprocessor compared to the host system (HP SL250s Gen8) equipped with two Xeon (E5-2670) CPUs, with 8 cores at 2.6 to 3.3 GHz each. Using a second Xeon Phi card, the speedup could be further increased to 2.77. No significant differences were found between the results of the different Xeon Phi and the host implementations. The

  10. A Comprehensive Study of Neutralizing Antigenic Sites on the Hepatitis E Virus (HEV) Capsid by Constructing, Clustering, and Characterizing a Tool Box*

    Science.gov (United States)

    Zhao, Min; Li, Xiao-Jing; Tang, Zi-Min; Yang, Fan; Wang, Si-Ling; Cai, Wei; Zhang, Ke; Xia, Ning-Shao; Zheng, Zi-Zheng

    2015-01-01

    The hepatitis E virus (HEV) ORF2 encodes a single structural capsid protein. The E2s domain (amino acids 459–606) of the capsid protein has been identified as the major immune target. All identified neutralizing epitopes are located on this domain; however, a comprehensive characterization of antigenic sites on the domain is lacking due to its high degree of conformation dependence. Here, we used the statistical software SPSS to analyze cELISA (competitive ELISA) data in order to classify monoclonal antibodies (mAbs) that recognized conformational epitopes on the E2s domain. Using this novel analysis method, we identified various conformational mAbs that recognized the E2s domain. These mAbs were distributed into 6 independent groups, suggesting the presence of at least 6 epitopes. Twelve representative mAbs covering the six groups were selected as a tool box to further map functional antigenic sites on the E2s domain. By combining functional and location information for the 12 representative mAbs, this study provides a complete picture of potential neutralizing epitope regions and immune-dominant determinants on the E2s domain. One epitope region is located on top of the E2s domain close to the monomer interface; the other is located on the monomer side of the E2s dimer around the groove zone. In addition, two non-neutralizing epitopes were identified on the E2s domain that did not stimulate neutralizing antibodies. Our results help further the understanding of protective mechanisms induced by the HEV vaccine. Furthermore, the tool box of 12 representative mAbs will be useful for studying the HEV infection process. PMID:26085097

  11. Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster

    Energy Technology Data Exchange (ETDEWEB)

    Allada, Veerendra; Benjegerdes, Troy; Bode, Brett

    2009-08-31

    Commodity clusters augmented with application accelerators are evolving as competitive high performance computing systems. The Graphical Processing Unit (GPU), with a very high arithmetic density and performance per price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementation of the Basic Linear Algebra Subroutines (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, the authors study the performance of the memory copies and GEMM subroutines that are critical to porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single and double precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results are compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.
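
    For the host-side half of such a comparison, a DGEMM call through a CBLAS interface (e.g. the one provided by MKL) can be timed directly. The sketch below is illustrative only; the matrix size is an arbitrary choice, and the GPU side of the comparison would use CUBLAS plus explicitly timed host-device copies instead.

        #include <cblas.h>
        #include <chrono>
        #include <cstdio>
        #include <vector>

        int main()
        {
            const int n = 2048;                                // square matrices
            std::vector<double> A((size_t)n * n, 1.0), B((size_t)n * n, 2.0),
                                C((size_t)n * n, 0.0);

            auto t0 = std::chrono::steady_clock::now();
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        n, n, n, 1.0, A.data(), n, B.data(), n, 0.0, C.data(), n);
            auto t1 = std::chrono::steady_clock::now();

            double sec = std::chrono::duration<double>(t1 - t0).count();
            std::printf("n=%d  DGEMM ~ %.1f GFLOP/s\n",          // 2*n^3 flops per GEMM
                        n, 2.0 * n * n * n / sec / 1e9);
            return 0;
        }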

  12. Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster

    International Nuclear Information System (INIS)

    Allada, Veerendra; Benjegerdes, Troy; Bode, Brett

    2009-01-01

    Commodity clusters augmented with application accelerators are evolving as competitive high performance computing systems. The Graphical Processing Unit (GPU), with a very high arithmetic density and performance per price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementation of the Basic Linear Algebra Subroutines (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, the authors study the performance of the memory copies and GEMM subroutines that are critical to porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single and double precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results are compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.

  13. Clustering Dycom

    KAUST Repository

    Minku, Leandro L.

    2017-10-06

    Background: Software Effort Estimation (SEE) can be formulated as an online learning problem, where new projects are completed over time and may become available for training. In this scenario, a Cross-Company (CC) SEE approach called Dycom can drastically reduce the number of Within-Company (WC) projects needed for training, saving the high cost of collecting such training projects. However, Dycom relies on splitting CC projects into different subsets in order to create its CC models. Such splitting can have a significant impact on Dycom's predictive performance. Aims: This paper investigates whether clustering methods can be used to help find good CC splits for Dycom. Method: Dycom is extended to use clustering methods for creating the CC subsets. Three different clustering methods are investigated, namely Hierarchical Clustering, K-Means, and Expectation-Maximisation (EM). Clustering Dycom is compared against the original Dycom with CC subsets of different sizes, based on four SEE databases. A baseline WC model is also included in the analysis. Results: Clustering Dycom with K-Means can potentially help to split the CC projects, managing to achieve similar or better predictive performance than Dycom. However, K-Means still requires the number of CC subsets to be pre-defined, and a poor choice can negatively affect predictive performance. EM enables Dycom to automatically set the number of CC subsets while still maintaining or improving predictive performance with respect to the baseline WC model. Clustering Dycom with Hierarchical Clustering did not offer a significant advantage in terms of predictive performance. Conclusion: Clustering methods can be an effective way to automatically generate Dycom's CC subsets.

  14. An efficient MPI/OpenMP parallelization of the Hartree–Fock–Roothaan method for the first generation of Intel® Xeon Phi™ processor architecture

    International Nuclear Information System (INIS)

    Mironov, Vladimir; Moskovsky, Alexander; D’Mello, Michael; Alexeev, Yuri

    2017-01-01

    The Hartree-Fock (HF) method in the quantum chemistry package GAMESS represents one of the most irregular algorithms in computation today. Major steps in the calculation are the irregular computation of electron repulsion integrals (ERIs) and the building of the Fock matrix. These are the central components of the main Self Consistent Field (SCF) loop, the key hotspot in Electronic Structure (ES) codes. By threading the MPI ranks in the official release of the GAMESS code, we not only speed up the main SCF loop (4x to 6x for large systems), but also achieve a significant (>2x) reduction in the overall memory footprint. These improvements are a direct consequence of memory access optimizations within the MPI ranks. We benchmark our implementation against the official release of the GAMESS code on the Intel® Xeon Phi™ supercomputer. Here, scaling numbers are reported on up to 7,680 cores on Intel Xeon Phi coprocessors.
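
    The "threaded MPI ranks" idea above is, at its core, a hybrid MPI/OpenMP structure: fewer ranks per node, each spawning threads that share read-only data, which is what shrinks the per-node memory footprint. The skeleton below is a hedged, generic sketch of that pattern, not GAMESS code; the dummy loop stands in for the ERI/Fock work.

        #include <mpi.h>
        #include <omp.h>
        #include <cstdio>

        int main(int argc, char** argv)
        {
            int provided = 0;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

            int rank = 0, size = 1;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            // Distribute "integral batches" block-cyclically over ranks, then let
            // OpenMP threads share the work inside each rank.
            double local = 0.0;
            #pragma omp parallel for reduction(+:local)
            for (int batch = rank; batch < 100000; batch += size)
                local += batch * 1e-6;                 // stand-in for ERI work

            double total = 0.0;
            MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                std::printf("ranks=%d  threads/rank=%d  checksum=%.3f\n",
                            size, omp_get_max_threads(), total);
            MPI_Finalize();
            return 0;
        }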

  15. Experience with low-power x86 processors (Atom) for HEP usage. An initial analysis of the Intel® dual core Atom™ N330 processor

    CERN Document Server

    Balazs, G; Nowak, A; CERN. Geneva. IT Department

    2009-01-01

    In this paper we compare a system based on an Intel Atom N330 low-power processor to a modern Intel Xeon® dual-socket server using CERN IT’s standard criteria for comparing price-performance and performance per watt. The Xeon server corresponds to what is typically acquired as servers in the LHC Computing Grid. The comparisons used public pricing information from November 2008. After the introduction in section 1, section 2 describes the hardware and software setup. In section 3 we describe the power measurements we did and in section 4 we discuss the throughput performance results. In section 5 we summarize our initial conclusions. We then go on to describe our long term vision and possible future scenarios for using such low-power processors, and finally we list interesting development directions.

  16. Clustering analysis

    International Nuclear Information System (INIS)

    Romli

    1997-01-01

    Cluster analysis is the name of a group of multivariate techniques whose principal purpose is to distinguish similar entities based on the characteristics they possess. Several algorithms can be used for this analysis, and this topic focuses on discussing them: similarity measures and hierarchical clustering, which includes the single linkage, complete linkage and average linkage methods, as well as the non-hierarchical clustering method popularly known as the K-means method. Finally, this paper describes the advantages and disadvantages of each method.
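
    To make the non-hierarchical K-means method mentioned above concrete, the sketch below runs the classic two-step iteration (assign each point to its nearest centroid, then move each centroid to the mean of its members) on a handful of made-up 2-D points. The seeding and the data are arbitrary; this is an illustration, not a production implementation.

        #include <array>
        #include <cstdio>
        #include <vector>

        using Point = std::array<double, 2>;

        static double dist2(const Point& a, const Point& b)
        {
            double dx = a[0] - b[0], dy = a[1] - b[1];
            return dx * dx + dy * dy;
        }

        int main()
        {
            std::vector<Point> pts = {{1,1},{1.2,0.8},{0.9,1.1},{8,8},{8.2,7.9},{7.8,8.1}};
            const int k = 2;
            std::vector<Point> centroids = {pts[0], pts[3]};   // naive seeding
            std::vector<int> label(pts.size(), -1);

            for (bool changed = true; changed; ) {
                changed = false;
                // Assignment step: nearest centroid for each point.
                for (std::size_t i = 0; i < pts.size(); ++i) {
                    int best = 0;
                    for (int c = 1; c < k; ++c)
                        if (dist2(pts[i], centroids[c]) < dist2(pts[i], centroids[best]))
                            best = c;
                    if (best != label[i]) { label[i] = best; changed = true; }
                }
                // Update step: each centroid becomes the mean of its members.
                for (int c = 0; c < k; ++c) {
                    Point sum = {0.0, 0.0}; int n = 0;
                    for (std::size_t i = 0; i < pts.size(); ++i)
                        if (label[i] == c) { sum[0] += pts[i][0]; sum[1] += pts[i][1]; ++n; }
                    if (n > 0) centroids[c] = {sum[0] / n, sum[1] / n};
                }
            }
            for (int c = 0; c < k; ++c)
                std::printf("cluster %d centre: (%.2f, %.2f)\n",
                            c, centroids[c][0], centroids[c][1]);
            return 0;
        }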

  17. Cluster analysis

    CERN Document Server

    Everitt, Brian S; Leese, Morven; Stahl, Daniel

    2011-01-01

    Cluster analysis comprises a range of methods for classifying multivariate data into subgroups. By organizing multivariate data into such subgroups, clustering can help reveal the characteristics of any structure or patterns present. These techniques have proven useful in a wide range of areas such as medicine, psychology, market research and bioinformatics.This fifth edition of the highly successful Cluster Analysis includes coverage of the latest developments in the field and a new chapter dealing with finite mixture models for structured data.Real life examples are used throughout to demons

  18. Cluster editing

    DEFF Research Database (Denmark)

    Böcker, S.; Baumbach, Jan

    2013-01-01

    The Cluster Editing problem asks to transform a graph into a disjoint union of cliques using a minimum number of edge modifications. Although the problem has been proven NP-complete several times, it has nevertheless attracted much research both from the theoretical and the applied side. The problem has been the inspiration for numerous algorithms in bioinformatics, aiming at clustering entities such as genes, proteins, phenotypes, or patients. In this paper, we review exact and heuristic methods that have been proposed for the Cluster Editing problem, and also applications...

  19. Heat dissipation for the Intel Core i5 processor using multiwalled carbon-nanotube-based ethylene glycol

    International Nuclear Information System (INIS)

    Thang, Bui Hung; Trinh, Pham Van; Quang, Le Dinh; Khoi, Phan Hong; Minh, Phan Ngoc; Huong, Nguyen Thi

    2014-01-01

    Carbon nanotubes (CNTs) are some of the most valuable materials with high thermal conductivity. The thermal conductivity of individual multiwalled carbon nanotubes (MWCNTs) grown by using chemical vapor deposition is 600 ± 100 Wm⁻¹K⁻¹, compared with the thermal conductivity of 419 Wm⁻¹K⁻¹ for Ag. Carbon-nanotube-based liquids, a new class of nanomaterials, have shown many interesting properties and distinctive features offering potential in heat dissipation applications for electronic devices, such as computer microprocessors, high power LEDs, etc. In this work, a multiwalled carbon-nanotube-based liquid was made of well-dispersed hydroxyl-functionalized multiwalled carbon nanotubes (MWCNT-OH) in ethylene glycol (EG)/distilled water (DW) solutions by using Tween-80 surfactant and an ultrasonication method. The concentration of MWCNT-OH in the EG/DW solutions ranged from 0.1 to 1.2 gram/liter. The dispersion of the MWCNT-OH-based EG/DW solutions was evaluated by using a Zeta-Sizer analyzer. The MWCNT-OH-based EG/DW solutions were used as coolants in the liquid cooling system for the Intel Core i5 processor. The thermal dissipation efficiency and the thermal response of the system were evaluated by directly measuring the temperature of the microprocessor using the Core Temp software and the temperature sensors built inside the microprocessor. The results confirmed the advantages of CNTs in thermal dissipation systems for computer processors and other high-power electronic devices.

  20. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT

    Science.gov (United States)

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired by several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying on 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high quality 3D visualization at PC price points. Optimizations in display driver, panel timing firmware, backlight hardware, eyewear optical stack, and synch mechanism combined can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could profusely benefit from the following calls to action:- 1) Adopt 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt 'IA-SIT Real Time Profile' for sub-100uS latency control (via BT Sig) to extend BT into S3D; and 4) Adopt 'IA-SIT Architecture' for Monitors and TVs to monetize via PC attach.

  1. Occupational Clusters.

    Science.gov (United States)

    Pottawattamie County School System, Council Bluffs, IA.

    The 15 occupational clusters (transportation, fine arts and humanities, communications and media, personal service occupations, construction, hospitality and recreation, health occupations, marine science occupations, consumer and homemaking-related occupations, agribusiness and natural resources, environment, public service, business and office…

  2. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi is a high performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. In this paper, we discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, getting good performance required utilizing multiple cores, using the wide vector operations, and making efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
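
    The optimizations the abstract lists (multiple cores, wide vectors, memory layout) usually come down to keeping an independent index innermost and contiguous so it can be vectorized, while threads split the remaining work. The following C++ sketch shows that loop structure with a made-up saturation-adjustment kernel; it is only an illustration of the pattern, not the Purdue-Lin scheme (which is Fortran) itself.

        #include <cstdio>
        #include <vector>
        #include <omp.h>

        // qv, qc are stored as [level][column] with the column index contiguous.
        void saturation_adjust(std::vector<float>& qv, std::vector<float>& qc,
                               int ncol, int nlev, float qsat)
        {
            #pragma omp parallel for                    // threads over levels
            for (int k = 0; k < nlev; ++k) {
                float* v = &qv[k * ncol];
                float* c = &qc[k * ncol];
                #pragma omp simd                        // vectorize over columns
                for (int i = 0; i < ncol; ++i) {
                    float excess = v[i] - qsat;         // super-saturation
                    if (excess > 0.0f) { v[i] -= excess; c[i] += excess; }
                }
            }
        }

        int main()
        {
            int ncol = 1 << 16, nlev = 40;
            std::vector<float> qv(ncol * nlev, 0.02f), qc(ncol * nlev, 0.0f);
            saturation_adjust(qv, qc, ncol, nlev, 0.015f);
            std::printf("qc[0] = %g\n", qc[0]);
            return 0;
        }

    In a real microphysics scheme the vertical levels within a column interact, so the threaded loop would run over blocks of columns rather than over levels; the point here is only the contiguous, vectorizable innermost index.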

  3. Effectiveness of the Assessment of Burden of COPD (ABC) tool on health-related quality of life in patients with COPD: a cluster randomised controlled trial in primary and hospital care

    Science.gov (United States)

    Slok, Annerika H M; Kotz, Daniel; van Breukelen, Gerard; Chavannes, Niels H; Rutten-van Mölken, Maureen P M H; Kerstjens, Huib A M; van der Molen, Thys; Asijee, Guus M; Dekhuijzen, P N Richard; Holverda, Sebastiaan; Salomé, Philippe L; Goossens, Lucas M A; Twellaar, Mascha; in ‘t Veen, Johannes C C M; van Schayck, Onno C P

    2016-01-01

    Objective Assessing the effectiveness of the Assessment of Burden of COPD (ABC) tool on disease-specific quality of life in patients with chronic obstructive pulmonary disease (COPD) measured with the St. George's Respiratory Questionnaire (SGRQ), compared with usual care. Methods A pragmatic cluster randomised controlled trial, in 39 Dutch primary care practices and 17 hospitals, with 357 patients with COPD (postbronchodilator FEV1/FVC ratio <0.7) aged ≥40 years, who could understand and read the Dutch language. Healthcare providers were randomly assigned to the intervention or control group. The intervention group applied the ABC tool, which consists of a short validated questionnaire assessing the experienced burden of COPD, objective COPD parameter (eg, lung function) and a treatment algorithm including a visual display and treatment advice. The control group provided usual care. Researchers were blinded to group allocation during analyses. Primary outcome was the number of patients with a clinically relevant improvement in SGRQ score between baseline and 18-month follow-up. Secondary outcomes were the COPD Assessment Test (CAT) and the Patient Assessment of Chronic Illness Care (PACIC; a measurement of perceived quality of care). Results At 18-month follow-up, 34% of the 146 patients from 27 healthcare providers in the intervention group showed a clinically relevant improvement in the SGRQ, compared with 22% of the 148 patients from 29 healthcare providers in the control group (OR 1.85, 95% CI 1.08 to 3.16). No difference was found on the CAT (−0.26 points (scores ranging from 0 to 40); 95% CI −1.52 to 0.99). The PACIC showed a higher improvement in the intervention group (0.32 points (scores ranging from 1 to 5); 95% CI 0.14 to 0.50). Conclusions This study showed that use of the ABC tool may increase quality of life and perceived quality of care. Trial registration number NTR3788; Results. PMID:27401361

  4. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  5. Super computer made with Linux cluster

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Oh, Yeong Eun; Kim, Jeong Seok

    2002-01-01

    This book consists of twelve chapters, which introduce a supercomputer built as a Linux cluster. The contents of this book are: the Linux cluster, the principles of clusters, the design of a Linux cluster, Linux basics, building up a terminal server and client, a Beowulf cluster with Debian GNU/Linux, a cluster system with Red Hat, monitoring systems, application programming with MPI including its set-up and installation, application programming with PVM including PVM programming and XPVM, application programming with OpenPBS including its composition, installation and set-up, and GRID, covering the GRID system, GSI, GRAM, MDS, their installation and the use of the toolkit

  6. Cluster generator

    Science.gov (United States)

    Donchev, Todor I [Urbana, IL]; Petrov, Ivan G [Champaign, IL]

    2011-05-31

    Described herein is an apparatus and a method for producing atom clusters based on a gas discharge within a hollow cathode. The hollow cathode includes one or more walls. The one or more walls define a sputtering chamber within the hollow cathode and include a material to be sputtered. A hollow anode is positioned at an end of the sputtering chamber, and atom clusters are formed when a gas discharge is generated between the hollow anode and the hollow cathode.

  7. Cluster Bulleticity

    OpenAIRE

    Massey, Richard; Kitching, Thomas; Nagai, Daisuke

    2010-01-01

    The unique properties of dark matter are revealed during collisions between clusters of galaxies, such as the bullet cluster (1E 0657−56) and baby bullet (MACS J0025−12). These systems provide evidence for an additional, invisible mass in the separation between the distributions of their total mass, measured via gravitational lensing, and their ordinary ‘baryonic’ matter, measured via its X-ray emission. Unfortunately, the information available from these systems is limited by their rarity. C...

  8. Cluster headache

    OpenAIRE

    Leroux, Elizabeth; Ducros, Anne

    2008-01-01

    Abstract Cluster headache (CH) is a primary headache disease characterized by recurrent short-lasting attacks (15 to 180 minutes) of excruciating unilateral periorbital pain accompanied by ipsilateral autonomic signs (lacrimation, nasal congestion, ptosis, miosis, lid edema, redness of the eye). It affects young adults, predominantly males. Prevalence is estimated at 0.5–1.0/1,000. CH has a circannual and circadian periodicity, attacks being clustered (hence the name) in bouts that can occur ...

  9. Design of the Lifestyle Interventions for severe mentally ill Outpatients in the Netherlands (LION) trial; a cluster randomised controlled study of a multidimensional web tool intervention to improve cardiometabolic health in patients with severe mental illness.

    Science.gov (United States)

    Looijmans, Anne; Jörg, Frederike; Bruggeman, Richard; Schoevers, Robert; Corpeleijn, Eva

    2017-03-21

    The cardiometabolic health of persons with a severe mental illness (SMI) is alarming with obesity rates of 45-55% and diabetes type 2 rates of 10-15%. Unhealthy lifestyle behaviours play a large role in this. Despite the multidisciplinary guideline for SMI patients recommending to monitor and address patients' lifestyle, most mental health care professionals have limited lifestyle-related knowledge and skills, and (lifestyle) treatment protocols are lacking. Evidence-based practical lifestyle tools may support both patients and staff in improving patients' lifestyle. This paper describes the Lifestyle Interventions for severe mentally ill Outpatients in the Netherlands (LION) trial, to investigate whether a multidimensional lifestyle intervention using a web tool can be effective in improving cardiometabolic health in SMI patients. The LION study is a 12-month pragmatic single-blind multi-site cluster randomised controlled trial. 21 Flexible Assertive Community Treatment (ACT) teams and eight sheltered living teams of five mental health organizations in the Netherlands are invited to participate. Per team, nurses are trained in motivational interviewing and use of the multidimensional web tool, covering lifestyle behaviour awareness, lifestyle knowledge, motivation and goal setting. Nurses coach patients to change their lifestyle using the web tool, motivational interviewing and stages-of-change techniques during biweekly sessions in a) assessing current lifestyle behaviour using the traffic light method (healthy behaviours colour green, unhealthy behaviours colour red), b) creating a lifestyle plan with maximum three attainable lifestyle goals and c) discussing the lifestyle plan regularly. The study population is SMI patients and statistical inference is on patient level using multilevel analyses. Primary outcome is waist circumference and other cardiometabolic risk factors after six and twelve months intervention, which are measured as part of routine outcome

  10. The impact of a knowledge translation intervention employing educational outreach and a point-of-care reminder tool vs standard lay health worker training on tuberculosis treatment completion rates: study protocol for a cluster randomized controlled trial.

    Science.gov (United States)

    Puchalski Ritchie, Lisa M; van Lettow, Monique; Makwakwa, Austine; Chan, Adrienne K; Hamid, Jemila S; Kawonga, Harry; Martiniuk, Alexandra L C; Schull, Michael J; van Schoor, Vanessa; Zwarenstein, Merrick; Barnsley, Jan; Straus, Sharon E

    2016-09-07

    Despite availability of effective treatment, tuberculosis (TB) remains an important cause of morbidity and mortality globally, with low- and middle-income countries most affected. In many such settings, including Malawi, the high burden of disease and severe shortage of skilled healthcare workers has led to task-shifting of outpatient TB care to lay health workers (LHWs). LHWs improve access to healthcare and some outcomes, including TB completion rates, but lack of training and supervision limit their impact. The goals of this study are to improve TB care provided by LHWs in Malawi by refining, implementing, and evaluating a knowledge translation strategy designed to address a recognized gap in LHWs' TB and job-specific knowledge and, through this, to improve patient outcomes. We are employing a mixed-methods design that includes a pragmatic cluster randomized controlled trial and a process evaluation using qualitative methods. Trial participants will include all health centers providing TB care in four districts in the South East Zone of Malawi. The intervention employs educational outreach, a point-of-care reminder tool, and a peer support network. The primary outcome is proportion of treatment successes, defined as the total of TB patients cured or completing treatment, with outcomes taken from Ministry of Health treatment records. With an alpha of 0.05, power of 0.80, a baseline treatment success of 0.80, intraclass correlation coefficient of 0.1 based on our pilot study, and an estimated 100 clusters (health centers providing TB care), a minimum of 6 patients per cluster is required to detect a clinically significant 0.10 increase in the proportion of treatment successes. Our process evaluation will include interviews with LHWs and patients, and a document analysis of LHW training logs, quarterly peer trainer meetings, and mentorship meeting notes. An estimated 10-15 LHWs and 10-15 patients will be required to reach saturation in each of 2 planned interview
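
    For readers unfamiliar with the sample size reasoning above, the standard cluster-randomised design-effect relation (stated here for illustration, not quoted from the protocol) links the quantities mentioned:

        \[
          n_{\text{cluster}} = n_{\text{individual}} \times \mathrm{DEFF},
          \qquad \mathrm{DEFF} = 1 + (m - 1)\,\rho ,
        \]

    where m is the number of patients per cluster and ρ is the intraclass correlation coefficient; with the quoted cluster size of 6 and ICC of 0.1, the design effect would be 1.5, so each cluster-randomised arm needs roughly 1.5 times the individually randomised sample size.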

  11. Acceleration of Cherenkov angle reconstruction with the new Intel Xeon/FPGA compute platform for the particle identification in the LHCb Upgrade

    Science.gov (United States)

    Faerber, Christian

    2017-10-01

    The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a ‘triggerless’ readout scheme, where all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the Event Filter farm to 40 TBit/s, which also has to be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging task, and several compute accelerator technologies are being considered for use inside the new Event Filter farm. In the high performance computing sector, more and more FPGA compute accelerators are used to improve compute performance and reduce power consumption (e.g. in the Microsoft Catapult project and the Bing search engine). For the LHCb upgrade, the usage of an experimental FPGA-accelerated computing platform in the Event Building or in the Event Filter farm is also being considered and therefore tested. This platform from Intel hosts a general CPU and a high performance FPGA linked via a high speed link, which for this platform is a QPI link. An accelerator is implemented on the FPGA. The system used is a two-socket platform from Intel with a Xeon CPU and an FPGA. The FPGA has cache-coherent memory access to the main memory of the server and can collaborate with the CPU. As a first step, a computing-intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported in Verilog to the Intel Xeon/FPGA platform and accelerated by a factor of 35. The same algorithm was ported to the Intel Xeon/FPGA platform with OpenCL. The implementation work and the performance will be compared. Another FPGA accelerator, the Nallatech 385 PCIe accelerator with the same Stratix V FPGA, was also tested for performance. The results show that the Intel

  12. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990

  13. Heat dissipation for the Intel Core i5 processor using multiwalled carbon-nanotube-based ethylene glycol

    Energy Technology Data Exchange (ETDEWEB)

    Thang, Bui Hung; Trinh, Pham Van; Quang, Le Dinh; Khoi, Phan Hong; Minh, Phan Ngoc [Vietnam Academy of Science and Technology, Ho Chi Minh CIty (Viet Nam); Huong, Nguyen Thi [Hanoi University of Science, Hanoi (Viet Nam); Vietnam National University, Hanoi (Viet Nam)

    2014-08-15

    Carbon nanotubes (CNTs) are some of the most valuable materials with high thermal conductivity. The thermal conductivity of individual multiwalled carbon nanotubes (MWCNTs) grown by using chemical vapor deposition is 600 ± 100 Wm⁻¹K⁻¹, compared with the thermal conductivity of 419 Wm⁻¹K⁻¹ for Ag. Carbon-nanotube-based liquids, a new class of nanomaterials, have shown many interesting properties and distinctive features offering potential in heat dissipation applications for electronic devices, such as computer microprocessors, high power LEDs, etc. In this work, a multiwalled carbon-nanotube-based liquid was made of well-dispersed hydroxyl-functionalized multiwalled carbon nanotubes (MWCNT-OH) in ethylene glycol (EG)/distilled water (DW) solutions by using Tween-80 surfactant and an ultrasonication method. The concentration of MWCNT-OH in the EG/DW solutions ranged from 0.1 to 1.2 gram/liter. The dispersion of the MWCNT-OH-based EG/DW solutions was evaluated by using a Zeta-Sizer analyzer. The MWCNT-OH-based EG/DW solutions were used as coolants in the liquid cooling system for the Intel Core i5 processor. The thermal dissipation efficiency and the thermal response of the system were evaluated by directly measuring the temperature of the microprocessor using the Core Temp software and the temperature sensors built inside the microprocessor. The results confirmed the advantages of CNTs in thermal dissipation systems for computer processors and other high-power electronic devices.

  14. Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2

    Science.gov (United States)

    Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad

    1995-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can be effectively parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs to match the actual costs relative to changes in the number of grid points. By increasing the number of processors, slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.

  15. Speckle imaging of globular clusters

    International Nuclear Information System (INIS)

    Sams, B.J. III

    1990-01-01

    Speckle imaging is a powerful tool for high resolution astronomy. Its application to the core regions of globular clusters produces high resolution stellar maps of the bright stars, but is unable to image the faint stars which are the most reliable dynamical indicators. The limits on resolving these faint, extended objects are physical, not algorithmic, and cannot be overcome using speckle. High resolution maps may be useful for resolving multicomponent stellar systems in the cluster centers. 30 refs

  16. Genetic algorithm based two-mode clustering of metabolomics data

    NARCIS (Netherlands)

    Hageman, J.A.; van den Berg, R.A.; Westerhuis, J.A.; van der Werf, M.J.; Smilde, A.K.

    2008-01-01

    Metabolomics and other omics tools are generally characterized by large data sets with many variables obtained under different environmental conditions. Clustering methods and more specifically two-mode clustering methods are excellent tools for analyzing this type of data. Two-mode clustering

  17. Clustering Dycom

    KAUST Repository

    Minku, Leandro L.; Hou, Siqing

    2017-01-01

    baseline WC model is also included in the analysis. Results: Clustering Dycom with K-Means can potentially help to split the CC projects, managing to achieve similar or better predictive performance than Dycom. However, K-Means still requires the number

  18. Computational Design of Clusters for Catalysis

    Science.gov (United States)

    Jimenez-Izal, Elisa; Alexandrova, Anastassia N.

    2018-04-01

    When small clusters are studied in chemical physics or physical chemistry, one perhaps thinks of the fundamental aspects of cluster electronic structure, or precision spectroscopy in ultracold molecular beams. However, small clusters are also of interest in catalysis, where the cold ground state or an isolated cluster may not even be the right starting point. Instead, the big question is: What happens to cluster-based catalysts under real conditions of catalysis, such as high temperature and coverage with reagents? Myriads of metastable cluster states become accessible, the entire system is dynamic, and catalysis may be driven by rare sites present only under those conditions. Activity, selectivity, and stability are highly dependent on size, composition, shape, support, and environment. To probe and master cluster catalysis, sophisticated tools are being developed for precision synthesis, operando measurements, and multiscale modeling. This review intends to tell the messy story of clusters in catalysis.

  19. Cluster analysis of received constellations for optical performance monitoring

    NARCIS (Netherlands)

    van Weerdenburg, J.J.A.; van Uden, R.; Sillekens, E.; de Waardt, H.; Koonen, A.M.J.; Okonkwo, C.

    2016-01-01

    Performance monitoring based on centroid clustering to investigate constellation generation offsets. The tool allows flexibility in constellation generation tolerances by forwarding centroids to the demapper. The relation of fibre nonlinearities and singular value decomposition of intra-cluster

  20. Cluster forcing

    DEFF Research Database (Denmark)

    Christensen, Thomas Budde

    The cluster theory attributed to Michael Porter has significantly influenced industrial policies in countries across Europe and North America since the beginning of the 1990s. Institutions such as the EU, OECD and the World Bank and governments in countries such as the UK, France, The Netherlands...... or management. Both the Accelerate Wales and the Accelerate Cluster programmes target this issue by trying to establish networks between companies that can be used to supply knowledge from research institutions to manufacturing companies. The paper concludes that public sector interventions can make...... businesses. The universities were not considered by the participating companies to be important parts of the local business environment and inputs from universities did not appear to be an important source to access knowledge about new product development or new techniques in production, distribution...

  1. Regional Innovation Clusters

    Data.gov (United States)

    Small Business Administration — The Regional Innovation Clusters serve a diverse group of sectors and geographies. Three of the initial pilot clusters, termed Advanced Defense Technology clusters,...

  2. Cluster analysis

    OpenAIRE

    Mucha, Hans-Joachim; Sofyan, Hizir

    2000-01-01

    As an explorative technique, cluster analysis provides a description or a reduction in the dimension of the data. It classifies a set of observations into two or more mutually exclusive unknown groups based on combinations of many variables. Its aim is to construct groups in such a way that the profiles of objects in the same groups are relatively homogeneous whereas the profiles of objects in different groups are relatively heterogeneous. Clustering is distinct from classification techniques, ...

  3. Cluster algebras in mathematical physics

    International Nuclear Information System (INIS)

    Francesco, Philippe Di; Gekhtman, Michael; Kuniba, Atsuo; Yamazaki, Masahito

    2014-01-01

    This special issue of Journal of Physics A: Mathematical and Theoretical contains reviews and original research articles on cluster algebras and their applications to mathematical physics. Cluster algebras were introduced by S Fomin and A Zelevinsky around 2000 as a tool for studying total positivity and dual canonical bases in Lie theory. Since then the theory has found diverse applications in mathematics and mathematical physics. Cluster algebras are axiomatically defined commutative rings equipped with a distinguished set of generators (cluster variables) subdivided into overlapping subsets (clusters) of the same cardinality subject to certain polynomial relations. A cluster algebra of rank n can be viewed as a subring of the field of rational functions in n variables. Rather than being presented, at the outset, by a complete set of generators and relations, it is constructed from the initial seed via an iterative procedure called mutation producing new seeds successively to generate the whole algebra. A seed consists of an n-tuple of rational functions called cluster variables and an exchange matrix controlling the mutation. Relations of cluster algebra type can be observed in many areas of mathematics (Plücker and Ptolemy relations, Stokes curves and wall-crossing phenomena, Feynman integrals, Somos sequences and Hirota equations to name just a few examples). The cluster variables enjoy a remarkable combinatorial pattern; in particular, they exhibit the Laurent phenomenon: they are expressed as Laurent polynomials rather than more general rational functions in terms of the cluster variables in any seed. These characteristic features are often referred to as the cluster algebra structure. In the last decade, it became apparent that cluster structures are ubiquitous in mathematical physics. Examples include supersymmetric gauge theories, Poisson geometry, integrable systems, statistical mechanics, fusion products in infinite dimensional algebras, dilogarithm
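
    As a concrete illustration of the mutation rule and the Laurent phenomenon described above (the standard rank-2, type A2 textbook example, not taken from this special issue), starting from the seed (x_1, x_2) and mutating repeatedly gives:

        \[
          x_3 = \frac{1 + x_2}{x_1}, \qquad
          x_4 = \frac{1 + x_3}{x_2} = \frac{1 + x_1 + x_2}{x_1 x_2}, \qquad
          x_5 = \frac{1 + x_4}{x_3} = \frac{1 + x_1}{x_2},
        \]
        \[
          x_6 = \frac{1 + x_5}{x_4} = x_1, \qquad
          x_7 = \frac{1 + x_6}{x_5} = x_2 .
        \]

    The sequence is periodic with period five, and every cluster variable is a Laurent polynomial in x_1 and x_2 even though each mutation divides by a cluster variable; this is the Laurent phenomenon mentioned above.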

  4. Educational Outreach with an Integrated Clinical Tool for Nurse-Led Non-communicable Chronic Disease Management in Primary Care in South Africa: A Pragmatic Cluster Randomised Controlled Trial.

    Science.gov (United States)

    Fairall, Lara R; Folb, Naomi; Timmerman, Venessa; Lombard, Carl; Steyn, Krisela; Bachmann, Max O; Bateman, Eric D; Lund, Crick; Cornick, Ruth; Faris, Gill; Gaziano, Thomas; Georgeu-Pepper, Daniella; Zwarenstein, Merrick; Levitt, Naomi S

    2016-11-01

    In many low-income countries, care for patients with non-communicable diseases (NCDs) and mental health conditions is provided by nurses. The benefits of nurse substitution and supplementation in NCD care in high-income settings are well recognised, but evidence from low- and middle-income countries is limited. Primary Care 101 (PC101) is a programme designed to support and expand nurses' role in NCD care, comprising educational outreach to nurses and a clinical management tool with enhanced prescribing provisions. We evaluated the effect of the programme on primary care nurses' capacity to manage NCDs. In a cluster randomised controlled trial design, 38 public sector primary care clinics in the Western Cape Province, South Africa, were randomised. Nurses in the intervention clinics were trained to use the PC101 management tool during educational outreach sessions delivered by health department trainers and were authorised to prescribe an expanded range of drugs for several NCDs. Control clinics continued use of the Practical Approach to Lung Health and HIV/AIDS in South Africa (PALSA PLUS) management tool and usual training. Patients attending these clinics with one or more of hypertension (3,227), diabetes (1,842), chronic respiratory disease (1,157) or who screened positive for depression (2,466), totalling 4,393 patients, were enrolled between 28 March 2011 and 10 November 2011. Primary outcomes were treatment intensification in the hypertension, diabetes, and chronic respiratory disease cohorts, defined as the proportion of patients in whom treatment was escalated during follow-up over 14 mo, and case detection in the depression cohort. Primary outcome data were analysed for 2,110 (97%) intervention and 2,170 (97%) control group patients. Treatment intensification rates in intervention clinics were not superior to those in the control clinics (hypertension: 44% in the intervention group versus 40% in the control group, risk ratio [RR] 1.08 [95% CI 0.94 to 1

  5. Clustering methods for the optimization of atomic cluster structure

    Science.gov (United States)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired from the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge amount of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.

  6. CORECLUSTER: A Degeneracy Based Graph Clustering Framework

    OpenAIRE

    Giatsidis, Christos; Malliaros, Fragkiskos; Thilikos, Dimitrios M.; Vazirgiannis, Michalis

    2014-01-01

    International audience; Graph clustering or community detection constitutes an important task for investigating the internal structure of graphs, with a plethora of applications in several domains. Traditional tools for graph clustering, such as spectral methods, typically suffer from high time and space complexity. In this article, we present CoreCluster, an efficient graph clustering framework based on the concept of graph degeneracy, that can be used along with any known graph clusteri...

  7. Application of Intel Many Integrated Core (MIC) architecture to the Yonsei University planetary boundary layer scheme in Weather Research and Forecasting model

    Science.gov (United States)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecasting (WRF) model provides operational services worldwide in many areas and is linked to our daily activities, in particular during severe weather events. The Yonsei University (YSU) scheme is one of the planetary boundary layer (PBL) models in WRF. The PBL is responsible for vertical sub-grid-scale fluxes due to eddy transports in the whole atmospheric column, determines the flux profiles within the well-mixed boundary layer and the stable layer, and thus provides atmospheric tendencies of temperature, moisture (including clouds), and horizontal momentum in the entire atmospheric column. The YSU scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. To accelerate the computation of the YSU scheme, we employ the Intel Many Integrated Core (MIC) architecture, a many-core coprocessor design well suited to parallelization and vectorization. Our results show that the MIC-based optimization improved the performance of the first version of the multi-threaded code on the Xeon Phi 5110P by a factor of 2.4x. Furthermore, the same CPU-based optimizations improved the performance on an Intel Xeon E5-2603 by a factor of 1.6x compared to the first version of the multi-threaded code.

  8. Parallel algorithms for large-scale biological sequence alignment on Xeon-Phi based clusters.

    Science.gov (United States)

    Lan, Haidong; Chan, Yuandong; Xu, Kai; Schmidt, Bertil; Peng, Shaoliang; Liu, Weiguo

    2016-07-19

    Computing alignments between two or more sequences is a common operation frequently performed in computational molecular biology. The continuing growth of biological sequence databases establishes the need for their efficient parallel implementation on modern accelerators. This paper presents new approaches to high performance biological sequence database scanning with the Smith-Waterman algorithm and the first stage of progressive multiple sequence alignment based on the ClustalW heuristic on a Xeon Phi-based compute cluster. Our approach uses a three-level parallelization scheme to take full advantage of the compute power available on this type of architecture; i.e. cluster-level data parallelism, thread-level coarse-grained parallelism, and vector-level fine-grained parallelism. Furthermore, we re-organize the sequence datasets and use Xeon Phi shuffle operations to improve I/O efficiency. Evaluations show that our method achieves a peak overall performance up to 220 GCUPS for scanning real protein sequence databanks on a single node consisting of two Intel E5-2620 CPUs and two Intel Xeon Phi 7110P cards. It also exhibits good scalability in terms of sequence length and size, and number of compute nodes for both database scanning and multiple sequence alignment. Furthermore, the achieved performance is highly competitive in comparison to optimized Xeon Phi and GPU implementations. Our implementation is available at https://github.com/turbo0628/LSDBS-mpi .
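    For readers unfamiliar with the kernel being parallelized, the following is a minimal single-pair Smith-Waterman sketch in plain Python (no cluster-, thread-, or vector-level parallelism; the match/mismatch/gap scores are illustrative rather than taken from the paper).

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local-alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

if __name__ == "__main__":
    print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```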

  9. A static analysis tool set for assembler code verification

    International Nuclear Information System (INIS)

    Dhodapkar, S.D.; Bhattacharjee, A.K.; Sen, Gopa

    1991-01-01

    Software Verification and Validation (V and V) is an important step in assuring reliability and quality of the software. The verification of program source code forms an important part of the overall V and V activity. The static analysis tools described here are useful in verification of assembler code. The tool set consists of static analysers for Intel 8086 and Motorola 68000 assembly language programs. The analysers examine the program source code and generate information about control flow within the program modules, unreachable code, well-formation of modules, call dependency between modules etc. The analysis of loops detects unstructured loops and syntactically infinite loops. Software metrics relating to size and structural complexity are also computed. This report describes the salient features of the design, implementation and the user interface of the tool set. The outputs generated by the analyser are explained using examples taken from some projects analysed by this tool set. (author). 7 refs., 17 figs

  10. Cluster growing process and a sequence of magic numbers

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Solov'yov, Andrey V.; Greiner, Walter

    2003-01-01

    We present a new theoretical framework for modeling the cluster growing process. Starting from the initial tetrahedral cluster configuration, adding new atoms to the system, and absorbing its energy at each step, we find cluster growing paths up to the cluster sizes of more than 100 atoms. We demonstrate that in this way all known global minimum structures of the Lennard-Jones (LJ) clusters can be found. Our method provides an efficient tool for the calculation and analysis of atomic cluster structure. With its use we justify the magic number sequence for the clusters of noble gas atoms...

  11. Spectral-element simulation of two-dimensional elastic wave propagation in fully heterogeneous media on a GPU cluster

    Science.gov (United States)

    Rudianto, Indra; Sudarmaji

    2018-04-01

    We present an implementation of the spectral-element method for simulation of two-dimensional elastic wave propagation in fully heterogeneous media. We have incorporated most of the realistic geological features in the model, including surface topography, curved layer interfaces, and 2-D wave-speed heterogeneity. To accommodate such complexity, we use an unstructured quadrilateral meshing technique. The simulation was performed on a GPU cluster, which consists of 24-core Intel Xeon CPU processors and 4 NVIDIA Quadro graphics cards, using a CUDA and MPI implementation. We speed up the computation by a factor of about 5 compared to MPI only, and by a factor of about 40 compared to a serial implementation.

  12. Nuclear clustering - a cluster core model study

    International Nuclear Information System (INIS)

    Paul Selvi, G.; Nandhini, N.; Balasubramaniam, M.

    2015-01-01

    Nuclear clustering, like other clustering phenomena in nature, is a much-warranted study, since it would help us understand the nature of the binding of nucleons inside the nucleus, closed-shell behaviour when the system is highly deformed, and dynamics and structure at the extremes. Several models account for the clustering phenomenon in nuclei. We present in this work a cluster core model study of nuclear clustering in light mass nuclei

  13. X-ray emission: a tool and a probe for laser-cluster interaction; L'emission X: un outil et une sonde pour l'interaction laser - agregats

    Energy Technology Data Exchange (ETDEWEB)

    Prigent, Ch

    2004-12-01

    In intense laser-cluster interactions, the experimental results show a strong energetic coupling between radiation and matter. We have measured absolute X-ray yields and charge state distributions under well-controlled conditions as a function of the physical parameters governing the interaction, namely the laser intensity, pulse duration, wavelength or polarization state of the laser light, and the size and species of the clusters (Ar, Kr, Xe). We have highlighted, for the first time, a very low intensity threshold for X-ray production (≈2×10^14 W/cm^2 for a pulse duration of 300 fs), which can result from an effect of the dynamical polarisation of clusters in an intense electric field. A weak dependence of the absolute X-ray yields on the wavelength (400 nm / 800 nm) has been found. Moreover, we have observed a saturation of the X-ray emission probability below a critical cluster size. (author)

  14. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters with scales of 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community

  15. Heterogeneous Gpu&Cpu Cluster For High Performance Computing In Cryptography

    Directory of Open Access Journals (Sweden)

    Michał Marks

    2012-01-01

    Full Text Available This paper addresses issues associated with distributed computing systems and the application of mixed GPU&CPU technology to data encryption and decryption algorithms. We describe a heterogeneous cluster HGCC formed by two types of nodes: Intel processor with NVIDIA graphics processing unit and AMD processor with AMD graphics processing unit (formerly ATI), and a novel software framework that hides the heterogeneity of our cluster and provides tools for solving complex scientific and engineering problems. Finally, we present the results of numerical experiments. The considered case study is concerned with parallel implementations of selected cryptanalysis algorithms. The main goal of the paper is to show the wide applicability of the GPU&CPU technology to large scale computation and data processing.

  16. Educational program on HPC technologies based on the heterogeneous cluster HybriLIT (LIT JINR

    Directory of Open Access Journals (Sweden)

    Vladimir V. Korenkov

    2017-12-01

    Full Text Available The article highlights the issues of training personnel for work with high-performance computing (HPC) systems, as well as of supporting the software and information environment necessary for the efficient use of heterogeneous computing resources and the development of parallel and hybrid applications. The heterogeneous computing cluster HybriLIT, which is one of the components of the Multifunctional Information and Computing Complex of JINR, is used as the main platform for training and re-training specialists, as well as for training students, graduate students and young scientists. The HybriLIT cluster is a dynamic, actively developing structure, incorporating the most advanced HPC computing architectures (graphics accelerators, Intel Xeon Phi coprocessors), and it also has a developed software and information environment, which, in turn, makes it possible to build educational programmes at an up-to-date level and enables learners to master both modern computing platforms and modern IT technologies.

  17. Star clusters and K2

    Science.gov (United States)

    Dotson, Jessie; Barentsen, Geert; Cody, Ann Marie

    2018-01-01

    The K2 survey has expanded the Kepler legacy by using the repurposed spacecraft to observe over 20 star clusters. The sample includes open and globular clusters at all ages, including very young (1-10 Myr, e.g. Taurus, Upper Sco, NGC 6530), moderately young (0.1-1 Gyr, e.g. M35, M44, Pleiades, Hyades), middle-aged (e.g. M67, Ruprecht 147, NGC 2158), and old globular clusters (e.g. M9, M19, Terzan 5). K2 observations of stellar clusters are exploring the rotation period-mass relationship to significantly lower masses than was previously possible, shedding light on the angular momentum budget and its dependence on mass and circumstellar disk properties, and illuminating the role of multiplicity in stellar angular momentum. Exoplanets discovered by K2 in stellar clusters provide planetary systems ripe for modeling given the extensive information available about their ages and environment. I will review the star clusters sampled by K2 across 16 fields so far, highlighting several characteristics, caveats, and unexplored uses of the public data set along the way. With fuel expected to run out in 2018, I will discuss the closing Campaigns, highlight the final target selection opportunities, and explain the data archive and TESS-compatible software tools the K2 mission intends to leave behind for posterity.

  18. Cluster headache

    Directory of Open Access Journals (Sweden)

    Ducros Anne

    2008-07-01

    Full Text Available Abstract Cluster headache (CH) is a primary headache disease characterized by recurrent short-lasting attacks (15 to 180 minutes) of excruciating unilateral periorbital pain accompanied by ipsilateral autonomic signs (lacrimation, nasal congestion, ptosis, miosis, lid edema, redness of the eye). It affects young adults, predominantly males. Prevalence is estimated at 0.5–1.0/1,000. CH has a circannual and circadian periodicity, attacks being clustered (hence the name) in bouts that can occur during specific months of the year. Alcohol is the only dietary trigger of CH; strong odors (mainly solvents and cigarette smoke) and napping may also trigger CH attacks. During bouts, attacks may happen at precise hours, especially during the night. During the attacks, patients tend to be restless. CH may be episodic or chronic, depending on the presence of remission periods. CH is associated with trigeminovascular activation and neuroendocrine and vegetative disturbances; however, the precise causative mechanisms remain unknown. Involvement of the hypothalamus (a structure regulating endocrine function and sleep-wake rhythms) has been confirmed, explaining, at least in part, the cyclic aspects of CH. The disease is familial in about 10% of cases. Genetic factors play a role in CH susceptibility, and a causative role has been suggested for the hypocretin receptor gene. Diagnosis is clinical. Differential diagnoses include other primary headache diseases such as migraine, paroxysmal hemicrania and SUNCT syndrome. At present, there is no curative treatment. There are efficient treatments to shorten the painful attacks (acute treatments) and to reduce the number of daily attacks (prophylactic treatments). Acute treatment is based on subcutaneous administration of sumatriptan and high-flow oxygen. Verapamil, lithium, methysergide, prednisone, greater occipital nerve blocks and topiramate may be used for prophylaxis. In refractory cases, deep-brain stimulation of the

  19. Simulating the Euclidean time Schroedinger equations using an Intel iPSC/860 hypercube: Application to the t-J model of high-Tc superconductivity

    International Nuclear Information System (INIS)

    Kovarik, M.D.; Barnes, T.; Tennessee Univ., Knoxville, TN

    1993-01-01

    We describe a Monte Carlo simulation of a dynamical fermion problem in two spatial dimensions on an Intel iPSC/860 hypercube. The problem studied is the determination of the dispersion relation of a dynamical hole in the t-J model of the high temperature superconductors. Since this problem involves the motion of many fermions in more than one spatial dimension, it is representative of the class of systems that suffer from the ''minus sign problem'' of dynamical fermions which has made Monte Carlo simulation very difficult. We demonstrate that for small values of the hole hopping parameter one can extract the entire hole dispersion relation using the GRW Monte Carlo algorithm, which is a simulation of the Euclidean time Schroedinger equation, and present results on 4 x 4 and 6 x 6 lattices. Generalization to physical hopping parameter values will only require use of an improved trial wavefunction for importance sampling
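    As background on what "simulating the Euclidean time Schroedinger equation" with random walkers means, the sketch below projects onto the ground state of a toy 1-D harmonic oscillator by imaginary-time diffusion with birth/death weighting. This is a generic diffusion-Monte-Carlo illustration under simplifying assumptions, not the GRW algorithm or the t-J model calculation described in the record; the exact ground-state energy in these units is 0.5.

```python
import numpy as np

def dmc_harmonic(n_walkers=2000, steps=2000, dt=0.01, alpha=0.1, seed=0):
    """Toy imaginary-time projection for V(x) = x^2 / 2 (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_walkers)
    e_ref = 0.5
    estimates = []
    for _ in range(steps):
        x = x + np.sqrt(dt) * rng.normal(size=len(x))        # free diffusion step
        weights = np.exp(-dt * (0.5 * x**2 - e_ref))         # potential weighting
        # Birth/death: replicate or remove walkers according to their weight.
        counts = (weights + rng.uniform(size=len(x))).astype(int)
        x = np.repeat(x, counts)
        # Simple population control: nudge the reference energy toward the
        # value that keeps the walker count near its target.
        e_ref += alpha * np.log(n_walkers / max(len(x), 1))
        estimates.append(e_ref)
    return float(np.mean(estimates[steps // 2:]))

if __name__ == "__main__":
    print("estimated ground-state energy:", round(dmc_harmonic(), 3))
```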

  20. Introduction: The Cluster mission

    Directory of Open Access Journals (Sweden)

    M. Fehringer

    Full Text Available The Cluster mission, ESA’s first cornerstone project, together with the SOHO mission, dating back to the first proposals in 1982, was finally launched in the summer of 2000. On 16 July and 9 August, respectively, two Russian Soyuz rockets blasted off from the Russian cosmodrome in Baikonour to deliver two Cluster spacecraft, each into their proper orbit. By the end of August 2000, the four Cluster satellites had reached their final tetrahedral constellation. The commissioning of 44 instruments, both individually and as an ensemble of complementary tools, was completed five months later to ensure the optimal use of their combined observational potential. On 1 February 2001, the mission was declared operational. The main goal of the Cluster mission is to study the small-scale plasma structures in three dimensions in key plasma regions, such as the solar wind, bow shock, magnetopause, polar cusps, magnetotail and the auroral zones. With its unique capabilities of three-dimensional spatial resolution, Cluster plays a major role in the International Solar Terrestrial Program (ISTP), where Cluster and the Solar and Heliospheric Observatory (SOHO) are the European contributions. Cluster’s payload consists of state-of-the-art plasma instrumentation to measure electric and magnetic fields from the quasi-static up to high frequencies, and electron and ion distribution functions from energies of nearly 0 eV to a few MeV. The science operations are coordinated by the Joint Science Operations Centre (JSOC), at the Rutherford Appleton Laboratory (UK), and implemented by the European Space Operations Centre (ESOC), in Darmstadt, Germany. A network of eight national data centres has been set up for raw data processing, for the production of physical parameters, and their distribution to end users all over the world. The latest information on the Cluster mission can be found at http://sci.esa.int/cluster/.

  1. Introduction: The Cluster mission

    Directory of Open Access Journals (Sweden)

    C. P. Escoubet

    2001-09-01

    Full Text Available The Cluster mission, ESA’s first cornerstone project, together with the SOHO mission, dating back to the first proposals in 1982, was finally launched in the summer of 2000. On 16 July and 9 August, respectively, two Russian Soyuz rockets blasted off from the Russian cosmodrome in Baikonour to deliver two Cluster spacecraft, each into their proper orbit. By the end of August 2000, the four Cluster satellites had reached their final tetrahedral constellation. The commissioning of 44 instruments, both individually and as an ensemble of complementary tools, was completed five months later to ensure the optimal use of their combined observational potential. On 1 February 2001, the mission was declared operational. The main goal of the Cluster mission is to study the small-scale plasma structures in three dimensions in key plasma regions, such as the solar wind, bow shock, magnetopause, polar cusps, magnetotail and the auroral zones. With its unique capabilities of three-dimensional spatial resolution, Cluster plays a major role in the International Solar Terrestrial Program (ISTP), where Cluster and the Solar and Heliospheric Observatory (SOHO) are the European contributions. Cluster’s payload consists of state-of-the-art plasma instrumentation to measure electric and magnetic fields from the quasi-static up to high frequencies, and electron and ion distribution functions from energies of nearly 0 eV to a few MeV. The science operations are coordinated by the Joint Science Operations Centre (JSOC), at the Rutherford Appleton Laboratory (UK), and implemented by the European Space Operations Centre (ESOC), in Darmstadt, Germany. A network of eight national data centres has been set up for raw data processing, for the production of physical parameters, and their distribution to end users all over the world. The latest information on the Cluster mission can be found at http://sci.esa.int/cluster/.

  2. Brightest Cluster Galaxies in REXCESS Clusters

    Science.gov (United States)

    Haarsma, Deborah B.; Leisman, L.; Bruch, S.; Donahue, M.

    2009-01-01

    Most galaxy clusters contain a Brightest Cluster Galaxy (BCG) which is larger than the other cluster ellipticals and has a more extended profile. In the hierarchical model, the BCG forms through many galaxy mergers in the crowded center of the cluster, and thus its properties give insight into the assembly of the cluster as a whole. In this project, we are working with the Representative XMM-Newton Cluster Structure Survey (REXCESS) team (Boehringer et al 2007) to study BCGs in 33 X-ray luminous galaxy clusters, 0.055 < z < 0.183. We are imaging the BCGs in R band at the Southern Observatory for Astrophysical Research (SOAR) in Chile. In this poster, we discuss our methods and give preliminary measurements of the BCG magnitudes, morphology, and stellar mass. We compare these BCG properties with the properties of their host clusters, particularly of the X-ray emitting gas.

  3. Multiple Clustering Views via Constrained Projections

    DEFF Research Database (Denmark)

    Dang, Xuan-Hong; Assent, Ira; Bailey, James

    2012-01-01

    Clustering, the grouping of data based on mutual similarity, is often used as one of the principal tools to analyze and understand data. Unfortunately, most conventional techniques aim at finding only a single clustering over the data. For many practical applications, especially those being described...... in high dimensional data, it is common to see that the data can be grouped in different yet meaningful ways. This gives rise to the recently emerging research area of discovering alternative clusterings. In this preliminary work, we propose a novel framework to generate multiple clustering views....... The framework relies on a constrained data projection approach by which we ensure that a novel alternative clustering being found is not only qualitatively strong but also distinctively different from a reference clustering solution. We demonstrate the potential of the proposed framework using both synthetic...

  4. Cluster Ion Implantation in Graphite and Diamond

    DEFF Research Database (Denmark)

    Popok, Vladimir

    2014-01-01

    The cluster ion beam technique is a versatile tool which can be used for controllable formation of nanosize objects as well as modification and processing of surfaces and shallow layers on an atomic scale. The current paper presents an overview and analysis of data obtained on a few sets of graphite...... and diamond samples implanted by keV-energy size-selected cobalt and argon clusters. One of the emphases is put on pinning of metal clusters on graphite with the possibility of subsequent selective etching of graphene layers. The other topic of concern is related to the development of a scaling law for cluster...... implantation. Implantation of cobalt and argon clusters into two different allotropic forms of carbon, namely, graphite and diamond, is analysed and compared in order to approach a universal theory of cluster stopping in matter....

  5. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  6. Clustering Coefficients for Correlation Networks.

    Science.gov (United States)

    Masuda, Naoki; Sakaki, Michiko; Ezaki, Takahiro; Watanabe, Takamitsu

    2018-01-01

    Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients were strongly
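    A hedged sketch of the central quantity mentioned above may clarify the idea: the partial correlation between two neighbours j and h of a focal node i discounts the pseudo-correlation expected from the indirect path j-i-h. The local coefficient below averages that quantity over pairs, weighted by the tie strengths to i; this weighting scheme is an illustrative choice, not necessarily the authors' exact definition.

```python
import numpy as np

def partial_corr(r, i, j, h):
    """Correlation between j and h with the focal node i partialled out."""
    num = r[j, h] - r[j, i] * r[i, h]
    den = np.sqrt((1.0 - r[j, i] ** 2) * (1.0 - r[i, h] ** 2))
    return num / den

def local_coefficient(r, i):
    """Weighted mean partial correlation over pairs of other nodes (illustrative)."""
    others = [k for k in range(len(r)) if k != i]
    num = den = 0.0
    for a, j in enumerate(others):
        for h in others[a + 1:]:
            w = abs(r[i, j] * r[i, h])      # weight by the ties to the focal node
            num += w * partial_corr(r, i, j, h)
            den += w
    return num / den if den else 0.0

if __name__ == "__main__":
    x = np.random.rand(200, 6)              # 200 samples, 6 variables
    r = np.corrcoef(x, rowvar=False)
    print([round(local_coefficient(r, i), 3) for i in range(6)])
```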

  7. Clustering Coefficients for Correlation Networks

    Directory of Open Access Journals (Sweden)

    Naoki Masuda

    2018-03-01

    Full Text Available Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients

  9. Clustervision: Visual Supervision of Unsupervised Clustering.

    Science.gov (United States)

    Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam

    2018-01-01

    Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exists a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice, it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large number of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.

  10. Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.

    Science.gov (United States)

    Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K

    2013-03-01

    Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
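    HACA builds on a generalized dynamic time alignment kernel; as background on what temporal alignment means, a classic dynamic time warping (DTW) distance between two 1-D sequences is sketched below. This is generic illustrative code, not the kernel or the optimization used in the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment ending at (i, j): extend a match, insertion, or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

if __name__ == "__main__":
    print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 2, 1]))
```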

  11. Effects on energetic impact of atomic clusters with surfaces

    International Nuclear Information System (INIS)

    Popok, V.N.; Vuchkovich, S.; Abdela, A.; Campbell, E.E.B.

    2007-01-01

    A brief state-of-the-art review in the field of cluster ion interaction with surface is presented. Cluster beams are efficient tools for manipulating agglomerates of atoms providing control over the synthesis as well as modification of surfaces on the nm-scale. The application of cluster beams for technological purposes requires knowledge of the physics of cluster-surface impact. This has some significant differences compared to monomer ion - surface interactions. The main effects of cluster-surface collisions are discussed. Recent results obtained in experiments on silicon surface nanostructuring using keV-energy implantation of inert gas cluster ions are presented and compared with molecular dynamics simulations. (authors)

  12. Interactive visual exploration and refinement of cluster assignments.

    Science.gov (United States)

    Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R

    2017-09-12

    With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.

  13. High performance electromagnetic simulation tools

    Science.gov (United States)

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm, and a parallel planar generalized Yee-algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled research of the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.
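    As background on the FDTD method mentioned above, the sketch below is a bare-bones serial 1-D FDTD update in normalized units with a hard Gaussian source. It illustrates the leapfrog field updates only and is not the parallel iPSC/860 code described in the record.

```python
import numpy as np

def fdtd_1d(n_cells=200, steps=500):
    """Minimal 1-D FDTD loop (normalized units, Courant factor 0.5, illustrative)."""
    ez = np.zeros(n_cells)   # electric field
    hy = np.zeros(n_cells)   # magnetic field
    for t in range(steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])                    # update H from E
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])                     # update E from H
        ez[n_cells // 2] += np.exp(-((t - 30) / 10.0) ** 2)    # Gaussian hard source
    return ez

if __name__ == "__main__":
    print(np.round(fdtd_1d()[:10], 4))
```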

  14. Diversity among galaxy clusters

    International Nuclear Information System (INIS)

    Struble, M.F.; Rood, H.J.

    1988-01-01

    The classification of galaxy clusters is discussed. Consideration is given to the classification schemes of Abell (1950s), Zwicky (1950s), Morgan, Matthews, and Schmidt (1964), and Morgan-Bautz (1970). Galaxies can be classified based on morphology, chemical composition, spatial distribution, and motion. The correlation between a galaxy's environment and morphology is examined. The classification scheme of Rood-Sastry (1971), which is based on cluster morphology and galaxy population, is described. The six types of clusters they define include: (1) a cD-cluster dominated by a single large galaxy, (2) a cluster dominated by a binary, (3) a core-halo cluster, (4) a cluster dominated by several bright galaxies, (5) a cluster appearing flattened, and (6) an irregularly shaped cluster. Attention is also given to the evolution of cluster structures, which is related to initial density and cluster motion

  15. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    Science.gov (United States)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
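    As a rough illustration of the kind of kernel being accelerated, the sketch below is a fully vectorized Lloyd (k-means) iteration in NumPy; the array-wide distance computation plays the role that wide SIMD lanes play in the manycore implementation described above. It is not the authors' code.

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain Lloyd's algorithm with vectorized distance computations."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Squared distance of every point to every center, computed in one shot.
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new = np.array([x[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

if __name__ == "__main__":
    pts = np.random.rand(10000, 4)
    centers, labels = kmeans(pts, k=8)
    print(centers.shape, np.bincount(labels))
```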

  16. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    Science.gov (United States)

    García-Flores, Agustín.; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes Hypergim, a new web platform dedicated to the classification of satellite images. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms like Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim in order to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of classes present in the images to use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm. This implementation is a modification of the well-known CURFIL software package. The use of this type of algorithm to perform image classification is widespread today thanks to its precision and ease of training. The actual implementation of Random Forest was developed using the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units, using them to execute general-purpose computing tasks such as image classification algorithms. As well as CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed in a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool concurrently. The experimental results indicate that this new algorithm widely outperforms the previous unsupervised algorithms implemented in Hypergim, both in runtime and in the precision of the actual classification of the images.
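    A minimal sketch of the supervised step described above, using scikit-learn's RandomForestClassifier on labelled pixel samples; this stands in for the CUDA/CURFIL implementation mentioned in the record, and the band counts and class labels are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training samples: each row holds one pixel's band values,
# with labels taken from user-selected regions (e.g. water, vegetation, urban, soil).
x_train = np.random.rand(500, 3)
y_train = np.random.randint(0, 4, size=500)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(x_train, y_train)

# Classify a whole image by flattening it to (n_pixels, n_bands).
image = np.random.rand(256, 256, 3)
labels = clf.predict(image.reshape(-1, 3)).reshape(256, 256)
print(np.unique(labels, return_counts=True))
```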

  17. What Makes Clusters Decline?

    DEFF Research Database (Denmark)

    Østergaard, Christian Richter; Park, Eun Kyung

    2015-01-01

    Most studies on regional clusters focus on identifying factors and processes that make clusters grow. However, sometimes technologies and market conditions suddenly shift, and clusters decline. This paper analyses the process of decline of the wireless communication cluster in Denmark. The longit...... but being quick to withdraw in times of crisis....

  18. Clustering of correlated networks

    OpenAIRE

    Dorogovtsev, S. N.

    2003-01-01

    We obtain the clustering coefficient, the degree-dependent local clustering, and the mean clustering of networks with arbitrary correlations between the degrees of the nearest-neighbor vertices. The resulting formulas allow one to determine the nature of the clustering of a network.

  19. cluML: A markup language for clustering and cluster validity assessment of microarray data.

    Science.gov (United States)

    Bolshakova, Nadia; Cunningham, Pádraig

    2005-01-01

    cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as inability to store multiple clustering (including biclustering) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can be effectively used for the representation of clustering and for the validation of other biomedical and physical data that has no limitations.

  20. Relevant Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2009-01-01

    Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace...... clusters. Their results are typically highly redundant, i.e. many clusters are detected multiple times in several projections. In this work, we propose a novel model for relevant subspace clustering (RESCU). We present a global optimization which detects the most interesting non-redundant subspace clusters...... achieves top clustering quality while competing approaches show greatly varying performance....

  1. PREFACE: Nuclear Cluster Conference; Cluster'07

    Science.gov (United States)

    Freer, Martin

    2008-05-01

    The Cluster Conference is a long-running conference series dating back to the 1960s, the first being initiated by Wildermuth in Bochum, Germany, in 1969. The most recent meeting was held in Nara, Japan, in 2003, and in 2007 the 9th Cluster Conference was held in Stratford-upon-Avon, UK. As the name suggests, the town of Stratford lies upon the River Avon, and shortly before the conference, due to unprecedented rainfall in the area (approximately 10 cm within half a day), lay in the River Avon! Stratford is the birthplace of the `Bard of Avon' William Shakespeare, and this formed an intriguing conference backdrop. The meeting was attended by some 90 delegates and the programme contained 65-70 oral presentations, and was opened by a historical perspective presented by Professor Brink (Oxford) and closed by Professor Horiuchi (RCNP) with an overview of the conference and future perspectives. In between, the conference covered aspects of clustering in exotic nuclei (both neutron and proton-rich), molecular structures in which valence neutrons are exchanged between cluster cores, condensates in nuclei, neutron-clusters, superheavy nuclei, clusters in nuclear astrophysical processes and exotic cluster decays such as 2p and ternary cluster decay. The field of nuclear clustering has become strongly influenced by the physics of radioactive beam facilities (reflected in the programme), and by the excitement that clustering may have an important impact on the structure of nuclei at the neutron drip-line. It was clear that since Nara the field had progressed substantially and that new themes had emerged and others had crystallized. Two particular topics resonated strongly: condensates and nuclear molecules. These topics are thus likely to be central in the next cluster conference, which will be held in 2011 in the Hungarian city of Debrecen. Martin Freer

  2. Effectiveness of the Assessment of Burden of COPD (ABC) tool on health-related quality of life in patients with COPD : A cluster randomised controlled trial in primary and hospital care

    NARCIS (Netherlands)

    A.H.M. Slok (Annerika); D. Kotz (Daniel); G.J.P. van Breukelen (Gerard); N.H. Chavannes (Nicolas); M.P.M.H. Rutten-van Mölken (Maureen); H.A.M. Kerstjens (Huib); T. van der Molen (Thys); G.M. Asijee (Guus); P.N.R. Dekhuijzen (Richard); S. Holverda (Sebastiaan); P.L. Salome´ (Philippe); L.M.A. Goossens (Lucas); M. Twellaar (Mascha); J.C.C.M. In't Veen (Johannes C.C.M.); O.C.P. Schayck (Onno)

    2016-01-01

    Objective: Assessing the effectiveness of the Assessment of Burden of COPD (ABC) tool on disease-specific quality of life in patients with chronic obstructive pulmonary disease (COPD) measured with the St. George's Respiratory Questionnaire (SGRQ), compared with usual care.

  3. The GNEMRE Dendro Tool.

    Energy Technology Data Exchange (ETDEWEB)

    Merchant, Bion John

    2007-10-01

    The GNEMRE Dendro Tool provides a previously unrealized analysis capability in the field of nuclear explosion monitoring. Dendro Tool allows analysts to quickly and easily determine the similarity between seismic events using the waveform time-series for each of the events to compute cross-correlation values. Events can then be categorized into clusters of similar events. This analysis technique can be used to characterize historical archives of seismic events in order to determine many of the unique sources that are present. In addition, the source of any new events can be quickly identified simply by comparing the new event to the historical set.
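    A minimal sketch of the similarity measure described above, the peak of the normalized cross-correlation between two event waveforms, together with a naive greedy grouping. The grouping rule is an illustrative stand-in for Dendro Tool's clustering, not its actual implementation.

```python
import numpy as np

def max_norm_xcorr(a, b):
    """Peak of the normalized cross-correlation between two waveforms."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

def group_events(waveforms, threshold=0.8):
    """Assign each event to the first group whose representative correlates
    above the threshold, otherwise start a new group (illustrative)."""
    groups = []                      # list of (representative_index, member_indices)
    for i, w in enumerate(waveforms):
        for rep, members in groups:
            if max_norm_xcorr(waveforms[rep], w) >= threshold:
                members.append(i)
                break
        else:
            groups.append((i, [i]))
    return groups

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 500)
    events = [np.sin(2 * np.pi * 5 * t),
              np.sin(2 * np.pi * 5 * t + 0.1),   # near-duplicate of the first event
              np.sin(2 * np.pi * 12 * t)]        # a different source
    print(group_events(events))
```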

  4. Cluster policy in Europe and Asia: A comparison using selected cluster policy characteristics

    Directory of Open Access Journals (Sweden)

    Martina Sopoligová

    2017-10-01

    Full Text Available Currently, the cluster concept is one of the most important tools for governments to enhance competitiveness and innovation through sectoral specialization and cooperation. The paper focuses on applications of cluster policy in the distinct territorial contexts of Europe and Asia in order to compare different approaches to applying the cluster concept in real practice. The paper introduces a comparative study of cluster policy concepts based on characteristics defined by the authors, such as scope, approach, targeting, autonomy, institutional coordination, policy instruments and evaluation system, studied for selected European and Asian countries such as Denmark, France, Germany, China, Japan, and South Korea. The research draws upon secondary data obtained through content analysis of the related literature, government documents and strategies, and also cluster funding programmes. The findings demonstrate the diversity of cluster policies implemented in the context of European and Asian conditions at the current stage of their development.

  5. Management of cluster headache

    DEFF Research Database (Denmark)

    Tfelt-Hansen, Peer C; Jensen, Rigmor H

    2012-01-01

    The prevalence of cluster headache is 0.1% and cluster headache is often not diagnosed or misdiagnosed as migraine or sinusitis. In cluster headache there is often a considerable diagnostic delay - an average of 7 years in a population-based survey. Cluster headache is characterized by very severe...... or severe orbital or periorbital pain with a duration of 15-180 minutes. The cluster headache attacks are accompanied by characteristic associated unilateral symptoms such as tearing, nasal congestion and/or rhinorrhoea, eyelid oedema, miosis and/or ptosis. In addition, there is a sense of restlessness...... and agitation. Patients may have up to eight attacks per day. Episodic cluster headache (ECH) occurs in clusters of weeks to months duration, whereas chronic cluster headache (CCH) attacks occur for more than 1 year without remissions. Management of cluster headache is divided into acute attack treatment...

  6. Symmetries of cluster configurations

    International Nuclear Information System (INIS)

    Kramer, P.

    1975-01-01

    A deeper understanding of clustering phenomena in nuclei must encompass at least two interrelated aspects of the subject: (A) Given a system of A nucleons with two-body interactions, what are the relevant and persistent modes of clustering involved. What is the nature of the correlated nucleon groups which form the clusters, and what is their mutual interaction. (B) Given the cluster modes and their interaction, what systematic patterns of nuclear structure and reactions emerge from it. Are there, for example, families of states which share the same ''cluster parents''. Which cluster modes are compatible or exclude each other. What quantum numbers could characterize cluster configurations. There is no doubt that we can learn a good deal from the experimentalists who have discovered many of the features relevant to aspect (B). Symmetries specific to cluster configurations which can throw some light on both aspects of clustering are discussed

  7. Cluster Decline and Resilience

    DEFF Research Database (Denmark)

    Østergaard, Christian Richter; Park, Eun Kyung

    Most studies on regional clusters focus on identifying factors and processes that make clusters grow. However, sometimes technologies and market conditions suddenly shift, and clusters decline. This paper analyses the process of decline of the wireless communication cluster in Denmark, 1963-2011. Our longitudinal study reveals that technological lock-in and exit of key firms have contributed to impairment of the cluster’s resilience in adapting to disruptions. Entrepreneurship has a positive effect on cluster resilience, while multinational companies have contradicting effects by bringing...... in new resources to the cluster but being quick to withdraw in times of crisis....

  8. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Science.gov (United States)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  9. Comprehensive cluster analysis with Transitivity Clustering.

    Science.gov (United States)

    Wittkop, Tobias; Emig, Dorothea; Truss, Anke; Albrecht, Mario; Böcker, Sebastian; Baumbach, Jan

    2011-03-01

    Transitivity Clustering is a method for the partitioning of biological data into groups of similar objects, such as genes, for instance. It provides integrated access to various functions addressing each step of a typical cluster analysis. To facilitate this, Transitivity Clustering is accessible online and offers three user-friendly interfaces: a powerful stand-alone version, a web interface, and a collection of Cytoscape plug-ins. In this paper, we describe three major workflows: (i) protein (super)family detection with Cytoscape, (ii) protein homology detection with incomplete gold standards and (iii) clustering of gene expression data. This protocol guides the user through the most important features of Transitivity Clustering and takes ∼1 h to complete.

  10. A Monte Carlo study of the ''minus sign problem'' in the t-J model using an intel IPSC/860 hypercube

    International Nuclear Information System (INIS)

    Kovarik, M.D.; Barnes, T.; Tennessee Univ., Knoxville, TN

    1993-01-01

    We describe a Monte Carlo simulation of the 2-dimensional t-J model on an Intel iPSC/860 hypercube. The problem studied is the determination of the dispersion relation of a dynamical hole in the t-J model of the high temperature superconductors. Since this problem involves the motion of many fermions in more than one spatial dimension, it is representative of the class of systems that suffer from the ''minus sign problem'' of dynamical fermions which has made Monte Carlo simulation very difficult. We demonstrate that for small values of the hole hopping parameter one can extract the entire hole dispersion relation using the GRW Monte Carlo algorithm, which is a simulation of the Euclidean time Schroedinger equation, and present results on 4 x 4 and 6 x 6 lattices. We demonstrate that a qualitative picture at higher hopping parameters may be found by extrapolating weak hopping results where the minus sign problem is less severe. Generalization to physical hopping parameter values will only require use of an improved trial wavefunction for importance sampling

  11. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    Science.gov (United States)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple data threads per core, with each thread having a 512-bit single instruction, multiple data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi coprocessor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card and a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging.

  12. LMC clusters: young

    International Nuclear Information System (INIS)

    Freeman, K.C.

    1980-01-01

    The young globular clusters of the LMC have ages of 10⁷-10⁸ y. Their masses and structure are similar to those of the smaller galactic globular clusters. Their stellar mass functions (in the mass range 6 solar masses to 1.2 solar masses) vary greatly from cluster to cluster, although the clusters are similar in total mass, age, structure and chemical composition. It would be very interesting to know why these clusters are forming now in the LMC and not in the Galaxy. The author considers the 'young globular' or 'blue populous' clusters of the LMC. The ages of these objects are 10⁷ to 10⁸ y, and their masses are 10⁴ to 10⁵ solar masses, so they are populous enough to be really useful for studying the evolution of massive stars. The author concentrates on the structure and stellar content of these young clusters. (Auth.)

  13. Star clusters and associations

    International Nuclear Information System (INIS)

    Ruprecht, J.; Palous, J.

    1983-01-01

    All 33 papers presented at the symposium were inputted to INIS. They dealt with open clusters, globular clusters, stellar associations and moving groups, and local kinematics and galactic structures. (E.S.)

  14. Cluster beam injection

    International Nuclear Information System (INIS)

    Bottiglioni, F.; Coutant, J.; Fois, M.

    1978-01-01

    Areas of possible applications of cluster injection are discussed. The deposition inside the plasma of molecules resulting from the dissociation of the injected clusters has been computed. Some empirical scaling laws for the penetration are given

  15. Clustering at high redshifts

    International Nuclear Information System (INIS)

    Shaver, P.A.

    1986-01-01

    Evidence for clustering of and with high-redshift QSOs is discussed. QSOs of different redshifts show no clustering, but QSOs of similar redshifts appear to be clustered on a scale comparable to that of galaxies at the present epoch. In addition, spectroscopic studies of close pairs of QSOs indicate that QSOs are surrounded by a relatively high density of absorbing matter, possibly clusters of galaxies

  16. Shared decision making in type 2 diabetes with a support decision tool that takes into account clinical factors, the intensity of treatment and patient preferences : Design of a cluster randomised (OPTIMAL) trial

    NARCIS (Netherlands)

    Den Ouden, Henk; Vos, Rimke C.; Reidsma, Carla; Rutten, Guy Ehm

    2015-01-01

    Background: No more than 10-15% of type 2 diabetes mellitus (T2DM) patients achieve all treatment goals regarding glycaemic control, lipids and blood pressure. Shared decision making (SDM) should increase that percentage; however, not all support decision tools are appropriate. Because the

  17. Effectiveness of the Assessment of Burden of COPD (ABC) tool on health-related quality of life in patients with COPD: a cluster randomised controlled trial in primary and hospital care

    NARCIS (Netherlands)

    A.H.M. Slok (Annerika); D. Kotz (Daniel); G. van Breukelen (Gerard); N.H. Chavannes (Nicolas); M.P.M.H. Rutten-van Mölken (Maureen); H.A.M. Kerstjens (Huib); T. van der Molen (Thys); G.M. Asijee (Guus); P.N.R. Dekhuijzen (Richard); S. Holverda (Sebastiaan); P.L. Salome (Philippe); L.M.A. Goossens (Lucas); M. Twellaar (Mascha); J.C.C.M. in 't Veen (Johannes); O.C.P. van Schayck (Onno)

    2016-01-01

    Objective: Assessing the effectiveness of the Assessment of Burden of COPD (ABC) tool on disease-specific quality of life in patients with chronic obstructive pulmonary disease (COPD), measured with the St. George’s Respiratory Questionnaire (SGRQ), compared with usual care.

  18. Fragmentation of percolation cluster perimeters

    Science.gov (United States)

    Debierre, Jean-Marc; Bradley, R. Mark

    1996-05-01

    We introduce a model for the fragmentation of porous random solids under the action of an external agent. In our model, the solid is represented by a bond percolation cluster on the square lattice and bonds are removed only at the external perimeter (or `hull') of the cluster. This model is shown to be related to the self-avoiding walk on the Manhattan lattice and to the disconnection events at a diffusion front. These correspondences are used to predict the leading and the first correction-to-scaling exponents for several quantities defined for hull fragmentation. Our numerical results support these predictions. In addition, the algorithm used to construct the perimeters reveals itself to be a very efficient tool for detecting subtle correlations in the pseudo-random number generator used. We present a quantitative test of two generators which supports recent results reported in more systematic studies.

  19. Cluster Physics with Merging Galaxy Clusters

    Directory of Open Access Journals (Sweden)

    Sandor M. Molnar

    2016-02-01

    Full Text Available Collisions between galaxy clusters provide a unique opportunity to study matter in a parameter space which cannot be explored in our laboratories on Earth. In the standard LCDM model, where the total density is dominated by the cosmological constant ($\Lambda$) and the matter density by cold dark matter (CDM), structure formation is hierarchical, and clusters grow mostly by merging. Mergers of two massive clusters are the most energetic events in the universe after the Big Bang, hence they provide a unique laboratory to study cluster physics. The two main mass components in clusters behave differently during collisions: the dark matter is nearly collisionless, responding only to gravity, while the gas is subject to pressure forces and dissipation, and shocks and turbulence are developed during collisions. In the present contribution we review the different methods used to derive the physical properties of merging clusters. Different physical processes leave their signatures on different wavelengths, thus our review is based on a multifrequency analysis. In principle, the best way to analyze multifrequency observations of merging clusters is to model them using N-body/HYDRO numerical simulations. We discuss the results of such detailed analyses. New high spatial and spectral resolution ground and space based telescopes will come online in the near future. Motivated by these new opportunities, we briefly discuss methods which will be feasible in the near future in studying merging clusters.

  20. Size selected metal clusters

    Indian Academy of Sciences (India)

    The Optical Absorption Spectra of Small Silver Clusters (5-11) ... Soft Landing and Fragmentation of Small Clusters Deposited in Noble-Gas Films. Harbich, W.; Fedrigo, S.; Buttet, J. Phys. Rev. B 1998, 58, 7428. CO combustion on supported gold clusters. Arenz M ...

  1. The Durban Auto Cluster

    DEFF Research Database (Denmark)

    Lorentzen, Jochen; Robbins, Glen; Barnes, Justin

    2004-01-01

    The paper describes the formation of the Durban Auto Cluster in the context of trade liberalization. It argues that the improvement of operational competitiveness of firms in the cluster is prominently due to joint action. It tests this proposition by comparing the gains from cluster activities...

  2. Marketing research cluster analysis

    Directory of Open Access Journals (Sweden)

    Marić Nebojša

    2002-01-01

    Full Text Available One area of applications of cluster analysis in marketing is identification of groups of cities and towns with similar demographic profiles. This paper considers main aspects of cluster analysis by an example of clustering 12 cities with the use of Minitab software.

  3. Marketing research cluster analysis

    OpenAIRE

    Marić Nebojša

    2002-01-01

    One area of applications of cluster analysis in marketing is identification of groups of cities and towns with similar demographic profiles. This paper considers main aspects of cluster analysis by an example of clustering 12 cities with the use of Minitab software.

  4. Minimalist's linux cluster

    International Nuclear Information System (INIS)

    Choi, Chang-Yeong; Kim, Jeong-Hyun; Kim, Seyong

    2004-01-01

    Using barebone PC components and NICs, we construct a linux cluster with a 2-dimensional mesh structure. This cluster has a smaller footprint, is less expensive, and uses less power than a conventional linux cluster. Here, we report our experience in building such a machine and discuss our current lattice project on the machine

  5. Range-clustering queries

    NARCIS (Netherlands)

    Abrahamsen, M.; de Berg, M.T.; Buchin, K.A.; Mehr, M.; Mehrabi, A.D.

    2017-01-01

    In a geometric k-clustering problem the goal is to partition a set of points in R^d into k subsets such that a certain cost function of the clustering is minimized. We present data structures for orthogonal range-clustering queries on a point set S: given a query box Q and an integer k ≥ 2, compute

  6. Cosmology with cluster surveys

    Indian Academy of Sciences (India)

    Abstract. Surveys of clusters of galaxies provide us with a powerful probe of the density and nature of the dark energy. The red-shift distribution of detected clusters is highly sensitive to the dark energy equation of state parameter w. Upcoming Sunyaev–Zel'dovich (SZ) surveys would provide us large yields of clusters to ...

  7. Clusters in nonsmooth oscillator networks

    Science.gov (United States)

    Nicks, Rachel; Chambon, Lucie; Coombes, Stephen

    2018-03-01

    For coupled oscillator networks with Laplacian coupling, the master stability function (MSF) has proven a particularly powerful tool for assessing the stability of the synchronous state. Using tools from group theory, this approach has recently been extended to treat more general cluster states. However, the MSF and its generalizations require the determination of a set of Floquet multipliers from variational equations obtained by linearization around a periodic orbit. Since closed form solutions for periodic orbits are invariably hard to come by, the framework is often explored using numerical techniques. Here, we show that further insight into network dynamics can be obtained by focusing on piecewise linear (PWL) oscillator models. Not only do these allow for the explicit construction of periodic orbits, their variational analysis can also be explicitly performed. The price for adopting such nonsmooth systems is that many of the notions from smooth dynamical systems, and in particular linear stability, need to be modified to take into account possible jumps in the components of Jacobians. This is naturally accommodated with the use of saltation matrices. By augmenting the variational approach for studying smooth dynamical systems with such matrices we show that, for a wide variety of networks that have been used as models of biological systems, cluster states can be explicitly investigated. By way of illustration, we analyze an integrate-and-fire network model with event-driven synaptic coupling as well as a diffusively coupled network built from planar PWL nodes, including a reduction of the popular Morris-Lecar neuron model. We use these examples to emphasize that the stability of network cluster states can depend as much on the choice of single node dynamics as it does on the form of network structural connectivity. Importantly, the procedure that we present here, for understanding cluster synchronization in networks, is valid for a wide variety of systems in

  8. Orbital localization criterion as a complementary tool in the bonding analysis by means of electron localization function: study of the Si(n)(BH)(5-n)(2-) (n = 0-5) clusters.

    Science.gov (United States)

    Oña, Ofelia B; Alcoba, Diego R; Torre, Alicia; Lain, Luis; Torres-Vega, Juan J; Tiznado, William

    2013-12-05

    A recently proposed molecular orbital localization procedure, based on the electron localization function (ELF) technique, has been used to describe chemical bonding in the cluster series Si(n)(BH)(5-n)(2-) (n = 0-5). The method combines the chemically intuitive information obtained from the traditional ELF analysis with the flexibility and generality of canonical molecular orbital theory. This procedure attempts to localize the molecular orbitals in regions that have the highest probability for finding a pair of electrons, providing a chemical bonding description according to the classical Lewis theory. The results confirm that conservation of the structures upon isoelectronic replacement of a B-H group by a Si atom, allowing evolution from B5H5(2-) to Si5(2-), is in total agreement with the preservation of the chemical bonding pattern.

  9. Improved Ant Colony Clustering Algorithm and Its Performance Study

    Science.gov (United States)

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  10. Techniques and tools for measuring energy efficiency of scientific software applications

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Niemi, Tapio; Pestana, Gonçalo; Khan, Kashif; Nurminen, Jukka K; Nyback, Filip; Ou, Zhonghong

    2015-01-01

    The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy-efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing alternate architectures, such as low power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been successfully used in mobile applications with low I/O and memory demands, it is unclear if they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running on ARM and Intel architectures, and compare their power consumption and performance. We leverage several profiling tools (both in hardware and software) to extract different characteristics of the power use. We report the results of these measurements and the experience gained in developing a set of measurement techniques and profiling tools to accurately assess the power consumption for scientific workloads. (paper)
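
    As an illustration of the software-based side of such measurements, the sketch below samples the Linux powercap (RAPL) energy counter around a workload. The sysfs path and the availability of RAPL on the test machine are assumptions for this sketch, not details taken from the study above.

```python
# Minimal sketch: estimate energy used by a workload on Linux via the
# powercap/RAPL sysfs interface. Assumes an Intel CPU exposing
# /sys/class/powercap/intel-rapl:0/energy_uj (not available on all systems).
import time

RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter

def read_energy_uj():
    with open(RAPL_FILE) as f:
        return int(f.read().strip())

def measure(workload, *args, **kwargs):
    """Run `workload` and return (elapsed seconds, energy in joules)."""
    e0, t0 = read_energy_uj(), time.time()
    workload(*args, **kwargs)
    e1, t1 = read_energy_uj(), time.time()
    # The counter wraps around; a production tool would handle the overflow.
    return t1 - t0, (e1 - e0) / 1e6

if __name__ == "__main__":
    elapsed, joules = measure(lambda: sum(i * i for i in range(10_000_000)))
    print(f"elapsed: {elapsed:.2f} s, energy: {joules:.2f} J, "
          f"average power: {joules / elapsed:.2f} W")
```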

  11. Cluster analysis for applications

    CERN Document Server

    Anderberg, Michael R

    1973-01-01

    Cluster Analysis for Applications deals with methods and various applications of cluster analysis. Topics covered range from variables and scales to measures of association among variables and among data units. Conceptual problems in cluster analysis are discussed, along with hierarchical and non-hierarchical clustering methods. The necessary elements of data analysis, statistics, cluster analysis, and computer implementation are integrated vertically to cover the complete path from raw data to a finished analysis.Comprised of 10 chapters, this book begins with an introduction to the subject o

  12. Clusters in nuclei

    CERN Document Server

    Following the pioneering discovery of alpha clustering and of molecular resonances, the field of nuclear clustering is today one of those domains of heavy-ion nuclear physics that faces the greatest challenges, yet also contains the greatest opportunities. After many summer schools and workshops, in particular over the last decade, the community of nuclear molecular physicists has decided to collaborate in producing a comprehensive collection of lectures and tutorial reviews covering the field. This third volume follows the successful Lect. Notes Phys. 818 (Vol. 1) and 848 (Vol. 2), and comprises six extensive lectures covering the following topics:  - Gamma Rays and Molecular Structure - Faddeev Equation Approach for Three Cluster Nuclear Reactions - Tomography of the Cluster Structure of Light Nuclei Via Relativistic Dissociation - Clustering Effects Within the Dinuclear Model : From Light to Hyper-heavy Molecules in Dynamical Mean-field Approach - Clusterization in Ternary Fission - Clusters in Light N...

  13. Spatial cluster modelling

    CERN Document Server

    Lawson, Andrew B

    2002-01-01

    Research has generated a number of advances in methods for spatial cluster modelling in recent years, particularly in the area of Bayesian cluster modelling. Along with these advances has come an explosion of interest in the potential applications of this work, especially in epidemiology and genome research. In one integrated volume, this book reviews the state-of-the-art in spatial clustering and spatial cluster modelling, bringing together research and applications previously scattered throughout the literature. It begins with an overview of the field, then presents a series of chapters that illuminate the nature and purpose of cluster modelling within different application areas, including astrophysics, epidemiology, ecology, and imaging. The focus then shifts to methods, with discussions on point and object process modelling, perfect sampling of cluster processes, partitioning in space and space-time, spatial and spatio-temporal process modelling, nonparametric methods for clustering, and spatio-temporal ...

  14. Clusters and how to make it work : Cluster Strategy Toolkit

    NARCIS (Netherlands)

    Manickam, Anu; van Berkel, Karel

    2014-01-01

    Clusters are the magic answer to regional economic development. Firms in clusters are more innovative; cluster policy dominates EU policy; ‘top-sectors’ and excellence are the choice of national policy makers; clusters are ‘in’. But, clusters are complex, clusters are ‘messy’; there is no clear

  15. Multiscale visual quality assessment for cluster analysis with self-organizing maps

    Science.gov (United States)

    Bernard, Jürgen; von Landesberger, Tatiana; Bremm, Sebastian; Schreck, Tobias

    2011-01-01

    Cluster analysis is an important data mining technique for analyzing large amounts of data, reducing many objects to a limited number of clusters. Cluster visualization techniques aim at supporting the user in better understanding the characteristics and relationships among the found clusters. While promising approaches to visual cluster analysis already exist, these usually fall short of incorporating the quality of the obtained clustering results. However, due to the nature of the clustering process, quality plays an important role, as for most practical data sets many different clusterings are typically possible. Being aware of clustering quality is important to judge the expressiveness of a given cluster visualization, or to adjust the clustering process with refined parameters, among others. In this work, we present an encompassing suite of visual tools for quality assessment of an important visual cluster algorithm, namely, the Self-Organizing Map (SOM) technique. We define, measure, and visualize the notion of SOM cluster quality along a hierarchy of cluster abstractions. The quality abstractions range from simple scalar-valued quality scores up to the structural comparison of a given SOM clustering with output of additional supportive clustering methods. The suite of methods allows the user to assess the SOM quality on the appropriate abstraction level, and arrive at improved clustering results. We implement our tools in an integrated system, apply it on experimental data sets, and show its applicability.
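
    One of the simplest scalar-valued quality scores in such a hierarchy is the quantization error of a trained SOM. The sketch below computes it with plain NumPy for an arbitrary codebook; the data and codebook here are random placeholders rather than output of the authors' tool.

```python
# Minimal sketch: quantization error of a self-organizing map, i.e. the mean
# distance between each data vector and its best-matching unit (BMU).
import numpy as np

def quantization_error(data, codebook):
    """data: (n_samples, n_features); codebook: (n_units, n_features)."""
    # Pairwise Euclidean distances between samples and SOM units.
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 5))
    codebook = rng.normal(size=(16, 5))     # e.g. a 4x4 map, flattened
    print("quantization error:", quantization_error(data, codebook))
```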

  16. Computational Aspects of Nuclear Coupled-Cluster Theory

    International Nuclear Information System (INIS)

    Dean, David Jarvis; Hagen, Gaute; Hjorth-Jensen, M.; Papenbrock, T.F.

    2008-01-01

    Coupled-cluster theory represents an important theoretical tool that we use to solve the quantum many-body problem. Coupled-cluster theory also lends itself to computation in a parallel computing environment. In this article, we present selected results from ab initio studies of stable and weakly bound nuclei utilizing computational techniques that we employ to solve coupled-cluster theory. We also outline several perspectives for future research directions in this area.

  17. Spectromicroscopy of self-assembled protein clusters

    Energy Technology Data Exchange (ETDEWEB)

    Schonschek, O.; Hormes, J.; Herzog, V. [Univ. of Bonn (Germany)

    1997-04-01

    The aim of this project is to use synchrotron radiation as a tool to study biomedical questions concerned with the thyroid glands. The biological background is outlined in a recent paper. In short, Thyroglobulin (TG), the precursor protein of the hormone thyroxine, forms large (20 - 500 microns in diameter) clusters in the extracellular lumen of thyrocytes. The process of the cluster formation is still not well understood but is thought to be a main storage mechanism of TG and therefore thyroxine inside the thyroid glands. For human thyroids, the interconnections of the proteins inside the clusters are mainly disulfide bondings. Normally, sulfur bridges are catalyzed by an enzyme called Protein Disulfide Bridge Isomerase (PDI). While this enzyme is supposed to be not present in any extracellular space, the cluster formation of TG takes place in the lumen between the thyrocytes. A possible explanation is the autocatalysis of TG.

  18. Comparing the performance of biomedical clustering methods

    DEFF Research Database (Denmark)

    Wiwie, Christian; Baumbach, Jan; Röttger, Richard

    2015-01-01

    Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene expression to protein domains. Performance was judged on the basis of 13 common cluster validity indices. We developed a clustering analysis platform, ClustEval (http://clusteval.mpi-inf.mpg.de), to promote streamlined evaluation, comparison and reproducibility of clustering results in the future. This allowed us to objectively evaluate the performance of all tools on all data sets with up to 1,000 different parameter sets each, resulting in a total of more than 4 million calculated cluster validity indices. We observed that there was no universal best performer, but on the basis of this wide
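
    For readers who want a feel for what a cluster validity index measures, the short sketch below scores two clusterings of a synthetic data set with three common indices using scikit-learn; it only illustrates the idea and is unrelated to the ClustEval platform described above.

```python
# Toy illustration of judging clusterings with common validity indices;
# this is not the ClustEval platform itself, just the general idea.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

methods = {
    "k-means (k=4)": KMeans(n_clusters=4, n_init=10, random_state=0),
    "agglomerative (k=4)": AgglomerativeClustering(n_clusters=4),
}

for name, model in methods.items():
    labels = model.fit_predict(X)
    print(name,
          "silhouette=%.3f" % silhouette_score(X, labels),
          "calinski-harabasz=%.1f" % calinski_harabasz_score(X, labels),
          "davies-bouldin=%.3f" % davies_bouldin_score(X, labels))
```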

  19. TH-A-19A-08: Intel Xeon Phi Implementation of a Fast Multi-Purpose Monte Carlo Simulation for Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Souris, K; Lee, J; Sterpin, E [Universite catholique de Louvain, Brussels (Belgium)

    2014-06-15

    Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms, adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate. The method of fictitious interactions handles the interfaces and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth dose and transversal profiles computed by MCsquare and Geant4 are within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare but this is unlikely to have any clinical impact. The computation time varies between 90 seconds for the most conservative settings to merely 59 seconds in the fastest configuration. Finally prompt gamma profiles are also in very good agreement with PENH results. Conclusion: Our new, fast, and multi-purpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time

  20. TH-A-19A-08: Intel Xeon Phi Implementation of a Fast Multi-Purpose Monte Carlo Simulation for Proton Therapy

    International Nuclear Information System (INIS)

    Souris, K; Lee, J; Sterpin, E

    2014-01-01

    Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms, adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first one is the most conservative and accurate. The method of fictitious interactions handles the interfaces and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth dose and transversal profiles computed by MCsquare and Geant4 are within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare but this is unlikely to have any clinical impact. The computation time varies between 90 seconds for the most conservative settings to merely 59 seconds in the fastest configuration. Finally prompt gamma profiles are also in very good agreement with PENH results. Conclusion: Our new, fast, and multi-purpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time

  1. Agricultural Clusters in the Netherlands

    NARCIS (Netherlands)

    Schouten, M.A.; Heijman, W.J.M.

    2012-01-01

    Michael Porter was the first to use the term cluster in an economic context. He introduced the term in The Competitive Advantage of Nations (1990). The term cluster is also known as business cluster, industry cluster, competitive cluster or Porterian cluster. This article aims at determining and

  2. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  3. Open source clustering software.

    Science.gov (United States)

    de Hoon, M J L; Imoto, S; Nolan, J; Miyano, S

    2004-06-12

    We have implemented k-means clustering, hierarchical clustering and self-organizing maps in a single multipurpose open-source library of C routines, callable from other C and C++ programs. Using this library, we have created an improved version of Michael Eisen's well-known Cluster program for Windows, Mac OS X and Linux/Unix. In addition, we generated a Python and a Perl interface to the C Clustering Library, thereby combining the flexibility of a scripting language with the speed of C. The C Clustering Library and the corresponding Python C extension module Pycluster were released under the Python License, while the Perl module Algorithm::Cluster was released under the Artistic License. The GUI code Cluster 3.0 for Windows, Macintosh and Linux/Unix, as well as the corresponding command-line program, were released under the same license as the original Cluster code. The complete source code is available at http://bonsai.ims.u-tokyo.ac.jp/mdehoon/software/cluster. Alternatively, Algorithm::Cluster can be downloaded from CPAN, while Pycluster is also available as part of the Biopython distribution.
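
    A minimal usage sketch of the Python interface is shown below. It assumes the kcluster k-means entry point documented for Pycluster; the exact keyword arguments and return values may differ between releases, so check the installed version before relying on it.

```python
# Sketch of k-means clustering with the C Clustering Library's Python
# interface (Pycluster). The kcluster call follows the library's
# documentation; verify the signature against your installed release.
import numpy as np
import Pycluster

data = np.random.default_rng(0).normal(size=(50, 4))   # 50 items, 4 features

# Partition into 3 clusters, repeating the clustering passes 10 times and
# keeping the best solution found.
clusterid, error, nfound = Pycluster.kcluster(data, nclusters=3, npass=10)

print("cluster assignments:", clusterid)
print("within-cluster error:", error, "| best solution found", nfound, "times")
```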

  4. Control and stimulation tools (cat) for the PC modelers

    International Nuclear Information System (INIS)

    Chan, K.S.; Lea, K.C.

    1990-01-01

    Over the last couple of years, personal computer technology has received a steady stream of improvements, from CPU processing power, fast floating-point coprocessors and advanced graphics to very large and fast hard drives. Since Intel began shipping its 80386 CPU, it has become practical to develop or execute a substantial amount of power plant software on a personal computer. With the introduction of RISC-based personal workstations, complete simulators based on this new generation of computers will soon become a reality. As of today, although almost anybody can afford a personal computer, simulation support software is still rare or non-existent. CAST has been designed to support users who want to develop or debug large power plant simulation programs on a personal computer or workstation. A separate paper will demonstrate the real-world development and debugging tools offered by CAST

  5. Simulation tools

    CERN Document Server

    Jenni, F

    2006-01-01

    In the last two decades, simulation tools have contributed significantly to the great progress in the development of power electronics. Time to market was shortened and development costs were reduced drastically. Falling costs, as well as improved speed and precision, opened new fields of application. Today, continuous and switched circuits can be mixed. A good number of powerful simulation tools is available, and users have to choose the one best suited to their application. Here a simple rule applies: the best available simulation tool is the tool the user is already used to (provided it can solve the task). Capabilities, speed, user friendliness and other features are continuously being improved, even though they are already powerful and comfortable. This paper aims at giving the reader an insight into the simulation of power electronics. Starting with simplified models ...

  6. Electron: Cluster interactions

    International Nuclear Information System (INIS)

    Scheidemann, A.A.; Knight, W.D.

    1994-02-01

    Beam depletion spectroscopy has been used to measure absolute total inelastic electron-sodium cluster collision cross sections in the energy range from E ∼ 0.1 to E ∼ 6 eV. The investigation focused on the closed-shell clusters Na8, Na20, and Na40. The measured cross sections show an increase for the lowest collision energies where electron attachment is the primary scattering channel. The electron attachment cross section can be understood in terms of Langevin scattering, connecting this measurement with the polarizability of the cluster. For energies above the dissociation energy the measured electron-cluster cross section is energy independent, thus defining an electron-cluster interaction range. This interaction range increases with the cluster size

  7. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...

  8. Comprehensive studies of hydrogeochemical processes and quality status of groundwater with tools of cluster, grouping analysis, and fuzzy set method using GIS platform: a case study of Dalcheon in Ulsan City, Korea.

    Science.gov (United States)

    Venkatramanan, S; Chung, S Y; Rajesh, R; Lee, S Y; Ramkumar, T; Prasanna, M V

    2015-08-01

    This research aimed at developing comprehensive assessments of physicochemical quality of groundwater for drinking and irrigation purposes at Dalcheon in Ulsan City, Korea. The mean concentrations of major ions were as follows: Ca (94.3 mg/L) > Mg (41.7 mg/L) > Na (19.2 mg/L) > K (3.2 mg/L) for cations and SO4 (351 mg/L) > HCO3 (169 mg/L) > Cl (19 mg/L) for anions. Thematic maps for physicochemical parameters of groundwater were prepared, classified, weighted, and integrated in a GIS method with fuzzy logic. The maps showed that the zones suitable for drinking and irrigation purposes occupied the SE, NE, and NW sectors. The zone undesirable for drinking purposes was observed in the SW and central parts, and that of irrigation was in the western part of the study area. This was influenced by improperly treated effluents from an abandoned iron ore mine, irrigation, and domestic fields. By grouping analysis, groundwater types were classified into Ca(HCO3)2, (Ca,Mg)Cl2, and CaCl2, and CaHCO3 was the most predominant type. Grouping analysis also showed three types of irrigation water, namely C1S1, C1S2, and C1S3. The C1S3 type of high salinity to low sodium hazard was the most dominant in the study area. Equilibrium analysis showed that the groundwater samples were in saturated to undersaturated condition with respect to aragonite, calcite, dolomite, and gypsum due to precipitation and deposition processes. Cluster analysis suggested that high contents of SO4 and HCO3 with low Cl were related to water-rock interactions as well as mining impact. This study showed that the effluents discharged from mining waste were the main source of groundwater quality deterioration.

  9. Substructure in clusters of galaxies

    International Nuclear Information System (INIS)

    Fitchett, M.J.

    1988-01-01

    Optical observations suggesting the existence of substructure in clusters of galaxies are examined. Models of cluster formation and methods used to detect substructure in clusters are reviewed. Consideration is given to classification schemes based on a departure of bright cluster galaxies from a spherically symmetric distribution, evidence for statistically significant substructure, and various types of substructure, including velocity, spatial, and spatial-velocity substructure. The substructure observed in the galaxy distribution in clusters is discussed, focusing on observations from general cluster samples, the Virgo cluster, the Hydra cluster, Centaurus, the Coma cluster, and the Cancer cluster. 88 refs

  10. Nuclear cluster states

    International Nuclear Information System (INIS)

    Rae, W.D.M.; Merchant, A.C.

    1993-01-01

    We review clustering in light nuclei including molecular resonances in heavy ion reactions. In particular we study the systematics, paying special attention to the relationships between cluster states and superdeformed configurations. We emphasise the selection rules which govern the formation and decay of cluster states. We review some recent experimental results from Daresbury and elsewhere. In particular we report on the evidence for a 7-α chain state in ²⁸Si in experiments recently performed at the NSF, Daresbury. Finally we begin to address theoretically the important question of the lifetimes of cluster states as deduced from the experimental energy widths of the resonances. (Author)

  11. 15th Cluster workshop

    CERN Document Server

    Laakso, Harri; Escoubet, C. Philippe; The Cluster Active Archive : Studying the Earth’s Space Plasma Environment

    2010-01-01

    Since the year 2000 the ESA Cluster mission has been investigating the small-scale structures and processes of the Earth's plasma environment, such as those involved in the interaction between the solar wind and the magnetospheric plasma, in global magnetotail dynamics, in cross-tail currents, and in the formation and dynamics of the neutral line and of plasmoids. This book contains presentations made at the 15th Cluster workshop held in March 2008. It also presents several articles about the Cluster Active Archive and its datasets, a few overview papers on the Cluster mission, and articles reporting on scientific findings on the solar wind, the magnetosheath, the magnetopause and the magnetotail.

  12. Clusters in simple fluids

    International Nuclear Information System (INIS)

    Sator, N.

    2003-01-01

    This article concerns the correspondence between thermodynamics and the morphology of simple fluids in terms of clusters. Definitions of clusters providing a geometric interpretation of the liquid-gas phase transition are reviewed with an eye to establishing their physical relevance. The author emphasizes their main features and basic hypotheses, and shows how these definitions lead to a recent approach based on self-bound clusters. Although theoretical, this tutorial review is also addressed to readers interested in experimental aspects of clustering in simple fluids

  13. Clustering by reordering of similarity and Laplacian matrices: Application to galaxy clusters

    Science.gov (United States)

    Mahmoud, E.; Shoukry, A.; Takey, A.

    2018-04-01

    Similarity metrics, kernels and similarity-based algorithms have gained much attention due to their increasing applications in information retrieval, data mining, pattern recognition and machine learning. Similarity Graphs are often adopted as the underlying representation of similarity matrices and are at the origin of known clustering algorithms such as spectral clustering. Similarity matrices offer the advantage of working in object-object (two-dimensional) space where visualization of cluster similarities is available instead of object-features (multi-dimensional) space. In this paper, sparse ɛ-similarity graphs are constructed and decomposed into strong components using appropriate methods such as Dulmage-Mendelsohn permutation (DMperm) and/or Reverse Cuthill-McKee (RCM) algorithms. The obtained strong components correspond to groups (clusters) in the input (feature) space. Parameter ɛi is estimated locally, at each data point i, from a corresponding narrow range of the number of nearest neighbors. Although more advanced clustering techniques are available, our method has the advantages of simplicity, better complexity and direct visualization of the cluster similarities in a two-dimensional space. Also, no prior information about the number of clusters is needed. We conducted our experiments on two and three dimensional, low and high-sized synthetic datasets as well as on a real astronomical dataset. The results are verified graphically and analyzed using gap statistics over a range of neighbors to verify the robustness of the algorithm and the stability of the results. Combining the proposed algorithm with gap statistics provides a promising tool for solving clustering problems. An astronomical application is conducted for confirming the existence of 45 galaxy clusters around the X-ray positions of galaxy clusters in the redshift range [0.1..0.8]. We re-estimate the photometric redshifts of the identified galaxy clusters and obtain acceptable values
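
    A rough sketch of the general recipe (sparse ε-similarity graph, Reverse Cuthill-McKee reordering, clusters read off as connected components) is given below using SciPy. It uses a single global ε and is therefore only an approximation of the locally adaptive procedure described above.

```python
# Minimal sketch of clustering via an epsilon-similarity graph: connect points
# closer than eps, reorder the sparse adjacency matrix with Reverse
# Cuthill-McKee, and read clusters off as connected components.
# Illustrative only; the paper estimates eps locally at every point.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee, connected_components

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])

eps = 1.0
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
A = csr_matrix(((d < eps) & (d > 0)).astype(int))   # epsilon-neighborhood graph

perm = reverse_cuthill_mckee(A, symmetric_mode=True)    # band-reducing reorder
n_clusters, labels = connected_components(A, directed=False)

print("RCM permutation (first 10 indices):", perm[:10])
print("number of clusters found:", n_clusters)
```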

  14. Lifting to cluster-tilting objects in higher cluster categories

    OpenAIRE

    Liu, Pin

    2008-01-01

    In this note, we consider the $d$-cluster-tilted algebras, the endomorphism algebras of $d$-cluster-tilting objects in $d$-cluster categories. We show that a tilting module over such an algebra lifts to a $d$-cluster-tilting object in this $d$-cluster category.

  15. Cluster analysis of activity-time series in motor learning

    DEFF Research Database (Denmark)

    Balslev, Daniela; Nielsen, Finn Å; Futiger, Sally A

    2002-01-01

    Neuroimaging studies of learning focus on brain areas where the activity changes as a function of time. To circumvent the difficult problem of model selection, we used a data-driven analytic tool, cluster analysis, which extracts representative temporal and spatial patterns from the voxel-time series. The optimal number of clusters was chosen using a cross-validated likelihood method, which highlights the clustering pattern that generalizes best over the subjects. Data were acquired with PET at different time points during practice of a visuomotor task. The results from cluster analysis show...

  16. Soft landing of size selected clusters in rare gas matrices

    International Nuclear Information System (INIS)

    Lau, J.T; Wurth, W.; Ehrke, H-U.; Achleitner, A.

    2003-01-01

    Soft landing of mass selected clusters in rare gas matrices is a technique used to preserve mass selection in cluster deposition. To prevent fragmentation upon deposition, the substrate is covered with rare gas matrices to dissipate the cluster kinetic energy upon impact. Theoretical and experimental studies demonstrate the power of this technique. Besides STM, optical absorption, excitation, and fluorescence experiments, x-ray absorption at core levels can be used as a tool to study soft landing conditions, as will be shown here. X-ray absorption spectroscopy is also well suited to follow diffusion and agglomeration of clusters on surfaces via energy shifts in core level absorption

  17. Neurostimulation in cluster headache

    DEFF Research Database (Denmark)

    Pedersen, Jeppe L; Barloese, Mads; Jensen, Rigmor H

    2013-01-01

    PURPOSE OF REVIEW: Neurostimulation has emerged as a viable treatment for intractable chronic cluster headache. Several therapeutic strategies are being investigated including stimulation of the hypothalamus, occipital nerves and sphenopalatine ganglion. The aim of this review is to provide...... effective strategy must be preferred as first-line therapy for intractable chronic cluster headache....

  18. Cauchy cluster process

    DEFF Research Database (Denmark)

    Ghorbani, Mohammad

    2013-01-01

    In this paper we introduce an instance of the well-known Neyman–Scott cluster process model with clusters having a long tail behaviour. In our model the offspring points are distributed around the parent points according to a circular Cauchy distribution. Using a modified Cramér-von Mises test...
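
    To get a feel for such a process, the sketch below simulates a Neyman-Scott pattern with heavy-tailed offspring displacements, sampled as a bivariate t distribution with one degree of freedom (a Cauchy-type law). The parametrization and parameter values are placeholders and may differ from the paper's circular Cauchy model.

```python
# Illustrative simulation of a Neyman-Scott cluster process whose offspring
# are displaced from their parents by a heavy-tailed, isotropic Cauchy-type
# distribution (sampled as a bivariate t with 1 degree of freedom).
# Parameter names and values are placeholders, not those of the paper.
import numpy as np

rng = np.random.default_rng(0)

kappa = 10      # intensity of the parent Poisson process on the unit square
mu = 20         # mean number of offspring per parent
scale = 0.02    # scale of the heavy-tailed displacement

n_parents = rng.poisson(kappa)
parents = rng.uniform(size=(n_parents, 2))

points = []
for p in parents:
    n_off = rng.poisson(mu)
    gauss = rng.normal(size=(n_off, 2))
    chi = np.sqrt(rng.chisquare(df=1, size=(n_off, 1)))
    points.append(p + scale * gauss / chi)     # heavy-tailed displacements

points = np.vstack(points) if points else np.empty((0, 2))
print(f"{n_parents} parents, {len(points)} offspring points")
```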

  19. When Clusters become Networks

    NARCIS (Netherlands)

    S.M.W. Phlippen (Sandra); G.A. van der Knaap (Bert)

    2007-01-01

    Policy makers spend large amounts of public resources on the foundation of science parks and other forms of geographically clustered business activities, in order to stimulate regional innovation. Underlying the relation between clusters and innovation is the assumption that co-located

  20. Mixed-Initiative Clustering

    Science.gov (United States)

    Huang, Yifen

    2010-01-01

    Mixed-initiative clustering is a task where a user and a machine work collaboratively to analyze a large set of documents. We hypothesize that a user and a machine can both learn better clustering models through enriched communication and interactive learning from each other. The first contribution of this thesis is providing a framework of…

  1. Coma cluster of galaxies

    Science.gov (United States)

    1999-01-01

    Atlas Image mosaic, covering 34' x 34' on the sky, of the Coma cluster, aka Abell 1656. This is a particularly rich cluster of individual galaxies (over 1000 members), most prominently the two giant ellipticals, NGC 4874 (right) and NGC 4889 (left). The remaining members are mostly smaller ellipticals, but spiral galaxies are also evident in the 2MASS image. The cluster is seen toward the constellation Coma Berenices, but is actually at a distance of about 100 Mpc (330 million light years, or a redshift of 0.023) from us. At this distance, the cluster is in what is known as the 'Hubble flow,' or the overall expansion of the Universe. As such, astronomers can measure the Hubble Constant, or the universal expansion rate, based on the distance to this cluster. Large, rich clusters, such as Coma, allow astronomers to measure the 'missing mass,' i.e., the matter in the cluster that we cannot see, since it gravitationally influences the motions of the member galaxies within the cluster. The near-infrared maps the overall luminous mass content of the member galaxies, since the light at these wavelengths is dominated by the more numerous older stellar populations. Galaxies, as seen by 2MASS, look fairly smooth and homogeneous, as can be seen from the Hubble 'tuning fork' diagram of near-infrared galaxy morphology. Image mosaic by S. Van Dyk (IPAC).

  2. Cluster growth kinetics

    International Nuclear Information System (INIS)

    Dubovik, V.M.; Gal'perin, A.G.; Rikhvitskij, V.S.; Lushnikov, A.A.

    2000-01-01

    The emergence of traffic blocking is treated as a probabilistic process. We study analytic solutions of models for the dynamics of both cluster growth and cluster growth with fragmentation in systems with a finite number of objects. Assuming constant rates of both coalescence and fragmentation, the models under consideration are linear in the probability functions

  3. Alpha clustering in nuclei

    International Nuclear Information System (INIS)

    Hodgson, P.E.

    1990-01-01

    The effects of nucleon clustering in nuclei are described, with reference to both nuclear structure and nuclear reactions, and the advantages of using the cluster formalism to describe a range of phenomena are discussed. It is shown that bound and scattering alpha-particle states can be described in a unified way using an energy-dependent alpha-nucleus potential. (author)

  4. Mining the National Career Assessment Examination Result Using Clustering Algorithm

    Science.gov (United States)

    Pagudpud, M. V.; Palaoag, T. T.; Padirayon, L. M.

    2018-03-01

    Education is an essential process today which prompts authorities to discover and establish innovative strategies for educational improvement. This study applied data mining using a clustering technique for knowledge extraction from the National Career Assessment Examination (NCAE) result in the Division of Quirino. The NCAE is an examination given to all grade 9 students in the Philippines to assess their aptitudes in the different domains. Clustering the students is helpful in identifying students’ learning considerations. With the use of the RapidMiner tool, clustering algorithms such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means, k-medoid, expectation maximization clustering, and support vector clustering were analyzed. The silhouette indexes of the said clustering algorithms were compared, and the result showed that the k-means algorithm with k = 3 and silhouette index equal to 0.196 is the most appropriate clustering algorithm to group the students. Three groups were formed, having 477 students in the determined group (cluster 0), 310 proficient students (cluster 1) and 396 developing students (cluster 2). The data mining technique used in this study is essential in extracting useful information from the NCAE result to better understand the abilities of students, which in turn is a good basis for adopting teaching strategies.
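
    The model selection step reported above can be reproduced in spirit with a few lines of scikit-learn: run k-means for several values of k and compare silhouette scores. The data below are synthetic placeholders; the study itself used RapidMiner and the real NCAE aptitude scores.

```python
# Hypothetical re-creation of the model selection step: run k-means for
# several k and compare silhouette scores (the study used RapidMiner and
# real NCAE data; the blobs below are synthetic placeholders).
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# 1183 synthetic "students" (477 + 310 + 396) with 6 aptitude-like features.
X, _ = make_blobs(n_samples=1183, centers=3, n_features=6, random_state=42)

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    print(f"k={k}: silhouette = {silhouette_score(X, labels):.3f}")
```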

  5. clusterMaker: a multi-algorithm clustering plugin for Cytoscape

    Directory of Open Access Journals (Sweden)

    Morris John H

    2011-11-01

    Full Text Available Abstract Background In the post-genomic era, the rapid increase in high-throughput data calls for computational tools capable of integrating data of diverse types and facilitating recognition of biologically meaningful patterns within them. For example, protein-protein interaction data sets have been clustered to identify stable complexes, but scientists lack easily accessible tools to facilitate combined analyses of multiple data sets from different types of experiments. Here we present clusterMaker, a Cytoscape plugin that implements several clustering algorithms and provides network, dendrogram, and heat map views of the results. The Cytoscape network is linked to all of the other views, so that a selection in one is immediately reflected in the others. clusterMaker is the first Cytoscape plugin to implement such a wide variety of clustering algorithms and visualizations, including the only implementations of hierarchical clustering, dendrogram plus heat map visualization (tree view), k-means, k-medoid, SCPS, AutoSOME, and native (Java) MCL. Results Results are presented in the form of three scenarios of use: analysis of protein expression data using a recently published mouse interactome and a mouse microarray data set of nearly one hundred diverse cell/tissue types; the identification of protein complexes in the yeast Saccharomyces cerevisiae; and the cluster analysis of the vicinal oxygen chelate (VOC) enzyme superfamily. For scenario one, we explore functionally enriched mouse interactomes specific to particular cellular phenotypes and apply fuzzy clustering. For scenario two, we explore the prefoldin complex in detail using both physical and genetic interaction clusters. For scenario three, we explore the possible annotation of a protein as a methylmalonyl-CoA epimerase within the VOC superfamily. Cytoscape session files for all three scenarios are provided in the Additional Files section. Conclusions The Cytoscape plugin cluster

  6. Negotiating Cluster Boundaries

    DEFF Research Database (Denmark)

    Giacomin, Valeria

    2017-01-01

    Palm oil was introduced to Malay(si)a as an alternative to natural rubber, inheriting its cluster organizational structure. In the late 1960s, Malaysia became the world’s largest palm oil exporter. Based on archival material from British colonial institutions and agency houses, this paper focuses...... on the governance dynamics that drove institutional change within this cluster during decolonization. The analysis presents three main findings: (i) cluster boundaries are defined by continuous tug-of-war style negotiations between public and private actors; (ii) this interaction produces institutional change...... within the cluster, in the form of cumulative ‘institutional rounds’ – the correction or disruption of existing institutions or the creation of new ones; and (iii) this process leads to a broader inclusion of local actors in the original cluster configuration. The paper challenges the prevalent argument...

  7. Mathematical classification and clustering

    CERN Document Server

    Mirkin, Boris

    1996-01-01

    I am very happy to have this opportunity to present the work of Boris Mirkin, a distinguished Russian scholar in the areas of data analysis and decision making methodologies. The monograph is devoted entirely to clustering, a discipline dispersed through many theoretical and application areas, from mathematical statistics and combinatorial optimization to biology, sociology and organizational structures. It compiles an immense amount of research done to date, including many original Russian developments never presented to the international community before (for instance, cluster-by-cluster versions of the K-Means method in Chapter 4 or uniform partitioning in Chapter 5). The author's approach, approximation clustering, allows him both to systematize a great part of the discipline and to develop many innovative methods in the framework of optimization problems. The optimization methods considered are proved to be meaningful in the contexts of data analysis and clustering. The material presented in ...

  8. Neutrosophic Hierarchical Clustering Algoritms

    Directory of Open Access Journals (Sweden)

    Rıdvan Şahin

    2014-03-01

    Full Text Available Interval neutrosophic set (INS) is a generalization of interval valued intuitionistic fuzzy set (IVIFS), whose membership and non-membership values of elements consist of fuzzy ranges, while single valued neutrosophic set (SVNS) is regarded as an extension of intuitionistic fuzzy set (IFS). In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs respectively. Based on the traditional hierarchical clustering procedure, the single valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. Then we extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples in order to show the effectiveness and availability of the developed clustering algorithms.
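
    As a rough illustration of the overall shape of such a procedure (a distance between single valued neutrosophic sets fed into agglomerative clustering), the sketch below uses a normalized Hamming-style distance of the kind commonly found in the SVNS literature; the paper's own aggregation operators and algorithm differ in the details.

```python
# Illustrative sketch only: cluster single valued neutrosophic sets (SVNSs)
# with ordinary agglomerative clustering on a normalized Hamming-style
# distance. The distance form is a common choice in the SVNS literature,
# not necessarily the one defined in the paper.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Each SVNS over 4 elements: for every element a (truth, indeterminacy,
# falsity) triple, so the array shape is (n_sets, n_elements, 3).
svns = np.array([
    [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.9, 0.0, 0.1], [0.6, 0.3, 0.2]],
    [[0.7, 0.2, 0.2], [0.6, 0.2, 0.2], [0.8, 0.1, 0.1], [0.7, 0.2, 0.1]],
    [[0.2, 0.3, 0.7], [0.1, 0.2, 0.8], [0.3, 0.3, 0.6], [0.2, 0.4, 0.7]],
    [[0.1, 0.2, 0.8], [0.2, 0.3, 0.7], [0.2, 0.2, 0.7], [0.1, 0.3, 0.8]],
])

def svns_distance(a, b):
    # Normalized Hamming-style distance between two SVNSs.
    return np.abs(a - b).sum() / (3 * a.shape[0])

n = len(svns)
dist = np.array([[svns_distance(svns[i], svns[j]) for j in range(n)]
                 for i in range(n)])

Z = linkage(squareform(dist, checks=False), method="average")
print("cluster labels:", fcluster(Z, t=2, criterion="maxclust"))
```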

  9. Herd Clustering: A synergistic data clustering approach using collective intelligence

    KAUST Repository

    Wong, Kachun; Peng, Chengbin; Li, Yue; Chan, Takming

    2014-01-01

    This principle is used to develop a new clustering algorithm. Inspired by herd behavior, the clustering method is a synergistic approach using collective intelligence called Herd Clustering (HC). The novel part lies in its first stage, where data instances

  10. Formation of global energy minimim structures in the growth process of Lennard-Jones clusters

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Koshelev, Andrey; Shutovich, Andrey

    2003-01-01

    that in this way all known global minimum structures of the Lennard-Jones (LJ) clusters can be found. Our method provides an efficient tool for the calculation and analysis of atomic cluster structure. With its use we justify the magic number sequence for clusters of noble gas atoms and compare
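
    The objective behind those global minima is the total Lennard-Jones energy of a configuration, which is easy to write down. The sketch below evaluates it in reduced units for a small test cluster; it is only the energy function, not the growth algorithm of the paper.

```python
# Total Lennard-Jones energy of a cluster in reduced units
# (epsilon = sigma = 1): E = sum over pairs of 4 * (r**-12 - r**-6).
# This is the objective whose global minima define the magic structures;
# the paper's growth algorithm is not reproduced here.
import numpy as np

def lj_energy(coords):
    """coords: (n_atoms, 3) array of positions in reduced units."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.linalg.norm(diff, axis=2)
    iu = np.triu_indices(len(coords), k=1)       # each pair counted once
    r6 = (1.0 / r[iu]) ** 6
    return float(np.sum(4.0 * (r6 ** 2 - r6)))

if __name__ == "__main__":
    # LJ4: a regular tetrahedron with edges at the pair-equilibrium distance,
    # whose energy should come out close to -6 in reduced units.
    a = 2.0 ** (1.0 / 6.0)
    tetra = a * np.array([[0, 0, 0], [1, 0, 0],
                          [0.5, 3 ** 0.5 / 2, 0],
                          [0.5, 3 ** 0.5 / 6, (2 / 3) ** 0.5]])
    print("LJ4 energy (reduced units):", lj_energy(tetra))
```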

  11. Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio [Richland, WA; Calapristi, Augustin J [West Richland, WA; Crow, Vernon L [Richland, WA; Hetzler, Elizabeth G [Kennewick, WA; Turner, Alan E [Kennewick, WA

    2009-12-22

    Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture are described. In one aspect, a document clustering method includes providing a document set comprising a plurality of documents, providing a cluster comprising a subset of the documents of the document set, using a plurality of terms of the documents, providing a cluster label indicative of subject matter content of the documents of the cluster, wherein the cluster label comprises a plurality of word senses, and selecting one of the word senses of the cluster label.

  12. Cluster-cluster correlations and constraints on the correlation hierarchy

    Science.gov (United States)

    Hamilton, A. J. S.; Gott, J. R., III

    1988-01-01

    The hypothesis that galaxies cluster around clusters at least as strongly as they cluster around galaxies imposes constraints on the hierarchy of correlation amplitudes in hierarchical clustering models. The distributions which saturate these constraints are the Rayleigh-Levy random walk fractals proposed by Mandelbrot; for these fractal distributions cluster-cluster correlations are all identically equal to galaxy-galaxy correlations. If correlation amplitudes exceed the constraints, as is observed, then cluster-cluster correlations must exceed galaxy-galaxy correlations, as is observed.

  13. Formation of stable products from cluster-cluster collisions

    International Nuclear Information System (INIS)

    Alamanova, Denitsa; Grigoryan, Valeri G; Springborg, Michael

    2007-01-01

    The formation of stable products from copper cluster-cluster collisions is investigated by using classical molecular-dynamics simulations in combination with an embedded-atom potential. The dependence of the product clusters on impact energy, relative orientation of the clusters, and size of the clusters is studied. The structures and total energies of the product clusters are analysed and compared with those of the colliding clusters before impact. These results, together with the internal temperature, are used in obtaining an increased understanding of cluster fusion processes

  14. Integrated spectral study of small angular diameter galactic open clusters

    Science.gov (United States)

    Clariá, J. J.; Ahumada, A. V.; Bica, E.; Pavani, D. B.; Parisi, M. C.

    2017-10-01

    This paper presents flux-calibrated integrated spectra obtained at Complejo Astronómico El Leoncito (CASLEO, Argentina) for a sample of 9 Galactic open clusters of small angular diameter. The spectra cover the optical range (3800-6800 Å), with a resolution of ˜14 Å. With one exception (Ruprecht 158), the selected clusters are projected into the fourth Galactic quadrant (282o evaluate their membership status. The current cluster sample complements that of 46 open clusters previously studied by our group in an effort to gather a spectral library with several clusters per age bin. The cluster spectral library that we have been building is an important tool to tie studies of resolved and unresolved stellar content.

  15. Tune Your Brown Clustering, Please

    DEFF Research Database (Denmark)

    Derczynski, Leon; Chester, Sean; Bøgh, Kenneth Sejdenfaden

    2015-01-01

    Brown clustering, an unsupervised hierarchical clustering technique based on ngram mutual information, has proven useful in many NLP applications. However, most uses of Brown clustering employ the same default configuration; the appropriateness of this configuration has gone predominantly...

  16. Cluster Management Institutionalization

    DEFF Research Database (Denmark)

    Normann, Leo; Agger Nielsen, Jeppe

    2015-01-01

    of how it was legitimized as a “ready-to-use” management model. Further, our account reveals how cluster management translated into considerably different local variants as it travelled into specific organizations. However, these processes have not occurred sequentially with cluster management first...... legitimized at the field level, then spread, and finally translated into action in the adopting organizations. Instead, we observed entangled field and organizational-level processes. Accordingly, we argue that cluster management institutionalization is most readily understood by simultaneously investigating...

  17. The concept of cluster

    DEFF Research Database (Denmark)

    Laursen, Lea Louise Holst; Møller, Jørgen

    2013-01-01

    villages in order to secure their future. This paper will address the concept of cluster-villages as a possible approach to strengthening the conditions of contemporary Danish villages. Cluster-villages is a concept that gathers a number of villages in a network structure where the villages both work together...... From these two different positions we see a new possibility for village development, which we call Clustervillages. In order to investigate the potentials and possibilities of the cluster-village concept, the paper will seek to unfold the concept strategically, looking into the benefits of such a concept. Further, the paper seeks...

  18. Raspberry Pi super cluster

    CERN Document Server

    Dennis, Andrew K

    2013-01-01

    This book follows a step-by-step, tutorial-based approach which will teach you how to develop your own super cluster using Raspberry Pi computers quickly and efficiently.Raspberry Pi Super Cluster is an introductory guide for those interested in experimenting with parallel computing at home. Aimed at Raspberry Pi enthusiasts, this book is a primer for getting your first cluster up and running.Basic knowledge of C or Java would be helpful but no prior knowledge of parallel computing is necessary.

  19. Introduction to cluster dynamics

    CERN Document Server

    Reinhard, Paul-Gerhard

    2008-01-01

    Clusters as mesoscopic particles represent an intermediate state of matter between single atoms and solid material. The tendency to miniaturise technical objects requires knowledge about systems which contain a "small" number of atoms or molecules only. This is all the more true for dynamical aspects, particularly in relation to the quick development of laser technology and femtosecond spectroscopy. Here, for the first time, is a highly qualitative introduction to cluster physics. With its emphasis on cluster dynamics, this will be vital to everyone involved in this interdisciplinary subje

  20. Contextualizing the Cluster

    DEFF Research Database (Denmark)

    Giacomin, Valeria

    This dissertation examines the case of the palm oil cluster in Malaysia and Indonesia, today one of the largest agricultural clusters in the world. My analysis focuses on the evolution of the cluster from the 1880s to the 1970s in order to understand how it helped these two countries to integrate...... into the global economy in both colonial and post-colonial times. The study is based on empirical material drawn from five UK archives and background research using secondary sources, interviews, and archive visits to Malaysia and Singapore. The dissertation comprises three articles, each discussing a major under...

  1. Atomic cluster collisions

    Science.gov (United States)

    Korol, Andrey V.; Solov'yov, Andrey

    2013-01-01

    Atomic cluster collisions are a field of rapidly emerging research interest by both experimentalists and theorists. The international symposium on atomic cluster collisions (ISSAC) is the premier forum to present cutting-edge research in this field. It was established in 2003 and the most recent conference was held in Berlin, Germany in July of 2011. This Topical Issue presents original research results from some of the participants who attended this conference. This issue specifically focuses on two research areas, namely Clusters and Fullerenes in External Fields and Nanoscale Insights in Radiation Biodamage.

  2. Combining cluster number counts and galaxy clustering

    Energy Technology Data Exchange (ETDEWEB)

    Lacasa, Fabien; Rosenfeld, Rogerio, E-mail: fabien@ift.unesp.br, E-mail: rosenfel@ift.unesp.br [ICTP South American Institute for Fundamental Research, Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil)

    2016-08-01

    The abundance of clusters and the clustering of galaxies are two of the important cosmological probes for current and future large scale surveys of galaxies, such as the Dark Energy Survey. In order to combine them one has to account for the fact that they are not independent quantities, since they probe the same density field. It is important to develop a good understanding of their correlation in order to extract parameter constraints. We present a detailed modelling of the joint covariance matrix between cluster number counts and the galaxy angular power spectrum. We employ the framework of the halo model complemented by a Halo Occupation Distribution model (HOD). We demonstrate the importance of accounting for non-Gaussianity to produce accurate covariance predictions. Indeed, we show that the non-Gaussian covariance becomes dominant at small scales, low redshifts or high cluster masses. We discuss in particular the case of the super-sample covariance (SSC), including the effects of galaxy shot-noise, halo second order bias and non-local bias. We demonstrate that the SSC obeys mathematical inequalities and positivity. Using the joint covariance matrix and a Fisher matrix methodology, we examine the prospects of combining these two probes to constrain cosmological and HOD parameters. We find that the combination indeed results in noticeably better constraints, with improvements of order 20% on cosmological parameters compared to the best single probe, and even greater improvement on HOD parameters, with reduction of error bars by a factor 1.4-4.8. This happens in particular because the cross-covariance introduces a synergy between the probes on small scales. We conclude that accounting for non-Gaussian effects is required for the joint analysis of these observables in galaxy surveys.

  3. Homological methods, representation theory, and cluster algebras

    CERN Document Server

    Trepode, Sonia

    2018-01-01

    This text presents six mini-courses, all devoted to interactions between representation theory of algebras, homological algebra, and the new ever-expanding theory of cluster algebras. The interplay between the topics discussed in this text will continue to grow and this collection of courses stands as a partial testimony to this new development. The courses are useful for any mathematician who would like to learn more about this rapidly developing field; the primary aim is to engage graduate students and young researchers. Prerequisites include knowledge of some noncommutative algebra or homological algebra. Homological algebra has always been considered as one of the main tools in the study of finite-dimensional algebras. The strong relationship with cluster algebras is more recent and has quickly established itself as one of the important highlights of today’s mathematical landscape. This connection has been fruitful to both areas—representation theory provides a categorification of cluster algebras, wh...

  4. Topics in modelling of clustered data

    CERN Document Server

    Aerts, Marc; Ryan, Louise M; Geys, Helena

    2002-01-01

    Many methods for analyzing clustered data exist, all with advantages and limitations in particular applications. Compiled from the contributions of leading specialists in the field, Topics in Modelling of Clustered Data describes the tools and techniques for modelling the clustered data often encountered in medical, biological, environmental, and social science studies. It focuses on providing a comprehensive treatment of marginal, conditional, and random effects models using, among others, likelihood, pseudo-likelihood, and generalized estimating equations methods. The authors motivate and illustrate all aspects of these models in a variety of real applications. They discuss several variations and extensions, including individual-level covariates and combined continuous and discrete outcomes. Flexible modelling with fractional and local polynomials, omnibus lack-of-fit tests, robustification against misspecification, exact, and bootstrap inferential procedures all receive extensive treatment. The application...

  5. Metal cluster compounds - chemistry and importance; clusters containing isolated main group element atoms, large metal cluster compounds, cluster fluxionality

    International Nuclear Information System (INIS)

    Walther, B.

    1988-01-01

    This part of the review on metal cluster compounds deals with clusters containing isolated main group element atoms, with high nuclearity clusters and with metal cluster fluxionality. It will be obvious that main group element atoms strongly influence the geometry, stability and reactivity of the clusters. High nuclearity clusters are of interest in their own right due to the diversity of the structures adopted, but their intermediate position between molecules and the metallic state makes them a fascinating research object too. Both these aspects of metal cluster chemistry, as well as the frequently observed ligand and core fluxionality, are related to the analogy between clusters and metal surfaces. (author)

  6. ANALISIS SEGMENTASI PELANGGAN MENGGUNAKAN KOMBINASI RFM MODEL DAN TEKNIK CLUSTERING

    Directory of Open Access Journals (Sweden)

    Beta Estri Adiana

    2018-04-01

    Full Text Available Intense competition in the business field motivates small and medium enterprises (SMEs) to manage customer services as well as possible. Customer loyalty can be improved by grouping customers into several groups and determining appropriate and effective marketing strategies for each group. Customer segmentation can be performed with a data mining approach using clustering methods. The main purpose of this paper is customer segmentation and measurement of customer loyalty to an SME's product. The CRISP-DM method is used, which consists of six phases, namely business understanding, data understanding, data preparation, modeling, evaluation and deployment. The K-Means algorithm is used for cluster formation, and RapidMiner is used as a tool to evaluate the resulting clusters. Cluster formation is based on RFM (recency, frequency, monetary) analysis. The Davies-Bouldin Index (DBI) is used to find the optimal number of clusters (k). The customers are divided into 3 clusters: the first cluster contains 30 customers in the typical customer category, the second cluster contains 8 customers in the superstar customer category, and the third cluster contains 89 customers in the dormant customer category.
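
    For readers who want to reproduce the general workflow rather than the paper's exact data or RapidMiner setup, the hedged sketch below runs K-Means on a small synthetic RFM table and uses the Davies-Bouldin index to compare candidate values of k; the column names and data are illustrative assumptions only.

```python
# Sketch: K-Means segmentation on RFM features with Davies-Bouldin model selection.
# Synthetic data; in the paper the RFM table comes from real SME transactions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: recency (days), frequency (orders), monetary (total spend).
rfm = np.column_stack([rng.integers(1, 365, 120),
                       rng.integers(1, 40, 120),
                       rng.gamma(2.0, 150.0, 120)])
X = StandardScaler().fit_transform(rfm)

# A lower Davies-Bouldin index indicates more compact, better-separated clusters.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(davies_bouldin_score(X, labels), 3))
```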

  7. Two-Way Regularized Fuzzy Clustering of Multiple Correspondence Analysis.

    Science.gov (United States)

    Kim, Sunmee; Choi, Ji Yeh; Hwang, Heungsun

    2017-01-01

    Multiple correspondence analysis (MCA) is a useful tool for investigating the interrelationships among dummy-coded categorical variables. MCA has been combined with clustering methods to examine whether there exist heterogeneous subclusters of a population, which exhibit cluster-level heterogeneity. These combined approaches aim to classify either observations only (one-way clustering of MCA) or both observations and variable categories (two-way clustering of MCA). The latter approach is favored because its solutions are easier to interpret by providing explicitly which subgroup of observations is associated with which subset of variable categories. Nonetheless, the two-way approach has been built on hard classification that assumes observations and/or variable categories to belong to only one cluster. To relax this assumption, we propose two-way fuzzy clustering of MCA. Specifically, we combine MCA with fuzzy k-means simultaneously to classify a subgroup of observations and a subset of variable categories into a common cluster, while allowing both observations and variable categories to belong partially to multiple clusters. Importantly, we adopt regularized fuzzy k-means, thereby enabling us to decide the degree of fuzziness in cluster memberships automatically. We evaluate the performance of the proposed approach through the analysis of simulated and real data, in comparison with existing two-way clustering approaches.
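
    The two-way regularized method itself is specialized, but its fuzzy k-means building block can be sketched compactly. The snippet below implements a plain (one-way, unregularized) fuzzy c-means update loop as an assumption-laden illustration of soft cluster memberships; it is not the authors' two-way MCA procedure.

```python
# Plain fuzzy c-means sketch: soft cluster memberships via alternating updates.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # fuzzily weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                  # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 4])
centers, U = fuzzy_c_means(X)
print(centers.round(2))
```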

  8. Disentangling Porterian Clusters

    DEFF Research Database (Denmark)

    Jagtfelt, Tue

    , contested theory become so widely disseminated and applied as a normative and prescriptive strategy for economic development? The dissertation traces the introduction of the cluster notion into the EU’s Lisbon Strategy and demonstrates how its inclusion originates from Porter’s colleagues: Professor Örjan...... to his membership on the Commission on Industrial Competitiveness, and that the cluster notion found in his influential book, Nations, represents a significant shift in his conception of cluster compared with his early conceptions. This shift, it is argued, is a deliberate attempt by Porter to create...... a paradigmatic textbook that follows Kuhn’s blueprint for scientific revolutions by instilling Nations with circular references and thus creating a local linguistic holism conceptualized through an encompassing notion of cluster. The dissertation concludes that the two research questions are philosophically...

  9. Remarks on stellar clusters

    International Nuclear Information System (INIS)

    Teller, E.

    1985-01-01

    In the following, a few simple remarks on the evolution and properties of stellar clusters will be collected. In particular, globular clusters will be considered. Though details of such clusters are often not known, a few questions can be clarified with the help of primitive arguments. These are: why are spherical clusters spherical, why do they have high densities, why do they consist of approximately a million stars, how may a black hole of great mass form within them, may they be the origin of gamma-ray bursts, may their invisible remnants account for the missing mass of our galaxy. The available data do not warrant a detailed evaluation. However, it is remarkable that exceedingly simple models can shed some light on the questions enumerated above. (author)

  10. From collisions to clusters

    DEFF Research Database (Denmark)

    Loukonen, Ville; Bork, Nicolai; Vehkamaki, Hanna

    2014-01-01

    -principles molecular dynamics collision simulations of (sulphuric acid)1(water)0, 1 + (dimethylamine) → (sulphuric acid)1(dimethylamine)1(water)0, 1 cluster formation processes. The simulations indicate that the sticking factor in the collisions is unity: the interaction between the molecules is strong enough...... control. As a consequence, the clusters show very dynamic ion pair structure, which differs from both the static structure optimisation calculations and the equilibrium first-principles molecular dynamics simulations. In some of the simulation runs, water mediates the proton transfer by acting as a proton...... to overcome the possible initial non-optimal collision orientations. No post-collisional cluster break up is observed. The reasons for the efficient clustering are (i) the proton transfer reaction which takes place in each of the collision simulations and (ii) the subsequent competition over the proton...

  11. Clustering of Emerging Flux

    Science.gov (United States)

    Ruzmaikin, A.

    1997-01-01

    Observations show that newly emerging flux tends to appear on the Solar surface at sites where there is flux already. This results in clustering of solar activity. Standard dynamo theories do not predict this effect.

  12. How Clusters Work

    Science.gov (United States)

    Technology innovation clusters are geographic concentrations of interconnected companies, universities, and other organizations with a focus on environmental technology. They play a key role in addressing the nation’s pressing environmental problems.

  13. Evolution of clustered storage

    CERN Multimedia

    CERN. Geneva; Van de Vyvre, Pierre

    2007-01-01

    The session actually featured two presentations: * Evolution of clustered storage by Lance Hukill, Quantum Corporation * ALICE DAQ - Usage of a Cluster-File System: Quantum StorNext by Pierre Vande Vyvre, CERN-PH the second one prepared at short notice by Pierre (thanks!) to present how the Quantum technologies are being used in the ALICE experiment. The abstract to Mr Hukill's follows. Clustered Storage is a technology that is driven by business and mission applications. The evolution of Clustered Storage solutions starts first at the alignment between End-users needs and Industry trends: * Push-and-Pull between managing for today versus planning for tomorrow * Breaking down the real business problems to the core applications * Commoditization of clients, servers, and target devices * Interchangeability, Interoperability, Remote Access, Centralized control * Oh, and yes, there is a budget and the "real world" to deal with This presentation will talk through these needs and trends, and then ask the question, ...

  14. Galaxy clusters and cosmology

    CERN Document Server

    White, S

    1994-01-01

    Galaxy clusters are the largest coherent objects in the Universe. It has been known since 1933 that their dynamical properties require either a modification of the theory of gravity, or the presence of a dominant component of unseen material of unknown nature. Clusters still provide the best laboratories for studying the amount and distribution of this dark matter relative to the material which can be observed directly -- the galaxies themselves and the hot, X-ray-emitting gas which lies between them. Imaging and spectroscopy of clusters by satellite-borne X-ray telescopes has greatly improved our knowledge of the structure and composition of this intergalactic medium. The results permit a number of new approaches to some fundamental cosmological questions, but current indications from the data are contradictory. The observed irregularity of real clusters seems to imply recent formation epochs which would require a universe with approximately the critical density. On the other hand, the large baryon fraction observ...

  15. Applications of Clustering

    Indian Academy of Sciences (India)

    Applications of Clustering. Biology – medical imaging, bioinformatics, ecology, phylogenies problems etc. Market research. Data Mining. Social Networks. Any problem measuring similarity/correlation. (dimensions represent different parameters)

  16. Clustering Game Behavior Data

    DEFF Research Database (Denmark)

    Bauckhage, C.; Drachen, Anders; Sifa, Rafet

    2015-01-01

    of the causes, the proliferation of behavioral data poses the problem of how to derive insights therefrom. Behavioral data sets can be large, time-dependent and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce the overall complexity of the data. Clustering...... and other techniques for player profiling and play style analysis have, therefore, become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise and an understanding of games is essential to evaluate results. With this paper, we address game data...... scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed and open problems in the context of game analytics...

  17. Clustering on Membranes

    DEFF Research Database (Denmark)

    Johannes, Ludger; Pezeshkian, Weria; Ipsen, John H

    2018-01-01

    Clustering of extracellular ligands and proteins on the plasma membrane is required to perform specific cellular functions, such as signaling and endocytosis. Attractive forces that originate in perturbations of the membrane's physical properties contribute to this clustering, in addition to direct...... protein-protein interactions. However, these membrane-mediated forces have not all been equally considered, despite their importance. In this review, we describe how line tension, lipid depletion, and membrane curvature contribute to membrane-mediated clustering. Additional attractive forces that arise...... from protein-induced perturbation of a membrane's fluctuations are also described. This review aims to provide a survey of the current understanding of membrane-mediated clustering and how this supports precise biological functions....

  18. Air void clustering.

    Science.gov (United States)

    2015-06-01

    Air void clustering around coarse aggregate in concrete has been identified as a potential source of low strengths in concrete mixes by several Departments of Transportation around the country. Research was carried out to (1) develop a quantitati...

  19. tclust: An R Package for a Trimming Approach to Cluster Analysis

    Directory of Open Access Journals (Sweden)

    2012-04-01

    Full Text Available Outlying data can heavily influence standard clustering methods. At the same time, clustering principles can be useful when robustifying statistical procedures. These two reasons motivate the development of feasible robust model-based clustering approaches. With this in mind, an R package for performing non-hierarchical robust clustering, called tclust, is presented here. Instead of trying to “fit” noisy data, a proportion α of the most outlying observations is trimmed. The tclust package efficiently handles different cluster scatter constraints. Graphical exploratory tools are also provided to help the user make sensible choices for the trimming proportion as well as the number of clusters to search for.
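
    tclust itself is an R package; purely as a rough illustration of the trimming idea in this listing's running example language, the Python sketch below implements a naive trimmed k-means in which a fraction alpha of the points farthest from their nearest centroid is discarded at each iteration. The function name and details are made up for the example and do not mirror the tclust API or its scatter constraints.

```python
# Naive trimmed k-means sketch: drop the alpha most outlying points each iteration.
import numpy as np

def trimmed_kmeans(X, k=3, alpha=0.1, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    keep = np.arange(len(X))
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        nearest = d.min(axis=1)
        # Trim: keep only the (1 - alpha) fraction of points closest to a centroid.
        cutoff = np.quantile(nearest, 1 - alpha)
        keep = np.where(nearest <= cutoff)[0]
        labels = d[keep].argmin(axis=1)
        for j in range(k):
            pts = X[keep][labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, keep

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5,
               np.random.uniform(-10, 15, (10, 2))])   # last block: outliers
centers, kept = trimmed_kmeans(X)
print(len(kept), "points kept of", len(X))
```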

  20. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker...

  1. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  2. BUILDING e-CLUSTERS

    OpenAIRE

    Milan Davidovic

    2013-01-01

    E-clusters are strategic alliances in the TIMES technology sector (Telecommunication, Information technology, Multimedia, Entertainment, Security), where products and processes are digitalized. They enable horizontal and vertical integration of small and medium companies and establish new added-value e-chains. E-clusters also build supply chains based on cooperative relationships, innovation, organizational knowledge and respect for intellectual property. As an innovative approach for economic p...

  3. Clusters and exotic processes

    International Nuclear Information System (INIS)

    Schiffer, J.P.

    1975-01-01

    An attempt is made to present some data which may be construed as indicating that perhaps clusters play a role in high energy and exotic pion or kaon interactions with complex (A much greater than 16) nuclei. Also an attempt is made to summarize some very recent experimental work on pion interactions with nuclei which may or may not in the end support a picture in which clusters play an important role. (U.S.)

  4. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.

  5. Cluster bomb ocular injuries.

    Science.gov (United States)

    Mansour, Ahmad M; Hamade, Haya; Ghaddar, Ayman; Mokadem, Ahmad Samih; El Hajj Ali, Mohamad; Awwad, Shady

    2012-01-01

    To present the visual outcomes and ocular sequelae of victims of cluster bombs. This retrospective, multicenter case series of ocular injury due to cluster bombs was conducted for 3 years after the war in South Lebanon (July 2006). Data were gathered from the reports to the Information Management System for Mine Action. There were 308 victims of cluster bombs; 36 individuals were killed, of whom 2 had received ocular lacerations; 272 individuals were injured, 18 of them with ocular injury. These 18 surviving individuals were assessed by the authors. Ocular injury occurred in 6.5% (20/308) of cluster bomb victims. Trauma to multiple organs occurred in 12 of 18 cases (67%) with ocular injury. Ocular findings included corneal or scleral lacerations (16 eyes), corneal foreign bodies (9 eyes), corneal decompensation (2 eyes), ruptured cataract (6 eyes), and intravitreal foreign bodies (10 eyes). The corneas of one patient had extreme attenuation of the endothelium. Ocular injury occurred in 6.5% of cluster bomb victims and 67% of the patients with ocular injury sustained trauma to multiple organs. Visual morbidity in civilians is an additional reason for a global ban on the use of cluster bombs.

  6. Determination of atomic cluster structure with cluster fusion algorithm

    DEFF Research Database (Denmark)

    Obolensky, Oleg I.; Solov'yov, Ilia; Solov'yov, Andrey V.

    2005-01-01

    We report an efficient scheme of global optimization, called cluster fusion algorithm, which has proved its reliability and high efficiency in determination of the structure of various atomic clusters.

  7. Authoring Tools

    Science.gov (United States)

    Treviranus, Jutta

    Authoring tools that are accessible and that enable authors to produce accessible Web content play a critical role in web accessibility. Widespread use of authoring tools that comply to the W3C Authoring Tool Accessibility Guidelines (ATAG) would ensure that even authors who are neither knowledgeable about nor particularly motivated to produce accessible content do so by default. The principles and techniques of ATAG are discussed. Some examples of accessible authoring tools are described including authoring tool content management components such as TinyMCE. Considerations for creating an accessible collaborative environment are also covered. As part of providing accessible content, the debate between system-based personal optimization and one universally accessible site configuration is presented. The issues and potential solutions to address the accessibility crisis presented by the advent of rich internet applications are outlined. This challenge must be met to ensure that a large segment of the population is able to participate in the move toward the web as a two-way communication mechanism.

  8. Attitude Estimation in Fractionated Spacecraft Cluster Systems

    Science.gov (United States)

    Hadaegh, Fred Y.; Blackmore, James C.

    2011-01-01

    Attitude estimation was examined in fractionated free-flying spacecraft. Instead of a single, monolithic spacecraft, a fractionated free-flying spacecraft uses multiple spacecraft modules. These modules are connected only through wireless communication links and, potentially, wireless power links. The key advantage of this concept is the ability to respond to uncertainty. For example, if a single spacecraft module in the cluster fails, a new one can be launched at a lower cost and risk than would be incurred with on-orbit servicing or replacement of the monolithic spacecraft. In order to create such a system, however, it is essential to know what the navigation capabilities of the fractionated system are as a function of the capabilities of the individual modules, and to have an algorithm that can perform estimation of the attitudes and relative positions of the modules with fractionated sensing capabilities. Looking specifically at fractionated attitude estimation with startrackers and optical relative attitude sensors, a set of mathematical tools has been developed that specify the set of sensors necessary to ensure that the attitude of the entire cluster ("cluster attitude") can be observed. Also developed was a navigation filter that can estimate the cluster attitude if these conditions are satisfied. Each module in the cluster may have either a startracker, a relative attitude sensor, or both. An extended Kalman filter can be used to estimate the attitude of all modules. A range of estimation performances can be achieved depending on the sensors used and the topology of the sensing network.

  9. Cluster dynamics at different cluster size and incident laser wavelengths

    International Nuclear Information System (INIS)

    Desai, Tara; Bernardinello, Andrea

    2002-01-01

    X-ray emission spectra from aluminum clusters of diameter ∼0.4 μm and gold clusters of diameter ∼1.25 μm are experimentally studied by irradiating the cluster foil targets with a 1.06 μm laser, 10 ns (FWHM), at an intensity of ∼10^12 W/cm^2. Aluminum clusters show different spectra compared to the bulk material, whereas gold clusters evolve towards bulk gold. Experimental data are analyzed on the basis of cluster dimension, laser wavelength and pulse duration. PIC simulations are performed to study the behavior of clusters at higher intensity, I ≥ 10^17 W/cm^2, for different sizes of the clusters irradiated at different laser wavelengths. Results indicate the dependence of cluster dynamics on cluster size and incident laser wavelength.

  10. Tool steels

    DEFF Research Database (Denmark)

    Højerslev, C.

    2001-01-01

    On designing a tool steel, its composition and heat treatment parameters are chosen to provide a hardened and tempered martensitic matrix in which carbides are evenly distributed. In this condition the matrix has an optimum combination of hardness and toughness, the primary carbides provide...... resistance against abrasive wear and secondary carbides (if any) increase the resistance against plastic deformation. Tool steels are alloyed with carbide-forming elements (typically vanadium, tungsten, molybdenum and chromium); furthermore, some steel types contain cobalt. Addition of alloying elements...... serves primarily two purposes: (i) to improve the hardenability and (ii) to provide harder and thermally more stable carbides than cementite. Assuming proper heat treatment, the properties of a tool steel depend on which alloying elements are added and their respective concentrations.

  11. GibbsCluster: unsupervised clustering and alignment of peptide sequences

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Alvarez, Bruno; Nielsen, Morten

    2017-01-01

    motif characterizing each cluster. Several parameters are available to customize cluster analysis, including adjustable penalties for small clusters and overlapping groups and a trash cluster to remove outliers. As an example application, we used the server to deconvolute multiple specificities in large......-scale peptidome data generated by mass spectrometry. The server is available at http://www.cbs.dtu.dk/services/GibbsCluster-2.0....

  12. Management Tools

    Science.gov (United States)

    1987-01-01

    Manugistics, Inc. (formerly AVYX, Inc.) has introduced a new programming language for IBM and IBM compatible computers called TREES-pls. It is a resource management tool originating from the space shuttle that can be used in such applications as scheduling, resource allocation, project control, information management, and artificial intelligence. Manugistics, Inc. was looking for a flexible tool that can be applied to many problems with minimal adaptation. Among the non-government markets are aerospace, other manufacturing, transportation, health care, food and beverage and professional services.

  13. HeinzelCluster: accelerated reconstruction for FORE and OSEM3D.

    Science.gov (United States)

    Vollmar, S; Michel, C; Treffert, J T; Newport, D F; Casey, M; Knöss, C; Wienhard, K; Liu, X; Defrise, M; Heiss, W D

    2002-08-07

    Using iterative three-dimensional (3D) reconstruction techniques for reconstruction of positron emission tomography (PET) is not feasible on most single-processor machines due to the excessive computing time needed, especially so for the large sinogram sizes of our high-resolution research tomograph (HRRT). In our first approach to speed up reconstruction time we transform the 3D scan into the format of a two-dimensional (2D) scan with sinograms that can be reconstructed independently using Fourier rebinning (FORE) and a fast 2D reconstruction method. On our dedicated reconstruction cluster (seven four-processor systems, Intel PIII@700 MHz, switched fast ethernet and Myrinet, Windows NT Server), we process these 2D sinograms in parallel. We have achieved a speedup > 23 using 26 processors and also compared results for different communication methods (RPC, Syngo, Myrinet GM). The other approach is to parallelize OSEM3D (implementation of C Michel), which has produced the best results for HRRT data so far and is more suitable for an adequate treatment of the sinogram gaps that result from the detector geometry of the HRRT. We have implemented two levels of parallelization for our dedicated cluster (a shared memory fine-grain level on each node utilizing all four processors and a coarse-grain level allowing for 15 nodes) reducing the time for one core iteration from over 7 h to about 35 min.
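
    The first approach in this record (FORE followed by independent 2D reconstructions) is naturally data-parallel. Purely as an illustration of that structure, and not of the authors' RPC/Syngo/Myrinet implementation, the sketch below farms out independent 2D "sinograms" to a pool of worker processes; the reconstruction step is a placeholder function, and all names and sizes are assumptions.

```python
# Sketch of the coarse-grain parallelism after Fourier rebinning (FORE):
# each rebinned 2D sinogram can be reconstructed independently, so a simple
# process pool suffices to spread them over the cores/nodes of a cluster.
import numpy as np
from multiprocessing import Pool

def reconstruct_2d(sinogram):
    # Placeholder for a real 2D reconstruction (e.g. filtered backprojection);
    # here we just return a dummy image tagged with the slice index.
    index, data = sinogram
    return index, np.zeros((128, 128)) + data.mean()

if __name__ == "__main__":
    # 64 fake 2D sinograms (angles x radial bins), indexed so results can be reordered.
    sinograms = [(i, np.random.rand(192, 128)) for i in range(64)]
    with Pool(processes=4) as pool:
        images = dict(pool.map(reconstruct_2d, sinograms))
    print(len(images), "slices reconstructed")
```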

  14. Cluster Implantation and Deposition Apparatus

    DEFF Research Database (Denmark)

    Hanif, Muhammad; Popok, Vladimir

    2015-01-01

    In the current report, a design and capabilities of a cluster implantation and deposition apparatus (CIDA) involving two different cluster sources are described. The clusters produced from gas precursors (Ar, N etc.) by PuCluS-2 can be used to study cluster ion implantation in order to develop...

  15. Subspace K-means clustering

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla

    2013-01-01

    To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the

  16. Projected coupled cluster theory.

    Science.gov (United States)

    Qiu, Yiheng; Henderson, Thomas M; Zhao, Jinmo; Scuseria, Gustavo E

    2017-08-14

    Coupled cluster theory is the method of choice for weakly correlated systems. But in the strongly correlated regime, it faces a symmetry dilemma, where it either completely fails to describe the system or has to artificially break certain symmetries. On the other hand, projected Hartree-Fock theory captures the essential physics of many kinds of strong correlations via symmetry breaking and restoration. In this work, we combine and try to retain the merits of these two methods by applying symmetry projection to broken symmetry coupled cluster wave functions. The non-orthogonal nature of states resulting from the application of symmetry projection operators furnishes particle-hole excitations to all orders, thus creating an obstacle for the exact evaluation of overlaps. Here we provide a solution via a disentanglement framework theory that can be approximated rigorously and systematically. Results of projected coupled cluster theory are presented for molecules and the Hubbard model, showing that spin projection significantly improves unrestricted coupled cluster theory while restoring good quantum numbers. The energy of projected coupled cluster theory reduces to the unprojected one in the thermodynamic limit, albeit at a much slower rate than projected Hartree-Fock.

  17. Globular Clusters - Guides to Galaxies

    CERN Document Server

    Richtler, Tom; Joint ESO-FONDAP Workshop on Globular Clusters

    2009-01-01

    The principal question of whether and how globular clusters can contribute to a better understanding of galaxy formation and evolution is perhaps the main driving force behind the overall endeavour of studying globular cluster systems. Naturally, this splits up into many individual problems. The objective of the Joint ESO-FONDAP Workshop on Globular Clusters - Guides to Galaxies was to bring together researchers, both observational and theoretical, to present and discuss the most recent results. Topics covered in these proceedings are: internal dynamics of globular clusters and interaction with host galaxies (tidal tails, evolution of cluster masses), accretion of globular clusters, detailed descriptions of nearby cluster systems, ultracompact dwarfs, formations of massive clusters in mergers and elsewhere, the ACS Virgo survey, galaxy formation and globular clusters, dynamics and kinematics of globular cluster systems and dark matter-related problems. With its wide coverage of the topic, this book constitute...

  18. Design tools

    Science.gov (United States)

    Anton TenWolde; Mark T. Bomberg

    2009-01-01

    Overall, despite the lack of exact input data, the use of design tools, including models, is much superior to simply following rules of thumb, and a moisture analysis should be standard procedure for any building envelope design. Exceptions can only be made for buildings in the same climate, similar occupancy, and similar envelope construction. This chapter...

  19. The structure of nearby clusters of galaxies Hierarchical clustering and an application to the Leo region

    CERN Document Server

    Materne, J

    1978-01-01

    A new method of classifying groups of galaxies, called hierarchical clustering, is presented as a tool for the investigation of nearby groups of galaxies. The method is free from model assumptions about the groups. The scaling of the different coordinates is necessary, and the level from which one accepts the groups as real has to be determined. Hierarchical clustering is applied to an unbiased sample of galaxies in the Leo region. Five distinct groups result which have reasonable physical properties, such as low crossing times and conservative mass-to-light ratios, and which follow a radial velocity-luminosity relation. Only 4 out of 39 galaxies were adopted as field galaxies. (27 refs).

  20. Comparison of two accelerators for Monte Carlo radiation transport calculations, Nvidia Tesla M2090 GPU and Intel Xeon Phi 5110p coprocessor: A case study for X-ray CT imaging dose calculation

    International Nuclear Information System (INIS)

    Liu, T.; Xu, X.G.; Carothers, C.D.

    2015-01-01

    Highlights: • A new Monte Carlo photon transport code ARCHER-CT for CT dose calculations is developed to execute on the GPU and coprocessor. • ARCHER-CT is verified against MCNP. • The GPU code on an Nvidia M2090 GPU is 5.15–5.81 times faster than the parallel CPU code on an Intel X5650 6-core CPU. • The coprocessor code on an Intel Xeon Phi 5110p coprocessor is 3.30–3.38 times faster than the CPU code. - Abstract: Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, Nvidia Tesla M2090 GPU and Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT we have developed for fast CT imaging dose calculation. The package contains three components, ARCHER-CT_CPU, ARCHER-CT_GPU and ARCHER-CT_COP, designed to be run on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms are included in the code to calculate absorbed dose to radiosensitive organs under user-specified scan protocols. The results from ARCHER agree well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It is found that all the code components are significantly faster than the parallel MCNPX run on 12 MPI processes, and that the GPU and coprocessor codes are 5.15–5.81 and 3.30–3.38 times faster than the parallel ARCHER-CT_CPU, respectively. The M2090 GPU performs better than the 5110p coprocessor in our specific test. Besides, the heterogeneous computation mode in which the CPU and the hardware accelerator work concurrently can increase the overall performance by 13–18%.

  1. Spanning Tree Based Attribute Clustering

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Jorge, Cordero Hernandez

    2009-01-01

    Attribute clustering has been previously employed to detect statistical dependence between subsets of variables. We propose a novel attribute clustering algorithm motivated by research of complex networks, called the Star Discovery algorithm. The algorithm partitions and indirectly discards...... inconsistent edges from a maximum spanning tree by starting appropriate initial modes, therefore generating stable clusters. It discovers sound clusters through simple graph operations and achieves significant computational savings. We compare the Star Discovery algorithm against earlier attribute clustering...
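
    The record gives only a high-level description of the Star Discovery algorithm. As a loose illustration of the general idea of clustering attributes via a maximum spanning tree over pairwise dependence scores, the sketch below builds such a tree with NetworkX from a mutual-information-like weight matrix and then splits it by removing its weakest edges; the splitting rule is a simplification assumed for the example, not the published algorithm.

```python
# Sketch: cluster attributes by building a maximum spanning tree over pairwise
# dependence weights and cutting its weakest edges (a simplification of the idea).
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_attrs = 8
# Symmetric "dependence" matrix standing in for pairwise mutual information.
W = rng.random((n_attrs, n_attrs))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

G = nx.Graph()
for i in range(n_attrs):
    for j in range(i + 1, n_attrs):
        G.add_edge(i, j, weight=W[i, j])

# The maximum spanning tree keeps only the strongest dependencies.
mst = nx.maximum_spanning_tree(G, weight="weight")

# Cut the two weakest tree edges to obtain three attribute clusters.
weakest = sorted(mst.edges(data=True), key=lambda e: e[2]["weight"])[:2]
mst.remove_edges_from([(u, v) for u, v, _ in weakest])
clusters = list(nx.connected_components(mst))
print(clusters)
```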

  2. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    Science.gov (United States)

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

    Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate allergen sensitization characteristics according to gender. The multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters. Each cluster had characteristic features. When compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, manifesting allergen similarity or co-exposure. Only the fungus cluster allergens tend to sensitize the female group more frequently than the male group.

  3. Exotic cluster structures on

    CERN Document Server

    Gekhtman, M; Vainshtein, A

    2017-01-01

    This is the second paper in the series of papers dedicated to the study of natural cluster structures in the rings of regular functions on simple complex Lie groups and Poisson-Lie structures compatible with these cluster structures. According to our main conjecture, each class in the Belavin-Drinfeld classification of Poisson-Lie structures on \\mathcal{G} corresponds to a cluster structure in \\mathcal{O}(\\mathcal{G}). The authors have shown before that this conjecture holds for any \\mathcal{G} in the case of the standard Poisson-Lie structure and for all Belavin-Drinfeld classes in SL_n, n<5. In this paper the authors establish it for the Cremmer-Gervais Poisson-Lie structure on SL_n, which is the least similar to the standard one.

  4. From superdeformation to clusters

    Energy Technology Data Exchange (ETDEWEB)

    Betts, R R [Argonne National Lab., IL (United States). Physics Div.

    1992-08-01

    Much of the discussion at the conference centred on superdeformed states and their study by precise gamma spectrometry. The author suggests that the study of superdeformation by fission fragments and by auto-scattering is of importance, and may become more important. He concludes that there exists clear evidence of shell effects at extreme deformation in light nuclei studied by fission or cluster decay. The connection between the deformed shell model and the multi-center shell model can be exploited to give insight into the cluster structure of these extremely deformed states, and also gives hope of a spectroscopy based on selection rules for cluster decay. A clear disadvantage at this stage is the inability to make this spectroscopy more quantitative through calculation of the decay widths. The introduction of a new generation of high segmentation, high resolution, particle arrays has and will have a major impact on this aspect of the study of highly deformed nuclei. 20 refs., 16 figs.

  5. Offshore Wind Farm Clusters - Towards new integrated Design Tool

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Réthoré, Pierre-Elouan; Peña, Alfredo

    In EERA DTOC, testing of existing wind farm wake models against four validation data sets from large offshore wind farms is carried out. This includes Horns Rev-1 in the North Sea, Lillgrund in the Baltic Sea, Roedsand-2 in the Baltic Sea and data from 10 large offshore wind farms in Northern Euro...

  6. MARKETING COMMUNICATION TO INDUSTRIAL CLUSTERS OF SLOVAK REPUBLIC

    Directory of Open Access Journals (Sweden)

    Erika Loučanová

    2013-12-01

    Full Text Available Currently, growing attention is being paid to the promotion and development of clusters, i.e. concentrations of businesses and other cooperating institutions in a sector and region. Given the globalization of the economy and sophisticated global communications technologies, the factor of geographical concentration might be expected to decline; however, experts highlight the importance of direct contact and of local and tacit knowledge. The aim of this paper is to analyze the marketing communication tools used in different clusters in Slovakia.

  7. Synchronization as Aggregation: Cluster Kinetics of Pulse-Coupled Oscillators.

    Science.gov (United States)

    O'Keeffe, Kevin P; Krapivsky, P L; Strogatz, Steven H

    2015-08-07

    We consider models of identical pulse-coupled oscillators with global interactions. Previous work showed that under certain conditions such systems always end up in sync, but did not quantify how small clusters of synchronized oscillators progressively coalesce into larger ones. Using tools from the study of aggregation phenomena, we obtain exact results for the time-dependent distribution of cluster sizes as the system evolves from disorder to synchrony.

  8. The mass-temperature relation for clusters of galaxies

    DEFF Research Database (Denmark)

    Hjorth, J.; Oukbir, J.; van Kampen, E.

    1998-01-01

    A tight mass-temperature relation, M(r)/r proportional to T_X, is expected in most cosmological models if clusters of galaxies are homologous and the intracluster gas is in global equilibrium with the dark matter. We here calibrate this relation using eight clusters with well-defined global tempe...... redshift, the relation represents a new tool for determination of cosmological parameters, notably the cosmological constant Lambda....
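
    As background only, the scaling quoted above is of the form that follows from the standard hydrostatic-equilibrium mass estimate for the intracluster gas; the textbook expression below is an assumption about the relation's origin, not this paper's specific calibration.

```latex
% Hydrostatic mass estimate for the intracluster gas (textbook form):
% at fixed logarithmic density and temperature gradients, M(<r)/r is
% proportional to the gas temperature, motivating M(r)/r \propto T_X.
M(<r) \;=\; -\,\frac{k_B\, T(r)\, r}{G\, \mu m_p}
\left( \frac{d\ln \rho_{\mathrm{gas}}}{d\ln r} + \frac{d\ln T}{d\ln r} \right)
```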

  9. Refractory chronic cluster headache

    DEFF Research Database (Denmark)

    Mitsikostas, Dimos D; Edvinsson, Lars; Jensen, Rigmor H

    2014-01-01

    Chronic cluster headache (CCH) often resists prophylactic pharmaceutical treatments, severely damaging patients' lives. In this rare but pragmatic situation escalation to invasive management is needed, but framing criteria are lacking. We aimed to reach a consensus on a definition of refractory CCH...... for clinical and research use. The preparation of the final consensus followed three stages: an internal one between the authors, a larger one among all European Headache Federation members, and finally an international one among all investigators that have published clinical studies on cluster headache in the last five years...

  10. I Cluster geografici

    Directory of Open Access Journals (Sweden)

    Maurizio Rosina

    2010-03-01

    Full Text Available Geographic Clusters. Over the past decade, public alphanumeric databases have been growing at an exceptional rate. Most of the data can be georeferenced, so it is possible to gain new knowledge from such databases. The contribution of this paper is two-fold. We first present a model of geographic clusters, which uses only geographic and functional data properties. The model is useful for processing huge amounts of public/government data, even with daily updates. After that, we merge the model into the framework GEOPOI (GEOcoding Points Of Interest), and show some graphic map results.

  12. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....
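
    As a rough sketch of this pipeline under stated assumptions (a Gaussian kernel affinity, scikit-learn's generic NMF, and toy data), the snippet below builds an affinity matrix from a non-parametric kernel and factorizes it to obtain soft cluster memberships; it illustrates the idea rather than the authors' exact decomposition or their cross-validated hyperparameter selection.

```python
# Sketch: kernel (affinity) matrix from a Gaussian density-style kernel, then a
# nonnegative factorization whose normalized factors act as soft memberships.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(4, 0.5, (30, 2))])

# Gaussian kernel affinity (the kernel width sigma is a tuning parameter).
sigma = 1.0
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
A = np.exp(-sq_dists / (2 * sigma ** 2))

# Factorize the affinity matrix; normalized rows of W act as class posteriors.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(A)
posteriors = W / W.sum(axis=1, keepdims=True)
print(posteriors.argmax(axis=1))
```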

  13. Android Malware Classification Using K-Means Clustering Algorithm

    Science.gov (United States)

    Hamid, Isredza Rahmi A.; Syafiqah Khalid, Nur; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Chai Wen, Chuah

    2017-08-01

    Malware is designed to gain access to or damage a computer system without the user's knowledge. Attackers also exploit malware to commit crime or fraud. This paper proposes an Android malware classification approach based on the K-Means clustering algorithm. We evaluate the proposed model in terms of accuracy using machine learning algorithms. Two datasets, VirusTotal and Malgenome, were selected to demonstrate the application of the K-Means clustering algorithm. We classify the Android malware into three clusters: ransomware, scareware and goodware. Nine features were considered for each type of dataset, namely Lock Detected, Text Detected, Text Score, Encryption Detected, Threat, Porn, Law, Copyright and Moneypak. We used IBM SPSS Statistics software for data classification and WEKA tools to evaluate the built clusters. The proposed K-Means clustering algorithm shows promising results with high accuracy when tested using the Random Forest algorithm.
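
    Reproducing the exact SPSS/WEKA workflow is outside the scope of this listing, but the overall recipe (cluster with K-Means into three groups, then check how well a supervised learner can recover those cluster labels) can be sketched in a few lines; the nine feature columns below are synthetic stand-ins for the Lock Detected, Text Detected, etc. indicators, so numbers will not match the paper.

```python
# Sketch: K-Means into 3 clusters, then use Random Forest cross-validated
# accuracy as a rough check of how separable the resulting clusters are.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# 300 samples x 9 synthetic score features standing in for the paper's
# Lock/Text/Encryption/Threat/Porn/Law/Copyright/Moneypak indicators.
X = rng.random((300, 9))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                         X, labels, cv=5)
print("mean accuracy:", scores.mean().round(3))
```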

  14. A Historical Approach to Clustering in Emerging Economies

    DEFF Research Database (Denmark)

    Giacomin, Valeria

    of external factors. Indeed, researchers have explained clusters as self-contained entities and reduced their success to local exceptionality. In contrast, emerging literature has shown that clusters are integrated into broader structures beyond their location and are rather building blocks of today’s global...... economy. The working paper goes on to present two historical cases from the global south to explain how clusters work as major tools for international business. Particularly in the developing world, multinationals have used clusters as platforms for channeling foreign investment, knowledge, and imported...... inputs. The study concludes by stressing the importance of using historical evidence and data to look at clusters as agglomerations of actors and companies operating not just at the local level but across broader global networks. In doing so, the historical perspective provides explanations lacking......

  15. Recommending the heterogeneous cluster type multi-processor system computing

    International Nuclear Information System (INIS)

    Iijima, Nobukazu

    2010-01-01

    A real-time reactor simulator had been developed by reusing equipment from the Musashi reactor, and improving its performance as a research tool became indispensable; the sampling rate was increased by introducing arithmetic units based on a multi-Digital Signal Processor (DSP) system (cluster). To realize heterogeneous cluster-type multi-processor computing, a combination of two kinds of Control Processors (CPs), the Cluster Control Processor (CCP) and the System Control Processor (SCP), was proposed, with a Large System Control Processor (LSCP) for hierarchical clustering if needed. The faster computing performance of this system was confirmed by simulation results for simultaneous execution of multiple jobs as well as pipeline processing between clusters, showing that the system makes effective use of the existing equipment and enhances cost performance. (T. Tanaka)
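
    The idea of pipeline processing between processor clusters can be illustrated, very loosely, in software: one pool of workers acts as the first cluster and streams its results to a second pool. This is only a conceptual sketch in Python, not the DSP hardware described above; all names and the placeholder arithmetic are assumptions.

        # Conceptual sketch: two worker pools connected in a pipeline, so the second
        # "cluster" processes block n while the first is already computing block n+1.
        from multiprocessing import Pool

        def stage_one(block):
            # placeholder for the first cluster's computation on one data block
            return [x * 2 for x in block]

        def stage_two(block):
            # placeholder for the second cluster's computation on the result
            return sum(block)

        if __name__ == "__main__":
            blocks = [list(range(i, i + 4)) for i in range(0, 20, 4)]
            with Pool(2) as cluster_a, Pool(2) as cluster_b:
                # imap keeps the stages streaming: cluster_b consumes results as
                # cluster_a produces them instead of waiting for the whole batch.
                intermediate = cluster_a.imap(stage_one, blocks)
                results = list(cluster_b.imap(stage_two, intermediate))
            print(results)   # one number per processed block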

  16. Search for Formation Criteria for Globular Cluster Systems

    Science.gov (United States)

    Nuritdinov, S. N.; Mirtadjieva, K. T.; Tadjibaev, I. U.

    2005-01-01

    Star cluster formation is a major mode of star formation in the extreme conditions of interacting galaxies and violent starbursts. By studying the ages and metallicities of young, metal-enhanced star clusters in mergers and merger remnants, we can learn about the violent star formation history of these galaxies and, eventually, about galaxy formation and evolution. We will present a new set of evolutionary synthesis models of our GALEV code, specially developed to account for the gaseous emission of presently forming star clusters, together with an advanced tool to compare large model grids with the multi-color broad-band observations now becoming available in large amounts. Such observations are an economic way to determine the parameters of young star clusters, as will be shown in the presentation. First results for newly born clusters in mergers and starburst galaxies are presented, compared to the well-studied old globulars, and interpreted in the framework of galaxy formation and evolution.
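
    Comparing a grid of evolutionary synthesis models with multi-color broad-band observations usually reduces to finding the grid point that minimizes a chi-square over the observed magnitudes. The sketch below illustrates that step only; the bands, grid values, and error bars are invented and have no connection to the actual GALEV models.

        # Minimal sketch: pick the (age, metallicity) model whose predicted broad-band
        # magnitudes best match an observed cluster, by minimum chi-square.
        import numpy as np

        bands = ["U", "B", "V", "I"]

        # Hypothetical model grid: (age in Myr, metallicity Z) -> predicted magnitudes.
        model_grid = {
            (10,   0.02):  np.array([17.1, 17.8, 17.9, 18.0]),
            (100,  0.02):  np.array([18.0, 18.2, 18.1, 17.9]),
            (1000, 0.004): np.array([19.5, 19.0, 18.6, 17.8]),
        }

        observed = np.array([18.1, 18.25, 18.05, 17.95])   # invented photometry
        errors   = np.array([0.10, 0.08,  0.08,  0.12])

        def chi_square(model_mags):
            return np.sum(((observed - model_mags) / errors) ** 2)

        best = min(model_grid, key=lambda params: chi_square(model_grid[params]))
        print(best)   # -> (100, 0.02), the grid point closest to the observations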

  17. Value, Cost, and Sharing: Open Issues in Constrained Clustering

    Science.gov (United States)

    Wagstaff, Kiri L.

    2006-01-01

    Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began with incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.
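
    A minimal sketch of clustering with pairwise constraints is given below, in the spirit of COP-KMeans: a point may only join the nearest centroid that violates none of its must-link or cannot-link constraints. The data, constraints, and implementation details are illustrative assumptions, not taken from the paper.

        # Minimal sketch of constrained (semi-supervised) k-means: assignments must
        # respect must-link and cannot-link pairs. Data and constraints are invented.
        import numpy as np

        def constrained_kmeans(X, k, must_link, cannot_link, n_iter=20, seed=0):
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), size=k, replace=False)]
            # constraints are symmetric: (i, j) also constrains (j, i)
            ml = must_link + [(j, i) for i, j in must_link]
            cl = cannot_link + [(j, i) for i, j in cannot_link]
            labels = np.full(len(X), -1)
            for _ in range(n_iter):
                labels[:] = -1
                for i, x in enumerate(X):
                    # try candidate clusters from nearest to farthest centroid
                    for c in np.argsort(np.linalg.norm(centroids - x, axis=1)):
                        ok = all(labels[j] == c for a, j in ml
                                 if a == i and labels[j] != -1) and \
                             all(labels[j] != c for a, j in cl
                                 if a == i and labels[j] != -1)
                        if ok:
                            labels[i] = c
                            break
                for c in range(k):
                    if np.any(labels == c):
                        centroids[c] = X[labels == c].mean(axis=0)
            return labels

        X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                      [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
        print(constrained_kmeans(X, k=2,
                                 must_link=[(0, 1)],      # 0 and 1 belong together
                                 cannot_link=[(2, 3)]))   # 2 and 3 must be separated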

  18. Multi-Optimisation Consensus Clustering

    Science.gov (United States)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
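
    A common baseline behind Consensus Clustering (aggregate many base clusterings through a co-association matrix, then cut it) can be sketched as follows. This illustrates the generic CC idea rather than the MOCC algorithm itself; the toy data and the use of a recent scikit-learn release are assumptions.

        # Minimal sketch of consensus clustering: repeated k-means runs vote through a
        # co-association matrix, which is then cut into the final consensus partition.
        import numpy as np
        from sklearn.cluster import KMeans, AgglomerativeClustering
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=90, centers=3, random_state=1)   # toy data (assumed)

        n_runs, k = 10, 3
        co_assoc = np.zeros((len(X), len(X)))
        for seed in range(n_runs):
            labels = KMeans(n_clusters=k, n_init=1, random_state=seed).fit_predict(X)
            co_assoc += (labels[:, None] == labels[None, :]).astype(float)
        co_assoc /= n_runs        # fraction of runs in which two points co-cluster

        # Final partition: hierarchical clustering on the consensus distance matrix.
        final = AgglomerativeClustering(
            n_clusters=k, metric="precomputed", linkage="average"
        ).fit_predict(1.0 - co_assoc)
        print(np.bincount(final)) # roughly 30 points per consensus cluster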

  19. Photochemistry in rare gas clusters

    International Nuclear Information System (INIS)

    Moeller, T.; Haeften, K. von; Pietrowski, R. von

    1999-01-01

    In this contribution, photochemical processes in pure rare gas clusters will be discussed. The relaxation dynamics of electronically excited He clusters is investigated with luminescence spectroscopy. After electronic excitation of He clusters, many sharp lines are observed in the visible and infrared spectral range, which can be attributed to He atoms and molecules desorbing from the cluster. It turns out that the desorption of electronically excited He atoms and molecules is an important decay channel. The findings for He clusters are compared with results for Ar clusters. While desorption of electronically excited He atoms is observed for all clusters containing up to several thousand atoms, a corresponding process in Ar clusters is only observed for very small clusters (N<10). (orig.)

  20. Photochemistry in rare gas clusters

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, T.; Haeften, K. von; Pietrowski, R. von [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Hamburger Synchrotronstrahlungslabor]; Laarman, T. [Universitaet Hamburg, II. Institut fuer Experimentalphysik, Luruper Chaussee 149, D-22761 Hamburg (Germany)]

    1999-12-01

    In this contribution, photochemical processes in pure rare gas clusters will be discussed. The relaxation dynamics of electronically excited He clusters is investigated with luminescence spectroscopy. After electronic excitation of He clusters, many sharp lines are observed in the visible and infrared spectral range, which can be attributed to He atoms and molecules desorbing from the cluster. It turns out that the desorption of electronically excited He atoms and molecules is an important decay channel. The findings for He clusters are compared with results for Ar clusters. While desorption of electronically excited He atoms is observed for all clusters containing up to several thousand atoms, a corresponding process in Ar clusters is only observed for very small clusters (N<10). (orig.)