WorldWideScience

Sample records for high performance cluster

  1. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    V. N. Adrov

    2012-07-01

    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent data as input and produce independent data as output; this independence comes from the nature of the algorithms, since images, stereopairs, or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie-point measurement, DTM calculation, orthophoto construction, mosaicking, and many other service operations in parallel using distributed calculations, reducing jobs that take several days to several hours. Modern trends in computer technology show growing numbers of CPU cores in workstations and rising local network speeds, and, as a result, falling prices for supercomputers and computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in digital photogrammetric workstations (DPW) is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry is typically limited LAN throughput and storage performance, since huge volumes of large raster images must be processed.
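
    The parallel pattern the record describes is the classic one of farming independent tiles out to worker processes. A minimal Python sketch under that assumption follows; `process_tile` is a hypothetical stand-in for any independent photogrammetric task (tie-point measurement, a DTM cell, an orthophoto patch), not part of any DPW product.

```python
from multiprocessing import Pool

def process_tile(tile):
    """Hypothetical stand-in for an independent photogrammetric task."""
    row, col, pixels = tile
    return (row, col, sum(pixels) / len(pixels))  # e.g. a mean over the tile

if __name__ == "__main__":
    # A fake 4x4 grid of tiles, each carrying a few pixel values.
    tiles = [(r, c, list(range(1, r + c + 2))) for r in range(4) for c in range(4)]
    with Pool() as pool:                          # one worker per CPU core
        results = pool.map(process_tile, tiles)   # tiles are fully independent
    print(len(results), "tiles processed")
```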

  2. Designing a High Performance Parallel Personal Cluster

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large-scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption and cooling, and the competition for resources, have been some of the reasons why the scientifi...

  3. Spiking neural networks on high performance computer clusters

    Chen, Chong; Taha, Tarek M.

    2011-09-01

    In this paper we examine the acceleration of two spiking neural network models on three clusters of multicore processors representing three categories of processors: x86, STI Cell, and NVIDIA GPGPUs. The x86 cluster utilized consists of 352 dual-core AMD Opterons, the Cell cluster consists of 320 Sony PlayStation 3s, while the GPGPU cluster contains 32 NVIDIA Tesla S1070 systems. The results indicate that the GPGPU platform dominates in performance compared to the Cell and x86 platforms examined. From a cost perspective, however, the GPGPU is more expensive per neuron/s of throughput. If the cost of GPGPUs goes down in the future, this platform will become very cost effective for these models.

  4. High-performance dynamic quantum clustering on graphics processors

    Wittek, Peter, E-mail: peterwittek@acm.org [Swedish School of Library and Information Science, University of Borås, Borås (Sweden)]

    2013-01-15

    Clustering methods in machine learning may benefit from borrowing metaphors from physics. Dynamic quantum clustering associates a Gaussian wave packet with the multidimensional data points and regards them as eigenfunctions of the Schrödinger equation. The clustering structure emerges by letting the system evolve, and the visual nature of the algorithm has been shown to be useful in a range of applications. Furthermore, the method only uses matrix operations, which readily lend themselves to parallelization. In this paper, we develop an implementation on graphics hardware and investigate how this approach can accelerate the computations. We achieve a speedup of up to two orders of magnitude over a multicore CPU implementation, which shows that quantum-like methods and acceleration by graphics processing units are highly relevant to machine learning.

  5. High-performance dynamic quantum clustering on graphics processors

    Wittek, Peter

    2013-01-01

    Clustering methods in machine learning may benefit from borrowing metaphors from physics. Dynamic quantum clustering associates a Gaussian wave packet with the multidimensional data points and regards them as eigenfunctions of the Schrödinger equation. The clustering structure emerges by letting the system evolve, and the visual nature of the algorithm has been shown to be useful in a range of applications. Furthermore, the method only uses matrix operations, which readily lend themselves to parallelization. In this paper, we develop an implementation on graphics hardware and investigate how this approach can accelerate the computations. We achieve a speedup of up to two orders of magnitude over a multicore CPU implementation, which shows that quantum-like methods and acceleration by graphics processing units are highly relevant to machine learning.
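
    A minimal numpy sketch of the quantum potential that drives the method, assuming the usual Gaussian Parzen wavefunction psi(x) = sum_i exp(-||x - x_i||^2 / 2 sigma^2) and the potential V = (sigma^2/2) * (laplacian psi)/psi from the Schrödinger equation; the evolution step that slides points downhill on V, and the GPU acceleration, are omitted.

```python
import numpy as np

def quantum_potential(points, X, sigma):
    """V(x) ~ (sigma^2/2) * laplacian(psi)/psi for a Gaussian Parzen psi.
    Up to an additive constant, its minima mark cluster centres."""
    d = X.shape[1]
    r2 = ((points[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    e = np.exp(-r2 / (2 * sigma**2))
    psi = e.sum(1)                                            # Parzen estimate
    return 0.5 * (e * (r2 / sigma**2 - d)).sum(1) / psi

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs as toy data.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
V = quantum_potential(X, X, sigma=0.5)
print("V is lowest near the blob centres:", V.min(), V.max())
```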

  6. The high performance cluster computing system for BES offline data analysis

    Sun Yongzhao; Xu Dong; Zhang Shaoqiang; Yang Ting

    2004-01-01

    A high performance cluster computing system (EPCfarm) used for BES offline data analysis is introduced. The setup and the characteristics of the hardware and software of EPCfarm are described. PBS, a queue management package, is also described, and the performance of EPCfarm is presented. (authors)

  7. Improving the Eco-Efficiency of High Performance Computing Clusters Using EECluster

    Alberto Cocaña-Fernández

    2016-03-01

    As data and supercomputing centres increase their performance to improve service quality and target more ambitious challenges every day, their carbon footprint also continues to grow, and has already reached the magnitude of the aviation industry. High power consumption is also becoming a significant economic bottleneck for the expansion of these infrastructures, owing to the unavailability of sufficient energy sources. A substantial part of the problem is caused by the current energy consumption of High Performance Computing (HPC) clusters. To alleviate this situation, we present in this work EECluster, a tool that integrates with multiple open-source Resource Management Systems to significantly reduce the carbon footprint of clusters by improving their energy efficiency. EECluster implements a dynamic power management mechanism based on Computational Intelligence techniques, learning a set of rules through multi-criteria evolutionary algorithms. This approach enables cluster operators to find the optimal balance between reduced cluster energy consumption, service quality, and the number of reconfigurations. Experimental studies using both synthetic and actual workloads from a real-world cluster support the adoption of this tool to reduce the carbon footprint of HPC clusters.

  8. Performance of space charge simulations using High Performance Computing (HPC) cluster

    Bartosik, Hannes; CERN. Geneva. ATS Department

    2017-01-01

    In 2016 a collaboration agreement between CERN and the Istituto Nazionale di Fisica Nucleare (INFN), through its Centro Nazionale Analisi Fotogrammi (CNAF, Bologna), was signed [1], which foresaw the purchase and installation at CNAF of a cluster of 20 nodes with 32 cores each, connected with InfiniBand, for use by CERN members to develop parallelized codes as well as conduct massive simulation campaigns with the already available parallelized tools. As outlined in [1], after the installation and setup of the first 12 nodes, the green light to proceed with the procurement and installation of the next 8 nodes can be given only after successfully passing an acceptance test based on two specific benchmark runs. This condition is necessary to consider the first batch of the cluster operational and compliant with the desired performance specifications. In this brief note, we report the results of the above-mentioned acceptance test.

  9. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. On the path towards exascale, new HPC runtime systems are also emerging in ways that differ from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of KVM on a Cray is very close to native. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
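
    The libvirt/QEMU management layer the record mentions has widely used Python bindings; the sketch below shows the flavour of a node-local provisioning query. It assumes the libvirt-python package and a reachable local QEMU/KVM daemon, and has nothing Cray-specific about it.

```python
import libvirt  # pip install libvirt-python; requires a running libvirtd

# Connect to the local QEMU/KVM hypervisor, as a compute-node agent might.
conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():   # every VM defined on this node
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```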

  10. How to build a high-performance compute cluster for the Grid

    Reinefeld, A

    2001-01-01

    The success of large-scale multi-national projects like the forthcoming analysis of the LHC particle collision data at CERN relies to a great extent on the ability to efficiently utilize computing and data-storage resources at geographically distributed sites. Currently, much effort is spent on the design of Grid management software (Datagrid, Globus, etc.), while the effective integration of computing nodes has been largely neglected up to now. This is the focus of our work. We present a framework for a high-performance cluster that can be used as a reliable computing node in the Grid. We outline the cluster architecture, the management of distributed data and the seamless integration of the cluster into the Grid environment. (11 refs).

  11. FPGA cluster for high-performance AO real-time control system

    Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.

    2006-06-01

    Whilst the high throughput and low latency requirements of the next generation AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long term solution with high throughput performance and excellent latency predictability. Moreover, FPGA devices have highly capable programmable interfacing, which leads to more highly integrated systems. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, the memory bandwidth is often the bottleneck, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, needs to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.

  12. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups from all improvements are significant: mcp improves cp performance by over 27x, msum improves md5sum performance by almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so they are easily used and are available for download as open source software at http://mutil.sourceforge.net.
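
    Split-file processing plus a hash tree is easy to sketch: hash fixed-size chunks of one file in parallel, then hash the concatenated chunk digests. This toy Python version only illustrates the idea; its root digest is not byte-compatible with a plain md5sum (mutil defines its own tree format).

```python
import hashlib
import os
import sys
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB leaves

def hash_chunk(path, offset):
    """Leaf of the hash tree: digest one fixed-size chunk of the file."""
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.md5(f.read(CHUNK)).digest()

def tree_md5(path):
    offsets = range(0, os.path.getsize(path), CHUNK)
    with ThreadPoolExecutor() as pool:            # chunks are hashed in parallel
        leaves = pool.map(lambda o: hash_chunk(path, o), offsets)
    return hashlib.md5(b"".join(leaves)).hexdigest()  # root of the tree

if __name__ == "__main__":
    print(tree_md5(sys.argv[1]), sys.argv[1])
```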

  13. Clustering at high redshifts

    Shaver, P.A.

    1986-01-01

    Evidence for clustering of and with high-redshift QSOs is discussed. QSOs of different redshifts show no clustering, but QSOs of similar redshifts appear to be clustered on a scale comparable to that of galaxies at the present epoch. In addition, spectroscopic studies of close pairs of QSOs indicate that QSOs are surrounded by a relatively high density of absorbing matter, possibly clusters of galaxies.

  14. Operational mesoscale atmospheric dispersion prediction using high performance parallel computing cluster for emergency response

    Srinivas, C.V.; Venkatesan, R.; Muralidharan, N.V.; Das, Someshwar; Dass, Hari; Eswara Kumar, P.

    2005-08-01

    An operational atmospheric dispersion prediction system has been implemented on a cluster supercomputer for online emergency response at the Kalpakkam nuclear site. The numerical system constitutes a parallel version of the nested-grid mesoscale meteorological model MM5 coupled to the random walk particle dispersion model FLEXPART. The system provides a 48 hour forecast of the local weather and of radioactive plume dispersion due to hypothetical airborne releases within a range of 100 km around the site. The parallel code was implemented on different cluster configurations, including distributed and shared memory systems. Results of MM5 runtime performance for a 1-day prediction are reported for all the machines available for testing. A 5-fold reduction in runtime is achieved using 9 dual-Xeon nodes (18 physical/36 logical processors) compared to a single-node sequential run. Based on these runtime results, a cluster computer facility with 9 dual-Xeon nodes has been commissioned at IGCAR for model operation. The runtime of a triple-nested-domain MM5 is about 4 h for a 24 h forecast. The system has been operated continuously for a few months and results were posted on the IMSc home page. Initial and periodic boundary condition data for MM5 are provided by NCMRWF, New Delhi; an alternative source is NCEP, USA. These two sources provide the input data to the operational models at different spatial and temporal resolutions, using different assimilation methods. A comparative study of forecasts from the two data sources is presented for present operational use. A slight improvement is noticed in rainfall, winds, geopotential heights and the vertical atmospheric structure when using NCEP data, probably because of its higher spatial and temporal resolution. (author)
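
    The reported 5x speedup on 18 physical processors implies a parallel efficiency of under a third, a useful sanity check when sizing such a cluster. A short worked example, normalizing the single-node sequential run to 1:

```python
speedup = 5.0        # reported runtime reduction on the cluster
cores = 18           # 9 nodes x 2 physical Xeon processors
print(f"parallel efficiency = {speedup / cores:.0%}")   # ~28%
```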

  15. Clustering high dimensional data

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...
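
    The loss of contrast the record describes is easy to demonstrate numerically: for uniform random points, the gap between the nearest and farthest neighbour of a query point shrinks, in relative terms, as dimensionality grows. A small numpy illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    X = rng.random((200, d))                 # 200 uniform points in [0,1]^d
    q = rng.random(d)                        # a query point
    dist = np.linalg.norm(X - q, axis=1)
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:5d}  relative contrast = {contrast:.3f}")  # shrinks with d
```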

  16. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Moon, Hongsik

    changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  17. The performance of a new Geant4 Bertini intra-nuclear cascade model in high throughput computing (HTC) cluster architecture

    Heikkinen, Aatos; Hektor, Andi; Karimäki, Veikko; Lindén, Tomas [Helsinki Univ., Institute of Physics (Finland)]

    2003-07-01

    We study the performance of a new Bertini intra-nuclear cascade model, implemented in the general detector simulation toolkit Geant4, on a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)

  18. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights:
    • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images.
    • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures.
    • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time.
    Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  19. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights:
    • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images.
    • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures.
    • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time.
    Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  20. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    David K Brown

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  1. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  2. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450
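
    The core pattern JMS wraps, handing a script from a web backend to the cluster's resource manager and reading back a job id, can be sketched in a few lines. The sketch below assumes a Slurm sbatch on the PATH purely for illustration; JMS itself integrates with resource managers such as Torque and adds workflow, versioning and interface layers on top.

```python
import subprocess
import tempfile
import textwrap

def submit(script_body):
    """Write a batch script and hand it to the resource manager."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write("#!/bin/bash\n" + textwrap.dedent(script_body))
        path = f.name
    out = subprocess.run(["sbatch", "--parsable", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()      # Slurm prints the numeric job id

if __name__ == "__main__":
    job_id = submit("""
        #SBATCH --ntasks=1
        echo hello from the cluster
    """)
    print("submitted job", job_id)
```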

  3. Latent Cluster Analysis of Instructional Practices Reported by High- and Low-performing Mathematics Teachers in Four Countries

    Cheng, Qiang; Hsu, Hsien-Yuan

    2017-01-01

    Using Trends in International Mathematics and Science Study (TIMSS) 2011 eighth-grade international dataset, this study explored the profiles of instructional practices reported by high- and low-performing mathematics teachers across the US, Finland, Korea, and Russia. Concepts of conceptual teaching and procedural teaching were used to frame the design of the current study. Latent cluster analysis was applied in the investigation of the profiles of mathematics teachers’ instructional practic...

  4. LATENT CLUSTER ANALYSIS OF INSTRUCTIONAL PRACTICES REPORTED BY HIGH- AND LOW-PERFORMING MATHEMATICS TEACHERS IN FOUR COUNTRIES

    Qiang Cheng

    2017-06-01

    Using the Trends in International Mathematics and Science Study (TIMSS) 2011 eighth-grade international dataset, this study explored the profiles of instructional practices reported by high- and low-performing mathematics teachers across the US, Finland, Korea, and Russia. Concepts of conceptual teaching and procedural teaching were used to frame the design of the current study. Latent cluster analysis was applied in the investigation of the profiles of mathematics teachers’ instructional practices across the four education systems. It was found that all mathematics teachers in the high- and low-performing groups used procedurally as well as conceptually oriented practices in their teaching. However, one group of high-performing mathematics teachers from the U.S. sample and all the high-performing teachers from Finland, Korea, and Russia showed more frequent use of conceptually oriented practices than their corresponding low-performing teachers. Another group of U.S. high-performing mathematics teachers showed a distinctive procedurally oriented pattern, which presented a rather different picture. Such results provide useful suggestions for practitioners and policy makers in their effort to improve mathematics teaching and learning in the US and in other countries as well. DOI: http://dx.doi.org/10.22342/jme.8.2.4066.115-132

  5. Heterogeneous Gpu&Cpu Cluster For High Performance Computing In Cryptography

    Michał Marks

    2012-01-01

    This paper addresses issues associated with distributed computing systems and the application of mixed GPU&CPU technology to data encryption and decryption algorithms. We describe a heterogeneous cluster HGCC formed by two types of nodes: Intel processor with NVIDIA graphics processing unit and AMD processor with AMD graphics processing unit (formerly ATI), and a novel software framework that hides the heterogeneity of our cluster and provides tools for solving complex scientific and engineering problems. Finally, we present the results of numerical experiments. The considered case study is concerned with parallel implementations of selected cryptanalysis algorithms. The main goal of the paper is to show the wide applicability of the GPU&CPU technology to large scale computation and data processing.

  6. Trajectories of Symptom Clusters, Performance Status, and Quality of Life During Concurrent Chemoradiotherapy in Patients With High-Grade Brain Cancers.

    Kim, Sang-Hee; Byun, Youngsoon

    Symptom clusters must be identified in patients with high-grade brain cancers for effective symptom management during cancer-related therapy. The aims of this study were to identify symptom clusters in patients with high-grade brain cancers and to determine the relationship of each cluster with the performance status and quality of life (QOL) during concurrent chemoradiotherapy (CCRT). Symptoms were assessed using the Memorial Symptom Assessment Scale, and the performance status was evaluated using the Karnofsky Performance Scale. Quality of life was assessed using the Functional Assessment of Cancer Therapy-General. This prospective longitudinal survey was conducted before CCRT and at 2 to 3 weeks and 4 to 6 weeks after the initiation of CCRT. A total of 51 patients with newly diagnosed primary malignant brain cancer were included. Six symptom clusters were identified, with 2 symptom clusters present at each time point (i.e., "negative emotion" and "neurocognitive" clusters before CCRT, "negative emotion and decreased vitality" and "gastrointestinal and decreased sensory" clusters at 2-3 weeks, and "body image and decreased vitality" and "gastrointestinal" clusters at 4-6 weeks). The symptom clusters at each time point demonstrated a significant relationship with the performance status or QOL. Differences were observed in symptom clusters in patients with high-grade brain cancers during CCRT. In addition, the symptom clusters were correlated with the performance status and QOL of patients, and these effects could change during CCRT. The results of this study will provide suggestions for interventions to treat or prevent symptom clusters in patients with high-grade brain cancer during CCRT.

  7. Acquisition of a High Performance Computer Cluster for Materials Research and Education

    2015-04-17

    The proposed cluster will help move beyond the paradigm of materials design based on time-consuming trial-and-error experiments and significantly reduce the time and labour required for materials development. It will also play an important role for education and ...

  8. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters.

    Dahlö, Martin; Scofield, Douglas G; Schaal, Wesley; Spjuth, Ola

    2018-05-01

    Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases.
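
    The "usage and efficiency metrics" the record mentions reduce, in their simplest plausible form, to the fraction of booked core-time a job actually used (the paper's exact definitions may differ). A minimal illustration:

```python
def job_efficiency(used_cpu_seconds, cores_booked, wall_seconds):
    """Fraction of the booked core-time a job actually consumed."""
    return used_cpu_seconds / (cores_booked * wall_seconds)

# A job that booked 4 cores for 2 h of wall time but kept ~1.5 cores busy:
print(f"{job_efficiency(1.5 * 7200, cores_booked=4, wall_seconds=7200):.0%}")  # 38%
```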

  9. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters

    2018-01-01

    Background: Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. Results: The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Conclusions: Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases. PMID:29659792

  10. Optimized Parallel Discrete Event Simulation (PDES) for High Performance Computing (HPC) Clusters

    Abu-Ghazaleh, Nael

    2005-01-01

    The aim of this project was to study the communication subsystem performance of state of the art optimistic simulator Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES...

  11. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  12. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    DeTar, Carleton

    2018-01-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  13. A high performance image processing platform based on CPU-GPU heterogeneous cluster with parallel image reconstructions for micro-CT

    Ding Yu; Qi Yujin; Zhang Xuezhu; Zhao Cuilan

    2011-01-01

    In this paper, we report the development of a high-performance image processing platform based on a CPU-GPU heterogeneous cluster. Currently, it consists of Dell Precision T7500 and HP XW8600 workstations with a parallel programming and runtime environment, using the message-passing interface (MPI) and CUDA (Compute Unified Device Architecture). We succeeded in developing parallel image processing techniques for 3D image reconstruction in X-ray micro-CT imaging. The results show that a GPU computes about 194 times faster than a single CPU, and the CPU-GPU cluster about 46 times faster than the CPU cluster. These meet the requirements of rapid 3D image reconstruction and real-time image display. In conclusion, the use of a CPU-GPU heterogeneous cluster is an effective way to build a high-performance image processing platform. (authors)

  14. A High Performance Multi-Core FPGA Implementation for 2D Pixel Clustering for the ATLAS Fast TracKer (FTK) Processor

    Sotiropoulou, C-L; The ATLAS collaboration; Beretta, M; Gkaitatzis, S; Kordas, K; Nikolaidis, S; Petridou, C; Volpi, G

    2014-01-01

    The high performance multi-core 2D pixel clustering FPGA implementation used for the input system of the ATLAS Fast TracKer (FTK) processor is presented. The input system for the FTK processor will receive data from the Pixel and micro-strip detector readout drivers (RODs) at 760 Gbps, the full rate of level-1 triggers. Clustering is required as a method to reduce the high rate of the received data before further processing, as well as to determine the cluster centroid for obtaining the best spatial measurement. Our implementation targets the pixel detectors and uses a 2D-clustering algorithm that takes advantage of a moving window technique to minimize the logic required for cluster identification. The design is fully generic and the cluster detection window size can be adjusted to optimize the cluster identification process. The implementation can be parallelized by instantiating multiple cores to identify different clusters independently, thus exploiting more FPGA resources. This flexibility mak...
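
    In software terms, the task the FPGA cores perform amounts to grouping adjacent fired pixels and reporting each group's centroid. A small, purely illustrative Python stand-in (connected-component grouping with 8-connectivity; the actual FTK moving-window logic is hardware-specific):

```python
from collections import deque

def cluster_pixels(hits):
    """Group 8-connected pixel hits; return each cluster's (row, col) centroid."""
    remaining, centroids = set(hits), []
    while remaining:
        seed = remaining.pop()
        queue, members = deque([seed]), [seed]
        while queue:                      # flood-fill one cluster
            r, c = queue.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in remaining:
                        remaining.remove(n)
                        queue.append(n)
                        members.append(n)
        rows, cols = zip(*members)
        centroids.append((sum(rows) / len(rows), sum(cols) / len(cols)))
    return centroids

hits = [(0, 0), (0, 1), (1, 1), (5, 5), (5, 6)]
print(cluster_pixels(hits))   # two clusters and their centroids
```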

  15. Designing a Scalable Fault Tolerance Model for High Performance Computational Chemistry: A Case Study with Coupled Cluster Perturbative Triples.

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2011-01-11

    In the past couple of decades, the massive computational power provided by the most modern supercomputers has resulted in simulation of higher-order computational chemistry methods, previously considered intractable. As system sizes continue to increase, the computational chemistry domain continues to escalate this trend using parallel computing with programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) programming models such as Global Arrays. The ever increasing scale of these supercomputers comes at the cost of reduced Mean Time Between Failures (MTBF), currently on the order of days and projected to be on the order of hours for upcoming extreme scale systems. While traditional disk-based checkpointing methods are ubiquitous for storing intermediate solutions, they suffer from the high overhead of writing and recovering from checkpoints. In practice, checkpointing itself often brings the system down. Clearly, methods beyond checkpointing are imperative to handling the aggravating issue of reducing MTBF. In this paper, we address this challenge by designing and implementing an efficient fault-tolerant version of the Coupled Cluster (CC) method within NWChem, using in-memory data redundancy. We present the challenges associated with our design, including an efficient data storage model, maintenance of at least one consistent data copy, and the recovery process. Our performance evaluation without faults shows that the current design exhibits a small overhead. In the presence of a simulated fault, the proposed design incurs negligible overhead in comparison to the state of the art implementation without faults.

  16. Highly reddened Ara cluster revisited

    Koornneef, J.

    1976-01-01

    New infrared observations in a highly reddened (A_V > 12 mag) cluster in Ara are presented and discussed. Some of the stars, previously classified as M-type supergiants, exhibit a strong infrared excess to be attributed to thermospheres.

  17. Latent Cluster Analysis of Instructional Practices Reported by High- and Low-Performing Mathematics Teachers in Four Countries

    Cheng, Qiang; Hsu, Hsien-Yuan

    2017-01-01

    Using Trends in International Mathematics and Science Study (TIMSS) 2011 eighth-grade international dataset, this study explored the profiles of instructional practices reported by high- and low-performing mathematics teachers across the US, Finland, Korea, and Russia. Concepts of conceptual teaching and procedural teaching were used to frame the…

  18. CTEx Beowulf cluster for MCNP performance

    Gonzaga, Roberto N.; Amorim, Aneuri S. de; Balthar, Mario Cesar V.

    2011-01-01

    This work is an introduction to the CTEx Nuclear Defense Department's Beowulf cluster. Building a Beowulf cluster is a complex learning process that greatly depends upon your hardware and software requirements. The feasibility and efficiency of performing MCNP5 calculations with a small, heterogeneous computing cluster built from personal computers (PCs) running Red Hat's Fedora Linux operating system are explored. The performance increases that may be expected with such clusters are estimated for cases that typify general radiation transport calculations. Our results show that the speed increase from additional slave PCs is nearly linear up to 10 processors. The precompiled parallel binary version of MCNP uses the Message-Passing Interface (MPI) protocol. The use of this precompiled parallel version of MCNP5 with the MPI protocol on a small, heterogeneous computing cluster built from Fedora Linux PCs is the subject of this work. (author)

  19. Comparing the performance of biomedical clustering methods

    Wiwie, Christian; Baumbach, Jan; Röttger, Richard

    2015-01-01

    Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene expression to protein domains. Performance was judged on the basis of 13 common cluster validity indices. We developed a clustering analysis platform, ClustEval (http://clusteval.mpi-inf.mpg.de), to promote streamlined evaluation, comparison and reproducibility of clustering results in the future. This allowed us to objectively evaluate the performance of all tools on all data sets with up to 1,000 different parameter sets each, resulting in a total of more than 4 million calculated cluster validity indices. We observed that there was no universal best performer, but on the basis of this wide...
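
    ClustEval automates this kind of comparison at scale; a toy version of the same idea, two methods scored with one validity index on one data set, can be written with scikit-learn (the assessed paper covers 13 methods, 24 data sets and 13 indices):

```python
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
methods = {
    "k-means": KMeans(n_clusters=4, n_init=10, random_state=0),
    "hierarchical": AgglomerativeClustering(n_clusters=4),
}
for name, model in methods.items():
    labels = model.fit_predict(X)
    # Silhouette is one of many cluster validity indices.
    print(f"{name:12s} silhouette = {silhouette_score(X, labels):.3f}")
```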

  20. Confined SnO2 quantum-dot clusters in graphene sheets as high-performance anodes for lithium-ion batteries.

    Zhu, Chengling; Zhu, Shenmin; Zhang, Kai; Hui, Zeyu; Pan, Hui; Chen, Zhixin; Li, Yao; Zhang, Di; Wang, Da-Wei

    2016-05-16

    Construction of metal oxide nanoparticles as anodes is of special interest for next-generation lithium-ion batteries. The main challenge lies in their rapid capacity fading, caused by structural degradation and instability of the solid-electrolyte interphase (SEI) layer during the charge/discharge process. Herein, we address these problems by constructing a novel-structured SnO2-based anode. The novel structure consists of mesoporous clusters of SnO2 quantum dots (SnO2 QDs), which are wrapped with reduced graphene oxide (RGO) sheets. The mesopores inside the clusters provide enough room for the expansion and contraction of SnO2 QDs during the charge/discharge process while the integral structure of the clusters is maintained. The wrapping RGO sheets act as an electrolyte barrier and conductive reinforcement. When used as an anode, the resultant composite (MQDC-SnO2/RGO) shows an extremely high reversible capacity of 924 mAh g⁻¹ after 200 cycles at 100 mA g⁻¹, superior capacity retention (96%), and outstanding rate performance (505 mAh g⁻¹ after 1000 cycles at 1000 mA g⁻¹). Importantly, the materials can be easily scaled up under mild conditions. Our findings pave a new way for the development of metal oxides towards enhanced lithium storage performance.

  1. Confined SnO2 quantum-dot clusters in graphene sheets as high-performance anodes for lithium-ion batteries

    Zhu, Chengling; Zhu, Shenmin; Zhang, Kai; Hui, Zeyu; Pan, Hui; Chen, Zhixin; Li, Yao; Zhang, Di; Wang, Da-Wei

    2016-01-01

    Construction of metal oxide nanoparticles as anodes is of special interest for next-generation lithium-ion batteries. The main challenge lies in their rapid capacity fading, caused by structural degradation and instability of the solid-electrolyte interphase (SEI) layer during the charge/discharge process. Herein, we address these problems by constructing a novel-structured SnO2-based anode. The novel structure consists of mesoporous clusters of SnO2 quantum dots (SnO2 QDs), which are wrapped with reduced graphene oxide (RGO) sheets. The mesopores inside the clusters provide enough room for the expansion and contraction of SnO2 QDs during the charge/discharge process while the integral structure of the clusters is maintained. The wrapping RGO sheets act as an electrolyte barrier and conductive reinforcement. When used as an anode, the resultant composite (MQDC-SnO2/RGO) shows an extremely high reversible capacity of 924 mAh g⁻¹ after 200 cycles at 100 mA g⁻¹, superior capacity retention (96%), and outstanding rate performance (505 mAh g⁻¹ after 1000 cycles at 1000 mA g⁻¹). Importantly, the materials can be easily scaled up under mild conditions. Our findings pave a new way for the development of metal oxides towards enhanced lithium storage performance. PMID:27181691

  2. First high energy hydrogen cluster beams

    Gaillard, M.J.; Genre, R.; Hadinger, G.; Martin, J.

    1993-03-01

    The hydrogen cluster accelerator of the Institut de Physique Nucléaire de Lyon (IPN Lyon) has been upgraded by adding a Variable Energy Post-accelerator of RFQ type (VERFQ). This operation has been performed in the framework of a collaboration between KfK Karlsruhe, IAP Frankfurt and IPN Lyon. The facility has been designed to deliver beams of mass-selected H_n^+ clusters, with n between 3 and 49, in the energy range 65-100 keV/u. For the first time, hydrogen clusters have been accelerated to energies as high as 2 MeV. This facility opens new fields for experiments which will greatly benefit from a velocity range never before available for such exotic projectiles. (author) 13 refs.; 1 fig

  3. Assessment of repeatability of composition of perfumed waters by high-performance liquid chromatography combined with numerical data analysis based on cluster analysis (HPLC UV/VIS - CA).

    Ruzik, L; Obarski, N; Papierz, A; Mojski, M

    2015-06-01

    High-performance liquid chromatography (HPLC) with UV/VIS spectrophotometric detection, combined with the chemometric method of cluster analysis (CA), was used to assess the repeatability of composition of nine types of perfumed waters. In addition, the chromatographic method of separating components of the perfumed waters under analysis was subjected to an optimization procedure. The chromatograms thus obtained were used as sources of data for the chemometric method of cluster analysis (CA). The result was a classification of a set comprising 39 perfumed water samples with a similar composition at a specified level of probability (level of agglomeration). A comparison of the classification with the manufacturer's declarations reveals a good degree of consistency and demonstrates similarity between samples in different classes. A combination of the chromatographic method with cluster analysis (HPLC UV/VIS - CA) makes it possible to quickly assess the repeatability of composition of perfumed waters at selected levels of probability. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  4. Performance Analysis of Cluster Formation in Wireless Sensor Networks

    Edgar Romo Montiel

    2017-12-01

    Clustered-based wireless sensor networks have been extensively used in the literature in order to achieve considerable energy consumption reductions. However, two aspects of such systems have been largely overlooked. Namely, the transmission probability used during the cluster formation phase and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to consider that sensor nodes in a clustered-based Wireless Sensor Network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes, specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different transmission probability schemes.

  5. Performance Analysis of Cluster Formation in Wireless Sensor Networks.

    Montiel, Edgar Romo; Rivero-Angeles, Mario E; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo

    2017-12-13

    Clustered-based wireless sensor networks have been extensively used in the literature in order to achieve considerable energy consumption reductions. However, two aspects of such systems have been largely overlooked. Namely, the transmission probability used during the cluster formation phase and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to consider that sensor nodes in a clustered-based Wireless Sensor Network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes, specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different transmission probability schemes.
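
    Of the two intelligent head-selection schemes named above, k-medoids is the more direct to sketch: the head of each cluster is the member node minimizing total distance to its cluster mates, which keeps radio links short. A naive PAM-style alternation in numpy, purely illustrative (parameters such as field size and k are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.random((60, 2)) * 100   # 60 sensor positions in a 100 m x 100 m field
k = 4                                 # desired number of clusters / cluster heads

# Naive k-medoids: assign to the nearest medoid, then re-pick each medoid as
# the member with minimum total distance to its cluster.
medoids = sensors[rng.choice(len(sensors), size=k, replace=False)].copy()
for _ in range(20):
    d = np.linalg.norm(sensors[:, None] - medoids[None], axis=2)
    labels = d.argmin(axis=1)
    for j in range(k):
        members = sensors[labels == j]
        if len(members) == 0:
            continue
        cost = np.linalg.norm(members[:, None] - members[None], axis=2).sum(axis=1)
        medoids[j] = members[cost.argmin()]

print("cluster heads at:", medoids)   # heads sit centrally within their clusters
```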

  6. High performance data transfer

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy-to-deploy-and-use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, between clusters we have achieved almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.

  7. Distributed MDSplus database performance with Linux clusters

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff is investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and more efficient use of network resources, resulting in improved support of the data analysis needs of the scientific staff.

  8. Clustering Professional Basketball Players by Performance

    Patel, Riki

    2017-01-01

    Basketball players are traditionally grouped into five distinct positions, but these designations are quickly becoming outdated. We attempt to reclassify players into new groups based on personal performance in the 2016-2017 NBA regular season. Two dimensionality reduction techniques, t-Distributed Stochastic Neighbor Embedding (t-SNE) and principal component analysis (PCA), were employed to reduce 18 classic metrics down to two dimensions for visualization. k-means clustering discovered four grou...
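
    A minimal sketch of the pipeline described above, with random numbers standing in for the 18 per-season metrics (the feature values and player count are assumptions): standardize, project to two dimensions with PCA, then run k-means. t-SNE could be swapped in for PCA via sklearn.manifold.TSNE.

```python
# Two-step reclassification sketch: PCA to 2-D, then k-means grouping.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
stats = rng.normal(size=(450, 18))           # ~450 players x 18 metrics (toy data)
z = StandardScaler().fit_transform(stats)    # scale features before PCA
xy = PCA(n_components=2).fit_transform(z)    # 2-D embedding for visualization
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(xy)
print(np.bincount(labels))                   # size of each "position" group
```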

  9. Simultaneous determination of 19 flavonoids in commercial trollflowers by using high-performance liquid chromatography and classification of samples by hierarchical clustering analysis.

    Song, Zhiling; Hashi, Yuki; Sun, Hongyang; Liang, Yi; Lan, Yuexiang; Wang, Hong; Chen, Shizhong

    2013-12-01

    The flowers of Trollius species, named Jin Lianhua in Chinese, are widely used traditional Chinese herbs with vital biological activity that have been used for several decades in China to treat upper respiratory infections, pharyngitis, tonsillitis, and bronchitis. We developed a rapid and reliable method for the simultaneous quantitative analysis of 19 flavonoids in trollflowers by using high-performance liquid chromatography (HPLC). Chromatography was performed on an Inertsil ODS-3 C18 column, with gradient elution of methanol-acetonitrile-water with 0.02% (v/v) formic acid. Content determination was used to evaluate the quality of commercial trollflowers from different regions in China, while three Trollius species (Trollius chinensis Bunge, Trollius ledebouri Reichb, Trollius buddae Schipcz) were explicitly distinguished by using hierarchical clustering analysis. The linearity, precision, accuracy, limit of detection, and limit of quantification were validated for the quantification method, which proved sensitive, accurate and reproducible, indicating that the proposed approach is applicable for the routine analysis and quality control of trollflowers. © 2013.
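
    The classification step lends itself to a compact sketch: treat each commercial batch as a vector of its 19 flavonoid contents and cut an agglomerative tree into species groups. The data, linkage method and number of groups below are assumptions for illustration only.

```python
# Hierarchical clustering of samples by flavonoid content (toy data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

contents = np.random.default_rng(0).random((11, 19))   # 11 batches x 19 flavonoids
Z = linkage(contents, method="ward")                   # agglomerative tree
species = fcluster(Z, t=3, criterion="maxclust")       # cut into 3 groups
print(species)                                         # putative species per batch
```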

  10. Performance criteria for graph clustering and Markov cluster experiments

    S. van Dongen

    2000-01-01

    In [1] a cluster algorithm for graphs was introduced, called the Markov cluster algorithm or MCL algorithm. The algorithm is based on simulation of (stochastic) flow in graphs by means of alternation of two operators, expansion and inflation. The results in [2] establish an intrinsic
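
    The two operators are simple enough to sketch directly. The minimal numpy version below (self-loops, inflation value and convergence tolerance are conventional choices, not taken from [1]) alternates expansion and inflation on a column-stochastic matrix built from the adjacency matrix.

```python
# Minimal MCL sketch: expansion = matrix power, inflation = entrywise
# power followed by column renormalisation.
import numpy as np

def mcl(adj, inflation=2.0, iters=30):
    M = adj.astype(float) + np.eye(len(adj))   # self-loops, a common MCL convention
    M /= M.sum(axis=0)                         # make columns stochastic
    for _ in range(iters):
        M = M @ M                              # expansion: flow spreads out
        M = M ** inflation                     # inflation: strong flow is strengthened
        M /= M.sum(axis=0)
    # read clusters from the attractor rows of the limit matrix
    clusters = {tuple(np.flatnonzero(r > 1e-6)) for r in M if r.max() > 1e-6}
    return sorted(clusters)

# a 3-node triangle attached by a single edge to a 2-node tail
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]])
print(mcl(adj))   # typically yields the groups (0, 1, 2) and (3, 4)
```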

  11. Clustering high dimensional data using RIA

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensionality data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measurement called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We note that it can obtain clusters easily and hence avoids the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.

  12. Dark matter phenomenology of high-speed galaxy cluster collisions

    Mishchenko, Yuriy [Izmir University of Economics, Faculty of Engineering, Izmir (Turkey); Ji, Chueng-Ryong [North Carolina State University, Department of Physics, Raleigh, NC (United States)

    2017-08-15

    We perform a general computational analysis of possible post-collision mass distributions in high-speed galaxy cluster collisions in the presence of self-interacting dark matter. Using this analysis, we show that astrophysically weakly self-interacting dark matter can impart subtle yet measurable features in the mass distributions of colliding galaxy clusters even without significant disruptions to the dark matter halos of the colliding galaxy clusters themselves. Most profound such evidence is found to reside in the tails of dark matter halos' distributions, in the space between the colliding galaxy clusters. Such features appear in our simulations as shells of scattered dark matter expanding in alignment with the outgoing original galaxy clusters, contributing significant densities to projected mass distributions at large distances from collision centers and large scattering angles of up to 90°. Our simulations indicate that as much as 20% of the total collision's mass may be deposited into such structures without noticeable disruptions to the main galaxy clusters. Such structures at large scattering angles are forbidden in purely gravitational high-speed galaxy cluster collisions. Convincing identification of such structures in real colliding galaxy clusters would be a clear indication of the self-interacting nature of dark matter. Our findings may offer an explanation for the ring-like dark matter feature recently identified in the long-range reconstructions of the mass distribution of the colliding galaxy cluster CL0024+017. (orig.)

  13. Dark matter phenomenology of high-speed galaxy cluster collisions

    Mishchenko, Yuriy; Ji, Chueng-Ryong

    2017-01-01

    We perform a general computational analysis of possible post-collision mass distributions in high-speed galaxy cluster collisions in the presence of self-interacting dark matter. Using this analysis, we show that astrophysically weakly self-interacting dark matter can impart subtle yet measurable features in the mass distributions of colliding galaxy clusters even without significant disruptions to the dark matter halos of the colliding galaxy clusters themselves. Most profound such evidence is found to reside in the tails of dark matter halos' distributions, in the space between the colliding galaxy clusters. Such features appear in our simulations as shells of scattered dark matter expanding in alignment with the outgoing original galaxy clusters, contributing significant densities to projected mass distributions at large distances from collision centers and large scattering angles of up to 90°. Our simulations indicate that as much as 20% of the total collision's mass may be deposited into such structures without noticeable disruptions to the main galaxy clusters. Such structures at large scattering angles are forbidden in purely gravitational high-speed galaxy cluster collisions. Convincing identification of such structures in real colliding galaxy clusters would be a clear indication of the self-interacting nature of dark matter. Our findings may offer an explanation for the ring-like dark matter feature recently identified in the long-range reconstructions of the mass distribution of the colliding galaxy cluster CL0024+017. (orig.)

  14. Performance clustering and incentives in the UK pension fund industry

    David Blake; Bruce N. Lehmann; Allan Timmermann

    2002-01-01

    Despite pension fund managers being largely unconstrained in their investment decisions, this paper reports evidence of clustering in the performance of a large cross-section of UK pension fund managers around the median fund manager. This finding is explained in terms of the predominance of a single investment style (balanced management), the fee structures and incentives operating in the UK pension fund industry to maximise relative rather than absolute performance, the high concentration i...

  15. Marketing Strategies Influences On SMEs Cluster Performance

    Satria Tirtayasa

    2017-06-01

    Full Text Available The contribution of large businesses to economic growth is very important, as is the role of Small and Medium Enterprises (SMEs), an industry believed to have withstood the effects of the economic crisis in Indonesia. SME owners are generally businesspeople with assets of at least Rp. 500 million who gather into a cooperative association in one area (an SME cluster). Within such an association, SME owners are expected to help each other, building internal and external networks to enhance their business performance. Internal networking corresponds to marketing strategies, for instance promotion, distribution, production and raw material supply, whose objective is to enhance SME performance. This study examines the relationship between these marketing strategies and SME performance. The marketing strategies consist of four dimensions: promotion, distribution, production and raw material supply. The SME business performance indicators are market share and profit margin. The sample comprises 70 SME owners spread across Medan Tembung District, East Medan District and Medan Perjuangan District, surveyed using questionnaires. The analysis method is multiple regression, and the results reveal that promotion has a positive and significant relationship with SME performance; distribution has a positive and significant relationship with SME performance; production has a positive and significant relationship with SME performance; and raw material supply has a positive and significant relationship with SME performance. Furthermore, the marketing strategies (promotion, distribution, production, raw material supply) taken simultaneously have a significant relationship with SME performance.

  16. Cluster analysis of received constellations for optical performance monitoring

    van Weerdenburg, J.J.A.; van Uden, R.; Sillekens, E.; de Waardt, H.; Koonen, A.M.J.; Okonkwo, C.

    2016-01-01

    We present performance monitoring based on centroid clustering to investigate constellation generation offsets. The tool allows flexibility in constellation generation tolerances by forwarding centroids to the demapper. The relation of fibre nonlinearities and singular value decomposition of intra-cluster
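
    An illustrative toy version of centroid clustering on a received constellation (not the authors' tool): k-means recovers the 16 centroids of a noisy 16-QAM signal, and each centroid's offset from its ideal grid point is the kind of generation offset such a monitor would report. Modulation order and noise level are invented.

```python
# Centroid clustering of noisy 16-QAM symbols with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ideal = np.array([complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)])
rx = rng.choice(ideal, 4000) + 0.15 * (rng.normal(size=4000) + 1j * rng.normal(size=4000))
pts = np.column_stack([rx.real, rx.imag])

km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(pts)
cents = km.cluster_centers_[:, 0] + 1j * km.cluster_centers_[:, 1]
# match each centroid to its nearest ideal point and report the offset
offsets = cents - ideal[np.argmin(np.abs(cents[:, None] - ideal[None, :]), axis=1)]
print("mean centroid offset:", np.abs(offsets).mean())
```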

  17. Performance Evaluation of Spectral Clustering Algorithm using Various Clustering Validity Indices

    M. T. Somashekara; D. Manjunatha

    2014-01-01

    In spite of the popularity of the spectral clustering algorithm, the evaluation procedures are still in a developmental stage. In this article, we have taken the benchmark IRIS dataset for performing a comparative study of twelve indices for evaluating the spectral clustering algorithm. The results of the spectral clustering technique were also compared with the k-means algorithm. The validity of the indices was also verified with accuracy and Normalized Mutual Information (NMI) score. Spectral clustering algo...
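
    For orientation, a minimal sketch of the benchmark setup the abstract describes: spectral clustering on the IRIS data, scored with NMI against the true labels plus one internal validity index. The parameter choices (affinity, number of clusters) are illustrative assumptions.

```python
# Spectral clustering on IRIS with an external (NMI) and an internal
# (silhouette) validity index.
from sklearn.datasets import load_iris
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score, silhouette_score

X, y = load_iris(return_X_y=True)
pred = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                          random_state=0).fit_predict(X)
print("NMI vs. true labels:", normalized_mutual_info_score(y, pred))
print("silhouette (internal index):", silhouette_score(X, pred))
```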

  18. Using iterative cluster merging with improved gap statistics to perform online phenotype discovery in the context of high-throughput RNAi screens

    Sun Youxian

    2008-06-01

    Full Text Available Background: The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e. restoring pre-defined biologically meaningful phenotypes, differentiating novel phenotypes from known ones and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results: Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, and then used as reference distribution in gap statistics. This method is broadly applicable to a number of different types of
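
    The phenotype-modelling core reduces to a short sketch: fit a GMM to feature vectors of known phenotypes and flag cells that are unlikely under every component as candidate novel phenotypes. Features, thresholds and component count below are invented for illustration; the paper's gap-statistic merging step is omitted.

```python
# GMM as a reference model for known phenotypes; low-likelihood cells
# are flagged as candidate novel phenotypes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
known = np.vstack([rng.normal(0, 1, (200, 5)),     # phenotype A features
                   rng.normal(4, 1, (200, 5))])    # phenotype B features
gmm = GaussianMixture(n_components=2, random_state=0).fit(known)

new_cells = rng.normal(10, 1, (20, 5))             # far from both phenotypes
known_scores = gmm.score_samples(known)
novel = gmm.score_samples(new_cells) < np.percentile(known_scores, 1)
print("flagged as novel:", int(novel.sum()), "of", len(new_cells))
```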

  19. Synthesis of Fe3O4 cluster microspheres/graphene aerogels composite as anode for high-performance lithium ion battery

    Zhou, Shuai; Zhou, Yu; Jiang, Wei; Guo, Huajun; Wang, Zhixing; Li, Xinhai

    2018-05-01

    Iron oxides are considered attractive electrode materials because of their capability for lithium storage, but their poor conductivity and large volume expansion lead to unsatisfactory cycling stability. We designed and synthesized a novel Fe3O4 cluster microspheres/graphene aerogels composite (Fe3O4/GAs), where Fe3O4 nanoparticles were assembled into cluster microspheres and then embedded in a 3D graphene aerogel framework. In the spheres, the sufficient free space between Fe3O4 nanoparticles can accommodate the volume change during the cycling process. The graphene aerogel works as a flexible and conductive matrix, which can not only significantly increase the mechanical stability, but also further improve the storage properties. The Fe3O4/GAs composite as an anode material exhibits high reversible capacity and excellent cycling performance for lithium-ion batteries (LIBs). A reversible capacity of 650 mAh g⁻¹ after 500 cycles at a current density of 1 A g⁻¹ can be maintained. The superior storage capabilities of the composite make it a potential anode material for LIBs.

  20. Improved Ant Colony Clustering Algorithm and Its Performance Study

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  1. High-dimensional cluster analysis with the Masked EM Algorithm

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  2. Cluster-based DBMS Management Tool with High-Availability

    Jae-Woo Chang

    2005-02-01

    Full Text Available Management tools for monitoring and managing cluster-based DBMSs have been little studied. We therefore design and implement a cluster-based DBMS management tool with high availability that monitors the status of nodes in a cluster system as well as the status of DBMS instances in a node. The tool enables users to recognize a single virtual system image and provides them with the status of all the nodes and resources in the system by using a graphical user interface (GUI). By using a load balancer, our management tool can increase the performance of a cluster-based DBMS and can overcome the limitations of existing parallel DBMSs.

  3. Performance Evaluation of Incremental K-means Clustering Algorithm

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

    The incremental K-means clustering algorithm was proposed and analysed in [Chakraborty and Nagwani, 2011]. It is an innovative approach applicable in periodically incremental environments that deal with bulk updates. In this paper the performance evaluation of this incremental K-means clustering algorithm is carried out using an air pollution database. The paper also describes a comparison of the performance evaluations of the existing K-means clustering and i...
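
    The incremental idea itself can be sketched in a few lines (this is a generic formulation, not necessarily the cited implementation): keep per-cluster running sums and counts so that each new batch updates the centroids without re-clustering historical data.

```python
# Incremental k-means sketch: centroids absorb periodic batches of new
# points via running sums/counts.
import numpy as np

class IncrementalKMeans:
    def __init__(self, centroids):
        self.c = np.asarray(centroids, float)
        self.sums = self.c.copy()              # treat seeds as one observation each
        self.counts = np.ones(len(self.c))

    def update(self, batch):
        # assign each new point to its nearest current centroid
        labels = np.argmin(((batch[:, None] - self.c[None]) ** 2).sum(-1), axis=1)
        for j in np.unique(labels):
            pts = batch[labels == j]
            self.sums[j] += pts.sum(0)
            self.counts[j] += len(pts)
            self.c[j] = self.sums[j] / self.counts[j]
        return labels

rng = np.random.default_rng(0)
ikm = IncrementalKMeans(rng.normal(size=(3, 2)))   # seed centroids
for period in range(5):                            # periodic bulk updates
    ikm.update(rng.normal(size=(100, 2)))
print(ikm.c)
```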

  4. Enhanced high-order harmonic generation from Argon-clusters

    Tao, Yin; Hagmeijer, Rob; Bastiaens, Hubertus M.J.; Goh, S.J.; van der Slot, P.J.M.; Biedron, S.; Milton, S.; Boller, Klaus J.

    2017-01-01

    High-order harmonic generation (HHG) in clusters is of high promise because clusters appear to offer an increased optical nonlinearity. We experimentally investigate HHG from Argon clusters in a supersonic gas jet that can generate monomer-cluster mixtures with varying atomic number density and

  5. Towards Urban High-technology Clusters: An International Comparison

    Júlia Bosch

    2012-01-01

    Full Text Available This paper presents the results of a comparative study of 23 urban or regional high-technology clusters (media, ICT, energy, biotechnology) all over the world, analyzing how they were created, how they are managed and how they operate, and the strategies followed to improve and excel in their fields of action. Special attention is given to issues related to descriptive aspects, R&D, performance of the clusters, location factors and incentives to attract companies. The empirical analysis applied to the identified clusters was done through a questionnaire sent to representatives of each cluster's management. When analyzing the data, the study combined quantitative and qualitative methods, depending on the information to be processed. The data collection was done through a selection of indicators chosen to cover the different elements that the cluster literature considers essential for developing a competitive economic cluster in urban regions. The main obstacle in carrying out this study was the heterogeneity and uneven quality of the available data. 22@Barcelona appears to be in a good position to compete with other excelling clusters, but it still needs to improve in areas such as financial supply for R&D and start-ups and coordination between the different actors involved in urban economic development. Our research also contributes to the discussion on the role of public institutions in cluster development policies. In the clusters studied here, especially in Barcelona, we have seen that a capable and resourceful public administration can determine the success of the cluster initiative.

  6. Communication Software Performance for Linux Clusters with Mesh Connections

    Jie Chen; William Watson

    2003-09-01

    Recent progress in copper-based commodity Gigabit Ethernet interconnects enables constructing clusters that achieve extremely high I/O bandwidth at low cost with mesh connections. However, the TCP/IP protocol stack cannot match the improved performance of Gigabit Ethernet networks, especially in the case of multiple interconnects on a single host. In this paper, we evaluate and compare the performance characteristics of TCP/IP and M-VIA software, which is an implementation of VIA. In particular, we focus on the performance of the software systems for a mesh communication architecture and demonstrate the feasibility of using multiple Gigabit Ethernet cards on one host to achieve aggregated bandwidth and latency that are not only better than what TCP provides but also compare favorably to some special-purpose high-speed networks. In addition, the implementation of a new M-VIA driver for one type of Gigabit Ethernet card is discussed.

  7. Electron acceleration via high contrast laser interacting with submicron clusters

    Zhang Lu; Chen Liming; Wang Weiming; Yan Wenchao; Yuan Dawei; Mao Jingyi; Wang Zhaohua; Liu Cheng; Shen Zhongwei; Li Yutong; Dong Quanli; Lu Xin; Ma Jinglong; Wei Zhiyi; Faenov, Anatoly; Pikuz, Tatiana; Li Dazhang; Sheng Zhengming; Zhang Jie

    2012-01-01

    We experimentally investigated electron acceleration from a submicron-size argon cluster-gas target irradiated by 100 fs, 10 TW laser pulses with high contrast. Electron beams are observed in the directions longitudinal and transverse to the laser propagation. The measured energy of the longitudinal electrons reaches 600 MeV and the charge of the electron beam in the transverse direction is more than 3 nC. A two-dimensional particle-in-cell simulation of the interaction has been performed and it shows an enhancement of electron charge by using the cluster-gas target.

  8. The Optical Resolution of Chiral Tetrahedron-type Clusters Containing SCoFeM (M = Mo or W) Using a High Performance Liquid Chromatography Chiral Stationary Phase

    2002-01-01

    Amylose tris(phenylcarbamate) chiral stationary phase (ATPC-CSP) was prepared and used for the optical resolution of clusters 1 and 2. n-Hexane/2-propanol (99/1, v/v) was found to be the most suitable mobile phase on the ATPC-CSP.

  9. Implementation and performance of the ATLAS pixel clustering neural networks

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2018-01-01

    The high particle densities produced by the Large Hadron Collider (LHC) mean that in the ATLAS pixel detector the clusters of deposited charge start to merge. A neural network-based approach is used to estimate the number of particles contributing to each cluster, and to accurately estimate the hit positions even in the presence of multiple particles. This talk thoroughly describes the algorithm and its implementation, and presents a set of benchmark performance measurements. The problem is most acute in the core of high-momentum jets, where the average separation between particles becomes comparable to the detector granularity. This is further complicated by the high number of interactions per bunch crossing. Both these issues will become worse as the Run 3 and HL-LHC programmes require analysis of higher and higher pT jets, while the interaction multiplicity rises. Future prospects in the context of LHC Run 3 and the upcoming ATLAS inner detector upgrade are also discussed.

  10. Performance of the cluster-jet target for PANDA

    Hergemoeller, Ann-Katrin; Bonaventura, Daniel; Grieser, Silke; Hetz, Benjamin; Koehler, Esperanza; Khoukaz, Alfons [Institut fuer Kernphysik, Westfaelische Wilhelms-Universitaet Muenster, 48149 Muenster (Germany)

    2016-07-01

    The success of storage ring experiments strongly depends on the choice of the target. A very appropriate internal target for such an experiment is a cluster-jet target, which will be the first target operated at the PANDA experiment at FAIR. In this kind of target the cluster beam is formed by the expansion of pre-cooled gases within a Laval nozzle and is prepared afterwards via two orifices, the skimmer and the collimator. The target prototype, operating successfully for years at the University of Muenster, routinely provides target thicknesses of more than 2 × 10¹⁵ atoms/cm² at a distance of 2.1 m behind the nozzle. Based on the results of the performance of the cluster target prototype, the final cluster-jet target source was designed and set into operation in Muenster as well. Besides the monitoring of the cluster beam itself and of its thickness with two different monitoring systems at this target, investigations of the cluster mass via Mie scattering will be performed. In this presentation an overview of the cluster target design, its performance and the Mie scattering method is presented and discussed.

  11. Confined SnO2 quantum-dot clusters in graphene sheets as high-performance anodes for lithium-ion batteries

    Zhu, Chengling; Zhu, Shenmin; Zhang, Kai; Hui, Zeyu; Pan, Hui; Chen, Zhixin; Li, Yao; Zhang, Di; Wang, Da-Wei

    2016-01-01

    Construction of metal oxide nanoparticles as anodes is of special interest for next-generation lithium-ion batteries. The main challenge lies in their rapid capacity fading caused by structural degradation and instability of the solid-electrolyte interphase (SEI) layer during the charge/discharge process. Herein, we address these problems by constructing a novel-structured SnO2-based anode. The novel structure consists of mesoporous clusters of SnO2 quantum dots (SnO2 QDs), which are wrapped with...

  12. GALAXY CLUSTERS AT HIGH REDSHIFT AND EVOLUTION OF BRIGHTEST CLUSTER GALAXIES

    Wen, Z. L.; Han, J. L.

    2011-01-01

    Identification of high-redshift clusters is important for studies of cosmology and cluster evolution. Using photometric redshifts of galaxies, we identify 631 clusters from the Canada-France-Hawaii Telescope (CFHT) wide field, 202 clusters from the CFHT deep field, 187 clusters from the Cosmic Evolution Survey (COSMOS) field, and 737 clusters from the Spitzer Wide-area InfraRed Extragalactic Survey (SWIRE) field. The redshifts of these clusters extend from 0.1 to beyond 1. The optical minus m_3.6μm colors of the BCGs are consistent with a stellar population synthesis model in which the BCGs are formed at redshift z_f ≥ 2 and evolved passively. The g′ − z′ and B − m_3.6μm colors of the BCGs at redshifts z > 0.8 are systematically bluer than the passive evolution model for galaxies formed at z_f ∼ 2, indicating star formation in high-redshift BCGs.

  13. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy…

  14. Academic Performance and Lifestyle Behaviors in Australian School Children: A Cluster Analysis.

    Dumuid, Dorothea; Olds, Timothy; Martín-Fernández, Josep-Antoni; Lewis, Lucy K; Cassidy, Leah; Maher, Carol

    2017-12-01

    Poor academic performance has been linked with particular lifestyle behaviors, such as unhealthy diet, short sleep duration, high screen time, and low physical activity. However, little is known about how lifestyle behavior patterns (or combinations of behaviors) contribute to children's academic performance. We aimed to compare academic performance across clusters of children with common lifestyle behavior patterns. We clustered participants (Australian children aged 9-11 years, n = 284) into four mutually exclusive groups of distinct lifestyle behavior patterns, using the following lifestyle behaviors as cluster inputs: light, moderate, and vigorous physical activity; sedentary behavior and sleep, derived from 24-hour accelerometry; self-reported screen time and diet. Differences in academic performance (measured by a nationally administered standardized test) were detected across the clusters, with scores being lowest in the Junk Food Screenies cluster (unhealthy diet/high screen time) and highest in the Sitters cluster (high nonscreen sedentary behavior/low physical activity). These findings suggest that reduction in screen time and an improved diet may contribute positively to academic performance. While children with high nonscreen sedentary time performed better academically in this study, they also accumulated low levels of physical activity. This warrants further investigation, given the known physical and mental benefits of physical activity.

  15. Performance of networks of artificial neurons: The role of clustering

    Kim, Beom Jun

    2004-01-01

    The performance of the Hopfield neural network model is numerically studied on various complex networks, such as the Watts-Strogatz network, the Barabasi-Albert network, and the neuronal network of Caenorhabditis elegans. Through the use of a systematic way of controlling the clustering coefficient, with the degree of each neuron kept unchanged, we find that the networks with the lower clustering exhibit much better performance. The results are discussed in the practical viewpoint of application, and the biological implications are also suggested
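
    A companion sketch for the network side of such an experiment: in the Watts-Strogatz model the rewiring probability p tunes the clustering coefficient while the mean degree stays essentially fixed, so the graph ensemble behind the study can be generated in a few lines (sizes and p values here are arbitrary).

```python
# Clustering coefficient of Watts-Strogatz networks as the rewiring
# probability p varies.
import networkx as nx

for p in (0.0, 0.1, 0.5, 1.0):
    g = nx.watts_strogatz_graph(n=1000, k=10, p=p, seed=42)
    print(f"p={p:.1f}  average clustering={nx.average_clustering(g):.3f}")
```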

  16. High Performance Marine Vessels

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from fast ferries to the latest high speed Navy craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMV craft and the differences between them, and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: focuses on technology at the aero-marine interface; covers the full range of high performance marine vessel concepts; explains the historical development of various HPMVs; and discusses ferries, racing and pleasure craft, as well as utility and military missions. High Performance Marine Vessels is an ideal book for student...

  17. Cluster Cooperation in Wireless-Powered Sensor Networks: Modeling and Performance Analysis

    Chao Zhang

    2017-09-01

    Full Text Available A wireless-powered sensor network (WPSN) consisting of one hybrid access point (HAP), a near cluster and the corresponding far cluster is investigated in this paper. The sensors are wireless-powered and transmit information by consuming energy harvested from the signal emitted by the HAP. Sensors are able to harvest energy as well as store it. We propose that if sensors in the near cluster do not have their own information to transmit, they can act as relays and help the sensors in the far cluster forward information to the HAP in an amplify-and-forward (AF) manner. We use a finite Markov chain to model the dynamic variation of the relay battery, and give a general analytical model for a WPSN with cluster cooperation. Through the model, we deduce a closed-form expression for the outage probability as the metric of this network. Finally, simulation results validate the design premise of this paper and the correctness of the theoretical analysis, and show how the parameters affect system performance. Moreover, the outage probability of sensors in the far cluster can be drastically reduced without sacrificing the performance of sensors in the near cluster if the transmit power of the HAP is fairly high. Furthermore, in terms of the outage performance of the far cluster, the proposed scheme significantly outperforms the direct transmission scheme without cooperation.

  18. Cluster Cooperation in Wireless-Powered Sensor Networks: Modeling and Performance Analysis.

    Zhang, Chao; Zhang, Pengcheng; Zhang, Weizhan

    2017-09-27

    A wireless-powered sensor network (WPSN) consisting of one hybrid access point (HAP), a near cluster and the corresponding far cluster is investigated in this paper. The sensors are wireless-powered and transmit information by consuming energy harvested from the signal emitted by the HAP. Sensors are able to harvest energy as well as store it. We propose that if sensors in the near cluster do not have their own information to transmit, they can act as relays and help the sensors in the far cluster forward information to the HAP in an amplify-and-forward (AF) manner. We use a finite Markov chain to model the dynamic variation of the relay battery, and give a general analytical model for a WPSN with cluster cooperation. Through the model, we deduce a closed-form expression for the outage probability as the metric of this network. Finally, simulation results validate the design premise of this paper and the correctness of the theoretical analysis, and show how the parameters affect system performance. Moreover, the outage probability of sensors in the far cluster can be drastically reduced without sacrificing the performance of sensors in the near cluster if the transmit power of the HAP is fairly high. Furthermore, in terms of the outage performance of the far cluster, the proposed scheme significantly outperforms the direct transmission scheme without cooperation.
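
    The battery model invites a toy reconstruction (all numbers assumed; the paper's actual chain has more states and transitions): a birth-death chain over battery levels whose stationary distribution yields the probability that the relay is depleted, one ingredient of an outage expression.

```python
# Toy finite Markov chain for a relay battery: charge one unit w.p. q,
# spend one unit w.p. s per slot; the stationary distribution gives the
# probability the relay is empty and cannot cooperate.
import numpy as np

B, q, s = 5, 0.6, 0.4                    # levels, harvest prob, spend prob
P = np.zeros((B + 1, B + 1))
for b in range(B + 1):
    up = q if b < B else 0.0             # cannot charge past full
    down = s if b > 0 else 0.0           # cannot spend when empty
    P[b, min(b + 1, B)] += up
    P[b, max(b - 1, 0)] += down
    P[b, b] += 1.0 - up - down           # otherwise stay put

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print("P(battery empty) =", pi[0])
```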

  19. High performance systems

    Vigil, M.B. [comp.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  20. Cluster-assembled overlayers and high-temperature superconductors

    Ohno, T.R.; Yang, Y.; Kroll, G.H.; Krause, K.; Schmidt, L.D.; Weaver, J.H.; Kimachi, Y.; Hidaka, Y.; Pan, S.H.; de Lozanne, A.L.

    1991-01-01

    X-ray photoemission results for interfaces prepared by cluster assembly with nanometer-size clusters deposited on high-Tc superconductors (HTSs) show a reduction in reactivity because atom interactions with the surface are replaced by cluster interactions. Results for conventional atom deposition show the formation of overlayer oxides that are related to oxygen depletion and disruption of the near-surface region of the HTSs. For cluster assembly of Cr and Cu, there is a very thin reacted region on single-crystal Bi₂Sr₂CaCu₂O₈. Reduced reactivity is observed for Cr cluster deposition on single-crystal YBa₂Cu₃O₇-based interfaces. There is no evidence of chemical modification of the surface for Ge and Au cluster assembly on Bi₂Sr₂CaCu₂O₈(100). The overlayer grown by Au cluster assembly on Bi₂Sr₂CaCu₂O₈ covers the surface at low temperature but roughening occurs upon warming to 300 K. Scanning-tunneling-microscopy results for the Au(cluster)/Bi₂Sr₂CaCu₂O₈ system warmed to 300 K show individual clusters that have coalesced into large clusters. These results offer insight into the role of surface energies and cluster interactions in determining the overlayer morphology. Transmission-electron-microscopy results for Cu cluster assembly on silica show isolated irregularly shaped clusters that do not interact at low coverage. Sintering and labyrinth formation are observed at intermediate coverage and, ultimately, a continuous film is achieved at high coverage. Silica surface wetting by Cu clusters demonstrates that dispersive forces are important for these small clusters.

  1. Performance comparison analysis library communication cluster system using merge sort

    Wulandari, D. A. R.; Ramadhan, M. E.

    2018-04-01

    Computing began with single processors; to increase computation speed, the use of multiple processors was introduced. This second paradigm is known as parallel computing, for example on a cluster. A cluster must have a communication protocol for processing; one such protocol is the Message Passing Interface (MPI). MPI has several library implementations, among them OpenMPI and MPICH2. The performance of a cluster machine depends on how well the performance characteristics of the communication library suit the characteristics of the problem, so this study aims to comparatively analyze the performance of these libraries in handling parallel computation. The case studies in this research are MPICH2 and OpenMPI, executing a sorting problem (merge sort) to gauge the performance of the cluster system. The research method is to implement OpenMPI and MPICH2 on a Linux-based cluster of five virtual computers and to analyze system performance under different test scenarios using three parameters: execution time, speedup and efficiency. The results of this study show that with each increase in data size, OpenMPI and MPICH2 tend to achieve higher average speedup and efficiency, but both decrease at large data sizes; an increased data size does not necessarily increase speedup and efficiency, for example at a data size of 100,000. The execution times of the two libraries differ; for example, at a data size of 1,000 the average execution time is 0.009721 with MPICH2 and 0.003895 with OpenMPI. OpenMPI can be customized to communication needs.
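
    A minimal mpi4py sketch of the benchmark's pattern, runnable under either MPICH or Open MPI (e.g. `mpiexec -n 4 python mergesort_mpi.py`); the data size and the root-side k-way merge are illustrative choices, not the paper's exact code.

```python
# Parallel merge sort pattern: scatter chunks, sort locally, gather and
# merge on rank 0.
from heapq import merge
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

data = np.random.default_rng(0).integers(0, 10**6, 100_000) if rank == 0 else None
chunk = comm.scatter(np.array_split(data, size) if rank == 0 else None, root=0)
chunk.sort()                              # local sort on every rank

pieces = comm.gather(chunk, root=0)
if rank == 0:
    result = list(merge(*pieces))         # k-way merge of the sorted chunks
    print("sorted:", result[:5], "...", result[-5:])
```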

  2. Evaluating Clustering in Subspace Projections of High Dimensional Data

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...

  3. Responsive design high performance

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  4. High Performance Macromolecular Material

    Forest, M

    2002-01-01

    In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  5. Comparison of wind mill cluster performance: A multicriteria approach

    Rajakumar, D.G.; Nagesha, N. [Visvesvaraya Technological Univ., Karnataka (India)

    2012-07-01

    Energy is a crucial input for the economic and social development of any nation. Both renewable and non-renewable energy contribute to meeting the total requirement of the economy. As an affordable and clean energy source, wind energy is amongst the world's fastest growing renewable energy forms. Though there are several wind-mill clusters producing energy in different geographical locations, evaluating their performance is a complex task and little literature is available in this area. Against this backdrop, an attempt is made in the current paper to estimate the performance of a wind-mill cluster through an index called the Cluster Performance Index (CPI), adopting a multi-criteria approach. The proposed CPI comprises four criteria, viz., Technical Performance Indicators (TePI), Economic Performance Indicators (EcPI), Environmental Performance Indicators (EnPI), and Sociological Performance Indicators (SoPI). Under each performance criterion a total of ten parameters are considered, with five subjective and five objective responses. The methodology is implemented by collecting empirical data from three wind-mill clusters located at Chitradurga, Davangere, and Gadag in the southern Indian state of Karnataka. In total, fifteen different stakeholders were consulted in each wind farm through a structured, researcher-administered questionnaire. Stakeholders included engineers working in wind farms, wind farm developers, government officials from the energy department and a few residents near the wind farms. The results of the study revealed that the Chitradurga wind farm performed much better, with a CPI of 45.267, as compared to the Gadag (CPI of 28.362) and Davangere (CPI of 19.040) wind farms. (Author)
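
    Aggregating such an index is straightforward to sketch; since the abstract does not spell out the weighting, the toy below simply averages the ten parameter scores within each criterion and combines the four criteria with equal (assumed) weights.

```python
# Toy Cluster Performance Index: per-criterion means combined with
# assumed equal weights (all scores invented).
import numpy as np

criteria = {                                # ten parameter scores per criterion
    "TePI": np.array([7, 6, 8, 5, 7, 6, 8, 7, 6, 7]),
    "EcPI": np.array([5, 4, 6, 5, 5, 6, 4, 5, 6, 5]),
    "EnPI": np.array([8, 7, 9, 8, 7, 8, 9, 8, 7, 8]),
    "SoPI": np.array([6, 5, 6, 7, 6, 5, 6, 7, 6, 6]),
}
weights = {k: 0.25 for k in criteria}       # equal weighting is an assumption
cpi = sum(weights[k] * v.mean() for k, v in criteria.items())
print(f"CPI = {cpi:.3f}")
```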

  6. Performance Analysis of Unsupervised Clustering Methods for Brain Tumor Segmentation

    Tushar H Jaware

    2013-10-01

    Full Text Available Medical image processing is a most challenging and emerging field of neuroscience. The ultimate goal of medical image analysis in brain MRI is to extract important clinical features that would improve methods of diagnosis and treatment of disease. This paper focuses on methods to detect and extract brain tumours from brain MR images. MATLAB is used to design a software tool for locating brain tumours, based on unsupervised clustering methods. The K-means clustering algorithm is implemented and tested on a database of 30 images. A performance evaluation of the unsupervised clustering methods is presented.
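
    A self-contained sketch of the idea (numpy only, synthetic image; the paper's feature set and initialisation are not specified here): 1-D k-means over pixel grey levels, keeping the brightest cluster as the candidate tumour mask.

```python
# Intensity-based k-means segmentation of a synthetic "MR slice".
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0.3, 0.05, (128, 128))               # background "tissue"
img[40:60, 50:75] = rng.normal(0.9, 0.05, (20, 25))   # bright synthetic "tumour"

x = img.ravel()
centers = np.quantile(x, [0.25, 0.50, 0.95])          # spread-out initial centres
for _ in range(20):                                   # plain Lloyd iterations in 1-D
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    centers = np.array([x[labels == k].mean() if np.any(labels == k) else centers[k]
                        for k in range(3)])
mask = (labels == centers.argmax()).reshape(img.shape)
print("tumour pixels found:", int(mask.sum()), "of", 20 * 25, "planted")
```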

  7. Farmer Performance under Competitive Pressure in Agro-cluster Regions

    Wardhana, D.; Ihle, R.; Heijman, W.J.M.

    2017-01-01

    Agro-clusters expose farmers to both positive and negative externalities. On the one hand, smallholder farmers in spatial proximity are likely to benefit from this concentration; on the other hand, they incur high competitive pressure from other neighboring farmers. We examine the link between

  8. On the performance limiting behavior of defect clusters in commercial silicon solar cells

    Sopori, B.L.; Chen, W.; Jones, K. [National Renewable Energy Lab., Golden, CO (United States); Gee, J. [Sandia National Labs., Albuquerque, NM (United States)

    1998-09-01

    The authors report the observation of defect clusters in high-quality, commercial silicon solar cell substrates. The nature of the defect clusters, their mechanism of formation, and the precipitation of metallic impurities at the defect clusters are discussed. This defect configuration influences the device performance in a unique way: it primarily degrades the voltage-related parameters. Network modeling is used to show that, in an N/P junction device, these regions act as shunts that dissipate power generated within the cell.

  9. Quantitative Analysis and Comparison of Four Major Flavonol Glycosides in the Leaves of Toona sinensis (A. Juss.) Roemer (Chinese Toon) from Various Origins by High-Performance Liquid Chromatography-Diode Array Detector and Hierarchical Clustering Analysis

    Sun, Xiaoxiang; Zhang, Liting; Cao, Yaqi; Gu, Qinying; Yang, Huan; Tam, James P.

    2016-01-01

    Background: Toona sinensis (A. Juss.) Roemer is an endemic species of Toona genus native to Asian area. Its dried leaves are applied in the treatment of many diseases; however, few investigations have been reported for the quantitative analysis and comparison of major bioactive flavonol glycosides in the leaves harvested from various origins. Objective: To quantitatively analyze four major flavonol glycosides including rutinoside, quercetin-3-O-β-D-glucoside, quercetin-3-O-α-L-rhamnoside, and kaempferol-3-O-α-L-rhamnoside in the leaves from different production sites and classify them according to the content of these glycosides. Materials and Methods: A high-performance liquid chromatography-diode array detector (HPLC-DAD) method for their simultaneous determination was developed and validated for linearity, precision, accuracy, stability, and repeatability. Moreover, the method established was then employed to explore the difference in the content of these four glycosides in raw materials. Finally, a hierarchical clustering analysis was performed to classify 11 voucher specimens. Results: The separation was performed on a Waters XBridge Shield RP18 column (150 mm × 4.6 mm, 3.5 μm) kept at 35°C, and acetonitrile and H2O containing 0.30% trifluoroacetic acid as mobile phase was driven at 1.0 mL/min during the analysis. Ten microliters of solution were injected and 254 nm was selected to monitor the separation. A strong linear relationship between the peak area and concentration of four analytes was observed. And, the method was also validated to be repeatable, stable, precise, and accurate. Conclusion: An efficient and reliable HPLC-DAD method was established and applied in the assays for the samples from 11 origins successfully. Moreover, the content of those flavonol glycosides varied much among different batches, and the flavonoids could be considered as biomarkers to control the quality of Chinese Toon. SUMMARY Four major flavonol glycosides in the leaves

  10. Clojure high performance programming

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code.This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  11. High Performance Concrete

    Traian Oneţ

    2009-01-01

    Full Text Available The paper presents the latest studies and research carried out in Cluj-Napoca on high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to examine the advantages and drawbacks of using each particular concrete type. Two concrete recipes are presented, namely one for the concrete used in rigid road pavements and another for self-compacting concrete.

  12. High performance polymeric foams

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods were used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy

  13. High performance conductometry

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  14. Danish High Performance Concretes

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between the Technical University of Denmark, several private companies, and Aalborg University… concretes, workability, ductility, and confinement problems.

  15. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    … Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy…

  16. High performance computing in Windows Azure cloud

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. Using virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  17. High-performance computing — an overview

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  18. Innovation performance and clusters : a dynamic capability perspective on regional technology clusters

    Röttmer, Nicole

    2009-01-01

    This research provides a novel, empirically tested, actionable theory of cluster innovativeness. Cluster innovativeness has long been a subject of research and resulting policy efforts. The cluster's endowment with assets, such as specialized labor, firms, research institutes, existing regional

  19. High-Performance Networking

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into what is known today as standard computer network communication. It will be followed by a far more technical part that uses the high performance computer network standards of the 1990s, with 1 Gbit/s systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols, as well as necessary details of the Wide Area Network (WAN) standards concerned, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  20. Performance Based Clustering for Benchmarking of Container Ports: an Application of Dea and Cluster Analysis Technique

    Jie Wu

    2010-12-01

    Full Text Available The operational performance of container ports has received more and more attention in both academic and practitioner circles, and the performance evaluation and process improvement of container ports have been the focus of several studies. In this paper, Data Envelopment Analysis (DEA), an effective tool for relative efficiency assessment, is utilized for measuring the performance and benchmarking of 77 world container ports in 2007. The approaches used in the current study consider four inputs (Capacity of Cargo Handling Machines, Number of Berths, Terminal Area and Storage Capacity) and a single output (Container Throughput). The results for the efficiency scores are analyzed, and a unique ordering of the ports based on average cross efficiency is provided; cluster analysis is also used to select more appropriate targets for poorly performing ports to use as benchmarks.
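
    The DEA building block is a small linear program. Below is a hedged sketch of input-oriented CCR efficiency for one port (one DMU) via the multiplier form, solved with scipy; the four inputs and single output mirror the study, but all numbers are invented.

```python
# Input-oriented CCR efficiency: maximise u.y_o subject to v.x_o = 1 and
# u.y_j - v.x_j <= 0 for every DMU j, with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[30, 5, 40, 20],    # inputs per port: machines, berths,
              [25, 4, 35, 18],    # terminal area, storage capacity
              [40, 8, 60, 35]])
Y = np.array([[120.0], [100.0], [150.0]])   # output: container throughput

def ccr_efficiency(o):
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([np.zeros(m), -Y[o]])          # maximise u . y_o
    A_ub = np.hstack([-X, Y])                          # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([X[o], np.zeros(s)])[None]   # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for o in range(len(X)):
    print(f"port {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```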

  1. A High-Order CFS Algorithm for Clustering Big Data

    Fanyu Bu

    2016-01-01

    Full Text Available With the development of the Internet of Everything, such as the Internet of Things, Internet of People, and Industrial Internet, big data is being generated. Clustering is a widely used technique for big data analytics and mining. However, most current algorithms are not effective at clustering heterogeneous data, which is prevalent in big data. In this paper, we propose a high-order CFS algorithm (HOCFS) to cluster heterogeneous data by combining the CFS clustering algorithm and the dropout deep learning model, whose functionality rests on three pillars: (i) an adaptive dropout deep learning model to learn features from each type of data, (ii) a feature tensor model to capture the correlations of heterogeneous data, and (iii) a tensor distance-based high-order CFS algorithm to cluster heterogeneous data. Furthermore, we verify our proposed algorithm on different datasets, by comparison with two other clustering schemes, that is, HOPCM and CFS. Results confirm the effectiveness of the proposed algorithm in clustering heterogeneous data.

  2. Centroid based clustering of high throughput sequencing reads based on n-mer counts.

    Solovyov, Alexander; Lipkin, W Ian

    2013-09-08

    Many problems in computational biology require alignment-free sequence comparisons. One of the common tasks involving sequence comparison is sequence clustering. Here we apply methods of alignment-free comparison (in particular, comparison using sequence composition) to the challenge of sequence clustering. We study several centroid-based algorithms for clustering sequences based on word counts. A study of their performance shows that using the k-means algorithm, with or without data whitening, is efficient from the computational point of view. A higher clustering accuracy can be achieved using the soft expectation maximization method, whereby each sequence is attributed to each cluster with a specific probability. We implement an open source tool for alignment-free clustering. It is publicly available from github: https://github.com/luscinius/afcluster. We show the utility of alignment-free sequence clustering for high throughput sequencing analysis despite its limitations. In particular, it allows one to perform assembly with reduced resources and a minimal loss of quality. The major factor affecting the performance of alignment-free read clustering is the length of the read.
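
    The representation is easy to reproduce: each read becomes a length-normalised n-mer count vector, which can then be clustered. The sketch below uses k-means as the baseline (the paper's soft-EM variant instead assigns probabilistic memberships); the k-mer length and reads are toy choices.

```python
# Alignment-free clustering of reads via k-mer composition vectors.
import itertools
import numpy as np
from sklearn.cluster import KMeans

K = 3
index = {"".join(p): i for i, p in enumerate(itertools.product("ACGT", repeat=K))}

def kmer_vector(read):
    v = np.zeros(len(index))
    for i in range(len(read) - K + 1):
        kmer = read[i:i + K]
        if kmer in index:                 # skip k-mers containing N, etc.
            v[index[kmer]] += 1
    return v / max(v.sum(), 1)            # length-normalised composition

reads = ["ACGTACGTACGT", "CGTACGTACGTA", "TTTTGGGGTTTT", "GGGGTTTTGGGG"]
V = np.array([kmer_vector(r) for r in reads])
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(V))
```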

  3. Benzoate-Induced High-Nuclearity Silver Thiolate Clusters.

    Su, Yan-Min; Liu, Wei; Wang, Zhi; Wang, Shu-Ao; Li, Yan-An; Yu, Fei; Zhao, Quan-Qin; Wang, Xing-Po; Tung, Chen-Ho; Sun, Di

    2018-04-03

    Compared with the well-known anion-template effects in shaping silver thiolate clusters, the influence of the organic ligands in the outer shell is still poorly understood. Herein, three new benzoate-functionalized high-nuclearity silver(I) thiolate clusters are isolated and characterized for the first time in the presence of diverse anion templates such as S²⁻, α-[Mo₅O₁₈]⁶⁻, and MoO₄²⁻. Single-crystal X-ray analysis reveals that the nuclearities of the three silver clusters (SD/Ag28, SD/Ag29, SD/Ag30) vary from 32 to 38 to 78, with co-capped tBuS⁻ and benzoate ligands on the surface. SD/Ag28 is a turtle-like cluster comprising a Ag₂₉ shell caging a Ag₃S₃ trigon in the center, whereas SD/Ag29 is a prolate Ag₃₈ sphere templated by the α-[Mo₅O₁₈]⁶⁻ anion. Upon changing from benzoate to methoxyl-substituted benzoate, SD/Ag30 is isolated as a very complicated core-shell spherical cluster composed of a Ag₅₇ shell and a vase-like Ag₂₁S₁₃ core. Four MoO₄²⁻ anions are arranged in a supertetrahedron and located in the interstice between the core and shell. Introduction of the bulky benzoate elaborately changes the nuclearity and arrangement of silver polygons on the shell of the silver clusters, which is exemplified by comparing SD/Ag28 with a known similar silver thiolate cluster. The three new clusters emit luminescence in the near-infrared (NIR) region and show different thermochromic luminescence properties. This work presents a flexible approach to synthetic studies of high-nuclearity silver clusters decorated by different benzoates, and structural modulations are also achieved. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. High performance sapphire windows

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal-stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will enable many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  5. Gas expulsion in highly substructured embedded star clusters

    Farias, J. P.; Fellhauer, M.; Smith, R.; Domínguez, R.; Dabringhausen, J.

    2018-06-01

    We investigate the response of initially substructured, young, embedded star clusters to the instantaneous expulsion of their natal gas. We introduce primordial substructure to the stars and the gas by simplistically modelling the star formation process so as to obtain a variety of substructure distributed within our modelled star-forming regions. We show that, by measuring the virial ratio of the stars alone (disregarding the gas completely), we can estimate how much mass a star cluster will retain after gas expulsion to within 10 per cent accuracy, no matter how complex the background structure of the gas is, and we present a simple analytical recipe describing this behaviour. We show that the evolution of the star cluster while still embedded in the natal gas, and the behaviour of the gas before being expelled, are crucial processes that affect the time-scale on which the cluster can evolve into a virialized spherical system. Embedded star clusters that have high levels of substructure are subvirial for longer times, enabling them to survive gas expulsion better than a virialized and spherical system. By using a more realistic treatment for the background gas than our previous studies, we find it very difficult to destroy the young clusters with instantaneous gas expulsion. We conclude that gas removal may not be the main culprit for the dissolution of young star clusters.
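
    The virial ratio the authors measure can be computed directly from stellar masses, positions, and velocities; a minimal N-body-style sketch in model units with G = 1 (the data are random placeholders):

      import numpy as np

      def virial_ratio(m, pos, vel, G=1.0):
          """Q = T/|U| for a star cluster; Q = 0.5 is virial equilibrium,
          Q < 0.5 is subvirial. m: (n,), pos and vel: (n, 3)."""
          T = 0.5 * np.sum(m * np.sum(vel ** 2, axis=1))   # total kinetic energy
          U = 0.0                                          # total potential energy
          for i in range(len(m) - 1):
              r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
              U -= G * np.sum(m[i] * m[i + 1:] / r)
          return T / abs(U)

      rng = np.random.default_rng(1)
      m = np.full(100, 1.0 / 100)                          # equal-mass stars
      pos = rng.normal(size=(100, 3))
      vel = 0.3 * rng.normal(size=(100, 3))
      print(virial_ratio(m, pos, vel))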

  6. Effects of Cluster Location on Human Performance on the Traveling Salesperson Problem

    MacGregor, James N.

    2013-01-01

    Most models of human performance on the traveling salesperson problem involve clustering of nodes, but few empirical studies have examined effects of clustering in the stimulus array. A recent exception varied the degree of clustering and concluded that the more clustered a stimulus array, the easier a TSP is to solve (Dry, Preiss, & Wagemans,…

  7. Cluster size matters: Size-driven performance of subnanometer clusters in catalysis, electrocatalysis and Li-air batteries

    Vajda, Stefan

    2015-03-01

    This paper discusses the strongly size-dependent performance of subnanometer-cluster-based catalysts in (1) heterogeneous catalysis, (2) electrocatalysis, and (3) Li-air batteries. The experimental studies are based on (I) fabrication of ultrasmall clusters with atomic-precision control of particle size and their deposition on oxide- and carbon-based supports; (II) tests of performance; (III) in situ and ex situ X-ray characterization of cluster size, shape, and oxidation state; and (IV) electron microscopies. Heterogeneous catalysis: the pronounced effect of cluster size and support on the performance of the catalyst (catalyst activity and the yield of C_n products) is illustrated with the example of nickel and cobalt clusters in the Fischer-Tropsch reaction. Electrocatalysis: the study of the oxygen evolution reaction (OER) on size-selected palladium clusters supported on ultrananocrystalline diamond (UNCD) shows pronounced size effects. While Pd₄ clusters show no reaction, Pd₆ and Pd₁₇ clusters are among the most active catalysts known (in terms of turnover rate per Pd atom). The system (soft-landed Pd₄, Pd₆, or Pd₁₇ clusters on a UNCD-coated Si electrode) shows stable electrochemical potentials over several cycles, and characterization of the electrodes shows no evidence for evolution or dissolution of either the support or the clusters. Theoretical calculations suggest that this striking difference may be a demonstration that bridging Pd-Pd sites, which are only present in three-dimensional clusters, are active for the oxygen evolution reaction in Pd₆O₆. Li-air batteries: the studies show that sub-nm silver clusters have a dramatic size-dependent effect on the lowering of the overpotential, the charge capacity, the morphology of the discharge products, and the morphology of the nm-size building blocks of the discharge products. The results suggest that by precise control of the active surface sites on the cathode, the performance of Li-air cells can be significantly improved.

  8. Performance assessment of the SIMFAP parallel cluster at IFIN-HH Bucharest

    Adam, Gh.; Adam, S.; Ayriyan, A.; Dushanov, E.; Hayryan, E.; Korenkov, V.; Lutsenko, A.; Mitsyn, V.; Sapozhnikova, T.; Sapozhnikov, A.; Streltsova, O.; Buzatu, F.; Dulea, M.; Vasile, I.; Sima, A.; Visan, C.; Busa, J.; Pokorny, I.

    2008-01-01

    Performance assessment and case-study outputs of the parallel SIMFAP cluster at IFIN-HH Bucharest point to its effective and reliable operation. A comparison with results from the supercomputing system at LIT-JINR Dubna adds insight into resource allocation for problem solving by parallel computing. The solution of models requiring very large numbers of knots in the discretization mesh demands migration to high performance computing based on parallel cluster architectures. With the acquisition of ready-to-use parallel computing facilities beyond limited budgetary resources, the solution at IFIN-HH was to buy the hardware and the inter-processor network and to implement in-house the open software for both the operating system and the parallel computing standard. The present paper reports the successful solution of these tasks. The implementation of the well-known HPL (High Performance LINPACK) benchmark points to the effective and reliable operation of the cluster. The comparison of HPL outputs obtained on parallel clusters of different magnitudes shows that there is an optimum range of the order N of the linear algebraic system over which a given parallel cluster provides optimum parallel solutions. For the SIMFAP cluster, this range can be inferred to correspond to about 1–2 × 10⁴ linear algebraic equations. For an algorithm of polynomial complexity N^α, the task sharing among p processors within a parallel solution mainly follows an (N/p)^α behaviour at peak performance. Thus, while the problem complexity remains the same, a substantial decrease of the coefficient of the leading order of the polynomial complexity is achieved. (authors)
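
    The quoted scaling can be read as a toy cost model: if the serial work grows as c*N**alpha, the authors observe that per-processor work under good task sharing behaves like c*(N/p)**alpha. A sketch with purely illustrative constants:

      def parallel_time(N, p, alpha=3.0, c=1e-9):
          """Toy model of the (N/p)**alpha behaviour described above;
          c and alpha are illustrative, not fitted to the SIMFAP data."""
          return c * (N / p) ** alpha

      for p in (1, 4, 16, 64):
          print(p, parallel_time(2e4, p))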

  9. Innovation performance and clusters: a dynamic capability perspective on regional technology clusters

    Röttmer, Nicole

    2009-01-01

    This research provides a novel, empirically tested, actionable theory of cluster innovativeness. Cluster innovativeness has long been the subject of research and resulting policy efforts. The cluster's endowment with assets, such as specialized labor, firms, research institutes, existing regional networks and a specific culture, is recognized, among others, as a source of innovativeness. While the asset structure of clusters has been subject to a variety of research efforts, the evidence on the...

  10. Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster

    Allada, Veerendra; Benjegerdes, Troy; Bode, Brett

    2009-08-31

    Commodity clusters augmented with application accelerators are evolving into competitive high performance computing systems. The Graphics Processing Unit (GPU), with its very high arithmetic density and performance-per-price ratio, is a good platform for accelerating scientific applications. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementations of the Basic Linear Algebra Subprograms (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, the authors study the performance of the memory copies and GEMM subroutines that are critical for porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single and double precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results are compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.
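
    The figure of merit in such GEMM benchmarks is the sustained FLOP rate: an n-by-n matrix multiply performs about 2*n**3 floating-point operations. A minimal host-side sketch of the measurement (here against NumPy's BLAS backend; the paper's CUBLAS/MKL comparison follows the same arithmetic):

      import time
      import numpy as np

      n = 2048
      A = np.random.rand(n, n)
      B = np.random.rand(n, n)

      t0 = time.perf_counter()
      C = A @ B                          # DGEMM via NumPy's BLAS backend
      dt = time.perf_counter() - t0

      flops = 2 * n ** 3                 # multiply-adds in an n x n x n GEMM
      print(f"{flops / dt / 1e9:.1f} GFLOP/s in {dt * 1e3:.1f} ms")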

  12. Direct growth of metal-organic frameworks thin film arrays on glassy carbon electrode based on rapid conversion step mediated by copper clusters and hydroxide nanotubes for fabrication of a high performance non-enzymatic glucose sensing platform.

    Shahrokhian, Saeed; Khaki Sanati, Elnaz; Hosseini, Hadi

    2018-07-30

    The direct growth of self-supported metal-organic framework (MOF) thin films can be considered an effective strategy for fabricating advanced modified electrodes for sensor and biosensor applications. However, most fabricated MOF-based sensors suffer from drawbacks such as time-consuming MOF synthesis and electrode fabrication, the need for a binder or an additive layer, the need for expensive equipment, and the use of hazardous solvents. Here, a novel free-standing MOF-based modified electrode was fabricated by rapid direct growth of MOFs on the surface of a glassy carbon electrode (GCE). In this method, direct growth of the MOFs occurred through the formation of vertically aligned arrays of Cu clusters and Cu(OH)₂ nanotubes, which act as both a mediator and a position-fixing factor for the rapid formation of self-supported MOFs on the GCE surface. The effects of both chemically and electrochemically formed Cu(OH)₂ nanotubes on the morphology and electrochemical performance of the prepared MOFs were investigated. Owing to the unique properties of the prepared MOF thin-film electrode, such as its uniform and vertically aligned structure, excellent stability, high electroactive surface area, and good accessibility to analyte and electrolyte diffusion, it was used directly as the electrode material for non-enzymatic electrocatalytic oxidation of glucose. Moreover, the potential utility of this sensing platform for the analytical determination of glucose concentration was evaluated by amperometry. The results proved that the self-supported MOF thin film on GCE is a promising electrode material for fabricating and designing non-enzymatic glucose sensors. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. R high performance programming

    Lim, Aloysius

    2015-01-01

    This book is for programmers and developers who want to improve the performance of their R programs by making them run faster with large data sets or who are trying to solve a pesky performance problem.

  14. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    Gkaitatzis, Stamatios; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate and with a very small latency, of the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component of the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...
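
    The 2D pixel clustering described here (group contiguous fired pixels, then compute each cluster's centroid) can be sketched as a connected-component search; this is a generic software illustration, not the FTK firmware logic:

      from collections import deque

      def cluster_pixels(hits):
          """hits: set of (col, row) fired pixels. Returns a list of
          (centroid_col, centroid_row, pixels) using 8-connectivity."""
          remaining, clusters = set(hits), []
          while remaining:
              seed = remaining.pop()
              queue, members = deque([seed]), [seed]
              while queue:
                  c, r = queue.popleft()
                  for dc in (-1, 0, 1):
                      for dr in (-1, 0, 1):
                          n = (c + dc, r + dr)
                          if n in remaining:
                              remaining.remove(n)
                              queue.append(n)
                              members.append(n)
              cc = sum(p[0] for p in members) / len(members)
              cr = sum(p[1] for p in members) / len(members)
              clusters.append((cc, cr, members))
          return clusters

      print(cluster_pixels({(0, 0), (0, 1), (1, 1), (5, 5)}))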

  16. A CLUSTER IN THE MAKING: ALMA REVEALS THE INITIAL CONDITIONS FOR HIGH-MASS CLUSTER FORMATION

    Rathborne, J. M.; Contreras, Y.; Longmore, S. N.; Bastian, N.; Jackson, J. M.; Alves, J. F.; Bally, J.; Foster, J. B.; Garay, G.; Kruijssen, J. M. D.; Testi, L.; Walsh, A. J.

    2015-01-01

    G0.253+0.016 is a molecular clump that appears to be on the verge of forming a high-mass cluster: its extremely low dust temperature, high mass, and high density, combined with its lack of prevalent star formation, make it an excellent candidate for an Arches-like cluster in a very early stage of formation. Here we present new Atacama Large Millimeter/Sub-millimeter Array observations of its small-scale (∼0.07 pc) 3 mm dust continuum and molecular line emission from 17 different species that probe a range of distinct physical and chemical conditions. The data reveal a complex network of emission features with a complicated velocity structure: there is emission on all spatial scales, the morphology of which ranges from small, compact regions to extended, filamentary structures that are seen in both emission and absorption. The dust column density is well traced by molecules with higher excitation energies and critical densities, consistent with a clump that has a denser interior. A statistical analysis supports the idea that turbulence shapes the observed gas structure within G0.253+0.016. We find a clear break in the turbulent power spectrum derived from the optically thin dust continuum emission at a spatial scale of ∼0.1 pc, which may correspond to the spatial scale at which gravity has overcome the thermal pressure. We suggest that G0.253+0.016 is on the verge of forming a cluster from hierarchical, filamentary structures that arise from a highly turbulent medium. Although the stellar distribution within high-mass Arches-like clusters is compact, centrally condensed, and smooth, the observed gas distribution within G0.253+0.016 is extended, with no high-mass central concentration, and has a complex, hierarchical structure. If this clump gives rise to a high-mass cluster and its stars are formed from this initially hierarchical gas structure, then the resulting cluster must evolve into a centrally condensed structure via a dynamical process.

  17. High performance work practices, innovation and performance

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance, including increased productivity, improved customer service, and reduced turnover. What stands out from …, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in the HPWP literature and potential variables that can facilitate or hinder the effects of these practices on innovation and performance…

  18. Clusters of PCS for high-speed computation for modelling of the climate

    Pabon C, Jose Daniel; Eslava R, Jesus Antonio; Montoya G, Gerardo de Jesus

    2001-01-01

    In order to create high-speed computing capability, the Postgraduate Program in Meteorology of the Department of Geosciences, National University of Colombia, installed a cluster of 8 PCs for parallel processing. This high-speed processing machine was tested with the Community Climate Model (CCM3). In this paper, the results related to the performance of this machine are presented.

  19. High Intensity Femtosecond XUV Pulse Interactions with Atomic Clusters: Final Report

    Ditmire, Todd [Univ. of Texas, Austin, TX (United States). Center for High Energy Density Science

    2016-10-12

    We propose to expand our recent studies of the interactions of intense extreme ultraviolet (XUV) femtosecond pulses with atomic and molecular clusters. The work described follows directly from work performed under BES support during the past grant period. During this period we upgraded the THOR laser at UT Austin by replacing the regenerative amplifier with optical parametric amplification (OPA) using BBO crystals. This increased the contrast of the laser, raised the total laser energy to ~1.2 J, and decreased the pulse width to below 30 fs. We built a new all-reflective XUV harmonic beam line in expanded lab space, which enabled an increase in fluence by a factor of 25 and an increase in intensity by a factor of 50. The goal of the program proposed in this renewal is to extend this class of experiments to higher available XUV intensity and a greater range of wavelengths. In particular, we plan to perform experiments to confirm our hypothesis about the origin of the high charge states in these exploding clusters, an effect which we ascribe to plasma continuum lowering (ionization potential depression) in a cluster nano-plasma. To do this we will perform experiments in which XUV pulses of carefully chosen wavelength irradiate clusters composed only of low-Z atoms and clusters in which this low-Z atom is mixed with higher-Z atoms. The latter clusters will exhibit higher electron densities and will lower the ionization potential further than clusters composed only of low-Z atoms. This should have a significant effect on the charge states produced in the exploding cluster. We will also explore the transition of explosions in these XUV-irradiated clusters from hydrodynamic expansion to Coulomb explosion. The work proposed here will explore clusters of a wider range of constituents, including clusters from solids. Experiments on clusters from solids will be enabled by development we performed during the past grant period in which we constructed and

  20. A hybridized K-means clustering approach for high dimensional ...

    International Journal of Engineering, Science and Technology ... Due to the incredible growth of high dimensional datasets, conventional database querying methods are inadequate to extract useful information, so researchers nowadays ... Recently, cluster analysis is a popularly used data analysis method in a number of areas.

  1. High-mass stars in Milky Way clusters

    Negueruela, Ignacio

    2017-11-01

    Young open clusters are our laboratories for studying high-mass star formation and evolution. Unfortunately, the information that they provide is difficult to interpret, and sometimes contradictory. In this contribution, I present a few examples of the uncertainties that we face when confronting observations with theoretical models and our own assumptions.

  2. Cost/Performance Ratio Achieved by Using a Commodity-Based Cluster

    Lopez, Isaac

    2001-01-01

    Researchers at the NASA Glenn Research Center acquired a commodity cluster based on Intel Corporation processors to compare its performance with a traditional UNIX cluster in the execution of aeropropulsion applications. Since the cost differential of the clusters was significant, a cost/performance ratio was calculated. After executing a propulsion application on both clusters, the researchers demonstrated a 9.4-to-1 cost/performance ratio in favor of the Intel-based cluster. These researchers use the Aeroshark cluster as one of the primary testbeds for developing NPSS parallel application codes and system software. The Aeroshark cluster provides 64 Intel Pentium II 400-MHz processors, housed in 32 nodes. Recently, APNASA, a code developed by a government/industry team for the design and analysis of turbomachinery systems, was used for a simulation on Glenn's Aeroshark cluster.

  3. Python high performance programming

    Lanaro, Gabriele

    2013-01-01

    An exciting, easy-to-follow guide illustrating the techniques to boost the performance of Python code, and their applications, with plenty of hands-on examples. If you are a programmer who likes the power and simplicity of Python and would like to use this language for performance-critical applications, this book is ideal for you. All that is required is a basic knowledge of the Python programming language. The book covers basic and advanced topics, so it will be great for you whether you are a new or a seasoned Python developer.

  4. Implications of multiple high-redshift galaxy clusters

    Hoyle, Ben; Jimenez, Raul; Verde, Licia

    2011-01-01

    To date, 14 high-redshift (z > 1.0) galaxy clusters with mass measurements have been observed, spectroscopically confirmed, and reported in the literature. These objects should be exceedingly rare in the standard Λ cold dark matter (ΛCDM) model. We conservatively approximate the selection functions of these clusters' parent surveys and quantify the tension between the abundances of massive clusters as predicted by the standard ΛCDM model and the observed ones. We alleviate the tension by considering non-Gaussian primordial perturbations of the local type, characterized by the parameter f_NL, and derive constraints on f_NL arising from the mere existence of these clusters. At the 95% confidence level, f_NL > 467, with cosmological parameters fixed to their most likely WMAP5 values, or f_NL ≳ 123 (at 95% confidence) if we marginalize over WMAP5 priors. In combination with f_NL constraints from the cosmic microwave background and halo bias, this determination implies a scale dependence of f_NL at ≈3σ. Given the assumptions made in the analysis, we expect any future improvements to the modeling of the non-Gaussian mass function, survey volumes, or selection functions to increase the significance of f_NL > 0 found here. In order to reconcile these massive, high-z clusters with f_NL = 0, their masses would need to be systematically lowered by 1.5σ, or the σ₈ parameter should be ∼3σ higher than cosmic microwave background (and large-scale structure) constraints. The existence of these objects is a puzzle: it either represents a challenge to the ΛCDM paradigm or it is an indication that the mass estimates of clusters are dramatically more uncertain than we think.

  5. High performance germanium MOSFETs

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit (Department of Electrical Engineering, Stanford University, Stanford, CA 94305, United States; e-mail: saraswat@stanford.edu)

    2006-12-15

    Ge is a very promising material as a future channel material for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become mainstream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed, smooth, single-crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded SiGe buffer layer. Surface passivation of Ge has been achieved with its native oxynitride (GeO_xN_y) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel Si- and Ge-based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full-band Monte Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface-channel strained SiGe devices.

  7. High Performance Computing Multicast

    2012-02-01


  8. NGINX high performance

    Sharma, Rahul

    2015-01-01

    System administrators, developers, and engineers looking for ways to achieve maximum performance from NGINX will find this book beneficial. If you are looking for solutions such as how to handle more users from the same system or load your website pages faster, then this is the book for you.

  9. Comparative Performance Of Using PCA With K-Means And Fuzzy C Means Clustering For Customer Segmentation

    Fahmida Afrin

    2015-08-01

    Data mining is the process of analyzing data and discovering useful information; it is sometimes called knowledge discovery. Clustering groups data in such a way that the data in one cluster are similar while data in different clusters are dissimilar. Many data mining techniques have been developed for customer segmentation, and many clustering methods have been applied to it. PCA serves as a preprocessor for Fuzzy C-means and K-means, reducing high dimensional and noisy data. In this paper, the performance of Fuzzy C-means and K-means after applying Principal Component Analysis is analyzed on a standard dataset. The results indicate that PCA-based fuzzy clustering produces better results than PCA-based K-means and is a more stable method for customer segmentation.
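
    A minimal sketch of the PCA-then-cluster pipeline with scikit-learn (fuzzy c-means is not part of scikit-learn, so only the k-means half is shown, on synthetic stand-in data):

      import numpy as np
      from sklearn.datasets import make_blobs
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      # Synthetic stand-in for a high-dimensional, noisy customer dataset.
      X, _ = make_blobs(n_samples=500, n_features=40, centers=4, random_state=0)
      X += np.random.default_rng(0).normal(scale=2.0, size=X.shape)   # add noise

      Xs = StandardScaler().fit_transform(X)
      Xr = PCA(n_components=5).fit_transform(Xs)     # PCA as the preprocessor

      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xr)
      print("silhouette:", silhouette_score(Xr, labels))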

  10. A high-speed DAQ framework for future high-level trigger and event building clusters

    Caselle, M.; Perez, L.E. Ardila; Balzer, M.; Dritschler, T.; Kopmann, A.; Mohr, H.; Rota, L.; Vogelgesang, M.; Weber, M.

    2017-01-01

    Modern data acquisition and trigger systems require a throughput of several GB/s and latencies of the order of microseconds. To satisfy such requirements, a heterogeneous readout system based on FPGA readout cards and GPU-based computing nodes coupled by InfiniBand has been developed. The incoming data from the back-end electronics are delivered directly into the internal memory of the GPUs through dedicated peer-to-peer PCIe communication. High performance DMA engines have been developed for direct communication between FPGAs and GPUs using 'DirectGMA' (AMD) and 'GPUDirect' (NVIDIA) technologies. The proposed infrastructure is a candidate for future generations of event building clusters, high-level trigger filter farms and low-level trigger systems. In this paper the heterogeneous FPGA-GPU architecture is presented and its performance discussed.

  11. High performance proton accelerators

    Favale, A.J.

    1989-01-01

    In concert with this theme, this paper briefly outlines how Grumman, over the past 4 years, has evolved from a company that designed and fabricated a Radio Frequency Quadrupole (RFQ) accelerator from Los Alamos National Laboratory (LANL) physics designs and specifications into a company that, as prime contractor, is designing, fabricating, assembling and commissioning the US Army Strategic Defense Command's (USA SDC) Continuous Wave Deuterium Demonstrator (CWDD) accelerator as a turn-key operation. In the case of the RFQ, LANL scientists performed the physics analysis, established the specifications, supported Grumman on the mechanical design, conducted the RFQ tuning and tested the RFQ at their laboratory. For the CWDD program, Grumman has responsibility for the physics and engineering designs, assembly, testing and commissioning, albeit with the support of consultants from LANL, Lawrence Berkeley Laboratory (LBL) and Brookhaven National Laboratory. In addition, Culham Laboratory and LANL are team members on CWDD. The physics design has been reviewed by LANL scientists as well as by a USA SDC review board. 9 figs

  12. Family and academic performance: identifying high school student profiles

    Alicia Aleli Chaparro Caso López

    2016-01-01

    The objective of this study was to identify profiles of high school students based on variables related to academic performance, socioeconomic status, cultural capital, and family organization. A total of 21,724 high school students from the five municipalities of the state of Baja California took part. A K-means cluster analysis was performed to identify the profiles. The analyses identified two clearly defined clusters: Cluster 1 grouped students with high academic performance and higher scores for socioeconomic status, cultural capital, and family involvement, whereas Cluster 2 grouped students with low academic achievement, lower scores for socioeconomic status and cultural capital, and less family involvement. It is concluded that the family variables analyzed form student profiles that can be related to academic achievement.

  13. High Speed White Dwarf Asteroseismology with the Herty Hall Cluster

    Gray, Aaron; Kim, A.

    2012-01-01

    Asteroseismology is the process of using observed oscillations of stars to infer their interior structure. In high speed asteroseismology, we accomplish this by quickly computing hundreds of thousands of models to match the observed period spectra. Each model takes five to ten seconds to run on a single processor. Therefore, we use a cluster of sixteen Dell workstations with dual-core processors. The computers use the Ubuntu operating system and Apache Hadoop software to manage workloads.
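
    The scale of such a model search is easy to put in numbers: at 5-10 seconds per model, hundreds of thousands of models represent weeks of serial work, which 32 cores bring down to hours. A back-of-the-envelope sketch (the model count is an illustrative round number):

      n_models = 200_000          # "hundreds of thousands" of models (illustrative)
      sec_per_model = 7.5         # midpoint of the quoted 5-10 s
      cores = 16 * 2              # sixteen dual-core workstations

      serial_days = n_models * sec_per_model / 86_400
      parallel_hours = n_models * sec_per_model / cores / 3_600
      print(f"serial: {serial_days:.1f} days; on {cores} cores: {parallel_hours:.1f} hours")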

  14. High intensive short laser pulse interaction with submicron clusters media

    Faenov, A. Ya

    2008-01-01

    The interaction of short intense laser pulses with structured targets, such as clusters, exhibits unique features stemming from the enhanced absorption of the incident laser light compared to solid targets. Due to the increased absorption, these targets are heated significantly, leading to enhanced emission of X rays in the keV range and to the generation of electrons and multiply charged ions with kinetic energies from tens of keV to tens of MeV. Possible applications of these targets include an electron/ion source for a table-top accelerator, a neutron source for material damage studies, and an X ray source for microscopy or lithography. An overview of recent results obtained from the interaction of high-intensity short laser pulses with different submicron cluster media is presented. High resolution K- and L-shell spectra of plasma generated by superintense laser irradiation of micron-sized Ar, Kr and Xe clusters have been measured at intensities of 10¹⁷-10¹⁹ W/cm² and pulse durations of 30-1000 fs. It is found that hot electrons produced by high contrast laser pulses allow the isochoric heating of clusters and shift the ion balance toward higher charge states, which enhances both the X ray line yield and the ion kinetic energy. Irradiation of clusters produced from such gas mixtures by fs Ti:Sa laser pulses enhances the soft X ray radiation of Heβ (665.7 eV) and Lyα (653.7 eV) of oxygen by a factor of 2-8 compared with targets of pure CO₂ or N₂O clusters, reaching 2.8 × 10¹⁰ (∼3 μJ) and 2.7 × 10¹⁰ (∼2.9 μJ) ph/(sr·pulse), respectively. Conventional soft X ray images of nanostructured 100 nm thick Mo and Zr foils over a wide field of view (cm² scale) with high spatial resolution (700 nm) are obtained using LiF crystals as soft X ray imaging detectors. When the target used for ion acceleration studies consists of solid-density clusters embedded in a background gas, its irradiation by high intensity laser light makes the target

  15. A highly efficient multi-core algorithm for clustering extremely large datasets

    Kraus Johann M

    2010-04-01

    Background: In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies, and this demand is likely to increase further. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches to parallelizing algorithms rely largely on network communication protocols connecting, and requiring, multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute tasks among the different cores of one computer. Results: We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis, employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets, while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions: Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
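
    The parallelization pattern described, distributing the expensive assignment step of k-means across the cores of one machine, can be sketched in a few lines; this uses Python's multiprocessing rather than the authors' Java/transactional-memory design:

      import numpy as np
      from multiprocessing import Pool

      def assign_chunk(args):
          """Nearest-centroid assignment for one chunk of rows."""
          chunk, centroids = args
          d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=-1)
          return d.argmin(axis=1)

      def parallel_kmeans(X, k, iters=20, workers=4, seed=0):
          rng = np.random.default_rng(seed)
          centroids = X[rng.choice(len(X), k, replace=False)]
          chunks = np.array_split(X, workers)
          with Pool(workers) as pool:
              for _ in range(iters):
                  parts = pool.map(assign_chunk, [(c, centroids) for c in chunks])
                  labels = np.concatenate(parts)
                  # The update step is cheap and stays serial; keep the old
                  # centroid if a cluster goes empty.
                  centroids = np.stack([X[labels == j].mean(axis=0)
                                        if np.any(labels == j) else centroids[j]
                                        for j in range(k)])
          return labels, centroids

      if __name__ == "__main__":          # guard required by multiprocessing
          X = np.random.default_rng(1).normal(size=(10_000, 8))
          labels, C = parallel_kmeans(X, k=5)
          print(np.bincount(labels))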

  16. AN EFFECTIVE MULTI-CLUSTERING ANONYMIZATION APPROACH USING DISCRETE COMPONENT TASK FOR NON-BINARY HIGH DIMENSIONAL DATA SPACES

    L.V. Arun Shalin

    2016-01-01

    Clustering is a process of grouping elements together, designed in such a way that the elements assigned to similar data points in a cluster are more comparable to each other than the remaining data points in the cluster. When clustering high dimensional data, certain difficulties are ubiquitous and abundant. Prior work on anonymization methods for high dimensional data spaces failed to address dimensionality reduction for non-binary databases. In this work we study methods of dimensionality reduction for non-binary databases; analyzing the behavior of dimensionality reduction for non-binary databases yields performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. To start with, we present the analysis of attributes in the non-binary database, and cluster projection identifies the sparseness degree of dimensions. Additionally, with the quantum distribution on the multi-cluster dimension, a solution for attribute relevancy and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features, which are correspondingly replaced by the equivalent feature clusters (i.e., tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and during decoding individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization

  17. A Fast General-Purpose Clustering Algorithm Based on FPGAs for High-Throughput Data Processing

    Annovi, A; The ATLAS collaboration; Castegnaro, A; Gatta, M

    2012-01-01

    We present a fast general-purpose algorithm for high-throughput clustering of data "with a two-dimensional organization". The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes the algorithm especially well suited for problems where the data density is high, e.g. for tracking devices working under high-luminosity conditions such as those of the LHC or Super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster of data to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, the algorithm has a much broader field of applications. In ...

  18. Symptom clusters in patients with high-grade glioma.

    Fox, Sherry W; Lyon, Debra; Farace, Elana

    2007-01-01

    To describe the co-occurring symptoms (depression, fatigue, pain, sleep disturbance, and cognitive impairment), quality of life (QoL), and functional status in patients with high-grade glioma. Correlational, descriptive study of 73 participants with high-grade glioma in the U.S. Nine brief measures were obtained with a mailed survey. Participants were recruited from the online message board of The Healing Exchange BRAIN TRUST, a nonprofit organization dedicated to improving quality of life for people with brain tumors. Two symptom cluster models were examined. Four co-occurring symptoms were significantly correlated with each other and explained 29% of the variance in QoL: depression, fatigue, sleep disturbance, and cognitive impairment. Depression, fatigue, sleep disturbance, cognitive impairment, and pain were significantly correlated with each other and explained 62% of the variance in functional status. The interrelationships of the symptoms examined in this study and their relationships with QoL and functional status meet the criteria for defining a symptom cluster. The differences in the models of QoL and functional status indicates that symptom clusters may have unique characteristics in patients with gliomas.

  19. THE XMM CLUSTER SURVEY: THE BUILD-UP OF STELLAR MASS IN BRIGHTEST CLUSTER GALAXIES AT HIGH REDSHIFT

    Stott, J. P.; Collins, C. A.; Hilton, M.; Capozzi, D.; Sahlen, M.; Lloyd-Davies, E.; Hosmer, M.; Liddle, A. R.; Mehrtens, N.; Romer, A. K.; Miller, C. J.; Stanford, S. A.; Viana, P. T. P.; Davidson, M.; Hoyle, B.; Kay, S. T.; Nichol, R. C.

    2010-01-01

    We present deep J- and Ks-band photometry of 20 high redshift galaxy clusters between z = 0.8 and 1.5, 19 of which are observed with the MOIRCS instrument on the Subaru telescope. By using near-infrared light as a proxy for stellar mass we find the surprising result that the average stellar mass of Brightest Cluster Galaxies (BCGs) has remained constant at ∼9 × 10¹¹ M_⊙ since z ∼ 1.5. We investigate the effect on this result of differing star formation histories generated by three well-known and independent stellar population codes and find it to be robust for reasonable, physically motivated choices of age and metallicity. By performing Monte Carlo simulations we find that the result is unaffected by any correlation between BCG mass and cluster mass in either the observed or model clusters. The large stellar masses imply that the assemblage of these galaxies took place at the same time as the initial burst of star formation. This result leads us to conclude that dry merging has had little effect on the average stellar mass of BCGs over the last 9-10 Gyr, in stark contrast to the predictions of semi-analytic models, based on the hierarchical merging of dark matter halos, which predict a more protracted mass build-up over a Hubble time. However, we discuss that there is potential for reconciliation between observation and theory if there is a significant growth of material in the intracluster light over the same period.

  20. Cluster-cluster clustering

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C.S. (Yale Univ., New Haven, CT; California Univ., Santa Barbara; Cambridge Univ., England; Sussex Univ., Brighton, England)

    1985-01-01

    The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales. 30 references

  1. ENHANCING PERFORMANCE OF AN HPC CLUSTER BY ADOPTING NONDEDICATED NODES

    Pil Seong Park

    2015-01-01

    Personal-sized HPC clusters are widely used in many small labs because they are cost-effective and easy to build. Instead of adding costly new nodes to old clusters, we may try to make use of the idle times of some servers by having them work independently on the same LAN, especially during the night. However, such an extension across a firewall raises not only a security problem with NFS but also a load balancing problem caused by heterogeneity. In this paper, we propose a meth...

  2. clusters

    2017-09-27

    Sep 27, 2017 ... Author for correspondence (zh4403701@126.com). MS received 15 ... lic clusters using density functional theory (DFT)-GGA of the DMOL3 package. ... In the process of geometric optimization, convergence thresholds ... and Postgraduate Research & Practice Innovation Program of Jiangsu Province ...

  3. clusters

    environmental as well as technical problems during fuel gas utilization. ... adsorption on some alloys of Pd, namely PdAu, PdAg ... ried out on small neutral and charged Au [24,26,27], Cu [28] ... study of Zanti et al. [29] on Pd_n (n = 1-9) clusters.

  4. Mesophase Formation Stabilizes High-purity Magic-sized Clusters

    Nevers, Douglas R.; Williamson, Curtis B.; Savitzky, Benjamin H; Hadar, Ido; Banin, Uri; Kourkoutis, Lena F.; Hanrath, Tobias; Robinson, Richard D.

    2018-01-01

    Magic-sized clusters (MSCs) are renowned for their identical size and closed-shell stability, which inhibit conventional nanoparticle (NP) growth processes. Though MSCs have been of increasing interest, understanding the reaction pathways toward their nucleation and stabilization is an outstanding issue. In this work, we demonstrate that high concentration synthesis (1000 mM) promotes a well-defined reaction pathway to form high-purity MSCs (>99.9%). The MSCs are resistant to typical growth and dissolution processes. Based on insights from in-situ X-ray scattering analysis, we attribute this stability to the accompanying production of a large, hexagonal organic-inorganic mesophase (>100 nm grain size) that arrests growth of the MSCs and prevents NP growth. At intermediate concentrations (500 mM), the MSC mesophase forms but is unstable, resulting in NP growth at the expense of the assemblies. These results provide an alternate explanation for the high stability of MSCs. Whereas the conventional mantra has been that the stability of MSCs derives from the precise arrangement of the inorganic structures (i.e., closed-shell atomic packing), we demonstrate that anisotropic clusters can also be stabilized by self-forming fibrous mesophase assemblies. At lower concentrations (<200 mM or >16 acid-to-metal), MSCs are further destabilized and NP formation dominates that of MSCs. Overall, the high concentration approach intensifies and showcases inherent concentration-dependent surfactant phase behavior that is not accessible under conventional (i.e., dilute) conditions. This work provides not only a robust method to synthesize, stabilize, and study identical MSC products, but also uncovers an underappreciated stabilizing interaction between surfactants and clusters.

  6. Performance prediction model for distributed applications on multicore clusters

    Khanyile, NP

    2012-07-01

    … discusses some of the shortcomings of this law in the current age. We propose a theoretical model for predicting the behavior of a distributed algorithm given the network restrictions of the cluster used. The paper focuses on the impact of latency...
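
    The truncated antecedent is presumably Amdahl's law; assuming so, a toy version of the kind of prediction model described, extending the law with a per-message network-latency term (all constants illustrative), is:

      def predicted_speedup(p, serial_frac, t_total, latency, msgs_per_node):
          """Amdahl-style speedup degraded by communication latency.
          p: nodes; serial_frac: non-parallelizable fraction of the
          t_total seconds of serial work; latency: seconds per message."""
          t_parallel = (serial_frac + (1 - serial_frac) / p) * t_total
          t_comm = latency * msgs_per_node * p      # naive all-to-one cost model
          return t_total / (t_parallel + t_comm)

      for p in (2, 8, 32, 128):
          print(p, round(predicted_speedup(p, 0.05, 100.0, 1e-3, 50), 2))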

  7. Effects of cluster vs. traditional plyometric training sets on maximal-intensity exercise performance

    Abbas Asadi

    2016-01-01

    Conclusions: Although both plyometric training methods improved lower body maximal-intensity exercise performance, the traditional sets method resulted in greater adaptations in sprint performance, while the cluster sets method resulted in greater jump and agility adaptations.

  8. High Performance Networks for High Impact Science

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  9. Topological cell clustering in the ATLAS calorimeters and its performance in LHC Run 1

    Aad, G. [CPPM, Aix-Marseille Univ. et CNRS/IN2P3, Marseille (France); Abbott, B. [Oklahoma Univ., Norman, OK (United States). Homer L. Dodge Dept. of Physics and Astronomy; Abdallah, J. [Academia Sinica, Taipei (China). Inst. of Physics; Collaboration: ATLAS Collaboration; and others

    2017-07-15

    The reconstruction of the signal from hadrons and jets emerging from the proton-proton collisions at the Large Hadron Collider (LHC) and entering the ATLAS calorimeters is based on a three-dimensional topological clustering of individual calorimeter cell signals. The cluster formation follows cell signal-significance patterns generated by electromagnetic and hadronic showers. In this, the clustering algorithm implicitly performs a topological noise suppression by removing cells with insignificant signals which are not in close proximity to cells with significant signals. The resulting topological cell clusters have shape and location information, which is exploited to apply a local energy calibration and corrections depending on the nature of the cluster. Topological cell clustering is established as a well-performing calorimeter signal definition for jet and missing transverse momentum reconstruction in ATLAS. (orig.)
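
    A toy version of such significance-driven topological clustering on a 2-D cell grid, assuming the commonly cited 4σ/2σ/0σ seed/grow/boundary thresholds; the real ATLAS algorithm runs on the full 3-D calorimeter geometry with cluster splitting and local calibration:

        import numpy as np
        from collections import deque

        def topo_cluster(sig, seed=4.0, grow=2.0, boundary=0.0):
            # Label map: 0 = unclustered. Start clusters at seed cells
            # (|S| > 4 sigma), expand through neighbours above 2 sigma,
            # and attach (without expanding) a final ring above 0 sigma.
            s = np.abs(sig)
            labels = np.zeros(s.shape, dtype=int)
            next_label = 0
            for idx in zip(*np.where(s > seed)):
                if labels[idx]:
                    continue
                next_label += 1
                labels[idx] = next_label
                queue = deque([idx])
                while queue:
                    i, j = queue.popleft()
                    for ni, nj in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
                        if 0 <= ni < s.shape[0] and 0 <= nj < s.shape[1] \
                                and not labels[ni, nj]:
                            if s[ni, nj] > grow:
                                labels[ni, nj] = next_label
                                queue.append((ni, nj))
                            elif s[ni, nj] > boundary:
                                labels[ni, nj] = next_label  # boundary cell
            return labels

        sig = np.random.default_rng(0).normal(size=(32, 32))  # pure noise ...
        sig[10:13, 10:13] += 8.0                              # ... plus a fake shower
        print(np.unique(topo_cluster(sig)))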

  10. Performance quantification of clustering algorithms for false positive removal in fMRI by ROC curves

    André Salles Cunha Peres

    Full Text Available Abstract Introduction Functional magnetic resonance imaging (fMRI) is a non-invasive technique that allows the detection of specific cerebral functions in humans based on hemodynamic changes. The contrast changes are about 5%, making visual inspection impossible. Thus, statistical strategies are applied to infer which brain region is engaged in a task. However, traditional methods like the general linear model and cross-correlation utilize voxel-wise calculation, introducing a lot of false-positive data. So, in this work we tested post-processing cluster algorithms to diminish the false positives. Methods In this study, three clustering algorithms (hierarchical clustering, k-means and self-organizing maps) were tested and compared for false-positive removal in the post-processing of cross-correlation analyses. Results Our results showed that hierarchical clustering presented the best performance for removing false positives in fMRI, being 2.3 times more accurate than k-means, and 1.9 times more accurate than self-organizing maps. Conclusion Hierarchical clustering presented the best performance in false-positive removal because it uses the inconsistency coefficient threshold, while k-means and self-organizing maps require an a priori cluster number (number of centroids and neurons); thus, hierarchical clustering avoids clustering scattered voxels, as the inconsistency coefficient threshold allows only voxels that are within a minimum distance of some cluster to be clustered.
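
    The deciding detail here is the inconsistency-coefficient cut, which SciPy exposes directly. A minimal sketch on synthetic "activated voxel" coordinates (the threshold value and data are illustrative, and a recent SciPy is assumed):

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.cluster.vq import kmeans2

        rng = np.random.default_rng(1)
        # Two tight blobs of truly activated voxels plus scattered false positives.
        blobs = np.vstack([rng.normal(0, 0.5, (40, 3)), rng.normal(5, 0.5, (40, 3))])
        scatter = rng.uniform(-10, 15, (10, 3))
        voxels = np.vstack([blobs, scatter])

        # Hierarchical clustering cut by the inconsistency coefficient: scattered
        # voxels join no compact cluster, so they can be discarded as false positives.
        Z = linkage(voxels, method='average')
        labels_h = fcluster(Z, t=1.15, criterion='inconsistent')  # illustrative t

        # k-means, by contrast, needs k a priori and absorbs scattered voxels
        # into the nearest centroid.
        _, labels_k = kmeans2(voxels, 2, seed=1)

        print('hierarchical cluster sizes:', np.bincount(labels_h)[1:])
        print('k-means cluster sizes:', np.bincount(labels_k))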

  11. A high performance scientific cloud computing environment for materials simulations

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  12. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

    Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.

  13. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    Minetti Andrea

    2012-10-01

    Full Text Available Abstract Background Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.
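
    A sketch of the bootstrapping idea in (i): resample clusters with replacement from the full 10 × 15 survey and watch the spread of the coverage estimate grow as children per cluster are dropped. All numbers below are synthetic stand-ins for the Mali data:

        import numpy as np

        rng = np.random.default_rng(42)
        # Synthetic survey: 10 clusters x 15 children, 1 = vaccinated (true VC ~ 85%).
        survey = rng.binomial(1, 0.85, size=(10, 15))

        def bootstrap_vc_se(survey, children_per_cluster, n_boot=2000):
            # Bootstrap the standard error of vaccination coverage when only the
            # first `children_per_cluster` children of each resampled cluster are kept.
            n_clusters = survey.shape[0]
            estimates = []
            for _ in range(n_boot):
                picked = rng.integers(0, n_clusters, size=n_clusters)
                estimates.append(survey[picked, :children_per_cluster].mean())
            return np.std(estimates)

        for m in (15, 10, 5, 3):
            print(f'10 x {m:2d} design: bootstrap SE of VC = {bootstrap_vc_se(survey, m):.3f}')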

  14. RavenDB high performance

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial that developers can use to... This book is for developers and software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  15. [Electronic and structural properties of individual nanometer-size supported metallic clusters]. Final performance report

    Reifenberger, R.

    1993-09-01

    This report summarizes the work performed under contract DOE-FCO2-84ER45162. During the past ten years, our study of electron emission from laser-illuminated field emission tips has taken on a broader scope by addressing problems of direct interest to those concerned with the unique physical and chemical properties of nanometer-size clusters. The work performed has demonstrated that much needed data can be obtained on individual nanometer-size clusters supported on a wide-variety of different substrates. The work was performed in collaboration with R.P. Andres in the School of Chemical Engineering at Purdue University. The Multiple Expansion Cluster Source developed by Andres and his students was essential for producing the nanometer-size clusters studied. The following report features a discussion of these results. This report provides a motivation for studying the properties of nanometer-size clusters and summarizes the results obtained.

  16. High-Performance Operating Systems

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications......, communication systems and multimedia systems....

  17. Analysis of SCTP and TCP based communication in high-speed clusters

    Kozlovszky, M.; Berceli, T.; Kutor, L.

    2006-01-01

    Performance and financial constraints are pushing modern DAQs (Data Acquisition Systems) to use distributed cluster environments instead of monolithic one-box systems. Inside clusters, application communication layers should support outstandingly high performance requirements. We are currently investigating different network protocols that could meet the requirements of high-speed/low-latency peer-to-peer communication within DAQ clusters. We have carried out various performance measurements with TCP and SCTP over Fast and Gigabit Ethernet. We are focusing on Ethernet technologies because this transport medium is broadly deployed, cost-efficient, and has a much better cost/throughput ratio than other available communication alternatives (e.g., Myrinet, Infiniband). During this study, a protocol performance measurement application with different peer transport components was developed. In the first part of the paper, we give a short comparison of the two protocols (SCTP and TCP) and an introduction to the transport layer structure developed. Later on, we discuss the performance results of single/multi-stream peer-to-peer communication, give an overview of application code transition possibilities between the two protocols from the application developer's point of view, and draw conclusions about usability.
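
    A bare-bones version of such a peer-to-peer round-trip measurement, written against plain TCP sockets; on Linux the same pattern can target SCTP via socket.IPPROTO_SCTP, kernel support permitting:

        import socket, threading, time

        def echo_server(server_sock):
            conn, _ = server_sock.accept()
            with conn:
                while data := conn.recv(4096):  # echo until the client closes
                    conn.sendall(data)

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(('127.0.0.1', 0))
        server.listen(1)
        threading.Thread(target=echo_server, args=(server,), daemon=True).start()

        client = socket.create_connection(server.getsockname())
        client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle delay

        payload = b'x' * 1024
        rounds = 1000
        t0 = time.perf_counter()
        for _ in range(rounds):
            client.sendall(payload)
            received = 0
            while received < len(payload):
                received += len(client.recv(4096))
        print(f'mean round-trip: {(time.perf_counter() - t0) / rounds * 1e6:.1f} us')
        client.close()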

  18. Charge-sign-clustering observed in high-multiplicity, high-energy heavy-ion collisions

    Takahashi, Y.; Gregory, J.C.; Hayashi, T.

    1989-01-01

    Charge-sign distribution in 200 GeV/amu heavy-ion collisions is studied with the Magnetic-Interferometric-Emulsion-Chamber (MAGIC) for central collision events in ^16O + Pb and ^32S + Pb interactions. Charge-sign clustering is observed in most of the fully-analyzed events. A statistical 'run-test' is performed for each measured event, which shows significant deviation from the Gaussian distribution (0,1) expected for a random charge distribution. Candidate charge clusters have a multiplicity of 5-10 like-sign particles, and are often accompanied by opposite-sign clusters. The observed clustering of identical charges is more significant in the fragmentation region than in the central region. Two-particle Bose-Einstein interference and other effects are discussed for the run-test examination. (author)

  19. High resolution photoelectron spectroscopy of clusters of Group V elements

    Wang, Lai-sheng; Niu, B.; Lee, Y.T.; Shirley, D.A.

    1989-07-01

    High resolution HeI (580 angstrom) photoelectron spectra of As_2, As_4, and P_4 were obtained with a newly-built high temperature molecular beam source. Vibrational structure was resolved in the photoelectron spectra of the three cluster species. The Jahn-Teller effect is discussed for the ^2E and ^2T_2 states of P_4^+ and As_4^+. As a result of the Jahn-Teller effect, the ^2E state splits into two bands, and the ^2T_2 state splits into three bands, in combination with the spin-orbit effect. It was observed that the ν_2 normal vibrational mode was involved in the vibronic interaction of the ^2E state, while both the ν_2 and ν_3 modes were active in the ^2T_2 state. 26 refs., 5 figs., 3 tabs

  20. Assessment of In-Cloud Enterprise Resource Planning System Performed in a Virtual Cluster

    Bao Rong Chang

    2015-01-01

    Full Text Available This paper introduces a high-performance, high-availability in-cloud enterprise resource planning (in-cloud ERP) system deployed in a virtual machine cluster. The proposed approach can resolve the crucial problems of ERP failure due to unexpected downtime and of failover between physical hosts in enterprises, which cause operation termination and hence data loss. Besides, the proposed system, together with access control authentication and network security, is capable of preventing intrusion and/or malicious attacks via the internet. Regarding system assessment, the cost-performance (C-P) ratio, a remarkable cost-effectiveness evaluation, has been applied to several well-known ERP systems. As a result, the C-P ratio evaluated from the experiments shows that the proposed approach outperforms two well-known benchmark ERP systems, namely in-house ECC 6.0 and in-cloud ByDesign.

  1. Family-based clusters of cognitive test performance in familial schizophrenia

    Partonen Timo

    2004-07-01

    Full Text Available Abstract Background Cognitive traits derived from neuropsychological test data are considered to be potential endophenotypes of schizophrenia. Previously, these traits have been found to form a valid basis for clustering samples of schizophrenia patients into homogeneous subgroups. We set out to identify such clusters, but apart from previous studies, we included both schizophrenia patients and family members into the cluster analysis. The aim of the study was to detect family clusters with similar cognitive test performance. Methods Test scores from 54 randomly selected families comprising at least two siblings with schizophrenia spectrum disorders, and at least two unaffected family members were included in a complete-linkage cluster analysis with interactive data visualization. Results A well-performing, an impaired, and an intermediate family cluster emerged from the analysis. While the neuropsychological test scores differed significantly between the clusters, only minor differences were observed in the clinical variables. Conclusions The visually aided clustering algorithm was successful in identifying family clusters comprising both schizophrenia patients and their relatives. The present classification method may serve as a basis for selecting phenotypically more homogeneous groups of families in subsequent genetic analyses.

  2. FLOCK cluster analysis of mast cell event clustering by high-sensitivity flow cytometry predicts systemic mastocytosis.

    Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty

    2015-11-01

    In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
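
    The reported predictive values follow directly from the counts in the abstract (56/75 SM cases and 17/124 non-SM cases flagged); a quick check of the arithmetic:

        # Confusion-matrix counts taken from the abstract.
        tp, fn = 56, 75 - 56      # SM cases with / without a FLOCK-identified population
        fp, tn = 17, 124 - 17     # non-SM cases flagged / not flagged

        sensitivity = tp / (tp + fn)   # 56/75   = 0.747 -> ~75%
        specificity = tn / (tn + fp)   # 107/124 = 0.863 -> ~86%
        ppv = tp / (tp + fp)           # 56/73   = 0.767 -> ~76%
        npv = tn / (tn + fn)           # 107/126 = 0.849 -> ~85%
        print(f'{sensitivity:.2f} {specificity:.2f} {ppv:.2f} {npv:.2f}')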

  3. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Techniques for clustering high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper develops an approach to cluster such data using high-dimensional similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring the cluster number from the user. The PCM becomes similarity-based by combining it with the mountain method. Though this clustering is efficient, it is further checked for optimization using an ant colony algorithm with swarm intelligence. Thus a scalable clustering technique is obtained, and the evaluation results are verified with synthetic datasets.

  4. Relevant Subspace Clustering

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2009-01-01

    Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace clusters. Their results are typically highly redundant, i.e. many clusters are detected multiple times in several projections. In this work, we propose a novel model for relevant subspace clustering (RESCU). We present a global optimization which detects the most interesting non-redundant subspace clusters. ... RESCU achieves top clustering quality while competing approaches show greatly varying performance...

  5. Relics in galaxy clusters at high radio frequencies

    Kierdorf, M.; Beck, R.; Hoeft, M.; Klein, U.; van Weeren, R. J.; Forman, W. R.; Jones, C.

    2017-04-01

    Aims: We investigated the magnetic properties of radio relics located at the peripheries of galaxy clusters at high radio frequencies, where the emission is expected to be free of Faraday depolarization. The degree of polarization is a measure of the magnetic field compression and, hence, the Mach number. Polarization observations can also be used to confirm relic candidates. Methods: We observed three radio relics in galaxy clusters and one radio relic candidate at 4.85 and 8.35 GHz in total emission and linearly polarized emission with the Effelsberg 100-m telescope. In addition, we observed one radio relic candidate in X-rays with the Chandra telescope. We derived maps of polarization angle, polarization degree, and Faraday rotation measures. Results: The radio spectra of the integrated emission below 8.35 GHz can be well fitted by single power laws for all four relics. The flat spectra (spectral indices of 0.9 and 1.0) for the so-called Sausage relic in cluster CIZA J2242+53 and the so-called Toothbrush relic in cluster 1RXS 06+42 indicate that models describing the origin of relics have to include effects beyond the assumptions of diffuse shock acceleration. The spectra of the radio relics in ZwCl 0008+52 and in Abell 1612 are steep, as expected from weak shocks (Mach number ≈2.4). Polarization observations of radio relics offer a method of measuring the strength and geometry of the shock front. We find polarization degrees of more than 50% in the two prominent Mpc-sized radio relics, the Sausage and the Toothbrush, which are among the highest percentages of linear polarization detected in any extragalactic radio source to date. This is remarkable because the large beam size of the Effelsberg single-dish telescope corresponds to linear extensions of about 300 kpc at 8.35 GHz at the distances of the relics. The high degree of polarization indicates that the magnetic field vectors are almost perfectly aligned along the relic structure, as expected for shock

  6. Evolution of highly compact binary stellar systems in globular clusters

    Krolik, J.H.; Meiksin, A.; Joss, P.C.

    1984-01-01

    We have calculated the secular evolution of a highly compact binary stellar system, composed of a collapsed object and a low-mass secondary star, in the core of a globular cluster. The binary evolves under the combined influences of (i) gravitational radiation losses from the system, (ii) the evolution of the secondary star, (iii) the resultant gradual mass transfer, if any, from the secondary to the collapsed object, and (iv) occasional encounters with passing field stars. We calculate all these effects in detail, utilizing some simplifying approximations appropriate to low-mass secondaries. The times of encounters with field stars, and the initial parameters specifying those encounters, were chosen by use of a Monte Carlo technique; the subsequent gravitational interactions were calculated utilizing a three-body integrator, and the changes in the binary orbital parameters were thereby determined. We carried out a total of 20 such evolutionary calculations for each of two cluster core densities (1 and 3 × 10^3 stars pc^-3). Each calculation was continued until the binary was disrupted or until 2 × 10^10 yr had elapsed.

  7. Screen media usage, sleep time and academic performance in adolescents: clustering a self-organizing maps analysis.

    Peiró-Velert, Carmen; Valencia-Peris, Alexandra; González, Luis M; García-Massó, Xavier; Serra-Añó, Pilar; Devís-Devís, José

    2014-01-01

    Screen media usage, sleep time and socio-demographic features are related to adolescents' academic performance, but interrelations are little explored. This paper describes these interrelations and behavioral profiles clustered in low and high academic performance. A nationally representative sample of 3,095 Spanish adolescents, aged 12 to 18, was surveyed on 15 variables linked to the purpose of the study. A Self-Organizing Maps analysis established non-linear interrelationships among these variables and identified behavior patterns in subsequent cluster analyses. Topological interrelationships established from the 15 emerging maps indicated that boys used more passive videogames and computers for playing than girls, who tended to use mobile phones to communicate with others. Adolescents with the highest academic performance were the youngest. They slept more and spent less time using sedentary screen media when compared to those with the lowest performance, and they also showed topological relationships with higher socioeconomic status adolescents. Cluster 1 grouped boys who spent more than 5.5 hours daily using sedentary screen media. Their academic performance was low and they slept an average of 8 hours daily. Cluster 2 gathered girls with an excellent academic performance, who slept nearly 9 hours per day, and devoted less time daily to sedentary screen media. Academic performance was directly related to sleep time and socioeconomic status, but inversely related to overall sedentary screen media usage. Profiles from the two clusters were strongly differentiated by gender, age, sedentary screen media usage, sleep time and academic achievement. Girls with the highest academic results had a medium socioeconomic status in Cluster 2. Findings may contribute to establishing recommendations about the timing and duration of screen media usage in adolescents and appropriate sleep time needed to successfully meet the demands of school academics and to improve

  8. Screen media usage, sleep time and academic performance in adolescents: clustering a self-organizing maps analysis.

    Carmen Peiró-Velert

    Full Text Available Screen media usage, sleep time and socio-demographic features are related to adolescents' academic performance, but interrelations are little explored. This paper describes these interrelations and behavioral profiles clustered in low and high academic performance. A nationally representative sample of 3,095 Spanish adolescents, aged 12 to 18, was surveyed on 15 variables linked to the purpose of the study. A Self-Organizing Maps analysis established non-linear interrelationships among these variables and identified behavior patterns in subsequent cluster analyses. Topological interrelationships established from the 15 emerging maps indicated that boys used more passive videogames and computers for playing than girls, who tended to use mobile phones to communicate with others. Adolescents with the highest academic performance were the youngest. They slept more and spent less time using sedentary screen media when compared to those with the lowest performance, and they also showed topological relationships with higher socioeconomic status adolescents. Cluster 1 grouped boys who spent more than 5.5 hours daily using sedentary screen media. Their academic performance was low and they slept an average of 8 hours daily. Cluster 2 gathered girls with an excellent academic performance, who slept nearly 9 hours per day, and devoted less time daily to sedentary screen media. Academic performance was directly related to sleep time and socioeconomic status, but inversely related to overall sedentary screen media usage. Profiles from the two clusters were strongly differentiated by gender, age, sedentary screen media usage, sleep time and academic achievement. Girls with the highest academic results had a medium socioeconomic status in Cluster 2. Findings may contribute to establishing recommendations about the timing and duration of screen media usage in adolescents and appropriate sleep time needed to successfully meet the demands of school academics and
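
    A minimal, hand-rolled Self-Organizing Map in NumPy illustrating the kind of non-linear mapping used above; the study's 15 survey variables are stood in for by random data, and grid size, rates and epochs are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.random((3095, 15))   # stand-in for 3,095 adolescents x 15 variables

        rows, cols, dim = 10, 10, data.shape[1]
        weights = rng.random((rows, cols, dim))
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                    indexing='ij'), axis=-1)

        for epoch in range(5):
            lr = 0.5 * (1 - epoch / 5)               # decaying learning rate
            sigma = max(3.0 * (1 - epoch / 5), 0.5)  # decaying neighbourhood radius
            for x in data[rng.permutation(len(data))[:500]]:
                # Best-matching unit: node whose weight vector is closest to x.
                d = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(d), d.shape)
                # Pull the BMU and its grid neighbours toward x.
                h = np.exp(-np.sum((grid - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
                weights += lr * h[..., None] * (x - weights)

        # Each subject now maps to a node; clustering the nodes (e.g. k-means on
        # weights.reshape(-1, dim)) yields behaviour profiles like those described.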

  9. High-performance scientific computing in the cloud

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  10. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  11. Identifying High Performance ERP Projects

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  12. INL High Performance Building Strategy

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  13. Inter-firm relations in SME clusters and the link to marketing performance

    Lamprinopoulou, C.; Tregear, A.

    2011-01-01

    Purpose – Networks are increasingly recognised as being important to successful marketing amongst small and medium-sized enterprises (SMEs). The purpose of this study is to investigate the structure and content of network relations amongst SME clusters, and explore the link to marketing performance. Design/methodology/approach – Following a review of the literature on SME networks and marketing performance, case study analysis is performed on four SME clusters in the Greek agrifood sector. Findin...

  14. Understanding the stable boron clusters: A bond model and first-principles calculations based on high-throughput screening

    Xu, Shao-Gang; Liao, Ji-Hai; Zhao, Yu-Jun; Yang, Xiao-Bao

    2015-01-01

    The unique electronic properties and resulting structural diversity of boron (B) clusters have attracted much interest from experimentalists and theorists. B_30 to B_40 clusters were recently reported to be planar fragments of a triangular lattice with proper concentrations of vacancies. Here, we have performed high-throughput screening of possible B clusters through first-principles calculations, including various shapes and distributions of vacancies. As a result, we have determined the structures of B_n clusters with n = 30-51 and found a stable planar cluster, B_49, with a double-hexagon vacancy. Considering the 8-electron rule and electron delocalization, a concise model for the distribution of the 2c-2e and 3c-2e bonds has been proposed to explain the stability of planar B clusters, as well as the reported B cages

  15. High performance fuel technology development

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    • Development of High Plasticity and Annular Pellet - Development of strong candidates of ultra high burn-up fuel pellets for a PCI remedy - Development of fabrication technology of annular fuel pellet • Development of High Performance Cladding Materials - Irradiation test of HANA claddings in Halden research reactor and the evaluation of the in-pile performance - Development of the final candidates for the next generation cladding materials. - Development of the manufacturing technology for the dual-cooled fuel cladding tubes. • Irradiated Fuel Performance Evaluation Technology Development - Development of performance analysis code system for the dual-cooled fuel - Development of fuel performance-proving technology • Feasibility Studies on Dual-Cooled Annular Fuel Core - Analysis on the property of a reactor core with dual-cooled fuel - Feasibility evaluation on the dual-cooled fuel core • Development of Design Technology for Dual-Cooled Fuel Structure - Definition of technical issues and invention of concept for dual-cooled fuel structure - Basic design and development of main structure components for dual-cooled fuel - Basic design of a dual-cooled fuel rod.

  16. High Performance Bulk Thermoelectric Materials

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth of carbon nanotubes and semiconducting nanowires and studies of their field emission properties; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  17. High resolution imaging of colliding blast waves in cluster media

    Smith, Roland A [Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2AZ (United Kingdom); Lazarus, James [Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2AZ (United Kingdom); Hohenberger, Matthias [Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2AZ (United Kingdom); Marocchino, Alberto [Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2AZ (United Kingdom); Robinson, Joseph S [Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2AZ (United Kingdom); Chittenden, Jeremy P [Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2AZ (United Kingdom); Moore, Alastair S [Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2AZ (United Kingdom); Gumbrell, Edward T [Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2AZ (United Kingdom); Dunne, Mike [Central Laser Facility, Rutherford Appleton Laboratory, Chilton, Didcot OX11 OQX (United Kingdom)

    2007-12-15

    Strong shocks and blast wave collisions are commonly observed features in astrophysical objects such as nebulae and supernova remnants. Numerical simulations often underpin our understanding of these complex systems, however modelling of such extreme phenomena remains challenging, particularly so for the case of radiative or colliding shocks. This highlights the need for well-characterized laboratory experiments both to guide physical insight and to provide robust data for code benchmarking. Creating a sufficiently high-energy-density gas medium for conducting scaled laboratory astrophysics experiments has historically been problematic, but the unique ability of atomic cluster gases to efficiently couple to intense pulses of laser light now enables table top scale (1 J input energy) studies to be conducted at gas densities of >10^19 particles cm^-3 with an initial energy density >5 × 10^9 J g^-1. By laser heating atomic cluster gas media we can launch strong (up to Mach 55) shocks in a range of geometries, with and without radiative precursors. These systems have been probed with a range of optical and interferometric diagnostics in order to retrieve electron density profiles and blast wave trajectories. Colliding cylindrical shock systems have also been studied, however the strongly asymmetric density profiles and radial and longitudinal mass flow that result demand a more complex diagnostic technique based on tomographic phase reconstruction. We have used the 3D magnetoresistive hydrocode GORGON to model these systems and to highlight interesting features such as the formation of a Mach stem for further study.

  18. High performance in software development

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one the most complex tasks mankind has ever tried. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data onto the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  19. In-clustering effects in InAlN and InGaN revealed by high pressure studies

    Gorczyca, I.; Suski, T.; Kaminska, A.

    2010-01-01

    Electronic band structure calculations of InAlN and InGaN under pressure are presented for two different arrangements of the In atoms, uniform and clustered. The band gap pressure coefficients exhibit strong bowing, and the effect is especially large when indium atoms are clustered. The theoretical results are compared with the results of photoluminescence measurements performed at high hydrostatic pressures on InAlN and InGaN quasi-bulk epilayers. We discuss the modification of the uppermost valence band due to formation of In clusters which, together with the related lattice relaxations, may...

  20. A high performance scientific cloud computing environment for materials simulations

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  1. Micromagnetics on high-performance workstation and mobile computational platforms

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing unit, Nvidia desktop graphics processing units, and Nvidia Jetson TK1 Platform. FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  2. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  3. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes - neural connectivity maps of the brain - using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems - reads to parallel disk arrays and writes to solid-state storage - to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
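
    The key scaling idea above is partitioning a spatial index across cluster nodes. One common way to build such an index is a Morton (Z-order) key, sketched below; whether openconnecto.me uses exactly this curve is an assumption here, and the bit widths are illustrative:

        def morton_key(x, y, z, bits=21):
            # Interleave the bits of (x, y, z) into a single Z-order key, so that
            # voxels close in 3-d space tend to be close in key order.
            key = 0
            for b in range(bits):
                key |= ((x >> b) & 1) << (3 * b)
                key |= ((y >> b) & 1) << (3 * b + 1)
                key |= ((z >> b) & 1) << (3 * b + 2)
            return key

        def node_for_voxel(x, y, z, n_nodes, bits=21):
            # Partition the key space into contiguous ranges, one per cluster node,
            # so nearby voxels usually land on the same node.
            return morton_key(x, y, z, bits) * n_nodes >> (3 * bits)

        print(node_for_voxel(1000, 2000, 30, n_nodes=16))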

  4. Neo4j high performance

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  5. Dimensioning storage and computing clusters for efficient high throughput computing

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continues increasing. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  6. High-Intensity Femtosecond Laser Interaction with Rare Gas Clusters

    林亚风; 钟钦; 曾淳; 陈哲

    2001-01-01

    With a 45 fs multiterawatt 790 nm laser system and jets of argon and krypton atomic clusters, a study of the interaction of intense fs laser pulses with large-size rare gas clusters was conducted. A maximum laser intensity of about 7 × 10^16 W/cm^2 and clusters composed of thousands of atoms, as determined through Rayleigh scattering measurements, were involved in the experiments. On the one hand, the results indicate that the interaction is strongly cluster-size dependent: the larger the clusters, the stronger the interaction. On the other hand, a saturation followed by a drop of the energy of ions ejected from the interaction will occur when the laser intensity exceeds a definite value for clusters of a certain size.

  7. Effect of pretreatment temperature on catalytic performance of the catalysts derived from cobalt carbonyl cluster in Fischer-Tropsch Synthesis

    Byambasuren O

    2017-02-01

    Full Text Available Monometallic cobalt-based catalysts were prepared by pretreating catalysts derived from the carbonyl cluster precursor (CO6Co2CC(COOH)2) supported on γ-Al2O3 with hydrogen at 180, 220, and 260°C, respectively. The effect of pretreatment temperature on the structural evolution of the cluster precursors and on the catalytic performance in Fischer-Tropsch (F-T) synthesis was investigated. The catalyst pretreated at 220°C, with a unique phase structure, exhibited the best catalytic activity and selectivity among the three pretreated catalysts. Moreover, the catalysts exhibited high dispersion due to the formation of hydrogen bonds between the cluster precursor and the γ-Al2O3 support.

  8. Graphic-Card Cluster for Astrophysics (GraCCA) -- Performance Tests

    Schive, Hsi-Yu; Chien, Chia-Hung; Wong, Shing-Kwong; Tsai, Yu-Chih; Chiueh, Tzihong

    2007-01-01

    In this paper, we describe the architecture and performance of the GraCCA system, a Graphic-Card Cluster for Astrophysics simulations. It consists of 16 nodes, with each node equipped with 2 modern graphic cards, the NVIDIA GeForce 8800 GTX. This computing cluster provides a theoretical performance of 16.2 TFLOPS. To demonstrate its performance in astrophysics computation, we have implemented a parallel direct N-body simulation program with shared time-step algorithm in this system. Our syste...

  9. A Facile Synthesis of Graphene-WO3 Nanowire Clusters with High Photocatalytic Activity for O2 Evolution

    M.-J. Zhou

    2014-01-01

    Full Text Available In the present work, graphene-WO3 nanowire clusters were synthesized via a facile hydrothermal method. The obtained graphene-WO3 nanowire clusters were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FT-IR), Raman spectroscopy, and ultraviolet-visible diffuse reflectance spectroscopy (DRS) techniques. The photocatalytic oxygen (O2) evolution properties of the as-synthesized samples were investigated by measuring the amount of evolved O2 from water splitting. The graphene-WO3 nanowire clusters exhibited enhanced performance compared to pure WO3 nanowire clusters for O2 evolution. The amount of evolved O2 from water splitting after 8 h for the graphene-WO3 nanowire clusters is ca. 0.345 mmol/L, which is more than 1.9 times that of the pure WO3 nanowire clusters (ca. 0.175 mmol/L). The high photocatalytic activity of the graphene-WO3 nanowire clusters was attributed to a high charge transfer rate in the presence of graphene.

  10. High performance MEAs. Final report

    NONE

    2012-07-15

    The aim of the present project is, through modeling, material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. This project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials to be used in PEMFC as well as the computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve fundamental understanding of the multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence will also result in a reduced catalyst loading to achieve the same performance. The consortium has obtained significant research results and progress for new catalyst materials and substrates with promising enhanced performance and fabrication of the materials using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal and has been demonstrated for LT-PEM, DMFC and HT-PEM applications. The new novel approach and progress of the modelling activities have been extremely satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  11. A critical cluster analysis of 44 indicators of author-level performance

    Wildgaard, Lorna Elizabeth

    2015-01-01

    ... Publication and citation data for 741 researchers across Astronomy, Environmental Science, Philosophy and Public Health were collected in Web of Science (WoS). Forty-four indicators of individual performance were computed using the data. A two-step cluster analysis using IBM SPSS version 22 was performed...

  12. Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment.

    Aline Guttmann

    Full Text Available In cluster detection of disease, the use of local cluster detection tests (CDTs) is common practice. These methods aim both at locating likely clusters and testing for their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDTs' behaviour and help between-studies comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy but only for one detected cluster. In a simulation study, performance is measured for many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDTs' performance for both usual power and location accuracy. We evidence the properties of these two indicators and the superiority of the cumulated TC to assess performance. We tested these indicators to conduct a systematic spatial assessment displayed through performance maps.

  13. Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment

    Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih

    2015-01-01

    In cluster detection of disease, the use of local cluster detection tests (CDTs) is common practice. These methods aim both at locating likely clusters and testing for their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDTs' behaviour and help between-studies comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy but only for one detected cluster. In a simulation study, performance is measured for many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDTs' performance for both usual power and location accuracy. We evidence the properties of these two indicators and the superiority of the cumulated TC to assess performance. We tested these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
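
    Treating the detected and true clusters as sets of spatial units, the TC is the familiar Jaccard-style overlap. A short sketch of it and of an averaged-TC summary over simulated datasets (the paper's exact cumulated-TC definition is not reproduced here, and the data are illustrative):

        def tanimoto(detected, true):
            # TC = |A ∩ B| / |A ∪ B| over sets of spatial units: 1 means the
            # detected cluster matches the true cluster exactly, 0 means no overlap.
            detected, true = set(detected), set(true)
            if not detected and not true:
                return 1.0
            return len(detected & true) / len(detected | true)

        def averaged_tc(runs, true):
            # Mean TC across simulated datasets; runs that detect nothing
            # contribute a TC of 0, so spatial accuracy and power are combined.
            return sum(tanimoto(r, true) for r in runs) / len(runs)

        true_cluster = {3, 4, 7, 8}
        simulated_detections = [{3, 4, 7}, {3, 4, 7, 8, 9}, set()]
        print(averaged_tc(simulated_detections, true_cluster))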

  14. High Performance Proactive Digital Forensics

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. Data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  15. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
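
    For readers who want to try the approach, here is a hedged sketch of modified Poisson regression for clustered binary data using the generalized estimating equations the abstract mentions: a log-link Poisson family with an exchangeable working correlation and robust (sandwich) standard errors. The simulated data and this statsmodels formulation are illustrative, not the authors' own code.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_clusters, m = 50, 10
      df = pd.DataFrame({
          "cluster": np.repeat(np.arange(n_clusters), m),
          "treated": rng.integers(0, 2, n_clusters * m),
      })
      # Binary outcome with a true relative risk of about 1.5 for treatment.
      p = 0.2 * np.where(df["treated"] == 1, 1.5, 1.0)
      df["y"] = rng.binomial(1, p)

      model = sm.GEE.from_formula(
          "y ~ treated", groups="cluster", data=df,
          family=sm.families.Poisson(),             # log link: coefficients are log relative risks
          cov_struct=sm.cov_struct.Exchangeable(),  # accounts for within-cluster correlation
      )
      res = model.fit()  # GEE reports robust (sandwich) standard errors by default
      print(np.exp(res.params["treated"]))  # estimated relative risk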

  16. Unsupervised Performance Evaluation Strategy for Bridge Superstructure Based on Fuzzy Clustering and Field Data

    Yubo Jiao

    2013-01-01

    Full Text Available Performance evaluation of a bridge is critical for determining the optimal maintenance strategy. An unsupervised bridge superstructure state assessment method is proposed in this paper based on fuzzy clustering and field-measured bridge data. Firstly, the evaluation index system of the bridge is constructed. Secondly, a certain number of bridge health monitoring data points are selected as clustering samples to obtain the fuzzy similarity matrix and the fuzzy equivalent matrix. Finally, different thresholds are selected to form dynamic clustering maps and determine the best classification based on statistical analysis. The clustering result is regarded as a sample base, and the bridge state can be evaluated by calculating the fuzzy nearness between unknown bridge state data and the sample base. Nanping Bridge in Jilin Province is selected as the engineering project to verify the effectiveness of the proposed method.
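
    The clustering pipeline described above can be sketched compactly: build a fuzzy similarity matrix, obtain the fuzzy equivalent matrix as its transitive closure by repeated max-min self-composition, then cut it at a threshold. The similarity function and threshold below are placeholder assumptions, not the paper's calibrated choices.

      import numpy as np

      def maxmin(R):
          """Max-min self-composition: (R o R)[i, j] = max_k min(R[i, k], R[k, j])."""
          return np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)

      def fuzzy_equivalent(R):
          """Iterate R -> R o R until stable: the fuzzy equivalent (transitive) matrix."""
          while True:
              R2 = maxmin(R)
              if np.allclose(R2, R):
                  return R2
              R = R2

      def cut(R_eq, lam):
          """Clusters = connected components of the lambda-cut of the equivalent matrix."""
          n = R_eq.shape[0]
          labels = -np.ones(n, dtype=int)
          current = 0
          for i in range(n):
              if labels[i] < 0:
                  stack = [i]
                  while stack:
                      j = stack.pop()
                      if labels[j] < 0:
                          labels[j] = current
                          stack.extend(np.where(R_eq[j] >= lam)[0])
                  current += 1
          return labels

      # Toy monitoring samples (rows) with min-max normalised indices (columns).
      X = np.array([[0.9, 0.8], [0.85, 0.82], [0.2, 0.1], [0.25, 0.15]])
      D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
      R = 1.0 - D / D.max()  # placeholder fuzzy similarity matrix
      print(cut(fuzzy_equivalent(R), lam=0.8))  # e.g. [0 0 1 1]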

  17. A critical cluster analysis of 44 indicators of author-level performance

    Wildgaard, Lorna Elizabeth

    2016-01-01

    This paper explores a 7-stage cluster methodology as a process to identify appropriate indicators for evaluation of individual researchers at a disciplinary and seniority level. Publication and citation data for 741 researchers from 4 disciplines was collected in Web of Science. Forty-four indicators of individual researcher performance were computed using the data. The clustering solution was supported by continued reference to the researcher’s curriculum vitae, an effect analysis and a risk analysis. Disciplinary appropriate indicators were identified and used to divide the researchers... ...of statistics in research evaluation. The strength of the 7-stage cluster methodology is that it makes clear that in the evaluation of individual researchers, statistics cannot stand alone. The methodology is reliant on contextual information to verify the bibliometric values and cluster solution...

  18. A new physical performance classification system for elite handball players: cluster analysis

    Chirosa, Ignacio J.; Robinson, Joseph E.; van der Tillaar, Roland; Chirosa, Luis J.; Martín, Isidoro Martínez

    2016-01-01

    Abstract The aim of the present study was to identify different cluster groups of handball players according to their physical performance level assessed in a series of physical assessments, which could then be used to design a training program based on individual strengths and weaknesses, and to determine which of these variables best identified elite performance in a group of under-19 [U19] national level handball players. Players of the U19 National Handball team (n=16) performed a set of tests to determine: 10 m (ST10) and 20 m (ST20) sprint time, ball release velocity (BRv), countermovement jump (CMJ) height and squat jump (SJ) height. All players also performed an incremental-load bench press test to determine the 1 repetition maximum (1RMest), the load corresponding to maximum mean power (LoadMP), the mean propulsive phase power at LoadMP (PMPPMP) and the peak power at LoadMP (PPEAKMP). Cluster analyses of the test results generated four groupings of players. The variables best able to discriminate physical performance were BRv, ST20, 1RMest, PPEAKMP and PMPPMP. These variables could help coaches identify talent or monitor the physical performance of athletes in their team. Each cluster of players has a particular weakness related to physical performance and therefore, the cluster results can be applied to a specific training programme based on individual needs. PMID:28149376

  19. A new physical performance classification system for elite handball players: cluster analysis

    Bautista Iker J.

    2016-06-01

    Full Text Available The aim of the present study was to identify different cluster groups of handball players according to their physical performance level assessed in a series of physical assessments, which could then be used to design a training program based on individual strengths and weaknesses, and to determine which of these variables best identified elite performance in a group of under-19 [U19] national level handball players. Players of the U19 National Handball team (n=16) performed a set of tests to determine: 10 m (ST10) and 20 m (ST20) sprint time, ball release velocity (BRv), countermovement jump (CMJ) height and squat jump (SJ) height. All players also performed an incremental-load bench press test to determine the 1 repetition maximum (1RMest), the load corresponding to maximum mean power (LoadMP), the mean propulsive phase power at LoadMP (PMPPMP) and the peak power at LoadMP (PPEAKMP). Cluster analyses of the test results generated four groupings of players. The variables best able to discriminate physical performance were BRv, ST20, 1RMest, PPEAKMP and PMPPMP. These variables could help coaches identify talent or monitor the physical performance of athletes in their team. Each cluster of players has a particular weakness related to physical performance and therefore, the cluster results can be applied to a specific training programme based on individual needs.

  20. Clustering Dycom

    Minku, Leandro L.

    2017-10-06

    Background: Software Effort Estimation (SEE) can be formulated as an online learning problem, where new projects are completed over time and may become available for training. In this scenario, a Cross-Company (CC) SEE approach called Dycom can drastically reduce the number of Within-Company (WC) projects needed for training, saving the high cost of collecting such training projects. However, Dycom relies on splitting CC projects into different subsets in order to create its CC models. Such splitting can have a significant impact on Dycom's predictive performance. Aims: This paper investigates whether clustering methods can be used to help find good CC splits for Dycom. Method: Dycom is extended to use clustering methods for creating the CC subsets. Three different clustering methods are investigated, namely Hierarchical Clustering, K-Means, and Expectation-Maximisation. Clustering Dycom is compared against the original Dycom with CC subsets of different sizes, based on four SEE databases. A baseline WC model is also included in the analysis. Results: Clustering Dycom with K-Means can potentially help to split the CC projects, managing to achieve similar or better predictive performance than Dycom. However, K-Means still requires the number of CC subsets to be pre-defined, and a poor choice can negatively affect predictive performance. EM enables Dycom to automatically set the number of CC subsets while still maintaining or improving predictive performance with respect to the baseline WC model. Clustering Dycom with Hierarchical Clustering did not offer significant advantage in terms of predictive performance. Conclusion: Clustering methods can be an effective way to automatically generate Dycom's CC subsets.

  1. Performance enhancement of a web-based picture archiving and communication system using commercial off-the-shelf server clusters.

    Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.

  2. Performance Enhancement of a Web-Based Picture Archiving and Communication System Using Commercial Off-the-Shelf Server Clusters

    Yan-Lin Liu

    2014-01-01

    Full Text Available The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital’s operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.

  3. Electron scattering in large water clusters from photoelectron imaging with high harmonic radiation.

    Gartmann, Thomas E; Hartweg, Sebastian; Ban, Loren; Chasovskikh, Egor; Yoder, Bruce L; Signorell, Ruth

    2018-06-06

    Low-energy electron scattering in water clusters (H2O)n with average cluster sizes of n < 700 is investigated by angle-resolved photoelectron spectroscopy using high harmonic radiation at photon energies of 14.0, 20.3, and 26.5 eV for ionization from the three outermost valence orbitals. The measurements probe the evolution of the photoelectron anisotropy parameter β as a function of cluster size. A remarkably steep decrease of β with increasing cluster size is observed, which for the largest clusters reaches liquid bulk values. Detailed electron scattering calculations reveal that neither gas nor condensed phase scattering can explain the cluster data. Qualitative agreement between experiment and simulations is obtained with scattering calculations that treat cluster scattering as an intermediate case between gas and condensed phase scattering.

  4. Spectral energy distributions for galaxies in high-redshift clusters

    Ellis, R.S.; Couch, W.J.; MacLaren, Iain

    1985-01-01

    The distant cluster 0016+16 (z=0.54) has been imaged through six intermediate-bandwidth filters ranging in wavelength from 418 to 862 nm, maintaining a photometric precision of 10 per cent to a limiting magnitude of F=22. It is found that the field-subtracted colour distributions are not compatible with a single uniformly red population of early-type members at z=0.54. A significant intermediate colour component identified with a spectroscopic object at z=0.30 is also present, thus reducing the possibility that the z=0.54 cluster exhibits an excess of blue galaxies. It is demonstrated how the six-colour data can be used to individually classify the galaxies by type and approximate redshift so that it is possible to identify which objects are members of the z=0.54 cluster. (author)

  5. High performance light water reactor

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners, together with the University of Tokyo, is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements: - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project. - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design', developed by the University of Tokyo, was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo. A benchmark problem, based on the 'reference design', was defined for neutronics calculations and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition help to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. Preliminary selection was made for the HPLWR scale

  6. Performance Analysis of a Cluster-Based MAC Protocol for Wireless Ad Hoc Networks

    Jesús Alonso-Zárate

    2010-01-01

    Full Text Available An analytical model to evaluate the non-saturated performance of the Distributed Queuing Medium Access Control Protocol for Ad Hoc Networks (DQMAN) in single-hop networks is presented in this paper. DQMAN is comprised of a spontaneous, temporary, and dynamic clustering mechanism integrated with a near-optimum distributed queuing Medium Access Control (MAC) protocol. Clustering is executed in a distributed manner using a mechanism inspired by the Distributed Coordination Function (DCF) of the IEEE 802.11. Once a station seizes the channel, it becomes the temporary clusterhead of a spontaneous cluster and it coordinates the peer-to-peer communications between the clustermembers. Within each cluster, a near-optimum distributed queuing MAC protocol is executed. The theoretical performance analysis of DQMAN in single-hop networks under non-saturation conditions is presented in this paper. The approach integrates the analysis of the clustering mechanism into the MAC layer model. To the knowledge of the authors, this approach is novel in the literature. In addition, the performance of an ad hoc network using DQMAN is compared to that obtained when using the DCF of the IEEE 802.11 as a benchmark reference.

  7. Clustering for high-dimension, low-sample size data using distance vectors

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...
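
    A hedged sketch of the central idea as stated in the abstract: use each object's vector of distances to all objects as its feature representation, then cluster those vectors. Running k-means on the rows of the distance matrix is an illustrative choice, not necessarily the authors' exact procedure.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import pairwise_distances

      rng = np.random.default_rng(1)
      # HDLSS toy data: 40 samples in 2000 dimensions, two shifted populations.
      X = rng.normal(size=(40, 2000))
      X[20:] += 0.3  # tiny per-coordinate shift, large in aggregate distance

      D = pairwise_distances(X)  # row i is object i's distance vector
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(D)
      print(labels)  # the two populations separate via their distance vectors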

  8. Observations of High Dispersion Clusters of Galaxies: Constraints on Cold Dark Matter

    Oegerle, William R.; Hill, John M.; Fitchett, Michael J.

    1995-07-01

    We have studied the dynamics of several Abell clusters of galaxies, which were previously reported to have large velocity dispersions, and hence very large masses. In particular, we have investigated the assertion of Frenk et al. (1990) that clusters with intrinsic velocity dispersions ≳ 1200 km s^-1 are extremely rare in the universe, and that large observed dispersions are due to projection effects. We report redshifts for 303 galaxies in the fields of A1775, A2029, A2142, and A2319, obtained with the Nessie multifiber spectrograph at the Mayall 4 m telescope. A1775 appears to be two poor, interacting clusters, separated in velocity space by ~3075 km s^-1 (in the cluster rest frame). A2029 has a velocity dispersion of 1436 km s^-1, based on 85 cluster member redshifts. There is evidence that a group or poor cluster of galaxies of slightly different redshift is projected onto (or is merging with) the core of A2029. However, the combined kinematic and x-ray data for A2029 argue for an intrinsically large dispersion for this cluster. Based on redshifts for 103 members of A2142, we find a dispersion of 1280 km s^-1, and evidence for subclustering. With 130 redshifts in the A2319 field, we have isolated a subcluster ~10' NW of the cD galaxy. After its removal, A2319 has a velocity dispersion of 1324 km s^-1. The data obtained here have been combined with recent optical and X-ray data for other supposedly high-mass clusters to study the cluster velocity dispersion distribution in a sample of Abell clusters. We find that clusters with true velocity dispersions ≳ 1200 km s^-1 are not extremely rare, but account for ~5% of all Abell clusters with R >= 0. If these clusters are in virial equilibrium, then our results are inconsistent with a high-bias (b ≳ 22), high-density CDM model.

  9. High Performance Circularly Polarized Microstrip Antenna

    Bondyopadhyay, Probir K. (Inventor)

    1997-01-01

    A microstrip antenna for radiating circularly polarized electromagnetic waves comprising a cluster array of at least four microstrip radiator elements, each of which is provided with dual orthogonal coplanar feeds in phase quadrature relation achieved by connection to an asymmetric T-junction power divider impedance notched at resonance. The dual fed circularly polarized reference element is positioned with its axis at a 45 deg angle with respect to the unit cell axis. The other three dual fed elements in the unit cell are positioned and fed with a coplanar feed structure with sequential rotation and phasing to enhance the axial ratio and impedance matching performance over a wide bandwidth. The centers of the radiator elements are disposed at the corners of a square with each side of a length d in the range of 0.7 to 0.9 times the free space wavelength of the antenna radiation and the radiator elements reside in a square unit cell area of sides equal to 2d and thereby permit the array to be used as a phased array antenna for electronic scanning and is realizable in a high temperature superconducting thin film material for high efficiency.

  10. Development of high performance cladding

    Kiuchi, Kiyoshi

    2003-01-01

    The development of a superior next-generation light water reactor is requested from general viewpoints, such as improvement of safety and economics, reduction of radioactive waste and effective utilization of plutonium, by 2030, when conventional reactor plants will need to be renovated. Improvement of stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, development of manufacturing technology for the reduced moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and research on water-materials interaction in the supercritical pressure-water cooled reactor are carried out at the Japan Atomic Energy Research Institute. A stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR). The austenitic stainless steel is superior in its anti-irradiation properties, corrosion resistance and mechanical strength. A hard neutron energy spectrum, above 0.1 MeV, occurs in the core of the reduced moderation light water reactor, as in the liquid metal fast breeder reactor (LMFBR). High performance cladding for the RMWR fuel elements is likewise required to provide anti-irradiation properties, corrosion resistance and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are carried out to study stress corrosion cracking (SCC). Irradiation tests in an LMFBR are intended to obtain irradiation damage data for the cladding materials. (M. Suetake)

  11. Scalability of DL_POLY on High Performance Computing Platform

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  12. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  13. A General Purpose High Performance Linux Installation Infrastructure

    Wachsmann, Alf

    2002-01-01

    With Linux clusters growing in both number and size, the question arises of how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus, it is not designed for cluster installations in particular but is, nevertheless, highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to deliver IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256 node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation

  14. Performance Evaluation of a Cluster-Based Service Discovery Protocol for Heterogeneous Wireless Sensor Networks

    Marin Perianu, Raluca; Scholten, Johan; Havinga, Paul J.M.; Hartel, Pieter H.

    2006-01-01

    Abstract—This paper evaluates the performance in terms of resource consumption of a service discovery protocol proposed for heterogeneous Wireless Sensor Networks (WSNs). The protocol is based on a clustering structure, which facilitates the construction of a distributed directory. Nodes with higher

  15. The cluster environments of powerful, high-redshift radio galaxies

    Yates, M.G.

    1989-01-01

    We present deep imaging of a sample of 25 powerful radio galaxies at redshifts z > 0.15, and measure the spatial cross-correlation amplitude (B_gr) about each source, a measure of the richness of environment. The powerful radio galaxies in this sample at z>0.3 occupy environments nearly as rich on average as Abell class 0 clusters of galaxies, about three times richer than the environments of the z<0.3 radio galaxies. This trend in cluster environment is consistent with that seen in radio-loud quasars over the same redshift range. Our previous work on the 3CR sample suggested that the fundamental parameter which correlates with the richness of environment might be the radio luminosity of the galaxy, rather than its redshift. Our direct imaging confirms that the most powerful radio galaxies do inhabit rich environments. (author)

  16. Methodology for Clustering High-Resolution Spatiotemporal Solar Resource Data

    Getman, Dan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-09-01

    In this report, we introduce a methodology to achieve multiple levels of spatial resolution reduction of solar resource data, with minimal impact on data variability, for use in energy systems modeling. The selection of an appropriate clustering algorithm, parameter selection including cluster size, methods of temporal data segmentation, and methods of cluster evaluation are explored in the context of a repeatable process. In describing this process, we illustrate the steps in creating a reduced resolution, but still viable, dataset to support energy systems modeling, e.g. capacity expansion or production cost modeling. This process is demonstrated through the use of a solar resource dataset; however, the methods are applicable to other resource data represented through spatiotemporal grids, including wind data. In addition to energy modeling, the techniques demonstrated in this paper can be used in a novel top-down approach to assess renewable resources within many other contexts that leverage variability in resource data but require reduction in spatial resolution to accommodate modeling or computing constraints.
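
    In spirit, the resolution reduction can be prototyped in a few lines: treat each grid cell's resource time series as a feature vector, cluster the cells, and keep one representative profile per cluster. The synthetic profiles, cluster count, and the choice of k-means below are assumptions for illustration; the report explores several algorithms, parameters, and evaluation methods.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(7)
      n_cells, n_hours = 500, 24 * 7  # a week of hourly data per grid cell
      base = np.clip(np.sin(np.linspace(0, 14 * np.pi, n_hours)), 0, None)
      profiles = (base * rng.uniform(0.5, 1.0, (n_cells, 1))
                  + rng.normal(0, 0.05, (n_cells, n_hours)))

      k = 20  # reduced number of "resource regions" (assumed)
      km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)
      reduced = km.cluster_centers_  # k representative profiles replace 500 cells
      # Within-cluster error indicates how much variability the reduction loses.
      print(reduced.shape, km.inertia_ / n_cells)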

  17. Ionization of water clusters by fast Highly Charged Ions: Stability, fragmentation, energetics and charge mobility

    Legendre, S; Maisonny, R; Capron, M; Bernigaud, V; Cassimi, A; Gervais, B; Grandin, J-P; Huber, B A; Manil, B; Rousseau, P; Tarisien, M; Adoui, L; Lopez-Tarifa, P; AlcamI, M; MartIn, F; Politis, M-F; Penhoat, M A Herve du; Vuilleumier, R; Gaigeot, M-P; Tavernelli, I

    2009-01-01

    We study dissociative ionization of water clusters by impact of fast Ni ions. Cold Target Recoil Ion Momentum Spectroscopy (COLTRIMS) is used to obtain information about the stability, energetics and charge mobility of the ionized clusters. An unusual stability of the (H2O)4H+ ion is observed, which could be the signature of the so-called 'Eigen' structure in gas phase water clusters. High charge mobility, responsible for the formation of protonated water clusters that dominate the mass spectrum, is evidenced. These results are supported by CPMD and TDDFT simulations, which also reveal the mechanisms of such mobility.

  18. MixSim : An R Package for Simulating Data to Study Performance of Clustering Algorithms

    Volodymyr Melnykov

    2012-11-01

    Full Text Available The R package MixSim is a new tool that allows simulating mixtures of Gaussian distributions with different levels of overlap between mixture components. Pairwise overlap, defined as the sum of two misclassification probabilities, measures the degree of interaction between components and can be readily employed to control the clustering complexity of datasets simulated from mixtures. These datasets can then be used for systematic performance investigation of clustering and finite mixture modeling algorithms. Other capabilities of MixSim include computing the exact overlap for Gaussian mixtures, simulating Gaussian and non-Gaussian data, simulating outliers and noise variables, calculating various measures of agreement between two partitionings, and constructing parallel distribution plots for the graphical display of finite mixture models. All features of the package are illustrated in great detail. The utility of the package is highlighted through a small comparison study of several popular clustering algorithms.
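
    The overlap definition above is easy to make concrete in one dimension: with equal mixing proportions and equal variances, the decision boundary between two Gaussian components is the midpoint of their means, and the overlap is the sum of the two misclassification probabilities. The Python sketch below is a univariate illustration under those assumptions only; MixSim itself computes the exact overlap for general multivariate mixtures.

      from scipy.stats import norm

      def overlap(mu1, mu2, sigma=1.0):
          """Sum of misclassification probabilities for two equal-weight,
          equal-variance univariate Gaussian components (mu1 < mu2)."""
          mid = (mu1 + mu2) / 2.0  # decision boundary under these assumptions
          w12 = 1.0 - norm.cdf(mid, loc=mu1, scale=sigma)  # from 1, labelled 2
          w21 = norm.cdf(mid, loc=mu2, scale=sigma)        # from 2, labelled 1
          return w12 + w21

      print(overlap(0.0, 2.0))  # closer means => larger overlap => harder clustering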

  19. Performance study of a cluster calculation; parallelization and application under geant4

    Trabelsi, Abir

    2007-01-01

    This work constitutes a final-year engineering project in computer science, carried out at the National Center for Nuclear Sciences and Technology. The project consists in studying the performance of a set of machines in order to determine the best architecture for assembling them into a cluster, as well as in parallelism and the parallel implementation of GEANT4 as a simulation tool. The realisation of this project consists of: 1) programming in C++ and executing the two benchmarks PMV and PMM on each station; 2) interpreting these results in order to identify the best architecture for the cluster; 3) parallelizing the two benchmarks with TOP-C; 4) executing the two TOP-C versions on the cluster; 5) generalizing these results; 6) parallelizing and executing the parallel version of GEANT4. (Author). 14 refs

  20. Effects of cluster vs. traditional plyometric training sets on maximal-intensity exercise performance.

    Asadi, Abbas; Ramírez-Campillo, Rodrigo

    2016-01-01

    The aim of this study was to compare the effects of 6-week cluster versus traditional plyometric training sets on jumping ability, sprint and agility performance. Thirteen college students were assigned to a cluster sets group (N=6) or traditional sets group (N=7). Both training groups completed the same training program. The traditional group completed five sets of 20 repetitions with 2 min of rest between sets each session, while the cluster group completed five sets of 20 [2×10] repetitions with 30/90-s rest each session. Subjects were evaluated for countermovement jump (CMJ), standing long jump (SLJ), t test, 20-m and 40-m sprint test performance before and after the intervention. Both groups had similar improvements (P...); traditional sets methods resulted in greater adaptations in sprint performance, while the cluster sets method resulted in greater jump and agility adaptations.

  1. Constraining omega from X-ray properties of clusters of galaxies at high redshifts

    Sadat, R.; Blanchard, A.; Oukbir, J.

    1997-01-01

    Properties of high redshift clusters are a fundamental source of information for cosmology. It has been shown by Oukbir and Blanchard (1997) that the combined knowledge of the redshift distribution of X-ray clusters of galaxies and the luminosity-temperature correlation, L_X-T_X, provides a pow...

  2. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    Simon Fong

    2014-01-01

    Full Text Available Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  3. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  4. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730
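
    The hybrid idea is straightforward to prototype: let a population-based global search choose the initial centroids, then let standard k-means iterations refine them. The random drift-toward-the-best rule below is a generic stand-in for the Bat, Cuckoo, Firefly and similar update rules, not a faithful implementation of any one of them.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      def sse(X, centroids):
          """Within-cluster sum of squared distances for a candidate centroid set."""
          d = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
          return (d.min(axis=1) ** 2).sum()

      X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
      rng = np.random.default_rng(0)
      k, agents, best, best_cost = 4, 20, None, np.inf
      for _ in range(50):  # iterations of the (toy) swarm search
          for _ in range(agents):
              cand = X[rng.choice(len(X), k, replace=False)]
              if best is not None:  # drift candidates toward the incumbent best
                  cand = 0.5 * cand + 0.5 * best + rng.normal(0, 0.1, cand.shape)
              cost = sse(X, cand)
              if cost < best_cost:
                  best, best_cost = cand.copy(), cost

      # Refine the globally-found initialisation with ordinary k-means.
      labels = KMeans(n_clusters=k, init=best, n_init=1).fit_predict(X)
      print(round(best_cost, 1), np.bincount(labels))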

  5. High-accuracy coupled cluster calculations of atomic properties

    Borschevsky, A. [School of Chemistry, Tel Aviv University, 69978 Tel Aviv, Israel and Centre for Theoretical Chemistry and Physics, The New Zealand Institute for Advanced Study, Massey University Auckland, Private Bag 102904, 0745 Auckland (New Zealand); Yakobi, H.; Eliav, E.; Kaldor, U. [School of Chemistry, Tel Aviv University, 69978 Tel Aviv (Israel)

    2015-01-22

    The four-component Fock-space coupled cluster and intermediate Hamiltonian methods are implemented to evaluate atomic properties. The latter include the spectra of nobelium and lawrencium (elements 102 and 103) in the range 20000-30000 cm^-1, the polarizabilities of elements 112-114 and 118, required for estimating their adsorption enthalpies on surfaces used to separate them in accelerators, and the nuclear quadrupole moments of some heavy atoms. The calculations on superheavy elements are supported by the very good agreement with experiment obtained for the lighter homologues.

  6. High-accuracy coupled cluster calculations of atomic properties

    Borschevsky, A.; Yakobi, H.; Eliav, E.; Kaldor, U.

    2015-01-01

    The four-component Fock-space coupled cluster and intermediate Hamiltonian methods are implemented to evaluate atomic properties. The latter include the spectra of nobelium and lawrencium (elements 102 and 103) in the range 20000-30000 cm^-1, the polarizabilities of elements 112-114 and 118, required for estimating their adsorption enthalpies on surfaces used to separate them in accelerators, and the nuclear quadrupole moments of some heavy atoms. The calculations on superheavy elements are supported by the very good agreement with experiment obtained for the lighter homologues.

  7. Performance Evaluation of Hadoop-based Large-scale Network Traffic Analysis Cluster

    Tao Ran

    2016-01-01

    Full Text Available As Hadoop has gained popularity in the big data era, it is widely used in various fields. The self-designed and self-developed large-scale network traffic analysis cluster works well on Hadoop, with off-line applications running on it to analyze massive network traffic data. In order to evaluate the performance of the analysis cluster scientifically and reasonably, we propose a performance evaluation system. Firstly, we set the execution times of three benchmark applications as the benchmark of performance, and pick 40 metrics of customized statistical resource data. Then we identify the relationship between the resource data and the execution times by a statistical modeling analysis approach composed of principal component analysis and multiple linear regression. After training models on historical data, we can predict the execution times from current resource data. Finally, we evaluate the performance of the analysis cluster by the validated prediction of execution times. Experimental results show that the execution times predicted by the trained models are within an acceptable error range, and the evaluation results of performance are accurate and reliable.
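
    The modeling chain described above (principal component analysis feeding a multiple linear regression that predicts execution times) can be sketched directly with scikit-learn. The synthetic "resource metrics" below stand in for the cluster's 40 monitored quantities; the component count and data shapes are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(3)
      latent = rng.normal(size=(200, 5))  # hidden load factors driving the metrics
      X = latent @ rng.normal(size=(5, 40)) + rng.normal(0, 0.1, (200, 40))
      t = 30 + latent @ rng.uniform(1, 3, 5) + rng.normal(0, 0.5, 200)  # exec times

      model = make_pipeline(PCA(n_components=10), LinearRegression())
      model.fit(X[:150], t[:150])           # train on historical runs
      print(model.score(X[150:], t[150:]))  # R^2 of predicted execution times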

  8. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.

  9. Energy Efficient Clustering Protocol to Enhance Performance of Heterogeneous Wireless Sensor Network: EECPEP-HWSN

    Santosh V. Purkar

    2018-01-01

    Full Text Available A heterogeneous wireless sensor network (HWSN) fulfills the requirements of researchers in the design of real-life applications that operate unattended. However, the main constraint faced by researchers is the limited energy source available to sensor nodes. To prolong the life of sensor nodes, and thus of the HWSN, it is necessary to design energy efficient operational schemes. One of the most suitable approaches to enhance energy efficiency is clustering, which enhances the performance parameters of a WSN. The solution proposed in this article is an energy efficient clustering protocol for HWSN, EECPEP-HWSN, designed to enhance performance parameters. The proposed protocol is designed with three node levels, namely normal, advanced, and super. In the clustering process, for the selection of the cluster head we consider different parameters available to sensor nodes at run time, that is, initial energy, hop count, and residual energy. This protocol enhances the energy efficiency of the HWSN and hence improves the energy remaining in the network, stability, lifetime, and throughput. It has been found that the proposed protocol outperforms the existing well-known LEACH, DEEC, and SEP protocols by about 188, 150, and 141 percent, respectively.
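
    A hypothetical sketch of the cluster-head selection step, scoring each node from the three run-time parameters named above (initial energy, residual energy, hop count). The weights and normalisation are invented for illustration; the paper's actual scoring rule is not reproduced here.

      def ch_score(initial_energy, residual_energy, hop_count,
                   w_res=0.5, w_init=0.3, w_hop=0.2, e_max=2.0, max_hops=10):
          """Higher score = better cluster-head candidate (illustrative weights)."""
          return (w_res * residual_energy / initial_energy   # remaining energy fraction
                  + w_init * initial_energy / e_max          # favours advanced/super nodes
                  + w_hop * (1.0 - hop_count / max_hops))    # fewer hops to sink is better

      # (initial energy, residual energy, hops) for one node of each level.
      nodes = {"normal": (1.0, 0.6, 4), "advanced": (1.5, 1.1, 3), "super": (2.0, 1.7, 5)}
      best = max(nodes, key=lambda n: ch_score(*nodes[n]))
      print(best)  # node elected cluster head this round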

  10. Analysis and modeling of social influence in high performance computing workloads

    Zheng, Shuai; Shae, Zon Yin; Zhang, Xiangliang; Jamjoom, Hani T.; Fong, Liana

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies

  11. Studies on high performance Timeslice building on the CBM FLES

    Hartmann, Helvi [Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    In contrast to already existing high energy physics experiments, the Compressed Baryonic Matter (CBM) experiment collects all data untriggered. The First-level Event Selector (FLES), a high-performance computer cluster, processes the very high incoming data rate of 1 TByte/s and performs a full online event reconstruction. For this task it needs to access the raw detector data in time intervals referred to as Timeslices. In order to construct the Timeslices, the FLES Timeslice building has to combine data from all input links and distribute them via a high-performance network to the compute nodes. For fast data transfer, the Infiniband network has proven to be appropriate. One option to address the network is to use Infiniband (RDMA) Verbs directly, potentially making the best use of Infiniband. However, this is a very low-level implementation that relies on the hardware and neglects other possible network technologies in the future. Another approach is to apply a high-level API like MPI, which is independent of the underlying hardware and suitable for less error-prone software development. I present the given possibilities and show the results of benchmarks run on high-performance computing clusters. The solutions are evaluated with regard to Timeslice building in CBM.
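
    The communication pattern behind Timeslice building maps naturally onto an all-to-all exchange: every input node holds one microslice contribution per timeslice interval, and after the exchange each compute node owns the complete data of its timeslices. The mpi4py sketch below illustrates that pattern only; the names, payloads and one-rank-per-role layout are assumptions, not the FLES implementation.

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Microslice payloads produced on this node, one per timeslice owner.
      outgoing = [f"link{rank}-ts{dest}" for dest in range(size)]
      incoming = comm.alltoall(outgoing)  # exchange contributions between ranks

      # Rank r now holds every input link's contribution to timeslice r.
      print(rank, incoming)

    Run with, for example, mpirun -n 4 python timeslice_sketch.py (the file name is hypothetical).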

  12. Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters

    Younge, Andrew J.

    2016-01-01

    With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…

  13. High-dimensional neural network potentials for solvation: The case of protonated water clusters in helium

    Schran, Christoph; Uhl, Felix; Behler, Jörg; Marx, Dominik

    2018-03-01

    The design of accurate helium-solute interaction potentials for the simulation of chemically complex molecules solvated in superfluid helium has long been a cumbersome task due to the rather weak but strongly anisotropic nature of the interactions. We show that this challenge can be met by using a combination of an effective pair potential for the He-He interactions and a flexible high-dimensional neural network potential (NNP) for describing the complex interaction between helium and the solute in a pairwise additive manner. This approach yields an excellent agreement with a mean absolute deviation as small as 0.04 kJ mol^-1 for the interaction energy between helium and both hydronium and Zundel cations compared with coupled cluster reference calculations with an energetically converged basis set. The construction and improvement of the potential can be performed in a highly automated way, which opens the door for applications to a variety of reactive molecules to study the effect of solvation on the solute as well as the solute-induced structuring of the solvent. Furthermore, we show that this NNP approach yields very convincing agreement with the coupled cluster reference for properties like many-body spatial and radial distribution functions. This holds for the microsolvation of the protonated water monomer and dimer by a few helium atoms up to their solvation in bulk helium as obtained from path integral simulations at about 1 K.

  14. Wind Energy Development in India and a Methodology for Evaluating Performance of Wind Farm Clusters

    Sanjeev H. Kulkarni

    2016-01-01

    Full Text Available With the maturity of advanced technologies and the urgent requirement to maintain a healthy environment at a reasonable price, India is moving towards a trend of generating electricity from renewable resources. Wind energy production, with its relatively safe and positive environmental characteristics, has evolved from a marginal activity into a multibillion dollar industry today. Wind energy power plants, also known as wind farms, comprise multiple wind turbines. Though there are several wind-mill clusters producing energy in different geographical locations across the world, evaluating their performance is a complex task and is an important focus for stakeholders. In this work an attempt is made to estimate the performance of wind clusters employing a multicriteria approach. Multiple factors that affect wind farm operations are analyzed by taking expert opinions, and a performance ranking of the wind farms is generated. The weights of the selection criteria are determined by pairwise comparison matrices of the Analytic Hierarchy Process (AHP). The proposed methodology evaluates wind farm performance based on technical, economic, environmental, and sociological indicators. Both qualitative and quantitative parameters were considered. Empirical data were collected through questionnaires from the selected wind farms of Belagavi district in the Indian State of Karnataka. This proposed methodology is a useful tool for cluster analysis.
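
    The AHP step is the computational heart of the method: criteria weights are derived from a reciprocal pairwise comparison matrix, conventionally via its principal eigenvector. The 3x3 judgment matrix below (say, technical vs. economic vs. environmental criteria) is a made-up example, not the study's elicited data.

      import numpy as np

      A = np.array([[1.0, 3.0, 5.0],   # reciprocal pairwise judgments:
                    [1/3, 1.0, 2.0],   # A[i, j] = importance of i relative to j,
                    [1/5, 1/2, 1.0]])  # with A[j, i] = 1 / A[i, j]

      vals, vecs = np.linalg.eig(A)
      lead = np.argmax(np.real(vals))
      w = np.real(vecs[:, lead])
      w = w / w.sum()  # normalised criteria weights
      ci = (np.real(vals[lead]) - len(A)) / (len(A) - 1)  # consistency index
      print(w, ci)  # a small CI indicates the judgments are acceptably consistent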

  15. Vector dark energy and high-z massive clusters

    Carlesi, Edoardo; Knebe, Alexander; Yepes, Gustavo; Gottlöber, Stefan; Jiménez, Jose Beltrán.; Maroto, Antonio L.

    2011-12-01

    The detection of extremely massive clusters at z > 1 such as SPT-CL J0546-5345, SPT-CL J2106-5844 and XMMU J2235.3-2557 has been considered by some authors as a challenge to the standard Λ cold dark matter cosmology. In fact, assuming Gaussian initial conditions, the theoretical expectation of detecting such objects is as low as ≤1 per cent. In this paper we discuss the probability of the existence of such objects in the light of the vector dark energy paradigm, showing by means of a series of N-body simulations that chances of detection are substantially enhanced in this non-standard framework.

  16. Learning Apache Solr high performance

    Mohan, Surendra

    2014-01-01

    This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a specific and user-friendly flow, from search optimization using Solr to deployment of ZooKeeper applications. This book is ideal for Apache Solr developers who want to learn different techniques to optimize Solr performance with utmost efficiency, along with effectively troubleshooting the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.

  17. High-performance composite chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  18. Overlapping communities from dense disjoint and high total degree clusters

    Zhang, Hongli; Gao, Yang; Zhang, Yue

    2018-04-01

    Community plays an important role in the fields of sociology and biology, and especially in domains of computer science where systems are often represented as networks; community detection is of great importance in these domains. A community is a dense subgraph of the whole graph, with more links between its members than from its members to outside nodes, and nodes in the same community probably share common properties or play similar roles in the graph. Communities overlap when nodes in a graph belong to multiple communities. A vast variety of overlapping community detection methods have been proposed in the literature, and the local expansion method is one of the most successful techniques for dealing with large networks. This paper presents a density-based seeding method, in which dense disjoint local clusters are searched for and selected as seeds. The proposed method selects a seed by the total degree and density of local clusters, utilizing merely local structures of the network. Furthermore, this paper proposes a novel community refining phase via minimizing the conductance of each community, through which the quality of identified communities is largely improved in linear time. Experimental results on synthetic networks show that the proposed seeding method outperforms other seeding methods in the state of the art, and the proposed refining method largely enhances the quality of the identified communities. Experimental results on real graphs with ground-truth communities show that the proposed approach outperforms other state-of-the-art overlapping community detection algorithms; in particular, it is more than two orders of magnitude faster than the existing global algorithms with higher quality, and it obtains much more accurate community structure than the current local algorithms without any a priori information.
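
    The refining objective is concrete: the conductance of a community S is the number of boundary edges divided by the smaller of the volumes of S and its complement, and lower is better. The sketch below greedily toggles node membership while conductance improves, which conveys the spirit of the refining phase but is not the paper's linear-time procedure; the seed community is arbitrary.

      import networkx as nx

      def refine(G, S, passes=3):
          """Greedy conductance-lowering node moves on community S (illustrative)."""
          S = set(S)
          for _ in range(passes):
              for v in list(G.nodes):
                  trial = S ^ {v}  # toggle v's membership
                  if 0 < len(trial) < len(G) and \
                          nx.conductance(G, trial) < nx.conductance(G, S):
                      S = trial
          return S

      G = nx.karate_club_graph()
      seed = {0, 1, 2, 3, 7, 13}  # an assumed seed community
      print(sorted(refine(G, seed)))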

  19. Performance of amplify-and-forward multihop transmission over relay clusters with different routing strategies

    Yilmaz, Ferkan; Khan, Fahd Ahmed; Alouini, Mohamed-Slim

    2014-01-01

    We consider a multihop relay network in which two terminals are communicating with each other via a number of clusters of relays. The performance of such networks depends on the routing protocols employed. In this paper, we find expressions for the Average Symbol Error Probability (ASEP) performance of Amplify-and-Forward (AF) multihop transmission for the simplest routing protocol, in which the relay transmits using the channel having the best Signal to Noise Ratio (SNR). The ASEP performance of a better protocol proposed in Gui et al. (2009), known as the ad hoc protocol, is also analyzed. The derived expressions for the performance are a convenient tool to analyze the performance of AF multihop transmission over relay clusters. Monte-Carlo simulations verify the correctness of the proposed formulation and are in agreement with the analytical results. Furthermore, we propose new generalized protocols, termed the last-n-hop selection protocol, the dual path protocol, the forward-backward last-n-hop selection protocol and the forward-backward dual path protocol, to obtain improved ASEP performance. The ASEP performance of these proposed schemes is analysed by computer simulations. It is shown that close to optimal performance can be achieved by using the last-n-hop selection protocol and its forward-backward variant. The complexity of the protocols is also studied.

  20. High-Performance Composite Chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  1. Toward High-Performance Organizations.

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  2. Preparation of aligned W{sub 18}O{sub 49} nanowire clusters with high photocatalytic activity

    Zhang, Ning [State Key Laboratory of Inorganic Synthesis & Preparative Chemistry, Jilin University, Changchun 130012 (China); Zhao, Yafei, E-mail: zhaoyafei007@126.com [State Key Laboratory of Inorganic Synthesis & Preparative Chemistry, Jilin University, Changchun 130012 (China); School of Chemical Engineering and Energy, Zhengzhou University, Zhengzhou, Henan 450001 (China); Lu, Yanjie [School of Chemical Engineering and Energy, Zhengzhou University, Zhengzhou, Henan 450001 (China); Zhu, Guangshan, E-mail: zhugs@jlu.edu.cn [State Key Laboratory of Inorganic Synthesis & Preparative Chemistry, Jilin University, Changchun 130012 (China)

    2017-04-15

    Highlights: • Aligned W{sub 18}O{sub 49} nanowire clusters were prepared by a facile hydrothermal method. • W{sub 18}O{sub 49} has a unique structure, a high degree of crystallinity and a large surface area. • W{sub 18}O{sub 49} nanowire clusters exhibited high photocatalytic degradation activity. - Abstract: Aligned W{sub 18}O{sub 49} nanowire clusters were synthesized via a facile and economic ethanol-assisted hydrothermal method using peroxopolytungstic acid as precursor. Results show that the as-prepared W{sub 18}O{sub 49} is obtained in high yield with an ultrathin structure and a preferential growth direction along [0 1 0]. The amount of peroxopolytungstic acid and the reaction time play a significant role in determining the morphology of the W{sub 18}O{sub 49} nanowires. The nanowires have a unique structure, a high degree of crystallinity, a large specific surface area, and a large number of defects such as oxygen vacancies, which are responsible for their high photocatalytic performance in the degradation of methylene blue. The photocatalytic conversion of methylene blue can reach above 98%. W{sub 18}O{sub 49} also exhibits good photodegradation stability after five cycles of reuse. These results demonstrate that the as-prepared W{sub 18}O{sub 49} nanowire clusters are a promising material for environmental applications.

  3. Clustering performances in the NBA according to players' anthropometric attributes and playing experience.

    Zhang, Shaoliang; Lorenzo, Alberto; Gómez, Miguel-Angel; Mateus, Nuno; Gonçalves, Bruno; Sampaio, Jaime

    2018-04-20

    The aims of this study were: (i) to group basketball players into similar clusters based on a combination of anthropometric characteristics and playing experience; and (ii) to explore the distribution of players (including starters and non-starters) from different levels of teams within the obtained clusters. The game-related statistics from 699 balanced regular-season games were analyzed using a two-step cluster model and a discriminant analysis. The clustering process identified five different player profiles: top height and weight (HW) with low experience, TopHW-LowE; middle HW with middle experience, MiddleHW-MiddleE; middle HW with top experience, MiddleHW-TopE; low HW with low experience, LowHW-LowE; and low HW with middle experience, LowHW-MiddleE. Discriminant analysis showed that the TopHW-LowE group was distinguished by two-point field goals made and missed, offensive and defensive rebounds, blocks, and personal fouls, whereas the LowHW-LowE group made the fewest passes and touches. Players from weaker teams were mostly distributed in the LowHW-LowE group, whereas players from stronger teams were mainly grouped in the LowHW-MiddleE group, and players that participated in the finals were allocated to the MiddleHW-MiddleE group. These results provide alternative references for basketball staff in the process of evaluating performance.
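
    As a rough, hedged illustration of this type of analysis, the sketch below clusters synthetic players on standardized height, weight, and experience using k-means as a stand-in for the two-step cluster model; the data and the choice of five clusters mirror the study only loosely.

    ```python
    # k-means stand-in for the two-step cluster model, on synthetic
    # (height, weight, experience) data; all numbers are invented.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    players = np.column_stack([
        rng.normal(200, 8, 300),      # height, cm
        rng.normal(100, 10, 300),     # weight, kg
        rng.integers(0, 15, 300),     # experience, seasons
    ])
    X = StandardScaler().fit_transform(players)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    for k in range(5):
        print(k, players[labels == k].mean(axis=0).round(1))
    ```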

  4. Mass distribution and multiple fragmentation events in high energy cluster-cluster collisions: evidence for a predicted phase transition

    Farizon, B.; Farizon, M.; Gaillard, M.J.; Genre, R.; Louc, S.; Martin, J.; Senn, G.; Scheier, P.; Maerk, T.D.

    1996-09-01

    Fragment size distributions including multiple fragmentation events have been measured for high-energy H{sub 25}{sup +} cluster ions (60 keV/amu) colliding with a neutral C{sub 60} target. In contrast to earlier collision experiments with a helium target, the present studies do not show a U-shaped fragment mass distribution, but a single power-law falloff with increasing fragment mass. This behaviour is similar to what is known for the intermediate regime in nuclear collision physics and thus confirms a recently predicted scaling from nuclear to molecular collisions.

  5. Functional High Performance Financial IT

    Berthold, Jost; Filinski, Andrzej; Henglein, Fritz

    2011-01-01

    The world of finance faces the computational performance challenge of massively expanding data volumes, extreme response time requirements, and compute-intensive complex (risk) analyses. Simultaneously, new international regulatory rules require considerably more transparency and external auditability of financial institutions, including their software systems. To top it off, increased product variety and customisation necessitates shorter software development cycles and higher development productivity. In this paper, we report about HIPERFIT, a recently established strategic research center at the University of Copenhagen that attacks this triple challenge of increased performance, transparency and productivity in the financial sector by a novel integration of financial mathematics, domain-specific language technology, parallel functional programming, and emerging massively parallel hardware.

  6. Cluster-level statistical inference in fMRI datasets: The unexpected behavior of random fields in high dimensions.

    Bansal, Ravi; Peterson, Bradley S

    2018-06-01

    Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control of false positive findings in this multiple hypothesis testing. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. Nonparametric methods, in contrast, estimated distributions from those large clusters, and therefore, by construction, rejected the large clusters as false positives at the nominal rate.
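
    The qualitative phenomenon is easy to reproduce in miniature. The following sketch, with parameters chosen by us rather than taken from the paper, generates smoothed 2D Gaussian random fields, thresholds them, and records the largest suprathreshold cluster per field, whose distribution shows the heavy right tail discussed above.

    ```python
    # Simulate smooth 2D Gaussian random fields and inspect the largest
    # suprathreshold cluster per field (illustrative parameters).
    import numpy as np
    from scipy.ndimage import gaussian_filter, label

    rng = np.random.default_rng(2)

    def max_cluster_size(shape=(128, 128), sigma=3.0, z_thresh=2.3):
        field = gaussian_filter(rng.standard_normal(shape), sigma)
        field /= field.std()                  # re-standardise after smoothing
        clusters, n = label(field > z_thresh)
        sizes = np.bincount(clusters.ravel())[1:]   # drop the background bin
        return sizes.max() if n else 0

    sizes = [max_cluster_size() for _ in range(200)]
    print("mean:", np.mean(sizes), " max:", max(sizes))  # heavy right tail
    ```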

  7. High performance Mo adsorbent PZC

    Anon,

    1998-10-01

    We have developed Mo adsorbents for a natural Mo(n, {gamma}){sup 99}Mo-{sup 99m}Tc generator. Among them, the highest-performance adsorbent, which we call PZC, can adsorb about 250 mg-Mo/g. In this report, we show the structure, the Mo adsorption mechanism, and other properties of PZC that are useful when examining Mo adsorption and the elution of {sup 99m}Tc. (author)

  8. Fragmentation of neutral carbon clusters formed by high velocity atomic collision

    Martinet, G.

    2004-05-01

    The aim of this work is to understand the fragmentation of small neutral carbon clusters formed by high-velocity atomic collisions on an atomic gas. In this experiment, the main deexcitation channel of the neutral clusters formed by electron capture from ionic species is fragmentation. To measure the fragmentation channels, a new detection tool based on the shape analysis of the current pulses delivered by semiconductor detectors has been developed. For the first time, all branching ratios of neutral carbon clusters have been measured unambiguously for cluster sizes up to 10 atoms. The measurements have been compared to a statistical model in the microcanonical ensemble (Microcanonical Metropolis Monte Carlo). This model requires various structural properties of the carbon clusters, which have been calculated with density functional theory (DFT-B3LYP) to find the cluster geometries, and then with the coupled-cluster (CCSD(T)) formalism to obtain dissociation energies and the other quantities needed for the fragmentation calculations. Comparing the experimental branching ratios to the fragmentation model has made it possible to extract the distribution of the energy deposited in the collision. Finally, a cluster-specific effect has been found, namely a large population of excited states. This behaviour is completely different from the atomic carbon case, for which electron capture into the ground state predominates. (author)

  9. Indoor Air Quality in High Performance Schools

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  10. Bimetallic Ag-Pt Sub-nanometer Supported Clusters as Highly Efficient and Robust Oxidation Catalysts

    Negreiros, Fabio R. [CNR-ICCOM & IPCF, Consiglio Nazionale delle Ricerche, Pisa Italy; Halder, Avik [Materials Science Division, Argonne National Laboratory, Lemont IL USA; Yin, Chunrong [Materials Science Division, Argonne National Laboratory, Lemont IL USA; Singh, Akansha [Harish-Chandra Research Institute, HBNI, Chhatnag Road Jhunsi Allahabad 211019 India; Barcaro, Giovanni [CNR-ICCOM & IPCF, Consiglio Nazionale delle Ricerche, Pisa Italy; Sementa, Luca [CNR-ICCOM & IPCF, Consiglio Nazionale delle Ricerche, Pisa Italy; Tyo, Eric C. [Materials Science Division, Argonne National Laboratory, Lemont IL USA; Pellin, Michael J. [Materials Science Division, Argonne National Laboratory, Lemont IL USA; Bartling, Stephan [Institut für Physik, Universität Rostock, Rostock Germany; Meiwes-Broer, Karl-Heinz [Institut für Physik, Universität Rostock, Rostock Germany; Seifert, Sönke [X-ray Science Division, Argonne National Laboratory, Lemont IL USA; Sen, Prasenjit [Harish-Chandra Research Institute, HBNI, Chhatnag Road Jhunsi Allahabad 211019 India; Nigam, Sandeep [Chemistry Division, Bhabha Atomic Research Centre, Trombay Mumbai- 400 085 India; Majumder, Chiranjib [Chemistry Division, Bhabha Atomic Research Centre, Trombay Mumbai- 400 085 India; Fukui, Nobuyuki [East Tokyo Laboratory, Genesis Research Institute, Inc., Ichikawa Chiba 272-0001 Japan; Yasumatsu, Hisato [Cluster Research Laboratory, Toyota Technological Institute: in, East Tokyo Laboratory, Genesis Research Institute, Inc. Ichikawa, Chiba 272-0001 Japan; Vajda, Stefan [Materials Science Division, Argonne National Laboratory, Lemont IL USA; Nanoscience and Technology Division, Argonne National Laboratory, Lemont IL USA; Institute for Molecular Engineering, University of Chicago, Chicago IL USA; Fortunelli, Alessandro [CNR-ICCOM & IPCF, Consiglio Nazionale delle Ricerche, Pisa Italy; Materials and Process Simulation Center, California Institute of Technology, Pasadena CA USA

    2017-12-29

    A combined experimental and theoretical investigation of Ag-Pt sub-nanometer clusters as heterogeneous catalysts for the CO -> CO{sub 2} oxidation reaction (COox) is presented. Ag9Pt2 and Ag9Pt3 clusters are size-selected in the gas phase, deposited on an ultrathin amorphous alumina support, and tested as catalysts experimentally under realistic conditions and by first-principles simulations at realistic coverage. In situ GISAXS/TPRx demonstrates that the clusters do not sinter or deactivate even after prolonged exposure to reactants at high temperature, and present comparable, extremely high COox catalytic efficiency. Such high activity and stability are ascribed to a synergic role of Ag and Pt in ultranano-aggregates, in which Pt anchors the clusters to the support and binds and activates two CO molecules, while Ag binds and activates O{sub 2}, and Ag/Pt surface proximity disfavors poisoning by CO or oxidized species.

  11. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1977-01-01

    Inertial confinement fusion (ICF) designs are considered which may have very high gains (approximately 1000) and low power requirements (<100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  12. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1978-01-01

    Inertial confinement fusion (ICF) target designs are considered which may have very high gains (approximately 1000) and low power requirements (< 100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  13. High performance nuclear fuel element

    Mordarski, W.J.; Zegler, S.T.

    1980-01-01

    A fuel-pellet composition is disclosed for use in fast breeder reactors. Uranium carbide particles are mixed with a powder of uranium-plutonium carbides having a stable microstructure. The resulting mixture is formed into fuel pellets. The pellets thus produced exhibit a relatively low propensity to swell while maintaining a high density.

  14. High Performance JavaScript

    Zakas, Nicholas

    2010-01-01

    If you're like most developers, you rely heavily on JavaScript to build interactive and quick-responding web applications. The problem is that all of those lines of JavaScript code can slow down your apps. This book reveals techniques and strategies to help you eliminate performance bottlenecks during development. You'll learn how to improve execution time, downloading, interaction with the DOM, page life cycle, and more. Yahoo! frontend engineer Nicholas C. Zakas and five other JavaScript experts -- Ross Harmes, Julien Lecomte, Steven Levithan, Stoyan Stefanov, and Matt Sweeney -- demonstra

  15. Post-Newtonian Dynamics in Dense Star Clusters: Highly Eccentric, Highly Spinning, and Repeated Binary Black Hole Mergers.

    Rodriguez, Carl L; Amaro-Seoane, Pau; Chatterjee, Sourav; Rasio, Frederic A

    2018-04-13

    We present models of realistic globular clusters with post-Newtonian dynamics for black holes. By modeling the relativistic accelerations and gravitational-wave emission in isolated binaries and during three- and four-body encounters, we find that nearly half of all binary black hole mergers occur inside the cluster, with about 10% of those mergers entering the LIGO/Virgo band with eccentricities greater than 0.1. In-cluster mergers lead to the birth of a second generation of black holes with larger masses and high spins, which, depending on the black hole natal spins, can sometimes be retained in the cluster and merge again. As a result, globular clusters can produce merging binaries with detectable spins regardless of the birth spins of black holes formed from massive stars. These second-generation black holes would also populate any upper mass gap created by pair-instability supernovae.

  16. Post-Newtonian Dynamics in Dense Star Clusters: Highly Eccentric, Highly Spinning, and Repeated Binary Black Hole Mergers

    Rodriguez, Carl L.; Amaro-Seoane, Pau; Chatterjee, Sourav; Rasio, Frederic A.

    2018-04-01

    We present models of realistic globular clusters with post-Newtonian dynamics for black holes. By modeling the relativistic accelerations and gravitational-wave emission in isolated binaries and during three- and four-body encounters, we find that nearly half of all binary black hole mergers occur inside the cluster, with about 10% of those mergers entering the LIGO/Virgo band with eccentricities greater than 0.1. In-cluster mergers lead to the birth of a second generation of black holes with larger masses and high spins, which, depending on the black hole natal spins, can sometimes be retained in the cluster and merge again. As a result, globular clusters can produce merging binaries with detectable spins regardless of the birth spins of black holes formed from massive stars. These second-generation black holes would also populate any upper mass gap created by pair-instability supernovae.

  17. Carpet Aids Learning in High Performance Schools

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  18. Conveyor Performance based on Motor DC 12 Volt Eg-530ad-2f using K-Means Clustering

    Arifin, Zaenal; Artini, Sri DP; Much Ibnu Subroto, Imam

    2017-04-01

    To produce goods in industry, controlled tools that improve production are required, and the separation of products has become part of the production process. Separation is carried out according to certain criteria in order to obtain an optimum result, and knowing the performance characteristics of the controlled tool in the separation process makes it possible to achieve that optimum. Cluster analysis is a popular method for dividing data into smaller segments: it groups a set of objects into k groups whose members are homogeneous or similar, with similarity defined by certain criteria. The work in this paper applies the K-Means method to cluster the loading in the performance of a conveyor driven by a 12-volt DC motor EG-530AD-2F. The technique yields a complete clustering of the data for a prototype conveyor, driven by a DC motor, that separates goods by height. The parameters involved are voltage, current, and travel time. These parameters give two clusters: an optimal cluster with center (10.50 V, 0.3 A, 10.58 s) and a non-optimal cluster with center (10.88 V, 0.28 A, 40.43 s).
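
    The K-Means step itself is straightforward; here is a minimal sketch on (voltage, current, travel time) triples, with sample measurements invented for illustration rather than taken from the paper.

    ```python
    # K-Means on (voltage, current, travel time) measurements; the six
    # sample rows are made up for illustration.
    import numpy as np
    from sklearn.cluster import KMeans

    data = np.array([
        [10.5, 0.30, 10.6],
        [10.9, 0.28, 40.4],
        [10.4, 0.31, 11.2],
        [10.8, 0.29, 39.7],
        [10.6, 0.30, 10.1],
        [10.9, 0.27, 41.0],
    ])  # volts, amperes, seconds
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
    print("centres:\n", km.cluster_centers_.round(2))
    print("labels: ", km.labels_)
    ```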

  19. Segmentation of High Angular Resolution Diffusion MRI using Sparse Riemannian Manifold Clustering

    Wright, Margaret J.; Thompson, Paul M.; Vidal, René

    2015-01-01

    We address the problem of segmenting high angular resolution diffusion imaging (HARDI) data into multiple regions (or fiber tracts) with distinct diffusion properties. We use the orientation distribution function (ODF) to represent HARDI data and cast the problem as a clustering problem in the space of ODFs. Our approach integrates tools from sparse representation theory and Riemannian geometry into a graph theoretic segmentation framework. By exploiting the Riemannian properties of the space of ODFs, we learn a sparse representation for each ODF and infer the segmentation by applying spectral clustering to a similarity matrix built from these representations. In cases where regions with similar (resp. distinct) diffusion properties belong to different (resp. same) fiber tracts, we obtain the segmentation by incorporating spatial and user-specified pairwise relationships into the formulation. Experiments on synthetic data evaluate the sensitivity of our method to image noise and the presence of complex fiber configurations, and show its superior performance compared to alternative segmentation methods. Experiments on phantom and real data demonstrate the accuracy of the proposed method in segmenting simulated fibers, as well as white matter fiber tracts of clinical importance in the human brain. PMID:24108748
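
    The final clustering step can be sketched generically: build an affinity matrix from sparse codes and feed it to spectral clustering. In the sketch below, toy coefficient vectors stand in for the ODF representations, and the affinity construction is a common sparse-subspace-clustering choice rather than necessarily the paper's.

    ```python
    # Spectral clustering on an affinity built from (toy) sparse codes.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(3)
    # two synthetic populations in a 20-dim coefficient space
    codes = np.vstack([rng.normal(0, 1, (30, 20)),
                       rng.normal(3, 1, (30, 20))])
    W = np.abs(codes @ codes.T)      # symmetric, non-negative affinity
    np.fill_diagonal(W, 0)
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(W)
    print(labels)
    ```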

  20. Excited states of virtual clusters in a nucleus and the processes of quasi-elastic cluster knock-out at high energies

    Golovanova, N.F.; Il'in, I.M.; Neudatchin, V.G.; Smirnov, Yu.F.; Tchuvil'sky, Yu.M.

    1976-01-01

    The quasi-elastic knock-out of nucleon clusters from nuclei by an incident high-energy hadron is considered within the framework of the Glauber-Sitenko multiple scattering theory. It is shown that a significant contribution to the cross section for the process comes not only from the hadron's elastic scattering by a non-excited virtual cluster but also from collisions with an excited virtual cluster, accompanied by de-excitation of this cluster. This necessitates modification of the usual theory of quasi-elastic cluster knock-out. First, the angular correlations of the knocked-out cluster and scattered hadron are no longer determined by the momentum distribution of the cluster in the nucleus; they are determined by another form factor F(q), which can be called the modified momentum distribution. Second, the meaning and values of the effective numbers of clusters N{sup eff} are changed. Third, the characteristics of the processes depend not only on the modulus of the momentum q which the cluster had in the nucleus, but also on its direction relative to the incident beam. A method has been developed for the calculation of the fractional parentage coefficients, which are necessary for the calculation of cluster knock-out from p-shell nuclei. (Auth.)

  1. Performance of clustering techniques for solving multi depot vehicle routing problem

    Eliana M. Toro-Ocampo

    2016-01-01

    The multi-depot vehicle routing problem (MDVRP) is classified as NP-hard. MDVRP simultaneously determines the routes of a set of vehicles that must serve a set of customers with known demands. The objective is to minimize the total distance travelled over all routes, given that every customer must be served and that capacity constraints on depots and vehicles are respected. This paper presents a hybrid methodology that combines agglomerative clustering techniques, used to generate initial solutions, with an iterated local search (ILS) algorithm to solve the problem. Although clustering methods have been proposed in previous studies as strategies for generating initial solutions, in this work the search is intensified using the information generated after applying the clustering technique. An extensive analysis of the performance of the techniques and their effect on the final solution is also carried out. The proposed methodology proves feasible and effective for solving the problem, both in the quality of the solutions and in the computational times obtained on instances from the literature.
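
    A cluster-first/route-second skeleton of this kind can be sketched in a few lines: customers are grouped around their nearest depot and each group receives a greedy initial tour for a local search such as ILS to improve. All coordinates below are illustrative, and the real method uses agglomerative clustering rather than this simple nearest-depot assignment.

    ```python
    # Cluster-first / route-second sketch for the MDVRP (toy data).
    import numpy as np

    rng = np.random.default_rng(4)
    depots = np.array([[0.0, 0.0], [10.0, 10.0]])
    customers = rng.uniform(0, 10, size=(12, 2))

    # clustering step: assign each customer to its nearest depot
    assign = np.linalg.norm(customers[:, None] - depots[None], axis=2).argmin(1)

    def nn_route(depot, pts):
        """Greedy nearest-neighbour visiting order, a cheap ILS start."""
        route, pos, left = [], depot, list(range(len(pts)))
        while left:
            nxt = min(left, key=lambda i: np.linalg.norm(pts[i] - pos))
            route.append(nxt)
            pos = pts[nxt]
            left.remove(nxt)
        return route

    for d, depot in enumerate(depots):
        print(f"depot {d}: visit order {nn_route(depot, customers[assign == d])}")
    ```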

  2. Depressive Symptom Clusters and Neuropsychological Performance in Mild Alzheimer's and Cognitively Normal Elderly

    James R. Hall

    2011-01-01

    Objectives. Determine the relationship between depressive symptom clusters and neuropsychological test performance in an elderly cohort of cognitively normal controls and persons with mild Alzheimer's disease (AD). Design. Cross-sectional analysis. Setting. Four health science centers in Texas. Participants. 628 elderly individuals (272 diagnosed with mild AD and 356 controls) from an ongoing longitudinal study of Alzheimer's disease. Measurements. A standard battery of neuropsychological tests and the 30-item Geriatric Depression Scale, with regression models using GDS-30 subscale scores (dysphoria, apathy, meaninglessness, and cognitive impairment) as predictors and neuropsychological tests as outcome variables. Follow-up analyses by gender were conducted. Results. For AD, all symptom clusters were related to specific neurocognitive domains; among controls, apathy and cognitive impairment were significantly related to neuropsychological functioning. The relationship between performance and symptom clusters differed significantly between males and females in each group. Conclusion. Findings suggest the need to consider disease status and gender when examining the impact of depressive symptoms on cognition.

  3. High performance electromagnetic simulation tools

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm and a parallel planar generalized Yee algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions, from which an electromagnetic full-wave solution is obtained in toto. This powerful simulation tool has enabled the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be studied numerically rather than strictly in the laboratory.

  4. High-Performance Data Converters

    Steensgaard-Madsen, Jesper

    High-resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential; to this end, a technique is proposed that forms an appropriate-order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers...

  5. High performance soft magnetic materials

    2017-01-01

    This book provides comprehensive coverage of the current state-of-the-art in soft magnetic materials and related applications, with particular focus on amorphous and nanocrystalline magnetic wires and ribbons and sensor applications. Expert chapters cover preparation, processing, tuning of magnetic properties, modeling, and applications. Cost-effective soft magnetic materials are required in a range of industrial sectors, such as magnetic sensors and actuators, microelectronics, cell phones, security, automobiles, medicine, health monitoring, aerospace, informatics, and electrical engineering. This book presents both fundamentals and applications to enable academic and industry researchers to pursue further developments of these key materials. This highly interdisciplinary volume represents essential reading for researchers in materials science, magnetism, electrodynamics, and modeling who are interested in working with soft magnets. Covers magnetic microwires, sensor applications, amorphous and nanocrystalli...

  6. High performance polyethylene nanocomposite fibers

    A. Dorigato

    2012-12-01

    A high-density polyethylene (HDPE) matrix was melt compounded with 2 vol% of dimethyldichlorosilane-treated fumed silica nanoparticles. Nanocomposite fibers were prepared by melt spinning through a co-rotating twin-screw extruder and drawing at 125°C in air. The thermo-mechanical and morphological properties of the resulting fibers were then investigated. The introduction of nanosilica improved the drawability of the fibers, allowing higher draw ratios than the neat matrix. The elastic modulus and creep stability of the fibers were remarkably improved upon nanofiller addition, while the pristine tensile properties at break were retained. Transmission electron microscope (TEM) images showed that the original morphology of the silica aggregates was disrupted by the applied drawing.

  7. Clustering predicts memory performance in networks of spiking and non-spiking neurons

    Weiliang Chen

    2011-03-01

    The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must also be solved in real neuronal systems, so results from artificial systems could throw light on real ones. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons, which suggests that some underlying general principles may govern good connectivity in such networks. We also show that the clustering of the network, measured by the clustering coefficient, has a strong linear correlation with the performance of the associative memory. This result is important because a purely static measure of network connectivity appears to determine an important dynamic property of the network.
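
    The reported correlation invites a simple numerical check. The sketch below merely computes average clustering coefficients for a family of sparse graphs with varying rewiring; it is not the authors' associative memory model, only a way to generate connectivity patterns with different clustering.

    ```python
    # Average clustering coefficient of Watts-Strogatz graphs at several
    # rewiring probabilities (illustrative connectivity patterns).
    import networkx as nx

    for p in (0.0, 0.1, 0.5, 1.0):
        G = nx.watts_strogatz_graph(n=500, k=10, p=p, seed=42)
        print(f"rewiring p={p:.1f}  clustering={nx.average_clustering(G):.3f}")
    ```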

  8. Dispersed metal cluster catalysts by design. Synthesis, characterization, structure, and performance

    Arslan, Ilke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dixon, David A. [Univ. of Alabama, Tuscaloosa, AL (United States); Gates, Bruce C. [Univ. of California, Davis, CA (United States); Katz, Alexander [Univ. of California, Berkeley, CA (United States)

    2015-09-30

    ligands on the metals and their reactions; EXAFS spectroscopy and high-resolution STEM to determine cluster framework structures and changes resulting from reactant treatment and locations of metal atoms on support surfaces; X-ray diffraction crystallography to determine full structures of cluster-ligand combinations in the absence of a support, and TEM with tomographic methods to observe individual metal atoms and determine three-dimensional structures of catalysts. Electronic structure calculations were used to verify and interpret spectra and extend the understanding of reactivity beyond what is measurable experimentally.

  9. Using intelligent clustering techniques to classify the energy performance of school buildings

    Santamouris, M.; Sfakianaki, K.; Papaglastra, M.; Pavlou, C.; Doukas, P.; Geros, V.; Assimakopoulos, M.N.; Zerefos, S. [University of Athens, Department of Physics, Division of Applied Physics, Laboratory of Meteorology, Athens (Greece); Mihalakakou, G.; Gaitani, N. [University of Ioannina, Department of Environmental and Natural Resources Management, Agrinio (Greece); Patargias, P. [University of Peloponnesus, Faculty of Human Sciences and Cultural Studies, Department of History, Kalamata (Greece); Primikiri, E. [University of Patras, Department of Architecture, Patras (Greece); Mitoula, R. [Charokopion University of Athens, Athens (Greece)

    2007-07-01

    The present paper deals with the energy performance, energy classification and rating, and the global environmental quality of school buildings. A new energy classification technique based on intelligent clustering methodologies is proposed. Energy rating of school buildings provides specific information on their energy consumption and efficiency relative to other buildings of a similar nature and permits better planning of interventions to improve their energy performance. The overall work reported in the present paper was carried out in three phases. During the first phase, energy consumption data were collected through energy surveys performed in 320 schools in Greece. In the second phase, an innovative energy rating scheme based on fuzzy clustering techniques was developed, while in the third phase, 10 schools were selected and detailed measurements of their energy efficiency and performance, as well as of their global environmental quality, were performed using a specific experimental protocol. The proposed energy rating method has been applied, and the main environmental and energy problems have been identified. The potential for energy and environmental improvements has been assessed. (author)
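
    As a hedged illustration of the sort of fuzzy clustering such a rating scheme can build on, here is a compact fuzzy c-means pass over invented annual consumption figures; the actual method and survey data are considerably richer.

    ```python
    # Compact fuzzy c-means (standard algorithm) over invented annual
    # energy-consumption figures, yielding three consumption classes.
    import numpy as np

    def fcm(X, c=3, m=2.0, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
            U = 1.0 / d ** (2 / (m - 1))
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    X = np.array([[45.0], [50.0], [90.0], [95.0], [150.0], [160.0]])  # kWh/m2/yr
    centers, U = fcm(X)
    print(np.sort(centers.ravel()).round(1))
    ```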

  10. HIGH-PERFORMANCE COATING MATERIALS

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits pose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel are commonly employed to deal with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also that the corrosion-preventing passive oxide layers that develop on their outermost surfaces are susceptible to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scale, impairing the plant components' function and efficacy, and requiring a substantial amount of time to remove. This cleaning operation, essential for reusing the components, is one of the factors increasing the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective materials that remain stable at high hydrothermal temperatures and resist corrosion, oxidation, and fouling, this would improve the power plant's economics by considerably reducing capital investment and decreasing the costs of operations and maintenance through optimized maintenance schedules.

  11. The first high resolution image of coronal gas in a starbursting cool core cluster

    Johnson, Sean

    2017-08-01

    Galaxy clusters represent a unique laboratory for directly observing gas cooling and feedback due to their high masses and correspondingly high gas densities and temperatures. Cooling of the X-ray gas observed in about one-third of clusters, known as cool-core clusters, should fuel star formation at prodigious rates, but such high levels of star formation are rarely observed. Feedback from active galactic nuclei (AGN) is a leading explanation for the lack of star formation in most cool clusters, and AGN power is sufficient to offset gas cooling on average. Nevertheless, some cool-core clusters exhibit massive starbursts, indicating that our understanding of cooling and feedback is incomplete. Observations of 10^5 K coronal gas in cool-core clusters through OVI emission offer a sensitive means of testing this understanding, because OVI emission is a dominant coolant and a sensitive tracer of shocked gas. Recently, Hayes et al. (2016) demonstrated that synthetic narrow-band imaging of OVI emission is possible through subtraction of long-pass filters with the ACS+SBC for targets at z=0.23-0.29. Here, we propose to use this exciting new technique to directly image coronal OVI-emitting gas at high resolution in Abell 1835, a prototypical starbursting cool-core cluster at z=0.252. Abell 1835 hosts a strong cooling core, a massive starburst, and a radio AGN, and at z=0.252 it offers a unique opportunity to directly image OVI at high resolution in the UV with ACS+SBC. With just 15 orbits of ACS+SBC imaging, the proposed observations will complete the existing rich multi-wavelength dataset available for Abell 1835 and provide new insights into cooling and feedback in clusters.

  12. Two-stage clustering (TSC): a pipeline for selecting operational taxonomic units for the high-throughput sequencing of PCR amplicons.

    Xiao-Tao Jiang

    Clustering 16S/18S rRNA amplicon sequences into operational taxonomic units (OTUs) is a critical step in the bioinformatic analysis of microbial diversity. Here, we report a pipeline for selecting OTUs with a relatively low computational demand and a high degree of accuracy. The pipeline is referred to as two-stage clustering (TSC) because it divides tags into two groups according to their abundance and clusters them sequentially. The more abundant group is clustered using a hierarchical algorithm similar to that in ESPRIT, which has a high degree of accuracy but is computationally costly for large datasets. The rarer group, which includes the majority of tags, is then heuristically clustered to improve efficiency. To further improve the computational efficiency and accuracy, two preclustering steps are implemented. To maintain clustering accuracy, all tags are grouped into an OTU according to their pairwise Needleman-Wunsch distance. This method not only improves the computational efficiency but also mitigates spurious OTU estimation from 'noise' sequences. In addition, OTUs clustered using TSC showed comparable or improved performance in beta-diversity comparisons relative to existing OTU selection methods. This study suggests that the abundance distribution of sequencing datasets is a useful property for improving the computational efficiency and increasing the clustering accuracy of the high-throughput sequencing of PCR amplicons. The software and user guide are freely available at http://hwzhoulab.smu.edu.cn/paperdata/.
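
    The abundance-split idea can be caricatured in a few lines. The sketch below uses Hamming distance and arbitrary cutoffs in place of the pipeline's Needleman-Wunsch distance and tuned parameters, so it only conveys the two-stage structure.

    ```python
    # Toy two-stage clustering: cluster abundant tags first, then attach
    # the rare majority to the nearest existing cluster.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

    def two_stage(tags, counts, cutoff=1, abundant_min=3):
        abundant = [t for t in tags if counts[t] >= abundant_min]
        rare = [t for t in tags if counts[t] < abundant_min]
        clusters = []
        for t in sorted(abundant, key=counts.get, reverse=True):
            for c in clusters:
                if min(hamming(t, s) for s in c) <= cutoff:
                    c.append(t)
                    break
            else:
                clusters.append([t])
        for t in rare:                 # heuristic second stage
            best = min(clusters, key=lambda c: min(hamming(t, s) for s in c))
            best.append(t)
        return clusters

    counts = {"ACGT": 9, "ACGA": 5, "TTTT": 4, "ACGG": 1, "TTTA": 1}
    print(two_stage(list(counts), counts))
    ```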

  13. International Network Performance and Security Testing Based on Distributed Abyss Storage Cluster and Draft of Data Lake Framework

    ByungRae Cha

    2018-01-01

    The megatrends of ICT (Information Communication & Technology) and Industry 4.0 are concentrated in IoT (Internet of Things), BigData, CPS (Cyber Physical System), and AI (Artificial Intelligence). These megatrends do not operate independently, and mass storage technology is essential, since large-scale computing technology is needed in the background to support them. In order to evaluate the performance of high-capacity storage based on open-source Ceph, we carry out network performance tests of Abyss storage between domestic and overseas sites using KOREN (Korea Advanced Research Network). Storage media and network bonding are also tested to evaluate the performance of the storage itself. Additionally, a security test using the Cuckoo sandbox and Yara malware detection is demonstrated between the Abyss storage cluster and overseas sites. Lastly, we propose a draft design of a Data Lake framework in order to address the garbage dump problem.

  14. Hall effect measurements of Frenkel defect clustering in aluminium during high-dose reactor irradiation at 4.6 K

    Boening, K.; Mauer, W.; Pfaendner, K.; Rosner, P.

    1976-01-01

    The low-field Hall coefficient R{sub 0} of irradiated aluminium at 4.6 K is independent of the Frenkel defect (FD) concentration but depends sensitively on their configuration. Since measuring R{sub 0} is not too difficult, rather extensive investigations of FD clustering during irradiation can be performed, although only qualitative interpretations are possible. Several pure Al samples have been irradiated with reactor neutrons at 4.6 K up to very high doses Φt and, correspondingly, resistivity increments Δρ{sub 0} (maximum 91% of the extrapolated saturation value Δρ{sub 0}{sup sat} of approximately 980 nΩcm). The main results are: 1. FD clustering within a single displacement cascade is not a very strong effect in Al, since the R{sub 0} values are essentially the same after reactor and after electron irradiation. Rough cascade averages are: volume V{sub c} approximately 2.1 x 10{sup 5} at.vol. and FD concentration c{sub c} approximately 1100 ppm. 2. There is practically no dose-dependent FD clustering up to Δρ{sub 0} approximately 350 nΩcm, since R{sub 0} remains essentially constant there. It follows that dose-dependent FD clustering can only occur for high-order overlap of cascade volumes. The differential dose curve dΔρ{sub 0}/dΦt is perfectly linear in Δρ{sub 0} as long as R{sub 0} = const. 3. For Δρ{sub 0} > 350 nΩcm, FD clustering becomes increasingly important and R{sub 0} changes strongly. Surprisingly, dR{sub 0}/dΦt is approximately constant, whence there is a constant rate of cluster-size increase in spite of the vanishing rate of FD production, evidence of the continuous regrouping of the lattice and its defects. (author)

  15. Delivering high performance BWR fuel reliably

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high-performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  16. The dynamics of cyclone clustering in re-analysis and a high-resolution climate model

    Priestley, Matthew; Pinto, Joaquim; Dacre, Helen; Shaffrey, Len

    2017-04-01

    Extratropical cyclones have a tendency to occur in groups (clusters) in the exit of the North Atlantic storm track during wintertime, potentially leading to widespread socioeconomic impacts. The Winter of 2013/14 was the stormiest on record for the UK and was characterised by the recurrent clustering of intense extratropical cyclones. This clustering was associated with a strong, straight and persistent North Atlantic 250 hPa jet with Rossby wave-breaking (RWB) on both flanks, pinning the jet in place. Here, we provide for the first time an analysis of all clustered events in 36 years of the ERA-Interim Re-analysis at three latitudes (45˚ N, 55˚ N, 65˚ N) encompassing various regions of Western Europe. The relationship between the occurrence of RWB and cyclone clustering is studied in detail. Clustering at 55˚ N is associated with an extended and anomalously strong jet flanked on both sides by RWB. However, clustering at 65(45)˚ N is associated with RWB to the south (north) of the jet, deflecting the jet northwards (southwards). A positive correlation was found between the intensity of the clustering and RWB occurrence to the north and south of the jet. However, there is considerable spread in these relationships. Finally, analysis has shown that the relationships identified in the re-analysis are also present in a high-resolution coupled global climate model (HiGEM). In particular, clustering is associated with the same dynamical conditions at each of our three latitudes in spite of the identified biases in frequency and intensity of RWB.

  17. Ionization and fragmentation of water clusters by fast highly charged ions

    Adoui, L; Cassimi, A; Gervais, B; Grandin, J-P; Guillaume, L; Maisonny, R; Legendre, S; Tarisien, M; Lopez-Tarifa, P; Alcami, M; Martin, F; Politis, M-F; Penhoat, M-A Herve du; Vuilleumier, R; Gaigeot, M-P; Tavernelli, I

    2009-01-01

    We study the dissociative ionization of water clusters by impact of 12 MeV/u Ni{sup 25+} ions. Cold target recoil ion momentum spectroscopy (COLTRIMS) is used to obtain information about the stability, energetics and charge mobility of the ionized water clusters. An unusual stability of the H{sub 9}O{sub 4}{sup +} ion is observed, which could be the signature of the so-called Eigen structure in gas-phase water clusters. From the analysis of coincidences between charged fragments, we conclude that charge mobility is very high and is responsible for the formation of protonated water clusters, (H{sub 2}O){sub n}H{sup +}, that dominate the mass spectrum. These results are supported by Car-Parrinello molecular dynamics and time-dependent density functional theory simulations, which also reveal the mechanisms of such mobility.

  18. Ionized-cluster source based on high-pressure corona discharge

    Lokuliyanage, K.; Huber, D.; Zappa, F.; Scheier, P.

    2006-01-01

    It has been demonstrated that energetic beams of large clusters, with thousands of atoms, can be a powerful tool for surface modification. Normally, ionized cluster beams are obtained by electron impact on neutral beams produced in a supersonic expansion. At the University of Innsbruck we are pursuing the realization of a high-current cluster ion source based on the corona discharge. The idea in the present case is that ionization should occur prior to the supersonic expansion, thus removing the need for subsequent electron impact ionization. In this contribution we present the design of our source in its initial stage. The intensity distribution of cluster sizes as a function of the source parameters, such as input pressure, temperature and gap voltage, is investigated with the aid of a custom-built time-of-flight mass spectrometer. (author)

  19. High-order harmonic generation in clusters irradiated by an infrared laser field of moderate intensity

    Zaretsky, D F; Korneev, Ph; Becker, W

    2010-01-01

    Extending the Lewenstein model of high-order harmonic generation (HHG) in a laser-irradiated atom, a model of HHG in a cluster is formulated. The constituent atoms of the cluster are assumed to be partly ionized. An electron freed through tunnelling may recombine either with its parent ion or with another ion in the vicinity. Harmonics due to the former process are coherent within the same cluster and may be coherent between different clusters, while harmonics due to the latter process are incoherent. Depending on the density of available ions, the incoherent mechanism may dominate the total harmonic yield, and the harmonic spectrum, which extends to higher energies, has a less distinct cutoff and an enhanced low-energy part.

  20. ON THE ORIGIN OF HIGH-ALTITUDE OPEN CLUSTERS IN THE MILKY WAY

    Martinez-Medina, L. A.; Pichardo, B.; Moreno, E.; Peimbert, A. [Instituto de Astronomía, Universidad Nacional Autónoma de México, A.P. 70-264, 04510, México, D.F., México (Mexico); Velazquez, H., E-mail: lamartinez@astro.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Apartado Postal 877, 22860 Ensenada, B.C., México (Mexico)

    2016-01-20

    We present a dynamical study of the effect of the bar and spiral arms on the simulated orbits of open clusters in the Galaxy. Specifically, this work is devoted to the puzzling presence of high-altitude open clusters in the Galaxy. For this purpose we employ a very detailed observationally motivated potential model for the Milky Way and a careful set of initial conditions representing the newly born open clusters in the thin disk. We find that the spiral arms are able to raise an important percentage of open clusters (about one-sixth of the total employed in our simulations, depending on the structural parameters of the arms) above the Galactic plane to heights beyond 200 pc, producing a bulge-shaped structure toward the center of the Galaxy. Contrary to what was expected, the spiral arms produce a much greater vertical effect on the clusters than the bar, both in quantity and height; this is due to the sharper concentration of the mass on the spiral arms, when compared to the bar. When a bar and spiral arms are included, spiral arms are still capable of raising an important percentage of the simulated open clusters through chaotic diffusion (as tested from classification analysis of the resultant high-z orbits), but the bar seems to restrain them, diminishing the elevation above the plane by a factor of about two.

  1. INTERNATIONAL BEHAVIOUR AND PERFORMANCE BASED ROMANIAN ENTREPRENEURIAL AND TRADITIONAL FIRM CLUSTERS

    FEDER Emoke - Szidonia

    2015-07-01

    Micro, small and medium-sized firms (SMEs) are of key interest at the European level due to their potential positive influence on regional, national and firm-level competitiveness. At a certain moment in time, internationalisation becomes an expected, even unavoidable, strategy in a firm's development, growth and evolution. From a theoretical perspective, an integrative, complementary approach is adopted, combining the dominant paradigm of stage models from incremental internationalisation theory with the emergent paradigm of international entrepreneurship theory. Several researchers have called for empirical testing of different theoretical frameworks on international firms. Therefore, the first aim of this quantitative study is to demonstrate empirically, within the framework of Romanian SMEs, the existence of clusters based on different internationalisation behaviour configurations, such as sporadic and traditional international firms, born-again global firms, and born global firms. Second, the study assesses the distinguishing internationalisation characteristics and patterns of the delimited clusters in terms of foreign market scope, internationalisation pace and rhythm, initial and current entry modes, international product portfolio, and commitment. Third, the differential influence and contribution of internationalisation cluster membership and patterns is analysed with respect to firm-level international business performance, measured by internationalisation degree and by financial and marketing measures. The framework was tested on a cross-sectional sample of 140 Romanian internationalised SMEs. The findings are especially useful for entrepreneurs and SME managers, presenting various decision options concerning internationalisation behaviours and performance, and they emphasize the importance of internationalisation scope, pace, object and opportunity seeking, along with their positive influence on performance.

  2. The cluster is not flat. Uneven impacts of brokerage roles on the innovative performance of firms

    Luís Martínez-Cháfer

    2018-01-01

    This paper investigates whether and to what extent individual firms improve their innovative performance by behaving as brokers connecting other actors in the Spanish ceramic tile cluster. The effects of the brokerage roles are analyzed at different innovation levels by means of quantile regressions. Finally, we consider the indirect and interactive effects of distinct individual organizational attributes on these benefits. Results show that brokerage activities unevenly influence the broker's innovative performance. In addition, the intensity of the impact varies across innovation levels, and the firm's absorptive capacity moderates the final effect of acting as a broker.

  3. Performance improvement of haptic collision detection using subdivision surface and sphere clustering.

    A Ram Choi

    Haptics applications such as surgery simulations require collision detection that is more precise than in other settings. An efficient collision detection method based on the clustering of bounding spheres was proposed in our prior study. This paper analyzes and compares the effects of the five most common subdivision surface methods applied to 3D models for haptic collision detection. The five methods are Butterfly, Catmull-Clark, Mid-point, Loop, and LS3 (Least Squares Subdivision Surface). After performing a number of experiments, we conclude that the LS3 method is the most appropriate for haptic simulations. The more surface subdivision was applied, the more precise the collision detection results became. However, the performance improves only up to a certain threshold and degrades afterward. To reduce this performance degradation, we adopted our prior work, a fast and precise collision detection method based on adaptive clustering. As a result, we obtained a notable improvement in the speed of collision detection.
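
    The coarse-to-fine sphere test at the heart of such methods is easy to sketch; the centres, radii, and two-level hierarchy below are arbitrary illustrations, not the authors' adaptive clustering.

    ```python
    # Two-level bounding-sphere collision test: a coarse sphere rejects
    # most pairs cheaply; child spheres are tested only on overlap.
    import numpy as np

    def spheres_overlap(c1, r1, c2, r2):
        return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) <= r1 + r2

    def collide(cluster_a, cluster_b):
        # cluster = (centre, radius, [(child_centre, child_radius), ...])
        (ca, ra, kids_a), (cb, rb, kids_b) = cluster_a, cluster_b
        if not spheres_overlap(ca, ra, cb, rb):
            return False                       # cheap early-out
        return any(spheres_overlap(c1, r1, c2, r2)
                   for c1, r1 in kids_a for c2, r2 in kids_b)

    a = ([0, 0, 0], 2.0, [([0.5, 0, 0], 0.6), ([-0.5, 0, 0], 0.6)])
    b = ([1.5, 0, 0], 2.0, [([1.2, 0, 0], 0.6), ([2.2, 0, 0], 0.6)])
    print(collide(a, b))   # True: coarse spheres and some children overlap
    ```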

  4. Swarm v2: highly-scalable and high-resolution amplicon clustering.

    Mahé, Frédéric; Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2015-01-01

    Previously we presented Swarm v1, a novel and open-source amplicon clustering program that produced fine-scale molecular operational taxonomic units (OTUs), free of arbitrary global clustering thresholds and input-order dependency. Swarm v1 worked with an initial phase that used iterative single-linkage with a local clustering threshold (d), followed by a phase that used the internal abundance structures of clusters to break chained OTUs. Here we present Swarm v2, which has two important novel features: (1) a new algorithm for d = 1 that allows the computation time of the program to scale linearly with increasing amounts of data; and (2) a new fastidious option that reduces under-grouping by grafting low-abundance OTUs (e.g., singletons and doubletons) onto larger ones. Swarm v2 also directly integrates the clustering and breaking phases, dereplicates sequencing reads with d = 0, outputs OTU representatives in fasta format, and plots individual OTUs as two-dimensional networks.
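
    The d = 1 idea that makes the first phase scale linearly can be illustrated with a hash-lookup sketch: rather than comparing all pairs, every 1-edit variant of a sequence is generated and looked up among the observed amplicons. This is a simplification of what Swarm actually does, with abundances and the fastidious phase omitted.

    ```python
    # Hash-lookup sketch of single-linkage clustering at d = 1.
    def one_edits(seq, alphabet="ACGT"):
        for i in range(len(seq)):
            yield seq[:i] + seq[i + 1:]              # deletion
            for b in alphabet:
                yield seq[:i] + b + seq[i + 1:]      # substitution
                yield seq[:i] + b + seq[i:]          # insertion
        for b in alphabet:
            yield seq + b                            # append

    def swarm_d1(amplicons):
        pool, clusters = set(amplicons), []
        while pool:
            seed = pool.pop()
            cluster, frontier = [seed], {seed}
            while frontier:
                frontier = {v for s in frontier
                            for v in one_edits(s) if v in pool}
                pool -= frontier
                cluster.extend(frontier)
            clusters.append(cluster)
        return clusters

    print(swarm_d1(["ACGT", "ACGA", "ACG", "TTTT"]))
    ```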

  5. High performance carbon nanocomposites for ultracapacitors

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  6. Strategies and Experiences Using High Performance Fortran

    Shires, Dale

    2001-01-01

    High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient high-level Fortran parallel programming language for the latest generation of parallel computers, although its merits have been debatable...

  7. High-order finite-element seismic wave propagation modeling with MPI on a large GPU cluster

    Komatitsch, Dimitri; Erlebacher, Gordon; Goeddeke, Dominik; Michea, David

    2010-01-01

    We implement a high-order finite-element application, which performs the numerical simulation of seismic wave propagation resulting for instance from earthquakes at the scale of a continent or from active seismic acquisition experiments in the oil industry, on a large cluster of NVIDIA Tesla graphics cards using the CUDA programming environment and non-blocking message passing based on MPI. Contrary to many finite-element implementations, ours is implemented successfully in single precision, maximizing the performance of current generation GPUs. We discuss the implementation and optimization of the code and compare it to an existing very optimized implementation in C language and MPI on a classical cluster of CPU nodes. We use mesh coloring to efficiently handle summation operations over degrees of freedom on an unstructured mesh, and non-blocking MPI messages in order to overlap the communications across the network and the data transfer to and from the device via PCIe with calculations on the GPU. We perform a number of numerical tests to validate the single-precision CUDA and MPI implementation and assess its accuracy. We then analyze performance measurements and depending on how the problem is mapped to the reference CPU cluster, we obtain a speedup of 20x or 12x.
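
    Mesh coloring is the key trick that makes the summation over shared degrees of freedom race-free. This Python sketch shows the idea on a toy triangle mesh; the real code applies it to large unstructured meshes inside CUDA kernels.

    ```python
    # Greedy element colouring: elements sharing a node get different
    # colours, so all elements of one colour can be assembled in parallel.
    from collections import defaultdict

    elements = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]  # node triples

    node_to_elems = defaultdict(list)
    for e, nodes in enumerate(elements):
        for n in nodes:
            node_to_elems[n].append(e)

    color = {}
    for e, nodes in enumerate(elements):
        neighbours = {o for n in nodes for o in node_to_elems[n] if o != e}
        used = {color[o] for o in neighbours if o in color}
        color[e] = next(c for c in range(len(elements)) if c not in used)

    print(color)  # {0: 0, 1: 1, 2: 2, 3: 0} -> four elements, three colours
    ```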

  8. High Performance Grinding and Advanced Cutting Tools

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  9. Strategy Guideline: High Performance Residential Lighting

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  10. Carbon nanomaterials for high-performance supercapacitors

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area and excellent electrical and mechanical properties. This article summarizes recent progress on the development of high-performance supercapacitors bas...

  11. Highly Stable Monocrystalline Silver Clusters for Plasmonic Applications

    Novikov, Sergey M.; Popok, Vladimir N.; Evlyukhin, Andrey B.

    2017-01-01

    Plasmonic sensor configurations utilizing localized plasmon resonances in silver nanostructures typically suffer from the rapid degradation of silver under ambient atmospheric conditions. In this work, we report on the fabrication and detailed characterization of ensembles of monocrystalline silver nanoparticles (NPs) produced by a cluster-beam technique and characterized by linear spectroscopy, two-photon-excited photoluminescence, surface-enhanced Raman scattering microscopy, and transmission electron, helium ion, and atomic force microscopies. It is found that the fabricated ensembles of monocrystalline silver NPs preserve their plasmonic properties (monitored with optical spectroscopy) and strong field enhancements (revealed by surface-enhanced Raman spectroscopy) at least 5 times longer compared to chemically synthesized silver NPs with similar sizes. The obtained results are of high practical relevance for the further development...

  12. Whisper, a resonance sounder and wave analyser: Performances and perspectives for the Cluster mission

    Decreau, P.M.E.; Fergeau, P.; Krasnoselskikh, V.

    1997-01-01

    The WHISPER sounder on the Cluster spacecraft is primarily designed to provide an absolute measurement of the total plasma density within the range 0.2-80 cm^-3. This is achieved by means of a resonance sounding technique which has already proved successful in the regions to be explored. The wave analysis capabilities contribute to studies in the electron foreshock and solar wind, to investigations of small-scale structures via density and high-frequency emission signatures, and to the analysis of the non-thermal continuum in the magnetosphere.

  13. Effect of cluster set warm-up configurations on sprint performance in collegiate male soccer players.

    Nickerson, Brett S; Mangine, Gerald T; Williams, Tyler D; Martinez, Ismael A

    2018-06-01

    The purpose of this study was to determine if back squat cluster sets (CS) with varying inter-repetition rest periods would potentiate greater sprint performance compared with a traditional parallel back squat set in collegiate soccer players. Twelve collegiate male soccer players (age, 21.0 ± 2.0 years; height, 180.0 ± 9.0 cm; body mass, 79.0 ± 9.5 kg) performed a 20-m sprint prior to a potentiation complex and at 1, 4, 7, and 10 min postexercise on 3 separate, randomized occasions. On each occasion, the potentiation complex consisted of 1 set of 3 repetitions at 85% 1-repetition maximum (1RM) for the traditional parallel back squat. However, on 1 occasion the 3-repetition set was performed in a traditional manner (i.e., continuously), whereas on the other 2 occasions, 30 s (CS30) and 60 s (CS60) of rest were allotted between each repetition. Repeated-measures ANOVA revealed greater (p = 0.022) mean barbell velocity for CS60 compared with the traditional set. However, faster (p < 0.040) 20-m sprint times were observed for CS30 (3.15 ± 0.16 s) compared with traditional (3.20 ± 0.17 s) only at 10 min postexercise. No other differences were observed. These data suggest that a single cluster set of 3 repetitions with 30-s inter-repetition rest periods at 85% 1RM acutely improves 20-m sprinting performance. Strength and conditioning professionals and their athletes might consider its inclusion during the specific warm-up to acutely improve athletic performance during the onset (≤10 min) of training or competition.

  14. Team Development for High Performance Management.

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  15. Mechanism of de-activation and clustering of B in Si at extremely high concentration

    Romano, L.; Piro, A.M.; Privitera, V.; Rimini, E.; Fortunato, G.; Svensson, B.G.; Foad, M.; Grimaldi, M.G.

    2006-01-01

    It is known that B deactivation and clustering occur in the presence of an excess of Si self-interstitials (Is). First-principles calculations predicted the path of cluster growth, but the precursor complexes are too small to be visible even by the highest resolution microscopy. Channeling with nuclear reaction analyses allowed us to detect the lattice location of small B-Is complexes formed as a consequence of the B interaction with the Is. In this work we extend this method to determine the complexes formed during the initial stage of B precipitation in Si doped at extremely high concentration (4 at%) and subjected to thermal treatment. The samples were prepared by excimer laser annealing (ELA) of Si implanted with 1 keV B. The thickness of the molten layer was 100 nm and the B profile was boxlike, with a maximum hole concentration of ~2 × 10^21 cm^-3. The electrical deactivation and carrier mobility of this metastable system have been studied as a function of subsequent annealing in the temperature range between 200 and 850 °C. Channeling analyses have been performed to investigate the B lattice location at the initial stage of precipitation. The difference with respect to previous investigations is the very small distance (<1 nm) between adjacent B atoms substitutionally located in the lattice and the absence of Is that can be released during annealing, since the end-of-range defects were completely dissolved by ELA. In this way, information on the B complex evolution in a defect-free sample has been obtained.

  16. Clustering approaches to improve the performance of low cost air pollution sensors.

    Smith, Katie R; Edwards, Peter M; Evans, Mathew J; Lee, James D; Shaw, Marvin D; Squires, Freya; Wilde, Shona; Lewis, Alastair C

    2017-08-24

    Low cost air pollution sensors have substantial potential for atmospheric research and for the applied control of pollution in the urban environment, including more localized warnings to the public. The current generation of single-chemical gas sensors experience degrees of interference from other co-pollutants and are sensitive to environmental factors such as temperature, wind speed and supply voltage. Uncertainties are also introduced by sensor-to-sensor response variability, although this is less well reported. The sensitivity of Metal Oxide Sensors (MOS) to volatile organic compounds (VOCs) changed with relative humidity (RH) by up to a factor of five over the range of 19-90% RH, with an uncertainty in the correction of a factor of two at any given RH. The short-term (second to minute) stabilities of MOS and electrochemical CO sensor responses were reasonable. During more extended use, inter-sensor quantitative comparability was degraded by unpredictable variability in individual sensor responses (to either measurand or interference or both) drifting over timescales of several hours to days. For timescales longer than a week, identical sensors showed slow, often downwards, drifts in their responses, which diverged across six CO sensors by up to 30% after two weeks. The measurement derived from the median sensor within clusters of 6, 8 and up to 21 sensors was evaluated against individual sensor performance and external reference values. The clustered approach maintained the cost competitiveness of a sensor device, but the median concentration from the ensemble of sensor signals largely eliminated the randomised hour-to-day response drift seen in individual sensors and excluded the effects of small numbers of poorly performing sensors that drifted significantly over longer time periods. The results demonstrate that for individual sensors to be optimally comparable to one another, and to reference instruments, they would likely require...
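
    The clustered-median estimate at the heart of the study is simple to reproduce. Below is a minimal numpy sketch under assumed numbers (six hypothetical CO sensors, synthetic drift); it illustrates the principle, not the authors' processing pipeline:

        import numpy as np

        def cluster_median(readings):
            # readings: array of shape (n_sensors, n_samples), one row per
            # identical low-cost sensor. The median across the ensemble
            # suppresses individual response drift and the influence of a
            # few poorly performing units.
            return np.median(readings, axis=0)

        t = np.arange(1000)
        true_co = 0.2 + 0.05 * np.sin(t / 100)                     # shared signal
        drift = np.outer(np.linspace(-0.3, 0.3, 6), t / t.max())   # per-sensor drift
        noise = np.random.normal(0, 0.01, (6, t.size))
        estimate = cluster_median(true_co + drift + noise)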

  17. Delivering high performance BWR fuel reliably

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  18. HPTA: High-Performance Text Analytics

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...

  19. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure, forming a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair are established. This approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality, and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high dimensional distributed data.
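
    Reading the cosine measure as a projection, the complementary dissimilarity is the norm of the orthogonal component, which for unit vectors equals sqrt(1 - s^2). A small numpy sketch of the pair, with a numerical spot-check of the triangle inequality (our notation, not the paper's):

        import numpy as np

        def sim_dissim(x, y):
            # Similarity: cosine of the angle (projection of the unit vector
            # along y onto the unit vector along x); dissimilarity: norm of
            # the orthogonal component, i.e. sqrt(1 - s**2).
            xu, yu = x / np.linalg.norm(x), y / np.linalg.norm(y)
            s = float(xu @ yu)
            return s, float(np.linalg.norm(yu - s * xu))

        # Spot-check d(x, z) <= d(x, y) + d(y, z) on random vectors.
        rng = np.random.default_rng(0)
        for _ in range(1000):
            x, y, z = rng.normal(size=(3, 50))
            assert sim_dissim(x, z)[1] <= (sim_dissim(x, y)[1]
                                           + sim_dissim(y, z)[1] + 1e-12)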

  20. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortices triggered by urban buildings well, and that the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, the simulation deviations from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields for a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.

  1. Strategy Guideline. Partnering for High Performance Homes

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  2. Performance Analysis of Quality-of-Service Controls in a Cell-Cluster-Based Wireless ATM Network

    Cho, Young Jong [Ajou University, Suwon (Korea, Republic of)

    1997-04-01

    In this paper, an efficient cell-cluster-based call control scheme with guaranteed quality-of-service (QoS) provisioning is presented for next generation wireless ATM networks, and its performance is mathematically analyzed using an open queuing network. With cell-cluster-based call control, at the time a mobile connection is admitted to the network, a virtual cell is constructed by choosing a group of neighboring base stations to which the call may probabilistically hand over and by assigning to the call a collection of virtual paths between the base stations. Within a microcell/picocell environment, it is seen that cell-cluster-based call control can effectively support a very high rate of handovers, provides very high system capacity, and guarantees a high degree of frequency reuse over the same geographical region without requiring the intervention of the network call control processor each time a handover occurs. But since mobiles, once admitted, are free to roam within the virtual cell, congestion conditions occur in which the number of calls to be handled by one base station exceeds the cell site's radio channel capacity, and consequently a predefined QoS provision cannot be guaranteed. So there must be a call admission control function to limit the number of calls existing in a cell-cluster such that the required QoS objectives are met. As call acceptance criteria for constant-bit-rate or realtime variable-bit-rate ATM connections, we define four mobile QoS metrics: new-call blocking probability, wireless channel utilization efficiency, congestion probability and normalized average congestion duration. In addition, for QoS provisioning to available-bit-rate, unspecified-bit-rate or non-realtime variable-bit-rate connections, we further define another QoS metric, the minimum threshold breaking probability. By using the open network queuing model, we derive closed form expressions for the five QoS metrics defined above and show that they can be...

  3. Efficient coupling of high intensity short laser pulses into snow clusters

    Palchan, T.; Pecker, S.; Henis, Z.; Eisenmann, S.; Zigler, A.

    2007-01-01

    Measurements of energy absorption of high intensity laser pulses in snow clusters are reported. Targets consisting of sapphire coated with snow nanoparticles were found to absorb more than 95% of the incident light compared to 50% absorption in flat sapphire targets.

  4. Infrared Extinction Performance of Randomly Oriented Microbial-Clustered Agglomerate Materials.

    Li, Le; Hu, Yihua; Gu, Youlin; Zhao, Xinying; Xu, Shilong; Yu, Lei; Zheng, Zhi Ming; Wang, Peng

    2017-11-01

    In this study, the spatial structure of randomly distributed clusters of fungi An0429 spores was simulated using a cluster aggregation (CCA) model, and the single-scattering parameters of fungi An0429 spores were calculated using the discrete dipole approximation (DDA) method. The transmittance of 10.6 µm infrared (IR) light through the aggregated fungi An0429 spore swarm is simulated using the Monte Carlo method. Several parameters that affect the transmittance of 10.6 µm IR light, such as the number and radius of the original fungi An0429 spores, the porosity of the aggregated spores, and the density of the aggregated spores in the aerosol formation area, are discussed. Finally, the transmittances of microbial materials with different qualities were measured on the dynamic test platform. The simulation results showed that the analyzed parameters are closely connected with the extinction performance of fungi An0429 spores. By controlling the values of the influencing factors, the transmittance can be kept below a given threshold to meet the attenuation requirements of an application. In addition, the experimental results showed that the Monte Carlo method reflects well the attenuation law of IR light in fungi An0429 spore agglomerate swarms.
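
    The Monte Carlo transmittance step can be illustrated generically. The sketch below draws exponential free paths through a homogeneous slab, which converges to the Beer-Lambert result exp(-kL); the paper's aggregate geometry and DDA-derived scattering parameters are replaced here by a single assumed extinction coefficient:

        import random

        def transmittance_mc(extinction_coeff, slab_depth, n_photons=100_000, seed=1):
            # A photon whose exponentially distributed free path exceeds the
            # slab depth leaves the medium without an extinction event.
            rng = random.Random(seed)
            transmitted = sum(rng.expovariate(extinction_coeff) > slab_depth
                              for _ in range(n_photons))
            return transmitted / n_photons

        # For pure extinction this approaches exp(-k * L):
        print(transmittance_mc(0.5, 2.0))   # ~0.368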

  5. High-performance ceramics. Fabrication, structure, properties

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program "Ceramic High-performance Materials" pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders, comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing, and leads to issues of materials testing and of design appropriate to the material. The program "Ceramic High-performance Materials" has resulted in contributions to the understanding of fundamental interrelationships in terms of materials science, which are summarized in the present volume, broken down into eight special aspects. (orig./RHM)

  6. High Burnup Fuel Performance and Safety Research

    Bang, Je Keun; Lee, Chan Bok; Kim, Dae Ho (and others)

    2007-03-15

    The worldwide trend in nuclear fuel development is toward high burnup, high performance nuclear fuel with high economy and safety. Because the fuel performance evaluation code INFRA is patented, and its superiority in predicting fuel performance was proven through the IAEA CRP FUMEX-II program, the INFRA code can be utilized commercially in industry. The INFRA code was provided to, and used productively by, universities and relevant institutes domestically, and it has been used as a reference code in industry for the development of an in-house fuel rod design code.

  7. APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters

    Ammendola, R; Salamon, A; Salina, G; Biagioni, A; Prezza, O; Cicero, F Lo; Lonardo, A; Paolucci, P S; Rossetti, D; Tosoratto, L; Vicini, P; Simula, F

    2011-01-01

    We describe herein the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology, plus hardware support for an RDMA programming model and experimental acceleration of GPU networking; this design allows us to build a low-latency, high-bandwidth PC cluster, the APEnet+ network, the new generation of our cost-effective, tens-of-thousands-scalable cluster network architecture. Some test results and a characterization of data transmission for a complete testbench, based on a commercial development card mounting an Altera® FPGA, are provided.

  8. APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters

    Ammendola, R; Salamon, A; Salina, G [INFN Tor Vergata, Roma (Italy); Biagioni, A; Prezza, O; Cicero, F Lo; Lonardo, A; Paolucci, P S; Rossetti, D; Tosoratto, L; Vicini, P [INFN Roma, Roma (Italy); Simula, F [Sapienza Universita di Roma, Roma (Italy)

    2011-12-23

    We describe herein the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology, plus hardware support for an RDMA programming model and experimental acceleration of GPU networking; this design allows us to build a low-latency, high-bandwidth PC cluster, the APEnet+ network, the new generation of our cost-effective, tens-of-thousands-scalable cluster network architecture. Some test results and a characterization of data transmission for a complete testbench, based on a commercial development card mounting an Altera® FPGA, are provided.

  9. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  10. High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm

    Dieter Hendricks

    2016-02-01

    We implement a master-slave parallel genetic algorithm with a bespoke log-likelihood fitness function to identify emergent clusters within price evolutions. We use graphics processing units (GPUs) to implement a parallel genetic algorithm and visualise the results using disjoint minimal spanning trees. We demonstrate that our GPU parallel genetic algorithm, implemented on a commercially available general purpose GPU, is able to recover stock clusters in sub-second speed, based on a subset of stocks in the South African market. This approach represents a pragmatic choice for low-cost, scalable parallel computing and is significantly faster than a prototype serial implementation in an optimised C-based fourth-generation programming language, although the results are not directly comparable because of compiler differences. Combined with fast online intraday correlation matrix estimation from high frequency data for cluster identification, the proposed implementation offers cost-effective, near-real-time risk assessment for financial practitioners.
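
    A serial toy version of such a genetic search over cluster assignments is sketched below; the bespoke log-likelihood fitness and the GPU master-slave evaluation are replaced by a simple within-cluster correlation score, and all names are our own assumptions:

        import numpy as np

        def fitness(labels, corr):
            # Stand-in score: total within-cluster correlation.
            return sum(corr[np.ix_(idx, idx)].sum()
                       for k in np.unique(labels)
                       if len(idx := np.flatnonzero(labels == k)) > 1)

        def genetic_search(corr, n_clusters=5, pop=60, gens=200, seed=0):
            rng = np.random.default_rng(seed)
            n = corr.shape[0]
            population = rng.integers(0, n_clusters, size=(pop, n))
            for _ in range(gens):
                scores = np.array([fitness(ind, corr) for ind in population])
                parents = population[np.argsort(scores)[-pop // 2:]]  # selection
                children = parents.copy()
                mask = rng.random(children.shape) < 0.02              # mutation
                children[mask] = rng.integers(0, n_clusters, mask.sum())
                population = np.vstack([parents, children])
            return max(population, key=lambda ind: fitness(ind, corr))

        # e.g. a correlation matrix estimated from (synthetic) returns:
        corr = np.corrcoef(np.random.default_rng(1).normal(size=(20, 300)))
        best_labels = genetic_search(corr, n_clusters=4)

    In a master-slave parallel version, the per-individual fitness evaluations (the inner list comprehension) are the work that gets farmed out to the GPU.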

  11. Metallicity in galactic clusters from high signal-to-noise spectroscopy

    Boesgaard, A.M.

    1989-01-01

    High-quality spectroscopic data on selected F dwarfs in six Galactic clusters are used to determine global [Fe/H] values for the clusters. For the two youngest clusters, Pleiades and Alpha Per, the [Fe/H] values are solar: 0.017 ± 0.055. The Hyades and Praesepe are slightly metal-enhanced at [Fe/H] = +0.125 ± 0.032, even though they are an order of magnitude older than the Pleiades. Coma and the UMa Group at the age of the Hyades are slightly metal-deficient with [Fe/H] = -0.082 ± 0.039. The lack of an age-metallicity relationship indicates that the enrichment and mixing in the Galactic disk have not been uniform on time scales less than a billion years. 39 references

  12. Swarm v2: highly-scalable and high-resolution amplicon clustering

    Frédéric Mahé

    2015-12-01

    Previously we presented Swarm v1, a novel and open source amplicon clustering program that produced fine-scale molecular operational taxonomic units (OTUs), free of arbitrary global clustering thresholds and input-order dependency. Swarm v1 worked with an initial phase that used iterative single-linkage with a local clustering threshold (d), followed by a phase that used the internal abundance structures of clusters to break chained OTUs. Here we present Swarm v2, which has two important novel features: (1) a new algorithm for d = 1 that allows the computation time of the program to scale linearly with increasing amounts of data; and (2) the new fastidious option that reduces under-grouping by grafting low abundant OTUs (e.g., singletons and doubletons) onto larger ones. Swarm v2 also directly integrates the clustering and breaking phases, dereplicates sequencing reads with d = 0, outputs OTU representatives in fasta format, and plots individual OTUs as two-dimensional networks.

  13. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method, and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  14. High performance liquid chromatographic determination of ...

    2010-02-08

    ... high performance liquid chromatography (HPLC) grade ... applications. These are important requirements if the reagent is to be applicable to on-line pre- or post-column derivatisation in a possible automation of the analytical...

  15. Analog circuit design designing high performance amplifiers

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  16. High-performance computing using FPGAs

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  17. Embedded High Performance Scalable Computing Systems

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, a Lockheed Martin Company, and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  18. Gradient High Performance Liquid Chromatography Method ...

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid ...

  19. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  20. Characterization of magnetic Ni clusters on graphene scaffold after high vacuum annealing

    Zhang, Zhenjun, E-mail: zzhang1@albany.edu; Matsubayashi, Akitomo, E-mail: norwegianwood.1984@gmail.com; Grisafe, Benjamin, E-mail: bgrisafe@albany.edu; Lee, Ji Ung, E-mail: jlee1@albany.edu; Lloyd, James R., E-mail: JLloyd@sunycnse.com

    2016-02-15

    Magnetic Ni nanoclusters were synthesized by electron beam deposition utilizing CVD graphene as a scaffold. The subsequent clusters were subjected to high vacuum (5-8 × 10^-7 torr) annealing between 300 and 600 °C. The chemical stability and the optical and morphological changes were characterized by X-ray photoemission microscopy, Raman spectroscopy, atomic force microscopy and magnetic measurements. Under ambient exposure, nickel nanoparticles were observed to oxidize quickly, forming antiferromagnetic nickel oxide. Here, we report that the majority of the oxidized nickel is in non-stoichiometric form and can be reduced under high vacuum at temperatures as low as 300 °C. Importantly, the resulting annealed clusters were relatively stable and no further oxidation was detectable after three weeks of air exposure at room temperature. - Highlights: • Randomly oriented nickel clusters were assembled on a monolayer graphene scaffold. • The nickel oxide shell was effectively reduced at moderate temperature. • The coercivity of the nickel clusters is greatly improved after high vacuum annealing.

  1. Genetic search for an optimal power flow solution from a high density cluster

    Amarnath, R.V. [Hi-Tech College of Engineering and Technology, Hyderabad (India); Ramana, N.V. [JNTU College of Engineering, Jagityala (India)

    2008-07-01

    This paper proposes a novel method to solve optimal power flow (OPF) problems. The method is based on a genetic algorithm (GA) search from a High Density Cluster (GAHDC). The algorithm of the proposed method includes 3 stages, notably: (1) a suboptimal solution is obtained via a conventional analytical method, (2) a high-density cluster, which consists of other suboptimal data points from the first stage, is formed using a density-based clustering algorithm, and (3) a genetic algorithm based search for the exact optimal solution is carried out over the low-population-size, high-density cluster. The final optimal solution thoroughly satisfies the well-defined fitness function. A standard IEEE 30-bus test system was considered for the simulation study. Numerical results were presented and compared with the results of other approaches. It was concluded that although there is not much difference in numerical values, the proposed method has the advantage of minimal computational effort and reduced CPU time. As such, the method would be suitable for online applications such as the present optimal power flow problem. 24 refs., 2 tabs., 4 figs.
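
    The three-stage structure, as we read it, can be sketched with off-the-shelf pieces (DBSCAN standing in for the density-based stage, a mutation-only GA for stage 3; the cost function, seeds and every name here are illustrative assumptions, not the paper's implementation):

        import numpy as np
        from sklearn.cluster import DBSCAN

        def gahdc_search(cost, seeds, eps=0.2, pop=40, gens=100, sigma=0.02, seed=0):
            # Stage 2: keep the densest cluster of stage-1 suboptimal points.
            labels = DBSCAN(eps=eps, min_samples=3).fit_predict(seeds)
            dense = seeds[labels == np.bincount(labels[labels >= 0]).argmax()]
            # Stage 3: small-population genetic search started inside it.
            rng = np.random.default_rng(seed)
            population = dense[rng.integers(0, len(dense), pop)]
            for _ in range(gens):
                scores = np.apply_along_axis(cost, 1, population)
                parents = population[np.argsort(scores)[:pop // 2]]  # minimise cost
                children = parents + rng.normal(0, sigma, parents.shape)
                population = np.vstack([parents, children])
            return min(population, key=cost)

        # Toy usage: quadratic "OPF cost" around 1.0, noisy analytical seeds.
        seeds = 1.0 + np.random.default_rng(2).normal(0, 0.05, (60, 3))
        sol = gahdc_search(lambda x: float(((x - 1.0) ** 2).sum()), seeds)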

  2. Governance among Malaysian high performing companies

    Asri Marsidi

    2016-07-01

    Well-performing companies have always been linked with effective governance, which is generally reflected through an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Diversity is now perceived as able to influence corporate performance, owing to the likelihood of meeting the variety of needs and demands of diverse customers and clients. This study therefore aims to provide a fundamental understanding of governance among high-performing companies in Malaysia.

  3. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  4. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  5. Comparing Dutch and British high performing managers

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  6. Stochastic clustering of material surface under high-heat plasma load

    Budaev, Viacheslav P.

    2017-11-01

    The results of a study of the surfaces formed on various materials, such as tungsten, carbon and stainless steel, under high-heat plasma loads are presented. High-temperature plasma irradiation leads to inhomogeneous stochastic clustering of the surface, with self-similar granularity (fractality) on scales from the nanoscale to the macroscale. Cauliflower-like structures of tungsten and carbon materials are formed under high heat plasma loads in fusion devices. The statistical characteristics of hierarchical granularity and scale invariance are estimated. They differ qualitatively from the roughness of an ordinary Brownian surface, which is possibly due to universal mechanisms of stochastic clustering of material surfaces under the influence of high-temperature plasma.

  7. Performance analysis of clustering techniques over microarray data: A case study

    Dash, Rasmita; Misra, Bijan Bihari

    2018-03-01

    Handling big data is one of the major issues in the field of statistical data analysis. In such investigations, cluster analysis plays a vital role in dealing with large-scale data. There are many clustering techniques with different cluster analysis approaches, but which approach suits a particular dataset is difficult to predict. To deal with this problem, a grading approach is introduced over many clustering techniques to identify a stable technique. Because the grading approach depends on the characteristics of the dataset as well as on the validity indices, a two-stage grading approach is implemented. In this study the grading approach is applied to five clustering techniques: hybrid swarm based clustering (HSC), k-means, partitioning around medoids (PAM), vector quantization (VQ) and agglomerative nesting (AGNES). The experiments are conducted over five microarray datasets with seven validity indices. The finding of the grading approach, that a clustering technique is significant, is also established by the Nemenyi post-hoc hypothesis test.
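
    The flavour of the grading idea can be shown with scikit-learn stand-ins. Only k-means and AGNES from the paper's list have direct sklearn equivalents, and only two validity indices are used here, so this is an illustration of the ranking step rather than the study's pipeline:

        from sklearn.cluster import KMeans, AgglomerativeClustering
        from sklearn.datasets import make_blobs
        from sklearn.metrics import silhouette_score, davies_bouldin_score

        X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # data stand-in

        techniques = {
            "k-means": KMeans(n_clusters=4, n_init=10, random_state=0),
            "AGNES": AgglomerativeClustering(n_clusters=4),
        }
        grades = {}
        for name, model in techniques.items():
            labels = model.fit_predict(X)
            # Higher silhouette is better; lower Davies-Bouldin is better,
            # so it is negated to make both components "larger is better".
            grades[name] = (silhouette_score(X, labels),
                            -davies_bouldin_score(X, labels))

        stable = max(grades, key=grades.get)   # technique graded best overall
        print(stable, grades)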

  8. HIGH-RESOLUTION XMM-NEWTON SPECTROSCOPY OF THE COOLING FLOW CLUSTER A3112

    Bulbul, G. Esra; Smith, Randall K.; Foster, Adam [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Cottam, Jean; Loewenstein, Michael; Mushotzky, Richard; Shafer, Richard, E-mail: ebulbul@cfa.harvard.edu [NASA Goddard Space Flight Center, Greenbelt, MD (United States)

    2012-03-01

    We examine high signal-to-noise XMM-Newton European Photon Imaging Camera (EPIC) and Reflection Grating Spectrometer (RGS) observations to determine the physical characteristics of the gas in the cool core and outskirts of the nearby rich cluster A3112. The XMM-Newton Extended Source Analysis Software data reduction and background modeling methods were used to analyze the XMM-Newton EPIC data. From the EPIC data, we find that the iron and silicon abundance gradients show a significant increase toward the center of the cluster, while the oxygen abundance profile is centrally peaked but has a shallower distribution than that of iron. The X-ray mass modeling is based on the temperature and deprojected density distributions of the intracluster medium determined from EPIC observations. The total mass of A3112 obeys the M-T scaling relations found using XMM-Newton and Chandra observations of massive clusters at r_500. The gas mass fraction f_gas = 0.149 (+0.036/-0.032) at r_500 is consistent with the seven-year Wilkinson Microwave Anisotropy Probe results. The comparisons of line fluxes and flux limits on the Fe XVII and Fe XVIII lines obtained from high-resolution RGS spectra indicate that there is no spectral evidence for cooler gas associated with the cluster with temperature below 1.0 keV in the central <38'' (~52 kpc) region of A3112. High-resolution RGS spectra also yield an upper limit to the turbulent motions in the compact core of A3112 (206 km s^-1). We find that the contribution of turbulence to the total energy is less than 6%. This upper limit is consistent with the energy contribution measured in recent high-resolution simulations of relaxed galaxy clusters.

  9. A High-Efficiency Uneven Cluster Deployment Algorithm Based on Network Layered for Event Coverage in UWSNs

    Shanen Yu

    2016-12-01

    Most existing deployment algorithms for event coverage in underwater wireless sensor networks (UWSNs) do not consider that network communication has non-uniform characteristics in three-dimensional underwater environments. Such deployment algorithms ignore that the nodes are distributed at different depths and have different probabilities of data acquisition, thereby leading to imbalances in overall network energy consumption, decreasing network performance, and resulting in poor and unreliable late network operation. Therefore, in this study we propose an uneven cluster deployment algorithm based on network layering for event coverage. First, according to the energy consumption requirements of the communication load at different depths of the underwater network, we obtain the expected number of deployment nodes and the distribution density of each network layer through theoretical analysis and deduction. Afterward, the network is divided into multiple layers based on uneven clusters, and the heterogeneous communication radii of the nodes improve the network connectivity rate. A recovery strategy is used to balance the energy consumption of nodes in the cluster and can efficiently reconstruct the network topology, ensuring that the network maintains high coverage and connectivity rates over a long period of data acquisition. Simulation results show that the proposed algorithm improves network reliability and prolongs network lifetime by significantly reducing the blind movement of network nodes while maintaining high network coverage and connectivity rates.
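
    A minimal sketch of the layered-allocation step (our own load model, not the paper's derived expectation): nodes are assigned to depth layers in proportion to each layer's expected communication load, so that per-node energy drain stays balanced:

        def nodes_per_layer(total_nodes, layer_loads):
            # layer_loads: expected relative communication load per depth
            # layer, e.g. growing toward the surface sink as relayed
            # traffic accumulates.
            total = sum(layer_loads)
            raw = [total_nodes * load / total for load in layer_loads]
            counts = [int(r) for r in raw]
            counts[raw.index(max(raw))] += total_nodes - sum(counts)  # rounding fix
            return counts

        print(nodes_per_layer(200, [4, 3, 2, 1]))   # -> [80, 60, 40, 20]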

  10. High Performance Work Systems for Online Education

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  11. Teacher Accountability at High Performing Charter Schools

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  12. Does training frequency and supervision affect compliance, performance and muscular health? A cluster randomized controlled trial.

    Dalager, Tina; Bredahl, Thomas G V; Pedersen, Mogens T; Boyle, Eleanor; Andersen, Lars L; Sjøgaard, Gisela

    2015-10-01

    The aim was to determine the effect of one weekly hour of specific strength training within working hours, performed with the same total training volume but with different training frequencies and durations, or with different levels of supervision, on compliance, muscle health and performance, behavior and work performance. In total, 573 office workers were cluster-randomized to: 1WS: one 60-min supervised session/week; 3WS: three 20-min supervised sessions/week; 9WS: nine 7-min supervised sessions/week; 3MS: three 20-min sessions/week with minimal supervision; or REF: a reference group without training. Outcomes were diary-based compliance, total training volume, muscle performance and questionnaire-based health, behavior and work performance. Comparisons were made among the WS training groups and between 3WS and 3MS. If no difference was found, training groups were collapsed (TG) and compared with REF. Results demonstrated similar degrees of compliance, mean (range) 39 (33-44)%, and total training volume, 13,266 (11,977-15,096) kg. Musculoskeletal pain in the neck and shoulders was reduced by approximately 50% in TG, which was significant compared with REF. Only the training groups significantly improved their muscle strength, by 8 (4-13)%, and endurance, by 27 (12-37)%, both being significant compared with REF. No change in workability, productivity or self-rated health was demonstrated. Secondary analysis showed exercise self-efficacy to be a significant predictor of compliance. Regardless of training schedule and supervision, similar degrees of compliance were shown, together with reduced musculoskeletal pain and improved muscle performance. These findings provide evidence that a great degree of flexibility is legitimate for companies in planning future implementation of physical exercise programs at the workplace. ClinicalTrials.gov, number NCT01027390.

  13. Advanced high performance solid wall blanket concepts

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  14. Quest for Highly-connected MOF Platforms: Rare-Earth Polynuclear Clusters Versatility Meets Net Topology Needs.

    Alezi, Dalal

    2015-04-07

    Gaining control over the assembly of highly porous rare-earth (RE) based metal-organic frameworks (MOFs) remains challenging. Here we report the latest discoveries in our continuous quest for highly-connected nets. The topological exploration based on the incompatibility of the 12-connected RE polynuclear carboxylate-based cluster, whose points of extension match the 12 vertices of the cuboctahedron (cuo), with 3-connected organic ligands led to the discovery of two fascinating and highly-connected minimal edge-transitive nets, pek and aea. The reduced symmetry of the employed triangular tricarboxylate ligand, as compared to the prototypical highly symmetrical 1,3,5-benzene(tris)benzoic acid, guided the concurrent occurrence of nonanuclear [RE9(μ3-OH)12(μ3-O)2(O2C-)12] and hexanuclear [RE6(OH)8(O2C-)8] carboxylate-based clusters as 12-connected and 8-connected molecular building blocks in the structure of a 3-periodic pek-MOF based on a novel (3,8,12)-c trinodal net. The use of a tricarboxylate ligand with modified angles between carboxylate moieties led to the formation of a second MOF containing solely nonanuclear clusters and exhibiting once more a novel and highly-connected (3,12,12)-c trinodal net with aea topology. Notably, it is the first time that RE-MOFs with double six-membered ring (d6R) secondary building units have been isolated, representing a critical step toward the design of novel and highly coordinated materials using the supermolecular building layer approach while considering the d6Rs as building pillars. Lastly, the potential of these new MOFs for gas separation/storage was investigated by performing gas adsorption studies of various probe gas molecules over a wide range of pressures. Notably, pek-MOF-1 showed excellent volumetric CO2 and CH4 uptakes at high pressures.

  15. M31 GLOBULAR CLUSTER ABUNDANCES FROM HIGH-RESOLUTION, INTEGRATED-LIGHT SPECTROSCOPY

    Colucci, Janet E.; Bernstein, Rebecca A.; Cameron, Scott; McWilliam, Andrew; Cohen, Judith G.

    2009-01-01

    We report the first detailed chemical abundances for five globular clusters (GCs) in M31 from high-resolution (R ∼ 25,000) spectroscopy of their integrated light (IL). These GCs are the first in a larger set of clusters observed as part of an ongoing project to study the formation history of M31 and its GC population. The data presented here were obtained with the HIRES echelle spectrograph on the Keck I telescope and are analyzed using a new IL spectra analysis method that we have developed. In these clusters, we measure abundances for Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Y, and Ba, ages ≥10 Gyr, and a range in [Fe/H] of -0.9 to -2.2. As is typical of Milky Way GCs, we find these M31 GCs to be enhanced in the α-elements Ca, Si, and Ti relative to Fe. We also find [Mg/Fe] to be low relative to other [α/Fe], and [Al/Fe] to be enhanced in the IL abundances. These results imply that abundances of Mg, Al (and likely O, Na) recovered from IL do display the inter- and intra-cluster abundance variations seen in individual Milky Way GC stars, and that special care should be taken in the future in interpreting low- or high-resolution IL abundances of GCs that are based on Mg-dominated absorption features. Fe-peak and the neutron-capture elements Ba and Y also follow Milky Way abundance trends. We also present high-precision velocity dispersion measurements for all five M31 GCs, as well as independent constraints on the reddening toward the clusters from our analysis.

  16. High Performance Data Transfer for Distributed Data Intensive Sciences

    Fang, Chin [Zettar Inc., Mountain View, CA (United States); Cottrell, R ' Les' A. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Hanushevsky, Andrew B. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Kroeger, Wilko [SLAC National Accelerator Lab., Menlo Park, CA (United States); Yang, Wei [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2017-03-06

    We report on the development of ZX software providing high performance data transfer and encryption. The design scales in computation power, network interfaces, and IOPS while carefully balancing the available resources. Two U.S. patent-pending algorithms help tackle data sets containing lots of small files and very large files, and provide insensitivity to network latency. It has a cluster-oriented architecture, using peer-to-peer technologies to ease deployment, operation, usage, and resource discovery. Its unique optimizations enable effective use of flash memory. Using a pair of existing data transfer nodes at SLAC and NERSC, we compared its performance to that of bbcp and GridFTP and determined that they were comparable. With a proof of concept created using two four-node clusters with multiple distributed multi-core CPUs, network interfaces and flash memory, we achieved 155 Gbps memory-to-memory over a 2×100 Gbps link-aggregated channel and 70 Gbps file-to-file with encryption over a 5000-mile 100 Gbps link.

  17. Performance and role of the breast lesion excision system (BLES) in small clusters of suspicious microcalcifications.

    Scaperrotta, Gianfranco; Ferranti, Claudio; Capalbo, Emanuela; Paolini, Biagio; Marchesini, Monica; Suman, Laura; Folini, Cristina; Mariani, Luigi; Panizza, Pietro

    2016-01-01

    To assess the diagnostic performance of the BLES as a biopsy tool in patients with ≤ 1 cm clusters of BIRADS 4 microcalcifications, in order to possibly avoid surgical excision in selected patients. This is a retrospective study of 105 patients who underwent stereotactic breast biopsy with the BLES. The device excises a single specimen containing the whole mammographic target, allowing better histological assessment due to preserved architecture. Our case series consists of 41 carcinomas (39%) and 64 benign lesions (61%). Cancer involved the specimen margins in 20/41 cases (48.8%) or was close to them (≤ 1 mm) in 14 cases (34.1%); margins were disease-free in only 7 DCIS (17.1%). At subsequent excision of 39/41 malignant cases, underestimation occurred for 5/32 DCIS (15.6%), residual disease was found in 15/39 cancers (38.5%) and no cancer in 19/39 cases (48.7%). For DCIS cases, no residual disease occurred in 66.7% of G1-G2 cases and 35.3% of G3 cases (P=0.1556), as well as in 83.3%, 40.0% and 43.8% of cases, respectively, for negative, close and positive BLES margins (P=0.2576). The BLES is a good option for removal of small clusters of breast microcalcifications, giving better histological interpretation and lower underestimation rates, and possibly reducing the need for subsequent surgical excision in selected patients.

  18. High Performance Embedded System for Real-Time Pattern Matching

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP), and more specifically for the execution of extremely fast pattern matching for the tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturised version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom-designed Associative Memory (AM) chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post-processing algorithms (e.g. pixel clustering...

  19. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and we analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters, because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than a 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters.
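
    Schematically, a model of this kind can be written as follows (our rendering, with symbols we introduce, not the paper's exact formulation), where B(c) is the sustained STREAM bandwidth with c active cores per node, V_mem and V_msg are memory and message volumes, and alpha and beta are the per-message latency and network bandwidth:

        % Per-node compute time inflated by memory-bandwidth contention,
        % plus a parameterized communication term.
        T_{\mathrm{total}} \approx T_{\mathrm{comp}} + T_{\mathrm{contention}}(c)
            + n_{\mathrm{msg}}\,\alpha + V_{\mathrm{msg}}/\beta,
        \qquad
        T_{\mathrm{contention}}(c) = V_{\mathrm{mem}}
            \left(\frac{1}{B(c)} - \frac{1}{B(1)}\right)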

  1. GLOBULAR CLUSTER ABUNDANCES FROM HIGH-RESOLUTION, INTEGRATED-LIGHT SPECTROSCOPY. II. EXPANDING THE METALLICITY RANGE FOR OLD CLUSTERS AND UPDATED ANALYSIS TECHNIQUES

    Colucci, Janet E.; Bernstein, Rebecca A.; McWilliam, Andrew [The Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101 (United States)

    2017-01-10

    We present abundances of globular clusters (GCs) in the Milky Way and Fornax from integrated-light (IL) spectra. Our goal is to evaluate the consistency of the IL analysis relative to standard abundance analysis for individual stars in those same clusters. This sample includes an updated analysis of seven clusters from our previous publications and results for five new clusters that expand the metallicity range over which our technique has been tested. We find that the [Fe/H] measured from IL spectra agrees to ∼0.1 dex for GCs with metallicities as high as [Fe/H] = −0.3, but the abundances measured for more metal-rich clusters may be underestimated. In addition we systematically evaluate the accuracy of abundance ratios, [X/Fe], for Na i, Mg i, Al i, Si i, Ca i, Ti i, Ti ii, Sc ii, V i, Cr i, Mn i, Co i, Ni i, Cu i, Y ii, Zr i, Ba ii, La ii, Nd ii, and Eu ii. The elements for which the IL analysis gives results that are most similar to analysis of individual stellar spectra are Fe i, Ca i, Si i, Ni i, and Ba ii. The elements that show the greatest differences include Mg i and Zr i. Some elements show good agreement only over a limited range in metallicity. More stellar abundance data in these clusters would enable more complete evaluation of the IL results for other important elements.

  2. High resolution infrared spectra of Bulge Globular Clusters: Liller 1, NGC 6553, and Ter 5

    Origlia, L.; Rich, R. M.; Castro, S. M.

    2001-12-01

    Using the NIRSPEC spectrograph at Keck II, we have obtained echelle spectra covering the range 1.5–1.8 μm for 2 of the brightest giants in Liller 1 and NGC 6553, old metal-rich globular clusters in the Galactic bulge. We also report a preliminary analysis for two giants in the obscured bulge globular cluster Ter 5. We use spectrum synthesis for the abundance analysis, and find [Fe/H]=-0.3+/-0.2 and [O/H]=+0.3+/-0.1 (from the OH lines) for the giants in Liller 1 and NGC 6553. We measure strong lines for the alpha elements Mg, Ca, and Si, but the lower sensitivity of these lines to abundance permits us only to state a general [α/Fe]=+0.3+/-0.2 dex. The composition of the clusters is similar to that of field stars in the bulge and is consistent with a scenario in which the clusters formed early, with rapid enrichment. Our iron abundance for NGC 6553 is poorly consistent with either the low or the high values recently reported in the literature, unless unusually large, or no, α-element enhancements are adopted, respectively. We will also present an abundance analysis for 2 giants in the highly reddened bulge cluster Ter 5, which appears to be near solar metallicity. R. Michael Rich acknowledges financial support from grant AST-0098739 from the National Science Foundation. Data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors gratefully acknowledge those of Hawaiian ancestry on whose sacred mountain we are privileged to be guests. Without their generous hospitality, none of the observations presented would have been possible.

  3. Comparison of five cluster validity indices performance in brain [18F]FET-PET image segmentation using k-means.

    Abualhaj, Bedor; Weng, Guoyang; Ong, Melissa; Attarwala, Ali Asgar; Molina, Flavia; Büsing, Karen; Glatting, Gerhard

    2017-01-01

    Dynamic [18F]fluoro-ethyl-L-tyrosine positron emission tomography ([18F]FET-PET) is used to identify tumor lesions for radiotherapy treatment planning, to differentiate glioma recurrence from radiation necrosis and to classify glioma grades. To segment different regions in the brain, k-means cluster analysis can be used. The main disadvantage of k-means is that the number of clusters must be pre-defined. In this study, we therefore compared different cluster validity indices for automated and reproducible determination of the optimal number of clusters based on the dynamic PET data. The k-means algorithm was applied to dynamic [18F]FET-PET images of 8 patients. The Akaike information criterion (AIC), WB, I, modified Dunn's and Silhouette indices were compared on their ability to determine the optimal number of clusters based on requirements for an adequate cluster validity index. To check the reproducibility of k-means, the coefficients of variation (CVs) of the objective function values (OFVs; sum of squared Euclidean distances within each cluster) were calculated using 100 random centroid initialization replications (RCI100) for 2 to 50 clusters. k-means was performed independently on three neighboring slices containing tumor for each patient to investigate the stability of the optimal number of clusters within them. To check the independence of the validity indices of the number of voxels, cluster analysis was applied after duplication of a slice selected from each patient. CVs of index values were calculated at the optimal number of clusters using RCI100 to investigate the reproducibility of the validity indices. To check if the indices have a single extremum, visual inspection was performed on the replication with minimum OFV from RCI100. The maximum CV of the OFVs was 2.7 × 10^-2 across all patients. The optimal number of clusters given by the modified Dunn's and Silhouette indices was 2 or 3, leading to a very poor segmentation. WB and I indices suggested in
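
    As a minimal illustration of the general approach (not the study's own implementation; scikit-learn and the synthetic data are assumptions standing in for the PET time-activity curves), one of the compared indices, the Silhouette, can be used to pick the number of k-means clusters:

    ```python
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    # Synthetic stand-in for voxel feature vectors.
    X, _ = make_blobs(n_samples=500, centers=4, n_features=10, random_state=0)

    scores = {}
    for k in range(2, 11):
        # n_init restarts play the role of the random centroid replications.
        labels = KMeans(n_clusters=k, n_init=20, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)

    best_k = max(scores, key=scores.get)
    print(f"optimal number of clusters by Silhouette: {best_k}")
    ```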

  4. Academic Performance and Lifestyle Behaviors in Australian School Children: A Cluster Analysis

    Dumuid, Dorothea; Olds, Timothy; Martín-Fernández, Josep-Antoni; Lewis, Lucy K.; Cassidy, Leah; Maher, Carol

    2017-01-01

    Poor academic performance has been linked with particular lifestyle behaviors, such as unhealthy diet, short sleep duration, high screen time, and low physical activity. However, little is known about how lifestyle behavior patterns (or combinations of behaviors) contribute to children's academic performance. We aimed to compare academic…

  5. A High-precision Trigonometric Parallax to an Ancient Metal-poor Globular Cluster

    Brown, T. M.; Casertano, S.; Strader, J.; Riess, A.; VandenBerg, D. A.; Soderblom, D. R.; Kalirai, J.; Salinas, R.

    2018-03-01

    Using the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST), we have obtained a direct trigonometric parallax for the nearest metal-poor globular cluster, NGC 6397. Although trigonometric parallaxes have been previously measured for many nearby open clusters, this is the first parallax for an ancient metal-poor population—one that is used as a fundamental template in many stellar population studies. This high-precision measurement was enabled by the HST/WFC3 spatial-scanning mode, providing hundreds of astrometric measurements for dozens of stars in the cluster and also for Galactic field stars along the same sightline. We find a parallax of 0.418 ± 0.013 ± 0.018 mas (statistical, systematic), corresponding to a true distance modulus of 11.89 ± 0.07 ± 0.09 mag (2.39 ± 0.07 ± 0.10 kpc). The V luminosity at the stellar main-sequence turnoff implies an absolute cluster age of 13.4 ± 0.7 ± 1.2 Gyr. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-13817, GO-14336, and GO-14773.
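
    The quoted distance follows from the textbook parallax relations d[pc] = 1000/p[mas] and mu = 5 log10(d[pc]) - 5 (standard formulas, not taken from the paper); a quick check reproduces the central values:

    ```python
    import math

    parallax_mas = 0.418                 # measured HST/WFC3 parallax of NGC 6397
    d_pc = 1000.0 / parallax_mas         # distance in parsecs
    mu = 5 * math.log10(d_pc) - 5        # true distance modulus

    print(f"d = {d_pc / 1000:.2f} kpc, mu = {mu:.2f} mag")
    # -> d = 2.39 kpc, mu = 11.89 mag, matching the abstract
    ```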

  6. Analysis of Fiber Clustering in Composite Materials Using High-Fidelity Multiscale Micromechanics

    Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.

    2015-01-01

    A new multiscale micromechanical approach is developed for the prediction of the behavior of fiber reinforced composites in the presence of fiber clustering. The developed method is based on a coupled two-scale implementation of the High-Fidelity Generalized Method of Cells theory, wherein both the local and global scales are represented using this micromechanical method. Concentration tensors and effective constitutive equations are established on both scales and linked to establish the required coupling, thus providing the local fields throughout the composite as well as the global properties and effective nonlinear response. Two nondimensional parameters, in conjunction with actual composite micrographs, are used to characterize the clustering of fibers in the composite. Based on the predicted local fields, initial yield and damage envelopes are generated for various clustering parameters for a polymer matrix composite with both carbon and glass fibers. Nonlinear epoxy matrix behavior is also considered, with results in the form of effective nonlinear response curves, with varying fiber clustering and for two sets of nonlinear matrix parameters.

  7. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces.
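
    A minimal sketch of the two-stage idea, assuming Euclidean distance and scikit-learn's k-means for the buildup stage (a simplified reading of kMkNN, not the authors' code). The pruning is exact: a point is skipped only when the triangle-inequality lower bound |d(q,c) - d(x,c)| already exceeds the current k-th best distance.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    class KMkNNSketch:
        def __init__(self, X, n_clusters=32):
            self.X = X
            km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(X)
            self.centers = km.cluster_centers_
            self.members = [np.where(km.labels_ == j)[0] for j in range(n_clusters)]
            # Precompute each training point's distance to its cluster center.
            self.d_to_center = [np.linalg.norm(X[idx] - self.centers[j], axis=1)
                                for j, idx in enumerate(self.members)]

        def query(self, q, k=5):
            d_qc = np.linalg.norm(self.centers - q, axis=1)
            best = []                        # (distance, index), at most k entries
            for j in np.argsort(d_qc):       # visit nearest clusters first
                for idx, d_xc in zip(self.members[j], self.d_to_center[j]):
                    # Triangle inequality: d(q, x) >= |d(q, c) - d(x, c)|.
                    if len(best) == k and abs(d_qc[j] - d_xc) >= best[-1][0]:
                        continue             # cannot beat the current k-th best
                    d = np.linalg.norm(self.X[idx] - q)
                    best = sorted(best + [(d, idx)])[:k]
            return best

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 32))
    print(KMkNNSketch(X).query(rng.normal(size=32), k=3))
    ```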

  8. High performance bio-integrated devices

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications have attracted much attention with the rise of smartphones, because the coupling of such devices and smartphones enables continuous health monitoring in patients' daily lives. In particular, high performance biomedical electronics integrated with the human body are expected to open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  9. Network Signaling Channel for Improving ZigBee Performance in Dynamic Cluster-Tree Networks

    D. Hämäläinen

    2008-03-01

    Full Text Available ZigBee is one of the most potential standardized technologies for wireless sensor networks (WSNs. Yet, sufficient energy-efficiency for the lowest power WSNs is achieved only in rather static networks. This severely limits the applicability of ZigBee in outdoor and mobile applications, where operation environment is harsh and link failures are common. This paper proposes a network channel beaconing (NCB algorithm for improving ZigBee performance in dynamic cluster-tree networks. NCB reduces the energy consumption of passive scans by dedicating one frequency channel for network beacon transmissions and by energy optimizing their transmission rate. According to an energy analysis, the power consumption of network maintenance operations reduces by 70%–76% in dynamic networks. In static networks, energy overhead is negligible. Moreover, the service time for data routing increases up to 37%. The performance of NCB is validated by ns-2 simulations. NCB can be implemented as an extension on MAC and NWK layers and it is fully compatible with ZigBee.

  10. vSphere high performance cookbook

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focused on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions of vSphere 5.1.

  11. Google Classroom and Open Clusters: An Authentic Science Research Project for High School Students

    Johnson, Chelen H.; Linahan, Marcella; Cuba, Allison Frances; Dickmann, Samantha Rose; Hogan, Eleanor B.; Karos, Demetra N.; Kozikowski, Kendall G.; Kozikowski, Lauren Paige; Nelson, Samantha Brooks; O'Hara, Kevin Thomas; Ropinski, Brandi Lucia; Scarpa, Gabriella; Garmany, Catharine D.

    2016-01-01

    STEM education is about offering unique opportunities to our students. For the past three years, students from two high schools (Breck School in Minneapolis, MN, and Carmel Catholic High School in Mundelein, IL) have collaborated on authentic astronomy research projects. This past year they surveyed archival data of open clusters to determine whether a clear turnoff point could be unequivocally identified. The age of and distance to each open cluster were calculated. Additionally, students requested time on several telescopes to obtain original data to compare to the archival data. Students from each school worked in collaborative teams, sharing and verifying results through regular online hangouts and chats. Working papers were stored in a shared drive and on a student-designed Google site to facilitate dissemination of documents between the two schools.

  12. Residential High-Rise Clusters as a Contemporary Planning Challenge in Manama

    Florian Wiedmann

    2015-08-01

    Full Text Available This paper analyzes the different roots of the residential high-rise clusters currently emerging in new city districts along the coast of Bahrain's capital city, Manama, and the resulting urban planning and design challenges. Since the local real-estate markets were liberalized in Bahrain in 2003, the population has grown rapidly to more than one million inhabitants. Consequently, housing demand increased rapidly due to extensive immigration. However, many residential developments were constructed for the upper spectrum of the real-estate market, due to speculative tendencies causing a rise in land value. The emerging high-rise clusters are developed along the various waterfronts of Manama on newly reclaimed land. This paper explores the spatial consequences of the recent construction boom and the various challenges architects and urban planners face in enhancing urban qualities.

  13. High performance parallel I/O

    Prabhat

    2014-01-01

    Gain critical insight into the parallel I/O ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware

  14. The potential of clustering methods to define intersection test scenarios: Assessing real-life performance of AEB.

    Sander, Ulrich; Lubbe, Nils

    2018-04-01

    Intersection accidents are frequent and harmful. The accident types 'straight crossing path' (SCP), 'left turn across path - oncoming direction' (LTAP/OD), and 'left turn across path - lateral direction' (LTAP/LD) represent around 95% of all intersection accidents and one-third of all police-reported car-to-car accidents in Germany. The European New Car Assessment Programme (Euro NCAP) has announced that intersection scenarios will be included in its rating from 2020; however, how these scenarios are to be tested has not been defined. This study investigates whether clustering methods can be used to identify a small number of test scenarios sufficiently representative of the accident dataset to evaluate Intersection Automated Emergency Braking (AEB). Data from the German In-Depth Accident Study (GIDAS) and the GIDAS-based Pre-Crash Matrix (PCM) from 1999 to 2016, containing 784 SCP and 453 LTAP/OD accidents, were analyzed with principal component methods to identify variables that account for the relevant total variance of the sample. Three different methods for data clustering were applied to each of the accident types: two similarity-based approaches, namely Hierarchical Clustering (HC) and Partitioning Around Medoids (PAM), and the probability-based Latent Class Clustering (LCC). The optimum number of clusters was derived for HC and PAM with the silhouette method. The PAM algorithm was initiated both with random start medoid selection and with medoids from HC. For LCC, the Bayesian Information Criterion (BIC) was used to determine the optimal number of clusters. Test scenarios were defined from optimal cluster medoids weighted by their real-life representation in GIDAS. The set of variables for clustering was further varied to investigate the influence of variable type and character. We quantified how accurately each cluster variation represents real-life AEB performance using pre-crash simulations with PCM data and a generic algorithm for AEB intervention. The

  15. High prevalence of clustered tuberculosis cases in Peruvian migrants in Florence, Italy

    Lorenzo Zammarchi

    2014-12-01

    Full Text Available Tuberculosis is a leading cause of morbidity among Peruvian migrants in Florence, Italy, where they account for about 20% of yearly diagnosed cases. A retrospective study of cases notified in Peruvian residents in Florence in the period 2001-2010 was carried out, and the available Mycobacterium tuberculosis strains were genotyped (MIRU-VNTR-24 and spoligotyping). One hundred and thirty-eight cases were retrieved. Genotyping performed on 87 strains revealed that 39 (44.8%) belonged to 12 clusters. Assuming that in each cluster the transmission of tuberculosis from the index case took place in Florence, a large proportion of cases may be preventable by improving early diagnosis of contagious cases and contact tracing.

  16. Clustering performance comparison using K-means and expectation maximization algorithms.

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
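
    A minimal side-by-side run of the two algorithms (scikit-learn stand-ins on synthetic data; the paper's red-wine dataset and logistic-regression step are not reproduced here) shows the practical difference: EM fits per-cluster covariances, so it can separate clusters of different spread that plain K-means tends to split poorly.

    ```python
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import adjusted_rand_score
    from sklearn.mixture import GaussianMixture

    # Three synthetic clusters with very different spreads.
    X, y_true = make_blobs(n_samples=600, centers=3,
                           cluster_std=[0.5, 1.0, 2.5], random_state=0)

    km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    em_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

    print("K-means ARI:", adjusted_rand_score(y_true, km_labels))
    print("EM ARI:     ", adjusted_rand_score(y_true, em_labels))
    ```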

  17. "K"-Means May Perform as well as Mixture Model Clustering but May Also Be Much Worse: Comment on Steinley and Brusco (2011)

    Vermunt, Jeroen K.

    2011-01-01

    Steinley and Brusco (2011) presented the results of a huge simulation study aimed at evaluating cluster recovery of mixture model clustering (MMC) both for the situation where the number of clusters is known and is unknown. They derived rather strong conclusions on the basis of this study, especially with regard to the good performance of…

  18. Strategy Guideline: Partnering for High Performance Homes

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants, and where relationships are in general adversarial as opposed to cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  19. Long-term bridge performance high priority bridge performance issues.

    2014-10-01

    Bridge performance is a multifaceted issue involving performance of materials and protective systems, performance of individual components of the bridge, and performance of the structural system as a whole. The Long-Term Bridge Performance (LTBP)...

  20. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: A Shimadzu HPLC with LC solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase ...

  1. An Introduction to High Performance Fortran

    John Merlin

    1995-01-01

    Full Text Available High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  2. High performance computing on vector systems

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  3. High Performance Electronics on Flexible Silicon

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductors-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in depth studies on electrical, mechanical, and thermal properties of the fabricated devices.

  4. Debugging a high performance computing program

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
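
    The grouping step can be sketched as follows, assuming the lists of calling-instruction addresses have already been gathered per thread (the snapshot below and the function name are hypothetical): threads sharing an identical address list land in one group, and small or unusual groups are the candidates for being stuck or defective.

    ```python
    from collections import defaultdict

    def group_threads(call_stacks):
        """Group thread ids whose calling-address lists are identical."""
        groups = defaultdict(list)
        for tid, addresses in call_stacks.items():
            groups[tuple(addresses)].append(tid)
        return groups

    # Hypothetical snapshot: thread 7 is stopped somewhere unusual.
    stacks = {0: (0x4008A0, 0x400C10), 1: (0x4008A0, 0x400C10),
              7: (0x4008A0, 0x40FF00)}
    for addrs, tids in group_threads(stacks).items():
        print([hex(a) for a in addrs], "->", tids)
    ```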

  5. Technology Leadership in Malaysia's High Performance School

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    The headmaster, as leader of the school, also plays a role as a technology leader. This applies to headmasters of high performance schools (HPS) as well. The HPS excel in all aspects of education. In this study, the researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  6. Toward High Performance in Industrial Refrigeration Systems

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different quality of information/data, are used for fault diagnosis as well as robust control design...

  8. Validated high performance liquid chromatographic (HPLC) method ...

    2010-02-22

    Feb 22, 2010 ... specific and accurate high performance liquid chromatographic method for determination of ZER in micro-volumes ... traditional medicine as a cure for swelling, sores, loss of appetite and ... Receptor Activator for Nuclear Factor κB Ligand ... be suitable for preclinical pharmacokinetic studies.

  9. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid ... response, tailing factor and resolution of six replicate injections was < 3% ... Cefadroxil monohydrate, human plasma, pharmacokinetics, bioequivalence ... Drug-free plasma was obtained from the local ... Influence of probenecid on the renal ...

  10. Integrated plasma control for high performance tokamaks

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)

  11. Project materials [Commercial High Performance Buildings Project

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  12. High performance structural ceramics for nuclear industry

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

    A family of Saint-Gobain structural ceramic materials and products produced by its High Performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing novel non-oxide-ceramic-based materials, processes and products for application in the nuclear, chemical, automotive, defense and mining industries

  13. A new high performance current transducer

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

    A DC-100 kHz current transducer is developed using a new technique based on the zero-flux detection principle. It was shown that the new current transducer is of high performance, that its magnetic core need not be selected very stringently, and that it is easy to manufacture.

  14. Strategy Guideline. High Performance Residential Lighting

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  15. Architecting Web Sites for High Performance

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  16. Performance assessment of air quality monitoring networks using principal component analysis and cluster analysis

    Lu, Wei-Zhen; He, Hong-Di; Dong, Li-yun

    2011-01-01

    This study aims to evaluate the performance of two statistical methods, principal component analysis and cluster analysis, for the management of the air quality monitoring network of Hong Kong and the reduction of associated expenses. The specific objectives include: (i) to identify city areas with similar air pollution behavior; and (ii) to locate emission sources. The statistical methods were applied to the mass concentrations of sulphur dioxide (SO2), respirable suspended particulates (RSP) and nitrogen dioxide (NO2), collected in the monitoring network of Hong Kong from January 2001 to December 2007. The results demonstrate that, for each pollutant, the monitoring stations are grouped into different classes based on their air pollution behaviors. The monitoring stations located in nearby areas are characterized by the same specific air pollution characteristics, suggesting that the air quality monitoring system could be managed more effectively. The redundant equipment should be transferred to other monitoring stations to allow further enlargement of the monitored area. Additionally, the existence of different air pollution behaviors in the monitoring network is explained by the variability of wind directions across the region. The results imply that the air quality problem in Hong Kong is not only a local problem arising mainly from street-level pollution, but also a regional problem originating from the Pearl River Delta region. (author)
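
    A minimal sketch of the workflow on synthetic stand-in data (the Hong Kong measurements and the study's exact settings are not reproduced): standardize each station's pollutant record, project onto the leading principal components, then group stations with similar behavior.

    ```python
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Hypothetical data: rows = monitoring stations, columns = a time series
    # of one pollutant (e.g. monthly mean NO2 concentrations).
    stations = rng.normal(size=(12, 84))

    Z = StandardScaler().fit_transform(stations)
    scores = PCA(n_components=3).fit_transform(Z)
    labels = AgglomerativeClustering(n_clusters=3).fit_predict(scores)
    print(labels)   # stations sharing a label are candidates for consolidation
    ```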

  17. High performance anode for advanced Li batteries

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded interface at the Si-CNF boundary that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that the production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance have been the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  18. NINJA: Java for High Performance Numerical Computing

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  19. A highly accurate positioning and orientation system based on the usage of four-cluster fibre optic gyros

    Zhang, Xiaoyue; Lin, Zhili; Zhang, Chunxi

    2013-01-01

    A highly accurate positioning and orientation technique based on four-cluster fibre optic gyros (FOGs) is presented. The four-cluster FOG inertial measurement unit (IMU) comprises three low-precision FOGs, one static high-precision FOG and three accelerometers. To realize high-precision positioning and orientation, the static alignment (north-seeking) before vehicle manoeuvre was divided into a low-precision self-alignment phase and a high-precision north-seeking (online calibration) phase. The high-precision FOG measurement information was introduced to obtain high-precision azimuth alignment (north-seeking) result and achieve online calibration of the low-precision three-cluster FOG. The results of semi-physical simulation were presented to validate the availability and utility of the highly accurate positioning and orientation technique based on the four-cluster FOGs. (paper)

  20. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
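
    A much-simplified sketch of the alternation described above (an illustration of the idea, not the authors' algorithm; lambda, the step-size rule and the iteration counts are arbitrary assumptions): soft-thresholded gradient (ISTA) updates fit a sparse self-representation with zero diagonal, and the missing entries are then refilled from the reconstruction.

    ```python
    import numpy as np

    def sparse_representation_completion(X, observed, lam=0.1,
                                         n_outer=20, n_ista=50):
        """X: d x n data (missing entries arbitrary); observed: boolean mask."""
        Xh = np.where(observed, X, 0.0)
        C = np.zeros((X.shape[1], X.shape[1]))
        for _ in range(n_outer):
            step = 1.0 / (np.linalg.norm(Xh, 2) ** 2 + 1e-12)
            for _ in range(n_ista):
                # ISTA step on 0.5 * ||Xh - Xh C||_F^2 + lam * ||C||_1.
                G = C - step * (Xh.T @ (Xh @ C - Xh))
                C = np.sign(G) * np.maximum(np.abs(G) - step * lam, 0.0)
                np.fill_diagonal(C, 0.0)     # a point must not represent itself
            # Completion step: trust the representation at missing entries.
            Xh = np.where(observed, X, Xh @ C)
        return Xh, C
    ```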

  1. Performance Analysis of Entropy Methods on K Means in Clustering Process

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K-means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters/groups. This method partitions the data into clusters/groups so that data that have the same characteristics are grouped into the same cluster and data that have different characteristics are grouped into other groups. The purpose of this data clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, the main disadvantage of this method is that the number k is often not known beforehand. Furthermore, a randomly chosen starting point may cause two nearby points to be selected as two centroids. Therefore, for the determination of the starting point in K-means, the entropy method is used; this method can determine a weight and take a decision from a set of alternatives. Entropy is able to investigate the harmony in discrimination among a multitude of data sets. Using the entropy criterion, the attributes with the highest variation receive the highest weight. The entropy method can thus help the K-means process by determining the starting point, which is usually chosen at random, and the iteration process becomes faster than with standard K-means. On the postoperative patient dataset from the UCI Machine Learning Repository, using only 12 records as a worked example, the entropy-based initialization reached the desired end result with only 2 iterations.
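
    One plausible reading of the entropy-based seeding (a sketch under assumptions, not the paper's exact procedure): weight each feature by the entropy weight method, score every record by its weighted sum, and hand the k highest-scoring records to K-means as initial centroids.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def entropy_weighted_seeds(X, k):
        """Entropy weight method: features with more variation (lower entropy)
        get higher weight; the k top-scoring records become the seeds."""
        # Rescale each feature into a probability distribution over records.
        P = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12) + 1e-12
        P = P / P.sum(axis=0)
        entropy = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
        w = (1 - entropy) / (1 - entropy).sum()
        return X[np.argsort(X @ w)[-k:]]

    X = np.random.default_rng(0).normal(size=(90, 8))
    seeds = entropy_weighted_seeds(X, k=3)
    labels = KMeans(n_clusters=3, init=seeds, n_init=1).fit_predict(X)
    ```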

  2. Development of high performance cladding materials

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    The irradiation test for HANA claddings was conducted, and a series of evaluations of next-generation HANA claddings as well as their in-pile and out-of-pile performance tests were also carried out at the Halden research reactor. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, such as corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step of the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than the criteria regulated by the NRC. The manufacturing process for strips was established in order to apply HANA alloys, which were originally developed for the claddings, to the spacer grids. 250 kinds of model alloys for the next-generation claddings were designed and manufactured over 4 rounds and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high-temperature oxidation resistance compared to the foreign advanced claddings. We established the manufacturing condition controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps

  3. A Linux Workstation for High Performance Graphics

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  4. The path toward HEP High Performance Computing

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  5. The Social Life of Learning Analytics: Cluster Analysis and the 'Performance' of Algorithmic Education

    Perrotta, Carlo; Williamson, Ben

    2018-01-01

    This paper argues that methods used for the classification and measurement of online education are not neutral and objective, but involved in the creation of the educational realities they claim to measure. In particular, the paper draws on material semiotics to examine cluster analysis as a 'performative device' that, to a significant extent,…

  6. The impact of export performance resources of companies belonging to clusters: a study in the French winery industry

    Aurora Carneiro Zen

    2014-12-01

    Full Text Available The purpose of this paper was to analyze the impact of resources on the export performance of clustered companies. We argue that insertion in clusters provides access to resources that influence the internationalization process of firms. We conducted a survey in the French wine industry, the main consumer market by volume and the second largest producer of wine in the world. The population of the study includes exporting French wineries located in clusters. The sample consists of 130 French wine exporters, located in different wine clusters. In short, the results indicated that access to cluster resources has a positive impact on the process of internationalization and on the export performance of companies. One managerial implication of the research is the importance of commercial resources: the firms with higher export performance attributed greater importance to their commercial resources. Further studies may measure the utilization of resources in the internationalization strategy, and compare the importance and use of resources in accordance with the level of export performance of companies.

  7. High Performance Commercial Fenestration Framing Systems

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of the building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and long service life, which is required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  8. Fracture toughness of ultra high performance concrete by flexural performance

    Manolova Emanuela

    2016-01-01

    Full Text Available This paper describes the fracture toughness of an innovative structural material, Ultra High Performance Concrete (UHPC), evaluated by flexural performance. Adapted standard test methods for the flexural performance of fiber-reinforced concrete (ASTM C 1609 and ASTM C 1018) are used to determine the material behaviour under static loading. Fracture toughness is estimated by various deformation parameters derived from the load-deflection curve, obtained by testing a simply supported beam under third-point loading, using a servo-controlled testing system. This method is used to estimate the contribution of the embedded fiber reinforcement to the improvement of the fracture behaviour of UHPC through changes in crack-resistance capacity, fracture toughness and energy absorption capacity arising from various mechanisms. The position of the first crack has been determined based on the P-δ (load-deflection) response and the P-ε (load-longitudinal deformation in the tensile zone) response, which are used for calculation of the two toughness indices I5 and I10. The combination of steel fibres with different dimensions leads to a composite having, at the same time, increased crack resistance, delayed first crack formation, ductility and post-peak residual strength.

  9. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  10. Maintenance of Velocity and Power With Cluster Sets During High-Volume Back Squats.

    Tufano, James J; Conlon, Jenny A; Nimphius, Sophia; Brown, Lee E; Seitz, Laurent B; Williamson, Bryce D; Haff, G Gregory

    2016-10-01

    To compare the effects of a traditional set structure and 2 cluster set structures on force, velocity, and power during back squats in strength-trained men. Twelve men (25.8 ± 5.1 y, 1.74 ± 0.07 m, 79.3 ± 8.2 kg) performed 3 sets of 12 repetitions at 60% of 1-repetition maximum using 3 different set structures: traditional sets (TS), cluster sets of 4 (CS4), and cluster sets of 2 (CS2). When averaged across all repetitions, peak velocity (PV), mean velocity (MV), peak power (PP), and mean power (MP) were greater in CS2 and CS4 than in TS (P < .01), with CS2 also resulting in greater values than CS4 (P < .02). When examining individual sets within each set structure, PV, MV, PP, and MP decreased during the course of TS (effect sizes 0.28-0.99), whereas no decreases were noted during CS2 (effect sizes 0.00-0.13) or CS4 (effect sizes 0.00-0.29). These results demonstrate that CS structures maintain velocity and power, whereas TS structures do not. Furthermore, increasing the frequency of intraset rest intervals in CS structures maximizes this effect and should be used if maximal velocity is to be maintained during training.

  11. Ferric hydroxide supported gold subnano clusters or quantum dots: enhanced catalytic performance in chemoselective hydrogenation.

    Liu, Lequan; Qiao, Botao; Ma, Yubo; Zhang, Juan; Deng, Youquan

    2008-05-21

    An attempt to prepare ferric hydroxide supported Au subnano clusters via modified co-precipitation without any calcination was made. High resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS) have been employed to study the structure and chemical states of these catalysts. No Au species could be observed in the HRTEM image nor from the XRD pattern, suggesting that the sizes of the Au species in and on the ferric hydroxide support were less than or around 1 nm. Chemoselective hydrogenation of aromatic nitro compounds and alpha,beta-unsaturated aldehydes was selected as a probe reaction to examine the catalytic properties of this catalyst. Under the same reaction conditions, such as 100 degrees C and 1 MPa H2 in the hydrogenation of aromatic nitro compounds, a 96-99% conversion (except for 4-nitrobenzonitrile) with 99% selectivity was obtained over the ferric hydroxide supported Au catalyst, and the TOF values were 2-6 times higher than that of the corresponding ferric oxide supported catalyst with 3-5 nm size Au particles. For further evaluation of this Au catalyst in the hydrogenation of citral and cinnamaldehyde, selectivity towards unsaturated alcohols was 2-20 times higher than that of the corresponding ferric oxide Au catalyst.

  12. HIGH PERFORMANCE CERIA BASED OXYGEN MEMBRANE

    2014-01-01

    The invention describes a new class of highly stable mixed conducting materials based on acceptor doped cerium oxide (CeO2-8 ) in which the limiting electronic conductivity is significantly enhanced by co-doping with a second element or co- dopant, such as Nb, W and Zn, so that cerium and the co......-dopant have an ionic size ratio between 0.5 and 1. These materials can thereby improve the performance and extend the range of operating conditions of oxygen permeation membranes (OPM) for different high temperature membrane reactor applications. The invention also relates to the manufacturing of supported...

  13. Playa: High-Performance Programmable Linear Algebra

    Victoria E. Howle

    2012-01-01

    Full Text Available This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.

  14. Optimizing the design of very high power, high performance converters

    Edwards, R.J.; Tiagha, E.A.; Ganetis, G.; Nawrocky, R.J.

    1980-01-01

    This paper describes how various technologies are used to achieve the desired performance in a high current magnet power converter system. It is hoped that the discussions of the design approaches taken will be applicable to other power supply systems where stringent requirements in stability, accuracy and reliability must be met

  15. Robust High Performance Aquaporin based Biomimetic Membranes

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

    on top of a support membrane. Control membranes, either without aquaporins or with the inactive AqpZ R189A mutant aquaporin served as controls. The separation performance of the membranes was evaluated by cross-flow forward osmosis (FO) and reverse osmosis (RO) tests. In RO the ABM achieved a water......Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect...... permeability of ~ 4 L/(m2 h bar) with a NaCl rejection > 97% at an applied hydraulic pressure of 5 bar. The water permeability was ~40% higher compared to a commercial brackish water RO membrane (BW30) and an order of magnitude higher compared to a seawater RO membrane (SW30HR). In FO, the ABMs had > 90...

  16. Evaluation of high-performance computing software

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up to date comparative evaluations of high-performance computing software complicates a user`s search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feed-back and making it available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  17. High performance cloud auditing and applications

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  18. Monitoring SLAC High Performance UNIX Computing Systems

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process took in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface

  19. High performance parallel computers for science

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor, has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction

  20. Toward a theory of high performance.

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart-have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  1. High-performance phase-field modeling

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  2. AHPCRC - Army High Performance Computing Research Center

    2010-01-01

    computing. Of particular interest is the ability of a distrib- uted jamming network (DJN) to jam signals in all or part of a sensor or communications net...and reasoning, assistive technologies. FRIEDRICH (FRITZ) PRINZ Finmeccanica Professor of Engineering, Robert Bosch Chair, Department of Engineering...High Performance Computing Research Center www.ahpcrc.org BARBARA BRYAN AHPCRC Research and Outreach Manager, HPTi (650) 604-3732 bbryan@hpti.com Ms

  3. Performance concerns for high duty fuel cycle

    Esposito, V.J.; Gutierrez, J.E.

    1999-01-01

    One of the goals of the nuclear industry is to achieve economic performance such that nuclear power plants are competitive in a de-regulated market. The manner in which nuclear fuel is designed and operated lies at the heart of economic viability. In this sense reliability, operating flexibility and low costs are the three major requirements of the NPP today. The translation of these three requirements to the design is part of our work. The challenge today is to produce a fuel design which will operate with long operating cycles, high discharge burnup, power up-rating and while still maintaining all design and safety margins. European Fuel Group (EFG) understands that to achieve the required performance high duty/energy fuel designs are needed. The concerns for high duty design includes, among other items, core design methods, advanced Safety Analysis methodologies, performance models, advanced material and operational strategies. The operational aspects require the trade-off and evaluation of various parameters including coolant chemistry control, material corrosion, boiling duty, boron level impacts, etc. In this environment MAEF is the design that EFG is now offering based on ZIRLO alloy and a robust skeleton. This new design is able to achieve 70 GWd/tU and Lead Test Programs are being executed to demonstrate this capability. A number of performance issues which have been a concern with current designs have been resolved such as cladding corrosion and incomplete RCCA insertion (IRI). As the core duty becomes more aggressive other new issues need to be addressed such as Axial Offset Anomaly. These new issues are being addressed by combination of the new design in concert with advanced methodologies to meet the demanding needs of NPP. The ability and strategy to meet high duty core requirements, flexibility of operation and maintain acceptable balance of all technical issues is the discussion in this paper. (authors)

  4. DURIP: High Performance Computing in Biomathematics Applications

    2017-05-10

    Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of...Computing in Biomathematics Applications Report Title The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and...DURIP: High Performance Computing in Biomathematics Applications The goal of this award was to enhance the capabilities of the Department of Applied

  5. High Performance Computing Operations Review Report

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  6. Planning for high performance project teams

    Reed, W.; Keeney, J.; Westney, R.

    1997-01-01

    Both industry-wide research and corporate benchmarking studies confirm the significant savings in cost and time that result from early planning of a project. Amoco's Team Planning Workshop combines long-term strategic project planning and short-term tactical planning with team building to provide the basis for high performing project teams, better project planning, and effective implementation of the Amoco Common Process for managing projects

  7. Implementation of High-Order Multireference Coupled-Cluster Methods on Intel Many Integrated Core Architecture.

    Aprà, E; Kowalski, K

    2016-03-08

    In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.

  8. Computational Biology and High Performance Computing 2000

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  9. Long-term memory and volatility clustering in high-frequency price changes

    oh, Gabjin; Kim, Seunghwan; Eom, Cheoljun

    2008-02-01

    We studied the long-term memory in diverse stock market indices and foreign exchange rates using Detrended Fluctuation Analysis (DFA). For all high-frequency market data studied, no significant long-term memory property was detected in the return series, while a strong long-term memory property was found in the volatility time series. The possible causes of the long-term memory property were investigated using the return data filtered by the AR(1) model, reflecting the short-term memory property, the GARCH(1,1) model, reflecting the volatility clustering property, and the FIGARCH model, reflecting the long-term memory property of the volatility time series. The memory effect in the AR(1) filtered return and volatility time series remained unchanged, while the long-term memory property diminished significantly in the volatility series of the GARCH(1,1) filtered data. Notably, there is no long-term memory property, when we eliminate the long-term memory property of volatility by the FIGARCH model. For all data used, although the Hurst exponents of the volatility time series changed considerably over time, those of the time series with the volatility clustering effect removed diminish significantly. Our results imply that the long-term memory property of the volatility time series can be attributed to the volatility clustering observed in the financial time series.

  10. A new experimental setup for high-pressure catalytic activity measurements on surface deposited mass-selected Pt clusters

    Watanabe, Yoshihide; Isomura, Noritake

    2009-01-01

    A new experimental setup to study catalytic and electronic properties of size-selected clusters on metal oxide substrates from the viewpoint of cluster-support interaction and to formulate a method for the development of heterogeneous catalysts such as automotive exhaust catalysts has been developed. The apparatus consists of a size-selected cluster source, a photoemission spectrometer, a scanning tunneling microscope (STM), and a high-pressure reaction cell. The high-pressure reaction cell measurements provided information on catalytic properties in conditions close to practical use. The authors investigated size-selected platinum clusters deposited on a TiO 2 (110) surface using a reaction cell and STM. Catalytic activity measurements showed that the catalytic activities have a cluster-size dependency.

  11. A Clustered Repeated-Sprint Running Protocol for Team-Sport Athletes Performed in Normobaric Hypoxia

    Jaime Morrison, Chris McLellan, Clare Minahan

    2015-12-01

    Full Text Available The present study compared the performance (peak speed, distance, and acceleration of ten amateur team-sport athletes during a clustered (i.e., multiple sets repeated-sprint protocol, (4 sets of 4, 4-s running sprints; i.e., RSR444 in normobaric normoxia (FiO2 = 0.209; i.e., RSN with normobaric hypoxia (FiO2 = 0.140; i.e., RSH. Subjects completed two separate trials (i. RSN, ii. RSH; randomised order between 48 h and 72 h apart on a non-motorized treadmill. In addition to performance, we examined blood lactate concentration [La-] and arterial oxygen saturation (SpO2 before, during, and after the RSR444. While there were no differences in peak speed or distance during set 1 or set 2, peak speed (p = 0.04 and 0.02, respectively and distance (p = 0.04 and 0.02, respectively were greater during set 3 and set 4 of RSN compared with RSH. There was no difference in the average acceleration achieved in set 1 (p = 0.45, set 2 (p = 0.26, or set 3 (p = 0.23 between RSN and RSH; however, the average acceleration was greater in RSN than RSH in set 4 (p < 0.01. Measurements of [La-] were higher during RSH than RSN immediately after Sprint 16 (10.2 ± 2.5 vs 8.6 ± 2.6 mM; p = 0.02. Estimations of SpO2 were lower during RSH than RSN, respectively, immediately prior to the commencement of the test (89.0 ± 2.0 vs 97.2 ± 1.5 %, post Sprint 8 (78.0 ± 6.3 vs 93.8 ± 3.6 % and post Sprint 16 (75.3 ± 6.3 vs 94.5 ± 2.5 %; all p < 0.01. In summary, the RSR444 is a practical protocol for the implementation of a hypoxic repeated-sprint training intervention into the training schedules of team-sport athletes. However, given the inability of amateur team-sport athletes to maintain performance in hypoxic (FiO2 = 0.140 conditions, the potential for specific training outcomes (i.e. speed to be achieved will be compromised, thus suggesting that the RSR444 should be used with caution.

  12. High performance separation of lanthanides and actinides

    Sivaraman, N.; Vasudeva Rao, P.R.

    2011-01-01

    The major advantage of High Performance Liquid Chromatography (HPLC) is its ability to provide rapid and high performance separations. It is evident from Van Deemter curve for particle size versus resolution that packing materials with particle sizes less than 2 μm provide better resolution for high speed separations and resolving complex mixtures compared to 5 μm based supports. In the recent past, chromatographic support material using monolith has been studied extensively at our laboratory. Monolith column consists of single piece of porous, rigid material containing mesopores and micropores, which provide fast analyte mass transfer. Monolith support provides significantly higher separation efficiency than particle-packed columns. A clear advantage of monolith is that it could be operated at higher flow rates but with lower back pressure. Higher operating flow rate results in higher column permeability, which drastically reduces analysis time and provides high separation efficiency. The above developed fast separation methods were applied to assay the lanthanides and actinides from the dissolver solutions of nuclear reactor fuels

  13. Cluster Mass Calibration at High Redshift: HST Weak Lensing Analysis of 13 Distant Galaxy Clusters from the South Pole Telescope Sunyaev-Zel'dovich Survey

    Schrabback, T.; et al.

    2016-11-11

    We present an HST/ACS weak gravitational lensing analysis of 13 massive high-redshift (z_median=0.88) galaxy clusters discovered in the South Pole Telescope (SPT) Sunyaev-Zel'dovich Survey. This study is part of a larger campaign that aims to robustly calibrate mass-observable scaling relations over a wide range in redshift to enable improved cosmological constraints from the SPT cluster sample. We introduce new strategies to ensure that systematics in the lensing analysis do not degrade constraints on cluster scaling relations significantly. First, we efficiently remove cluster members from the source sample by selecting very blue galaxies in V-I colour. Our estimate of the source redshift distribution is based on CANDELS data, where we carefully mimic the source selection criteria of the cluster fields. We apply a statistical correction for systematic photometric redshift errors as derived from Hubble Ultra Deep Field data and verified through spatial cross-correlations. We account for the impact of lensing magnification on the source redshift distribution, finding that this is particularly relevant for shallower surveys. Finally, we account for biases in the mass modelling caused by miscentring and uncertainties in the mass-concentration relation using simulations. In combination with temperature estimates from Chandra we constrain the normalisation of the mass-temperature scaling relation ln(E(z) M_500c/10^14 M_sun)=A+1.5 ln(kT/7.2keV) to A=1.81^{+0.24}_{-0.14}(stat.) +/- 0.09(sys.), consistent with self-similar redshift evolution when compared to lower redshift samples. Additionally, the lensing data constrain the average concentration of the clusters to c_200c=5.6^{+3.7}_{-1.8}.

  14. Cluster Mass Calibration at High Redshift: HST Weak Lensing Analysis of 13 Distant Galaxy Clusters from the South Pole Telescope Sunyaev-Zel’dovich Survey

    Schrabback, T.; Applegate, D.; Dietrich, J. P.; Hoekstra, H.; Bocquet, S.; Gonzalez, A. H.; der Linden, A. von; McDonald, M.; Morrison, C. B.; Raihan, S. F.; Allen, S. W.; Bayliss, M.; Benson, B. A.; Bleem, L. E.; Chiu, I.; Desai, S.; Foley, R. J.; de Haan, T.; High, F. W.; Hilbert, S.; Mantz, A. B.; Massey, R.; Mohr, J.; Reichardt, C. L.; Saro, A.; Simon, P.; Stern, C.; Stubbs, C. W.; Zenteno, A.

    2017-10-14

    We present an HST/Advanced Camera for Surveys (ACS) weak gravitational lensing analysis of 13 massive high-redshift (z(median) = 0.88) galaxy clusters discovered in the South Pole Telescope (SPT) Sunyaev-Zel'dovich Survey. This study is part of a larger campaign that aims to robustly calibrate mass-observable scaling relations over a wide range in redshift to enable improved cosmological constraints from the SPT cluster sample. We introduce new strategies to ensure that systematics in the lensing analysis do not degrade constraints on cluster scaling relations significantly. First, we efficiently remove cluster members from the source sample by selecting very blue galaxies in V - I colour. Our estimate of the source redshift distribution is based on Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) data, where we carefully mimic the source selection criteria of the cluster fields. We apply a statistical correction for systematic photometric redshift errors as derived from Hubble Ultra Deep Field data and verified through spatial cross-correlations. We account for the impact of lensing magnification on the source redshift distribution, finding that this is particularly relevant for shallower surveys. Finally, we account for biases in the mass modelling caused by miscentring and uncertainties in the concentration-mass relation using simulations. In combination with temperature estimates from Chandra we constrain the normalization of the mass-temperature scaling relation ln (E(z) M-500c/10(14)M(circle dot)) = A + 1.5ln (kT/7.2 keV) to A = 1.81(-0.14)(+0.24)(stat.)+/- 0.09(sys.), consistent with self-similar redshift evolution when compared to lower redshift samples. Additionally, the lensing data constrain the average concentration of the clusters to c(200c) = 5.6(-1.8)(+3.7).

  15. Cluster mass calibration at high redshift: HST weak lensing analysis of 13 distant galaxy clusters from the South Pole Telescope Sunyaev-Zel'dovich Survey

    Schrabback, T.; Applegate, D.; Dietrich, J. P.; Hoekstra, H.; Bocquet, S.; Gonzalez, A. H.; von der Linden, A.; McDonald, M.; Morrison, C. B.; Raihan, S. F.; Allen, S. W.; Bayliss, M.; Benson, B. A.; Bleem, L. E.; Chiu, I.; Desai, S.; Foley, R. J.; de Haan, T.; High, F. W.; Hilbert, S.; Mantz, A. B.; Massey, R.; Mohr, J.; Reichardt, C. L.; Saro, A.; Simon, P.; Stern, C.; Stubbs, C. W.; Zenteno, A.

    2018-02-01

    We present an HST/Advanced Camera for Surveys (ACS) weak gravitational lensing analysis of 13 massive high-redshift (zmedian = 0.88) galaxy clusters discovered in the South Pole Telescope (SPT) Sunyaev-Zel'dovich Survey. This study is part of a larger campaign that aims to robustly calibrate mass-observable scaling relations over a wide range in redshift to enable improved cosmological constraints from the SPT cluster sample. We introduce new strategies to ensure that systematics in the lensing analysis do not degrade constraints on cluster scaling relations significantly. First, we efficiently remove cluster members from the source sample by selecting very blue galaxies in V - I colour. Our estimate of the source redshift distribution is based on Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) data, where we carefully mimic the source selection criteria of the cluster fields. We apply a statistical correction for systematic photometric redshift errors as derived from Hubble Ultra Deep Field data and verified through spatial cross-correlations. We account for the impact of lensing magnification on the source redshift distribution, finding that this is particularly relevant for shallower surveys. Finally, we account for biases in the mass modelling caused by miscentring and uncertainties in the concentration-mass relation using simulations. In combination with temperature estimates from Chandra we constrain the normalization of the mass-temperature scaling relation ln (E(z)M500c/1014 M⊙) = A + 1.5ln (kT/7.2 keV) to A=1.81^{+0.24}_{-0.14}(stat.) {± } 0.09(sys.), consistent with self-similar redshift evolution when compared to lower redshift samples. Additionally, the lensing data constrain the average concentration of the clusters to c_200c=5.6^{+3.7}_{-1.8}.

  16. High Performance OLED Panel and Luminaire

    Spindler, Jeffrey [OLEDWorks LLC, Rochester, NY (United States)

    2017-02-20

    In this project, OLEDWorks developed and demonstrated the technology required to produce OLED lighting panels with high energy efficiency and excellent light quality. OLED panels developed in this program produce high quality warm white light with CRI greater than 85 and efficacy up to 80 lumens per watt (LPW). An OLED luminaire employing 24 of the high performance panels produces practical levels of illumination for general lighting, with a flux of over 2200 lumens at 60 LPW. This is a significant advance in the state of the art for OLED solid-state lighting (SSL), which is expected to be a complementary light source to the more advanced LED SSL technology that is rapidly replacing all other traditional forms of lighting.

  17. Random matrix improved subspace clustering

    Couillet, Romain

    2017-03-06

    This article introduces a spectral method for statistical subspace clustering. The method is built upon standard kernel spectral clustering techniques, however carefully tuned by theoretical understanding arising from random matrix findings. We show in particular that our method provides high clustering performance while standard kernel choices provably fail. An application to user grouping based on vector channel observations in the context of massive MIMO wireless communication networks is provided.

  18. The path toward HEP High Performance Computing

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yield limited results and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP cannot any longer neglect the less-than-optimal performance of its code and it has to try making the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 project. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a highperformance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speedup the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit

  19. A High Performance COTS Based Computer Architecture

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long standing idea. Indeed the difference in processing performance and energy efficiency between radiation hardened components and COTS components is so important that COTS components are very attractive for use in mass and power constrained systems. However using COTS components in space is not straightforward as one must account with the effects of the space environment on the COTS components behavior. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in a first section we will start by recapitulating the interests and constraints of using COTS components for space applications; then we will briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we will describe the prototyping activities executed during the HiP CBC project.

  20. Management issues for high performance storage systems

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modem storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  1. Formation of globular cluster candidates in merging proto-galaxies at high redshift: a view from the FIRE cosmological simulations

    Kim, Ji-hoon; Ma, Xiangcheng; Grudić, Michael Y.; Hopkins, Philip F.; Hayward, Christopher C.; Wetzel, Andrew; Faucher-Giguère, Claude-André; Kereš, Dušan; Garrison-Kimmel, Shea; Murray, Norman

    2018-03-01

    Using a state-of-the-art cosmological simulation of merging proto-galaxies at high redshift from the FIRE project, with explicit treatments of star formation and stellar feedback in the interstellar medium, we investigate the formation of star clusters and examine one of the formation hypotheses of present-day metal-poor globular clusters. We find that frequent mergers in high-redshift proto-galaxies could provide a fertile environment to produce long-lasting bound star clusters. The violent merger event disturbs the gravitational potential and pushes a large gas mass of ≳ 105-6 M⊙ collectively to high density, at which point it rapidly turns into stars before stellar feedback can stop star formation. The high dynamic range of the reported simulation is critical in realizing such dense star-forming clouds with a small dynamical time-scale, tff ≲ 3 Myr, shorter than most stellar feedback time-scales. Our simulation then allows us to trace how clusters could become virialized and tightly bound to survive for up to ˜420 Myr till the end of the simulation. Because the cluster's tightly bound core was formed in one short burst, and the nearby older stars originally grouped with the cluster tend to be preferentially removed, at the end of the simulation the cluster has a small age spread.

  2. Enhancement of high-order harmonics in a plasma waveguide formed in clustered Ar gas.

    Geng, Xiaotao; Zhong, Shiyang; Chen, Guanglong; Ling, Weijun; He, Xinkui; Wei, Zhiyi; Kim, Dong Eon

    2018-02-05

    Generation of high-order harmonics (HHs) is intensified by using a plasma waveguide created by a laser in a clustered gas jet. The formation of a plasma waveguide and the guiding of a laser beam are also demonstrated. Compared to the case without a waveguide, harmonics were strengthened up to nine times, and blue-shifted. Numerical simulation by solving the time-dependent Schrödinger equation in strong field approximation agreed well with experimental results. This result reveals that the strengthening is the result of improved phase matching and that the blue shift is a result of change in fundamental laser frequency due to self-phase modulation (SPM).

  3. Automatic Energy Schemes for High Performance Applications

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnect, such as Infiniband, may be exploited to maximize energy savings while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather and proposes energy saving strategies on the per-call basis. Next, it targets point-to-point communications to group them into phases and apply frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them apart from DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for the realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.

  4. Three exciting areas of experimental physical sciences : high temperature superconductors, metal clusters and super molecules of carbon

    Rao, C.N.

    1992-01-01

    The author has narrated his experience in carrying out research in three exciting areas of physical sciences. These areas are : high temperature superconductors, metal clusters and super molecules of carbon. (M.G.B.)

  5. High-performance computing in seismology

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  6. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  7. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipments: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost. 1 fig

  8. High performance computing in linear control

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  9. Building Trust in High-Performing Teams

    Aki Soudunsaari

    2012-06-01

    Full Text Available Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  10. Updated teaching techniques improve CPR performance measures: a cluster randomized, controlled trial.

    Ettl, Florian; Testori, Christoph; Weiser, Christoph; Fleischhackl, Sabine; Mayer-Stickler, Monika; Herkner, Harald; Schreiber, Wolfgang; Fleischhackl, Roman

    2011-06-01

    The first-aid training necessary for obtaining a drivers license in Austria has a regulated and predefined curriculum but has been targeted for the implementation of a new course structure with less theoretical input, repetitive training in cardiopulmonary resuscitation (CPR) and structured presentations using innovative media. The standard and a new course design were compared with a prospective, participant- and observer-blinded, cluster-randomized controlled study. Six months after the initial training, we evaluated the confidence of the 66 participants in their skills, CPR effectiveness parameters and correctness of their actions. The median self-confidence was significantly higher in the interventional group [IG, visual analogue scale (VAS:"0" not-confident at all,"100" highly confident):57] than in the control group (CG, VAS:41). The mean chest compression rate in the IG (98/min) was closer to the recommended 100 bpm than in the CG (110/min). The time to the first chest compression (IG:25s, CG:36s) and time to first defibrillator shock (IG:86s, CG:92s) were significantly shorter in the IG. Furthermore, the IG participants were safer in their handling of the defibrillator and started with countermeasures against developing shock more often. The management of an unconscious person and of heavy bleeding did not show a difference between the two groups even after shortening the lecture time. Motivation and self-confidence as well as skill retention after six months were shown to be dependent on the teaching methods and the time for practical training. Courses may be reorganized and content rescheduled, even within predefined curricula, to improve course outcomes. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  11. Improving UV Resistance of High Performance Fibers

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to the traditional textile fibers. High strength fibers have high modules, high strength to weight ratio, high chemical resistance, and usually high temperature resistance. It is used in application where superior properties are needed such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, Ultraviolet (UV) radiation causes serious degradation to the most of high performance fibers. UV lights, either natural or artificial, cause organic compounds to decompose and degrade, because the energy of the photons of UV light is high enough to break chemical bonds causing chain scission. This work is aiming at achieving maximum protection of high performance fibers using sheathing approaches. The sheaths proposed are of lightweight to maintain the advantage of the high performance fiber that is the high strength to weight ratio. This study involves developing three different types of sheathing. The product of interest that need be protected from UV is braid from PBO. First approach is extruding a sheath from Low Density Polyethylene (LDPE) loaded with different rutile TiO2 % nanoparticles around the braid from the PBO. The results of this approach showed that LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compare to 0% and 5% TiO2. The protection here is judged by strength loss of PBO. This trend noticed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiations in different weatheromter equipments as well as exposure to high altitude environment using NASA BRDL balloon. The second approach is focusing in developing a protective porous membrane from polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  12. Intel Xeon Phi coprocessor high performance programming

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  13. Development of high-performance blended cements

    Wu, Zichao

    2000-10-01

    This thesis presents the development of high-performance blended cements from industrial by-products. To overcome the low-early strength of blended cements, several chemicals were studied as the activators for cement hydration. Sodium sulfate was discovered as the best activator. The blending proportions were optimized by Taguchi experimental design. The optimized blended cements containing up to 80% fly ash performed better than Type I cement in strength development and durability. Maintaining a constant cement content, concrete produced from the optimized blended cements had equal or higher strength and higher durability than that produced from Type I cement alone. The key for the activation mechanism was the reaction between added SO4 2- and Ca2+ dissolved from cement hydration products.

  14. High performance embedded system for real-time pattern matching

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.

  15. High performance embedded system for real-time pattern matching

    Sotiropoulou, C.-L., E-mail: c.sotiropoulou@cern.ch [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Luciano, P. [University of Cassino and Southern Lazio, Gaetano di Biasio 43, Cassino 03043 (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Gkaitatzis, S. [Aristotle University of Thessaloniki, 54124 Thessaloniki (Greece); Citraro, S. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Giannetti, P. [INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Dell' Orso, M. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy)

    2017-02-11

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.

  16. High performance data acquisition with InfiniBand

    Adamczewski, Joern; Essel, Hans G.; Kurz, Nikolaus; Linev, Sergey

    2008-01-01

    For the new experiments at FAIR new concepts of data acquisition systems have to be developed like the distribution of self-triggered, time stamped data streams over high performance networks for event building. In this concept any data filtering is done behind the network. Therefore the network must achieve up to 1 GByte/s bi-directional data transfer per node. Detailed simulations have been done to optimize scheduling mechanisms for such event building networks. For real performance tests InfiniBand has been chosen as one of the fastest available network technology. The measurements of network event building have been performed on different Linux clusters from four to over hundred nodes. Several InfiniBand libraries have been tested like uDAPL, Verbs, or MPI. The tests have been integrated in the data acquisition backbone core software DABC, a general purpose data acquisition library. Detailed results are presented. In the worst cases (over hundred nodes) 50% of the required bandwidth can be already achieved. It seems possible to improve these results by further investigations

  17. Evolution of Late-type Galaxies in a Cluster Environment: Effects of High-speed Multiple Encounters with Early-type Galaxies

    Hwang, Jeong-Sun; Park, Changbom; Banerjee, Arunima; Hwang, Ho Seong

    2018-04-01

    Late-type galaxies falling into a cluster would evolve being influenced by the interactions with both the cluster and the nearby cluster member galaxies. Most numerical studies, however, tend to focus on the effects of the former with little work done on those of the latter. We thus perform a numerical study on the evolution of a late-type galaxy interacting with neighboring early-type galaxies at high speed using hydrodynamic simulations. Based on the information obtained from the Coma cluster, we set up the simulations for the case where a Milky Way–like late-type galaxy experiences six consecutive collisions with twice as massive early-type galaxies having hot gas in their halos at the closest approach distances of 15–65 h ‑1 kpc at the relative velocities of 1500–1600 km s‑1. Our simulations show that the evolution of the late-type galaxy can be significantly affected by the accumulated effects of the high-speed multiple collisions with the early-type galaxies, such as on cold gas content and star formation activity of the late-type galaxy, particularly through the hydrodynamic interactions between cold disk and hot gas halos. We find that the late-type galaxy can lose most of its cold gas after the six collisions and have more star formation activity during the collisions. By comparing our simulation results with those of galaxy–cluster interactions, we claim that the role of the galaxy–galaxy interactions on the evolution of late-type galaxies in clusters could be comparable with that of the galaxy–cluster interactions, depending on the dynamical history.

  18. Utilities for high performance dispersion model PHYSIC

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC were summarized. The model was developed in the study of developing high performance SPEEDI with the purpose of introducing meteorological forecast function into the environmental emergency response system. The procedure of PHYSIC calculation consists of three steps; preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  19. An integrated high performance fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1992-01-01

    A high performance Fastbus slave interface ASIC is presented. The Fastbus slave integrated circuit (FASIC) is a programmable device, enabling its direct use in many different applications. The FASIC acts as an interface between Fastbus and a 'standard' processor/memory bus. It can work stand-alone or together with a microprocessor. A set of address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/s to Fastbus can be obtained using an internal FIFO buffer in the FASIC. (orig.)

  20. Assessing the performance of dispersionless and dispersion-accounting methods: helium interaction with cluster models of the TiO2(110) surface.

    de Lara-Castells, María Pilar; Stoll, Hermann; Mitrushchenkov, Alexander O

    2014-08-21

    As a prototypical dispersion-dominated physisorption problem, we analyze here the performance of dispersionless and dispersion-accounting methodologies on the helium interaction with cluster models of the TiO2(110) surface. A special focus has been given to the dispersionless density functional dlDF and the dlDF+Das construction for the total interaction energy (K. Pernal, R. Podeswa, K. Patkowski, and K. Szalewicz, Phys. Rev. Lett. 2009, 109, 263201), where Das is an effective interatomic pairwise functional form for the dispersion. Likewise, the performance of symmetry-adapted perturbation theory (SAPT) method is evaluated, where the interacting monomers are described by density functional theory (DFT) with the dlDF, PBE, and PBE0 functionals. Our benchmarks include CCSD(T)-F12b calculations and comparative analysis on the nuclear bound states supported by the He-cluster potentials. Moreover, intra- and intermonomer correlation contributions to the physisorption interaction are analyzed through the method of increments (H. Stoll, J. Chem. Phys. 1992, 97, 8449) at the CCSD(T) level of theory. This method is further applied in conjunction with a partitioning of the Hartree-Fock interaction energy to estimate individual interaction energy components, comparing them with those obtained using the different SAPT(DFT) approaches. The cluster size evolution of dispersionless and dispersion-accounting energy components is then discussed, revealing the reduced role of the dispersionless interaction and intramonomer correlation when the extended nature of the surface is better accounted for. On the contrary, both post-Hartree-Fock and SAPT(DFT) results clearly demonstrate the high-transferability character of the effective pairwise dispersion interaction whatever the cluster model is. Our contribution also illustrates how the method of increments can be used as a valuable tool not only to achieve the accuracy of CCSD(T) calculations using large cluster models but also to

  1. High performance visual display for HENP detectors

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful work station. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on BNL multiprocessor visualization server at multiple level of detail. We work with general and generic detector framework consistent with ROOT, GAUDI etc, to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in BNL stereoscopic theatre. We construct enhanced interactiv...

  2. High-Performance Vertical Organic Electrochemical Transistors.

    Donahue, Mary J; Williamson, Adam; Strakosas, Xenofon; Friedlein, Jacob T; McLeod, Robert R; Gleskova, Helena; Malliaras, George G

    2018-02-01

    Organic electrochemical transistors (OECTs) are promising transducers for biointerfacing due to their high transconductance, biocompatibility, and availability in a variety of form factors. Most OECTs reported to date, however, utilize rather large channels, limiting the transistor performance and resulting in a low transistor density. This is typically a consequence of limitations associated with traditional fabrication methods and with 2D substrates. Here, the fabrication and characterization of OECTs with vertically stacked contacts, which overcome these limitations, is reported. The resulting vertical transistors exhibit a reduced footprint, increased intrinsic transconductance of up to 57 mS, and a geometry-normalized transconductance of 814 S m -1 . The fabrication process is straightforward and compatible with sensitive organic materials, and allows exceptional control over the transistor channel length. This novel 3D fabrication method is particularly suited for applications where high density is needed, such as in implantable devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Topological cell clustering in the ATLAS calorimeters and its performance in LHC Run 1

    Aad, G.; Abbott, B.; Abdallah, J.; Chudoba, Jiří; Havránek, Miroslav; Hejbal, Jiří; Jakoubek, Tomáš; Kepka, Oldřich; Kupčo, Alexander; Kůs, Vlastimil; Lokajíček, Miloš; Lysák, Roman; Marčišovský, Michal; Mikeštíková, Marcela; Němeček, Stanislav; Penc, Ondřej; Šícho, Petr; Staroba, Pavel; Svatoš, Michal; Taševský, Marek; Vrba, Václav

    2017-01-01

    Roč. 77, č. 7 (2017), s. 1-87, č. článku 490. ISSN 1434-6044 Institutional support: RVO:68378271 Keywords : CERN LHC Coll * ATLAS * electromagnetic * performance * track data analysis * data analysis method * experimental results * 7000 GeV-cms8000 GeV-cms Subject RIV: BF - Elementary Particles and High Energy Physics OBOR OECD: Particles and field physics Impact factor: 5.331, year: 2016

  4. High Performance Data Distribution for Scientific Community

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA or JAXA find solutions to distribute data from their missions to the scientific community, and their long term archives. This is a complex problem, as it includes a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user to obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP,HTTPS, FTP, GridFTP among others) to obtain the maximum bandwidth, reducing the workload in data server and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform one file download. HIDDRA architecture can be arranged into a data distribution network deployed on several sites that can cooperate to provide former features. HIDDRA has been addressed by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain) that shows a high scalability and performance, opening a wide spectrum of opportunities. Some preliminary results have been published in the Journal of Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009

  5. Model-based Clustering of High-Dimensional Data in Astrophysics

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of the measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or stream. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces which is mainly due to their dramatical over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  6. Enforcing Resource Sharing Agreements Among Distributed Server Clusters

    Zhao, Tao; Karamcheti, Vijay

    2001-01-01

    Future scalable, high throughput, and high performance applications are likely to execute on platforms constructed by clustering multiple autonomous distributed servers, with resource access governed...

  7. High-performance laboratories and cleanrooms; TOPICAL

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-01-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWH for 1996, Mills et al. 1996). With their 24hr. continuous operation, high tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations-primarily safety driven-that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities

  8. Industrial clusters and social networks and their impact on the performance of micro- and small-scale enterprises: evidence from the handloom sector in Ethiopia

    Ali, M.A.

    2012-01-01

    This study empirically investigates how clustering and social networks affect the performance of micro- and small-scale enterprises by looking at the evidence from Ethiopia. By contrasting the performance of clustered micro enterprises with that of dispersed ones, it was first shown that

  9. Transport in JET high performance plasmas

    2001-01-01

    Two type of high performance scenarios have been produced in JET during DTE1 campaign. One of them is the well known and extensively used in the past ELM-free hot ion H-mode scenario which has two distinct regions- plasma core and the edge transport barrier. The results obtained during DTE-1 campaign with D, DT and pure T plasmas confirms our previous conclusion that the core transport scales as a gyroBohm in the inner half of plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier is dependent upon isotope and moreover suggest that fast ions play a key role. The other high performance scenario is a relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of the heavy impurities through the strong ITB which contradicts to a prediction of the conventional neo-classical theory is discussed. (author)

  10. High-performance computing for airborne applications

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there has been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in a accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  11. Transport in JET high performance plasmas

    1999-01-01

    Two type of high performance scenarios have been produced in JET during DTE1 campaign. One of them is the well known and extensively used in the past ELM-free hot ion H-mode scenario which has two distinct regions- plasma core and the edge transport barrier. The results obtained during DTE-1 campaign with D, DT and pure T plasmas confirms our previous conclusion that the core transport scales as a gyroBohm in the inner half of plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier is dependent upon isotope and moreover suggest that fast ions play a key role. The other high performance scenario is a relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of the heavy impurities through the strong ITB which contradicts to a prediction of the conventional neo-classical theory is discussed. (author)

  12. High-performance vertical organic transistors.

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood and VOTFTs often require complex patterning techniques using self-assembly processes which impedes a future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographical patterning directly and strongly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Performance of the CMS High Level Trigger

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  14. Development of a High Performance Spacer Grid

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in a LWR fuel assembly is a key structural component to support fuel rods and to enhance the heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 18 different spacer grid candidates have been invented and applied for domestic and US patents. Among the candidates 16 are chosen from the patent. 2. Two kinds of spacer grids are finally selected for the advanced LWR fuel after detailed performance tests on the candidates and commercial spacer grids from a mechanical/structural point of view. According to the test results the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities are set up and the relevant test technologies are established. 4. Mechanical/structural analysis models and technology for spacer grid performance are developed and the analysis results are compared with the test results to enhance the reliability of the models.

  15. Low cost high performance uncertainty quantification

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.

  16. Energy Efficient Graphene Based High Performance Capacitors.

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research activities have been performed on the investigation of diverse properties of GRP. The incorporation of this elegant material can be very lucrative in terms of practical applications in energy storage/conversion systems. Among various those systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy efficient and portable devices. Therefore, in this article, the application of GRP for capacitors is described succinctly. In particular, a concise summary on the previous research activities regarding GRP based capacitors is also covered extensively. It was revealed that a lot of secondary materials such as polymers and metal oxides have been introduced to improve the performance. Also, diverse devices have been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP based capacitors are also introduced briefly. This article can provide essential information for future study. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  17. SISYPHUS: A high performance seismic inversion factory

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In the recent years the massively parallel high performance computers became the standard instruments for solving the forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) became mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards the maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for the modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  18. Ultra high performance concrete dematerialization study

    NONE

    2004-03-01

    Concrete is the most widely used building material in the world and its use is expected to grow. It is well recognized that the production of portland cement results in the release of large amounts of carbon dioxide, a greenhouse gas (GHG). The main challenge facing the industry is to produce concrete in an environmentally sustainable manner. Reclaimed industrial by-proudcts such as fly ash, silica fume and slag can reduce the amount of portland cement needed to make concrete, thereby reducing the amount of GHGs released to the atmosphere. The use of these supplementary cementing materials (SCM) can also enhance the long-term strength and durability of concrete. The intention of the EcoSmart{sup TM} Concrete Project is to develop sustainable concrete through innovation in supply, design and construction. In particular, the project focuses on finding a way to minimize the GHG signature of concrete by maximizing the replacement of portland cement in the concrete mix with SCM while improving the cost, performance and constructability. This paper describes the use of Ductal{sup R} Ultra High Performance Concrete (UHPC) for ramps in a condominium. It examined the relationship between the selection of UHPC and the overall environmental performance, cost, constructability maintenance and operational efficiency as it relates to the EcoSmart Program. The advantages and challenges of using UHPC were outlined. In addition to its very high strength, UHPC has been shown to have very good potential for GHG emission reduction due to the reduced material requirements, reduced transport costs and increased SCM content. refs., tabs., figs.

  19. JT-60U high performance regimes

    Ishida, S.

    1999-01-01

    High performance regimes of JT-60U plasmas are presented with an emphasis upon the results from the use of a semi-closed pumped divertor with W-shaped geometry. Plasma performance in transient and quasi steady states has been significantly improved in reversed shear and high- βp regimes. The reversed shear regime elevated an equivalent Q DT eq transiently up to 1.25 (n D (0)τ E T i (0)=8.6x10 20 m-3·s·keV) in a reactor-relevant thermonuclear dominant regime. Long sustainment of enhanced confinement with internal transport barriers (ITBs) with a fully non-inductive current drive in a reversed shear discharge was successfully demonstrated with LH wave injection. Performance sustainment has been extended in the high- bp regime with a high triangularity achieving a long sustainment of plasma conditions equivalent to Q DT eq ∼0.16 (n D (0)τ E T i (0)∼1.4x10 20 m -3 ·s·keV) for ∼4.5 s with a large non-inductive current drive fraction of 60-70% of the plasma current. Thermal and particle transport analyses show significant reduction of thermal and particle diffusivities around ITB resulting in a strong Er shear in the ITB region. The W-shaped divertor is effective for He ash exhaust demonstrating steady exhaust capability of τ He */τ E ∼3-10 in support of ITER. Suppression of neutral back flow and chemical sputtering effect have been observed while MARFE onset density is rather decreased. Negative-ion based neutral beam injection (N-NBI) experiments have created a clear H-mode transition. Enhanced ionization cross- section due to multi-step ionization processes was confirmed as theoretically predicted. A current density profile driven by N-NBI is measured in a good agreement with theoretical prediction. N-NBI induced TAE modes characterized as persistent and bursting oscillations have been observed from a low hot beta of h >∼0.1-0.2% without a significant loss of fast ions. (author)

  20. Effect of high hydrostatic pressure on small oxygen-related clusters in silicon: LVM studies

    Murin, L.I.; Lindstroem, J.L.; Misiuk, A.

    2003-01-01

    Local vibrational mode (LVM) spectroscopy is used to explore the effect of high hydrostatic pressure (HP) on the formation of small oxygen-related clusters (dimers, trimers, thermal donors, and C-O complexes) at 450 deg. C and 650 deg. C in Cz-Si crystals with different impurity content and prehistory. It is found, in agreement with previous studies, that HP enhances the oxygen clustering in Cz-Si at elevated temperatures. The effect of HP is related mainly to enhancement in the diffusivity of single oxygen atoms and small oxygen aggregates. HP does not noticeably increase the binding energies of the most simple oxygen related complexes like O 2i , C s O ni . The biggest HP effect on the thermal double donor (TDDs) generation is revealed in hydrogenated samples. Heat-treatment of such samples at 450 deg. C under HP results in extremely high TDD introduction rates as well as in a strong increase in the concentration of the first TDD species

  1. Performance of the coupled thermalhydraulics/neutron kinetics code R/P/C on workstation clusters and multiprocessor systems

    Hammer, C.; Paffrath, M.; Boeer, R.; Finnemann, H.; Jackson, C.J.

    1996-01-01

    The light water reactor core simulation code PANBOX has been coupled with the transient analysis code RELAP5 for the purpose of performing plant safety analyses with a three-dimensional (3-D) neutron kinetics model. The system has been parallelized to improve the computational efficiency. The paper describes the features of this system with emphasis on performance aspects. Performance results are given for different types of parallelization, i. e. for using an automatic parallelizing compiler, using the portable PVM platform on a workstation cluster, using PVM on a shared memory multiprocessor, and for using machine dependent interfaces. (author)

  2. Clusters of word properties as predictors of elementary school children's performance on two word tasks

    Tellings, A.E.J.M.; Coppens, K.M.; Gelissen, J.P.T.M.; Schreuder, R.

    2013-01-01

    Often, the classification of words does not go beyond "difficult" (i.e., infrequent, late-learned, nonimageable, etc.) or "easy" (i.e., frequent, early-learned, imageable, etc.) words. In the present study, we used a latent cluster analysis to divide 703 Dutch words with scores for eight word

  3. High-performance phase-field modeling

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has lead to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds-up on these ideas, and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed-up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  4. High performance visual display for HENP detectors

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful work station. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on BNL multiprocessor visualization server at multiple level of detail. We work with general and generic detector framework consistent with ROOT, GAUDI etc, to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and a fly through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real time visual display for events accumulated during simulations

  5. Development of high performance ODS alloys

    Shao, Lin [Texas A & M Univ., College Station, TX (United States); Gao, Fei [Univ. of Michigan, Ann Arbor, MI (United States); Garner, Frank [Texas A & M Univ., College Station, TX (United States)

    2018-01-29

    This project aims to capitalize on insights developed from recent high-dose self-ion irradiation experiments in order to develop and test the next generation of optimized ODS alloys needed to meet the nuclear community's need for high strength, radiation-tolerant cladding and core components, especially with enhanced resistance to void swelling. Two of these insights are that ferrite grains swell earlier than tempered martensite grains, and oxide dispersions currently produced only in ferrite grains require a high level of uniformity and stability to be successful. An additional insight is that ODS particle stability is dependent on as-yet unidentified compositional combinations of dispersoid and alloy matrix, such as dispersoids are stable in MA957 to doses greater than 200 dpa but dissolve in MA956 at doses less than 200 dpa. These findings focus attention on candidate next-generation alloys which address these concerns. Collaboration with two Japanese groups provides this project with two sets of first-round candidate alloys that have already undergone extensive development and testing for unirradiated properties, but have not yet been evaluated for their irradiation performance. The first set of candidate alloys are dual phase (ferrite + martensite) ODS alloys with oxide particles uniformly distributed in both ferrite and martensite phases. The second set of candidate alloys are ODS alloys containing non-standard dispersoid compositions with controllable oxide particle sizes, phases and interfaces.

  6. Low-Cost High-Performance MRI

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (standards for affordable (<$50,000) and robust portable devices.

  7. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance.The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. A GMBCG galaxy cluster catalog of 55,880 rich clusters from SDSS DR7

    Hao, Jiangang; McKay, Timothy A.; Koester, Benjamin P.; Rykoff, Eli S.; Rozo, Eduardo; Annis, James; Wechsler, Risa H.; Evrard, August; Siegel, Seth R.; Becker, Matthew; Busha, Michael; /Fermilab /Michigan U. /Chicago U., Astron. Astrophys. Ctr. /UC, Santa Barbara /KICP, Chicago /KIPAC, Menlo Park /SLAC /Caltech /Brookhaven

    2010-08-01

    We present a large catalog of optically selected galaxy clusters from the application of a new Gaussian Mixture Brightest Cluster Galaxy (GMBCG) algorithm to SDSS Data Release 7 data. The algorithm detects clusters by identifying the red sequence plus Brightest Cluster Galaxy (BCG) feature, which is unique for galaxy clusters and does not exist among field galaxies. Red sequence clustering in color space is detected using an Error Corrected Gaussian Mixture Model. We run GMBCG on 8240 square degrees of photometric data from SDSS DR7 to assemble the largest ever optical galaxy cluster catalog, consisting of over 55,000 rich clusters across the redshift range from 0.1 < z < 0.55. We present Monte Carlo tests of completeness and purity and perform cross-matching with X-ray clusters and with the maxBCG sample at low redshift. These tests indicate high completeness and purity across the full redshift range for clusters with 15 or more members.

  10. Thermal interface pastes nanostructured for high performance

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 mum), with the bond line thickness less than 1 mum, as enabled by low storage modulus G', low loss modulus G" and high tan delta. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness. A

  11. High performance liquid chromatography in pharmaceutical analyses

    Branko Nikolin

    2004-05-01

    Full Text Available In testing the pre-sale procedure the marketing of drugs and their control in the last ten years, high performance liquid chromatographyreplaced numerous spectroscopic methods and gas chromatography in the quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method of gas chromatography, however, today it has nearly completely replaced gas chromatography in pharmaceutical analysis. The application of the liquid mobile phase with the possibility of transformation of mobilized polarity during chromatography and all other modifications of mobile phase depending upon the characteristics of substance which are being tested, is a great advantage in the process of separation in comparison to other methods. The greater choice of stationary phase is the next factor which enables realization of good separation. The separation line is connected to specific and sensitive detector systems, spectrafluorimeter, diode detector, electrochemical detector as other hyphernated systems HPLC-MS and HPLC-NMR, are the basic elements on which is based such wide and effective application of the HPLC method. The purpose high performance liquid chromatography(HPLC analysis of any drugs is to confirm the identity of a drug and provide quantitative results and also to monitor the progress of the therapy of a disease.1 Measuring presented on the Fig. 1. is chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. It may also be used to further our understanding of the normal and disease process in the human body trough biomedical and therapeutically research during investigation before of the drugs registration. The analyses of drugs and metabolites in biological fluids, particularly plasma, serum or urine is one of the most demanding but one of the most common uses of high performance of liquid chromatography. Blood, plasma or

  12. Combining high productivity with high performance on commodity hardware

    Skovhede, Kenneth

    -like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU capable code. I then introduce the numeric library for the Common Intermediate Language, NumCIL. I can then utilizing the vector programming model from Num......CIL and map this to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups....

  13. Equation-of-motion coupled cluster method for high spin double electron attachment calculations

    Musiał, Monika, E-mail: musial@ich.us.edu.pl; Lupa, Łukasz; Kucharski, Stanisław A. [Institute of Chemistry, University of Silesia, Szkolna 9, 40-006 Katowice (Poland)

    2014-03-21

    The new formulation of the equation-of-motion (EOM) coupled cluster (CC) approach applicable to the calculations of the double electron attachment (DEA) states for the high spin components is proposed. The new EOM equations are derived for the high spin triplet and quintet states. In both cases the new equations are easier to solve but the substantial simplification is observed in the case of quintets. Out of 21 diagrammatic terms contributing to the standard DEA-EOM-CCSDT equations for the R{sub 2} and R{sub 3} amplitudes only four terms survive contributing to the R{sub 3} part. The implemented method has been applied to the calculations of the excited states (singlets, triplets, and quintets) energies of the carbon and silicon atoms and potential energy curves for selected states of the Na{sub 2} (triplets) and B{sub 2} (quintets) molecules.

  14. Mean Occupation Function of High-redshift Quasars from the Planck Cluster Catalog

    Chakraborty, Priyanka; Chatterjee, Suchetana; Dutta, Alankar; Myers, Adam D.

    2018-06-01

    We characterize the distribution of quasars within dark matter halos using a direct measurement technique for the first time at redshifts as high as z ∼ 1. Using the Planck Sunyaev-Zeldovich (SZ) catalog for galaxy groups and the Sloan Digital Sky Survey (SDSS) DR12 quasar data set, we assign host clusters/groups to the quasars and make a measurement of the mean number of quasars within dark matter halos as a function of halo mass. We find that a simple power-law fit of {log} =(2.11+/- 0.01) {log}(M)-(32.77+/- 0.11) can be used to model the quasar fraction in dark matter halos. This suggests that the quasar fraction increases monotonically as a function of halo mass even to redshifts as high as z ∼ 1.

  15. Integrating advanced facades into high performance buildings

    Selkowitz, Stephen E.

    2001-01-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  16. The need for high performance breeder reactors

    Vaughan, R.D.; Chermanne, J.

    1977-01-01

    It can easily be demonstrated, on the basis of realistic estimates of continued high oil costs, that an increasing portion of the growth in energy demand must be supplied by nuclear power, and that nuclear power might account for 20% of all energy production by the end of the century. Such assumptions lead very quickly to the conclusion that the discovery, extraction and processing of uranium will not be able to keep up with demand; the bottleneck will essentially be related to the rate at which the ore can be discovered and extracted, not to the existing quantities or their grade. Figures as high as 150,000 t per annum and more would quickly be reached, and it is necessary to ask already now whether enough capital can be attracted to meet these requirements. There is only one solution to this problem: improve the conversion ratio of the nuclear system and quickly reach breeding; this would reduce natural uranium consumption by a factor of about 50. However, this condition is not sufficient; the commercial breeder must have a breeding gain as high as possible, because the Pu out-of-pile time and the Pu losses in the cycle could lead to an unacceptable doubling time for the system if the breeding gain is too low. That is the reason why it is vital to develop high performance breeder reactors. The present paper indicates how the Gas-cooled Breeder Reactor (GBR) can meet the problems mentioned above, on the basis of recent and realistic studies. It briefly describes the present status of GBR development, starting from its predecessors in the gas-cooled reactor line, particularly the AGR. It shows how GBR fuel benefits greatly from the LMFBR fuel irradiation experience. It compares GBR performance on a consistent basis with that of the LMFBR. The GBR capital and fuel cycle costs are compared with those of thermal and fast reactors, respectively. The conclusion, based on a cost-benefit study, is that the GBR must be quickly developed in order

  17. High performance nano-composite technology development

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of the merits of nano-composites, studies have been confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. The purpose of this study is therefore to establish a systematic plan for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of the advanced countries, by grasping overseas development trends and our present status. (author)

  18. How to create high-performing teams.

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading," with reference to Geoff Smart, and "getting the right people on the bus," referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element of any superior culture should be.

  1. High Performance with Prescriptive Optimization and Debugging

    Jensen, Nicklas Bo

    Automatic parallelization and automatic vectorization are attractive as they transparently optimize programs. The thesis contributes an improved dependence analysis for explicitly parallel programs. These improvements lead to more loops being vectorized; on average we achieve a speedup of 1.46 over the existing dependence analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail, we argue that a hybrid approach can be effective. Using compiler feedback, we propose to use the programmer's intuition and insight to achieve high performance. Compiler feedback enlightens the programmer as to why a given optimization was not applied, and suggests how to change the source code to make it more amenable to optimizations. We show how this can yield significant speedups and achieve 2.4× faster execution on a real industrial use case. To aid in parallel debugging we propose…

  2. Similarity transformed equation of motion coupled-cluster theory based on an unrestricted Hartree-Fock reference for applications to high-spin open-shell systems.

    Huntington, Lee M J; Krupička, Martin; Neese, Frank; Izsák, Róbert

    2017-11-07

    The similarity transformed equation of motion coupled-cluster approach is extended for applications to high-spin open-shell systems, within the unrestricted Hartree-Fock (UHF) formalism. An automatic active space selection scheme has also been implemented such that calculations can be performed in a black-box fashion. It is observed that both the canonical and automatic active space selecting similarity transformed equation of motion (STEOM) approaches perform about as well as the more expensive equation of motion coupled-cluster singles doubles (EOM-CCSD) method for the calculation of the excitation energies of doublet radicals. The automatic active space selecting UHF STEOM approach can therefore be employed as a viable, lower scaling alternative to UHF EOM-CCSD for the calculation of excited states in high-spin open-shell systems.

  4. High-resolution tomography of positron emitters with clustered pinhole SPECT

    Goorden, Marlies C; Beekman, Freek J [Section of Radiation Detection and Medical Imaging, Applied Sciences, Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)], E-mail: m.c.goorden@tudelft.nl

    2010-03-07

    State-of-the-art small-animal single photon emission computed tomography (SPECT) enables sub-half-mm resolution imaging of radio-labelled molecules. Due to severe photon penetration through pinhole edges, current multi-pinhole SPECT is not suitable for high-resolution imaging of photons with high energies, such as the annihilation photons emitted by positron emitting tracers (511 keV). To deal with this edge penetration, we introduce here clustered multi-pinhole SPECT (CMP): each pinhole in a cluster has a narrow opening angle to reduce photon penetration. Using simulations, CMP is compared with (i) a collimator with traditional pinholes that is currently used for sub-half-mm imaging of SPECT isotopes (U-SPECT-II), and (ii) the same but with collimator thickness adapted to image high-energy photons (traditional multi-pinhole SPECT, TMP). At 511 keV, U-SPECT-II is able to resolve the 0.9 mm rods of an iteratively reconstructed Jaszczak-like capillary hot rod phantom, and while TMP only leads to small improvements, CMP can resolve rods as small as 0.7 mm. Using a digital tumour phantom, we show that CMP resolves many details not assessable with standard U-SPECT-II and TMP collimators. Furthermore, CMP makes it possible to visualize uptake of positron emitting tracers in sub-compartments of a digital mouse striatal brain phantom. This may open up unique possibilities for analysing processes such as those underlying the function of neurotransmitter systems. Additional potential of CMP may include (i) the imaging of other high-energy single-photon emitters (e.g. I-131) and (ii) localized imaging of positron emitting tracers simultaneously with single photon emitters, with an even better resolution than coincidence PET.

  5. Optimizing High Performance Self Compacting Concrete

    Raymond A Yonathan

    2017-01-01

    This paper's objectives are to determine the effects of glass powder, silica fume, polycarboxylate ether, and gravel, and to optimize the composition of each factor, in making high-performance SCC. The Taguchi method is proposed as the best solution to minimize the number of specimen variables, which would otherwise exceed 80 variations. Taguchi data analysis is applied to provide the composition, the optimization, and the contribution of each material for nine specimen variants. The concrete's workability was analyzed using the slump flow test, V-funnel test, and L-box test. Compressive and porosity tests were performed in the hardened state. Cylindrical specimens of 100×200 mm were cast for compressive testing at ages of 3, 7, 14, 21, and 28 days; the porosity test was conducted at 28 days. It is revealed that silica fume contributes greatly to slump flow and porosity. Coarse aggregate shows the greatest contribution to the L-box and compressive tests. However, all factors show unclear results for the V-funnel test.
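
    As a sketch of the kind of Taguchi main-effects analysis the abstract describes (the L9 orthogonal array layout is standard, but the response values and level assignments below are hypothetical, not the paper's data):

        import numpy as np

        # Standard L9(3^4) orthogonal array: 4 factors at 3 levels in 9 runs.
        L9 = np.array([
            [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
            [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
            [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
        ])
        # Hypothetical responses, e.g. 28-day compressive strength in MPa.
        y = np.array([41.0, 47.5, 52.0, 44.0, 55.5, 49.0, 50.5, 46.0, 58.0])

        for f, name in enumerate(["glass powder", "silica fume", "PCE", "gravel"]):
            means = [y[L9[:, f] == lvl].mean() for lvl in range(3)]
            # A factor's contribution is judged by the spread of its level means.
            print(name, np.round(means, 2), "range:", round(max(means) - min(means), 2))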

  6. Optimized high energy resolution in γ-ray spectroscopy with AGATA triple cluster detectors

    Wiens, Andreas

    2011-06-20

    The AGATA demonstrator consists of five AGATA Triple Cluster (ATC) detectors. Each triple cluster detector contains three asymmetric, 36-fold segmented, encapsulated high purity germanium detectors. The purpose of the demonstrator is to show the feasibility of position-dependent γ-ray detection by means of γ-ray tracking, which is based on pulse shape analysis. The thesis describes the first optimization procedure for the first triple cluster detectors. Here, a high signal quality is mandatory for the energy resolution and the pulse shape analysis. The signal quality was optimized and the energy resolution was improved through the modification of the electronic properties, in particular of the grounding scheme of the detector. The first part of the work was the successful installation of the first four triple cluster detectors at INFN (National Institute of Nuclear Physics) in Legnaro, Italy, in the demonstrator frame prior to the AGATA commissioning experiments and the first physics campaign. The four ATC detectors combine 444 high resolution spectroscopy channels. This number, combined with a high channel density, was achieved for the first time for in-beam γ-ray spectroscopy experiments. The high quality of the ATC detectors is characterized by the average energy resolutions achieved for the segments of each crystal, in the range of 1.943 to 2.131 keV at a γ-ray energy of 1.33 MeV for the first 12 crystals. The crosstalk level between individual detectors in the ATC is negligible. The crosstalk within one crystal is at a level of 10⁻³. In the second part of the work new methods for enhanced energy resolution in highly segmented and position sensitive detectors were developed. The signal-to-noise ratio was improved through averaging of the core and the segment signals, which improved the energy resolution by 21% at a γ-ray energy of 60 keV, to a FWHM of 870 eV. In combination with crosstalk correction, a clearly improved energy resolution was
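
    The gain from averaging follows the usual 1/√N reduction of independent channel noise; a minimal sketch (the pulse amplitude, noise level, and channel count below are illustrative, not the detector's actual parameters):

        import numpy as np

        rng = np.random.default_rng(0)
        n_pulses, n_channels = 10_000, 36
        amplitude, noise_sigma = 60.0, 0.4   # arbitrary keV-scale signal and noise

        # The signal is common to all channels; the electronic noise is not.
        single = amplitude + noise_sigma * rng.standard_normal(n_pulses)
        averaged = amplitude + noise_sigma * rng.standard_normal((n_pulses, n_channels)).mean(axis=1)

        fwhm = lambda x: 2.355 * x.std()     # Gaussian FWHM = 2.355 sigma
        print(f"single-channel FWHM:     {fwhm(single):.3f}")
        print(f"36-channel-average FWHM: {fwhm(averaged):.3f}  (~ 1/sqrt(36) of the above)")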

  7. NCI's Transdisciplinary High Performance Scientific Data Platform

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable across different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  8. Robust continuous clustering.

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.
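
    A minimal sketch of a robust-continuous-clustering-style objective, assuming a Geman-McClure penalty and plain gradient descent (the paper's actual optimizer, graph construction, and parameter choices differ; the toy data and edge list are assumptions):

        import numpy as np

        def rcc_sketch(X, edges, lam=2.0, mu=0.5, n_iter=300, lr=0.05):
            # Minimize sum_i ||x_i - u_i||^2 + lam * sum_(p,q) rho(||u_p - u_q||^2)
            # with the robust penalty rho(s) = mu*s / (mu + s).
            U = X.copy()
            for _ in range(n_iter):
                grad = 2.0 * (U - X)                    # data-fidelity term
                for p, q in edges:
                    d = U[p] - U[q]
                    s = d @ d
                    w = 2.0 * mu**2 / (mu + s) ** 2     # chain rule through rho
                    grad[p] += lam * w * d
                    grad[q] -= lam * w * d
                U -= lr * grad
            return U                                    # one representative per point

        # Two 1-D groups joined by one spurious link; the robust penalty pulls
        # representatives together within groups but not across them.
        X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
        edges = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 3)]
        print(rcc_sketch(X, edges).round(2))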

  9. High Power Flex-Propellant Arcjet Performance

    Litchford, Ron J.

    2011-01-01

    implied nearly frozen flow in the nozzle and yielded performance ranges of 800-1100 sec for hydrogen and 400-600 sec for ammonia. Inferred thrust-to-power ratios were in the range of 30-10 lbf/MWe for hydrogen and 60-20 lbf/MWe for ammonia. Successful completion of this test series represents a fundamental milestone in the progression of high power arcjet technology, and it is hoped that the results may serve as a reliable touchstone for the future development of MW-class regeneratively-cooled flex-propellant plasma rockets.
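
    The quoted Isp and thrust-to-power ranges can be sanity-checked against the ideal jet-power relation P = T·v_e/2, i.e. T/P = 2η/v_e (a back-of-envelope sketch; the implied efficiencies are our inference, not figures from the report):

        G0 = 9.80665        # m/s^2
        LBF = 4.4482        # newtons per lbf

        def implied_jet_efficiency(isp_s, t_over_p_lbf_per_mw):
            # Solve T/P = 2*eta/v_e for eta, with v_e = Isp * g0.
            ve = isp_s * G0
            t_over_p = t_over_p_lbf_per_mw * LBF / 1e6   # N per watt
            return t_over_p * ve / 2.0

        # Hydrogen corner points quoted above:
        print(implied_jet_efficiency(800, 30))    # ~0.52
        print(implied_jet_efficiency(1100, 10))   # ~0.24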

  10. Gaia Data Release 1 Open cluster astrometry: performance, limitations, and future prospects

    van Leeuwen, F.; Vallenari, A.; Jordi, C.; Lindegren, L.; Bastian, U.; Prusti, T.; de Bruijne, J.H.J.; Brown, A.G.A.; Babusiaux, C.; Bailer-Jones, C.A.L.; Fuchs, Jan; Koubský, Pavel; Votruba, Viktor

    2017-01-01

    Astronomy & Astrophysics, Vol. 601, May 2017, A19 (65 pp.), E-ISSN 1432-0746. Keywords: astrometry; open clusters and associations; proper motion and parallax. Subject: astronomy (including astrophysics and space science). Impact factor: 5.014 (2016).

  11. Detecting Massive, High-Redshift Galaxy Clusters Using the Thermal Sunyaev-Zel'dovich Effect

    Adams, Carson; Steinhardt, Charles L.; Loeb, Abraham; Karim, Alexander; Staguhn, Johannes; Erler, Jens; Capak, Peter L.

    2017-01-01

    We develop the thermal Sunyaev-Zel'dovich (SZ) effect as a direct astrophysical measure of the mass distribution of dark matter halos. The SZ effect increases with cosmological distance, a unique astronomical property, and is highly sensitive to halo mass. We find that this presents a powerful methodology for distinguishing between competing models of the halo mass function distribution, particularly in the high-redshift domain just a few hundred million years after the Big Bang. Recent surveys designed to probe this epoch of initial galaxy formation, such as CANDELS and SPLASH, report an over-abundance of highly massive halos as inferred from stellar ultraviolet (UV) luminosities and the stellar mass to halo mass ratio estimated from nearby galaxies. If these UV luminosity to halo mass relations hold at high redshift, observations imply several orders of magnitude more highly massive halos than predicted by hierarchical merging and the standard cosmological paradigm. Strong constraints on the masses of these galaxy clusters are essential to resolving the current tension between observation and theory. We conclude that detections of thermal SZ sources are plausible at high redshift only for the halo masses inferred from observation. Therefore, future SZ surveys will provide a robust discrimination between theoretical and observational predictions.

  12. Silicon Photomultiplier Performance in High ELectric Field

    Montoya, J.; Morad, J.

    2016-12-01

    Roughly 27% of the universe is thought to be composed of dark matter. The Large Underground Xenon (LUX) experiment relies on the emission of light from xenon atoms after a collision with a dark matter particle. A particle interaction in the detector produces two things: light and charge. The charge (electrons) in the liquid xenon needs to be pulled into the gas region so that it can interact with the gas and emit light; this allows LUX to convert a single electron into many photons. It is done by applying a high voltage across the liquid and gas regions, effectively ripping electrons out of the liquid xenon and into the gas. The current device used to detect photons is the photomultiplier tube (PMT). These devices are large and costly. In recent years, a new technology capable of detecting single photons has emerged: the silicon photomultiplier (SiPM). These devices are cheaper and smaller than PMTs, but their performance in high electric fields, such as those found in LUX, is unknown. It is possible that a large electric field could introduce noise on the SiPM signal, drowning out the single-photon detection capability. My hypothesis is that SiPMs will not observe a significant increase in noise at an electric field of roughly 10 kV/cm (an electric field within the range used in detectors like LUX). I plan to test this hypothesis by first rotating the SiPMs with no applied electric field between two metal plates roughly 2 cm apart, providing a control data set, and then, using the same angles, testing the dark counts with the constant electric field applied. Possibly the most important aspect of LUX is the photon detector, because it is what detects the signals. Dark matter is detected in the experiment by looking at the ratio of photons to electrons emitted for a given interaction in the detector. Interactions with a low electron-to-photon ratio are more likely to be dark matter events than those with a high electron-to-photon ratio. The ability to

  13. First Cluster results of the magnetic field structure of the mid- and high-altitude cusps

    P. J. Cargill

    Magnetic field measurements from the four Cluster spacecraft in the mid- and high-altitude cusp are presented. Cluster underwent two encounters with the mid-altitude cusp during its commissioning phase (24 August 2000). Evidence for field-aligned currents (FACs) was seen in the data from all three operating spacecraft in the northern and southern cusps. The extent of the FACs was of the order of 1 R_E in the X-direction, and at least 300 km in the Y-direction. However, fine-scale field structures with scales of the order of the spacecraft separation (300 km) were observed within the FACs. In the northern crossing, two of the spacecraft appeared to lie along the same magnetic field line and observed very well matched signals. However, the third spacecraft showed evidence for structuring transverse to the field on scales of a few hundred km. A crossing of the high-altitude cusp from 13 February 2001 is presented. It is revealed to be a highly dynamic structure, with the boundaries moving at velocities ranging from a few km/s to tens of km/s and having structure on timescales ranging from less than one minute up to several minutes. The cusp proper is associated with the presence of a very disordered magnetic field, which is entirely different from the magnetosheath turbulence.

    Key words. Magnetospheric physics (current systems; magnetopause, cusp, and boundary layers) – Space plasma physics (discontinuities)

  14. The Role of Performance Management in the High Performance Organisation

    de Waal, André A.; van der Heijden, Beatrice I.J.M.

    2014-01-01

    The allegiance of partnering organisations and their employees to an Extended Enterprise's performance is its proverbial sword of Damocles. Literature on Extended Enterprises focuses on collaboration, inter-organizational integration and learning to avoid diminishing or missing allegiance becoming an

  15. Constraints on cold dark matter theories from observations of massive x-ray-luminous clusters of galaxies at high redshift

    Luppino, G. A.; Gioia, I. M.

    1995-01-01

    During the course of a gravitational lensing survey of distant, X-ray-selected Einstein Observatory Extended Medium Sensitivity Survey (EMSS) clusters of galaxies, we have studied six X-ray-luminous (L_X > 5 × 10⁴⁴ h₅₀⁻² erg s⁻¹) clusters at redshifts exceeding z = 0.5. All of these clusters are apparently massive. In addition to their high X-ray luminosity, two of the clusters at z ∼ 0.6 exhibit gravitationally lensed arcs. Furthermore, the highest redshift cluster in our sample, MS 1054-0321 at z = 0.826, is both extremely X-ray luminous (L(0.3–3.5 keV) = 9.3 × 10⁴⁴ h₅₀⁻² erg s⁻¹) and exceedingly rich, with an optical richness comparable to an Abell Richness Class 4 cluster. In this Letter, we discuss the cosmological implications of the very existence of these clusters for hierarchical structure formation theories such as standard Ω = 1 CDM (cold dark matter), hybrid Ω = 1 C+HDM (hot dark matter), and flat, low-density Λ+CDM models.

  16. Implementation of a cluster Beowulf

    Victorino Guzman, Jorge Enrique

    2001-01-01

    Among the simulation systems that place great stress on computational resources and performance are climate models, whose high cost of implementation makes them difficult to acquire. An alternative that offers good performance at a reasonable cost is the construction of a Beowulf cluster, which emulates the behaviour of a computer with several processors. In the present article we discuss the hardware requirements for the construction of a Beowulf cluster, the software resources for the implementation of the CCM3.6 model, and the performance of the Beowulf cluster of the Meteorology Research Group at the National University of Colombia with different numbers of processors.
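
    A minimal sketch of the distributed style of computation such a cluster supports, using mpi4py (illustrative only; the CCM3.6 climate model performs its own domain decomposition over MPI):

        # Run with, e.g.: mpirun -np 8 python pi_mc.py
        from mpi4py import MPI
        import random

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Each node draws its own Monte Carlo samples independently...
        random.seed(rank)
        n = 1_000_000
        hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))

        # ...and a reduction combines the partial results on the master node.
        total = comm.reduce(hits, op=MPI.SUM, root=0)
        if rank == 0:
            print("pi ~", 4.0 * total / (n * size))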

  17. Evaluating performance of high efficiency mist eliminators

    Waggoner, Charles A.; Parsons, Michael S.; Giffin, Paxton K. [Mississippi State University, Institute for Clean Energy Technology, 205 Research Blvd, Starkville, MS (United States)

    2013-07-01

    Processing liquid wastes frequently generates off-gas streams with high humidity and liquid aerosols. Droplet-laden air streams can be produced by tank mixing or sparging and by processes such as reforming or evaporative volume reduction. Unfortunately these wet air streams represent a genuine threat to HEPA filters. High efficiency mist eliminators (HEMEs) are one option for removal of liquid aerosols with high dissolved or suspended solids content. HEMEs have been used extensively in industrial applications; however, they have not seen widespread use in the nuclear industry. Filtering efficiency data along with loading curves are not readily available for these units, and the data that exist are not easily translated to operational parameters in liquid waste treatment plants. A specialized test stand has been developed to evaluate the performance of HEME elements under the use conditions of a US DOE facility. HEME elements were tested at three volumetric flow rates using aerosols produced from an iron-rich waste surrogate. The challenge aerosol included submicron particles produced from Laskin nozzles and supermicron particles produced from a hollow cone spray nozzle. Test conditions included ambient temperature and relative humidities greater than 95%. Data collected during testing of HEME elements from three different manufacturers included volumetric flow rate, differential temperature across the filter housing, downstream relative humidity, and differential pressure (dP) across the filter element. Filter challenge was discontinued at three intermediate dPs to allow determining filter efficiency using dioctyl phthalate and then dry surrogate aerosols. Filtering efficiencies of the clean HEME, the clean HEME loaded with water, and the HEME at maximum dP were also collected using the two test aerosols. Results of the testing included differential pressure vs. time loading curves for the nine elements tested along with the mass of moisture and solid

  18. Data Clustering

    Wagstaff, Kiri L.

    2012-03-01

    On obtaining a new data set, the researcher is immediately faced with the challenge of obtaining a high-level understanding from the observations. What does a typical item look like? What are the dominant trends? How many distinct groups are included in the data set, and how is each one characterized? Which observable values are common, and which rarely occur? Which items stand out as anomalies or outliers from the rest of the data? This challenge is exacerbated by the steady growth in data set size [11] as new instruments push into new frontiers of parameter space, via improvements in temporal, spatial, and spectral resolution, or by the desire to "fuse" observations from different modalities and instruments into a larger-picture understanding of the same underlying phenomenon. Data clustering algorithms provide a variety of solutions for this task. They can generate summaries, locate outliers, compress data, identify dense or sparse regions of feature space, and build data models. It is useful to note up front that "clusters" in this context refer to groups of items within some descriptive feature space, not (necessarily) to "galaxy clusters" which are dense regions in physical space. The goal of this chapter is to survey a variety of data clustering methods, with an eye toward their applicability to astronomical data analysis. In addition to improving the individual researcher’s understanding of a given data set, clustering has led directly to scientific advances, such as the discovery of new subclasses of stars [14] and gamma-ray bursts (GRBs) [38]. All clustering algorithms seek to identify groups within a data set that reflect some observed, quantifiable structure. Clustering is traditionally an unsupervised approach to data analysis, in the sense that it operates without any direct guidance about which items should be assigned to which clusters. There has been a recent trend in the clustering literature toward supporting semisupervised or constrained
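
    As a concrete instance of the tasks listed above (summarizing groups and locating outliers), a minimal k-means sketch on toy data (all values below are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        # Two synthetic "populations" plus one stray point.
        X = np.vstack([rng.normal(0, 0.3, (50, 2)),
                       rng.normal(3, 0.3, (50, 2)),
                       [[8.0, 8.0]]])

        # Minimal k-means: alternate assignment and centroid update.
        k = 2
        centroids = X[rng.choice(len(X), k, replace=False)]
        for _ in range(20):
            labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
            centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])

        # Summaries ("what does a typical item look like?") and anomalies.
        print("centroids:\n", centroids)
        dist = np.linalg.norm(X - centroids[labels], axis=1)
        print("most anomalous item:", X[dist.argmax()])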

  19. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of centroids in the K-Means algorithm directly affects the quality of the clustering results. Determining centroids using random numbers has many weaknesses. The GenClust algorithm, which combines genetic algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster. GenClust obtains 50% of its chromosomes through deterministic calculation and 50% from the generation of random numbers. This study modifies the GenClust algorithm so that 100% of the chromosomes are obtained through deterministic calculation. The study yields performance comparisons, expressed as mean square error, of centroid determination in K-Means using the GenClust method, the modified GenClust method, and classic K-Means.
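
    A minimal sketch of the kind of MSE comparison described, contrasting random seeding with a deterministic stand-in (quantile-based seeding here is our simplification; the paper instead derives all chromosomes deterministically within a genetic algorithm):

        import numpy as np

        def kmeans_mse(X, centroids, n_iter=50):
            # Lloyd iterations from a given set of initial centroids.
            for _ in range(n_iter):
                labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
                centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                      else centroids[j] for j in range(len(centroids))])
            labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
            return ((X - centroids[labels]) ** 2).sum(axis=1).mean()

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(m, 0.5, (60, 2)) for m in (0, 4, 8)])
        k = 3

        # Random seeding: clustering quality varies from run to run.
        random_mse = [kmeans_mse(X, X[rng.choice(len(X), k, replace=False)])
                      for _ in range(10)]
        # Deterministic seeding: the same result every run.
        deterministic = np.quantile(X, [0.1, 0.5, 0.9], axis=0)

        print("random init MSE (min..max):", round(min(random_mse), 3), round(max(random_mse), 3))
        print("deterministic init MSE:    ", round(kmeans_mse(X, deterministic), 3))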

  20. Cluster observations of high-frequency waves in the exterior cusp

    Y. Khotyaintsev

    2004-07-01

    We study wave emissions, in the frequency range from above the lower hybrid frequency up to the plasma frequency, observed during one of the Cluster crossings of a high-beta exterior cusp region on 4 March 2003. Waves are localized near narrow current sheets with a thickness of a few times the ion inertial length; currents are strong, of the order of 0.1–0.5 μA/m² (0.1–0.5 mA/m² when mapped to the ionosphere). The high frequency part of the waves, at frequencies above the electron cyclotron frequency, is analyzed in more detail. These high frequency waves can be broad-band, can have spectral peaks at the plasma frequency, or can have spectral peaks at frequencies below the plasma frequency. The strongest wave emissions usually have a spectral peak near the plasma frequency. The wave emission intensity and spectral character change on a very short time scale, of the order of 1 s. The wave emissions with strong spectral peaks near the plasma frequency are usually seen at the edges of the narrow current sheets. The most probable generation mechanism of the high frequency waves is electron beams, via the bump-on-tail or electron two-stream instability. The Buneman and ion-acoustic instabilities can be excluded as possible generation mechanisms. We suggest that the high frequency waves are generated by electron beams propagating along the separatrices of the reconnection region.
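
    For orientation, the characteristic frequencies bounding the analyzed band follow from standard formulas (a sketch; the density and magnetic field values are assumed, order-of-magnitude exterior-cusp inputs rather than the event's measured ones):

        import numpy as np

        E, ME, EPS0 = 1.602e-19, 9.109e-31, 8.854e-12   # SI constants

        def f_plasma(n_e_cm3):
            # Electron plasma frequency f_pe = sqrt(n e^2 / (eps0 m_e)) / 2pi.
            return np.sqrt(n_e_cm3 * 1e6 * E**2 / (EPS0 * ME)) / (2 * np.pi)

        def f_cyclotron(B_nT):
            # Electron cyclotron frequency f_ce = e B / (2pi m_e).
            return E * B_nT * 1e-9 / (2 * np.pi * ME)

        print(f"f_pe ~ {f_plasma(20) / 1e3:.1f} kHz for n_e = 20 cm^-3")
        print(f"f_ce ~ {f_cyclotron(40) / 1e3:.2f} kHz for B = 40 nT")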