WorldWideScience

Sample records for scalable coherent interface

  1. Scalable coherent interface

    International Nuclear Information System (INIS)

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs
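    To make the logical-level concepts above concrete, the sketch below illustrates an SCI-style split transaction in Python. It is not the normative IEEE 1596 packet layout; the command names and field widths (16-bit node IDs for up to 64K nodes, a per-node address offset) are simplified assumptions for illustration only.

```python
# Illustrative sketch (not the normative IEEE 1596 encoding) of an SCI-style
# split transaction: a request packet travels to the target node, and an
# independent response packet comes back later, so links are never held idle
# waiting for a handshake.
from dataclasses import dataclass
from enum import Enum, auto

class Command(Enum):          # hypothetical command names for illustration
    READ64 = auto()
    WRITE64 = auto()
    RESPONSE = auto()

@dataclass
class Packet:
    target_id: int            # 16-bit node ID -> up to 64K addressable nodes
    source_id: int
    command: Command
    transaction_id: int       # lets a response be matched to its request
    address_offset: int = 0   # offset within the target node's address space
    data: bytes = b""

def make_request(target: int, source: int, tid: int, offset: int) -> Packet:
    """Build the request half of a split read transaction."""
    return Packet(target, source, Command.READ64, tid, offset)

def make_response(request: Packet, data: bytes) -> Packet:
    """Build the matching response, with source and target swapped."""
    return Packet(request.source_id, request.target_id,
                  Command.RESPONSE, request.transaction_id, data=data)

if __name__ == "__main__":
    req = make_request(target=0x0042, source=0x0001, tid=7, offset=0x1000)
    rsp = make_response(req, data=bytes(64))
    assert rsp.target_id == req.source_id and rsp.transaction_id == req.transaction_id
```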

  2. The scalable coherent interface, IEEE P1596

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1990-01-01

    IEEE P1596, the scalable coherent interface (formerly known as SuperBus) is based on experience gained while developing Fastbus (ANSI/IEEE 960-1986, IEC 935), Futurebus (IEEE P896.x) and other modern 32-bit buses. SCI goals include a minimum bandwidth of 1 GByte/sec per processor in multiprocessor systems with thousands of processors; efficient support of a coherent distributed-cache image of distributed shared memory; support for repeaters which interface to existing or future buses; and support for inexpensive small rings as well as for general switched interconnections like Banyan, Omega, or crossbar networks. This paper presents a summary of current directions, reports the status of the work in progress, and suggests some applications in data acquisition and physics.

  3. The Scalable Coherent Interface and related standards projects

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1991-09-01

    The Scalable Coherent Interface (SCI) project (IEEE P1596) found a way to avoid the limits that are inherent in bus technology. SCI provides bus-like services by transmitting packets on a collection of point-to-point unidirectional links. The SCI protocols support cache coherence in a distributed-shared-memory multiprocessor model, message passing, I/O, and local-area-network-like communication over fiber optic or wire links. VLSI circuits that operate parallel links at 1000 MByte/s and serial links at 1000 Mbit/s will be available early in 1992. Several ongoing SCI-related projects are applying the SCI technology to new areas or extending it to more difficult problems. P1596.1 defines the architecture of a bridge between SCI and VME; P1596.2 compatibly extends the cache coherence mechanism for efficient operation with kiloprocessor systems; P1596.3 defines new low-voltage (about 0.25 V) differential signals suitable for low power interfaces for CMOS or GaAs VLSI implementations of SCI; P1596.4 defines a high performance memory chip interface using these signals; P1596.5 defines data transfer formats for efficient interprocessor communication in heterogeneous multiprocessor systems. This paper reports the current status of SCI, related standards, and new projects. 16 refs

  4. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    International Nuclear Information System (INIS)

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.

    1992-10-01

    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory 'bus' to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed-shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition.

  5. The scalable coherent interface, IEEE P1596, status and possible applications to data acquisition and physics

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1990-01-01

    IEEE P1596, the Scalable Coherent Interface (formerly known as SuperBus) is based on experience gained while developing Fastbus (ANSI/IEEE 960-1986, IEC 935), Futurebus (IEEE P896.x) and other modern 32-bit buses. SCI goals include a minimum bandwidth of 1 GByte/sec per processor in multiprocessor systems with thousands of processors; efficient support of a coherent distributed-cache image of distributed shared memory; support for repeaters which interface to existing or future buses; and support for inexpensive small rings as well as for general switched interconnections like Banyan, Omega, or crossbar networks. This paper presents a summary of current directions, reports the status of the work in progress, and suggests some applications in data acquisition and physics. 7 refs

  6. Applications of the scalable coherent interface to data acquisition at LHC

    CERN Document Server

    Bogaerts, A; Divià, R; Müller, H; Parkman, C; Ponting, P J; Skaali, B; Midttun, G; Wormald, D; Wikne, J; Falciano, S; Cesaroni, F; Vinogradov, V I; Kristiansen, E H; Solberg, B; Guglielmi, A M; Worm, F H; Bovier, J; Davis, C; CERN. Geneva. Detector Research and Development Committee

    1991-01-01

    We propose to use the Scalable Coherent Interface (SCI) as a very high speed interconnect between LHC detector data buffers and farms of commercial trigger processors. Both the global second and third level trigger can be based on SCI as a reconfigurable and scalable system. SCI is a proposed IEEE standard which uses fast point-to-point links to provide computer-bus like services. It can connect a maximum of 65 536 nodes (memories or processors), providing data transfer rates of up to 1 Gbyte/s. Scalable data acquisition systems can be built using either simple SCI rings or complex switches. The interconnections may be flat cables, coaxial cables, or optical fibres. SCI protocols have been entirely implemented in VLSI, resulting in a significant simplification of data acquisition software. Novel SCI features allow efficient implementation of both data and processor driven readout architectures. In particular, a very efficient implementation of the third level trigger can be achieved by combining SCI's shared ...
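    As a rough illustration of why the ring-versus-switch choice mentioned above matters for scalability, the sketch below compares bisection bandwidth using only the figures quoted in the abstract (1 Gbyte/s links, 65 536 addressable nodes); the ideal-switch number is an upper bound, not a measured value.

```python
# Back-of-the-envelope bisection-bandwidth comparison using only the numbers
# quoted in the abstract; purely illustrative, not a measured result.
link_rate_gbs = 1.0        # Gbyte/s per SCI link
nodes = 65_536             # 2**16 addressable nodes (memories or processors)

# A single ring has a constant bisection of two links no matter how many
# nodes it carries, whereas an ideal non-blocking switch scales with the
# number of node pairs that can communicate across the bisection at once.
ring_bisection = 2 * link_rate_gbs
switch_bisection = (nodes // 2) * link_rate_gbs

print(f"ring bisection bandwidth     ~ {ring_bisection:.0f} Gbyte/s")
print(f"crossbar bisection bandwidth ~ {switch_bisection:,.0f} Gbyte/s")
```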

  7. IEEE P1596, a scalable coherent interface for GigaByte/sec multiprocessor applications

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1988-11-01

    IEEE P1596, the Scalable Coherent Interface (formerly known as SuperBus) is based on experience gained during the development of Fastbus (IEEE 960), Futurebus (IEEE 896.1) and other modern 32-bit buses. SCI goals include a minimum bandwidth of 1 GByte/sec per processor; efficient support of a coherent distributed-cache image of shared memory; and support for segmentation, bus repeaters and general switched interconnections like Banyan, Omega, or full crossbar networks. To achieve these ambitious goals, SCI must sacrifice the immediate handshake characteristic of the present generation of buses in favor of a packet-like split-cycle protocol. Wire-ORs, broadcasts, and even ordinary passive bus structures are to be avoided. However, a lower performance (1 GByte/sec per backplane instead of per processor) implementation using a register insertion ring architecture on a passive 'backplane' appears to be possible using the same interface as for the more costly switch networks. This paper presents a summary of current directions, and reports the status of the work in progress.

  8. Application of the Scalable Coherent Interface to Data Acquisition at LHC

    CERN Multimedia

    2002-01-01

    RD24 : The RD24 activities in 1996 were dominated by test and integration of PCI-SCI bridges for VME-bus and for PCs for the 1996 milestones. In spite of the dispersion of RD24 membership into the ATLAS, ALICE and the proposed LHC-B experiments, collaboration and sharing of resources of SCI laboratories and equipment continued with excellent results and several doctoral theses. The availability of cheap PCI-SCI adapters has allowed construction of VME multicrate testbenches based on a variety of VME processors and workstations. Transparent memory-to-memory accesses between remote PCI buses over SCI have been established under the Linux, Lynx-OS and Windows-NT operating systems as a proof that scalable multicrate systems are ready to be implemented with off-the-shelf products. Commercial SCI-PCI adapters are based on a PCI-SCI ASIC from Dolphin. The FPGA based PCI-SCI adapter, designed by CERN and LBL for data acquisition at LHC and STAR, allows addition of DAQ functions. The step from multicrate systems towa...

  9. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

    Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a “superatom,” is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of superatoms scalable over 2 orders of magnitude, utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.
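    The square-root scaling referred to in the abstract follows from the collective coupling of the blockaded ensemble to the light field. A minimal sketch of the standard relations (generic notation, not reproduced from the paper) is:

```latex
% Minimal sketch of the standard superatom relations (generic notation):
% under full Rydberg blockade, N atoms share at most one excitation, and the
% laser couples the ground state only to the symmetric singly excited (W)
% state.
\begin{align}
  |W\rangle &= \frac{1}{\sqrt{N}} \sum_{i=1}^{N}
               |g_1 g_2 \dots r_i \dots g_N\rangle, \\
  \Omega_N  &= \sqrt{N}\,\Omega_1,
\end{align}
% so the collective Rabi frequency grows as the square root of the atom
% number, which is the scaling verified by the in situ measurements.
```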

  10. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  11. Silicon nanophotonics for scalable quantum coherent feedback networks

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Brif, Constantin; Soh, Daniel B.S.; Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul

    2016-01-01

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  12. On the interfacial energy of coherent interfaces

    International Nuclear Information System (INIS)

    Kaptay, G.

    2012-01-01

    A thermodynamic model has been developed for interfacial energies of coherent interfaces using only the molar Gibbs energy and the molar volume of the two phases surrounding the interface as the initial data. The analysis is started from the simplest case of the interface formed by two solutions on the two sides of a miscibility gap, when both phases are described by the same Gibbs energy and molar volume functions. This method is applied to the fcc Au–Ni, liquid Ga–Pb and liquid Al–Bi systems. Reasonable agreement was found with the measured values in liquid Ga–Pb and Al–Bi systems. It was shown that the calculated results are sensitive to the choice of the Calphad-estimated thermodynamic data. The method is extended to the case where the two phases are described by different Gibbs energy and molar volume functions. The extended model is applied to the interface present in an Ni-based superalloy between the AlNi3 face-centered cubic (fcc) compound and the Ni–Al fcc disordered solid solution. The calculated results are found to be similar to other values recently obtained from the combination of kinetic and thermodynamic data. The method is extended to ternary and higher order systems. It is predicted that the interfacial energy will gradually decrease with the increase in number of components in the system.

  13. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    Science.gov (United States)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  14. An interface energy density-based theory considering the coherent interface effect in nanomaterials

    Science.gov (United States)

    Yao, Yin; Chen, Shaohua; Fang, Daining

    2017-02-01

    To characterize the coherent interface effect conveniently and feasibly in nanomaterials, a continuum theory is proposed that is based on the concept of the interface free energy density, which is a dominant factor affecting the mechanical properties of the coherent interface in materials of all scales. The effect of the residual strain caused by self-relaxation and the lattice misfit of nanomaterials, as well as that due to the interface deformation induced by an external load on the interface free energy density is considered. In contrast to the existing theories, the stress discontinuity at the interface is characterized by the interface free energy density through an interface-induced traction. As a result, the interface elastic constant introduced in previous theories, which is not easy to determine precisely, is avoided in the present theory. Only the surface energy density of the bulk materials forming the interface, the relaxation parameter induced by surface relaxation, and the mismatch parameter for forming a coherent interface between the two surfaces are involved. All the related parameters are far easier to determine than the interface elastic constants. The effective bulk and shear moduli of a nanoparticle-reinforced nanocomposite are predicted using the proposed theory. Closed-form solutions are achieved, demonstrating the feasibility and convenience of the proposed model for predicting the interface effect in nanomaterials.

  15. Relaxation Mechanisms, Structure and Properties of Semi-Coherent Interfaces

    Directory of Open Access Journals (Sweden)

    Shuai Shao

    2015-10-01

    In this work, using the Cu–Ni (111) semi-coherent interface as a model system, we combine atomistic simulations and defect theory to reveal the relaxation mechanisms, structure, and properties of semi-coherent interfaces. By calculating the generalized stacking fault energy (GSFE) profile of the interface, two stable structures and a high-energy structure are located. During the relaxation, the regions that possess the stable structures expand and develop into coherent regions; the regions with high-energy structure shrink into the intersection of misfit dislocations (nodes). This process reduces the interface excess potential energy but increases the core energy of the misfit dislocations and nodes. The core width is dependent on the GSFE of the interface. The high-energy structure relaxes by relative rotation and dilatation between the crystals. The relative rotation is responsible for the spiral pattern at nodes. The relative dilatation is responsible for the creation of free volume at nodes, which facilitates the nodes’ structural transformation. Several node structures have been observed and analyzed. The various structures have significant impact on the plastic deformation in terms of lattice dislocation nucleation, as well as the point defect formation energies.

  16. Scalable Quantum Information Transfer between Individual Nitrogen-Vacancy Centers by a Hybrid Quantum Interface

    International Nuclear Information System (INIS)

    Pei Pei; He-Fei Huang; Yan-Qing Guo; He-Shan Song

    2016-01-01

    We develop a design of a hybrid quantum interface for quantum information transfer (QIT), adopting a nanomechanical resonator as the intermedium, which is magnetically coupled with individual nitrogen-vacancy centers as the solid qubits, while capacitively coupled with a coplanar waveguide resonator as the quantum data bus. We describe the Hamiltonian of the model, and analytically demonstrate the QIT for both the resonant interaction and large detuning cases. The hybrid quantum interface allows for QIT between arbitrarily selected individual nitrogen-vacancy centers, and has the advantages of scalability and controllability. Our methods open an alternative perspective for implementing QIT, which is important for quantum storage and processing procedures in quantum computing. (paper)
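    A minimal sketch of the type of Hamiltonian such a hybrid interface is usually modeled with is given below; the notation and coupling terms are generic illustrations of the magnetic (spin-phonon) and capacitive (phonon-photon) couplings described in the abstract, not the authors' exact model.

```latex
% Generic sketch (not the authors' exact Hamiltonian, hbar = 1): mechanical
% mode b magnetically coupled to NV spin qubits sigma_i, and capacitively
% coupled to a coplanar-waveguide mode a acting as the quantum data bus.
\begin{equation}
  H = \omega_a\, a^{\dagger}a + \omega_b\, b^{\dagger}b
      + \sum_i \frac{\omega_i}{2}\,\sigma_i^{z}
      + \sum_i g_{m}\,\bigl(b + b^{\dagger}\bigr)\,\sigma_i^{x}
      + g_{c}\,\bigl(a^{\dagger}b + a\,b^{\dagger}\bigr).
\end{equation}
% On resonance the excitation is swapped directly; in the large-detuning
% regime the intermediary modes can be eliminated, leaving an effective
% qubit-qubit coupling that mediates the state transfer.
```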

  17. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.

    Science.gov (United States)

    Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S

    2014-08-08

    Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.
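    The headline figures in this abstract imply a few useful derived numbers; the short sketch below recomputes them from the rounded values quoted above (the exact hardware values are powers of two, so the per-core results are approximate).

```python
# Derived quantities from the rounded figures quoted in the abstract; the
# actual hardware uses power-of-two values (e.g. 256 neurons per core), so
# these per-core numbers are approximate.
cores = 4096
neurons = 1_000_000
synapses = 256_000_000
power_w = 0.063            # 63 mW while processing 400x240 video
frame_rate = 30            # frames per second

print(f"neurons per core  ~ {neurons / cores:,.0f}")
print(f"synapses per core ~ {synapses / cores:,.0f}")
print(f"energy per frame  ~ {power_w / frame_rate * 1e3:.1f} mJ")
print(f"power per neuron  ~ {power_w / neurons * 1e9:.0f} nW")
```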

  18. Ergatis: a web interface and scalable software system for bioinformatics workflows

    Science.gov (United States)

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634

  19. Bandwidth scalable, coherent transmitter based on the parallel synthesis of multiple spectral slices using optical arbitrary waveform generation.

    Science.gov (United States)

    Geisler, David J; Fontaine, Nicolas K; Scott, Ryan P; He, Tingting; Paraschis, Loukas; Gerstel, Ori; Heritage, Jonathan P; Yoo, S J B

    2011-04-25

    We demonstrate an optical transmitter based on dynamic optical arbitrary waveform generation (OAWG) which is capable of creating high-bandwidth (THz) data waveforms in any modulation format using the parallel synthesis of multiple coherent spectral slices. As an initial demonstration, the transmitter uses only 5.5 GHz of electrical bandwidth and two 10-GHz-wide spectral slices to create 100-ns duration, 20-GHz optical waveforms in various modulation formats including differential phase-shift keying (DPSK), quaternary phase-shift keying (QPSK), and eight phase-shift keying (8PSK) with only changes in software. The experimentally generated waveforms showed clear eye openings and separated constellation points when measured using a real-time digital coherent receiver. Bit-error-rate (BER) performance analysis resulted in a BER < 9.8 × 10⁻⁶ for DPSK and QPSK waveforms. Additionally, we experimentally demonstrate three-slice, 4-ns long waveforms that highlight the bandwidth scalable nature of the optical transmitter. The various generated waveforms show that the key transmitter properties (i.e., packet length, modulation format, data rate, and modulation filter shape) are software definable, and that the optical transmitter is capable of acting as a flexible bandwidth transmitter.
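    The bandwidth-scaling argument is simple to state numerically: the synthesized optical bandwidth is the number of coherent slices times the slice width, while the electronics only need to cover a single slice. The sketch below restates that bookkeeping with the abstract's numbers (illustrative only, not a link budget).

```python
# Bandwidth bookkeeping for slice-based OAWG synthesis, using the abstract's
# figures; illustrative only, not a full transmitter model.
slice_width_ghz = 10.0       # width of each coherent spectral slice
electrical_bw_ghz = 5.5      # electrical bandwidth used in the demonstration

for n_slices in (2, 3):
    optical_bw = n_slices * slice_width_ghz
    print(f"{n_slices} slices -> ~{optical_bw:.0f} GHz of synthesized optical "
          f"bandwidth from {electrical_bw_ghz} GHz electronics")
```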

  20. Detection of chemical interfaces in coherent anti-Stokes Raman scattering microscopy: Dk-CARS. I. Axial interfaces.

    Science.gov (United States)

    Gachet, David; Rigneault, Hervé

    2011-12-01

    We develop a full vectorial theoretical investigation of chemical interface detection in conventional coherent anti-Stokes Raman scattering (CARS) microscopy. In Part I, we focus on the detection of axial interfaces (i.e., parallel to the optical axis) following a recent experimental demonstration of the concept [Phys. Rev. Lett. 104, 213905 (2010)]. By revisiting Young's double-slit experiment, we show that background-free microscopy and spectroscopy are achievable through the angular analysis of the CARS far-field radiation pattern. This differential CARS in k space (Dk-CARS) technique is interesting for fast detection of interfaces between molecularly different media. It may be adapted to other coherent and resonant scattering processes.

  1. From coherent to incoherent mismatched interfaces: A generalized continuum formulation of surface stresses

    Science.gov (United States)

    Dingreville, Rémi; Hallil, Abdelmalek; Berbenni, Stéphane

    2014-12-01

    The equilibrium of coherent and incoherent mismatched interfaces is reformulated in the context of continuum mechanics based on the Gibbs dividing surface concept. Two surface stresses are introduced: a coherent surface stress and an incoherent surface stress, as well as a transverse excess strain. The coherent surface stress and the transverse excess strain represent the thermodynamic driving forces of stretching the interface while the incoherent surface stress represents the driving force of stretching one crystal while holding the other fixed and thereby altering the structure of the interface. These three quantities fully characterize the elastic behavior of coherent and incoherent interfaces as a function of the in-plane strain, the transverse stress and the mismatch strain. The isotropic case is developed in detail and particular attention is paid to the case of interfacial thermo-elasticity. This exercise provides insight into the physical significance of the interfacial elastic constants introduced in the formulation and illustrates the obvious coupling between the interface structure and its associated thermodynamic quantities. Finally, an example based on atomistic simulations of Cu/Cu2O interfaces is given to demonstrate the relevance of the generalized interfacial formulation and to emphasize the dependence of the interfacial thermodynamic quantities on the incoherency strain with an actual material system.

  2. Advances in clinical application of optical coherence tomography in vitreomacular interface disease

    Directory of Open Access Journals (Sweden)

    Xiao-Li Xing

    2013-08-01

    Vitreomacular interface disease mainly includes vitreomacular traction syndrome, idiopathic macular epiretinal membrane and idiopathic macular hole. Optical coherence tomography (OCT), a non-invasive imaging technique that provides high-resolution cross-sectional images comparable to an optical biopsy, has become widely used in the clinic because of its unique combination of high resolution and absence of tissue damage. It provides important information and reference value for the diagnosis, differential diagnosis, monitoring and quantitative evaluation of vitreomacular interface disease, as well as for the choice of treatment. The anatomical and morphological features of vitreomacular interface disease revealed in OCT images have improved clinical understanding of the occurrence and development of these disorders. We review advances in the clinical application of OCT in vitreomacular interface disease.

  3. Structure Transformation and Coherent Interface in Large Lattice-Mismatched Nanoscale Multilayers

    Directory of Open Access Journals (Sweden)

    J. Y. Xie

    2013-01-01

    Nanoscale Al/W multilayers were fabricated by DC magnetron sputtering and characterized by transmission electron microscopy and high-resolution electron microscopy. Despite the large lattice mismatch and significantly different lattice structures between Al and W, a structural transition from face-centered cubic to body-centered cubic in the Al layers was observed when the individual layer thickness was reduced from 5 nm to 1 nm, forming coherent Al/W interfaces. As potential mechanisms underlying the observed structural transition and the formation of coherent interfaces, it was suggested that the reduction of interfacial energy and the high stresses induced by the large lattice mismatch play a crucial role.

  4. Role of coherence and delocalization in photo-induced electron transfer at organic interfaces

    Science.gov (United States)

    Abramavicius, V.; Pranculis, V.; Melianas, A.; Inganäs, O.; Gulbinas, V.; Abramavicius, D.

    2016-09-01

    Photo-induced charge transfer at molecular heterojunctions has gained particular interest due to the development of organic solar cells (OSC) based on blends of electron donating and accepting materials. While charge transfer between donor and acceptor molecules can be described by Marcus theory, additional carrier delocalization and coherent propagation might play the dominant role. Here, we describe ultrafast charge separation at the interface of a conjugated polymer and an aggregate of the fullerene derivative PCBM using the stochastic Schrödinger equation (SSE) and reveal the complex time evolution of electron transfer, mediated by electronic coherence and delocalization. By fitting the model to ultrafast charge separation experiments, we estimate the extent of electron delocalization and establish the transition from coherent electron propagation to incoherent hopping. Our results indicate that even a relatively weak coupling between PCBM molecules is sufficient to facilitate electron delocalization and efficient charge separation at organic interfaces.
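    As a self-contained illustration of the stochastic Schrödinger equation (SSE) technique mentioned above (not the authors' interface model), the sketch below propagates a single charge on a short tight-binding chain with local dephasing noise, using a simple linear unravelling that is renormalized at each Euler-Maruyama step; all parameters are arbitrary.

```python
# Crude, illustrative SSE sketch: coherent hopping along a short
# donor/acceptor chain with local dephasing noise, integrated by
# Euler-Maruyama and renormalized each step. Parameters are arbitrary;
# this is not the authors' model of the polymer/PCBM interface.
import numpy as np

rng = np.random.default_rng(0)
n_sites, J, gamma = 4, 1.0, 0.2          # sites, hopping, dephasing rate
dt, n_steps, n_traj = 0.01, 2000, 200

# Tight-binding Hamiltonian: nearest-neighbour electronic coupling J.
H = np.zeros((n_sites, n_sites))
for i in range(n_sites - 1):
    H[i, i + 1] = H[i + 1, i] = -J

pop = np.zeros((n_steps, n_sites))       # trajectory-averaged populations
for _ in range(n_traj):
    psi = np.zeros(n_sites, complex)
    psi[0] = 1.0                          # charge starts on the donor site
    for t in range(n_steps):
        pop[t] += np.abs(psi) ** 2
        dW = rng.normal(0.0, np.sqrt(dt), n_sites)  # one Wiener increment per site
        drift = (-1j * H @ psi - 0.5 * gamma * psi) * dt
        noise = np.sqrt(gamma) * psi * dW           # local dephasing unravelling
        psi = psi + drift + noise
        psi /= np.linalg.norm(psi)                  # keep the state normalized
pop /= n_traj
print("final site populations:", np.round(pop[-1], 3))
```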

  5. Ab initio transmission electron microscopy image simulations of coherent Ag-MgO interfaces

    International Nuclear Information System (INIS)

    Mogck, S.; Kooi, B.J.; Hosson, J.Th.M. de; Finnis, M.W.

    2004-01-01

    Density-functional theory calculations, within the plane-wave-ultrasoft pseudopotential framework, were performed in the projection for MgO and for the coherent (111) Ag-MgO polar interface. First-principles calculations were incorporated in high-resolution transmission electron microscopy (HRTEM) simulations by converting the charge density into electron scattering factors to examine the influence of charge transfer, charge redistribution at the interface, and ionicity on the dynamical electron scattering and on calculated HRTEM images. It is concluded that the ionicity of oxides and the charge redistribution at interfaces play a significant role in HRTEM image simulations. In particular, the calculations show that at oxygen-terminated (111) Ag-MgO interfaces the first oxygen layer at the interface is much brighter than that in calculations with neutral atoms, in agreement with experimental observations

  6. Non-uniform Solute Segregation at Semi-Coherent Metal/Oxide Interfaces

    Science.gov (United States)

    Choudhury, Samrat; Aguiar, Jeffery A.; Fluss, Michael J.; Hsiung, Luke L.; Misra, Amit; Uberuaga, Blas P.

    2015-08-01

    The properties and performance of metal/oxide nanocomposites are governed by the structure and chemistry of the metal/oxide interfaces. Here we report an integrated theoretical and experimental study examining the role of interfacial structure, particularly misfit dislocations, on solute segregation at a metal/oxide interface. We find that the local oxygen environment, which varies significantly between the misfit dislocations and the coherent terraces, dictates the segregation tendency of solutes to the interface. Depending on the nature of the solute and local oxygen content, segregation to misfit dislocations can change from attraction to repulsion, revealing the complex interplay between chemistry and structure at metal/oxide interfaces. These findings indicate that the solute chemistry at misfit dislocations is controlled by the dislocation density and oxygen content. Fundamental thermodynamic concepts - the Hume-Rothery rules and the Ellingham diagram - qualitatively predict the segregation behavior of solutes to such interfaces, providing design rules for novel interfacial chemistries.

  7. The coherent interlayer resistance of a single, rotated interface between two stacks of AB graphite

    Energy Technology Data Exchange (ETDEWEB)

    Habib, K. M. Masum, E-mail: khabib@ee.ucr.edu; Sylvia, Somaia S.; Neupane, Mahesh; Lake, Roger K., E-mail: rlake@ee.ucr.edu [Department of Electrical Engineering, University of California, Riverside, California 92521-0204 (United States); Ge, Supeng [Department of Physics and Astronomy, University of California, Riverside, California 92521-0204 (United States)

    2013-12-09

    The coherent, interlayer resistance of a misoriented, rotated interface between two stacks of AB graphite is determined for a variety of misorientation angles. The quantum-resistance of the ideal AB stack is on the order of 1 to 10 mΩ μm². For small rotation angles, the coherent interlayer resistance exponentially approaches the ideal quantum resistance at energies away from the charge neutrality point. Over a range of intermediate angles, the resistance increases exponentially with cell size for minimum size unit cells. Larger cell sizes, of similar angles, may not follow this trend. The energy dependence of the interlayer transmission is described.
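    The unit mΩ μm² quoted above is the natural one for a coherent junction: in the Landauer picture the conductance is proportional to the number of transmitting modes, which scales with the contact area, so the resistance-area product is the intensive quantity. A schematic statement (generic, not taken from the paper) is:

```latex
% Why the quantum resistance is quoted per unit area: in the coherent
% (Landauer) picture the conductance is set by the number of transmitting
% modes, which grows with the junction area A.
\begin{equation}
  G = \frac{2e^{2}}{h} \sum_{n} T_{n}
  \quad\Longrightarrow\quad
  R\,A = \frac{h}{2e^{2}}\,
         \Bigl(\tfrac{1}{A}\sum_{n} T_{n}\Bigr)^{-1},
\end{equation}
% so R A is an intensive figure of merit; a perfectly transmitting AB stack
% sets its lower bound, and rotation reduces the interlayer transmission T_n.
```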

  8. The coherent interlayer resistance of a single, rotated interface between two stacks of AB graphite

    International Nuclear Information System (INIS)

    Habib, K. M. Masum; Sylvia, Somaia S.; Neupane, Mahesh; Lake, Roger K.; Ge, Supeng

    2013-01-01

    The coherent, interlayer resistance of a misoriented, rotated interface between two stacks of AB graphite is determined for a variety of misorientation angles. The quantum-resistance of the ideal AB stack is on the order of 1 to 10 mΩ μm². For small rotation angles, the coherent interlayer resistance exponentially approaches the ideal quantum resistance at energies away from the charge neutrality point. Over a range of intermediate angles, the resistance increases exponentially with cell size for minimum size unit cells. Larger cell sizes, of similar angles, may not follow this trend. The energy dependence of the interlayer transmission is described.

  9. Framework for non-coherent interface models at finite displacement jumps and finite strains

    Science.gov (United States)

    Ottosen, Niels Saabye; Ristinmaa, Matti; Mosler, Jörn

    2016-05-01

    This paper deals with a novel constitutive framework suitable for non-coherent interfaces, such as cracks, undergoing large deformations in a geometrically exact setting. For this type of interface, the displacement field shows a jump across the interface. Within the engineering community, so-called cohesive zone models are frequently applied in order to describe non-coherent interfaces. However, for existing models to comply with the restrictions imposed by (a) thermodynamical consistency (e.g., the second law of thermodynamics), (b) balance equations (in particular, balance of angular momentum) and (c) material frame indifference, these models are essentially fiber models, i.e. models where the traction vector is collinear with the displacement jump. This constrains the ability to model shear and, in addition, anisotropic effects are excluded. A novel, extended constitutive framework which is consistent with the above mentioned fundamental physical principles is elaborated in this paper. In addition to the classical tractions associated with a cohesive zone model, the main idea is to consider additional tractions related to membrane-like forces and out-of-plane shear forces acting within the interface. For zero displacement jump, i.e. coherent interfaces, this framework degenerates to existing formulations presented in the literature. For hyperelasticity, the Helmholtz energy of the proposed novel framework depends on the displacement jump as well as on the tangent vectors of the interface with respect to the current configuration - or equivalently - the Helmholtz energy depends on the displacement jump and the surface deformation gradient. It turns out that by defining the Helmholtz energy in terms of the invariants of these variables, all above-mentioned fundamental physical principles are automatically fulfilled. Extensions of the novel framework necessary for material degradation (damage) and plasticity are also covered.
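    Written out schematically in generic notation (not the authors'), the hyperelastic structure described above is:

```latex
% Schematic form of the hyperelastic framework described in the abstract
% (generic notation): the interface Helmholtz energy depends on the
% displacement jump [[u]] and on the surface deformation gradient F_s.
\begin{equation}
  \psi = \psi\bigl([\![\mathbf{u}]\!],\ \mathbf{F}_{s}\bigr),
  \qquad
  \mathbf{t} = \frac{\partial \psi}{\partial [\![\mathbf{u}]\!]},
  \qquad
  \mathbf{T}_{s} = \frac{\partial \psi}{\partial \mathbf{F}_{s}},
\end{equation}
% with t the classical cohesive traction conjugate to the jump and T_s the
% membrane-like in-plane stress; formulating psi in terms of joint invariants
% of these arguments builds in frame indifference and balance of angular
% momentum, as stated in the abstract.
```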

  10. Ultra-compact coherent receiver with serial interface for pluggable transceiver.

    Science.gov (United States)

    Itoh, Toshihiro; Nakajima, Fumito; Ohno, Tetsuichiro; Yamanaka, Shogo; Soma, Shunichi; Saida, Takashi; Nosaka, Hideyuki; Murata, Koichi

    2014-09-22

    An ultra-compact integrated coherent receiver with a volume of 1.3 cc using a quad-channel transimpedance amplifier (TIA)-IC chip with a serial peripheral interface (SPI) is demonstrated for the first time. The TIA with the SPI and photodiode (PD) bias circuits, a miniature dual polarization optical hybrid, an octal-PD and small optical coupling system enabled the realization of the compact receiver. Measured transmission performance with 32 Gbaud dual-polarization quadrature phase shift keying signal is equivalent to that of the conventional multi-source agreement-based integrated coherent receiver with dual channel TIA-ICs. By comparing the bit-error rate (BER) performance with that under continuous SPI access, we also confirmed that there is no BER degradation caused by SPI interface access. Such an ultra-compact receiver is promising for realizing a new generation of pluggable transceivers.

  11. Ab Initio Predictions of Hexagonal Zr(B,C,N) Polymorphs for Coherent Interface Design

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Chongze [Univ. of Minnesota-Twin Cities, Minneapolis, MN (United States); Huang, Jingsong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sumpter, Bobby G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Meletis, Efstathios [Univ. of Texas at Arlington, Arlington, TX (United States); Dumitrica, Traian [Univ. of Minnesota-Twin Cities, Minneapolis, MN (United States)

    2017-10-27

    Density functional theory calculations are used to explore hexagonal (HX) NiAs-like polymorphs of Zr(B,C,N) and compare them with the corresponding Zr(B,C,N) Hägg-like face-centered cubic rocksalt (B1) phases. While all predicted compounds are mechanically stable according to the Born-Huang criteria, only HX Zr(C,N) are found dynamically stable from ab initio molecular dynamics simulations and lattice dynamics calculations. HX ZrN emerges as a candidate structure with ground state energy, elastic constants, and extrinsic mechanical parameters comparable with those of B1 ZrN. Ab initio band structure and semi-classical Boltzmann transport calculations predict a metallic character and a monotonic increase in electrical conductivity with the number of valence electrons. Electronic structure calculations indicate that the HX phases gain their stability and mechanical attributes from Zr d-non-metal p hybridization and from broadening of the Zr d bands. Furthermore, it is shown that the HX ZrN phase provides a low-energy coherent interface model for connecting B1 ZrN domains, with significant energetic advantage over an atomistic interface model derived from high resolution transmission electron microscopy images. The ab initio characterizations provided herein should aid the experimental identification of non-Hägg-like hard phases. Furthermore, the results can also enrich the variety of crystalline phases potentially available for designing coherent interfaces in superhard nanostructured materials and in materials with multilayer characteristics.

  12. Interfacing spin qubits in quantum dots and donors—hot, dense, and coherent

    Science.gov (United States)

    Vandersypen, L. M. K.; Bluhm, H.; Clarke, J. S.; Dzurak, A. S.; Ishihara, R.; Morello, A.; Reilly, D. J.; Schreiber, L. R.; Veldhorst, M.

    2017-09-01

    Semiconductor spins are one of the few qubit realizations that remain a serious candidate for the implementation of large-scale quantum circuits. Excellent scalability is often argued for spin qubits defined by lithography and controlled via electrical signals, based on the success of conventional semiconductor integrated circuits. However, the wiring and interconnect requirements for quantum circuits are completely different from those for classical circuits, as individual direct current, pulsed and in some cases microwave control signals need to be routed from external sources to every qubit. This is further complicated by the requirement that these spin qubits currently operate at temperatures below 100 mK. Here, we review several strategies that are considered to address this crucial challenge in scaling quantum circuits based on electron spin qubits. Key assets of spin qubits include the potential to operate at 1 to 4 K, the high density of quantum dots or donors combined with possibilities to space them apart as needed, the extremely long spin coherence times, and the rich options for integration with classical electronics based on the same technology.

  13. Imaging chemical interfaces perpendicular to the optical axis with focus-engineered coherent anti-Stokes Raman scattering microscopy

    International Nuclear Information System (INIS)

    Krishnamachari, Vishnu Vardhan; Potma, Eric Olaf

    2007-01-01

    In vibrational microscopy, it is often necessary to distinguish between chemically distinct microscopic objects and to highlight the 'chemical interfaces' present in the sample under investigation. Here we apply the concept of focus engineering to enhance the sensitivity of coherent anti-Stokes Raman scattering (CARS) microscopy to these interfaces. Based on detailed numerical simulations, we show that using a focused Stokes field with a sharp phase jump along the longitudinal direction leads to the suppression of the signal from bulk regions and improves the signal contrast from vibrational resonant interfaces oriented perpendicular to the axis of beam propagation. We also demonstrate that the CARS spectral response from chemical interfaces exhibits a clean, Raman-like band-shape with such a phase-shaped excitation. This phenomenon of interface highlighting is a consequence of the coherent nature of CARS signal generation and it involves a complex interplay of the spectral phase of the sample and the spatial phase of the excitation fields

  14. Theory of coherent transition radiation generated at a plasma-vacuum interface

    Energy Technology Data Exchange (ETDEWEB)

    Schroeder, Carl B.; Esarey, Eric; van Tilborg, Jeroen; Leemans, Wim P.

    2003-06-26

    Transition radiation generated by an electron beam, produced by a laser wakefield accelerator operating in the self-modulated regime, crossing the plasma-vacuum boundary is considered. The angular distributions and spectra are calculated for both the incoherent and coherent radiation. The effects of the longitudinal and transverse momentum distributions on the differential energy spectra are examined. Diffraction radiation from the finite transverse extent of the plasma is considered and shown to strongly modify the spectra and energy radiated for long wavelength radiation. This method of transition radiation generation has the capability of producing high peak power THz radiation, of order 100 μJ/pulse at the plasma-vacuum interface, which is several orders of magnitude beyond current state-of-the-art THz sources.

  15. Time domain optical coherence tomography investigation of bone matrix interface in rat femurs

    Science.gov (United States)

    Rusu, Laura-Cristina; Negruțiu, Meda-Lavinia; Sinescu, Cosmin; Hoinoiu, Bogdan; Topala, Florin-Ionel; Duma, Virgil-Florin; Rominu, Mihai; Podoleanu, Adrian G.

    2013-08-01

    The materials used to fabricate scaffolds for tissue engineering are derived from synthetic polymers, mainly from the polyester family, or from natural materials (e.g., collagen and chitosan). The mechanical properties and the structural properties of these materials can be tailored by adjusting the molecular weight, the crystalline state, and the ratio of monomers in the copolymers. Quality control and adjustment of the scaffold manufacturing process are essential to achieve high standard scaffolds. Most scaffolds are made from highly crystalline polymers, which inevitably result in their opaque appearance. Their 3-D opaque structure prevents the observation of internal uneven surface structures of the scaffolds under normal optical instruments, such as the traditional light microscope. The inability to easily monitor the inner structure of scaffolds as well as the interface with the old bone poses a major challenge for tissue engineering: it impedes the precise control and adjustment of the parameters that affect the cell growth in response to various mimicked culture conditions. The aim of this paper is to investigate the interface between the rat femur bone and the new bone that is obtained using a method of tissue engineering based on different artificial matrixes inserted in previously artificially induced defects. For this study, 15 rats were used in conformity with ethical procedures. In all the femurs a round defect was induced by drilling with a 1 mm spherical Co-Cr surgical drill. The matrixes used were Bioss and 4bone. These materials were inserted into the induced defects. The femurs were investigated at 1 week, 1 month, 2 months and 3 months after the surgical procedures. The interfaces were examined using Time Domain (TD) Optical Coherence Tomography (OCT) combined with Confocal Microscopy (CM). The optical configuration uses two single mode directional couplers with a superluminescent diode as the source centered at 1300 nm. The scanning

  16. Prevalence of optical coherence tomography detected vitreomacular interface disorders: The Maastricht Study.

    Science.gov (United States)

    Liesenborghs, Ilona; De Clerck, Eline E B; Berendschot, Tos T J M; Goezinne, Fleur; Schram, Miranda T; Henry, Ronald M A; Stehouwer, Coen D A; Webers, Carroll A B; Schouten, Jan S A G

    2018-01-25

    To calculate the prevalence of all vitreomacular interface (VMI) disorders and stratify according to age, sex and (pre)diabetes status. The presence of VMI disorders was assessed in 2660 participants aged between 40 and 75 years from The Maastricht Study who had a gradable macular spectral-domain optical coherence tomography (SD-OCT) volume scan in at least one eye [mean 59.7 ± 8.2 years, 50.2% men, 1531 normal glucose metabolism (NGM), 401 prediabetes, 728 type 2 diabetes (DM2, oversampled)]. A stratified and multivariable logistic regression analysis was used. The prevalence of the different VMI disorders for individuals with NGM, prediabetes and DM2 was, respectively, 5.7%, 6% and 6.7% for epiretinal membranes; 6%, 9.6% and 6.8% for vitreomacular traction; 1.1%, 0.7% and 0.3% for lamellar macular holes; 0.1%, 0% and 0% for pseudoholes; 1.1%, 1.9% and 5.5% for macular cysts. None of the participants was diagnosed with a macular hole. The prevalence of epiretinal membranes, vitreomacular traction and macular cysts was higher with age and differed between men and women. DM2 is positively associated with macular cysts [OR = 3.9 (95% CI 2.11-7.22, p < 0.001)] and negatively associated with lamellar macular holes [OR = 0.2 (95% CI 0.04-0.9, p = 0.036)] after adjustment for age and sex. The calculated prevalence of VMI disorders in individuals aged between 40 and 75 years is 15.9%. The prevalence of several types of VMI disorder depends on age, sex and glucose metabolism status. © 2018 The Authors. Acta Ophthalmologica published by John Wiley & Sons Ltd on behalf of Acta Ophthalmologica Scandinavica Foundation.

  17. Scalable devices

    KAUST Repository

    Krüger, Jens J.; Hadwiger, Markus

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales

  18. Implementation of a scalable, web-based, automated clinical decision support risk-prediction tool for chronic kidney disease using C-CDA and application programming interfaces.

    Science.gov (United States)

    Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam

    2017-11-01

    Clinical decision support tools for risk prediction are readily available, but typically require workflow interruptions and manual data entry so are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We debugged a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569 533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
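    A minimal sketch of the data flow described above is given below: pull coded observations out of a CCD (C-CDA) document and feed them to a risk calculator that backs a non-interruptive alert. The XML fragment, observation codes, risk formula and URL are hypothetical placeholders, not the authors' implementation or a validated risk equation.

```python
# Minimal sketch of the pipeline the abstract describes: extract observations
# from a (toy) CCD document and hand them to a risk calculator backing a
# non-interruptive alert. Codes, formula and URL are hypothetical.
import xml.etree.ElementTree as ET

CCD_FRAGMENT = """
<observations>
  <observation code="33914-3" value="38"/>   <!-- eGFR, mL/min/1.73m2 -->
  <observation code="14959-1" value="120"/>  <!-- urine ACR, mg/g -->
</observations>
"""

def extract_observations(xml_text: str) -> dict:
    """Return {code: value} for every observation in the (toy) document."""
    root = ET.fromstring(xml_text)
    return {o.get("code"): float(o.get("value")) for o in root.iter("observation")}

def kidney_failure_risk(egfr: float, acr: float) -> float:
    """Placeholder risk score in [0, 1]; a real tool would use a validated
    equation with published coefficients."""
    risk = 0.02 + 0.6 * max(0.0, (60 - egfr) / 60) + 0.2 * min(acr / 300, 1.0)
    return min(risk, 1.0)

def build_alert(obs: dict) -> dict:
    """Assemble the payload a non-interruptive EHR alert could display."""
    risk = kidney_failure_risk(egfr=obs["33914-3"], acr=obs["14959-1"])
    return {"display": "non-interruptive", "risk": round(risk, 2),
            "link": "https://example.org/ckd-details"}   # hypothetical URL

if __name__ == "__main__":
    print(build_alert(extract_observations(CCD_FRAGMENT)))
```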

  19. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but it's applicable for a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms, scalable architectures but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small scale hardware such as tablet computers, pads, smart-phones etc. up to large tiled display walls. What interests us mostly is not so much the hardware setup but rather the visualization algorithms behind these display systems that scale from your average smart phone up to the largest gigapixel display walls.

  20. Fracture of coherent interfaces between an fcc metal matrix and the Cr23C6 carbide precipitate from first principles

    Science.gov (United States)

    Barbé, Elric; Fu, Chu-Chun; Sauzay, Maxime

    2018-02-01

    It is known that microcrack initiation in metallic alloys containing second-phase particles may be caused by either an interfacial or an intraprecipitate fracture. So far, the dependence of these features on the properties of the precipitate and the interface is not clearly known. The present study aims to determine the key properties of carbide-metal interfaces controlling the energy and critical stress of fracture, based on density functional theory (DFT) calculations. We address coherent interfaces between a fcc iron or nickel matrix and a frequently observed carbide, M23C6, for which a simplified chemical composition Cr23C6 is assumed. The interfacial properties such as the formation and Griffith energies, and the effective Young's modulus are analyzed as functions of the magnetic state of the metal lattice, including the paramagnetic phase of iron. Interestingly, a simpler antiferromagnetic phase is found to exhibit similar interfacial mechanical behavior to the paramagnetic phase. A linear dependence is determined between the surface (and interface) energy and the variation of the number of chemical bonds weighted by the respective bond strength, which can be used to predict the relative formation energy of surfaces and interfaces with various chemical terminations. Finally, the critical stresses of both intraprecipitate and interfacial fracture due to a tensile loading are estimated via the universal binding energy relation (UBER) model, parametrized on the DFT data. The validity of this model is verified in the case of intraprecipitate fracture, against results from DFT tensile test simulations. In agreement with experimental evidence, we predict a much stronger tendency for interfacial fracture for this carbide. In addition, the calculated interfacial critical stresses are fully compatible with available experimental data in steels, where interfacial carbide-matrix fracture is only observed at incoherent interfaces.
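    The UBER-based estimate referred to above can be summarized schematically; the traction-separation form below is the standard universal binding energy relation (generic notation; the paper's DFT-fitted parameter values are not reproduced here).

```latex
% Standard UBER traction-separation form used to turn DFT energies into a
% critical stress (schematic). W_sep is the work of separation, i.e. the
% Griffith energy of the cleaved interface or bulk plane.
\begin{equation}
  \sigma(\delta) = \sigma_{c}\,\frac{\delta}{\lambda}\,
                   \exp\!\Bigl(1-\frac{\delta}{\lambda}\Bigr),
  \qquad
  W_{\mathrm{sep}} = \int_{0}^{\infty}\!\sigma(\delta)\,d\delta
                   = e\,\sigma_{c}\,\lambda ,
  \qquad
  \sigma_{c} = \frac{W_{\mathrm{sep}}}{e\,\lambda},
\end{equation}
% so, for comparable characteristic lengths lambda, a lower interfacial
% Griffith energy than the intraprecipitate one translates directly into the
% lower interfacial critical stress reported in the abstract.
```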

  1. The Observation of the Structure of M23C6/γ Coherent Interface in the 100Mn13 High Carbon High Manganese Steel

    Science.gov (United States)

    Xu, Zhenfeng; Ding, Zhimin; Liang, Bo

    2018-03-01

    The M23C6 carbides precipitate along the austenite grain boundaries in the 100Mn13 high carbon high manganese steel after 1323 K (1050 °C) solution treatment and subsequent 748 K (475 °C) aging treatment. The grain boundary M23C6 carbides not only spread along the grain boundary and into the incoherent austenite grain, but also grow slowly into the coherent austenite grain. Building on optical microscopy observations, the M23C6/γ coherent interface was further investigated by transmission electron microscopy (TEM). The results show that the grain boundary M23C6 carbides have orientation relationships with only one of the adjacent austenite grains in the same planes: (\bar{1}1\bar{1})_{M_{23}C_6} // (\bar{1}1\bar{1})_{\gamma}, (\bar{1}11)_{M_{23}C_6} // (\bar{1}11)_{\gamma}, [110]_{M_{23}C_6} // [110]_{\gamma}. The flat M23C6/γ coherent interface lies on the low-indexed {111} crystal planes. Moreover, in the M23C6/γ coherent interface there are embossments that stretch into the coherent austenite grain γ. Dislocations are distributed in the embossments and at the coherent interface front. Based on the experimental observations, the paper suggests that the embossments can promote migration of the M23C6/γ coherent interface. In addition, analysis of the chemical composition of the experimental material and of the crystal structures of austenite and M23C6 indicates that the transformation can be completed through limited diffusion of C atoms and a simple variant of the austenite unit cell.

  2. Scalable optical quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr3+, regularly located in the lattice of the orthosilicate (Y2SiO5) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  3. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V

    2014-01-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr3+, regularly located in the lattice of the orthosilicate (Y2SiO5) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  4. A Link between the Increase in Electroencephalographic Coherence and Performance Improvement in Operating a Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Irma Nayeli Angulo-Sherman

    2015-01-01

    Full Text Available We study the relationship between electroencephalographic (EEG) coherence and accuracy in operating a brain-computer interface (BCI). In our case, the BCI is controlled through motor imagery. Hence, a number of volunteers were trained using different training paradigms: classical visual feedback, auditory stimulation, and functional electrical stimulation (FES). After each training session, the volunteers' accuracy in operating the BCI was assessed, and the event-related coherence (ErCoh) was calculated for all possible combinations of pairs of EEG sensors. After at least four training sessions, we searched for significant differences in accuracy and ErCoh using one-way analysis of variance (ANOVA) and multiple comparison tests. Our results show that there exists a high correlation between an increase in ErCoh and performance improvement, and this effect is mainly localized in the centrofrontal and centroparietal brain regions for our motor imagery task. This result has direct implications for the development of new techniques to evaluate BCI performance and for the process of selecting a feedback modality that better enhances the volunteer's capacity to operate a BCI system.
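
    A minimal sketch of the analysis pipeline described above, assuming two EEG channels sampled at 256 Hz and coherence values grouped by training session (channel data, band limits, and group sizes are illustrative):

        # Sketch: band-averaged coherence between a sensor pair, then a one-way ANOVA
        # across training sessions; the data generated here are synthetic.
        import numpy as np
        from scipy.signal import coherence
        from scipy.stats import f_oneway

        fs = 256  # sampling rate in Hz (assumed)

        def pair_coherence(x, y, band=(8, 30)):
            """Mean magnitude-squared coherence of two channels within a frequency band."""
            f, cxy = coherence(x, y, fs=fs, nperseg=fs)
            mask = (f >= band[0]) & (f <= band[1])
            return cxy[mask].mean()

        rng = np.random.default_rng(0)
        sessions = [[pair_coherence(rng.standard_normal(fs * 4),
                                    rng.standard_normal(fs * 4)) for _ in range(20)]
                    for _ in range(4)]            # one coherence value per trial, per session

        F, p = f_oneway(*sessions)                # a significant p motivates multiple comparisons
        print(f"F = {F:.2f}, p = {p:.3f}")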

  5. Coherent-Interface-Assembled Ag2O-Anchored Nanofibrillated Cellulose Porous Aerogels for Radioactive Iodine Capture.

    Science.gov (United States)

    Lu, Yun; Liu, Hongwei; Gao, Runan; Xiao, Shaoliang; Zhang, Ming; Yin, Yafang; Wang, Siqun; Li, Jian; Yang, Dongjiang

    2016-10-26

    Nanofibrillated cellulose (NFC) has received increasing attention in science and technology because of not only the availability of large amounts of cellulose in nature but also its unique structural and physical features. These high-aspect-ratio nanofibers have potential applications in water remediation and as a reinforcing scaffold in composites, coatings, and porous materials because of their fascinating properties. In this work, highly porous NFC aerogels were prepared by tert-butanol freeze-drying of ultrasonically isolated bamboo NFC with 20-80 nm diameters. Then nonagglomerated 2-20-nm-diameter silver oxide (Ag2O) nanoparticles (NPs) were grown firmly onto the NFC scaffold with a high loading content of ∼500 wt % to fabricate Ag2O@NFC organic-inorganic composite aerogels (Ag2O@NFC). For the first time, the coherent interface and interaction mechanism between the cellulose Iβ nanofiber and Ag2O NPs are explored by high-resolution transmission electron microscopy and 3D electron tomography. Specifically, strong hydrogen bonding between Ag2O and NFC makes them grow together firmly along a coherent interface, where good lattice matching between specific crystal planes of Ag2O and NFC results in very small interfacial strain. The resulting Ag2O@NFC aerogels take full advantage of the properties of the 3D organic aerogel framework and the inorganic NPs, such as large surface area, interconnected porous structures, and excellent mechanical properties. They open up a wide horizon for practical functional use, for example, as a flexible, superefficient adsorbent to capture I- ions from contaminated water and trap I2 vapor for safe disposal, as presented in this work. The viable binding mode between many types of inorganic NPs and organic NFC established here highlights new ways to investigate cellulose-based functional nanocomposites.

  6. In vivo Evaluation of Enamel Dental Restoration Interface by Optical Coherence Tomography

    International Nuclear Information System (INIS)

    Mota, Claudia C. B. O.; Gomes, Anderson S. L.; Kashyap, Hannah U. K. S.; Kyotoku, Bernardo B. C.

    2009-01-01

    In this work, we report the in vivo application of Optical Coherence Tomography (OCT) to assess dental restorations in humans. After approval by the Ethics Committee in Human Research of the Federal University of Pernambuco, thirty patients with resin composite restorations in anterior teeth were selected. The patients were clinically evaluated, and OCT was performed. Images were obtained using OCT operating in the spectral domain, with an 840 nm superluminescent diode light source (spectral width of 50 nm, fiber output power of 25 mW, and a measured spatial resolution of 10 μm). The image acquisition time was less than one second. The results were analyzed with respect to the integrity and marginal adaptation of the restorations. Using appropriate software, the lesion region can be exactly located and a new restoration procedure can be carried out. We have shown that OCT is more than adequate in clinical practice for assessing dental restorations. (Author)

  7. Triboelectric Charging at the Nanostructured Solid/Liquid Interface for Area-Scalable Wave Energy Conversion and Its Use in Corrosion Protection.

    Science.gov (United States)

    Zhao, Xue Jiao; Zhu, Guang; Fan, You Jun; Li, Hua Yang; Wang, Zhong Lin

    2015-07-28

    We report a flexible and area-scalable energy-harvesting technique for converting kinetic wave energy. Triboelectrification as a result of direct interaction between a dynamic wave and a large-area nanostructured solid surface produces an induced current among an array of electrodes. An integration method ensures that the induced current between any pair of electrodes can be constructively added up, which enables significant enhancement in output power and realizes area-scalable integration of electrode arrays. Internal and external factors that affect the electric output are comprehensively discussed. The produced electricity not only drives small electronics but also achieves effective impressed-current cathodic protection. This type of thin-film-based device is a potentially practical solution for on-site sustained power supply at coastal or offshore sites wherever a dynamic wave is available. Potential applications include corrosion protection, pollution degradation, water desalination, and wireless sensing for marine surveillance.

  8. Capturing coherent structures and turbulent interfaces in wake flows by means of the Organised Eddy Simulation, OES and by Tomo-PIV

    International Nuclear Information System (INIS)

    Deri, E; Braza, M; Cazin, S; Cid, E; Harran, G; Ouvrard, H; Hoarau, Y; Hunt, J

    2011-01-01

    The present study aims at a physical analysis of the coherent and chaotic vortex dynamics in the near wake around a flat plate at incidence, to provide new elements with respect to flow-physics turbulence modelling for high-Reynolds-number flows around bodies, which remains a challenge in aeronautical design. Special attention is paid to capturing the thin shear-layer interfaces downstream of the separation, which are responsible for aeroacoustic phenomena related to noise reduction and are directly linked to an accurate prediction of the aerodynamic forces. The experimental investigation is carried out by means of tomographic PIV. The interaction of the most energetic coherent structures with the random turbulence is discussed. Furthermore, the POD analysis allowed evaluation of the 3D phase-averaged dynamics as well as the influence of higher modes associated with the finer-scale turbulence. The numerical study by means of the Organised Eddy Simulation (OES) approach ensured reduced turbulence diffusion, which allowed development of the von Kármán instability and capture of the thin shear-layer interfaces, using appropriate criteria based on vorticity and the dissipation rate of kinetic energy. A comparison between the experiments and the simulations concerning the coherent vortex pattern is carried out.
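
    The snapshot POD used in such analyses can be sketched in a few lines via a singular value decomposition of mean-subtracted snapshots (array sizes here are synthetic placeholders, not the experimental data):

        # Sketch: snapshot proper orthogonal decomposition (POD) via SVD.
        # Rows are spatial points, columns are time snapshots; data are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        snapshots = rng.standard_normal((5000, 200))     # e.g. flattened velocity fields

        mean_flow = snapshots.mean(axis=1, keepdims=True)
        fluct = snapshots - mean_flow                    # fluctuating part only

        # U holds spatial modes, s the modal amplitudes (singular values),
        # Vt the temporal coefficients used for phase averaging / reconstruction.
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

        energy = s**2 / np.sum(s**2)
        print("energy captured by first 5 modes:", energy[:5].sum())

        k = 5                                            # low-order reconstruction
        reconstruction = mean_flow + U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]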

  9. Interface Consistency

    DEFF Research Database (Denmark)

    Staunstrup, Jørgen

    1998-01-01

    This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications is possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.
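
    As an illustration of the simplest, purely syntactical level of checking mentioned above, a sketch in which component interfaces are described as parameter-type tables (the operation and parameter names are hypothetical):

        # Sketch: syntactical interface-consistency check comparing the parameter
        # types a component provides against the interface its peer expects.
        required = {"read_sensor": {"channel": int, "timeout_ms": int}}    # expected interface
        provided = {"read_sensor": {"channel": int, "timeout_ms": float}}  # as implemented

        def check_consistency(required, provided):
            errors = []
            for op, params in required.items():
                if op not in provided:
                    errors.append(f"missing operation: {op}")
                    continue
                for name, expected in params.items():
                    got = provided[op].get(name)
                    if got is not expected:
                        errors.append(f"{op}.{name}: expected {expected.__name__}, "
                                      f"got {getattr(got, '__name__', got)}")
            return errors

        print(check_consistency(required, provided))   # flags the timeout_ms type mismatch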

  10. Characterization of irradiation damage distribution near TiO2/SrTiO3 interfaces using coherent acoustic phonon interferometry

    International Nuclear Information System (INIS)

    Yarotski, Dmitry; Yan Li; Jia Quanxi; Taylor, Antoinette J.; Fu Engang; Wang Yongqiang; Uberuaga, Blas P.

    2012-01-01

    We apply ultrafast coherent acoustic phonon interferometry to characterize the distribution of radiation damage near TiO2/SrTiO3 interfaces. We show that the optical and mechanical properties of anatase TiO2 remain unaffected by radiation doses in the 0.1-5 dpa (displacements per atom) range, while the degraded optical response indicates a significant defect accumulation in the interfacial region of SrTiO3 at 0.1 dpa and subsequent amorphization at 3 dpa. Comparison between the theoretical simulations and the experimental results reveals an almost threefold reduction of the sound velocity in the irradiated SrTiO3 layer at peak damage levels of 3 and 5 dpa.

  11. Quantitative Analysis of Lens Nuclear Density Using Optical Coherence Tomography (OCT) with a Liquid Optics Interface: Correlation between OCT Images and LOCS III Grading

    Directory of Open Access Journals (Sweden)

    You Na Kim

    2016-01-01

    Full Text Available Purpose. To quantify whole lens and nuclear lens densities using anterior-segment optical coherence tomography (OCT) with a liquid optics interface and evaluate their correlation with Lens Opacities Classification System III (LOCS III) lens grading and corrected distance visual acuity (BCVA). Methods. OCT images of the whole lens and lens nucleus of eyes with age-related nuclear cataract were analyzed using ImageJ software. The lens grade and nuclear density were represented in pixel intensity units (PIU), and correlations between PIU, BCVA, and LOCS III were assessed. Results. Forty-seven eyes were analyzed. The mean whole lens and lens nuclear densities were 26.99 ± 5.23 and 19.43 ± 6.15 PIU, respectively. A positive linear correlation was observed between LOCS III and both lens opacity (R2 = 0.187, p < 0.01) and nuclear density (R2 = 0.316, p < 0.01) obtained from OCT images. Preoperative BCVA and LOCS III were also positively correlated (R2 = 0.454, p < 0.01). Conclusions. Whole lens and lens nuclear densities obtained from OCT correlated with LOCS III. Nuclear density showed a higher positive correlation with LOCS III than whole lens density. OCT with a liquid optics interface is a potential quantitative method for lens grading and can aid in monitoring and managing age-related cataracts.

  12. An observation on the quality of interfaces in order to understand the complexity and coherence of informal settlement: A study on Tamansari Kampung in Bandung

    Science.gov (United States)

    Sawira, S.; Rahman, T.

    2018-05-01

    Self-organized settlements are formed within the limited capacity of the inhabitants, with or without government interventions. This pattern is mostly found in informal settlements, where the occupants are the planners, guided by their needs, limited resources, and vernacular knowledge about place making. Understanding the process of its development and transformation could be a way of unfolding the complexity it offers to a formal urban setting. To identify the patterns of the adaptation process, a study of morphological elements (i.e., house form and streets) is a possible way. A case study of an informal settlement (Kampung of Tamansari, Bandung, Indonesia) has been taken to dissect these elements. Two important components of the study area, house forms and streets, created the first layer of urban fabric. High population density demanded layers of needs and activities, which eventually guided the multifunctional character of streets and house forms. Thus, the dialogue that streets create with the complex built forms, often known as the interface, is the key element for understanding the underlying order of Tamansari. Here, interfaces can be divided into two categories depending on their scale: small and large. Small-scale interfaces comprise small elements such as extended platforms, fences, steps, low walls, blank walls, and elements set above, set forth, or set over in house forms. These components help to create and define semipublic spaces in the settlement. These spaces can be visually and physically interactive or non-interactive, resulting in active or inactive spaces, respectively. Small-scale interfaces are common features of the settlement, whereas large-scale interfaces are placed at strategic locations and act as active spaces. Connecting bridges, open spaces, and contours often create a special dialogue within and beyond the study area. Interfaces cater to diversity in the settlement by creating a hierarchy of spaces. Sense of belonging

  13. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies, such as PKIX and SPKI, and the issues of PKI that affect scalability. Much of the focus is on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  14. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  15. Model-Based Evaluation Of System Scalability: Bandwidth Analysis For Smartphone-Based Biosensing Applications

    DEFF Research Database (Denmark)

    Patou, François; Madsen, Jan; Dimaki, Maria

    2016-01-01

    Scalability is a design principle often valued for the engineering of complex systems. Scalability is the ability of a system to change the current value of one of its specification parameters. Although targeted frameworks are available for the evaluation of scalability for specific digital systems ... re-engineering of 5 independent system modules, from the replacement of a wireless Bluetooth interface to the revision of the ADC sample-and-hold operation, could help increase system bandwidth.

  16. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro; Shinagawa, Tatsuya

    2017-01-01

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  17. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  18. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the
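
    The frequency test at the heart of FSM can be sketched as follows; this naive version enumerates subgraph matches with NetworkX and is exactly the kind of exhaustive matching that the minimal-matching and two-phase techniques described above are designed to avoid (graph data and the support definition are illustrative):

        # Sketch: naive frequency test for one candidate subgraph (pattern) in a
        # single large graph, counting distinct node images of the matches.
        import networkx as nx
        from networkx.algorithms import isomorphism

        G = nx.gnm_random_graph(200, 600, seed=0)   # stand-in for the input graph
        pattern = nx.path_graph(3)                  # candidate subgraph to test

        matcher = isomorphism.GraphMatcher(G, pattern)
        embeddings = {frozenset(m) for m in matcher.subgraph_isomorphisms_iter()}

        threshold = 50
        print("frequent" if len(embeddings) >= threshold else "infrequent",
              f"({len(embeddings)} distinct embeddings)")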

  19. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    Full Text Available This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM. From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  20. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
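
    For context, the serial building block behind such a solver, the Thomas algorithm for a tridiagonal system, can be sketched as follows (the report's parallel solver partitions the system across processors; only the single-processor kernel with illustrative data is shown here):

        # Sketch: Thomas algorithm for a tridiagonal system
        # a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].
        import numpy as np

        def thomas(a, b, c, d):
            """a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused)."""
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                          # forward elimination
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):                 # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Quick check against a dense solve on a small diagonally dominant system.
        n = 6
        a = np.full(n, -1.0); b = np.full(n, 4.0); c = np.full(n, -1.0); d = np.arange(1.0, n + 1)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))   # True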

  1. Volitional Control of Neuromagnetic Coherence

    Directory of Open Access Journals (Sweden)

    Matthew D Sacchet

    2012-12-01

    Full Text Available Coherence of neural activity between circumscribed brain regions has been implicated as an indicator of intracerebral communication in various cognitive processes. While neural activity can be volitionally controlled with neurofeedback, the volitional control of coherence has not yet been explored. Learned volitional control of coherence could elucidate mechanisms of associations between cortical areas and its cognitive correlates and may have clinical implications. Neural coherence may also provide a signal for brain-computer interfaces (BCI). In the present study we used the Weighted Overlapping Segment Averaging (WOSA) method to assess coherence between bilateral magnetoencephalograph (MEG) sensors during voluntary digit movement as a basis for BCI control. Participants controlled an onscreen cursor, with a success rate of 124 of 180 (68.9%, sign-test p < 0.001) and 84 out of 100 (84%, sign-test p < 0.001). The present findings suggest that neural coherence may be volitionally controlled and may have specific behavioral correlates.
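
    The reported sign-test significance can be reproduced directly from the hit counts with an exact binomial test against chance-level (50%) control; a brief check:

        # Sketch: exact binomial (sign) test for the reported BCI hit counts vs. chance.
        from scipy.stats import binomtest

        for hits, trials in [(124, 180), (84, 100)]:
            res = binomtest(hits, trials, p=0.5, alternative="greater")
            print(f"{hits}/{trials}: p = {res.pvalue:.2e}")   # both well below 0.001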

  2. Coherent interface structures and intergrain Josephson coupling in dense MgO/Mg2Si/MgB2 nanocomposites

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Katsuya; Takahashi, Kazuyuki; Uchino, Takashi, E-mail: uchino@kobe-u.ac.jp [Department of Chemistry, Graduate School of Science, Kobe University, Nada, Kobe 657-8501 (Japan); Nagashima, Yukihito [Nippon Sheet Glass Co., Ltd., Konoike, Itami 664-8520 (Japan); Seto, Yusuke [Department of Planetology, Graduate School of Science, Kobe University, Nada, Kobe 657-8501 (Japan); Matsumoto, Megumi; Sakurai, Takahiro [Center for Support to Research and Education Activities, Kobe University, Nada, Kobe 657-8501 (Japan); Ohta, Hitoshi [Molecular Photoscience Research Center, Kobe University, Nada, Kobe 657-8501 (Japan)

    2016-07-07

    Many efforts are under way to control the structure of heterointerfaces in nanostructured composite materials for designing functionality and engineering applications. However, the fabrication of high-quality heterointerfaces is challenging because the crystal/crystal interface is usually the most defective part of nanocomposite materials. In this work, we show that fully dense insulator (MgO)/semiconductor (Mg2Si)/superconductor (MgB2) nanocomposites with atomically smooth and continuous interfaces, including epitaxial-like MgO/Mg2Si interfaces, are obtained by solid phase reaction between metallic magnesium and a borosilicate glass. The resulting nanocomposites exhibit a semiconductor-superconductor transition at 36 K owing to the MgB2 nanograins surrounded by the MgO/Mg2Si matrix. This transition is followed by an intergrain phase-lock transition at ∼24 K due to the construction of a Josephson-coupled network, eventually leading to a near-zero resistance state at 17 K. The method not only provides a simple process to fabricate dense nanocomposites with high-quality interfaces, but also enables investigation of the electric and magnetic properties of embedded superconducting nanograins with good intergrain coupling.

  3. Coherence matrix of plasmonic beams

    DEFF Research Database (Denmark)

    Novitsky, Andrey; Lavrinenko, Andrei

    2013-01-01

    We consider monochromatic electromagnetic beams of surface plasmon-polaritons created at interfaces between dielectric media and metals. We theoretically study non-coherent superpositions of elementary surface waves and discuss their spectral degree of polarization, Stokes parameters, and the form of the spectral coherence matrix. We compare the polarization properties of the surface plasmon-polaritons as three-dimensional and two-dimensional fields, concluding that the latter is superior.
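
    As background for the two-dimensional case discussed in this record (a standard relation, not a result specific to the paper), the spectral degree of polarization follows from the 2×2 coherence matrix as

        W(\omega) = \begin{pmatrix} \langle E_x^{*}E_x \rangle & \langle E_x^{*}E_y \rangle \\ \langle E_y^{*}E_x \rangle & \langle E_y^{*}E_y \rangle \end{pmatrix}, \qquad
        P(\omega) = \sqrt{1 - \frac{4\,\det W(\omega)}{\left[\operatorname{tr} W(\omega)\right]^{2}}}.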

  4. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors' newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca's friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experiments.

  5. Scalable cloud without dedicated storage

    Science.gov (United States)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on a cluster without separate dedicated storage. The dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built from open-source components such as OpenStack, CEPH, etc.

  6. Characterisation of dispersive systems using a coherer

    Directory of Open Access Journals (Sweden)

    Nikolić Pantelija M.

    2002-01-01

    Full Text Available The possibility of characterizing aluminium powders using a horizontal coherer has been considered. Al powders of known dimension were treated with a high-frequency electromagnetic field or with a DC electric field, which were increased until a dielectric breakdown occurred. Using a PC-428 Electronic Design multifunctional card and a suitable interface between the coherer and the PC, the activation time of the coherer was measured as a function of powder dimension and the distance between the coherer electrodes. It was also shown that the average dimension of powders of unknown size could be determined using the coherer.

  7. Coherent detectors

    International Nuclear Information System (INIS)

    Lawrence, C R; Church, S; Gaier, T; Lai, R; Ruf, C; Wollack, E

    2009-01-01

    Coherent systems offer significant advantages in simplicity, testability, control of systematics, and cost. Although quantum noise sets the fundamental limit to their performance at high frequencies, recent breakthroughs suggest that near-quantum-limited noise up to 150 or even 200 GHz could be realized within a few years. If the demands of component separation can be met with frequencies below 200 GHz, coherent systems will be strong competitors for a space CMB polarization mission. The rapid development of digital correlator capability now makes space interferometers with many hundreds of elements possible. Given the advantages of coherent interferometers in suppressing systematic effects, such systems deserve serious study.

  8. Coherent detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, C R [M/C 169-327, Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Church, S [Room 324 Varian Physics Bldg, 382 Via Pueblo Mall, Stanford, CA 94305-4060 (United States); Gaier, T [M/C 168-314, Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Lai, R [Northrop Grumman Corporation, Redondo Beach, CA 90278 (United States); Ruf, C [1533 Space Research Building, The University of Michigan, Ann Arbor, MI 48109-2143 (United States); Wollack, E, E-mail: charles.lawrence@jpl.nasa.go [NASA/GSFC, Code 665, Observational Cosmology Laboratory, Greenbelt, MD 20771 (United States)

    2009-03-01

    Coherent systems offer significant advantages in simplicity, testability, control of systematics, and cost. Although quantum noise sets the fundamental limit to their performance at high frequencies, recent breakthroughs suggest that near-quantum-limited noise up to 150 or even 200 GHz could be realized within a few years. If the demands of component separation can be met with frequencies below 200 GHz, coherent systems will be strong competitors for a space CMB polarization mission. The rapid development of digital correlator capability now makes space interferometers with many hundreds of elements possible. Given the advantages of coherent interferometers in suppressing systematic effects, such systems deserve serious study.

  9. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  10. CODA: A scalable, distributed data acquisition system

    International Nuclear Information System (INIS)

    Watson, W.A. III; Chen, J.; Heyes, G.; Jastrzembski, E.; Quarrie, D.

    1994-01-01

    A new data acquisition system has been designed for physics experiments scheduled to run at CEBAF starting in the summer of 1994. This system runs on Unix workstations connected via ethernet, FDDI, or other network hardware to multiple intelligent front end crates -- VME, CAMAC or FASTBUS. CAMAC crates may either contain intelligent processors, or may be interfaced to VME. The system is modular and scalable, from a single front end crate and one workstation linked by ethernet, to as many as 32 clusters of front end crates ultimately connected via a high speed network to a set of analysis workstations. The system includes an extensible, device independent slow controls package with drivers for CAMAC, VME, and high voltage crates, as well as a link to CEBAF accelerator controls. All distributed processes are managed by standard remote procedure calls propagating change-of-state requests, or reading and writing program variables. Custom components may be easily integrated. The system is portable to any front end processor running the VxWorks real-time kernel, and to most workstations supplying a few standard facilities such as rsh and X-windows, and Motif and socket libraries. Sample implementations exist for 2 Unix workstation families connected via ethernet or FDDI to VME (with interfaces to FASTBUS or CAMAC), and via ethernet to FASTBUS or CAMAC

  11. Nonlinear optics at interfaces

    International Nuclear Information System (INIS)

    Chen, C.K.

    1980-12-01

    Two aspects of surface nonlinear optics are explored in this thesis. The first part is a theoretical and experimental study of nonlinear interaction of surface plasmons and bulk photons at metal-dielectric interfaces. The second part is a demonstration and study of surface enhanced second harmonic generation at rough metal surfaces. A general formulation for nonlinear interaction of surface plasmons at metal-dielectric interfaces is presented and applied to both second and third order nonlinear processes. Experimental results for coherent second and third harmonic generation by surface plasmons and surface coherent anti-Stokes Raman spectroscopy (CARS) are shown to be in good agreement with the theory

  12. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform to be able to determine the worst-case execution time statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability we argue to use multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared memory multi-processor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes

  13. Optimizing Nanoelectrode Arrays for Scalable Intracellular Electrophysiology.

    Science.gov (United States)

    Abbott, Jeffrey; Ye, Tianyang; Ham, Donhee; Park, Hongkun

    2018-03-20

    , clarifying how the nanoelectrode attains intracellular access. This understanding will be translated into a circuit model for the nanobio interface, which we will then use to lay out the strategies for improving the interface. The intracellular interface of the nanoelectrode is currently inferior to that of the patch clamp electrode; reaching this benchmark will be an exciting challenge that involves optimization of electrode geometries, materials, chemical modifications, electroporation protocols, and recording/stimulation electronics, as we describe in the Account. Another important theme of this Account, beyond the optimization of the individual nanoelectrode-cell interface, is the scalability of the nanoscale electrodes. We will discuss this theme using a recent development from our groups as an example, where an array of ca. 1000 nanoelectrode pixels fabricated on a CMOS integrated circuit chip performs parallel intracellular recording from a few hundred cardiomyocytes, which marks a new milestone in electrophysiology.

  14. Design of a Scalable Event Notification Service: Interface and Architecture

    National Research Council Canada - National Science Library

    Carzaniga, Antonio; Rosenblum, David S; Wolf, Alexander L

    1998-01-01

    Event-based distributed systems are programmed to operate in response to events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems...

  15. Coherence and Sense of Coherence

    DEFF Research Database (Denmark)

    Dau, Susanne

    2014-01-01

    Constraints in the implementation of models of blended learning can be explained by several causes, but in this paper, it is illustrated that lack of sense of coherence is a major factor of these constraints along with the referential whole of the perceived learning environments. The question exa...

  16. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issues.

  17. Developing Scalable Information Security Systems

    Directory of Open Access Journals (Sweden)

    Valery Konstantinovich Ablekov

    2013-06-01

    Full Text Available Existing physical security systems have a wide range of shortcomings, including high cost, a large number of vulnerabilities, and problems with modification and support. This paper addresses the problem of developing systems without these drawbacks. The paper presents the architecture of an information security system that operates over the TCP/IP network protocol, including the ability to connect different types of devices and to integrate with existing security systems. The main advantages are a significant increase in system reliability and scalability, both vertical and horizontal, with minimal cost in both financial and time resources.

  18. Scalable web services for the PSIPRED Protein Analysis Workbench.

    Science.gov (United States)

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.

  19. Scalable and Hybrid Radio Resource Management for Future Wireless Networks

    DEFF Research Database (Denmark)

    Mino, E.; Luo, Jijun; Tragos, E.

    2007-01-01

    The concept of a ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local area to wide area wireless networks. The integration of cellular and local-area networks into a single radio system offers a great advantage for the end user and for the operator, compared with the current situation of disconnected systems, usually with different subscriptions, radio interfaces and terminals. To be a ubiquitous wireless system, the IST project WINNER II has defined three system modes. This contribution

  20. Coherent Baryogenesis

    CERN Document Server

    Garbrecht, B; Schmidt, M G; Garbrecht, Bjorn; Prokopec, Tomislav; Schmidt, Michael G.

    2004-01-01

    We propose a new baryogenesis scenario based on coherent production and mixing of different fermionic species. The mechanism is operative during phase transitions, at which the fermions acquire masses via Yukawa couplings to scalar fields. Baryon production is efficient when the mass matrix is nonadiabatically varying, nonsymmetric and when it violates CP and B-L directly, or some other charges that are eventually converted to B-L. We first consider a toy model, which involves two mixing fermionic species, and then a hybrid inflationary scenario embedded in a supersymmetric Pati-Salam GUT. We show that, quite generically, a baryon excess in accordance with observation can result.

  1. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
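
    A minimal sketch of the wavelet-based reduction idea, assuming a per-timestep load trace and the PyWavelets package (wavelet choice and threshold are illustrative, not the settings used in Libra):

        # Sketch: compress a time-varying load-balance signal by keeping only the
        # largest wavelet coefficients, then reconstruct and measure the error.
        import numpy as np
        import pywt

        rng = np.random.default_rng(2)
        t = np.linspace(0, 1, 1024)
        load = np.sin(6 * np.pi * t) + 0.1 * rng.standard_normal(t.size)   # synthetic load trace

        coeffs = pywt.wavedec(load, "db4", level=5)
        flat = np.concatenate(coeffs)
        cutoff = np.quantile(np.abs(flat), 0.90)        # keep roughly the top 10% of coefficients

        compressed = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]
        reconstructed = pywt.waverec(compressed, "db4")[: load.size]

        kept = sum(int(np.count_nonzero(c)) for c in compressed)
        err = np.linalg.norm(load - reconstructed) / np.linalg.norm(load)
        print(f"kept {kept}/{flat.size} coefficients, relative error {err:.3f}")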

  2. Coherent Dynamics of a Hybrid Quantum Spin-Mechanical Oscillator System

    Science.gov (United States)

    Lee, Kenneth William, III

    A fully functional quantum computer must contain at least two important components: a quantum memory for storing and manipulating quantum information and a quantum data bus to securely transfer information between quantum memories. Typically, a quantum memory is composed of a matter system, such as an atom or an electron spin, due to their prolonged quantum coherence. Alternatively, a quantum data bus is typically composed of some propagating degree of freedom, such as a photon, which can retain quantum information over long distances. Therefore, a quantum computer will likely be a hybrid quantum device, consisting of two or more disparate quantum systems. However, there must be a reliable and controllable quantum interface between the memory and bus in order to faithfully interconvert quantum information. The current engineering challenge for quantum computers is scaling the device to large numbers of controllable quantum systems, which will ultimately depend on the choice of the quantum elements and interfaces utilized in the device. In this thesis, we present and characterize a hybrid quantum device comprised of single nitrogen-vacancy (NV) centers embedded in a high quality factor diamond mechanical oscillator. The electron spin of the NV center is a leading candidate for the realization of a quantum memory due to its exceptional quantum coherence times. On the other hand, mechanical oscillators are highly sensitive to a wide variety of external forces, and have the potential to serve as a long-range quantum bus between quantum systems of disparate energy scales. These two elements are interfaced through crystal strain generated by vibrations of the mechanical oscillator. Importantly, a strain interface allows for a scalable architecture, and furthermore, opens the door to integration into a larger quantum network through coupling to an optical interface. There are a few important engineering challenges associated with this device. First, there have been no

  3. Scalable quantum information processing with atomic ensembles and flying photons

    International Nuclear Information System (INIS)

    Mei Feng; Yu Yafei; Feng Mang; Zhang Zhiming

    2009-01-01

    We present a scheme for scalable quantum information processing with atomic ensembles and flying photons. Using the Rydberg blockade, we encode the qubits in the collective atomic states, which could be manipulated fast and easily due to the enhanced interaction in comparison to the single-atom case. We demonstrate that our proposed gating could be applied to the generation of two-dimensional cluster states for measurement-based quantum computation. Moreover, the atomic ensembles also function as quantum repeaters useful for long-distance quantum state transfer. We show the possibility of our scheme to work in the bad-cavity or weak-coupling regime, which could greatly relax the experimental requirements. The efficient coherent operations on the ensemble qubits enable our scheme to be switchable between quantum computation and quantum communication using atomic ensembles.

  4. Scalable Creation of Long-Lived Multipartite Entanglement

    Science.gov (United States)

    Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.

    2017-10-01

    We demonstrate the deterministic generation of multipartite entanglement based on scalable methods. Four qubits are encoded in 40Ca+, stored in a microstructured segmented Paul trap. These qubits are sequentially entangled by laser-driven pairwise gate operations. Between these, the qubit register is dynamically reconfigured via ion shuttling operations, where ion crystals are separated and merged, and ions are moved in and out of a fixed laser interaction zone. A sequence consisting of three pairwise entangling gates yields a four-ion Greenberger-Horne-Zeilinger state |ψ⟩ = (1/√2)(|0000⟩ + |1111⟩), and full quantum state tomography reveals a state fidelity of 94.4(3)%. We analyze the decoherence of this state and employ dynamic decoupling on the spatially distributed constituents to maintain 69(5)% coherence at a storage time of 1.1 s.

  5. Requirements for Scalable Access Control and Security Management Architectures

    National Research Council Canada - National Science Library

    Keromytis, Angelos D; Smith, Jonathan M

    2005-01-01

    Maximizing local autonomy has led to a scalable Internet. Scalability and the capacity for distributed control have unfortunately not extended well to resource access control policies and mechanisms...

  6. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    DEFF Research Database (Denmark)

    Passas, Stavros; Karlsson, Sven

    2011-01-01

    Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that the existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating system that aims to improve the state of the art in scalability and reliability. We achieve scalability through limiting data sharing when possible, and through extensive use of lock-free data structures. Reliability is addressed with a careful re-design of the programming interface and structure of the operating system.

  7. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  8. ALADDIN - enhancing applicability and scalability

    International Nuclear Information System (INIS)

    Roverso, Davide

    2001-02-01

    The ALADDIN project aims at the study and development of flexible, accurate, and reliable techniques and principles for computerised event classification and fault diagnosis for complex machinery and industrial processes. The main focus of the project is on advanced numerical techniques, such as wavelets, and empirical modelling with neural networks. This document reports on recent important advancements, which significantly widen the practical applicability of the developed principles, both in terms of flexibility of use and in terms of scalability to large problem domains. In particular, two novel techniques are described here. The first, which we call Wavelet On-Line Pre-processing (WOLP), is aimed at extracting, on-line, relevant dynamic features from the process data streams. This technique allows the system greater flexibility in detecting and processing transients at a range of different time scales. The second technique, which we call Autonomous Recursive Task Decomposition (ARTD), is aimed at tackling the problem of constructing a classifier able to discriminate among a large number of different event/fault classes, which is often the case when the application domain is a complex industrial process. ARTD also allows for incremental application development (i.e., the incremental addition of new classes to an existing classifier without the need to retrain the entire system) and for simplified application maintenance. The description of these novel techniques is complemented by reports of quantitative experiments that show in practice the extent of these improvements. (Author)

  9. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations on inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research, ranging from efficient join algorithms such as sort-merge join, to the use of efficient indices such as (Formula presented.)-tree, (Formula presented.)-tree and Bitmap. However, inequality joins have received little attention and queries containing such joins are notably very slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins, which is then used to optimize multiple-predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
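
    The sorted-array idea behind such algorithms can be sketched for a single predicate R.a < S.b: sort one input once, then binary-search the crossover point for each probe value (relation contents are illustrative; the paper's algorithms add bit-arrays, selectivity estimation, and incremental maintenance on top of this):

        # Sketch: single-predicate inequality join R JOIN S ON R.a < S.b with a sorted array.
        from bisect import bisect_left

        R = [("r1", 10), ("r2", 25), ("r3", 40)]   # (id, a)
        S = [("s1", 15), ("s2", 30), ("s3", 35)]   # (id, b)

        # Sort R once by a; for each s, every r with a < b is a prefix of the sorted array.
        R_sorted = sorted(R, key=lambda r: r[1])
        a_values = [r[1] for r in R_sorted]

        result = []
        for s_id, b in S:
            k = bisect_left(a_values, b)           # number of a values strictly less than b
            result.extend((r_id, s_id) for r_id, _ in R_sorted[:k])

        print(result)   # [('r1','s1'), ('r1','s2'), ('r2','s2'), ('r1','s3'), ('r2','s3')]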

  10. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  11. Scalable architecture for a room temperature solid-state quantum information processor.

    Science.gov (United States)

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  12. Resource-aware complexity scalability for mobile MPEG encoding

    NARCIS (Netherlands)

    Mietens, S.O.; With, de P.H.N.; Hentschel, C.; Panchanatan, S.; Vasudev, B.

    2004-01-01

    Complexity scalability attempts to scale the required resources of an algorithm with the chosen quality settings, in order to broaden the application range. In this paper, we present complexity-scalable MPEG encoding of which the core processing modules are modified for scalability. Scalability is

  13. Coherent dynamics of plasma mirrors

    Energy Technology Data Exchange (ETDEWEB)

    Thaury, C; George, H; Quere, F; Monot, P; Martin, Ph [CEA, DSM, IRAMIS, Serv Photons Atomes and Mol, F-91191 Gif Sur Yvette, (France); Loch, R [Univ Twente, Laser Phys and Nonlinear Opt Grp, Fac Sci and Technol, MESA Inst Nanotechnol, NL-7500 AE Enschede, (Netherlands); Geindre, J P [Ecole Polytech, Lab Pour Utilisat Lasers Intenses, CNRS, F-91128 Palaiseau, (France)

    2008-07-01

    Coherent ultrashort X-ray pulses provide new ways to probe matter and its ultrafast dynamics. One of the promising paths to generate these pulses consists of using a nonlinear interaction with a system to strongly and periodically distort the waveform of intense laser fields, and thus produce high-order harmonics. Such distortions have so far been induced by using the nonlinear polarizability of atoms, leading to the production of atto-second light bursts, short enough to study the dynamics of electrons in matter. Shorter and more intense atto-second pulses, together with higher harmonic orders, are expected by reflecting ultra intense laser pulses on a plasma mirror - a dense (≈ 10²³ electrons cm⁻³) plasma with a steep interface. However, short-wavelength-light sources produced by such plasmas are known to generally be incoherent. In contrast, we demonstrate that, like in usual low-intensity reflection, the coherence of the light wave is preserved during harmonic generation on plasma mirrors. We then exploit this coherence for interferometric measurements and thus carry out a first study of the laser-driven coherent dynamics of the plasma electrons. (authors)

  14. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.
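
    Fountain's actual interface is not described in the abstract; as a generic, hedged sketch of the idea of exposing node resource data over HTTP in XML (module, element, and port names below are hypothetical and Unix-specific calls are noted), one might write:

```python
# Minimal sketch of a node monitor that reports resource data as XML over HTTP.
# Illustration of the general approach only, not the Fountain implementation.
import os
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from xml.etree.ElementTree import Element, SubElement, tostring

def node_status_xml():
    root = Element("node", name=os.uname().nodename)        # Unix-only call
    SubElement(root, "timestamp").text = str(time.time())
    load1, load5, load15 = os.getloadavg()                   # Unix-only call
    SubElement(root, "loadavg",
               one=str(load1), five=str(load5), fifteen=str(load15))
    return tostring(root, encoding="unicode")

class MonitorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = node_status_xml().encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), MonitorHandler).serve_forever()
```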

  15. The TOTEM DAQ based on the Scalable Readout System (SRS)

    Science.gov (United States)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program, approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for an increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance measured during the commissioning phase at the LHC Interaction Point.

  16. Temporal Coherence Strategies for Augmented Reality Labeling

    DEFF Research Database (Denmark)

    Madsen, Jacob Boesen; Tatzgern, Markus; Madsen, Claus B.

    2016-01-01

    Temporal coherence of annotations is an important factor in augmented reality user interfaces and for information visualization. In this paper, we empirically evaluate four different techniques for annotation. Based on these findings, we follow up with subjective evaluations in a second experiment...

  17. Kinetic Interface

    DEFF Research Database (Denmark)

    2009-01-01

    A kinetic interface for orientation detection in a video training system is disclosed. The interface includes a balance platform instrumented with inertial motion sensors. The interface engages a participant's sense of balance in training exercises.

  18. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Directory of Open Access Journals (Sweden)

    Giovanni Delussu

    Full Text Available This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
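
    PyEHR's real driver API is not given in the abstract; the sketch below is only a hedged illustration of the general pattern of a common driver interface with interchangeable storage back-ends (all class and method names are hypothetical, and a trivial in-memory back-end stands in for MongoDB or Elasticsearch):

```python
from abc import ABC, abstractmethod

class RecordDriver(ABC):
    """Common interface every concrete storage driver must implement."""

    @abstractmethod
    def save(self, record: dict) -> str: ...

    @abstractmethod
    def find(self, query: dict) -> list: ...

class InMemoryDriver(RecordDriver):
    """Stand-in back-end; a MongoDB or Elasticsearch driver would expose the same methods."""
    def __init__(self):
        self._records = {}

    def save(self, record: dict) -> str:
        record_id = str(len(self._records))
        self._records[record_id] = record
        return record_id

    def find(self, query: dict) -> list:
        return [r for r in self._records.values()
                if all(r.get(k) == v for k, v in query.items())]

def build_driver(name: str) -> RecordDriver:
    # A real data access layer would dispatch on configuration here.
    return {"memory": InMemoryDriver}[name]()

driver = build_driver("memory")
driver.save({"archetype": "blood_pressure", "systolic": 120})
print(driver.find({"archetype": "blood_pressure"}))
```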

  19. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data

    Science.gov (United States)

    Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191

  20. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Science.gov (United States)

    Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  1. Scheme for achieving coherent perfect absorption by anisotropic metamaterials

    KAUST Repository

    Zhang, Xiujuan

    2017-02-22

    We propose a unified scheme to achieve coherent perfect absorption of electromagnetic waves by anisotropic metamaterials. The scheme describes the condition on perfect absorption and offers an inverse design route based on effective medium theory in conjunction with retrieval method to determine practical metamaterial absorbers. The scheme is scalable to frequencies and applicable to various incident angles. Numerical simulations show that perfect absorption is achieved in the designed absorbers over a wide range of incident angles, verifying the scheme. By integrating these absorbers, we further propose an absorber to absorb energy from two coherent point sources.

  2. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Lund, Morten; Nielsen, Christian

    2018-01-01

    -term profitable business. However, the main message of this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. Design/Methodology/Approach: The article is based...... on a five-year longitudinal action research project of over 90 companies that participated in the International Center for Innovation project aimed at building 10 global network-based business models. Findings: This article introduces and discusses the term scalability from a company-level perspective......Purpose: The purpose of the article is to define what scalable business models are. Central to the contemporary understanding of business models is the value proposition towards the customer and the hypotheses generated about delivering value to the customer which become a good foundation for a long

  3. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    and is itself a source and cause of prolific data creation. This calls for scalable map processing techniques that can handle the data volume and which play well with the predominant data models on the Web. (4) Maps are now consumed around the clock by a global audience. While historical maps were single-user......-defined constraints as well as custom objectives. The purpose of the language is to derive a target multi-scale database from a source database according to holistic specifications. (b) The Glossy SQL compiler allows Glossy SQL to be scalably executed in a spatial analytics system, such as a spatial relational......, there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...

  4. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering...... method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real...... and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well....

  5. Enhancing Scalability of Sparse Direct Methods

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia, Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-01-01

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome the scalability bottleneck of direct methods, in both time and memory. These include parallelizing the symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers

  6. Software performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2009-01-01

    Praise from the Reviewers: "The practicality of the subject in a real-world situation distinguishes this book from others available on the market."—Professor Behrouz Far, University of Calgary. "This book could replace the computer organization texts now in use that every CS and CpE student must take. . . . It is much needed, well written, and thoughtful."—Professor Larry Bernstein, Stevens Institute of Technology. A distinctive, educational text on software performance and scalability, this is the first book to take a quantitative approach to the subject of software performance and scalability

  7. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale...... will seldom lead to business model scalability capable of competing with digital disruption(s)....... as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard to copy value propositions or value propositions that take...

  8. Cohering power of quantum operations

    Energy Technology Data Exchange (ETDEWEB)

    Bu, Kaifeng, E-mail: bkf@zju.edu.cn [School of Mathematical Sciences, Zhejiang University, Hangzhou 310027 (China); Kumar, Asutosh, E-mail: asukumar@hri.res.in [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019 (India); Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Zhang, Lin, E-mail: linyz@zju.edu.cn [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Wu, Junde, E-mail: wjd@zju.edu.cn [School of Mathematical Sciences, Zhejiang University, Hangzhou 310027 (China)

    2017-05-18

    Highlights: • Quantum coherence. • Cohering power: production of quantum coherence by quantum operations. • Study of cohering power and generalized cohering power, and their comparison for different measures of quantum coherence. • Operational interpretation of cohering power. • Bound on cohering power of a generic quantum operation. - Abstract: Quantum coherence and entanglement, which play a crucial role in quantum information processing tasks, are usually fragile under decoherence. Therefore, the production of quantum coherence by quantum operations is important to preserve quantum correlations including entanglement. In this paper, we study cohering power–the ability of quantum operations to produce coherence. First, we provide an operational interpretation of cohering power. Then, we decompose a generic quantum operation into three basic operations, namely, unitary, appending and dismissal operations, and show that the cohering power of any quantum operation is upper bounded by the corresponding unitary operation. Furthermore, we compare cohering power and generalized cohering power of quantum operations for different measures of coherence.

  9. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of different scalability-type results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction by spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment that results in minimum visual distortion according to this objective function given the content type of temporal segments. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those that are scaled using a single scalability option over the whole sequence.
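
    The paper's trained weights and exact objective are not reproduced here; the sketch below only illustrates the selection step it describes: score each scalability option with a weighted sum of artifact terms and pick the minimum (all weights and distortion values are hypothetical):

```python
# Choose the scalability type that minimizes a weighted distortion objective
# for one temporal segment; weights would come from training per content type.

def distortion(measures, weights):
    """Weighted sum of flatness, blockiness, blurriness and jerkiness terms."""
    return sum(weights[k] * measures[k] for k in weights)

def best_scaling(options, weights):
    return min(options, key=lambda name: distortion(options[name], weights))

# hypothetical per-option artifact measurements for a far-shot soccer segment
options = {
    "spatial":  {"flatness": 0.2, "blockiness": 0.1, "blurriness": 0.7, "jerkiness": 0.1},
    "temporal": {"flatness": 0.1, "blockiness": 0.1, "blurriness": 0.1, "jerkiness": 0.8},
    "SNR":      {"flatness": 0.4, "blockiness": 0.5, "blurriness": 0.2, "jerkiness": 0.1},
}
weights = {"flatness": 1.0, "blockiness": 1.2, "blurriness": 0.8, "jerkiness": 1.5}
print(best_scaling(options, weights))
```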

  10. Towards scalable quantum communication and computation: Novel approaches and realizations

    Science.gov (United States)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  11. Partially coherent imaging and spatial coherence wavelets

    International Nuclear Information System (INIS)

    Castaneda, Roman

    2003-03-01

    A description of spatially partially coherent imaging based on the propagation of second-order spatial coherence wavelets and marginal power spectra (Wigner distribution functions) is presented. In this dynamics, the spatial coherence wavelets will be affected by the system through its elementary transfer function. The consistency of the model with both extreme cases of fully coherent and incoherent imaging was proved. In the latter case we obtained the classical concept of optical transfer function as a simple integral of the elementary transfer function. Furthermore, the elementary incoherent response function was introduced as the Fourier transform of the elementary transfer function. It describes the propagation of spatial coherence wavelets from each object point to each image point through a specific point on the pupil planes. The point spread function of the system was obtained by a simple integral of the elementary incoherent response function. (author)

  12. High-energy coherent terahertz radiation emitted by wide-angle electron beams from a laser-wakefield accelerator

    Science.gov (United States)

    Yang, Xue; Brunetti, Enrico; Jaroszynski, Dino A.

    2018-04-01

    High-charge electron beams produced by laser-wakefield accelerators are potentially novel, scalable sources of high-power terahertz radiation suitable for applications requiring high-intensity fields. When an intense laser pulse propagates in underdense plasma, it can generate femtosecond duration, self-injected picocoulomb electron bunches that accelerate on-axis to energies from 10s of MeV to several GeV, depending on laser intensity and plasma density. The process leading to the formation of the accelerating structure also generates non-injected, sub-picosecond duration, 1–2 MeV nanocoulomb electron beams emitted obliquely into a hollow cone around the laser propagation axis. These wide-angle beams are stable and depend weakly on laser and plasma parameters. Here we perform simulations to characterise the coherent transition radiation emitted by these beams if passed through a thin metal foil, or directly at the plasma–vacuum interface, showing that coherent terahertz radiation with 10s μJ to mJ-level energy can be produced with an optical to terahertz conversion efficiency up to 10‑4–10‑3.

  13. Elastic energies of coherent germanium islands on silicon

    International Nuclear Information System (INIS)

    Vanderbilt, D.; Wickham, L.K.

    1991-01-01

    Motivated by recent observations of coherent Ge island formation during growth of Ge on Si (100), the authors of this paper have carried out a theoretical study of the elastic energies associated with the evolution of a uniform strained overlayer as it segregates into coherent islands. In the context of a two-dimensional model, the authors have explored the conditions under which coherent islands may be energetically favored over both uniform epitaxial films and dislocated islands. The authors find that if the interface energy (for dislocated islands) is more than about 15% of the surface energy, then there is a range of island sizes for which the coherent island structure is preferred

  14. On Longitudinal Spectral Coherence

    DEFF Research Database (Denmark)

    Kristensen, Leif

    1979-01-01

    It is demonstrated that the longitudinal spectral coherence differs significantly from the transversal spectral coherence in its dependence on displacement and frequency. An expression for the longitudinal coherence is derived and it is shown how the scale of turbulence, the displacement between ...... observation sites and the turbulence intensity influence the results. The limitations of the theory are discussed....

  15. Using scalable vector graphics to evolve art

    NARCIS (Netherlands)

    den Heijer, E.; Eiben, A. E.

    2016-01-01

    In this paper, we describe our investigations of the use of scalable vector graphics as a genotype representation in evolutionary art. We describe the technical aspects of using SVG in evolutionary art, and explain our custom, SVG-specific operators for initialisation, mutation and crossover. We perform

  16. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi; Gumerov, Nail A.; Yokota, Rio; Barba, Lorena A.; Duraiswami, Ramani

    2014-01-01

    -node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analysis are also performed for the cutoff

  17. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  18. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    . This paper presents an open source smart grid simulator (SGSim). The simulator is based on open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  19. Cooperative Scalable Moving Continuous Query Processing

    DEFF Research Database (Denmark)

    Li, Xiaohui; Karras, Panagiotis; Jensen, Christian S.

    2012-01-01

    of the global view and handle the majority of the workload. Meanwhile, moving clients, having basic memory and computation resources, handle small portions of the workload. This model is further enhanced by dynamic region allocation and grid size adjustment mechanisms that reduce the communication...... and computation cost for both servers and clients. An experimental study demonstrates that our approaches offer better scalability than competitors...

  20. Scalable optical switches for computing applications

    NARCIS (Netherlands)

    White, I.H.; Aw, E.T.; Williams, K.A.; Wang, Haibo; Wonfor, A.; Penty, R.V.

    2009-01-01

    A scalable photonic interconnection network architecture is proposed whereby a Clos network is populated with broadcast-and-select stages. This enables the efficient exploitation of an emerging class of photonic integrated switch fabric. A low distortion space switch technology based on recently

  1. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    Science.gov (United States)

    West, Jeff; Yang, H. Q.

    2014-01-01

    There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch-induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 cpus and no additional speedup was possible after 32 cpus. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid implementation within the unstructured mesh, pressure-based algorithm CFD program, Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. On these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of cpu cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.

  2. Scalability Dilemma and Statistic Multiplexed Computing — A Theory and Experiment

    Directory of Open Access Journals (Sweden)

    Justin Yuan Shi

    2017-08-01

    Full Text Available For the last three decades, end-to-end computing paradigms, such as MPI (Message Passing Interface), RPC (Remote Procedure Call) and RMI (Remote Method Invocation), have been the de facto paradigms for distributed and parallel programming. Despite these successes, applications built using these paradigms suffer because the probability of a crash grows in proportion to the application's size. Checkpoint/restore and backup/recovery are the only means to save otherwise lost critical information. The scalability dilemma is such a practical challenge that the probability of data losses increases as the application scales in size. The theoretical significance of this practical challenge is that it undermines the fundamental structure of the scientific discovery process and mission critical services in production today. In 1997, the direct use of the end-to-end reference model in distributed programming was recognized as a fallacy. The scalability dilemma was predicted. However, this voice was overrun by the passage of time. Today, the rapidly growing digitized data demands solving the increasingly critical scalability challenges. Computing architecture scalability, although loosely defined, is now the front and center of large-scale computing efforts. Constrained only by the economic law of diminishing returns, this paper proposes a narrow definition of a Scalable Computing Service (SCS). Three scalability tests are also proposed in order to distinguish service architecture flaws from poor application programming. Scalable data intensive service requires additional treatments. Thus, the data storage is assumed reliable in this paper. A single-sided Statistic Multiplexed Computing (SMC) paradigm is proposed. A UVR (Unidirectional Virtual Ring) SMC architecture is examined under SCS tests. SMC was designed to circumvent the well-known impossibility of end-to-end paradigms. It relies on the proven statistic multiplexing principle to deliver reliable service

  3. Quantum dot-micropillars: a bright source of coherent single photons

    DEFF Research Database (Denmark)

    Unsleber, Sebastian; He, Yu-Ming; Maier, Sebastian

    2016-01-01

    We present the efficient generation of coherent single photons based on quantum dots in micropillars. We utilize a scalable lithography scheme leading to quantum dot-micropillar devices with 74% extraction efficiency. Via pulsed strict resonant pumping, we show an indistinguishability of consecut...

  4. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or

  5. Elastic pointer directory organization for scalable shared memory multiprocessors

    Institute of Scientific and Technical Information of China (English)

    Yuhang Liu; Mingfa Zhu; Limin Xiao

    2014-01-01

    In the field of supercomputing, one key issue for scalable shared-memory multiprocessors is the design of the directory which denotes the sharing state for a cache block. A good directory design intends to achieve three key attributes: reasonable memory overhead, sharer position precision and implementation complexity. However, researchers often face the problem that gaining one attribute may result in losing another. The paper proposes an elastic pointer directory (EPD) structure based on the analysis of shared-memory applications, taking the fact that the number of sharers for each directory entry is typically small. Analysis results show that for 4 096 nodes, the ratio of memory overhead to the full-map directory is 2.7%. Theoretical analysis and cycle-accurate execution-driven simulations on a 16 and 64-node cache coherent non-uniform memory access (CC-NUMA) multiprocessor show that the corresponding pointer overflow probability is reduced significantly. The performance is observed to be better than that of a limited pointers directory and almost identical to the full-map directory, except for the slight implementation complexity. Using the directory cache to explore directory access locality is also studied. The experimental result shows that this is a promising approach to be used in the state-of-the-art high performance computing domain.
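
    As a toy, hedged illustration of the pointer-overflow issue that EPD targets (this is not the EPD mechanism itself; entry sizes and node counts below are hypothetical), a limited-pointer directory entry that falls back to a full bit map on overflow can be sketched as:

```python
class DirectoryEntry:
    """Toy limited-pointer directory entry with fall-back to a full bit map."""

    def __init__(self, num_nodes, max_pointers=4):
        self.num_nodes = num_nodes
        self.max_pointers = max_pointers
        self.pointers = set()      # node ids while there are few sharers
        self.bitmap = None         # full map after pointer overflow

    def add_sharer(self, node_id):
        if self.bitmap is not None:
            self.bitmap[node_id] = True
        elif len(self.pointers) < self.max_pointers or node_id in self.pointers:
            self.pointers.add(node_id)
        else:
            # pointer overflow: switch to the memory-hungry full-map representation
            self.bitmap = [False] * self.num_nodes
            for n in self.pointers:
                self.bitmap[n] = True
            self.bitmap[node_id] = True

    def sharers(self):
        if self.bitmap is not None:
            return [n for n, bit in enumerate(self.bitmap) if bit]
        return sorted(self.pointers)

entry = DirectoryEntry(num_nodes=64)
for node in (3, 7, 9, 12, 40):     # the fifth sharer triggers overflow
    entry.add_sharer(node)
print(entry.sharers())
```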

  6. A lightweight scalable agarose-gel-synthesized thermoelectric composite

    Science.gov (United States)

    Kim, Jin Ho; Fernandes, Gustavo E.; Lee, Do-Joong; Hirst, Elizabeth S.; Osgood, Richard M., III; Xu, Jimmy

    2018-03-01

    Electronic devices are now advancing beyond classical, rigid systems and moving into lightweight, flexible regimes, enabling new applications such as body-wearables and 'e-textiles'. To support this new electronic platform, composite materials that are highly conductive yet scalable, flexible, and wearable are needed. Materials with high electrical conductivity often have poor thermoelectric properties because their thermal transport is made greater by the same factors as their electronic conductivity. We demonstrate, in proof-of-principle experiments, that a novel binary composite can disrupt thermal (phononic) transport, while maintaining high electrical conductivity, thus yielding promising thermoelectric properties. Highly conductive Multi-Wall Carbon Nanotube (MWCNT) composites are combined with a low-band gap semiconductor, PbS. The work functions of the two materials are closely matched, minimizing the electrical contact resistance within the composite. Disparities in the speed of sound in MWCNTs and PbS help to inhibit phonon propagation, and boundary layer scattering at interfaces between these two materials leads to a large Seebeck coefficient (> 150 μV/K) (Mott N F and Davis E A 1971 Electronic Processes in Non-crystalline Materials (Oxford: Clarendon), p 47) and a power factor as high as 10 μW/(K² m). The overall fabrication process is not only scalable but also conformal and compatible with large-area flexible hosts including metal sheets, films, coatings, possibly arrays of fibers, textiles and fabrics. We explain the behavior of this novel thermoelectric material platform in terms of differing length scales for electrical conductivity and phononic heat transfer, and explore new material configurations for potentially lightweight and flexible thermoelectric devices that could be networked in a textile.

  7. Scalable Multi-Platform Distribution of Spatial 3d Contents

    Science.gov (United States)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high quality visualization of massive 3D geoinformation in a scalable, fast, and cost efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from server to client, what makes them strongly limited in terms of size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive, virtual 3D city models on different platforms namely web browsers, smartphones or tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) The complexity of the 3D city model data is decoupled from data transfer complexity (b) the implementation of client applications is simplified significantly as 3D rendering is encapsulated on server side (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  8. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two....... The model describes both functional and timing properties of an interface...

  9. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    Full Text Available We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.

  10. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.
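
    The paper's tuple-reordering and bitmap details are not reproduced here; the sketch below only shows the underlying tuple-space idea of keeping one hash table per prefix-length pair and probing each table in turn (the rules, field widths, and priorities are hypothetical and simplified):

```python
# Tuple-space search sketch: filters are grouped by (src_len, dst_len) prefix
# lengths, and each group is stored in its own hash table keyed by the masked
# address pair.

def prefix(addr, bits, width=8):
    """Keep the top `bits` of a `width`-bit address."""
    return addr >> (width - bits) if bits else 0

def build_tables(filters):
    tables = {}
    for src, src_len, dst, dst_len, action in filters:
        key = (prefix(src, src_len), prefix(dst, dst_len))
        tables.setdefault((src_len, dst_len), {})[key] = action
    return tables

def classify(tables, src, dst):
    for (src_len, dst_len), table in tables.items():   # one hash probe per tuple
        action = table.get((prefix(src, src_len), prefix(dst, dst_len)))
        if action is not None:
            return action                               # real classifiers use priorities
    return "default"

# hypothetical 8-bit address filters: (src, src_len, dst, dst_len, action)
filters = [(0b10100000, 3, 0b11000000, 2, "drop"),
           (0b01100000, 4, 0b00000000, 0, "accept")]
tables = build_tables(filters)
print(classify(tables, src=0b10110101, dst=0b11011111))   # matches the first filter
```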

  11. Scalable fabrication of perovskite solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe; Yang, Mengjin; Berry, Joseph J.; van Hest, Maikel F. A. M.; Zhu, Kai

    2018-03-27

    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  12. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

    Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. Production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, spacefilling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.

  13. Compiler-Enforced Cache Coherence Using a Functional Language

    Directory of Open Access Journals (Sweden)

    Rich Wolski

    1996-01-01

    Full Text Available The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention; increase execution performance; and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.

  14. Scalable manufacturing processes with soft materials

    OpenAIRE

    White, Edward; Case, Jennifer; Kramer, Rebecca

    2014-01-01

    The emerging field of soft robotics will benefit greatly from new scalable manufacturing techniques for responsive materials. Currently, most soft robotic examples are fabricated one-at-a-time, using techniques borrowed from lithography and 3D printing to fabricate molds. This limits both the maximum and minimum size of robots that can be fabricated, and hinders batch production, which is critical to gain wider acceptance for soft robotic systems. We have identified electrical structures, ...

  15. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    [Abstract field contains report documentation page fields and table fragments.] Author: Nurgaliev... The surviving excerpt compares database features such as client languages (Scala, Erlang, Javascript), cursor-based queries (supported / not supported), JOIN queries (supported / not supported) and complex data types (lists, maps, sets), and notes that support is therefore needed, using technology such as machine learning to extract content from product documentation. The terminology used in the database

  16. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  17. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

    Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; van Renesse, Robbert

    2015-01-01

    Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  18. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .
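
    Scuba's actual multiple kernel learning formulation is not reproduced here; the sketch below only illustrates the generic pattern of combining several kernel matrices with weights and ranking candidates by aggregate similarity to known disease genes (the data, weights, and scoring rule are hypothetical simplifications):

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Weighted sum of kernel matrices, one per data source."""
    return sum(w * k for w, k in zip(weights, kernels))

def prioritize(kernel, seed_idx, candidate_idx):
    """Rank candidates by mean kernel similarity to the known disease genes."""
    scores = kernel[np.ix_(candidate_idx, seed_idx)].mean(axis=1)
    order = np.argsort(-scores)
    return [(candidate_idx[i], float(scores[i])) for i in order]

rng = np.random.default_rng(0)
n_genes = 6
# two hypothetical data sources, each contributing a positive semi-definite kernel
kernels = []
for _ in range(2):
    x = rng.normal(size=(n_genes, 4))
    kernels.append(x @ x.T)

combined = combine_kernels(kernels, weights=[0.7, 0.3])
print(prioritize(combined, seed_idx=[0, 1], candidate_idx=[2, 3, 4, 5]))
```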

  19. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
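
    The parallel variants themselves are not sketched here; for reference, a minimal sequential RRT in a 2-D unit square (with an arbitrary step size and goal tolerance, and no obstacles) looks roughly like the following, which also makes clear why nearest-neighbour search is the operation the paper parallelizes:

```python
import math
import random

def rrt(start, goal, n_iters=2000, step=0.05, goal_tol=0.05):
    """Minimal RRT in the unit square with no obstacles (illustration only)."""
    tree = {start: None}                      # node -> parent
    for _ in range(n_iters):
        sample = (random.random(), random.random())
        nearest = min(tree, key=lambda n: math.dist(n, sample))   # the costly global step
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        tree[new] = nearest
        if math.dist(new, goal) < goal_tol:
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return list(reversed(path))
    return None

print(rrt(start=(0.1, 0.1), goal=(0.9, 0.9)))
```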

  20. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  1. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade; Stradford, Nicholas; Rodriguez, Cesar; Thomas, Shawna; Amato, Nancy M.

    2013-01-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.

  2. Scalable robotic biofabrication of tissue spheroids

    International Nuclear Information System (INIS)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V; Brown, J; Beaver, W; Da Silva, J V L

    2011-01-01

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  3. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  4. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. This indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. A scalable streaming service also makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if all packets of a layer are transmitted successfully, the layer cannot be decoded if its reference frames and layers are absent. Therefore, a complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. To provide a high-quality scalable streaming service, we must choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. We also provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
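
    The notion of indirect loss can be made concrete with a small sketch (the dependency structure below is hypothetical, not the paper's numerical model): given which layers reference which, a single lost layer invalidates every layer that transitively depends on it.

        def indirect_loss(deps, lost):
            """deps maps layer -> set of layers it references; `lost` holds layers whose
            packets were dropped. Returns the full set of undecodable layers."""
            undecodable = set(lost)
            changed = True
            while changed:
                changed = False
                for layer, refs in deps.items():
                    if layer not in undecodable and refs & undecodable:
                        undecodable.add(layer)
                        changed = True
            return undecodable

        # Example: enhancement layers E1 and E2 depend (directly or indirectly) on base layer B.
        deps = {"B": set(), "E1": {"B"}, "E2": {"E1"}}
        print(indirect_loss(deps, {"B"}))   # losing B also invalidates E1 and E2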

  5. Electromagnetic spatial coherence wavelets

    International Nuclear Information System (INIS)

    Castaneda, R.; Garcia-Sucerquia, J.

    2005-10-01

    The recently introduced concept of spatial coherence wavelets is generalized to describe the propagation of electromagnetic fields in free space. To this end, the spatial coherence wavelet tensor is introduced as an elementary quantity, in terms of which the previously known quantities for this domain can be expressed. It allows the relationship between the spatial coherence properties and the polarization state of the electromagnetic wave to be analyzed. This approach is completely consistent with the recently introduced unified theory of coherence and polarization for random electromagnetic beams, but it provides further insight into the causal relationship between the polarization states at different planes along the propagation path. (author)

  6. Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One Sided

    Directory of Open Access Journals (Sweden)

    Robert Gerstenberger

    2014-01-01

    Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communication, despite its unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicability have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing the highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than, UPC and Fortran Coarrays in terms of latency, bandwidth, and message rate. We also demonstrate application performance improvements with comparable programming complexity.
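
    As a flavour of the MPI-3.0 one-sided interface referred to above (shown through mpi4py as a generic usage example, not the authors' implementation), one process writes directly into another's exposed memory with no matching receive:

        # Run with: mpiexec -n 2 python rma_put.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        if comm.Get_size() < 2:
            raise SystemExit("needs at least two processes")

        local = np.zeros(1, dtype='d')            # memory exposed through an RMA window
        win = MPI.Win.Create(local, comm=comm)

        win.Fence()                               # open an access epoch
        if comm.Get_rank() == 0:
            value = np.array([42.0])
            win.Put([value, MPI.DOUBLE], target_rank=1)   # one-sided write into rank 1
        win.Fence()                               # close the epoch; the put is now visible

        if comm.Get_rank() == 1:
            print("rank 1 received", local[0])
        win.Free()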

  7. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them

  8. Text Coherence in Translation

    Science.gov (United States)

    Zheng, Yanping

    2009-01-01

    In the thesis a coherent text is defined as a continuity of senses of the outcome of combining concepts and relations into a network composed of knowledge space centered around main topics. And the author maintains that in order to obtain the coherence of a target language text from a source text during the process of translation, a translator can…

  9. Coherent Multistatic ISAR Imaging

    NARCIS (Netherlands)

    Dorp, Ph. van; Otten, M.P.G.; Verzeilberg, J.M.M.

    2012-01-01

    This paper presents methods for Coherent Multistatic Radar Imaging for Non Cooperative Target Recognition (NCTR) with a network of radar sensors. Coherent Multistatic Radar Imaging is based on an extension of existing monostatic ISAR algorithms to the multistatic environment. The paper describes the

  10. VCSEL Based Coherent PONs

    DEFF Research Database (Denmark)

    Jensen, Jesper Bevensee; Rodes, Roberto; Caballero Jambrina, Antonio

    2014-01-01

    We present a review of research performed in the area of coherent access technologies employing vertical cavity surface emitting lasers (VCSELs). Experimental demonstrations of optical transmission over a passive fiber link with coherent detection using VCSEL local oscillators and directly modula...

  11. Measuring coherence with entanglement concurrence

    Science.gov (United States)

    Qi, Xianfei; Gao, Ting; Yan, Fengli

    2017-07-01

    Quantum coherence is a fundamental manifestation of the quantum superposition principle. Recently, Baumgratz et al (2014 Phys. Rev. Lett. 113 140401) presented a rigorous framework for quantifying coherence from the viewpoint of the theory of physical resources. Here we propose a new valid quantum coherence measure, which is a convex roof measure, for a quantum system of arbitrary dimension, essentially using the generalized Gell-Mann matrices. Rigorous proof shows that the proposed coherence measure, coherence concurrence, fulfills all the requirements dictated by the resource theory of quantum coherence measures. Moreover, strong links between the resource frameworks of coherence concurrence and entanglement concurrence are derived, which shows that any degree of coherence with respect to some reference basis can be converted to entanglement via incoherent operations. Our work provides a clear quantitative and operational connection between coherence and entanglement based on two kinds of concurrence. This new coherence measure, coherence concurrence, may also be beneficial to the study of quantum coherence.

  12. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  13. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  14. Tip-Based Nanofabrication for Scalable Manufacturing

    International Nuclear Information System (INIS)

    Hu, Huan; Somnath, Suhas

    2017-01-01

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. Here in this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  15. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build-up on artificial cornea devices can lead to serious complications, including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures, including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin, are promising anti-biofilm candidates; however, none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructure polymer surfaces; 2) assessed the potential for these poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls; 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device; and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on the surfaces of thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation of certain pathogenic bacteria.

  16. Scalable Optical-Fiber Communication Networks

    Science.gov (United States)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  17. Scalable Tensor Factorizations with Missing Data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.

    2010-01-01

    of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP...... is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram...
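
    A toy NumPy sketch of the weighted-factorization idea (a rank-R CP model fitted only to observed entries by plain gradient steps; this is a simplification for illustration, not the CP-WOPT code itself):

        import numpy as np

        def cp_wopt_step(X, W, A, B, C, lr=0.01):
            """One gradient step of weighted CP: fit only entries where W == 1."""
            Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)   # rank-R reconstruction
            resid = W * (X - Xhat)                       # residual on observed entries only
            gA = np.einsum('ijk,jr,kr->ir', resid, B, C)
            gB = np.einsum('ijk,ir,kr->jr', resid, A, C)
            gC = np.einsum('ijk,ir,jr->kr', resid, A, B)
            return A + lr * gA, B + lr * gB, C + lr * gC

        rng = np.random.default_rng(0)
        I, J, K, rank = 10, 8, 6, 3
        A_true = rng.standard_normal((I, rank))
        B_true = rng.standard_normal((J, rank))
        C_true = rng.standard_normal((K, rank))
        X = np.einsum('ir,jr,kr->ijk', A_true, B_true, C_true)
        W = (rng.random(X.shape) > 0.7).astype(float)    # roughly 70% of entries missing
        A = 0.1 * rng.standard_normal((I, rank))
        B = 0.1 * rng.standard_normal((J, rank))
        C = 0.1 * rng.standard_normal((K, rank))
        for _ in range(2000):
            A, B, C = cp_wopt_step(X, W, A, B, C)
        print("observed-entry RMSE:",
              np.sqrt((W * (X - np.einsum('ir,jr,kr->ijk', A, B, C)) ** 2).sum() / W.sum()))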

  18. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor’s substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  19. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  20. Organic interfaces

    NARCIS (Netherlands)

    Poelman, W.A.; Tempelman, E.

    2014-01-01

    This paper deals with the consequences for product designers resulting from the replacement of traditional interfaces by responsive materials. Part 1 presents a theoretical framework regarding a new paradigm for man-machine interfacing. Part 2 provides an analysis of the opportunities offered by new

  1. Interface Realisms

    DEFF Research Database (Denmark)

    Pold, Søren

    2005-01-01

    This article argues for seeing the interface as an important representational and aesthetic form with implications for postmodern culture and digital aesthetics. The interface emphasizes realism due in part to the desire for transparency in Human-Computer Interaction (HCI) and partly...

  2. Coherent structures in compressible free-shear-layer flows

    Energy Technology Data Exchange (ETDEWEB)

    Aeschliman, D.P.; Baty, R.S. [Sandia National Labs., Albuquerque, NM (United States). Engineering Sciences Center; Kennedy, C.A.; Chen, J.H. [Sandia National Labs., Livermore, CA (United States). Combustion and Physical Sciences Center

    1997-08-01

    Large scale coherent structures are intrinsic fluid mechanical characteristics of all free-shear flows, from incompressible to compressible, and laminar to fully turbulent. These quasi-periodic fluid structures, eddies of size comparable to the thickness of the shear layer, dominate the mixing process at the free-shear interface. As a result, large scale coherent structures greatly influence the operation and efficiency of many important commercial and defense technologies. Large scale coherent structures have been studied here in a research program that combines a synergistic blend of experiment, direct numerical simulation, and analysis. This report summarizes the work completed for this Sandia Laboratory-Directed Research and Development (LDRD) project.

  3. SVOPME: A Scalable Virtual Organization Privileges Management Environment

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele; Sfiligoi, Igor; Levshina, Tanya; Wang, Nanbor; Ananthan, Balamurali

    2010-01-01

    Grids enable uniform access to resources by implementing standard interfaces to resource gateways. In the Open Science Grid (OSG), privileges are granted on the basis of the user's membership to a Virtual Organization (VO). However, Grid sites are solely responsible to determine and control access privileges to resources using users' identity and personal attributes, which are available through Grid credentials. While this guarantees full control on access rights to the sites, it makes VO privileges heterogeneous throughout the Grid and hardly fits with the Grid paradigm of uniform access to resources. To address these challenges, we are developing the Scalable Virtual Organization Privileges Management Environment (SVOPME), which provides tools for VOs to define and publish desired privileges and assists sites to provide the appropriate access policies. Moreover, SVOPME provides tools for Grid sites to analyze site access policies for various resources, verify compliance with preferred VO policies, and generate directives for site administrators on how the local access policies can be amended to achieve such compliance without taking control of local configurations away from site administrators. This paper discusses what access policies are of interest to the OSG community and how SVOPME implements privilege management for OSG.

  4. An open, interoperable, and scalable prehospital information technology network architecture.

    Science.gov (United States)

    Landman, Adam B; Rokos, Ivan C; Burns, Kevin; Van Gelder, Carin M; Fisher, Roger M; Dunford, James V; Cone, David C; Bogucki, Sandy

    2011-01-01

    Some of the most intractable challenges in prehospital medicine include response time optimization, inefficiencies at the emergency medical services (EMS)-emergency department (ED) interface, and the ability to correlate field interventions with patient outcomes. Information technology (IT) can address these and other concerns by ensuring that system and patient information is received when and where it is needed, is fully integrated with prior and subsequent patient information, and is securely archived. Some EMS agencies have begun adopting information technologies, such as wireless transmission of 12-lead electrocardiograms, but few agencies have developed a comprehensive plan for management of their prehospital information and integration with other electronic medical records. This perspective article highlights the challenges and limitations of integrating IT elements without a strategic plan, and proposes an open, interoperable, and scalable prehospital information technology (PHIT) architecture. The two core components of this PHIT architecture are 1) routers with broadband network connectivity to share data between ambulance devices and EMS system information services and 2) an electronic patient care report to organize and archive all electronic prehospital data. To successfully implement this comprehensive PHIT architecture, data and technology requirements must be based on best available evidence, and the system must adhere to health data standards as well as privacy and security regulations. Recent federal legislation prioritizing health information technology may position federal agencies to help design and fund PHIT architectures.

  5. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Recently, positioning services have been getting more attention not only within the research community but also from service providers. From the service providers' point of view, a positioning service that works seamlessly in all environments, for example indoor, dense urban, and rural, has huge potential to open new markets. However, such a system must not only provide accurate position estimates but also be scalable and resistant to fake positioning requests. In previous work we proposed a modular system that provides seamless positioning in various environments. The system automatically selects the optimal positioning module based on available radio signals. The system currently consists of three positioning modules—GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation and thus allows higher scalability of the modular system, making it possible to provide positioning services to a larger number of users. Such an improvement is extremely important for real-world applications where large numbers of users request position estimates, since positioning error is affected by the response time of the positioning server.

  6. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit; Bajic, Vladimir B.; Kaushik, Dinesh

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours, while the original serial code would have needed decades of execution time. Copyright 2011 ACM.

  7. Algorithmic psychometrics and the scalable subject.

    Science.gov (United States)

    Stark, Luke

    2018-04-01

    Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.

  8. Scalable Simulation of Electromagnetic Hybrid Codes

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.; Fujimoto, Richard; Karimabadi, Dr. Homa

    2006-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that are at widely differing timescales. In contrast to time-stepped simulation which requires tightly coupled updates to entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms

  9. Towards Scalable Graph Computation on Mobile Devices.

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
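
    The memory-mapping idea can be illustrated with a short NumPy sketch (the file name and binary layout are hypothetical): the edge list lives in a file, the operating system pages it in on demand, and the computation streams over it without ever loading the whole graph into RAM.

        import numpy as np

        # Create a tiny example file; in practice this would be a multi-gigabyte edge list.
        np.array([[0, 1], [1, 2], [2, 0], [0, 2]], dtype=np.int32).tofile("edges.bin")

        edges = np.memmap("edges.bin", dtype=np.int32, mode="r").reshape(-1, 2)
        num_nodes = int(edges.max()) + 1
        degree = np.zeros(num_nodes, dtype=np.int64)

        # Stream over the memory-mapped file in chunks; the OS pages data in and out,
        # so the working set stays small even for graphs with hundreds of millions of edges.
        chunk = 1_000_000
        for start in range(0, len(edges), chunk):
            block = edges[start:start + chunk]
            degree += np.bincount(block[:, 0], minlength=num_nodes)
        print("out-degrees:", degree)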

  10. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels - the Biot-Savart equation and the stretching term of the vorticity equation - we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, which automatically ensures a divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.

  11. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150MB or about 5000x8000 pixels, with the total number around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th century (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
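
    A minimal NumPy sketch of the pyramid-building step being benchmarked (plain 2x2 block averaging; the tiling and file format used by the Microsoft Seadragon library are not reproduced here):

        import numpy as np

        def build_pyramid(img):
            """Repeatedly halve an image by averaging 2x2 blocks until one pixel remains."""
            levels = [img]
            while min(img.shape[:2]) > 1:
                h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
                img = img[:h, :w]
                img = (img[0::2, 0::2] + img[1::2, 0::2] +
                       img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
                levels.append(img)
            return levels

        scan = np.random.rand(5000, 8000)           # stand-in for a 5000x8000 pixel scan
        pyramid = build_pyramid(scan)
        print([level.shape for level in pyramid])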

  12. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564

  13. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, thus helping to build IC4R into a comprehensive knowledgebase covering all aspects of rice data and benefiting both basic and translational research.

  14. An ODMG-compatible testbed architecture for scalable management and analysis of physics data

    International Nuclear Information System (INIS)

    Malon, D.M.; May, E.N.

    1997-01-01

    This paper describes a testbed architecture for the investigation and development of scalable approaches to the management and analysis of massive amounts of high energy physics data. The architecture has two components: an interface layer that is compliant with a substantial subset of the ODMG-93 Version 1.2 specification, and a lightweight object persistence manager that provides flexible storage and retrieval services on a variety of single- and multi-level storage architectures, and on a range of parallel and distributed computing platforms

  15. Microprocessor interfacing

    CERN Document Server

    Vears, R E

    2014-01-01

    Microprocessor Interfacing provides the coverage of the Business and Technician Education Council level NIII unit in Microprocessor Interfacing (syllabus U86/335). Composed of seven chapters, the book explains the foundation in microprocessor interfacing techniques in hardware and software that can be used for problem identification and solving. The book focuses on the 6502, Z80, and 6800/02 microprocessor families. The technique starts with signal conditioning, filtering, and cleaning before the signal can be processed. The signal conversion, from analog to digital or vice versa, is expl

  16. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustration, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
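
    In generic form (not necessarily the authors' exact operator), a two-level additive preconditioner for the interface (Schur complement) problem combines independent subdomain solves with a small coarse solve that couples all subdomains. In LaTeX notation, with R_i the restriction to the interface unknowns of subdomain i, S_i the local Schur complement, and R_0, S_0 a coarse problem assembled from the corner nodes (all symbols here are generic placeholders):

        M^{-1} \,=\, R_0^{T} S_0^{-1} R_0 \;+\; \sum_{i=1}^{N_s} R_i^{T} S_i^{-1} R_i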

  17. Intracoronary optical coherence tomography

    DEFF Research Database (Denmark)

    Tenekecioglu, Erhan; Albuquerque, Felipe N; Sotomi, Yohei

    2017-01-01

    By providing valuable information about the coronary artery wall and lumen, intravascular imaging may aid in optimizing interventional procedure results and thereby could improve clinical outcomes following percutaneous coronary intervention (PCI). Intravascular optical coherence tomography (OCT...

  18. Coherence in Industrial Transformation

    DEFF Research Database (Denmark)

    Jørgensen, Ulrik; Lauridsen, Erik Hagelskjær

    2003-01-01

    The notion of coherence is used to illustrate the general finding, that the impact of environmental management systems and environmental policy is highly dependent of the context and interrelatedness of the systems, procedures and regimes established in society....

  19. Optical Coherence Tomography

    DEFF Research Database (Denmark)

    Fercher, A.F.; Andersen, Peter E.

    2017-01-01

    Optical coherence tomography (OCT) is a technique that is used to peer inside a body noninvasively. Tissue structure defined by tissue absorption and scattering coefficients, and the speed of blood flow, are derived from the characteristics of light remitted by the body. Singly backscattered light...... detected by partial coherence interferometry (PCI) is used to synthesize the tomographic image coded in false colors. A prerequisite of this technique is a low time-coherent but high space-coherent light source, for example, a superluminescent diode or a supercontinuum source. Alternatively, the imaging...... technique can be realized by using ultrafast wavelength scanning light sources. For tissue imaging, the light source wavelengths are restricted to the red and near-infrared (NIR) region from about 600 to 1300 nm, the so-called therapeutic window, where absorption (μa ≈ 0.01 mm−1) is small enough. Transverse...

  20. Coherent imaging at FLASH

    International Nuclear Information System (INIS)

    Chapman, H N; Bajt, S; Duesterer, S; Treusch, R; Barty, A; Benner, W H; Bogan, M J; Frank, M; Hau-Riege, S P; Woods, B W; Boutet, S; Cavalleri, A; Hajdu, J; Iwan, B; Seibert, M M; Timneanu, N; Marchesini, S; Sakdinawat, A; Sokolowski-Tinten, K

    2009-01-01

    We have carried out high-resolution single-pulse coherent diffractive imaging at the FLASH free-electron laser. The intense focused FEL pulse gives a high-resolution low-noise coherent diffraction pattern of an object before that object turns into a plasma and explodes. In particular we are developing imaging of biological specimens beyond conventional radiation damage resolution limits, developing imaging of ultrafast processes, and testing methods to characterize and perform single-particle imaging.

  1. Towards deterministic optical quantum computation with coherently driven atomic ensembles

    International Nuclear Information System (INIS)

    Petrosyan, David

    2005-01-01

    Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons

  2. Interface Anywhere

    Data.gov (United States)

    National Aeronautics and Space Administration — Current paradigms for crew interfaces to the systems that require control are constrained by decades old technologies which require the crew to be physically near an...

  3. Engineering scalable fault-tolerant quantum computation

    Science.gov (United States)

    Kimchi-Schwartz, Mollie; Danna, Rosenberg; Kim, David; Yoder, Jonilyn; Kjaergaard, Morten; Das, Rabindra; Grover, Jeff; Gustavsson, Simon; Oliver, William

    Recent demonstrations of quantum protocols comprising on the order of 5-10 superconducting qubits are foundational to the future development of quantum information processors. A next critical step in the development of resilient quantum processors will be the integration of coherent quantum circuits with a hardware platform that is amenable to extending the system size to hundreds of qubits and beyond. In this talk, we will discuss progress toward integrating coherent superconducting qubits with signal routing via the third dimension. This research was funded in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) and by the Assistant Secretary of Defense for Research & Engineering under Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the US Government.

  4. A generic interface to reduce the efficiency-stability-cost gap of perovskite solar cells

    Science.gov (United States)

    Hou, Yi; Du, Xiaoyan; Scheiner, Simon; McMeekin, David P.; Wang, Zhiping; Li, Ning; Killian, Manuela S.; Chen, Haiwei; Richter, Moses; Levchuk, Ievgen; Schrenker, Nadine; Spiecker, Erdmann; Stubhan, Tobias; Luechinger, Norman A.; Hirsch, Andreas; Schmuki, Patrik; Steinrück, Hans-Peter; Fink, Rainer H.; Halik, Marcus; Snaith, Henry J.; Brabec, Christoph J.

    2017-12-01

    A major bottleneck delaying the further commercialization of thin-film solar cells based on hybrid organohalide lead perovskites is interface loss in state-of-the-art devices. We present a generic interface architecture that combines solution-processed, reliable, and cost-efficient hole-transporting materials without compromising efficiency, stability, or scalability of perovskite solar cells. Tantalum-doped tungsten oxide (Ta-WOx)/conjugated polymer multilayers offer a surprisingly small interface barrier and form quasi-ohmic contacts universally with various scalable conjugated polymers. In a simple device with regular planar architecture and a self-assembled monolayer, Ta-WOx-doped interface-based perovskite solar cells achieve maximum efficiencies of 21.2% and offer more than 1000 hours of light stability. By eliminating additional ionic dopants, these findings open up the entire class of organics as scalable hole-transporting materials for perovskite solar cells.

  5. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same ... parallelizing compiler, and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.
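
    To make the notion of a conditional induction variable concrete, here is a small Python analogue of the filter/stack pattern mentioned above (the subscript `top` advances only on some iterations, so it cannot be written as a closed-form, affine function of the loop index):

        def compact_positive(values):
            out = [0] * len(values)
            top = 0                        # conditional induction variable (CIV)
            for v in values:
                if v > 0:                  # `top` is incremented only when the test holds,
                    out[top] = v           # so the subscript out[top] defies affine analysis
                    top += 1
            return out[:top]

        print(compact_positive([3, -1, 4, -1, 5]))   # [3, 4, 5]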

  6. Scalable Faceted Ranking in Tagging Systems

    Science.gov (United States)

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Nowadays, web collaborative tagging systems which allow users to upload, comment on and recommend contents, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged-links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but it is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
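
    A toy sketch of the two-step scheme (data layout and names are hypothetical): per-tag user scores are assumed to have been computed offline, and the online step simply merges the scores of the tags in the requested facet.

        def faceted_ranking(per_tag_scores, facet):
            """per_tag_scores: {tag: {user: score}} computed offline.
            Returns users ranked for the facet by merging the per-tag scores online."""
            merged = {}
            for tag in facet:
                for user, score in per_tag_scores.get(tag, {}).items():
                    merged[user] = merged.get(user, 0.0) + score   # simple additive merge
            return sorted(merged, key=merged.get, reverse=True)

        scores = {
            "guitar": {"alice": 0.6, "bob": 0.3},
            "jazz":   {"bob": 0.5, "carol": 0.4},
        }
        print(faceted_ranking(scores, {"guitar", "jazz"}))   # ['bob', 'alice', 'carol']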

  7. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.

  8. Parallel scalability of Hartree-Fock calculations

    Science.gov (United States)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
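
    The density-matrix purification mentioned above can be sketched with the classic McWeeny iteration (a generic dense NumPy example; the paper's implementation is distributed and, as stated, uses no sparsity):

        import numpy as np

        def mcweeny_purify(D, iters=30):
            """McWeeny purification: D <- 3 D^2 - 2 D^3 drives D toward idempotency (D^2 = D)."""
            for _ in range(iters):
                D2 = D @ D
                D = 3.0 * D2 - 2.0 * (D2 @ D)
            return D

        # Toy example: perturb an exact rank-3 projector, then purify it back.
        rng = np.random.default_rng(1)
        Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
        P = Q[:, :3] @ Q[:, :3].T                 # exact projector (eigenvalues 0 and 1)
        D = P + 0.05 * rng.standard_normal((6, 6))
        D = 0.5 * (D + D.T)                       # keep the matrix symmetric
        D = mcweeny_purify(D)
        print(np.allclose(D @ D, D, atol=1e-8))   # True: D is numerically idempotent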

  9. iSIGHT-FD scalability test report.

    Energy Technology Data Exchange (ETDEWEB)

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  10. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  11. Scalable on-chip quantum state tomography

    Science.gov (United States)

    Titchener, James G.; Gräfe, Markus; Heilmann, René; Solntsev, Alexander S.; Szameit, Alexander; Sukhorukov, Andrey A.

    2018-03-01

    Quantum information systems are on a path to vastly exceed the complexity of any classical device. The number of entangled qubits in quantum devices is rapidly increasing, and the information required to fully describe these systems scales exponentially with qubit number. This scaling is the key benefit of quantum systems, however it also presents a severe challenge. To characterize such systems typically requires an exponentially long sequence of different measurements, becoming highly resource demanding for large numbers of qubits. Here we propose and demonstrate a novel and scalable method for characterizing quantum systems based on expanding a multi-photon state to larger dimensionality. We establish that the complexity of this new measurement technique only scales linearly with the number of qubits, while providing a tomographically complete set of data without a need for reconfigurability. We experimentally demonstrate an integrated photonic chip capable of measuring two- and three-photon quantum states with statistical reconstruction fidelity of 99.71%.

  12. A versatile scalable PET processing system

    International Nuclear Information System (INIS)

    Dong, H.; Weisenberger, A.; McKisson, J.; Wenze, Xi; Cuevas, C.; Wilson, J.; Zukerman, L.

    2011-01-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  13. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer and how strategic partners...... are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation...... for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses...

  14. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
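
    The near-optimal guarantee mentioned in (b) comes from greedy maximization of a monotone submodular set function. The sketch below illustrates that generic greedy loop with a toy, hypothetical coverage score; it is not the paper's actual gateway objective or its fast algorithms.

```python
# Minimal sketch (not the paper's objective): greedy maximization of a
# monotone submodular score, the standard (1 - 1/e)-approximation that makes
# near-optimal gateway selection tractable at scale.

def greedy_gateways(candidates, score, k):
    """Pick up to k gateways greedily; `score(S)` is assumed monotone submodular."""
    chosen = []
    for _ in range(k):
        best, best_gain = None, 0.0
        base = score(chosen)
        for v in candidates:
            if v in chosen:
                continue
            gain = score(chosen + [v]) - base
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:          # no candidate improves the score any further
            break
        chosen.append(best)
    return chosen

# Toy score: fraction of (hypothetical) source->target paths touched by S.
paths = [("a", "c", "t"), ("a", "d", "t"), ("a", "c", "d", "t")]
score = lambda S: sum(any(v in p for v in S) for p in paths) / len(paths)
print(greedy_gateways(["c", "d"], score, k=1))   # -> ['c']
```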

  15. Scalable quantum search using trapped ions

    International Nuclear Information System (INIS)

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.

    2010-01-01

    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
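
    For reference, each logical iteration named above (oracle plus inversion about average) acts on the state vector as in the minimal NumPy sketch below; this simulates only the algebra of Grover's algorithm, not the Dicke-state preparation or the trapped-ion pulse sequence of the proposal.

```python
import numpy as np

# Minimal sketch of Grover's iteration structure (oracle + inversion about
# average) on a plain state vector; in the proposal each of these two logical
# steps is realized by a single off-resonant laser pulse.
n, marked = 3, 5                      # 3 qubits, searching for index 5
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))    # uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                       # oracle: flip marked amplitude
    state = 2 * state.mean() - state          # inversion about the average

print(np.argmax(state ** 2), (state ** 2)[marked])  # -> 5, probability ~0.945
```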

  16. Scalable graphene aptasensors for drug quantification

    Science.gov (United States)

    Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie

    2017-11-01

    Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.

  17. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  18. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show scalability giving a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with the previous parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
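
    The DCT-coefficient knob mentioned above can be pictured with the small sketch below: keeping only a k x k low-frequency corner of each 8 x 8 block trades quality for work. This only illustrates the principle (the real encoder saves complexity by not computing the discarded coefficients in the first place); scipy is assumed to be available.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustration of the complexity/quality trade-off: keep only a k x k
# low-frequency corner of each 8x8 DCT block and reconstruct.
def truncated_block(block, k):
    coeffs = dctn(block, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:k, :k] = 1.0
    return idctn(coeffs * mask, norm="ortho")

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
for k in (8, 4, 2):
    err = np.sqrt(np.mean((block - truncated_block(block, k)) ** 2))
    print(f"k={k}: RMSE={err:.2f}")   # error grows as fewer coefficients are kept
```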

  19. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  20. Fourier transform based scalable image quality measure.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi; McLoughlin, Ian; Emmanuel, Sabu; Chia, Liang-Tien

    2012-08-01

    We present a new image quality assessment (IQA) algorithm based on the phase and magnitude of the two-dimensional (2D) Discrete Fourier Transform (DFT). The basic idea is to compare the phase and magnitude of the reference and distorted images to compute the quality score. However, it is well known that the Human Visual System's (HVS's) sensitivity to different frequency components is not the same. We accommodate this fact via a simple yet effective strategy of nonuniform binning of the frequency components. This process also leads to a reduced-space representation of the image, thereby enabling the reduced-reference (RR) prospects of the proposed scheme. We employ linear regression to integrate the effects of the changes in phase and magnitude. In this way, the required weights are determined via proper training and hence more convincing and effective. Lastly, using the fact that phase usually conveys more information than magnitude, we use only the phase for RR quality assessment. This provides the crucial advantage of further reduction in the required amount of reference image information. The proposed method is therefore further scalable for RR scenarios. We report extensive experimental results using a total of 9 publicly available databases: 7 image (with a total of 3832 distorted images with diverse distortions) and 2 video databases (a total of 228 distorted videos). These show that the proposed method is overall better than several of the existing full-reference (FR) algorithms and two RR algorithms. Additionally, there is a graceful degradation in prediction performance as the amount of reference image information is reduced, thereby confirming its scalability prospects. To enable comparisons and future study, a Matlab implementation of the proposed algorithm is available at http://www.ntu.edu.sg/home/wslin/reduced_phase.rar.
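
    A stripped-down illustration of the phase/magnitude comparison described above is sketched below. The nonuniform frequency binning, HVS weighting and regression-trained weights of the actual algorithm are omitted; the weights used here are ad hoc placeholders.

```python
import numpy as np

# Minimal sketch (not the authors' trained model): compare the 2-D DFT phase
# and magnitude of a reference and a distorted image.  The paper additionally
# bins frequencies nonuniformly and learns the combination weights by linear
# regression against subjective scores; the weights below are arbitrary.
def dft_features(ref, dist):
    R, D = np.fft.fft2(ref), np.fft.fft2(dist)
    phase_err = np.mean(np.abs(np.angle(R * np.conj(D))))   # wrapped phase error
    mag_err = np.mean(np.abs(np.abs(R) - np.abs(D))) / ref.size
    return phase_err, mag_err

rng = np.random.default_rng(1)
ref = rng.uniform(0, 1, (64, 64))
dist = ref + rng.normal(0, 0.1, ref.shape)       # hypothetical distortion
p, m = dft_features(ref, dist)
score = -(0.7 * p + 0.3 * m)                     # ad-hoc weights, placeholders only
print(round(p, 3), round(m, 3), round(score, 3))
```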

  1. Improving diabetes medication adherence: successful, scalable interventions

    Directory of Open Access Journals (Sweden)

    Zullig LL

    2015-01-01

    Full Text Available Leah L Zullig,1,2 Walid F Gellad,3,4 Jivan Moaddeb,2,5 Matthew J Crowley,1,2 William Shrank,6 Bradi B Granger,7 Christopher B Granger,8 Troy Trygstad,9 Larry Z Liu,10 Hayden B Bosworth1,2,7,11 1Center for Health Services Research in Primary Care, Durham Veterans Affairs Medical Center, Durham, NC, USA; 2Department of Medicine, Duke University, Durham, NC, USA; 3Center for Health Equity Research and Promotion, Pittsburgh Veterans Affairs Medical Center, Pittsburgh, PA, USA; 4Division of General Internal Medicine, University of Pittsburgh, Pittsburgh, PA, USA; 5Institute for Genome Sciences and Policy, Duke University, Durham, NC, USA; 6CVS Caremark Corporation; 7School of Nursing, Duke University, Durham, NC, USA; 8Department of Medicine, Division of Cardiology, Duke University School of Medicine, Durham, NC, USA; 9North Carolina Community Care Networks, Raleigh, NC, USA; 10Pfizer, Inc., and Weill Medical College of Cornell University, New York, NY, USA; 11Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC, USA Abstract: Effective medications are a cornerstone of prevention and disease treatment, yet only about half of patients take their medications as prescribed, resulting in a common and costly public health challenge for the US healthcare system. Since poor medication adherence is a complex problem with many contributing causes, there is no one universal solution. This paper describes interventions that were not only effective in improving medication adherence among patients with diabetes, but were also potentially scalable (i.e., easy to implement in a large population). We identify key characteristics that make these interventions effective and scalable. This information is intended to inform healthcare systems seeking proven, low resource, cost-effective solutions to improve medication adherence. Keywords: medication adherence, diabetes mellitus, chronic disease, dissemination research

  2. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-07-01

    Full Text Available This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media aware network element. The concerned type of transport channel is a dedicated channel subject to parameter (bitrate, loss rate) variations over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods and they demonstrate how ROI coding combined with SNR scalability further improves the visual quality.
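
    The scheduling idea can be illustrated with the toy sketch below: packets of a layered (scalable) stream are transmitted in order of importance under a bitrate budget, so that the base layer and ROI enhancement survive when the channel degrades. This is a generic illustration, not the paper's media-aware scheduling algorithm, and the layer names and sizes are made up.

```python
# Toy priority scheduler for a layered stream: base layer first, then
# enhancement layers, so losses under a tight budget degrade quality gracefully.
PRIORITY = {"base": 0, "roi_enh": 1, "temporal_enh": 2, "snr_enh": 3}

def schedule(packets, budget_bits):
    sent, used = [], 0
    for p in sorted(packets, key=lambda p: PRIORITY[p["layer"]]):
        if used + p["bits"] <= budget_bits:
            sent.append(p["id"])
            used += p["bits"]
    return sent

frame = [{"id": 1, "layer": "base", "bits": 4000},
         {"id": 2, "layer": "roi_enh", "bits": 3000},
         {"id": 3, "layer": "snr_enh", "bits": 5000},
         {"id": 4, "layer": "temporal_enh", "bits": 2000}]
print(schedule(frame, budget_bits=9000))   # -> [1, 2, 4]: the SNR layer is dropped
```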

  3. Efficient quantum computing using coherent photon conversion.

    Science.gov (United States)

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting

  4. Stimulated coherent transition radiation

    International Nuclear Information System (INIS)

    Hung-chi Lihn.

    1996-03-01

    Coherent radiation emitted from a relativistic electron bunch consists of wavelengths longer than or comparable to the bunch length. The intensity of this radiation exceeds that of its incoherent counterpart, which extends to wavelengths shorter than the bunch length, by a factor equal to the number of electrons in the bunch. In typical accelerators, this factor is about 8 to 11 orders of magnitude. The spectrum of the coherent radiation is determined by the Fourier transform of the electron bunch distribution and, therefore, contains information about the bunch distribution. Coherent transition radiation emitted from subpicosecond electron bunches at the Stanford SUNSHINE facility is observed in the far-infrared regime through a room-temperature pyroelectric bolometer and characterized through the electron bunch-length study. To measure the bunch length, a new frequency-resolved subpicosecond bunch-length measuring system is developed. This system uses a far-infrared Michelson interferometer to measure the spectrum of coherent transition radiation through optical autocorrelation with resolution far better than existing time-resolved methods. Hence, the radiation spectrum and the bunch length are deduced from the autocorrelation measurement. To study the stimulation of coherent transition radiation, a special cavity named BRAICER is invented. Far-infrared light pulses of coherent transition radiation emitted from electron bunches are delayed and circulated in the cavity to coincide with subsequent incoming electron bunches. This coincidence of light pulses with electron bunches enables the light to do work on electrons, and thus stimulates more radiated energy. The possibilities of extending the bunch-length measuring system to measure the three-dimensional bunch distribution and making the BRAICER cavity a broadband, high-intensity, coherent, far-infrared light source are also discussed.
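
    The statement that the coherent spectrum is set by the Fourier transform of the bunch distribution can be made concrete with the standard form-factor relation sketched below, here evaluated for an assumed Gaussian bunch of 0.5 ps rms length and 10^10 electrons; the numbers are illustrative, not SUNSHINE parameters.

```python
import numpy as np

# Standard form-factor relation: the total spectrum scales like
#   p(lambda) * [N + N*(N-1)*|f(lambda)|^2],
# where f is the Fourier transform of the normalized longitudinal bunch
# distribution (Gaussian assumed here).
c = 3.0e8                                  # m/s
sigma_t = 0.5e-12                          # 0.5 ps rms bunch length (assumed)
sigma_z = c * sigma_t
N = 1e10                                   # electrons per bunch (assumed)

for lam in (3e-3, 1e-3, 3e-4, 1e-4):       # wavelengths in metres
    k = 2 * np.pi / lam
    f = np.exp(-(k * sigma_z) ** 2 / 2)    # Gaussian bunch form factor
    gain = 1 + (N - 1) * f ** 2            # enhancement over incoherent emission
    print(f"lambda={lam*1e3:5.2f} mm  |f|^2={f**2:.2e}  gain={gain:.2e}")
```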

  5. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    International Nuclear Information System (INIS)

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.

    2004-01-01

    A high performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world, imposing a high demand on their database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid Proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan.
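
    The middle-tier pattern described above (abstract object request in, cacheable XML datagram out) can be sketched in a few lines; the example below uses an in-memory SQLite table and hypothetical object and column names, not CDF's actual schema or Frontier's servlet code.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sketch of the middle-tier idea: map a named data object to a database query
# and return the rows as an XML datagram that downstream HTTP proxies can cache.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE calibration (channel INTEGER, gain REAL)")
db.executemany("INSERT INTO calibration VALUES (?, ?)", [(1, 0.98), (2, 1.02)])

def handle_request(object_name, run):
    queries = {"Calibration": "SELECT channel, gain FROM calibration"}  # hypothetical mapping
    root = ET.Element("frontier", {"object": object_name, "run": str(run)})
    for channel, gain in db.execute(queries[object_name]):
        ET.SubElement(root, "row", {"channel": str(channel), "gain": str(gain)})
    return ET.tostring(root, encoding="unicode")

print(handle_request("Calibration", run=42))
```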

  6. Phonon-based scalable platform for chip-scale quantum computing

    Directory of Open Access Journals (Sweden)

    Charles M. Reinke

    2016-12-01

    Full Text Available We present a scalable phonon-based quantum computer on a phononic crystal platform. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables coupling of the phonon modes to the energy levels of the atom. We show theoretical optimization of the cavity design and coupling waveguide, along with estimated performance figures of the coupled system. A qubit can be created by entangling a phonon at the resonance frequency of the cavity with the atom states. Qubits based on this half-sound, half-matter quasi-particle, called a phoniton, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.

  7. SAR image effects on coherence and coherence estimation.

    Energy Technology Data Exchange (ETDEWEB)

    Bickel, Douglas Lloyd

    2014-01-01

    Radar coherence is an important concept for imaging radar systems such as synthetic aperture radar (SAR). This document quantifies some of the effects in SAR which modify the coherence. Although these effects can disrupt the coherence within a single SAR image, this report will focus on the coherence between separate images, such as for coherent change detection (CCD) processing. There have been other presentations on aspects of this material in the past. The intent of this report is to bring various issues that affect the coherence together in a single report to support radar engineers in making decisions about these matters.

  8. A scalable and continuous-upgradable optical wireless and wired convergent access network.

    Science.gov (United States)

    Sung, J Y; Cheng, K T; Chow, C W; Yeh, C H; Pan, C-L

    2014-06-02

    In this work, a scalable and continuously upgradable convergent optical access network is proposed. By using a multi-wavelength coherent comb source and a programmable waveshaper at the central office (CO), optical millimeter-wave (mm-wave) signals of different frequencies (from baseband to > 100 GHz) can be generated. Hence, it provides a scalable and continuously upgradable solution for end-users who need 60 GHz wireless services now and > 100 GHz wireless services in the future. During the upgrade, users only need to upgrade their optical networking unit (ONU). A programmable waveshaper is used to select suitable optical tones whose wavelength separation equals the desired mm-wave frequency, while the CO remains intact. The centralized architecture of the proposed system makes it easy to add new services and end-users, and the centralized control of the wavelengths makes the system more stable. A wired data rate of 17.45 Gb/s and a W-band wireless data rate of up to 3.36 Gb/s were demonstrated after transmission over 40 km of single-mode fiber (SMF).
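
    The tone-selection rule stated above reduces to a one-line relation between the wavelength spacing of the selected comb lines and the generated beat (mm-wave) frequency; the sketch below evaluates it near 1550 nm with illustrative spacings, which are assumptions rather than values from the paper.

```python
# Heterodyning two comb lines separated by delta_lambda on a photodiode gives a
# beat note at delta_f = c * delta_lambda / lambda^2 (values are illustrative).
c = 3.0e8           # m/s
lam = 1550e-9       # assumed carrier wavelength, m

def beat_frequency(delta_lambda_nm):
    return c * (delta_lambda_nm * 1e-9) / lam ** 2

for dlam in (0.48, 0.80):                       # nm between the selected tones
    print(f"{dlam:.2f} nm spacing -> {beat_frequency(dlam) / 1e9:.1f} GHz")
# ~0.48 nm -> ~60 GHz service today; ~0.80 nm -> ~100 GHz after an ONU upgrade
```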

  9. A scalable quantum computer with ions in an array of microtraps

    Science.gov (United States)

    Cirac; Zoller

    2000-04-06

    Quantum computers require the storage of quantum information in a set of two-level systems (called qubits), the processing of this information using quantum gates and a means of final readout. So far, only a few systems have been identified as potentially viable quantum computer models--accurate quantum control of the coherent evolution is required in order to realize gate operations, while at the same time decoherence must be avoided. Examples include quantum optical systems (such as those utilizing trapped ions or neutral atoms, cavity quantum electrodynamics and nuclear magnetic resonance) and solid state systems (using nuclear spins, quantum dots and Josephson junctions). The most advanced candidates are the quantum optical and nuclear magnetic resonance systems, and we expect that they will allow quantum computing with about ten qubits within the next few years. This is still far from the numbers required for useful applications: for example, the factorization of a 200-digit number requires about 3,500 qubits, rising to 100,000 if error correction is implemented. Scalability of proposed quantum computer architectures to many qubits is thus of central importance. Here we propose a model for an ion trap quantum computer that combines scalability (a feature usually associated with solid state proposals) with the advantages of quantum optical systems (in particular, quantum control and long decoherence times).

  10. Designing Interfaces

    CERN Document Server

    Tidwell, Jenifer

    2010-01-01

    Despite all of the UI toolkits available today, it's still not easy to design good application interfaces. This bestselling book is one of the few reliable sources to help you navigate through the maze of design options. By capturing UI best practices and reusable ideas as design patterns, Designing Interfaces provides solutions to common design problems that you can tailor to the situation at hand. This updated edition includes patterns for mobile apps and social media, as well as web applications and desktop software. Each pattern contains full-color examples and practical design advice th

  11. COHERENT Experiment: current status

    International Nuclear Information System (INIS)

    Akimov, D; Belov, V; Bolozdynya, A; Burenkov, A; Albert, J B; Del Valle Coello, M; D’Onofrio, M; Awe, C; Barbeau, P S; Cervantes, M; Becker, B; Cabrera-Palmer, B; Collar, J I; Cooper, R J; Cooper, R L; Cuesta, C; Detwiler, J; Eberhardt, A; Dean, D; Dolgolenko, A G

    2017-01-01

    The COHERENT Collaboration is carrying out a long-term neutrino physics research program. The main goals of the program are to detect and study coherent elastic neutrino-nucleus scattering (CEνNS). This process is predicted by the Standard Model but has never been observed experimentally because of the very low energy of the recoiling nucleus. COHERENT uses several detector technologies: CsI[Na] and NaI scintillator crystals, a single-phase liquid Ar detector, and a Ge detector. All detector setups are located in the basement of the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). The current status of the COHERENT experimental program is presented. (paper)

  12. Dynamic coherent backscattering mirror

    Energy Technology Data Exchange (ETDEWEB)

    Zeylikovich, I.; Xu, M., E-mail: mxu@fairfield.edu [Physics Department, Fairfield University, Fairfield, CT 06824 (United States)

    2016-02-15

    The phase of multiply scattered light has recently attracted considerable interest. Coherent backscattering is a striking phenomenon of multiple scattered light in which the coherence of light survives multiple scattering in a random medium and is observable in the direction space as an enhancement of the intensity of backscattered light within a cone around the retroreflection direction. Reciprocity also leads to enhancement of backscattering light in the spatial space. The random medium behaves as a reciprocity mirror which robustly converts a diverging incident beam into a converging backscattering one focusing at a conjugate spot in space. Here we first analyze theoretically this coherent backscattering mirror (CBM) phenomenon and then demonstrate the capability of CBM compensating and correcting both static and dynamic phase distortions occurring along the optical path. CBM may offer novel approaches for high speed dynamic phase corrections in optical systems and find applications in sensing and navigation.

  13. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Full Text Available Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could increase the load on the international backbone and overload popular servers. Several solutions have been proposed to solve this problem; among them, two categories have been widely discussed: strong document coherency and weak document coherency. The cost and the efficiency of the two categories are still a controversial issue: while in some studies strong coherency is far too expensive to be used in the Web context, in other studies it could be maintained at a low cost. The accuracy of these analyses depends very much on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on Internet traffic. The ultimate goal is to study the cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation and quantifying their impact on the simulation accuracy. The results presented in this study show some differences in the outcome of the simulation of a Web cache depending on the workload being used and the probability distribution used to approximate updates on the cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on the performance of the cache.
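
    The two coherence families compared in the study can be summarised by the toy lookup below: a weak (TTL-based) cache may serve stale documents but contacts the origin rarely, while a strong (validate-on-hit) cache trades extra control traffic for freshness. The classes and field names are illustrative only, not a model from the paper.

```python
import time

class Origin:                                     # stand-in for the origin server
    def __init__(self):
        self.body, self.version = "v1 content", 1
    def get(self, url):
        return self.body, self.version
    def current_version(self, url):
        return self.version

class Entry:
    def __init__(self, body, version, ttl):
        self.body, self.version = body, version
        self.expires = time.time() + ttl

def fetch(cache, origin, url, strong=False, ttl=60):
    e = cache.get(url)
    if e is not None:
        if not strong and time.time() < e.expires:
            return e.body                         # weak: trust the TTL, maybe stale
        if strong and origin.current_version(url) == e.version:
            return e.body                         # strong: validated, no re-transfer
    body, version = origin.get(url)               # miss, expired, or changed
    cache[url] = Entry(body, version, ttl)
    return body

cache, origin = {}, Origin()
print(fetch(cache, origin, "/doc"))               # miss: fetched from the origin
origin.body, origin.version = "v2 content", 2     # origin document is updated
print(fetch(cache, origin, "/doc"))               # weak: still serves stale "v1 content"
print(fetch(cache, origin, "/doc", strong=True))  # strong: revalidates, returns "v2 content"
```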

  14. Optical Coherence Tomography

    DEFF Research Database (Denmark)

    Mogensen, Mette; Themstrup, Lotte; Banzhaf, Christina

    2014-01-01

    Optical coherence tomography (OCT) has developed rapidly since its first realisation in medicine and is currently an emerging technology in the diagnosis of skin disease. OCT is an interferometric technique that detects reflected and backscattered light from tissue and is often described...

  15. Coherent light microscopy

    CERN Document Server

    Ferraro, Pietro; Zalevsky, Zeev

    2011-01-01

    This book deals with the latest achievements in the field of optical coherent microscopy. While many other books exist on microscopy and imaging, this book provides a unique resource dedicated solely to this subject. Similarly, many books describe applications of holography, interferometry and speckle to metrology but do not focus on their use for microscopy. The coherent light microscopy reference provided here does not focus on the experimental mechanics of such techniques but instead is meant to provide a users manual to illustrate the strengths and capabilities of developing techniques. Th

  16. Interface unit

    NARCIS (Netherlands)

    Keyson, D.V.; Freudenthal, A.; De Hoogh, M.P.A.; Dekoven, E.A.M.

    2001-01-01

    The invention relates to an interface unit comprising at least a display unit for communication with a user, which is designed for being coupled with a control unit for at least one or more parameters in a living or working environment, such as the temperature setting in a house, which control unit

  17. Qubit lattice coherence induced by electromagnetic pulses in superconducting metamaterials.

    Science.gov (United States)

    Ivić, Z; Lazarides, N; Tsironis, G P

    2016-07-12

    Quantum bits (qubits) are at the heart of quantum information processing schemes. Currently, solid-state qubits, and in particular the superconducting ones, seem to satisfy the requirements for being the building blocks of viable quantum computers, since they exhibit relatively long coherence times, extremely low dissipation, and scalability. The possibility of achieving quantum coherence in macroscopic circuits comprising Josephson junctions, envisioned by Leggett in the 1980s, was demonstrated for the first time in a charge qubit; since then, the exploitation of macroscopic quantum effects in low-capacitance Josephson junction circuits allowed for the realization of several kinds of superconducting qubits. Furthermore, coupling between qubits has been successfully achieved, which was followed by the construction of multiple-qubit logic gates and the implementation of several algorithms. Here it is demonstrated that induced qubit lattice coherence as well as two remarkable quantum coherent optical phenomena, i.e., self-induced transparency and Dicke-type superradiance, may occur during light-pulse propagation in quantum metamaterials comprising superconducting charge qubits. The generated qubit lattice pulse forms a compound "quantum breather" that propagates in synchrony with the electromagnetic pulse. The experimental confirmation of such effects in superconducting quantum metamaterials may open a new pathway to potentially powerful quantum computing.

  18. Interface superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gariglio, S., E-mail: stefano.gariglio@unige.ch [DQMP, Université de Genève, 24 Quai E.-Ansermet, CH-1211 Genève (Switzerland); Gabay, M. [Laboratoire de Physique des Solides, Bat 510, Université Paris-Sud 11, Centre d’Orsay, 91405 Orsay Cedex (France); Mannhart, J. [Max Planck Institute for Solid State Research, 70569 Stuttgart (Germany); Triscone, J.-M. [DQMP, Université de Genève, 24 Quai E.-Ansermet, CH-1211 Genève (Switzerland)

    2015-07-15

    Highlights: • We discuss interfacial superconductivity, a field boosted by the discovery of the superconducting interface between LaAlO3 and SrTiO3. • This system allows the electric field control and the on/off switching of the superconducting state. • We compare superconductivity at the interface and in bulk doped SrTiO3. • We discuss the role of the interfacially induced Rashba-type spin–orbit coupling. • We briefly discuss superconductivity in cuprates, in electrical double layer transistor field effect experiments. • Recent observations of a high Tc in a monolayer of FeSe deposited on SrTiO3 are presented. - Abstract: Low dimensional superconducting systems have been the subject of numerous studies for many years. In this article, we focus our attention on interfacial superconductivity, a field that has been boosted by the discovery of superconductivity at the interface between the two band insulators LaAlO3 and SrTiO3. We explore the properties of this amazing system that allows the electric field control and on/off switching of superconductivity. We discuss the similarities and differences between bulk doped SrTiO3 and the interface system and the possible role of the interfacially induced Rashba-type spin–orbit coupling. We also, more briefly, discuss interface superconductivity in cuprates, in electrical double layer transistor field effect experiments, and the recent observation of a high Tc in a monolayer of FeSe deposited on SrTiO3.

  19. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  20. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Tran...... codec provides frame by frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% bits compared to JPEG LS and H.264 Intra frame lossless coding and do so as a scalable-to-lossless coding.

  1. Design issues for numerical libraries on scalable multicore architectures

    International Nuclear Information System (INIS)

    Heroux, M A

    2008-01-01

    Future generations of scalable computers will rely on multicore nodes for a significant portion of overall system performance. At present, most applications and libraries cannot exploit multiple cores beyond running additional MPI processes per node. In this paper we discuss important multicore architecture issues, programming models, algorithm requirements and software design related to effective use of scalable multicore computers. In particular, we focus on important issues for library research and development, making recommendations for how to effectively develop libraries for future scalable computer systems.

  2. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
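
    The maximum-likelihood objective mentioned above can be written down compactly for the Bernoulli stochastic block model; the sketch below scores a candidate community assignment, which is the quantity the paper's multi-stage and message-passing algorithms optimise at scale. It is only the objective, not their algorithm.

```python
import numpy as np

# Bernoulli SBM log-likelihood of an undirected adjacency matrix A under a
# community assignment z, with block probabilities at their MLE m_rs / n_rs.
def sbm_log_likelihood(A, z, k):
    ll = 0.0
    for r in range(k):
        for s in range(r, k):
            rows, cols = np.where(z == r)[0], np.where(z == s)[0]
            block = A[np.ix_(rows, cols)]
            if r == s:
                pairs = len(rows) * (len(rows) - 1) / 2
                edges = block.sum() / 2            # each edge counted twice
            else:
                pairs = len(rows) * len(cols)
                edges = block.sum()
            if pairs == 0:
                continue
            p = edges / pairs
            if 0 < p < 1:                          # p in {0,1} contributes 0
                ll += edges * np.log(p) + (pairs - edges) * np.log(1 - p)
    return ll

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
z = np.array([0, 0, 0, 1])                         # candidate assignment
print(sbm_log_likelihood(A, z, k=2))               # higher is better across candidates
```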

  3. Building Scalable Knowledge Graphs for Earth Science

    Science.gov (United States)

    Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.

    2017-12-01

    Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain with other entities via relationships. Researchers could mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools to generate scalable Knowledge graphs using existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing Knowledge graphs for Earth Science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth Science uses external resources (including metadata catalogs and controlled vocabulary) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented as well as lessons learned.

  4. Ancestors protocol for scalable key management

    Directory of Open Access Journals (Sweden)

    Dieter Gollmann

    2010-06-01

    Full Text Available Group key management is an important functional building block for secure multicast architecture. Accordingly, it has been extensively studied in the literature. The main proposed protocol is Adaptive Clustering for Scalable Group Key Management (ASGK). According to the ASGK protocol, the multicast group is divided into clusters, where each cluster consists of areas of members. Each cluster uses its own Traffic Encryption Key (TEK). These clusters are updated periodically depending on the dynamism of the members during the secure session. A modified protocol is proposed based on ASGK, with some modifications to balance the number of affected members and the encryption/decryption overhead for any number of areas when a member joins or leaves the group. This modified protocol is called the Ancestors protocol. According to the Ancestors protocol, every area receives the dynamism of the members from its parents. The main objective of the modified protocol is to reduce the number of affected members when members join or leave, so that the 1-affects-n overhead is reduced. A comparative study has been done between the ASGK protocol and the modified protocol. According to the comparative results, the modified protocol consistently outperforms the ASGK protocol.
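
    The 1-affects-n point is the core of cluster-based schemes such as ASGK and the Ancestors variant: because each cluster owns its own TEK, a membership change forces a rekey only inside that cluster. The sketch below shows just this bounding effect with made-up cluster names; it does not implement either protocol's key distribution.

```python
import os

# Toy clusters, each with its own traffic encryption key (TEK).
clusters = {"A": {"alice", "bob"}, "B": {"carol", "dave", "erin"}}
teks = {cid: os.urandom(16) for cid in clusters}

def leave(member):
    """Remove a member; only that member's cluster is rekeyed."""
    for cid, members in clusters.items():
        if member in members:
            members.remove(member)
            teks[cid] = os.urandom(16)        # fresh TEK for this cluster only
            return cid, len(members)          # cluster id, members to update
    raise KeyError(member)

print(leave("dave"))   # -> ('B', 2): only cluster B's two remaining members rekey
```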

  5. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  6. Percolator: Scalable Pattern Discovery in Dynamic Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sutanay; Purohit, Sumit; Lin, Peng; Wu, Yinghui; Holder, Lawrence B.; Agarwal, Khushbu

    2018-02-06

    We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate a) the feasibility of incremental pattern mining by walking through each component of Percolator, b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two user cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.

  7. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas

    2016-01-01

    Full Text Available A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz) on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged to 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1, with a size of 103.5×103.5 mm2, operates in the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm2 is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WIMAX, and LTE devices with port upgradability.

  8. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.

    2015-02-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter or stack operations and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.
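
    A concrete instance of a CIV subscript, of the kind the analysis above has to summarise, is the write index of a pack/filter loop: it advances only on iterations where a condition holds, so it is not an affine function of the loop index, yet the writes are still provably disjoint. The sketch below (in Python, purely for illustration of the pattern) shows such a loop.

```python
# `w` is a conditional induction variable: it advances only when the predicate
# holds, so out[w] is not an affine function of i, yet every write lands on a
# distinct, increasing location, which is the property a summarization-based
# analysis must prove before parallelizing the loop (e.g. as a parallel pack).
def pack_positive(a, out):
    w = 0                        # CIV: conditionally incremented write index
    for i in range(len(a)):
        if a[i] > 0:             # condition guarding the induction step
            out[w] = a[i]
            w += 1
    return w

data = [3, -1, 0, 7, -5, 2]
out = [None] * len(data)
print(pack_positive(data, out), out)   # -> 3 [3, 7, 2, None, None, None]
```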

  9. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    E. J. C. Rijshouwer

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as a part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.
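
    The bank-conflict resolution mentioned above can be pictured with the toy scheduler below: addresses generated in one burst are packed into groups such that no group touches the same single-port bank twice (bank = address mod 8 here, as an assumption). Real hardware resolves conflicts with buffering rather than this greedy software loop.

```python
# Pack a burst of generated addresses into conflict-free groups: with 8
# single-port banks, at most one access per bank can be served per cycle.
NUM_BANKS = 8

def schedule(addresses):
    cycles = []
    for addr in addresses:
        bank = addr % NUM_BANKS
        for group in cycles:                     # first cycle with that bank free
            if bank not in group:
                group[bank] = addr
                break
        else:
            cycles.append({bank: addr})
    return cycles

burst = [0, 8, 1, 2, 9, 3, 4, 5]                 # 0 & 8 and 1 & 9 collide
for t, group in enumerate(schedule(burst)):
    print(f"cycle {t}: {sorted(group.values())}")
# cycle 0 serves six addresses; cycle 1 serves the two that collided (8 and 9)
```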

  10. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    Rijshouwer EJC

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as a part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.

  11. Scalable quantum information processing with photons and atoms

    Science.gov (United States)

    Pan, Jian-Wei

    Over the past three decades, the promises of super-fast quantum computing and secure quantum cryptography have spurred a world-wide interest in quantum information, generating fascinating quantum technologies for coherent manipulation of individual quantum systems. However, the distance of fiber-based quantum communications is limited due to intrinsic fiber loss and degradation of entanglement quality. Moreover, probabilistic single-photon sources and entanglement sources demand exponentially increased overheads for scalable quantum information processing. To overcome these problems, we are taking two paths in parallel: quantum repeaters and satellite links. We used the decoy-state QKD protocol to close the loophole of imperfect photon sources, and used the measurement-device-independent QKD protocol to close the loophole of imperfect photon detectors--two main loopholes in quantum cryptography. Based on these techniques, we are now building the world's biggest quantum secure communication backbone, from Beijing to Shanghai, with a distance exceeding 2000 km. Meanwhile, we are developing practically useful quantum repeaters that combine entanglement swapping, entanglement purification, and quantum memory for ultra-long distance quantum communication. The second line is satellite-based global quantum communication, taking advantage of the negligible photon loss and decoherence in the atmosphere. We realized teleportation and entanglement distribution over 100 km, and later on a rapidly moving platform. We are also making efforts toward the generation of multiphoton entanglement and its use in teleportation of multiple properties of a single quantum particle, topological error correction, quantum algorithms for solving systems of linear equations and machine learning. Finally, I will talk about our recent experiments on quantum simulations on ultracold atoms. On the one hand, by applying an optical Raman lattice technique, we realized a two-dimensional spin-orbit (SO

  12. The Puzzle of Coherence

    DEFF Research Database (Denmark)

    Andersen, Anne Bendix; Frederiksen, Kirsten; Beedholm, Kirsten

    2016-01-01

    Background During the past decade, politicians and healthcare providers have strived to create a coherent healthcare system across primary and secondary healthcare sectors in Denmark. Nevertheless, elderly patients with chronic diseases (EPCD) continue to report experiences of poor-quality care a...

  13. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  14. Coherence Multiplex System Topologies

    NARCIS (Netherlands)

    Meijerink, Arjan; Taniman, R.O.; Heideman, G.H.L.M.; van Etten, Wim

    2007-01-01

    Coherence multiplexing is a potentially inexpensive form of optical code-division multiple access, which is particularly suitable for short-range applications with moderate bandwidth requirements, such as access networks, LANs, or interconnects. Various topologies are known for constructing an

  15. Coherent synchrotron radiation

    International Nuclear Information System (INIS)

    Agoh, Tomonori

    2006-01-01

    This article presents basic properties of coherent synchrotron radiation (CSR) with numerical examples and introduces the reader to important aspects of CSR in future accelerators with short bunches. We show interesting features of the single bunch instability due to CSR in storage rings and discuss the longitudinal CSR field via the impedance representation. (author)

  16. Interference due to coherence swapping

    Indian Academy of Sciences (India)

    particle is, its interaction with the beam splitter does not reveal this information .... If one shines a strong linearly polarised monochromatic laser beam, or a quasi .... to be a hindrance to coherence, can be suitably designed to create coherence.

  17. Coherent states in quantum mechanics

    International Nuclear Information System (INIS)

    Rodrigues, R. de Lima; Fernandes Junior, Damasio; Batista, Sheyla Marques

    2001-12-01

    We present a review work on coherent states in non-relativistic quantum mechanics, analysing the quantum oscillators in coherent states. The coherent states obtained via a displacement operator acting on the wave function of the ground state of the oscillator, and the connection with quantum optics implemented by Glauber, have also been considered. A possible generalization to the construction of new coherent states is pointed out. (author)
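
    For reference, the displacement-operator construction referred to above is the standard one (textbook material, not specific to this review):

```latex
% Displacement operator and the coherent state it generates from the
% oscillator ground state; a is the annihilation operator.
\[
  D(\alpha) = \exp\!\left(\alpha\, a^{\dagger} - \alpha^{*} a\right),
  \qquad
  |\alpha\rangle = D(\alpha)\,|0\rangle
  = e^{-|\alpha|^{2}/2} \sum_{n=0}^{\infty} \frac{\alpha^{n}}{\sqrt{n!}}\, |n\rangle,
  \qquad
  a\,|\alpha\rangle = \alpha\,|\alpha\rangle .
\]
```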

  18. Coherent states in quantum mechanics

    CERN Document Server

    Rodrigues, R D L; Fernandes, D

    2001-01-01

    We present a review work on coherent states in non-relativistic quantum mechanics, analysing the quantum oscillators in coherent states. The coherent states obtained via a displacement operator acting on the wave function of the ground state of the oscillator, and the connection with quantum optics implemented by Glauber, have also been considered. A possible generalization to the construction of new coherent states is pointed out.

  19. Interface learning

    DEFF Research Database (Denmark)

    Thorhauge, Sally

    2014-01-01

    "Interface learning - New goals for museum and upper secondary school collaboration" investigates and analyzes the learning that takes place when museums and upper secondary schools in Denmark work together in local partnerships to develop and carry out school-related, museum-based coursework...... for students. The research focuses on the learning that the students experience in the interface of the two learning environments: The formal learning environment of the upper secondary school and the informal learning environment of the museum. Focus is also on the learning that the teachers and museum...... professionals experience as a result of their collaboration. The dissertation demonstrates how a given partnership’s collaboration affects the students’ learning experiences when they are doing the coursework. The dissertation presents findings that museum-school partnerships can use in order to develop...

  20. A Testbed for Highly-Scalable Mission Critical Information Systems

    National Research Council Canada - National Science Library

    Birman, Kenneth P

    2005-01-01

    ... systems in a networked environment. Headed by Professor Ken Birman, the project is exploring a novel fusion of classical protocols for reliable multicast communication with a new style of peer-to-peer protocol called scalable "gossip...

  1. Scalable Partitioning Algorithms for FPGAs With Heterogeneous Resources

    National Research Council Canada - National Science Library

    Selvakkumaran, Navaratnasothie; Ranjan, Abhishek; Raje, Salil; Karypis, George

    2004-01-01

    As FPGA densities increase, partitioning-based FPGA placement approaches are becoming increasingly important as they can be used to provide high-quality and computationally scalable placement solutions...

  2. Scalable Track Detection in SAR CCD Images

    Energy Technology Data Exchange (ETDEWEB)

    Chow, James G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Quach, Tu-Thach [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
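
    The architecture described above, a plain stack of 3-by-3 convolutions ending in a per-pixel track probability, can be sketched directly in NumPy; the layer widths and random weights below are placeholders, since the paper learns its weights end-to-end from labelled CCD data.

```python
import numpy as np

# Stack of 3x3 convolutions mapping a 1-channel coherence image to a
# per-pixel track probability map (weights random here, trained in the paper).
def conv3x3(x, w, b):
    h, wd = x.shape[1:]
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)))        # zero padding keeps the size
    out = np.zeros((w.shape[0], h, wd))
    for o in range(w.shape[0]):
        for c in range(x.shape[0]):
            for i in range(3):
                for j in range(3):
                    out[o] += w[o, c, i, j] * pad[c, i:i + h, j:j + wd]
        out[o] += b[o]
    return out

rng = np.random.default_rng(0)
image = rng.uniform(size=(1, 32, 32))                 # 1-channel coherence image
channels = [1, 8, 8, 1]                               # three 3x3 conv layers (assumed widths)
x = image
for cin, cout in zip(channels[:-1], channels[1:]):
    w = rng.normal(0, 0.1, (cout, cin, 3, 3))
    b = np.zeros(cout)
    x = conv3x3(x, w, b)
    if cout != 1:
        x = np.maximum(x, 0)                          # ReLU on hidden layers
prob = 1 / (1 + np.exp(-x))                           # per-pixel track probability
print(prob.shape)                                     # -> (1, 32, 32)
```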

  3. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jack Dongarra; Shirley Moore; Bart Miller, Jeffrey Hollingsworth; Tracy Rafferty

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK

  4. Coherent hybrid electromagnetic field imaging

    Science.gov (United States)

    Cooke, Bradly J [Jemez Springs, NM]; Guenther, David C [Los Alamos, NM]

    2008-08-26

    An apparatus and corresponding method for coherent hybrid electromagnetic field imaging of a target, where an energy source is used to generate a propagating electromagnetic beam, an electromagnetic beam splitting means to split the beam into two or more coherently matched beams of about equal amplitude, and where the spatial and temporal self-coherence between each two or more coherently matched beams is preserved. Two or more differential modulation means are employed to modulate each two or more coherently matched beams with a time-varying polarization, frequency, phase, and amplitude signal. An electromagnetic beam combining means is used to coherently combine said two or more coherently matched beams into a coherent electromagnetic beam. One or more electromagnetic beam controlling means are used for collimating, guiding, or focusing the coherent electromagnetic beam. One or more apertures are used for transmitting and receiving the coherent electromagnetic beam to and from the target. A receiver is used that is capable of square-law detection of the coherent electromagnetic beam. A waveform generator is used that is capable of generation and control of time-varying polarization, frequency, phase, or amplitude modulation waveforms and sequences. A means of synchronizing time-varying waveforms between the energy source and the receiver is used. Finally, a means of displaying the images created by the interaction of the coherent electromagnetic beam with the target is employed.

  5. SOL: A Library for Scalable Online Learning Algorithms

    OpenAIRE

    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai

    2016-01-01

    SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++, and provided with a collection of easy-to-use command-line tools, python wrappers and library calls for users and develope...

  6. Modular Universal Scalable Ion-trap Quantum Computer

    Science.gov (United States)

    2016-06-02

    The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally expandable ion-trap quantum computer. (Final report covering 1 Aug 2010 to 31 Jan 2016; distribution unlimited. Keywords: ion-trap quantum computation, scalable modular architectures.)

  7. Architectures and Applications for Scalable Quantum Information Systems

    Science.gov (United States)

    2007-01-01

    Architectures and Applications for Scalable Quantum Information Systems. AFRL-IF-RS-TR-2007-12, Final Technical Report, January 2007 (Grant FA8750-01-2-0521).

  8. On the scalability of LISP and advanced overlaid services

    OpenAIRE

    Coras, Florin

    2015-01-01

    In just four decades the Internet has gone from a lab experiment to a worldwide, business critical infrastructure that caters to the communication needs of almost a half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to be joined in the not so distant future, scalability, or lack thereof, is commonly believed ...

  9. Scalable, full-colour and controllable chromotropic plasmonic printing

    OpenAIRE

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates ...

  10. Responsive, Flexible and Scalable Broader Impacts (Invited)

    Science.gov (United States)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include: inability to begin the lecture with content that is responsive to audience needs; lack of flexible access to specific material within the linear presentation; and “Q&A” sessions are not easily scalable to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions. The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small

  11. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
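
    As a toy illustration of "relational algebra extended with iteration" (not MyriaL or MyriaX themselves), the following sketch runs a semi-naive reachability computation, where each round joins only the newly derived facts with the edge relation until a fixed point is reached:

      edges = {("a", "b"), ("b", "c"), ("c", "d")}
      reach = set(edges)            # reach(x, y) starts out equal to edge(x, y)
      delta = set(edges)            # facts discovered in the previous round
      while delta:
          # join delta(x, y) with edge(y, z) to propose new reach(x, z) facts
          new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
          delta = new - reach       # keep only genuinely new facts (semi-naive evaluation)
          reach |= delta
      print(sorted(reach))          # includes ("a", "d") once the fixed point is reached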

  12. Optical Coherence and Quantum Optics

    CERN Document Server

    Mandel, Leonard

    1995-01-01

    This book presents a systematic account of optical coherence theory within the framework of classical optics, as applied to such topics as radiation from sources of different states of coherence, foundations of radiometry, effects of source coherence on the spectra of radiated fields, coherence theory of laser modes, and scattering of partially coherent light by random media. The book starts with a full mathematical introduction to the subject area and each chapter concludes with a set of exercises. The authors are renowned scientists and have made substantial contributions to many of the topi

  13. Trident: scalable compute archives: workflows, visualization, and analysis

    Science.gov (United States)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era requires new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise even to novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems; including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application work flows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using NodeJS, AngularJS, and HighCharts JavaScript libraries among others while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple work flow execution framework to integrate, deploy, and execute pipelines and applications (2) a progress service to monitor work flows and sub

  14. Coherent imaging using SACLA

    International Nuclear Information System (INIS)

    Nishino, Yoshinori; Kimura, Takashi; Suzuki, Akihiro; Joti, Yasumasa; Bessho, Yoshitaka

    2017-01-01

    X-ray free-electron lasers (XFELs) with femtosecond pulse duration offer an innovative solution to transcend the spatial resolution limitation in conventional X-ray imaging for biological samples and soft matter by clearing up the radiation damage problem using the “diffraction-before-destruction” strategy. Building on this strategy, the authors are developing a method to image solution samples under a controlled environment, pulsed coherent X-ray solution scattering (PCXSS), using XFELs and phase retrieval algorithms in coherent diffractive imaging (CDI). This article describes the basics of PCXSS and examples of PCXSS measurement, for a living cell and self-assemblies of gold nanoparticles, performed by the authors using SACLA. An attempt toward the industrial application of PCXSS is also described. (author)

  15. Coherent dynamics in semiconductors

    DEFF Research Database (Denmark)

    Hvam, Jørn Märcher

    1998-01-01

    Ultrafast nonlinear optical spectroscopy is used to study the coherent dynamics of optically excited electron-hole pairs in semiconductors. Coulomb interaction implies that the optical inter-band transitions are dominated, at least at low temperatures, by excitonic effects. They are further enhanced in quantum confined lower-dimensional systems, where exciton and biexciton effects dominate the spectra even at room temperature. The coherent dynamics of excitons are at modest densities well described by the optical Bloch equations, and a number of the dynamical effects known from atomic and molecular systems are found and studied in the exciton-biexciton system of semiconductors. At densities where strong exciton interactions, or many-body effects, become dominant, the semiconductor Bloch equations present a more rigorous treatment of the phenomena. Ultrafast degenerate four-wave mixing is used...

  16. Generalized hypergeometric coherent states

    International Nuclear Information System (INIS)

    Appl, Thomas; Schiller, Diethard H

    2004-01-01

    We introduce a large class of holomorphic quantum states by choosing their normalization functions to be given by generalized hypergeometric functions. We call them generalized hypergeometric states in general, and generalized hypergeometric coherent states in particular, if they allow a resolution of unity. Depending on the domain of convergence of the generalized hypergeometric functions, we distinguish generalized hypergeometric states on the plane, the open unit disc and the unit circle. All states are eigenstates of suitably defined lowering operators. We then study their photon number statistics and phase properties as revealed by the Husimi and Pegg-Barnett phase distributions. On the basis of the generalized hypergeometric coherent states we introduce new analytic representations of arbitrary quantum states in Bargmann and Hardy spaces as well as generalized hypergeometric Husimi distributions and corresponding phase distributions

  17. The Effective Coherence Length in Anisotropic Superconductors

    International Nuclear Information System (INIS)

    Polturak, E.; Koren, G.; Nesher, O

    1999-01-01

    If electrons are transmitted from a normal conductor (N) into a superconductor (S), common wisdom has it that the electrons are converted into Cooper pairs within a coherence length from the interface. This is true in conventional superconductors with an isotropic order parameter. We have established experimentally that the situation is rather different in high Tc superconductors having an anisotropic order parameter. We used epitaxial thin film S/N bilayers having different interface orientations in order to inject carriers from S into N along different directions. The distance to which these carriers penetrate was determined through their effect on the Tc of the bilayers. We found that the effective coherence length is 20 Å only along the a or b directions, while in other directions we find a length of 250±20 Å out of plane, and an even larger value for in-plane, off high-symmetry directions. These observations can be explained using the Blonder-Tinkham-Klapwijk model adapted to anisotropic superconductivity. Several implications of our results on outstanding problems with high Tc junctions will be discussed.

  18. Quantum coherence: Reciprocity and distribution

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Asutosh, E-mail: asukumar@hri.res.in [Harish-Chandra Research Institute, Allahabad-211019 (India); Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India)

    2017-03-18

    Quantum coherence is the outcome of the superposition principle. Recently, it has been theorized as a quantum resource, and is the premise of quantum correlations in multipartite systems. It is therefore interesting to study the coherence content and its distribution in a multipartite quantum system. In this work, we show analytically as well as numerically the reciprocity between coherence and mixedness of a quantum state. We find that this trade-off is a general feature in the sense that it is true for large spectra of measures of coherence and of mixedness. We also study the distribution of coherence in multipartite systems by looking at a monogamy-type relation, which we refer to as an additivity relation, between coherences of different parts of the system. We show that for the Dicke states, while the normalized measures of coherence violate the additivity relation, the unnormalized ones satisfy the same. - Highlights: • Quantum coherence. • Reciprocity between quantum coherence and mixedness. • Distribution of quantum coherence in multipartite quantum systems. • Additivity relation for distribution of quantum coherence in Dicke and “X” states.
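
    For context, one frequently quoted trade-off of this kind (given here only as an illustrative relation from the literature, not necessarily the exact inequality established in this record) bounds the l1-norm coherence and the normalized linear-entropy mixedness of a d-dimensional state ρ:

      \[
        \frac{C_{\ell_1}^{2}(\rho)}{(d-1)^{2}} + M_{\ell}(\rho) \le 1,
        \qquad
        M_{\ell}(\rho) = \frac{d}{d-1}\bigl(1 - \operatorname{Tr}\rho^{2}\bigr),
      \]

    so that a state cannot simultaneously be maximally coherent and maximally mixed.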

  19. On coherent states

    International Nuclear Information System (INIS)

    Polubarinov, I.V.

    1975-01-01

    A definition of the coherent state representation is given in this paper. In the representation quantum theory equations take the form of classical field theory equations (with causality inherent to the latter) not only in simple cases (free field and interactions with an external current or field), but also in the general case of closed systems of interacting fields. And, conversely, a classical field theory can be transformed into a form of a quantum one

  20. The Puzzle of Coherence

    DEFF Research Database (Denmark)

    Andersen, Anne Bendix; Frederiksen, Kirsten; Beedholm, Kirsten

    2016-01-01

    During the past decade, politicians and health care providers have strived to create a coherent health care system across primary and secondary health care systems in Denmark. Nevertheless, elderly patients with chronic diseases (EPCD) continue to report experiences of poor-quality care and lack ...... both nationally and internationally in preparation of health agreements, implementation of new collaboration forms among health care providers, and in improvement of delegation and transfer of information and assignments across sectors in health care....

  1. Soft Interfaces

    International Nuclear Information System (INIS)

    Strzalkowski, Ireneusz

    1997-01-01

    This book presents an extended form of the 1994 Dirac Memorial Lecture delivered by Pierre Gilles de Gennes at Cambridge University. The main task of the presentation is to show the beauty and richness of structural forms and phenomena which are observed at soft interfaces between two media. They are much more complex than forms and phenomena existing in each phase separately. Problems are discussed including both traditional, classical techniques, such as the contact angle in static and dynamic partial wetting, as well as the latest research methodology, like 'environmental' scanning electron microscopes. The book is not a systematic lecture on phenomena but it can be considered as a compact set of essays on topics which particularly fascinate the author. The continuum theory widely used in the book is based on a deep molecular approach. The author is particularly interested in a broad-minded rheology of liquid systems at interfaces with specific emphasis on polymer melts. To study this, the author has developed a special methodology called anemometry near walls. The second main topic presented in the book is the problem of adhesion. Molecular processes, energy transformations and electrostatic interaction are included in an interesting discussion of the many aspects of the principles of adhesion. The third topic concerns welding between two polymer surfaces, such as A/A and A/B interfaces. Of great worth is the presentation of various unsolved, open problems. The kind of topics and brevity of description indicate that this book is intended for a well prepared reader. However, for any reader it will present an interesting picture of how many mysterious processes are acting in the surrounding world and how these phenomena are perceived by a Nobel Laureate, who won that prize mainly for his investigations in this field. (book review)

  2. Spectral coherence in windturbine wakes

    Energy Technology Data Exchange (ETDEWEB)

    Hojstrup, J. [Riso National Lab., Roskilde (Denmark)

    1996-12-31

    This paper describes an experiment at a Danish wind farm to investigate the lateral and vertical coherences in the nonequilibrium turbulence of a wind turbine wake. Two meteorological masts were instrumented for measuring profiles of mean speed, turbulence, and temperature. Results are provided graphically for turbulence intensities, velocity spectra, lateral coherence, and vertical coherence. The turbulence was somewhat influenced by the wake, or possibly from aggregated wakes further upstream, even at 14.5 diameters. Lateral coherence (separation 5m) seemed to be unaffected by the wake at 7.5 diameters, but the flow was less coherent in the near wake. The wake appeared to have little influence on vertical coherence (separation 13m). Simple, conventional models for coherence appeared to be adequate descriptions for wake turbulence except for the near wake situation. 3 refs., 7 figs., 1 tab.

  3. Interface Screenings

    DEFF Research Database (Denmark)

    Thomsen, Bodil Marie Stavning

    2015-01-01

    In Wim Wenders' film Until the End of the World (1991), three different diagrams for the visual integration of bodies are presented: 1) GPS tracking and mapping in a landscape, 2) video recordings layered with the memory perception of these recordings, and 3) data-created images from dreams...... and memories. From a transvisual perspective, the question is whether or not these (by now realized) diagrammatic modes involving the body in ubiquitous global media can be analysed in terms of the affects and events created in concrete interfaces. The examples used are filmic as felt sensations...

  4. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  5. Collision-induced coherence

    International Nuclear Information System (INIS)

    Bloembergen, N.

    1985-01-01

    Collision-induced coherence is based on the elimination of phase correlations between coherent Feynman-type pathways which happen to interfere destructively in the absence of damping for certain nonlinear processes. One consequence is the appearance of the extra resonances in four-wave light mixing experiments, for which the intensity increases with increasing buffer gas pressure. These resonances may occur between a pair of initially unpopulated excited states, or between a pair of initially equally populated ground states. The pair of levels may be Zeeman substates which become degenerate in zero magnetic field. The resulting collision-enhanced Hanle resonances can lead to very sharp variations in the four-wave light mixing signal as the external magnetic field passes through zero. The theoretical description in terms of a coherence grating between Zeeman substates is equivalent to a description in terms of a spin polarization grating obtained by collision-enhanced transverse optical pumping. The axis of quantization in the former case is taken perpendicular to the direction of the light beams; in the latter case it is taken parallel to this direction.

  6. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  7. Coherent electron cooling

    Energy Technology Data Exchange (ETDEWEB)

    Litvinenko,V.

    2009-05-04

    Cooling intense high-energy hadron beams remains a major challenge in modern accelerator physics. Synchrotron radiation is still too feeble, while the efficiency of two other cooling methods, stochastic and electron, falls rapidly either at high bunch intensities (i.e. stochastic of protons) or at high energies (e-cooling). In this talk a specific scheme of a unique cooling technique, Coherent Electron Cooling, will be discussed. The idea of coherent electron cooling using electron beam instabilities was suggested by Derbenev in the early 1980s, but the scheme presented in this talk, with cooling times under an hour for 7 TeV protons in the LHC, would be possible only with present-day accelerator technology. This talk will discuss the principles and the main limitations of the Coherent Electron Cooling process. The talk will describe the main system components, based on a high-gain free electron laser driven by an energy recovery linac, and will present some numerical examples for ions and protons in RHIC and the LHC and for electron-hadron options for these colliders. BNL plans a demonstration of the idea in the near future.

  8. Coherent radiation from pulsars

    International Nuclear Information System (INIS)

    Cox, J.L. Jr.

    1979-01-01

    Interaction between a relativistic electron stream and a plasma under conditions believed to exist in pulsar magnetospheres is shown to result in the simultaneous emission of coherent curvature radiation at radio wavelengths and incoherent curvature radiation at X-ray wavelengths from the same spatial volume. It is found that such a stream can propagate through a plasma parallel to a very strong magnetic field only if its length is less than a critical length L*. Charge induced in the plasma by the stream co-moves with the stream and has the same limitation in longitudinal extent. The resultant charge bunching is sufficient to cause the relatively low energy plasma particles to radiate at radio wavelengths coherently while the relatively high energy stream particles radiate at X-ray wavelengths incoherently as the stream-plasma system moves along curved magnetic field lines. The effective number of coherently radiating particles per bunch is estimated to be approximately 10^14-10^15 for a typical pulsar.

  9. Birefringent coherent diffraction imaging

    Science.gov (United States)

    Karpov, Dmitry; dos Santos Rolo, Tomy; Rich, Hannah; Kryuchkov, Yuriy; Kiefer, Boris; Fohtung, E.

    2016-10-01

    Directional dependence of the index of refraction contains a wealth of information about anisotropic optical properties in semiconducting and insulating materials. Here we present a novel high-resolution lens-less technique that uses birefringence as a contrast mechanism to map the index of refraction and dielectric permittivity in optically anisotropic materials. We applied this approach successfully to a liquid crystal polymer film using polarized light from a helium-neon laser. This approach is scalable to imaging with diffraction-limited resolution, a prospect rapidly becoming a reality in view of emergent brilliant X-ray sources. Applications of this novel imaging technique are in disruptive technologies, including novel electronic devices, in which both charge and spin carry information, as in multiferroic materials, and photonic materials such as light modulators and optical storage.

  10. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Real-world data sets show the effectiveness and scalability of the proposed framework.

  11. Neurofeedback training of alpha-band coherence enhances motor performance.

    Science.gov (United States)

    Mottaz, Anais; Solcà, Marco; Magnin, Cécile; Corbet, Tiffany; Schnider, Armin; Guggisberg, Adrian G

    2015-09-01

    Neurofeedback training of motor cortex activations with brain-computer interface systems can enhance recovery in stroke patients. Here we propose a new approach which trains resting-state functional connectivity associated with motor performance instead of activations related to movements. Ten healthy subjects and one stroke patient trained alpha-band coherence between their hand motor area and the rest of the brain using neurofeedback with source functional connectivity analysis and visual feedback. Seven out of ten healthy subjects were able to increase alpha-band coherence between the hand motor cortex and the rest of the brain in a single session. The patient with chronic stroke learned to enhance alpha-band coherence of his affected primary motor cortex in 7 neurofeedback sessions applied over one month. Coherence increased specifically in the targeted motor cortex and in alpha frequencies. This increase was associated with clinically meaningful and lasting improvement of motor function after stroke. These results provide proof of concept that neurofeedback training of alpha-band coherence is feasible and behaviorally useful. The study presents evidence for a role of alpha-band coherence in motor learning and may lead to new strategies for rehabilitation. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
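
    A minimal sketch of the feedback quantity involved, assuming a simple sensor-level analysis with SciPy rather than the study's source-space connectivity pipeline; the sampling rate and signals below are synthetic placeholders:

      import numpy as np
      from scipy.signal import coherence

      fs = 256.0                                    # assumed sampling rate (Hz)
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(1)
      shared = np.sin(2 * np.pi * 10 * t)           # common 10 Hz (alpha) component
      ch_motor = shared + 0.5 * rng.standard_normal(t.size)
      ch_other = shared + 0.5 * rng.standard_normal(t.size)

      # Magnitude-squared coherence between the two channels, averaged over 8-12 Hz,
      # i.e. the kind of quantity that would be fed back to the subject.
      f, cxy = coherence(ch_motor, ch_other, fs=fs, nperseg=512)
      alpha = (f >= 8) & (f <= 12)
      print(f"mean alpha-band coherence: {cxy[alpha].mean():.2f}")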

  12. SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures.

    Science.gov (United States)

    Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani

    2017-04-01

    Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it and then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m-by-10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power-efficient enough to sustain daily use by turning off the UWB transceiver when a user's wrist is stationary.

  13. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
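
    The data distribution described above rests on partitioning a spatial index across cluster nodes. A common way to do this for 3-d image volumes is to key cuboids by a Morton (Z-order) code; the sketch below is an illustrative scheme of that kind, not necessarily the project's exact layout:

      def morton3(x, y, z, bits=10):
          """Interleave the bits of (x, y, z) into a single Z-order key."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      def node_for_voxel(x, y, z, num_nodes, cuboid=128):
          """Map the cuboid containing voxel (x, y, z) to one of num_nodes storage nodes."""
          return morton3(x // cuboid, y // cuboid, z // cuboid) % num_nodes

      print(node_for_voxel(5000, 3000, 200, num_nodes=16))

    Keying by a space-filling curve keeps spatially adjacent cuboids close in key space, which helps reads that sweep contiguous regions of an image stack.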

  14. Scalable real space pseudopotential density functional codes for materials in the exascale regime

    Science.gov (United States)

    Lena, Charles; Chelikowsky, James; Schofield, Grady; Biller, Ariel; Kronik, Leeor; Saad, Yousef; Deslippe, Jack

    Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs, and clusters with and without spin polarization. Fully self-consistent solutions using this approach have been routinely obtained for systems with thousands of atoms. Yet, there are many systems of notably larger sizes where quantum mechanical accuracy is desired, but scalability proves to be a hindrance. Such systems include large biological molecules, complex nanostructures, or mismatched interfaces. We will present an overview of our new massively parallel algorithms, which offer improved scalability in preparation for exascale supercomputing. We will illustrate these algorithms by considering the electronic structure of a Si nanocrystal exceeding 10^4 atoms. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).

  15. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes - neural connectivity maps of the brain - using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems - reads to parallel disk arrays and writes to solid-state storage - to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.

  16. GSKY: A scalable distributed geospatial data server on the cloud

    Science.gov (United States)

    Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben

    2017-04-01

    Earth systems, environmental and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. Being able to combine information coming from different geospatial collections is in increasing demand by the scientific community, and requires managing and manipulating data with different formats and performing operations such as map reprojections, resampling and other transformations. Due to the large data volume inherent in these collections, storing multiple copies of them is unfeasible and so such data manipulation must be performed on-the-fly using efficient, high performance techniques. Ideally this should be performed using a trusted data service and common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door for such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), has over 10 Petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server, called GSKY (pronounced [jee-skee]). GSKY supports on demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes providing a scalable analysis framework that can adapt to serve large numbers of concurrent users. Typical geospatial workflows handling different file formats and data types, or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. This is achieved by decoupling the data ingestion and indexing process as

  17. Scalable Production of Mechanically Robust Antireflection Film for Omnidirectional Enhanced Flexible Thin Film Solar Cells.

    Science.gov (United States)

    Wang, Min; Ma, Pengsha; Yin, Min; Lu, Linfeng; Lin, Yinyue; Chen, Xiaoyuan; Jia, Wei; Cao, Xinmin; Chang, Paichun; Li, Dongdong

    2017-09-01

    Antireflection (AR) at the interface between the air and incident window material is paramount to boost the performance of photovoltaic devices. 3D nanostructures have attracted tremendous interest for reducing reflection, but such structures are vulnerable to the harsh outdoor environment. Thus, AR films with improved mechanical properties are desirable for industrial applications. Herein, a scalable production of flexible AR films with microsized structures by a roll-to-roll imprinting process is proposed; these films possess hydrophobic properties and much improved robustness. The AR films can be potentially used for a wide range of photovoltaic devices whether based on rigid or flexible substrates. As a demonstration, the AR films are integrated with commercial Si-based triple-junction thin film solar cells. The AR film works as an effective tool to control the light travel path and utilize the light more efficiently by exciting hybrid optical modes, which results in a broadband and omnidirectionally enhanced performance.

  18. PetClaw: A scalable parallel nonlinear wave propagation solver for Python

    KAUST Repository

    Alghamdi, Amal; Ahmadia, Aron; Ketcheson, David I.; Knepley, Matthew; Mandli, Kyle; Dalcin, Lisandro

    2011-01-01

    We present PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation. PetClaw unifies two well-known scientific computing packages, Clawpack and PETSc, using Python interfaces into both. We rely on Clawpack to provide the infrastructure and kernels for time-dependent nonlinear wave propagation. Similarly, we rely on PETSc to manage distributed data arrays and the communication between them. We describe both the implementation and performance of PetClaw as well as our challenges and accomplishments in scaling a Python-based code to tens of thousands of cores on the BlueGene/P architecture. The capabilities of PetClaw are demonstrated through application to a novel problem involving elastic waves in a heterogeneous medium. Very finely resolved simulations are used to demonstrate the suppression of shock formation in this system.
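
    A minimal serial sketch of the pyclaw Python interface that PetClaw builds on, solving 1-d advection; the API names below are recalled from the Clawpack documentation and may differ between versions, and the parallel PetClaw variant additionally swaps in the petclaw module and runs under MPI:

      import numpy as np
      from clawpack import pyclaw, riemann

      solver = pyclaw.ClawSolver1D(riemann.advection_1D)   # wave-propagation solver + Riemann kernel
      solver.bc_lower[0] = pyclaw.BC.periodic
      solver.bc_upper[0] = pyclaw.BC.periodic

      x = pyclaw.Dimension(0.0, 1.0, 100, name='x')
      domain = pyclaw.Domain(x)
      state = pyclaw.State(domain, num_eqn=1)
      state.problem_data['u'] = 1.0                        # advection speed

      xc = state.grid.x.centers
      state.q[0, :] = np.exp(-100.0 * (xc - 0.5) ** 2)     # Gaussian initial pulse

      claw = pyclaw.Controller()
      claw.solution = pyclaw.Solution(state, domain)
      claw.solver = solver
      claw.tfinal = 1.0
      claw.run()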

  19. Museets interface

    DEFF Research Database (Denmark)

    Pold, Søren

    2007-01-01

    Søren Pold reflects on the museum projects Kongedragter.dk and Stigombord.dk. He argues that the development of the internet's interfaces creates new ways of seeing, understanding and interacting with culture. Users acquire new media habits and patterns of perception that must be taken into account when planning future outreach. At the same time, the museum's objects gain a new status as fleeting icons in digital space, and altogether this invites museums to take a more open and experimental approach to their own practice and their role as cultural institutions.

  20. Towards a scalable and accurate quantum approach for describing vibrations of molecule–metal interfaces

    Directory of Open Access Journals (Sweden)

    David M. Benoit

    2011-08-01

    We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate and can be easily deployed on computational grids in order to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters.

  1. Topological Properties of Spatial Coherence Function

    International Nuclear Information System (INIS)

    Ji-Rong, Ren; Tao, Zhu; Yi-Shi, Duan

    2008-01-01

    The topological properties of the spatial coherence function are investigated rigorously. The phase singular structures (coherence vortices) of coherence function can be naturally deduced from the topological current, which is an abstract mathematical object studied previously. We find that coherence vortices are characterized by the Hopf index and Brouwer degree in topology. The coherence flux quantization and the linking of the closed coherence vortices are also studied from the topological properties of the spatial coherence function

  2. QUADrATiC: scalable gene expression connectivity mapping for repurposing FDA-approved therapeutics.

    Science.gov (United States)

    O'Reilly, Paul G; Wen, Qing; Bankhead, Peter; Dunne, Philip D; McArt, Darragh G; McPherson, Suzanne; Hamilton, Peter W; Mills, Ken I; Zhang, Shu-Dong

    2016-05-04

    Gene expression connectivity mapping has proven to be a powerful and flexible tool for research. Its application has been shown in a broad range of research topics, most commonly as a means of identifying potential small molecule compounds, which may be further investigated as candidates for repurposing to treat diseases. The public release of voluminous data from the Library of Integrated Cellular Signatures (LINCS) programme further enhanced the utilities and potentials of gene expression connectivity mapping in biomedicine. We describe QUADrATiC ( http://go.qub.ac.uk/QUADrATiC ), a user-friendly tool for the exploration of gene expression connectivity on the subset of the LINCS data set corresponding to FDA-approved small molecule compounds. It enables the identification of compounds with potential for therapeutic repurposing. The software is designed to cope with the increased volume of data over existing tools, by taking advantage of multicore computing architectures to provide a scalable solution, which may be installed and operated on a range of computers, from laptops to servers. This scalability is provided by the use of the modern concurrent programming paradigm provided by the Akka framework. The QUADrATiC Graphical User Interface (GUI) has been developed using advanced Javascript frameworks, providing novel visualization capabilities for further analysis of connections. There is also a web services interface, allowing integration with other programs or scripts. QUADrATiC has been shown to provide an improvement over existing connectivity map software, in terms of scope (based on the LINCS data set), applicability (using FDA-approved compounds), usability and speed. It offers biological researchers the potential to analyze transcriptional data and generate potential therapeutics for focussed study in the lab. QUADrATiC represents a step change in the process of investigating gene expression connectivity and provides more biologically-relevant results than
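
    As a simplified illustration of gene-expression connectivity scoring (a generic rank-based score, not QUADrATiC's exact statistic), up-regulated query genes near the top of a compound's reference ranking and down-regulated genes near the bottom yield a score close to +1:

      def connectivity_score(query, reference):
          """query: {gene: +1 (up) or -1 (down)}; reference: genes ordered most up- to most down-regulated."""
          n = len(reference)
          rank_weight = {g: (n - 1) - 2 * i for i, g in enumerate(reference)}  # +(n-1) at top, -(n-1) at bottom
          raw = sum(sign * rank_weight[g] for g, sign in query.items() if g in rank_weight)
          max_raw = sum(sorted((abs(w) for w in rank_weight.values()), reverse=True)[:len(query)])
          return raw / max_raw if max_raw else 0.0

      reference = ["g1", "g2", "g3", "g4", "g5", "g6"]       # a compound's ranked gene list (hypothetical)
      query = {"g1": +1, "g2": +1, "g6": -1}                 # disease signature to mimic or reverse
      print(round(connectivity_score(query, reference), 2))  # close to +1: compound mimics the signature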

  3. Partially coherent isodiffracting pulsed beams

    Science.gov (United States)

    Koivurova, Matias; Ding, Chaoliang; Turunen, Jari; Pan, Liuzhan

    2018-02-01

    We investigate a class of isodiffracting pulsed beams, which are superpositions of transverse modes supported by spherical-mirror laser resonators. By employing modal weights that, for stationary light, produce a Gaussian Schell-model beam, we extend this standard model to pulsed beams. We first construct the two-frequency cross-spectral density function that characterizes the spatial coherence in the space-frequency domain. By assuming a power-exponential spectral profile, we then employ the generalized Wiener-Khintchine theorem for nonstationary light to derive the two-time mutual coherence function that describes the space-time coherence of the ensuing beams. The isodiffracting nature of the laser resonator modes permits all (paraxial-domain) calculations at any propagation distance to be performed analytically. Significant spatiotemporal coupling is revealed in subcycle, single-cycle, and few-cycle domains, where the partial spatial coherence also leads to reduced temporal coherence even though full spectral coherence is assumed.
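
    For reference, the stationary Gaussian Schell-model that this record generalizes to pulses has the standard cross-spectral density (a textbook form, included only as background):

      \[
        W(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2,\omega)
        = \sqrt{S(\boldsymbol{\rho}_1,\omega)\,S(\boldsymbol{\rho}_2,\omega)}\,
          \exp\!\left[-\frac{(\boldsymbol{\rho}_1-\boldsymbol{\rho}_2)^{2}}{2\delta^{2}(\omega)}\right],
        \qquad
        S(\boldsymbol{\rho},\omega) = A^{2}(\omega)\,\exp\!\left[-\frac{\rho^{2}}{2\sigma^{2}(\omega)}\right],
      \]

    where σ is the beam width and δ the transverse coherence length; the record constructs the two-frequency generalization of this function and transforms it, via the generalized Wiener-Khintchine theorem, into the two-time mutual coherence function.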

  4. Coherent quantum logic

    International Nuclear Information System (INIS)

    Finkelstein, D.

    1987-01-01

    The von Neumann quantum logic lacks two basic symmetries of classical logic, that between sets and classes, and that between lower and higher order predicates. Similarly, the structural parallel between the set algebra and linear algebra of Grassmann and Peano was left incomplete by them in two respects. In this work a linear algebra is constructed that completes this correspondence and is interpreted as a new quantum logic that restores these invariances, and as a quantum set theory. It applies to experiments with coherent quantum phase relations between the quantum and the apparatus. The quantum set theory is applied to model a Lorentz-invariant quantum time-space complex

  5. Diffraction coherence in optics

    CERN Document Server

    Françon, M; Green, L L

    2013-01-01

    Diffraction: Coherence in Optics presents a detailed account of the course on Fraunhofer diffraction phenomena, studied at the Faculty of Science in Paris. The publication first elaborates on Huygens' principle and diffraction phenomena for a monochromatic point source and diffraction by an aperture of simple form. Discussions focus on diffraction at infinity and at a finite distance, simplified expressions for the field, calculation of the path difference, diffraction by a rectangular aperture, narrow slit, and circular aperture, and distribution of luminous flux in the airy spot. The book th

  6. Hadron coherent production

    International Nuclear Information System (INIS)

    Dremin, I.M.

    1981-01-01

    The process of the coherent production of hadrons, analogous to Cherenkov radiation of photons, is considered. Its appearance and qualitative treatment are possible now because it is known from experiment that the real part of the πp (and pp) forward elastic scattering amplitude is positive at high energies. The threshold behaviour of the process, as well as the very typical angular and p_T distributions (where p_T is the transverse momentum), corresponding to the ring structure of the target diagram at rather large angles and to high-p_T jet production, are emphasized.

  7. Optical coherence refractometry.

    Science.gov (United States)

    Tomlins, Peter H; Woolliams, Peter; Hart, Christian; Beaumont, Andrew; Tedaldi, Matthew

    2008-10-01

    We introduce a novel approach to refractometry using a low coherence interferometer at multiple angles of incidence. We show that for plane parallel samples it is possible to measure their phase refractive index rather than the group index that is usually measured by interferometric methods. This is a significant development because it enables bulk refractive index measurement of scattering and soft samples, not relying on surface measurements that can be prone to error. Our technique is also noncontact and compatible with in situ refractive index measurements. Here, we demonstrate this new technique on a pure silica test piece and a highly scattering resin slab, comparing the results with standard critical angle refractometry.

  8. Coherent laser beam combining

    CERN Document Server

    Brignon, Arnaud

    2013-01-01

    Recently, the improvement of diode pumping in solid state lasers and the development of double clad fiber lasers have made it possible to maintain excellent laser beam quality with single mode fibers. However, the fiber output power is often limited below a power damage threshold. Coherent laser beam combining (CLBC) brings a solution to these limitations by identifying the most efficient architectures and allowing for excellent spectral and spatial quality. This knowledge will become critical for the design of the next generation high-power lasers and is of major interest to many industrial, environme

  9. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended into the proposed scalable scenario. A novel interlayer intra/interprediction is added to reduce the number of bits required by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be achieved compared with simulcast encoding. The proposed technique achieved a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, significant rate savings confirm that the proposed method achieves better performance.

  10. Coherent states and rational surfaces

    International Nuclear Information System (INIS)

    Brody, Dorje C; Graefe, Eva-Maria

    2010-01-01

    The state spaces of generalized coherent states associated with special unitary groups are shown to form rational curves and surfaces in the space of pure states. These curves and surfaces are generated by the various Veronese embeddings of the underlying state space into higher dimensional state spaces. This construction is applied to the parameterization of generalized coherent states, which is useful for practical calculations, and provides an elementary combinatorial approach to the geometry of the coherent state space. The results are extended to Hilbert spaces with indefinite inner products, leading to the introduction of a new kind of generalized coherent states.

  11. Scalable synthesis and energy applications of defect engineered nano materials

    Science.gov (United States)

    Karakaya, Mehmet

    Nanomaterials and nanotechnologies have attracted a great deal of attention over the past few decades due to their novel physical properties, such as high aspect ratio, surface morphology, and impurities, which lead to unique chemical, optical and electronic properties. Awareness of the importance of nanomaterials has motivated researchers to develop nanomaterial growth techniques to further control nanostructure properties such as size and surface morphology that may alter their fundamental behavior. Carbon nanotubes (CNTs) are among the most promising materials for future applications owing to their rigidity, strength, elasticity and electrical conductivity. Despite the excellent properties explored in abundant research work, it remains a big challenge to introduce them into the macroscopic world for practical applications. This thesis first gives a brief overview of CNTs; it then covers the mechanical and oil-absorption properties of macro-scale CNT assemblies, followed by CNT energy storage applications and, finally, fundamental studies of defect-introduced graphene systems. Chapter Two focuses on helically coiled carbon nanotube (HCNT) foams in compression. Similarly to other foams, HCNT foams exhibit preconditioning effects in response to cyclic loading; however, their fundamental deformation mechanisms are unique. Bulk HCNT foams exhibit super-compressibility and recover more than 90% of large compressive strains (up to 80%). When subjected to striker impacts, HCNT foams mitigate impact stresses more effectively compared to other CNT foams comprised of non-helical CNTs (~50% improvement). The unique mechanical properties we revealed demonstrate that HCNT foams are ideally suited for applications in packaging, impact protection, and vibration mitigation. The third chapter describes a simple method for the scalable synthesis of three-dimensional, elastic, and recyclable multi-walled carbon nanotube (MWCNT) based lightweight bucky-aerogels (BAGs) that are

  12. Regulatory risk coherence

    International Nuclear Information System (INIS)

    Remick, F.J.

    1992-01-01

    As one of the most progressive users of risk assessment in decision making, the US Nuclear Regulatory Commission (NRC) is in a position to play an important role in influencing the development of standard government-wide policies for the application of risk assessment in decision making. The NRC, with the support of the nuclear industry, should use the opportunity provided by its experience with risk assessment to actively encourage the adoption of standard national and international health-based safety goals and at the same time accelerate its own efforts to implement the safety goals it has already developed for itself. There are signs of increased recognition of the need for consistency and coherence in the application of risk assessment in government decision making. The NRC and the nuclear industry have recently taken a great step toward establishing a consistent and coherent risk assessment-based culture in the US nuclear industry. As a result of Generic Letter 88-20, which asks each commercial nuclear power plant licensee to perform an individual plant examination by September 1992, for the first time a risk assessment characterizing initiating events in each plant will exist.

  13. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward, DNF. We call it Scalable DNF, S-DNF, and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay...... in order to save transmissions. To ensure decodability at the end-nodes, a priori information about the content of the combined packets must be available. This is gathered during the initial transmissions to the relay. The trade-off between decodability and number of necessary transmissions is analysed...

  14. StagBL: A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    Science.gov (United States)

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. Central to StagBL is the notion of an

  15. A customizable, scalable scheduling and reporting system.

    Science.gov (United States)

    Wood, Jody L; Whitman, Beverly J; Mackley, Lisa A; Armstrong, Robert; Shotto, Robert T

    2014-06-01

    Scheduling is essential for running a facility smoothly and for summarizing activities in use reports. The Penn State Hershey Clinical Simulation Center has developed a scheduling interface that uses off-the-shelf components, with customizations that adapt to each institution's data collection and reporting needs. The system is designed using programs within the Microsoft Office 2010 suite. Outlook provides the scheduling component, while the reporting is performed using Access or Excel. An account with a calendar is created for the main schedule, with separate resource accounts created for each room within the center. The Outlook appointment form's 2 default tabs are used, in addition to a customized third tab. The data are then copied from the calendar into either a database table or a spreadsheet, where the reports are generated. Incorporating this system into an institution-wide structure allows integration of personnel lists and potentially enables all users to check the schedule from their desktop. Outlook also has a Web-based application for viewing the basic schedule from outside the institution, although customized data cannot be accessed. The scheduling and reporting functions have been used for a year at the Penn State Hershey Clinical Simulation Center. The schedule has increased workflow efficiency, improved the quality of recorded information, and provided more accurate reporting. The Penn State Hershey Clinical Simulation Center's scheduling and reporting system can be adapted easily to most simulation centers and can expand and change to meet future growth with little or no expense to the center.

  16. Heterogeneous scalable framework for multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Karla Vanessa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture where the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications direct interaction with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.
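
    The key design idea above is that the scientific programmer writes serial, per-particle physics while a thin API layer owns the parallelization. A minimal Python sketch of that separation of concerns follows; it is purely illustrative (the names parallel_map and update_droplet are hypothetical), not the project's actual object-oriented Fortran/Trilinos API.

    ```python
    # Illustrative only: serial physics kernel plus a thin layer that owns parallel execution.
    from multiprocessing import Pool

    def update_droplet(droplet):
        """Serial physics kernel: advance one droplet by one (toy) time step."""
        x, v = droplet
        dt, drag = 1e-3, 0.1
        v = v - drag * v * dt        # crude momentum exchange with the carrier fluid
        x = x + v * dt
        return (x, v)

    def parallel_map(kernel, items, processes=4):
        """API layer: hides the parallelization concern from the physics code."""
        with Pool(processes) as pool:
            return pool.map(kernel, items)

    if __name__ == "__main__":
        droplets = [(float(i), 1.0) for i in range(10_000)]
        droplets = parallel_map(update_droplet, droplets)
        print(droplets[0])
    ```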

  17. Data Intensive Architecture for Scalable Cyber Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-12-19

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. In this paper, we describe a prototype implementation designed to provide cyber analysts an environment where they can interactively explore a month’s worth of cyber security data. This prototype utilized On-Line Analytical Processing (OLAP) techniques to present a data cube to the analysts. The cube provides a summary of the data, allowing trends to be easily identified as well as the ability to easily pull up the original records comprising an event of interest. The cube was built using SQL Server Analysis Services (SSAS), with the interface to the cube provided by Tableau. This software infrastructure was supported by a novel hardware architecture comprising a Netezza TwinFin® for the underlying data warehouse and a cube server with a FusionIO drive hosting the data cube. We evaluated this environment on a month’s worth of artificial, but realistic, data using multiple queries provided by our cyber analysts. As our results indicate, OLAP technology has progressed to the point where it is in a unique position to provide novel insights to cyber analysts, as long as it is supported by an appropriate data intensive architecture.
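
    The core idea of the prototype above is a data cube that rolls network records up along a few dimensions while still letting the analyst drill down to the raw rows behind any cell. The toy sketch below reproduces that roll-up/drill-down pattern with a pandas pivot table; the column names and values are hypothetical, and this is not the SSAS/Tableau/Netezza stack used in the paper.

    ```python
    import pandas as pd

    # Synthetic flow records standing in for a month of network traffic (hypothetical columns)
    flows = pd.DataFrame({
        "day":      ["2011-11-01", "2011-11-01", "2011-11-02", "2011-11-02"],
        "protocol": ["tcp", "udp", "tcp", "tcp"],
        "src":      ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3"],
        "bytes":    [1200, 300, 48000, 150],
    })

    # Roll-up: one summary cell per (day, protocol), the OLAP-cube view
    cube = flows.pivot_table(index="day", columns="protocol",
                             values="bytes", aggfunc="sum", fill_value=0)
    print(cube)

    # Drill-down: recover the original records behind one suspicious cell
    detail = flows[(flows.day == "2011-11-02") & (flows.protocol == "tcp")]
    print(detail)
    ```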

  18. Ordering states with various coherence measures

    Science.gov (United States)

    Yang, Long-Mei; Chen, Bin; Fei, Shao-Ming; Wang, Zhi-Xi

    2018-04-01

    Quantum coherence is one of the most significant concepts in quantum physics. Ordering states with various coherence measures is an intriguing task in the quantification theory of coherence. In this paper, we study this problem by use of four important coherence measures—the l_1 norm of coherence, the relative entropy of coherence, the geometric measure of coherence and the modified trace distance measure of coherence. We show that each pair of these measures gives a different ordering of qudit states when d≥3. However, for single-qubit states, the l_1 norm of coherence and the geometric coherence provide the same ordering. We also show that the relative entropy of coherence and the geometric coherence give different orderings for single-qubit states. Then we partially answer the open question proposed in Liu et al. (Quantum Inf Process 15:4189, 2016) of whether all the coherence measures give a different ordering of states.
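
    Two of the measures compared above have simple closed forms in a fixed reference basis: the l_1 norm of coherence is the sum of the absolute values of the off-diagonal density-matrix entries, and the relative entropy of coherence is S(ρ_diag) − S(ρ), where S is the von Neumann entropy. A small numerical sketch (computational basis assumed) is:

    ```python
    import numpy as np

    def von_neumann_entropy(rho):
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]                 # drop numerically zero eigenvalues
        return float(-np.sum(evals * np.log2(evals)))

    def l1_coherence(rho):
        # Sum of |rho_ij| over i != j
        return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

    def relative_entropy_coherence(rho):
        rho_diag = np.diag(np.diag(rho))             # dephased state in the reference basis
        return von_neumann_entropy(rho_diag) - von_neumann_entropy(rho)

    # Example: the maximally coherent qubit state (|0> + |1>)/sqrt(2)
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    print(l1_coherence(rho), relative_entropy_coherence(rho))   # both evaluate to 1
    ```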

  19. Interfaces habladas

    Directory of Open Access Journals (Sweden)

    María Teresa Soto Sanfiel

    2012-04-01

    Full Text Available This article describes and reflects on the phenomenon of spoken interfaces (Interfaces habladas, IH) from various points of view and levels of analysis. The text was conceived with the specific objectives of: 1) providing a panoramic view of aspects of the production and communicative consumption of IH; 2) offering recommendations for their creation and effective use; and 3) drawing attention to their proliferation and inspiring their study from a communication perspective. Despite the growing presence of IH in our everyday lives, there is a lack of texts that characterize and analyze them in terms of their communicative aspects. The work is relevant because the phenomenon represents a change with respect to earlier communicative stages, with consequences for users' intellectual and emotional conceptions. The proliferation of spoken interfaces opens us up to new communicative realities: we talk with machines.

  20. Scalable polylithic on-package integratable apparatus and method

    Energy Technology Data Exchange (ETDEWEB)

    Khare, Surhud; Somasekhar, Dinesh; Borkar, Shekhar Y.

    2017-12-05

    Described is an apparatus which comprises: a first die including: a processing core; a crossbar switch coupled to the processing core; and a first edge interface coupled to the crossbar switch; and a second die including: a first edge interface positioned at a periphery of the second die and coupled to the first edge interface of the first die, wherein the first edge interface of the first die and the first edge interface of the second die are positioned across each other; a clock synchronization circuit coupled to the second edge interface; and a memory interface coupled to the clock synchronization circuit.

  1. Extending the POSIX I/O interface: a parallel file system perspective.

    Energy Technology Data Exchange (ETDEWEB)

    Vilayannur, M.; Lang, S.; Ross, R.; Klundt, R.; Ward, L.; Mathematics and Computer Science; VMWare, Inc.; SNL

    2008-12-11

    The POSIX interface does not lend itself well to enabling good performance for high-end applications. Extensions are needed in the POSIX I/O interface so that high-concurrency HPC applications running on top of parallel file systems perform well. This paper presents the rationale, design, and evaluation of a reference implementation of a subset of the POSIX I/O interfaces on a widely used parallel file system (PVFS) on clusters. Experimental results on a set of micro-benchmarks confirm that the extensions to the POSIX interface greatly improve scalability and performance.
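
    For context, the access pattern these extensions target is many processes performing positioned, non-overlapping I/O on one shared file under strict POSIX consistency and durability semantics. The sketch below shows only that baseline pattern with standard POSIX calls (via Python's os wrappers, Unix only); the file name and sizes are arbitrary, and none of the PVFS-specific extensions from the paper are used.

    ```python
    import os

    PATH = "shared.dat"                 # hypothetical output file
    RANKS, BLOCK = 4, 4096              # pretend 4 ranks each own one 4 KiB block

    fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o644)
    for rank in range(RANKS):           # in a real HPC job, each rank issues only its own write
        payload = bytes([rank]) * BLOCK
        os.pwrite(fd, payload, rank * BLOCK)   # positioned write: no shared file offset to contend on
    os.fsync(fd)                        # strict durability; one of the costs the extensions relax
    os.close(fd)
    print("file size:", os.path.getsize(PATH), "bytes")   # RANKS * BLOCK
    ```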

  2. A coherent Ising machine for 2000-node optimization problems

    Science.gov (United States)

    Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L.; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki

    2016-11-01

    The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
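
    The benchmark above maps MAX-CUT onto an Ising ground-state search and compares the optical machine against simulated annealing. As a point of reference, a minimal classical simulated-annealing solver for MAX-CUT on a random graph (illustrative only; graph size, cooling schedule and seed are arbitrary choices) looks like:

    ```python
    import math
    import random

    def max_cut_sa(n=50, edge_prob=0.5, steps=20000, seed=1):
        rng = random.Random(seed)
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if rng.random() < edge_prob]
        adj = [[] for _ in range(n)]
        for i, j in edges:
            adj[i].append(j)
            adj[j].append(i)
        spins = [rng.choice((-1, 1)) for _ in range(n)]

        def cut_value(sp):
            # Each edge whose endpoints carry opposite spins contributes 1 to the cut
            return sum((1 - sp[i] * sp[j]) // 2 for i, j in edges)

        temp = 2.0
        for _ in range(steps):
            k = rng.randrange(n)
            # Ising energy E = sum over edges of s_i*s_j; flipping spin k changes E by delta
            delta = -2 * spins[k] * sum(spins[j] for j in adj[k])
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                spins[k] = -spins[k]
            temp = max(0.01, temp * 0.9995)  # geometric cooling schedule
        return cut_value(spins)

    print("cut size found:", max_cut_sa())
    ```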

  3. Geometry of spin coherent states

    Science.gov (United States)

    Chryssomalakos, C.; Guzmán-González, E.; Serrano-Ensástiga, E.

    2018-04-01

    Spin states of maximal projection along some direction in space are called (spin) coherent, and are, in many respects, the ‘most classical’ available. For any spin s, the spin coherent states form a 2-sphere in the projective Hilbert space

  4. Damping of Coherent oscillations

    CERN Document Server

    Vos, L

    1996-01-01

    Damping of coherent oscillations by feedback is straightforward in principle. It has been a vital ingredient for the safe operation of accelerators for a long time. The increasing dimensions and beam intensities of the new generation of hadron colliders impose unprecedented demands on the performance of future systems. The arguments leading to the specification of a transverse feedback system for the CERN SPS in its role as LHC injector and for the LHC collider itself are developed to illustrate this. The preservation of the transverse emittance is the guiding principle during this exercise, keeping in mind the hostile environment which comprises: transverse impedance bent on developing coupled bunch instabilities, injection errors, unwanted transverse excitation, unavoidable tune spreads and noise in the damping loop.

  5. Quantum information and coherence

    CERN Document Server

    Öhberg, Patrik

    2014-01-01

    This book offers an introduction to ten key topics in quantum information science and quantum coherent phenomena, aimed at graduate-student level. The chapters cover some of the most recent developments in this dynamic research field where theoretical and experimental physics, combined with computer science, provide a fascinating arena for groundbreaking new concepts in information processing. The book addresses both the theoretical and experimental aspects of the subject, and clearly demonstrates how progress in experimental techniques has stimulated a great deal of theoretical effort and vice versa. Experiments are shifting from simply preparing and measuring quantum states to controlling and manipulating them, and the book outlines how the first real applications, notably quantum key distribution for secure communication, are starting to emerge. The chapters cover quantum retrodiction, ultracold quantum gases in optical lattices, optomechanics, quantum algorithms, quantum key distribution, quantum cont...

  6. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has...

  7. Scalable storage for a DBMS using transparent distribution

    NARCIS (Netherlands)

    J.S. Karlsson; M.L. Kersten (Martin)

    1997-01-01

    textabstractScalable Distributed Data Structures (SDDSs) provide a self-managing and self-organizing data storage of potentially unbounded size. This stands in contrast to common distribution schemas deployed in conventional distributed DBMS. SDDSs, however, have mostly been used in synthetic

  8. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman; Yokota, Rio; Ahmadia, Aron

    2012-01-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach

  9. Cascaded column generation for scalable predictive demand side management

    NARCIS (Netherlands)

    Toersche, Hermen; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2014-01-01

    We propose a nested Dantzig-Wolfe decomposition, combined with dynamic programming, for the distributed scheduling of a large heterogeneous fleet of residential appliances with nonlinear behavior. A cascaded column generation approach gives a scalable optimization strategy, provided that the problem

  10. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    Full Text Available This paper addresses the problem of scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  11. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms have been proposed to solve this problem, but at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  12. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  13. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide. As a matter of fact smart meters produce large volumes of data. Thus, it is important for smart meter data management and analytics systems to process petabytes of data. Benchmarking and testing of these systems require scalable data, however, it can ...

  14. Scalability and efficiency of genetic algorithms for geometrical applications

    NARCIS (Netherlands)

    Dijk, van S.F.; Thierens, D.; Berg, de M.; Schoenauer, M.

    2000-01-01

    We study the scalability and efficiency of a GA that we developed earlier to solve the practical cartographic problem of labeling a map with point features. We argue that the special characteristics of our GA make it fit in well with theoretical models predicting the optimal population size

  15. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/O's. The low

  16. A Massively Scalable Architecture for Instant Messaging & Presence

    NARCIS (Netherlands)

    Schippers, Jorrit; Remke, Anne Katharina Ingrid; Punt, Henk; Wegdam, M.; Haverkort, Boudewijn R.H.M.; Thomas, N.; Bradley, J.; Knottenbelt, W.; Dingle, N.; Harder, U.

    2010-01-01

    This paper analyzes the scalability of Instant Messaging & Presence (IM&P) architectures. We take a queueing-based modelling and analysis approach to find the bottlenecks of the current IM&P architecture at the Dutch social network Hyves, as well as of alternative architectures. We use the

  17. Adolescent sexuality education: An appraisal of some scalable ...

    African Journals Online (AJOL)

    Adolescent sexuality education: An appraisal of some scalable interventions for the Nigerian context. VC Pam. Abstract. Most issues around sexual intercourse are highly sensitive topics in Nigeria. Despite the disturbingly high adolescent HIV prevalence and teenage pregnancy rate in Nigeria, sexuality education is ...

  18. Scalable multifunction RF system concepts for joint operations

    NARCIS (Netherlands)

    Otten, M.P.G.; Wit, J.J.M. de; Smits, F.M.A.; Rossum, W.L. van; Huizing, A.

    2010-01-01

    RF systems based on modular architectures have the potential of better re-use of technology, decreasing development time, and decreasing life cycle cost. Moreover, modular architectures provide scalability, allowing low cost upgrades and adaptability to different platforms. To achieve maximum

  19. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  20. Integrated coherent matter wave circuits

    International Nuclear Information System (INIS)

    Ryu, C.; Boshier, M. G.

    2015-01-01

    An integrated coherent matter wave circuit is a single device, analogous to an integrated optical circuit, in which coherent de Broglie waves are created and then launched into waveguides where they can be switched, divided, recombined, and detected as they propagate. Applications of such circuits include guided atom interferometers, atomtronic circuits, and precisely controlled delivery of atoms. We report experiments demonstrating integrated circuits for guided coherent matter waves. The circuit elements are created with the painted potential technique, a form of time-averaged optical dipole potential in which a rapidly moving, tightly focused laser beam exerts forces on atoms through their electric polarizability. Moreover, the source of coherent matter waves is a Bose-Einstein condensate (BEC). Finally, we launch BECs into painted waveguides that guide them around bends and form switches, phase coherent beamsplitters, and closed circuits. These are the basic elements that are needed to engineer arbitrarily complex matter wave circuitry

  1. Oceanotron, Scalable Server for Marine Observations

    Science.gov (United States)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, more generally, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OpeNDAP, ...), the server is designed to manage plugins: - StorageUnits: which enable reading of specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format). - FrontDesks: which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OpenDAP). In between, a third type of plugin may be inserted: - TransformationUnits: which enable ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to meters under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron frontdesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean business expertise in software development without being indentured to

  2. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite that has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to have a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data that was collected showed that there were distinct differences in the gait dynamics. The data was used to perform the Combined Gait Asymmetry Metric (CGAM), where the scores revealed that the overall asymmetry of the gait on the Ossur Total Knee was more asymmetric than the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion that caused a large step time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements in adapting to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to emulate the dynamics of the subject better. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  3. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed; Alouini, Mohamed-Slim

    2016-01-01

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation

  4. PyClaw: Accessible, Extensible, Scalable Tools for Wave Propagation Problems

    KAUST Repository

    Ketcheson, David I.

    2012-08-15

    Development of scientific software involves tradeoffs between ease of use, generality, and performance. We describe the design of a general hyperbolic PDE solver that can be operated with the convenience of MATLAB yet achieves efficiency near that of hand-coded Fortran and scales to the largest supercomputers. This is achieved by using Python for most of the code while employing automatically wrapped Fortran kernels for computationally intensive routines, and using Python bindings to interface with a parallel computing library and other numerical packages. The software described here is PyClaw, a Python-based structured grid solver for general systems of hyperbolic PDEs [K. T. Mandli et al., PyClaw Software, Version 1.0, http://numerics.kaust.edu.sa/pyclaw/ (2011)]. PyClaw provides a powerful and intuitive interface to the algorithms of the existing Fortran codes Clawpack and SharpClaw, simplifying code development and use while providing massive parallelism and scalable solvers via the PETSc library. The package is further augmented by use of PyWENO for generation of efficient high-order weighted essentially nonoscillatory reconstruction code. The simplicity, capability, and performance of this approach are demonstrated through application to example problems in shallow water flow, compressible flow, and elasticity.
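
    To make concrete what the wrapped Fortran kernels compute, the sketch below implements a first-order upwind finite-volume step for the 1D linear advection equation q_t + a q_x = 0 on a periodic structured grid. It deliberately avoids the PyClaw API itself (whose exact constructors vary between releases) and only illustrates the class of hyperbolic solvers the package exposes; grid size, time step and initial pulse are arbitrary choices.

    ```python
    import numpy as np

    def upwind_advection(q, a, dx, dt, steps):
        """Advance q_t + a*q_x = 0 with first-order upwind (a > 0), periodic boundaries."""
        nu = a * dt / dx                       # CFL number; stability requires nu <= 1
        for _ in range(steps):
            q = q - nu * (q - np.roll(q, 1))   # q_i^{n+1} = q_i - nu*(q_i - q_{i-1})
        return q

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    dx = x[1] - x[0]
    q0 = np.exp(-200.0 * (x - 0.3) ** 2)       # Gaussian pulse initial condition
    q = upwind_advection(q0, a=1.0, dx=dx, dt=0.4 * dx, steps=500)
    print("peak after advection:", q.max())
    ```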

  5. The Front-End Concentrator card for the RD51 Scalable Readout System

    International Nuclear Information System (INIS)

    Toledo, J; Esteve, R; Monzó, J M; Tarazona, A; Muller, H; Martoiu, S

    2011-01-01

    Conventional readout systems exist in many variants since the usual approach is to build readout electronics for one given type of detector. The Scalable Readout System (SRS) developed within the RD51 collaboration relaxes this situation considerably by providing a choice of frontends which are connected over a customizable interface to a common SRS DAQ architecture. This allows sharing development and production costs among a large base of users as well as support from a wide base of developers. The Front-end Concentrator card (FEC), a RD51 common project between CERN and the NEXT Collaboration, is a reconfigurable interface between the SRS online system and a wide range of frontends. This is accomplished by using application-specific adapter cards between the FEC and the frontends. The ensemble (FEC and adapter card are edge mounted) forms a 6U × 220 mm Eurocard combo that fits on a 19'' subchassis. Adapter cards exist already for the first applications and more are in development.

  6. PyClaw: Accessible, Extensible, Scalable Tools for Wave Propagation Problems

    KAUST Repository

    Ketcheson, David I.; Mandli, Kyle; Ahmadia, Aron; Alghamdi, Amal; de Luna, Manuel Quezada; Parsani, Matteo; Knepley, Matthew G.; Emmett, Matthew

    2012-01-01

    Development of scientific software involves tradeoffs between ease of use, generality, and performance. We describe the design of a general hyperbolic PDE solver that can be operated with the convenience of MATLAB yet achieves efficiency near that of hand-coded Fortran and scales to the largest supercomputers. This is achieved by using Python for most of the code while employing automatically wrapped Fortran kernels for computationally intensive routines, and using Python bindings to interface with a parallel computing library and other numerical packages. The software described here is PyClaw, a Python-based structured grid solver for general systems of hyperbolic PDEs [K. T. Mandli et al., PyClaw Software, Version 1.0, http://numerics.kaust.edu.sa/pyclaw/ (2011)]. PyClaw provides a powerful and intuitive interface to the algorithms of the existing Fortran codes Clawpack and SharpClaw, simplifying code development and use while providing massive parallelism and scalable solvers via the PETSc library. The package is further augmented by use of PyWENO for generation of efficient high-order weighted essentially nonoscillatory reconstruction code. The simplicity, capability, and performance of this approach are demonstrated through application to example problems in shallow water flow, compressible flow, and elasticity.

  7. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of the multimedia scalable video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper is focused on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SV...

  8. International workshop on phase retrieval and coherent scattering. Coherence 2005

    International Nuclear Information System (INIS)

    Nugent, K.A.; Fienup, J.R.; Van Dyck, D.; Van Aert, S.; Weitkamp, T.; Diaz, A.; Pfeiffer, F.; Cloetens, P.; Stampanoni, M.; Bunk, O.; David, C.; Bronnikov, A.V.; Shen, Q.; Xiao, X.; Gureyev, T.E.; Nesterets, Ya.I.; Paganin, D.M.; Wilkins, S.W.; Mokso, R.; Cloetens, P.; Ludwig, W.; Hignette, O.; Maire, E.; Faulkner, H.M.L.; Rodenburg, J.M.; Wu, X.; Liu, H.; Grubel, G.; Ludwig, K.F.; Livet, F.; Bley, F.; Simon, J.P.; Caudron, R.; Le Bolloc'h, D.; Moussaid, A.; Gutt, C.; Sprung, M.; Madsen, A.; Tolan, M.; Sinha, S.K.; Scheffold, F.; Schurtenberger, P.; Robert, A.; Madsen, A.; Falus, P.; Borthwick, M.A.; Mochrie, S.G.J.; Livet, F.; Sutton, M.D.; Ehrburger-Dolle, F.; Bley, F.; Geissler, E.; Sikharulidze, I.; Jeu, W.H. de; Lurio, L.B.; Hu, X.; Jiao, X.; Jiang, Z.; Lurio, L.B.; Hu, X.; Jiao, X.; Jiang, Z.; Naryanan, S.; Sinha, S.K.; Lal, J.; Naryanan, S.; Sinha, S.K.; Lal, J.; Robinson, I.K.; Chapman, H.N.; Barty, A.; Beetz, T.; Cui, C.; Hajdu, J.; Hau-Riege, S.P.; He, H.; Stadler, L.M.; Sepiol, B.; Harder, R.; Robinson, I.K.; Zontone, F.; Vogl, G.; Howells, M.; London, R.; Marchesini, S.; Shapiro, D.; Spence, J.C.H.; Weierstall, U.; Eisebitt, S.; Shapiro, D.; Lima, E.; Elser, V.; Howells, M.R.; Huang, X.; Jacobsen, C.; Kirz, J.; Miao, H.; Neiman, A.; Sayre, D.; Thibault, P.; Vartanyants, I.A.; Robinson, I.K.; Onken, J.D.; Pfeifer, M.A.; Williams, G.J.; Pfeiffer, F.; Metzger, H.; Zhong, Z.; Bauer, G.; Nishino, Y.; Miao, J.; Kohmura, Y.; Yamamoto, M.; Takahashi, Y.; Koike, K.; Ebisuzaki, T.; Ishikawa, T.; Spence, J.C.H.; Doak, B.

    2005-01-01

    The contributions of the participants have been organized into 3 topics: 1) phase retrieval methods, 2) X-ray photon correlation spectroscopy, and 3) coherent diffraction imaging. This document gathers the abstracts of the presentations and of the posters

  9. International workshop on phase retrieval and coherent scattering. Coherence 2005

    Energy Technology Data Exchange (ETDEWEB)

    Nugent, K.A.; Fienup, J.R.; Van Dyck, D.; Van Aert, S.; Weitkamp, T.; Diaz, A.; Pfeiffer, F.; Cloetens, P.; Stampanoni, M.; Bunk, O.; David, C.; Bronnikov, A.V.; Shen, Q.; Xiao, X.; Gureyev, T.E.; Nesterets, Ya.I.; Paganin, D.M.; Wilkins, S.W.; Mokso, R.; Cloetens, P.; Ludwig, W.; Hignette, O.; Maire, E.; Faulkner, H.M.L.; Rodenburg, J.M.; Wu, X.; Liu, H.; Grubel, G.; Ludwig, K.F.; Livet, F.; Bley, F.; Simon, J.P.; Caudron, R.; Le Bolloc' h, D.; Moussaid, A.; Gutt, C.; Sprung, M.; Madsen, A.; Tolan, M.; Sinha, S.K.; Scheffold, F.; Schurtenberger, P.; Robert, A.; Madsen, A.; Falus, P.; Borthwick, M.A.; Mochrie, S.G.J.; Livet, F.; Sutton, M.D.; Ehrburger-Dolle, F.; Bley, F.; Geissler, E.; Sikharulidze, I.; Jeu, W.H. de; Lurio, L.B.; Hu, X.; Jiao, X.; Jiang, Z.; Lurio, L.B.; Hu, X.; Jiao, X.; Jiang, Z.; Naryanan, S.; Sinha, S.K.; Lal, J.; Naryanan, S.; Sinha, S.K.; Lal, J.; Robinson, I.K.; Chapman, H.N.; Barty, A.; Beetz, T.; Cui, C.; Hajdu, J.; Hau-Riege, S.P.; He, H.; Stadler, L.M.; Sepiol, B.; Harder, R.; Robinson, I.K.; Zontone, F.; Vogl, G.; Howells, M.; London, R.; Marchesini, S.; Shapiro, D.; Spence, J.C.H.; Weierstall, U.; Eisebitt, S.; Shapiro, D.; Lima, E.; Elser, V.; Howells, M.R.; Huang, X.; Jacobsen, C.; Kirz, J.; Miao, H.; Neiman, A.; Sayre, D.; Thibault, P.; Vartanyants, I.A.; Robinson, I.K.; Onken, J.D.; Pfeifer, M.A.; Williams, G.J.; Pfeiffer, F.; Metzger, H.; Zhong, Z.; Bauer, G.; Nishino, Y.; Miao, J.; Kohmura, Y.; Yamamoto, M.; Takahashi, Y.; Koike, K.; Ebisuzaki, T.; Ishikawa, T.; Spence, J.C.H.; Doak, B

    2005-07-01

    The contributions of the participants have been organized into 3 topics: 1) phase retrieval methods, 2) X-ray photon correlation spectroscopy, and 3) coherent diffraction imaging. This document gathers the abstracts of the presentations and of the posters.

  10. Perturbative coherence in field theory

    International Nuclear Information System (INIS)

    Aldrovandi, R.; Kraenkel, R.A.

    1987-01-01

    A general condition for coherent quantization by perturbative methods is given, because the basic field equations of a field theory are not always derivable from a Lagrangian. It is seen that non-Lagrangian models may have well-defined vertices, provided they satisfy what the authors call the 'coherence condition', which is less stringent than the condition for the existence of a Lagrangian. They note that Lagrangian theories are perturbatively coherent, in the sense that they have well-defined vertices, and that they satisfy that condition automatically. (G.D.F.) [pt

  11. Models of coherent exciton condensation

    International Nuclear Information System (INIS)

    Littlewood, P B; Eastham, P R; Keeling, J M J; Marchetti, F M; Simons, B D; Szymanska, M H

    2004-01-01

    That excitons in solids might condense into a phase-coherent ground state was proposed about 40 years ago, and has been attracting experimental and theoretical attention ever since. Although experimental confirmation has been hard to come by, the concepts released by this phenomenon have been widely influential. This tutorial review discusses general aspects of the theory of exciton and polariton condensates, focusing on the reasons for coherence in the ground state wavefunction, the BCS to Bose crossover(s) for excitons and for polaritons, and the relationship of the coherent condensates to standard lasers

  12. Models of coherent exciton condensation

    Energy Technology Data Exchange (ETDEWEB)

    Littlewood, P B [Theory of Condensed Matter, Cavendish Laboratory, Cambridge CB3 0HE (United Kingdom); Eastham, P R [Theory of Condensed Matter, Cavendish Laboratory, Cambridge CB3 0HE (United Kingdom); Keeling, J M J [Theory of Condensed Matter, Cavendish Laboratory, Cambridge CB3 0HE (United Kingdom); Marchetti, F M [Theory of Condensed Matter, Cavendish Laboratory, Cambridge CB3 0HE (United Kingdom); Simons, B D [Theory of Condensed Matter, Cavendish Laboratory, Cambridge CB3 0HE (United Kingdom); Szymanska, M H [Theory of Condensed Matter, Cavendish Laboratory, Cambridge CB3 0HE (United Kingdom)

    2004-09-08

    That excitons in solids might condense into a phase-coherent ground state was proposed about 40 years ago, and has been attracting experimental and theoretical attention ever since. Although experimental confirmation has been hard to come by, the concepts released by this phenomenon have been widely influential. This tutorial review discusses general aspects of the theory of exciton and polariton condensates, focusing on the reasons for coherence in the ground state wavefunction, the BCS to Bose crossover(s) for excitons and for polaritons, and the relationship of the coherent condensates to standard lasers.

  13. Optimally cloned binary coherent states

    Science.gov (United States)

    Müller, C. R.; Leuchs, G.; Marquardt, Ch.; Andersen, U. L.

    2017-10-01

    Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous-variable states to map binary phase-shift keyed coherent states onto the Bloch sphere and to derive their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal cloner.
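
    The qubit identification used above rests on the fact that the two BPSK states |α⟩ and |−α⟩ span a two-dimensional subspace while overlapping by ⟨α|−α⟩ = exp(−2|α|²). The sketch below checks that overlap numerically in a truncated Fock basis and builds an orthonormal qubit basis from the even/odd cat combinations; the truncation dimension and the (real) value of α are arbitrary choices.

    ```python
    import numpy as np
    from math import exp, factorial, sqrt

    def coherent(alpha, dim=30):
        """Coherent state |alpha> in a Fock basis truncated at dim levels (real alpha)."""
        amps = np.array([alpha**k / sqrt(factorial(k)) for k in range(dim)])
        return exp(-alpha**2 / 2) * amps

    alpha = 0.8
    plus, minus = coherent(alpha), coherent(-alpha)
    print("overlap <a|-a>:", plus @ minus, " expected:", exp(-2 * alpha**2))

    # Orthonormal qubit basis from the even/odd "cat" combinations
    even = plus + minus
    odd = plus - minus
    even /= np.linalg.norm(even)
    odd /= np.linalg.norm(odd)
    print("orthogonality check:", abs(even @ odd))   # ~0: a valid two-dimensional basis
    ```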

  14. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    Science.gov (United States)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.
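
    The architecture above organizes WAMI frames as spatiotemporal tiles served from a bounded cache, so that roam, zoom and time-step operations reuse recently decoded tiles. A minimal sketch of such a cache follows; the (frame, level, x, y) key layout, the LRU policy and the loader callback are assumptions for illustration, not KOLAM's actual data structures.

    ```python
    from collections import OrderedDict

    class TileCache:
        """Bounded LRU cache for image tiles keyed by (frame, pyramid level, x, y)."""

        def __init__(self, capacity=1024):
            self.capacity = capacity
            self._tiles = OrderedDict()          # key -> tile payload, in LRU order

        def get(self, frame, level, tx, ty, loader):
            key = (frame, level, tx, ty)
            if key in self._tiles:
                self._tiles.move_to_end(key)     # cache hit: mark as most recently used
                return self._tiles[key]
            tile = loader(*key)                  # cache miss: decode from disk or network
            self._tiles[key] = tile
            if len(self._tiles) > self.capacity:
                self._tiles.popitem(last=False)  # evict the least recently used tile
            return tile

    # Stand-in loader; a real one would decode WAMI imagery for the requested tile
    cache = TileCache(capacity=4)
    loader = lambda frame, level, tx, ty: f"tile({frame},{level},{tx},{ty})"
    print(cache.get(0, 3, 10, 12, loader))
    ```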

  15. SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures

    Science.gov (United States)

    Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani

    2018-01-01

    Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9m-by-10m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power efficient to sustain daily use by turning off the UWB transceiver, when a user’s wrist is stationary. PMID:29683151

  16. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    International Nuclear Information System (INIS)

    Quinto, M; Cafagna, F; Fiergolski, A; Radicioni, E

    2013-01-01

    The main goals of the TOTEM Experiment at the LHC are the measurements of the elastic and total p-p cross sections and the studies of the diffractive dissociation processes. At LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements for the Data Acquisition Systems (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus system. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, data is retransmitted to the VME interface and to another mezzanine card plugged in the FED module. The VME bus maximum bandwidth limits the maximum first level trigger (L1A) to 1 kHz rate. In order to get rid of the VME bottleneck and improve scalability and the overall capabilities of the DAQ, a new system was designed and constructed based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increasing data filtering, implementing a second-level trigger event selection based on hardware pattern recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The obtained results and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality

  17. Radiologic image communication and archive service: a secure, scalable, shared approach

    Science.gov (United States)

    Fellingham, Linda L.; Kohli, Jagdish C.

    1995-11-01

    The Radiologic Image Communication and Archive (RICA) service is designed to provide a shared archive for medical images to the widest possible audience of customers. Images are acquired from a number of different modalities, each available from many different vendors. Images are acquired digitally from those modalities which support direct digital output and by digitizing films for projection x-ray exams. The RICA Central Archive receives standard DICOM 3.0 messages and data streams from the medical imaging devices at customer institutions over the public telecommunication network. RICA represents a completely scalable resource. The user pays only for what he is using today with the full assurance that as the volume of image data that he wishes to send to the archive increases, the capacity will be there to accept it. Providing this seamless scalability imposes several requirements on the RICA architecture: (1) RICA must support the full array of transport services. (2) The Archive Interface must scale cost-effectively to support local networks that range from the very small (one x-ray digitizer in a medical clinic) to the very large and complex (a large hospital with several CTs, MRs, Nuclear medicine devices, ultrasound machines, CRs, and x-ray digitizers). (3) The Archive Server must scale cost-effectively to support rapidly increasing demands for service providing storage for and access to millions of patients and hundreds of millions of images. The architecture must support the incorporation of improved technology as it becomes available to maintain performance and remain cost-effective as demand rises.

  18. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires

    Science.gov (United States)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.

    2013-12-01

    Recent studies confirm that climate change will cause wildfires to increase in frequency and severity in the coming decades, especially for California and much of the North American West. The most critical sustainability issue in the midst of these ever-changing dynamics is how to achieve a new social-ecological equilibrium of this fire ecology. Wildfire wind speeds and directions change in an instant, and first responders can only be effective when they take action as quickly as the conditions change. To deliver information needed for sustainable policy and management in this dynamically changing fire regime, we must capture these details to understand the environmental processes. We are building an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The WIFIRE integrated CI system supports social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change. Networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, are integrated with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns to predict a wildfire's Rate of Spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters architects and implements CI pathways that enable joint innovation for wildfire management. Scientific workflows are used as an integrative distributed programming model and simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill-levels via specialized web interfaces and user-specified alerts for environmental events broadcast to receivers before

  19. BAMSI: a multi-cloud service for scalable distributed filtering of massive genome data.

    Science.gov (United States)

    Ausmees, Kristiina; John, Aji; Toor, Salman Z; Hellander, Andreas; Nettelblad, Carl

    2018-06-26

    The advent of next-generation sequencing (NGS) has made whole-genome sequencing of cohorts of individuals a reality. Primary datasets of raw or aligned reads of this sort can get very large. For scientific questions where curated called variants are not sufficient, the sheer size of the datasets makes analysis prohibitively expensive. In order to make re-analysis of such data feasible without the need to have access to a large-scale computing facility, we have developed a highly scalable, storage-agnostic framework, an associated API and an easy-to-use web user interface to execute custom filters on large genomic datasets. We present BAMSI, a Software-as-a-Service (SaaS) solution for filtering of the 1000 Genomes phase 3 set of aligned reads, with the possibility of extension and customization to other sets of files. Unique to our solution is the capability of simultaneously utilizing many different mirrors of the data to increase the speed of the analysis. In particular, if the data is available in private or public clouds - an increasingly common scenario for both academic and commercial cloud providers - our framework allows for seamless deployment of filtering workers close to data. We show results indicating that such a setup improves the horizontal scalability of the system, and present a possible use case of the framework by performing an analysis of structural variation in the 1000 Genomes data set. BAMSI constitutes a framework for efficient filtering of large genomic data sets that is flexible in the use of compute as well as storage resources. The data resulting from the filter is assumed to be greatly reduced in size, and can easily be downloaded or routed into e.g. a Hadoop cluster for subsequent interactive analysis using Hive, Spark or similar tools. In this respect, our framework also suggests a general model for making very large datasets of high scientific value more accessible by offering the possibility for organizations to share the cost of
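
    A custom filter of the kind BAMSI executes over aligned-read (BAM) data might, for example, keep only well-mapped, non-duplicate reads overlapping a region of interest. The sketch below expresses such a filter with the pysam library on a single local, indexed file; the paths, region and quality threshold are hypothetical, and BAMSI itself would run this logic on distributed workers placed close to the data mirrors.

    ```python
    import pysam

    IN_BAM, OUT_BAM = "sample.bam", "filtered.bam"   # hypothetical paths; input must have an index

    with pysam.AlignmentFile(IN_BAM, "rb") as src, \
         pysam.AlignmentFile(OUT_BAM, "wb", template=src) as dst:
        # Hypothetical region of interest and mapping-quality cutoff
        for read in src.fetch("chr20", 1_000_000, 2_000_000):
            if read.is_unmapped or read.is_duplicate:
                continue
            if read.mapping_quality < 30:
                continue
            dst.write(read)
    ```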

  20. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science.

    Science.gov (United States)

    Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H

    2014-05-28

    Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N^3) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well. Scalability beyond 10,000 CPU cores for problem
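
    The problem class ELPA targets is the dense (generalized) symmetric or Hermitian eigenproblem A x = λ B x, whose cost grows as O(N^3). The serial sketch below solves a small random instance with SciPy's LAPACK-backed eigh purely to illustrate that problem class; ELPA's contribution is solving the same problem distributed over MPI with a ScaLAPACK-compatible block-cyclic matrix layout. The matrix size and random construction are arbitrary.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    n = 500
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                    # symmetric "Hamiltonian" matrix
    B = rng.standard_normal((n, n))
    B = B @ B.T + n * np.eye(n)          # symmetric positive-definite "overlap" matrix

    evals, evecs = eigh(A, B)            # generalized eigenproblem A x = lambda B x, O(n^3) cost
    print("lowest eigenvalues:", evals[:5])
    ```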

  1. Coherence of light. 2. ed.

    International Nuclear Information System (INIS)

    Perina, J.

    1985-01-01

    This book puts the theory of coherence of light on a rigorous mathematical footing. It deals with the classical and quantum theories and with their inter-relationships, including many results from the author's own research. Particular attention is paid to the detection of optical fields, using the correlation functions, photocount statistics and coherent state. Radiometry with light fields of arbitrary states of coherence is discussed and the coherent state methods are demonstrated by photon statistics of radiation in random and nonlinear media, using the Heisenberg-Langevin and Fokker-Planck approaches to the interaction of radiation with matter. Many experimental and theoretical results are compared. A full list of references to theoretical and experimental literature is provided. The book is intended for researchers and postgraduate students in the fields of quantum optics, quantum electronics, statistical optics, nonlinear optics, optical communication and optoelectronics. (Auth.)

  2. Spin Coherence in Semiconductor Nanostructures

    National Research Council Canada - National Science Library

    Flatte, Michael E

    2006-01-01

    ... dots, tuning of spin coherence times for electron spin, tuning of dipolar magnetic fields for nuclear spin, spontaneous spin polarization generation and new designs for spin-based teleportation and spin transistors...

  3. Soft gluon coherence at LEP

    International Nuclear Information System (INIS)

    Gaidot, A.

    1993-01-01

    After a brief overview of the experimental status on colour coherence at LEP we will focus on two recent approaches to the subject: the sub-jet multiplicities and the azimuthal correlations between pair of particles. (author)

  4. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies the four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation, • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets, • Biomedical is a vast field with select subtopics addressed from scanning methodologies to structural applications to biological applications, • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  5. Scalable quantum memory in the ultrastrong coupling regime.

    Science.gov (United States)

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate to implement the scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime, as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory, within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realize a scalable quantum random-access memory due to its fast storage and readout performances.

  6. Fast & scalable pattern transfer via block copolymer nanolithography

    DEFF Research Database (Denmark)

    Li, Tao; Wang, Zhongli; Schulte, Lars

    2015-01-01

    A fully scalable and efficient pattern transfer process based on block copolymer (BCP) self-assembling directly on various substrates is demonstrated. PS-rich and PDMS-rich poly(styrene-b-dimethylsiloxane) (PS-b-PDMS) copolymers are used to give monolayer sphere morphology after spin-casting ... on long range lateral order, including fabrication of substrates for catalysis, solar cells, sensors, ultrafiltration membranes and templating of semiconductors or metals.

  7. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer

    2013-03-01

    Full Text Available The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
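
    A minimal illustration of the RDF/SPARQL machinery referred to above is sketched below using rdflib; the ex: vocabulary and the sensor readings are invented for the example and are not the ontology or prediction models used in the paper.

```python
# Illustrative only: encode a couple of sensor observations as RDF triples and
# query them with SPARQL using rdflib. The ex: namespace is made up here.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/iot#")
g = Graph()
g.bind("ex", EX)

for sid, temp in [("sensor1", 21.5), ("sensor2", 27.3)]:
    s = EX[sid]
    g.add((s, RDF.type, EX.TemperatureSensor))
    g.add((s, EX.hasReading, Literal(temp, datatype=XSD.double)))

# The paper's scalable search replaces exhaustive sensor polling with queries
# over (predicted) sensor state; here we simply query the stored readings.
q = """
PREFIX ex: <http://example.org/iot#>
SELECT ?sensor ?value WHERE {
    ?sensor a ex:TemperatureSensor ;
            ex:hasReading ?value .
    FILTER (?value > 25.0)
}
"""
for row in g.query(q):
    print(row.sensor, row.value)
```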

  8. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
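
    The weak/strong scalability figures in such a study reduce to speedup and parallel efficiency computed from wall-clock times; the short sketch below shows the arithmetic with invented timings, not measurements from the paper.

```python
# Strong-scaling metrics of the kind reported in scalability studies; the
# wall-clock times below are invented for illustration only.
cores      = [24, 48, 96, 192, 384]
wall_clock = [1800.0, 950.0, 520.0, 310.0, 220.0]   # seconds, hypothetical

t_ref, p_ref = wall_clock[0], cores[0]
for p, t in zip(cores, wall_clock):
    speedup    = t_ref / t                 # relative to the smallest core count
    efficiency = speedup / (p / p_ref)     # ideal strong scaling gives 1.0
    print(f"{p:4d} cores  speedup {speedup:5.2f}  efficiency {efficiency:4.2f}")
```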

  9. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  10. NPTool: Towards Scalability and Reliability of Business Process Management

    Science.gov (United States)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently one important challenge in business process management is to provide, at the same time, scalability and reliability of business process executions. This difficulty becomes more accentuated when the execution control involves countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by Navigation Plan Definition Language (NPDL), a language for business processes specification that uses process algebra as formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of the NPTool showing how the process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps of NPTool include reuse of control-flow patterns and support for data flow management.

  11. Proof of Stake Blockchain: Performance and Scalability for Groupware Communications

    DEFF Research Database (Denmark)

    Spasovski, Jason; Eklund, Peter

    2017-01-01

    A blockchain is a distributed transaction ledger, a disruptive technology that creates new possibilities for digital ecosystems. The blockchain ecosystem maintains an immutable transaction record to support many types of digital services. This paper compares the performance and scalability of a web-based groupware communication application using both non-blockchain and blockchain technologies. Scalability is measured where message load is synthesized over two typical communication topologies. The first is a 1 to n network -- a typical client-server or star topology with a central vertex (server) receiving all messages from the remaining n - 1 vertices (clients). The second is a more naturally occurring scale-free network topology, where multiple communication hubs are distributed throughout the network. System performance is tested with both blockchain and non-blockchain solutions using multiple cloud computing...

  12. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  13. Scalable, full-colour and controllable chromotropic plasmonic printing

    Science.gov (United States)

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from reaching practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  14. Coherent Addressing of Individual Neutral Atoms in a 3D Optical Lattice.

    Science.gov (United States)

    Wang, Yang; Zhang, Xianli; Corcovilos, Theodore A; Kumar, Aishwarya; Weiss, David S

    2015-07-24

    We demonstrate arbitrary coherent addressing of individual neutral atoms in a 5×5×5 array formed by an optical lattice. Addressing is accomplished using rapidly reconfigurable crossed laser beams to selectively ac Stark shift target atoms, so that only target atoms are resonant with state-changing microwaves. The effect of these targeted single qubit gates on the quantum information stored in nontargeted atoms is smaller than 3×10^{-3} in state fidelity. This is an important step along the path of converting the scalability promise of neutral atoms into reality.

  15. A Scalable Heuristic for Viral Marketing Under the Tipping Model

    Science.gov (United States)

    2013-09-01

    Flixster is a social media website that allows users to share reviews and other information about cinema [35]. It was extracted in Dec. 2010. – FourSquare ... work of Reichman were developed independently. We also note that Reichman performs no experimental evaluation of the algorithm. ... other diffusion models, such as the independent cascade model [21] and evolutionary graph theory [25] as well as probabilistic variants of the

  16. A Scalable Communication Architecture for Advanced Metering Infrastructure

    OpenAIRE

    Ngo Hoang , Giang; Liquori , Luigi; Nguyen Chan , Hung

    2013-01-01

    Advanced Metering Infrastructure (AMI), seen as a foundation for overall grid modernization, is an integration of many technologies that provides an intelligent connection between consumers and system operators [ami 2008]. One of the biggest challenges that AMI faces is to collect and manage, in a scalable way, a huge amount of data from a large number of customers. In our paper, we address this challenge by introducing a mixed peer-to-peer (P2P) and client-server communication architecture for AMI in whic...

  17. Scalable Multi-group Key Management for Advanced Metering Infrastructure

    OpenAIRE

    Benmalek , Mourad; Challal , Yacine; Bouabdallah , Abdelmadjid

    2015-01-01

    International audience; Advanced Metering Infrastructure (AMI) is composed of systems and networks to incorporate changes for modernizing the electricity grid, reduce peak loads, and meet energy efficiency targets. AMI is a privileged target for security attacks, with potentially great damage to infrastructure and privacy. For this reason, Key Management has been identified as one of the most challenging topics in AMI development. In this paper, we propose a new Scalable multi-group key ...

  18. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    Directory of Open Access Journals (Sweden)

    Jacob R. Hauser

    2016-09-01

    Full Text Available 2-Cyanobenzothiazoles (CBTs) are useful building blocks for: (1) luciferin derivatives for bioluminescent imaging; and (2) handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT), which has an amine handle for straightforward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO), and discuss its advantages for scale-up over previously reported routes.

  19. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    Science.gov (United States)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves to near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, along with the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as integration techniques that were implemented. Key items to be presented are: (1) scientific algorithms and SpaceNav tools integrated into a scalable architecture: maneuver planning, parallel processing, Monte Carlo simulations, optimization algorithms, and SW application development/integration into the Google Cloud Platform; (2) Compute Engine processing: Application Engine automated processing, performance testing and performance scalability, Cloud MySQL databases and database scalability, cloud data storage, and redundancy and availability.

  20. Architectural Techniques to Enable Reliable and Scalable Memory Systems

    OpenAIRE

    Nair, Prashant J.

    2017-01-01

    High capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even bui...

  1. Coherent systems with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1980-01-01

    The basic rules of the Boolean algebra with restrictions on variables are briefly recalled. This special type of Boolean algebra allows one to handle fault trees of systems made of multistate (two or more than two states) components. Coherent systems are defined in the case of multistate components. This definition is consistent with that originally suggested by Barlow in the case of binary (two states) components. The basic properties of coherence are described and discussed. Coherent Boolean functions are also defined. It is shown that these functions are irredundant, that is they have only one base which is at the same time complete and irredundant. However, irredundant functions are not necessarily coherent. Finally a simplified algorithm for the calculation of the base of a coherent function is described. In the case that the function is not coherent, the algorithm can be used to reduce the size of the normal disjunctive form of the function. This in turn eases the application of the Nelson algorithm to calculate the complete base of the function. The simplified algorithm has been built in the computer program MUSTAFA-1. In a sample case the use of this algorithm caused a reduction of the CPU time by a factor of about 20. (orig.)

  2. Spreading of oil from protein stabilised emulsions at air/water interfaces

    NARCIS (Netherlands)

    Schokker, E.P.; Bos, M.A.; Kuijpers, A.J.; Wijnen, M.E.; Walstra, P.

    2002-01-01

    Spreading of a drop of an emulsion made with milk proteins on air/water interfaces was studied. From an unheated emulsion, all oil molecules could spread onto the air/water interface, indicating that the protein layers around the oil globules in the emulsion droplet were not coherent enough to

  3. Optical Coherence Tomography

    Directory of Open Access Journals (Sweden)

    Pier Alberto Testoni

    2007-01-01

    Full Text Available Optical coherence tomography (OCT) is an optical imaging modality that performs high-resolution, cross-sectional, subsurface tomographic imaging of the microstructure of tissues. The physical principle of OCT is similar to that of B-mode ultrasound imaging, except that it uses infrared light waves rather than acoustic waves. The in vivo resolution is 10–25 times better (about 10 µm) than with high-frequency ultrasound imaging, but the depth of penetration is limited to 1–3 mm, depending on tissue structure, depth of focus of the probe used, and pressure applied to the tissue surface. In the last decade, OCT technology has evolved from an experimental laboratory tool to a new diagnostic imaging modality with a wide spectrum of clinical applications in medical practice, including the gastrointestinal tract and pancreatico-biliary ductal system. OCT imaging from the gastrointestinal tract can be done in humans by using narrow-diameter, catheter-based probes that can be inserted through the accessory channel of either a conventional front-view endoscope, for investigating the epithelial structure of the gastrointestinal tract, or a side-view endoscope, inside a standard transparent ERCP (endoscopic retrograde cholangiopancreatography) catheter, for investigating the pancreatico-biliary ductal system. The esophagus and esophagogastric junction have been the most widely investigated organs so far; more recently, duodenum, colon, and the pancreatico-biliary ductal system have also been extensively investigated. OCT imaging of the gastrointestinal wall structure is characterized by a multiple-layer architecture that permits an accurate evaluation of the mucosa, lamina propria, muscularis mucosae, and part of the submucosa. The technique may therefore be used to identify preneoplastic conditions of the gastrointestinal tract, such as Barrett's epithelium and dysplasia, and evaluate the depth of penetration of early-stage neoplastic lesions. OCT imaging

  4. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Science.gov (United States)

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.

  5. Event metadata records as a testbed for scalable data mining

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D

    2010-01-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
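
    Because the TAG schema is fixed and relatively simple, the export to HDF5 mentioned above can be pictured as writing a structured array; the sketch below uses h5py with a toy schema that merely stands in for the real ATLAS TAG attributes.

```python
# Toy illustration of exporting fixed-schema event metadata ("TAG"-like records)
# to HDF5 with h5py; the field names below are a stand-in, not the ATLAS TAG schema.
import numpy as np
import h5py

tag_dtype = np.dtype([
    ("run_number",   np.uint32),
    ("event_number", np.uint64),
    ("n_muons",      np.uint16),
    ("missing_et",   np.float32),
])

rng = np.random.default_rng(1)
tags = np.zeros(100_000, dtype=tag_dtype)
tags["run_number"]   = 167000
tags["event_number"] = np.arange(tags.size)
tags["n_muons"]      = rng.integers(0, 4, tags.size)
tags["missing_et"]   = rng.exponential(30.0, tags.size).astype(np.float32)

with h5py.File("tags.h5", "w") as f:
    f.create_dataset("tags", data=tags, compression="gzip")

# A downstream mining tool can then scan the columnar file without HEP software.
with h5py.File("tags.h5", "r") as f:
    sel = f["tags"][...]
    print((sel["missing_et"] > 100.0).sum(), "high-MET events")
```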

  6. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|² + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
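
    The baseline that the FMM approach accelerates is the direct evaluation of pairwise repulsive forces plus spring forces along edges; a plain O(|V|² + |E|) update step of that kind is sketched below in NumPy. The paper's actual implementation instead replaces the quadratic repulsion term with an O(|V|) fast multipole evaluation via ExaFMM.

```python
# Direct O(|V|^2 + |E|) force-directed layout step, for comparison with the
# FMM-accelerated approach described above. Parameters are illustrative.
import numpy as np

def layout_step(pos, edges, k=0.1, dt=0.05):
    """One explicit update of vertex positions (pos: |V| x 2 array)."""
    n = len(pos)
    force = np.zeros_like(pos)
    # Repulsive "charged particle" forces between every vertex pair: O(|V|^2).
    for i in range(n):
        d = pos[i] - pos                          # vectors from every vertex to i
        dist2 = np.maximum((d * d).sum(axis=1), 1e-9)
        force[i] += (d / dist2[:, None]).sum(axis=0)
    # Attractive spring forces along edges: O(|E|).
    for u, v in edges:
        d = pos[v] - pos[u]
        force[u] += k * d
        force[v] -= k * d
    return pos + dt * force

rng = np.random.default_rng(2)
pos = rng.standard_normal((200, 2))
edges = [(i, (i + 1) % 200) for i in range(200)]  # a simple cycle graph
for _ in range(50):
    pos = layout_step(pos, edges)
print(pos[:3])
```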

  7. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  8. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC standard is the well-known Laplacian pyramid (LP. An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
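
    A two-layer Laplacian pyramid of the kind described above can be sketched in a few lines of NumPy/SciPy; scipy.ndimage.zoom stands in for the SVC down/upsampling filters, and no quantization is applied, so the example only illustrates the base-layer/enhancement-layer decomposition, not the coding gain of the proposed decorrelation structures.

```python
# Two-layer Laplacian-pyramid decomposition of a frame into a low-resolution
# base layer and a full-resolution enhancement residual. scipy.ndimage.zoom is
# only a stand-in for the (possibly non-biorthogonal) SVC filters.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(3)
frame = rng.random((64, 64))                 # stand-in for a luma frame

base = zoom(frame, 0.5, order=1)             # base layer at half resolution
predicted = zoom(base, 2.0, order=1)         # upsampled prediction of the frame
enhancement = frame - predicted              # enhancement-layer residual

# Decoder side: reconstruct the full-resolution frame from the two layers.
reconstructed = predicted + enhancement
print(np.allclose(reconstructed, frame))     # lossless here because nothing is quantized

# The interlayer decorrelation structures in the paper modify either the base
# layer or the prediction so that the residual carries less low-frequency energy.
```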

  9. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM, which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons and computational complexity (i.e., time and space complexity. In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.

  10. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  11. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  12. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance, maintained over a long period of time, becomes a source of ensuring business continuity by companies. An ontological construct enabling the adoption of such assumptions is a business model that has the ability to generate results in every possible market situation and, moreover, has the feature of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. Being a factor that ensures more, and more efficient, work with an increasing number of components, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. Combining an approach typical of hybrid organizations with the design and implementation of sustainable business models according to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  13. Coherent communication with continuous quantum variables

    Science.gov (United States)

    Wilde, Mark M.; Krovi, Hari; Brun, Todd A.

    2007-06-01

    The coherent bit (cobit) channel is a resource intermediate between classical and quantum communication. It produces coherent versions of teleportation and superdense coding. We extend the cobit channel to continuous variables by providing a definition of the coherent nat (conat) channel. We construct several coherent protocols that use both a position-quadrature and a momentum-quadrature conat channel with finite squeezing. Finally, we show that the quality of squeezing diminishes through successive compositions of coherent teleportation and superdense coding.

  14. Experimental generation of optical coherence lattices

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yahong; Cai, Yangjian, E-mail: serpo@dal.ca, E-mail: yangjiancai@suda.edu.cn [College of Physics, Optoelectronics and Energy and Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou 215006 (China); Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province and Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, Suzhou 215006 (China); Ponomarenko, Sergey A., E-mail: serpo@dal.ca, E-mail: yangjiancai@suda.edu.cn [Department of Electrical and Computer Engineering, Dalhousie University, Halifax, Nova Scotia B3J 2X4 (Canada)

    2016-08-08

    We report experimental generation and measurement of recently introduced optical coherence lattices. The presented optical coherence lattice realization technique hinges on a superposition of mutually uncorrelated partially coherent Schell-model beams with tailored coherence properties. We show theoretically that information can be encoded into and, in principle, recovered from the lattice degree of coherence. Our results can find applications to image transmission and optical encryption.

  15. Optical coherent tomography in diagnoses of peripheral retinal degenerations

    Directory of Open Access Journals (Sweden)

    O. G. Pozdeyeva

    2013-01-01

    Full Text Available Purpose: Studying the capabilities of optical coherence tomography (RTVue-100, OPTOVUE, USA) in the evaluation of peripheral retinal degenerations, vitreoretinal adhesions and the adjacent vitreous body, as well as the measurement of morphometric data. Methods: The study included 189 patients (239 eyes) with peripheral retinal degeneration. 77 men and 112 women aged 18 to 84 underwent an ophthalmologic examination from November 2012 to October 2013. The peripheral retina was visualized with the help of optical coherence tomography («RTVue-100», USA). The fundography was carried out using a Nikon NF505‑AF (Japan) fundus camera. All patients were examined with a Goldmann lens. Results: Optical coherence tomography was used to evaluate different kinds of peripheral retinal degenerations, such as lattice and snail track degeneration, isolated retinal tears, cystoid retinal degeneration, pathological hyperpigmentation, retinoschisis and cobblestone degeneration. The following morphometric data were studied: dimensions of the lesion (average length), retinal thickness along the edge of the lesion, retinal thickness at the base of the lesion and the vitreoretinal interface. Conclusion: Optical coherence tomography is a promising in vivo visualization method which is useful in the evaluation of peripheral retinal degenerations, vitreoretinal adhesions and tractions. It also provides a comprehensive protocolling system and monitoring. It will enable ophthalmologists to better define laser and surgical treatment indications and evaluate therapy effectiveness.

  16. Optical coherent tomography in diagnoses of peripheral retinal degenerations

    Directory of Open Access Journals (Sweden)

    O. G. Pozdeyeva

    2014-07-01

    Full Text Available Purpose: Studying the capabilities of optical coherence tomography (RTVue-100, OPTOVUE, USA) in the evaluation of peripheral retinal degenerations, vitreoretinal adhesions and the adjacent vitreous body, as well as the measurement of morphometric data. Methods: The study included 189 patients (239 eyes) with peripheral retinal degeneration. 77 men and 112 women aged 18 to 84 underwent an ophthalmologic examination from November 2012 to October 2013. The peripheral retina was visualized with the help of optical coherence tomography («RTVue-100», USA). The fundography was carried out using a Nikon NF505‑AF (Japan) fundus camera. All patients were examined with a Goldmann lens. Results: Optical coherence tomography was used to evaluate different kinds of peripheral retinal degenerations, such as lattice and snail track degeneration, isolated retinal tears, cystoid retinal degeneration, pathological hyperpigmentation, retinoschisis and cobblestone degeneration. The following morphometric data were studied: dimensions of the lesion (average length), retinal thickness along the edge of the lesion, retinal thickness at the base of the lesion and the vitreoretinal interface. Conclusion: Optical coherence tomography is a promising in vivo visualization method which is useful in the evaluation of peripheral retinal degenerations, vitreoretinal adhesions and tractions. It also provides a comprehensive protocolling system and monitoring. It will enable ophthalmologists to better define laser and surgical treatment indications and evaluate therapy effectiveness.

  17. WEB COHERENCE LEARNING

    Directory of Open Access Journals (Sweden)

    Peter Karlsudd

    2008-09-01

    Full Text Available This article describes a learning system constructed to facilitate teaching and learning by creating a functional web-based contact between schools and organisations which in cooperation with the school contribute to pupils’/students’ cognitive development. Examples of such organisations include science centres, museums, art and music workshops and teacher education internships. With the support of the “Web Coherence Learning” IT application (abbreviated in Swedish to Webbhang) developed by the University of Kalmar, the aim is to reinforce learning processes in the encounter with organisations outside school. In close cooperation with potential users a system was developed which can be described as consisting of three modules. The first module, “the organisation page”, supports the organisation in simply setting up a homepage, where overarching information on organisation operations can be published and where functions like calendar, guestbook, registration and newsletter can be included. In the second module, “the activity page”, the activities offered by the organisation are described. Here pictures and information may prepare and inspire pupils/students to their own activities before future visits. The third part, “the participant page”, is a communication module linked to the activity page enabling school classes to introduce themselves and their work as well as documenting the work and communicating with the educators responsible for external activities. When the project is finished, the work will be available to further school classes, parents and other interested parties. System development and testing have been performed in a small pilot study where two creativity educators at an art museum have worked together with pupils and teachers from a compulsory school class. The system was used to establish, prior to the visit of the class, a deeper contact and to maintain a more qualitative continuous dialogue during and after

  18. Coherent optical DFT-spread OFDM transmission using orthogonal band multiplexing.

    Science.gov (United States)

    Yang, Qi; He, Zhixue; Yang, Zhu; Yu, Shaohua; Yi, Xingwen; Shieh, William

    2012-01-30

    Coherent optical OFDM (CO-OFDM) combined with orthogonal band multiplexing provides a scalable and flexible solution for achieving ultra-high-speed data rates. Among many CO-OFDM implementations, digital Fourier transform spread (DFT-S) CO-OFDM is proposed to mitigate fiber nonlinearity in long-haul transmission. In this paper, we first illustrate the principle of DFT-S OFDM. We then experimentally evaluate the performance of coherent optical DFT-S OFDM in a band-multiplexed transmission system. Compared with conventional clipping methods, DFT-S OFDM can reduce the OFDM peak-to-average power ratio (PAPR) value without suffering from the interference of the neighboring bands. With the benefit of much reduced PAPR, we successfully demonstrate 1.45 Tb/s DFT-S OFDM over 480 km SSMF transmission.
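
    The PAPR benefit of DFT spreading can be illustrated numerically: precoding each block of symbols with a DFT before subcarrier mapping brings the transmitted waveform closer to a single-carrier signal. The sketch below compares mean PAPR for plain OFDM and DFT-S OFDM on one sub-band; the block sizes and modulation are illustrative choices, not the parameters of the 1.45 Tb/s experiment.

```python
# Numerical sketch comparing the peak-to-average power ratio (PAPR) of plain
# OFDM with DFT-spread (DFT-S) OFDM for one sub-band; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
M, N, blocks = 64, 256, 2000        # data symbols per block, IFFT size, trials

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

papr_ofdm, papr_dfts = [], []
for _ in range(blocks):
    qpsk = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)

    # Plain OFDM: map the M symbols onto M contiguous subcarriers and take an IFFT.
    grid = np.zeros(N, dtype=complex)
    grid[:M] = qpsk
    papr_ofdm.append(papr_db(np.fft.ifft(grid)))

    # DFT-S OFDM: precode the block with an M-point DFT before subcarrier mapping.
    grid[:M] = np.fft.fft(qpsk) / np.sqrt(M)
    papr_dfts.append(papr_db(np.fft.ifft(grid)))

print(f"mean PAPR: OFDM {np.mean(papr_ofdm):.1f} dB, DFT-S {np.mean(papr_dfts):.1f} dB")
```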

  19. Interface Simulation Distances

    Directory of Open Access Journals (Sweden)

    Pavol Černý

    2012-10-01

    Full Text Available The classical (boolean) notion of refinement for behavioral interfaces of system components is the alternating refinement preorder. In this paper, we define a distance for interfaces, called interface simulation distance. It makes the alternating refinement preorder quantitative by, intuitively, tolerating errors (while counting them) in the alternating simulation game. We show that the interface simulation distance satisfies the triangle inequality, that the distance between two interfaces does not increase under parallel composition with a third interface, and that the distance between two interfaces can be bounded from above and below by distances between abstractions of the two interfaces. We illustrate the framework, and the properties of the distances under composition of interfaces, with two case studies.

  20. Beyond Readability: Investigating Coherence of Clinical Text for Consumers

    Science.gov (United States)

    Hetzel, Scott; Dalrymple, Prudence; Keselman, Alla

    2011-01-01

    between (Original + Dictionary) and Vocabulary (P = .36) nor Coherent and Vocabulary (P = .62). No statistically significant effect of any document transformation was found either in the open-ended questionnaire (clinical trial: P = .86, Visit Note: P = .20) or in the error rate (clinical trial: P = .47, Visit Note: P = .25). However, post hoc power analysis suggested that increasing the sample size by approximately 6 participants per condition would result in a significant difference for the Visit Note, but not for the clinical trial text. Conclusions Statistically, the results of this study attest that improving coherence has a small effect on consumer comprehension of clinical text, but the task is extremely labor intensive and not scalable. Further research is needed using texts from more diverse clinical domains and more heterogeneous participants, including actual patients. Since comprehensibility of clinical text appears difficult to automate, informatics support tools may most productively support the health care professionals tasked with making clinical information understandable to patients. PMID:22138127

  1. Clock domain crossing modules for OCP-style read/write interfaces

    DEFF Research Database (Denmark)

    Herlev, Mathias; Sparsø, Jens

    The open core protocol (OCP) is an openly licensed, configurable, and scalable interface protocol for on-chip subsystem communications. The protocol defines read and write transactions from a master towards a slave across a point-to-point connection, and the protocol assumes a single common clock. This paper presents the design of two OCP clock domain crossing interface modules that can be used to construct systems with multiple clock domains. One module (called OCPio) supports a single-word read-write interface and the other module (called OCPburst) supports a four-word burst read-write interface. An OCP interface typically has control signals related to both the master issuing a read or write request and the slave producing a response. If all these control signals are passed across the clock domain boundary and synchronized it may add significant latency...

  2. Coherent states on Hilbert modules

    International Nuclear Information System (INIS)

    Ali, S Twareque; Bhattacharyya, T; Roy, S S

    2011-01-01

    We generalize the concept of coherent states, traditionally defined as special families of vectors on Hilbert spaces, to Hilbert modules. We show that Hilbert modules over C*-algebras are the natural settings for a generalization of coherent states defined on Hilbert spaces. We consider those Hilbert C*-modules which have a natural left action from another C*-algebra, say A. The coherent states are well defined in this case and they behave well with respect to the left action by A. Certain classical objects like the Cuntz algebra are related to specific examples of coherent states. Finally we show that coherent states on modules give rise to a completely positive definite kernel between two C*-algebras, in complete analogy to the Hilbert space situation. Related to this, there is a dilation result for positive operator-valued measures, in the sense of Naimark. A number of examples are worked out to illustrate the theory. Some possible physical applications are also mentioned.

  3. Progress in coherent laser radar

    Science.gov (United States)

    Vaughan, J. M.

    1986-01-01

    Considerable progress with coherent laser radar has been made over the last few years, most notably perhaps in the available range of high performance devices and components and the confidence with which systems may now be taken into the field for prolonged periods of operation. Some of this increasing maturity was evident at the 3rd Topical Meeting on Coherent Laser Radar: Technology and Applications. Topics included in discussions were: mesoscale wind fields, nocturnal valley drainage and clear air down bursts; airborne Doppler lidar studies and comparison of ground and airborne wind measurement; wind measurement over the sea for comparison with satellite borne microwave sensors; transport of wake vortices at airfield; coherent DIAL methods; a newly assembled Nd-YAG coherent lidar system; backscatter profiles in the atmosphere and wavelength dependence over the 9 to 11 micrometer region; beam propagation; rock and soil classification with an airborne 4-laser system; technology of a global wind profiling system; target calibration; ranging and imaging with coherent pulsed and CW system; signal fluctuations and speckle. Some of these activities are briefly reviewed.

  4. Coherent states in quantum physics

    CERN Document Server

    Gazeau, Jean-Pierre

    2009-01-01

    This self-contained introduction discusses the evolution of the notion of coherent states, from the early works of Schrödinger to the most recent advances, including signal analysis. An integrated and modern approach to the utility of coherent states in many different branches of physics, it strikes a balance between mathematical and physical descriptions. Split into two parts, the first introduces readers to the most familiar coherent states, their origin, their construction, and their application and relevance to various selected domains of physics. Part II, mostly based on recent original results, is devoted to the question of quantization of various sets through coherent states, and shows the link to procedures in signal analysis.

  5. Experimental study of coherence vortices: Local properties of phase singularities in a spatial coherence function

    DEFF Research Database (Denmark)

    Wang, W.; Duan, Z.H.; Hanson, Steen Grüner

    2006-01-01

    By controlling the irradiance of an extended quasimonochromatic, spatially incoherent source, an optical field is generated that exhibits spatial coherence with phase singularities, called coherence vortices. A simple optical geometry for direct visualization of coherence vortices is proposed, an...

  6. RISA: Remote Interface for Science Analysis

    Science.gov (United States)

    Gabriel, C.; Ibarra, A.; de La Calle, I.; Salgado, J.; Osuna, P.; Tapiador, D.

    2008-08-01

    The Scientific Analysis System (SAS) is the package for interactive and pipeline data reduction of all XMM-Newton data. Freely distributed by ESA to run under many different operating systems, the SAS has been used in almost every one of the 1600 refereed scientific publications obtained so far from the mission. We are developing RISA, the Remote Interface for Science Analysis, which makes it possible to run SAS through fully configurable web service workflows, enabling observers to access and analyse data, making use of all of the existing SAS functionalities, without any installation/download of software/data. The workflows run primarily but not exclusively on the ESAC Grid, which offers scalable processing resources, directly connected to the XMM-Newton Science Archive. A first project-internal version of RISA was issued in May 2007; a public release is expected within this year.

  7. Extremal-point densities of interface fluctuations

    International Nuclear Information System (INIS)

    Toroczkai, Z.; Korniss, G.; Das Sarma, S.; Zia, R. K. P.

    2000-01-01

    We introduce and investigate the stochastic dynamics of the density of local extrema (minima and maxima) of nonequilibrium surface fluctuations. We give a number of analytic results for interface fluctuations described by linear Langevin equations, and for on-lattice, solid-on-solid surface-growth models. We show that, in spite of the nonuniversal character of the quantities studied, their behavior against the variation of the microscopic length scales can present generic features, characteristic of the macroscopic observables of the system. The quantities investigated here provide us with tools that give an unorthodox approach to the dynamics of surface morphologies: a statistical analysis from the short-wavelength end of the Fourier decomposition spectrum. In addition to surface-growth applications, our results can be used to solve the asymptotic scalability problem of massively parallel algorithms for discrete-event simulations, which are extensively used in Monte Carlo simulations on parallel architectures. (c) 2000 The American Physical Society

  8. Direct Global Measurements of Tropspheric Winds Employing a Simplified Coherent Laser Radar using Fully Scalable Technology and Technique

    Science.gov (United States)

    Kavaya, Michael J.; Spiers, Gary D.; Lobl, Elena S.; Rothermel, Jeff; Keller, Vernon W.

    1996-01-01

    Innovative designs of a space-based laser remote sensing 'wind machine' are presented. These designs seek compatibility with the traditionally conflicting constraints of high scientific value and low total mission cost. Mission cost is reduced by moving to smaller, lighter, more off-the-shelf instrument designs which can be accommodated on smaller launch vehicles.

  9. EDITORIAL: Coherent Control

    Science.gov (United States)

    Fielding, Helen; Shapiro, Moshe; Baumert, Thomas

    2008-04-01

    Quantum mechanics, though a probabilistic theory, gives a 'deterministic' answer to the question of how the present determines the future. In essence, in order to predict future probabilities, we need to (numerically) propagate the time-dependent Schrödinger equation from the present to the future. It is interesting to note that classical mechanics of macroscopic bodies, though reputed to be a deterministic theory, does not allow, due to chaos (which unfortunately is more prevalent than integrability), such clear insights into the future. In contrast, small (e.g., atomic, molecular and photonic) systems which are best understood using the tools of quantum mechanics, do not suffer from chaos, rendering the prediction of the probability-distributions of future events possible. The field of quantum control deals with an important modification of this task, namely, it asks: given a wave function in the present, what dynamics, i.e. what Hamiltonian, guarantees a desired outcome or 'objective' in the future? In practice one may achieve this goal of modifying and finding the desired Hamiltonian by introducing external fields, e.g. laser light. It is then possible to reach the objective in a 'trial-and-error' fashion, performed either numerically or in the laboratory. We can guess or build a Hamiltonian, do an experiment, or propagate the initial wave function to the future, compare the result with the desirable objective, and correct the guess for the Hamiltonian until satisfactory agreement with the objective is reached. A systematic way of executing this procedure is the sub-field called 'optimal control'. The trial-and-error method is often very time consuming and rarely provides mechanistic insight. There are situations where analytical solutions exist, rendering the control strategies more transparent. This is especially so when one can identify quantum interferences as the heart of quantum control, the essence of the field called 'coherent control'. The experience

  10. Coherent Waves in Seismic Researches

    Science.gov (United States)

    Emanov, A.; Seleznev, V. S.

    2013-05-01

    Development of digital processing algorithms for seismic wave fields, aimed at picking useful events in order to study the environment and other objects, is the basis for establishing new seismic techniques. The present paper builds on a fundamental property of seismic wave fields: coherence. The authors extend the notion of coherence types of observed wave fields and devise a technique for selecting coherent components from an observed wave field. Time coherence and space coherence are widely known; here the notion of "parameter coherence" is added. The parameter with respect to which a wave field is coherent can be of many kinds, because the wave field is a multivariate process described by a set of parameters. Coherence means, first of all, that the linear connection in the wave field is independent of the parameter. In seismic wave fields recorded in confined spaces, in building blocks and in stratified media, time-coherent standing waves are formed. In prospecting seismology, for observation systems with multiple overlapping, head waves are coherent along the parallel correlation course or, in other words, along one measurement on the generalized plane of the observation system. For detailed prospecting seismology with observation systems with multiple overlapping, algorithms based on the coherence property along one measurement of the area have been developed that convert seismic records into head-wave time sections containing neither reflected nor other types of waves. Conversion into a time section is executed for any specified observation base. Energy stacking of head waves relative to noise, based on the multiplicity of the observation system, is realized within the area of head-wave recording. Conversion on a base below the area of wave tracking is performed with a loss in signal/noise ratio relative to the maximum of this ratio afforded by the observation system. The construction of head-wave time sections and dynamic plots forms the basis of the automatic processing that has been developed, similar to the CDP procedure in the method of

  11. Tangible 3D modeling of coherent and themed structures

    DEFF Research Database (Denmark)

    Walther, Jeppe Ullè; Bærentzen, J. Andreas; Aanæs, Henrik

    2016-01-01

    We present CubeBuilder, a system for interactive, tangible 3D shape modeling. CubeBuilder allows the user to create a digital 3D model by placing physical, non-interlocking cubic blocks. These blocks may be placed in a completely arbitrary fashion and combined with other objects. In effect......, this turns the task of 3D modeling into a playful activity that hardly requires any learning on the part of the user. The blocks are registered using a depth camera and entered into the cube graph where each block is a node and adjacent blocks are connected by edges. From the cube graph, we transform......, allows the user to tangibly build structures of greater details than the blocks provide in and of themselves. We show a number of shapes that have been modeled by users and are indicative of the expressive power of the system. Furthermore, we demonstrate the scalability of the tangible interface which...

  12. Brain–muscle interface

    Indian Academy of Sciences (India)

    2011-05-16

    Clipboard: Brain–muscle interface: the next-generation BMI. Radhika Rajan and Neeraj Jain. Keywords: assistive devices; brain–machine interface; motor cortex; paralysis; spinal cord injury.

  13. VPLS: an effective technology for building scalable transparent LAN services

    Science.gov (United States)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest among enterprises and service providers as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes VPLS as an effective technology that links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging at the edge and MPLS at the core, and then elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames, and replication of unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split-horizon forwarding, is also analyzed. Another important aspect of VPLS, its basic operation, including autodiscovery and signaling, is discussed as well. From the perspective of efficiency and scalability, the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a pseudowire (PW) between the PEs and bind the PWs to a particular VSI. As VPLS deployments grow and the full mesh of PWs between PE devices expands (n(n-1)/2 PWs in all, i.e. quadratic growth), a VPLS instance can have a large number of remote PE associations, resulting in inefficient use of network bandwidth and system resources, since the ingress PE has to replicate each frame and append MPLS labels for every remote PE. The latter part of the paper therefore focuses on the scalability issue and on Hierarchical VPLS (H-VPLS); within the H-VPLS architecture, it addresses two ways to cope with a possibly large number of MAC addresses and make VPLS operate more efficiently.
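
    The quadratic growth of the pseudowire mesh that motivates H-VPLS is easy to make concrete. The sketch below is purely illustrative and is not taken from the paper; the five-hub split used for the hierarchical case is a hypothetical example.

    ```python
    # Back-of-the-envelope comparison (illustrative assumption, not from the paper):
    # pseudowires (PWs) needed for a flat full-mesh VPLS versus a two-tier H-VPLS in
    # which only hub PEs are fully meshed and every spoke PE has a single spoke PW.

    def full_mesh_pws(n_pe: int) -> int:
        """Flat VPLS: every PE pairs with every other PE."""
        return n_pe * (n_pe - 1) // 2

    def hvpls_pws(n_hub: int, n_spoke: int) -> int:
        """H-VPLS: full mesh among the hubs plus one spoke PW per spoke PE."""
        return full_mesh_pws(n_hub) + n_spoke

    if __name__ == "__main__":
        for n in (10, 50, 200):
            flat = full_mesh_pws(n)
            hier = hvpls_pws(n_hub=5, n_spoke=n - 5)
            print(f"{n:4d} PEs: full mesh = {flat:6d} PWs, H-VPLS (5 hubs) = {hier:4d} PWs")
    ```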

  14. Garbage collector interface

    OpenAIRE

    Ive, Anders; Blomdell, Anders; Ekman, Torbjörn; Henriksson, Roger; Nilsson, Anders; Nilsson, Klas; Robertz, Sven

    2002-01-01

    The purpose of the presented garbage collector interface is to provide a universal interface for many different implementations of garbage collectors. This is to simplify the integration and exchange of garbage collectors, but also to support incremental, non-conservative, and thread safe implementations. Due to the complexity of the interface, it is aimed at code generators and preprocessors. Experiences from ongoing implementations indicate that the garbage collector interface successfully ...

  15. Microcomputer interfacing and applications

    CERN Document Server

    Mustafa, M A

    1990-01-01

    This is the applications guide to interfacing microcomputers. It offers practical non-mathematical solutions to interfacing problems in many applications including data acquisition and control. Emphasis is given to the definition of the objectives of the interface, then comparing possible solutions and producing the best interface for every situation. Dr Mustafa A Mustafa is a senior designer of control equipment and has written many technical articles and papers on the subject of computers and their application to control engineering.

  16. Scalable Earth-observation Analytics for Geoscientists: Spacetime Extensions to the Array Database SciDB

    Science.gov (United States)

    Appel, Marius; Lahn, Florian; Pebesma, Edzer; Buytaert, Wouter; Moulds, Simon

    2016-04-01

    Today's amount of freely available data requires scientists to spend large parts of their work on data management. This is especially true in environmental sciences when working with large remote sensing datasets, such as those obtained from earth-observation satellites like the Sentinel fleet. Many frameworks like SpatialHadoop or Apache Spark address the scalability but target programmers rather than data analysts, and are not dedicated to imagery or array data. In this work, we use the open-source data management and analytics system SciDB to bring large earth-observation datasets closer to analysts. Its underlying data representation as multidimensional arrays is a natural fit for earth-observation datasets, distributes storage and computational load over multiple instances by multidimensional chunking, and also enables efficient time-series-based analyses, which is usually difficult using file- or tile-based approaches. Existing interfaces to R and Python furthermore allow for scalable analytics with relatively little learning effort. However, interfacing SciDB and file-based earth-observation datasets that come as tiled temporal snapshots requires a lot of manual bookkeeping during ingestion, and SciDB natively only supports loading data from CSV-like and custom binary formatted files, which currently limits its practical use in earth-observation analytics. To make it easier to work with large multi-temporal datasets in SciDB, we developed software tools that enrich SciDB with earth observation metadata and allow working with commonly used file formats: (i) the SciDB extension library scidb4geo simplifies working with spatiotemporal arrays by adding relevant metadata to the database and (ii) the Geospatial Data Abstraction Library (GDAL) driver implementation scidb4gdal allows remote sensing imagery to be ingested from and exported to a large number of file formats. Using added metadata on temporal resolution and coverage, the GDAL driver supports time-based ingestion of ...

  17. Coherent states for quadratic Hamiltonians

    International Nuclear Information System (INIS)

    Contreras-Astorga, Alonso; Fernandez C, David J; Velazquez, Mercedes

    2011-01-01

    The coherent states for a set of quadratic Hamiltonians in the trap regime are constructed. A matrix technique which allows us to directly identify the creation and annihilation operators will be presented. Then, the coherent states as simultaneous eigenstates of the annihilation operators will be derived, and will be compared with those attained through the displacement operator method. The corresponding wavefunction will be found, and a general procedure for obtaining several mean values involving the canonical operators in these states will be described. The results will be illustrated through the asymmetric Penning trap.
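
    For readers less familiar with the construction, the two routes compared in the abstract reduce, for the ordinary harmonic oscillator, to the standard textbook relations below; the trap-specific ladder operators of the paper generalize these.

    ```latex
    % Standard single-mode coherent-state relations that the two constructions generalize:
    \[
      a\,\lvert\alpha\rangle = \alpha\,\lvert\alpha\rangle ,
      \qquad
      \lvert\alpha\rangle = D(\alpha)\,\lvert 0\rangle ,
      \qquad
      D(\alpha) = \exp\!\bigl(\alpha\, a^{\dagger} - \alpha^{*} a \bigr) .
    \]
    ```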

  18. Coherent γ-ray production

    International Nuclear Information System (INIS)

    Bertolotti, M.; Sibilia, C.

    1985-01-01

    In this article the authors discuss a new approach to developing a coherent source of γ-rays. They offer a completely different scheme for developing the source, one that should overcome most of the problems encountered in 'classical γ-ray lasers' and that makes use of inverse Compton scattering of laser radiation off a relativistic electron beam. This kind of interaction has been used to obtain γ-ray photons with good polarization and monochromaticity properties. The authors describe a new interaction geometry which allows one to obtain coherent emission.
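
    For orientation, the reason a relativistic electron beam up-shifts optical photons into the γ-ray range is the standard inverse Compton kinematics; the estimate below is a textbook relation and is not specific to the geometry proposed in the article.

    ```latex
    % Head-on inverse Compton scattering: the maximum (back-scattered) photon energy
    % for a laser photon of energy E_L and an electron of Lorentz factor \gamma.
    \[
      E_{\gamma}^{\max} = \frac{4\gamma^{2} E_{L}}{1 + 4\gamma E_{L}/(m_{e}c^{2})}
      \;\approx\; 4\gamma^{2} E_{L}
      \qquad \text{when } 4\gamma E_{L} \ll m_{e} c^{2} .
    \]
    ```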

  19. Overcoming the drawback of lower sense margin in tunnel FET based dynamic memory along with enhanced charge retention and scalability

    Science.gov (United States)

    Navlakha, Nupur; Kranti, Abhinav

    2017-11-01

    The work reports on the use of a planar tri-gate tunnel field effect transistor (TFET) to operate as dynamic memory at 85 °C with an enhanced sense margin (SM). Two symmetric gates (G1) aligned to the source at a partial region of the intrinsic film result in better electrostatic control that regulates the read mechanism based on band-to-band tunneling, while the other gate (G2), positioned adjacent to the first front gate, is responsible for charge storage and sustenance. The proposed architecture results in an enhanced SM of ~1.2 μA μm⁻¹ along with a longer retention time (RT) of ~1.8 s at 85 °C, for a total length of 600 nm. The double gate architecture towards the source increases the tunneling current and also reduces short channel effects, enhancing SM and scalability, thereby overcoming the critical bottleneck faced by TFET based dynamic memories. The work also discusses the impact of overlap/underlap and interface charges on the performance of TFET based dynamic memory. Insights into device operation demonstrate that the choice of appropriate architecture and biases not only limits the trade-off between SM and RT, but also results in improved scalability, with drain voltage and total length being scaled down to 0.8 V and 115 nm, respectively.

  20. Sideband cooling and coherent dynamics in a microchip multi-segmented ion trap

    Energy Technology Data Exchange (ETDEWEB)

    Schulz, Stephan A; Poschinger, Ulrich; Ziesel, Frank; Schmidt-Kaler, Ferdinand [Universitaet Ulm, Institut fuer Quanteninformationsverarbeitung, Albert-Einstein-Allee 11, D-89069 Ulm (Germany)], E-mail: stephan.schulz@uni-ulm.de

    2008-04-15

    Miniaturized ion trap arrays with many trap segments present a promising architecture for scalable quantum information processing. The miniaturization of segmented linear Paul traps allows partitioning the microtrap into different storage and processing zones. The individual position control of many ions, each of them carrying qubit information in its long-lived electronic levels, by the external trap control voltages is important for the implementation of next generation large-scale quantum algorithms. We present a novel scalable microchip multi-segmented ion trap with two different adjacent zones, one for the storage and another dedicated to the processing of quantum information using single ions and linear ion crystals. A pair of radio-frequency-driven electrodes and 62 independently controlled dc electrodes allows shuttling of single ions or linear ion crystals with numerically designed axial potentials at axial and radial trap frequencies of a few megahertz. We characterize and optimize the microtrap using sideband spectroscopy on the narrow S1/2 ↔ D5/2 qubit transition of the ⁴⁰Ca⁺ ion, and demonstrate coherent single-qubit Rabi rotations and optical cooling methods. We determine the heating rate using sideband cooling measurements to the vibrational ground state, which is necessary for subsequent two-qubit quantum logic operations. The applicability for scalable quantum information processing is proved.

  1. A scalable lock-free hash table with open addressing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Puge; Karlsson, Sven

    2016-01-01

    Concurrent data structures synchronized with locks do not scale well with the number of threads. As more scalable alternatives, concurrent data structures and algorithms based on widely available, however advanced, atomic operations have been proposed. These data structures allow for correct and concurrent operations without any locks. In this paper, we present a new fully lock-free open addressed hash table with a simpler design than prior published work. We split hash table insertions into two atomic phases: first inserting a value ignoring other concurrent operations, then in the second phase ...

  2. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
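
    As a hedged illustration of the underlying grouping task (not the published algorithms, which add savings criteria, capacity handling and scalable spatial indexing), a naive greedy grouper might look like the sketch below; the radius, time window and capacity values are placeholders.

    ```python
    # Toy greedy trip grouping: assign each cab request to an open group whose
    # pickup centroid is within `radius` and whose time is within `window`,
    # otherwise open a new group. Illustrative only, not the paper's algorithm.
    from math import hypot

    def group_requests(requests, radius=1.0, window=10.0, capacity=4):
        """requests: list of (x, y, t). Returns a list of groups (lists of indices)."""
        groups = []  # each group: dict with pickup point, time and member indices
        for i, (x, y, t) in enumerate(requests):
            placed = False
            for g in groups:
                close = hypot(x - g["x"], y - g["y"]) <= radius
                timely = abs(t - g["t"]) <= window
                if close and timely and len(g["members"]) < capacity:
                    g["members"].append(i)
                    placed = True
                    break
            if not placed:
                groups.append({"x": x, "y": y, "t": t, "members": [i]})
        return [g["members"] for g in groups]
    ```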

  3. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

    Most of the software related to file system are written for conventional local file system, they are serialized and can't take advantage of the benefit of a large scale parallel file system. "pcircle" software builds on top of ubiquitous MPI in cluster computing environment and "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular - it implemented parallel data copy and parallel data checksumming, with advanced features such as async progress report, checkpoint and restart, as well as integrity checking.

  4. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  5. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.

  6. Interactive segmentation: a scalable superpixel-based method

    Science.gov (United States)

    Mathieu, Bérengère; Crouzil, Alain; Puel, Jean-Baptiste

    2017-11-01

    This paper addresses the problem of interactive multiclass segmentation of images. We propose a fast and efficient new interactive segmentation method called superpixel α fusion (SαF). From a few strokes drawn by a user over an image, this method extracts relevant semantic objects. To get a fast calculation and an accurate segmentation, SαF uses superpixel oversegmentation and support vector machine classification. We compare SαF with competing algorithms by evaluating its performance on reference benchmarks. We also suggest four new datasets to evaluate the scalability of interactive segmentation methods, using images ranging from a few thousand to several million pixels. We conclude with two applications of SαF.
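
    A minimal sketch of the superpixel-plus-SVM pipeline described in the abstract is given below. It is not the published SαF implementation (in particular the α-fusion step is omitted), and the image/stroke inputs, the mean-colour features and the SVM settings are assumptions made for illustration.

    ```python
    # Sketch: oversegment into superpixels, train an SVM on stroke-labelled
    # superpixels, classify the rest, and project labels back to pixels.
    import numpy as np
    from skimage.segmentation import slic
    from sklearn.svm import SVC

    def segment_with_strokes(image: np.ndarray, strokes: np.ndarray) -> np.ndarray:
        """image: RGB float array; strokes: int mask (0 = unlabeled, 1..K = user class)."""
        labels = slic(image, n_segments=1000, compactness=10, start_label=0)
        n_sp = labels.max() + 1

        # Describe each superpixel by its mean colour (richer features in practice).
        feats = np.array([image[labels == i].mean(axis=0) for i in range(n_sp)])

        # A superpixel becomes a training sample if a user stroke touches it.
        sp_class = np.zeros(n_sp, dtype=int)
        for i in range(n_sp):
            touched = strokes[labels == i]
            touched = touched[touched > 0]
            if touched.size:
                sp_class[i] = np.bincount(touched).argmax()
        train = sp_class > 0

        # Train on labelled superpixels, classify the remaining ones.
        clf = SVC(kernel="rbf", gamma="scale").fit(feats[train], sp_class[train])
        pred = clf.predict(feats)
        pred[train] = sp_class[train]

        # Project superpixel labels back to the pixel grid.
        return pred[labels]
    ```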

  7. Robust and scalable optical one-way quantum computation

    International Nuclear Information System (INIS)

    Wang Hefeng; Yang Chuiping; Nori, Franco

    2010-01-01

    We propose an efficient approach for deterministically generating scalable cluster states with photons. This approach involves unitary transformations performed on atoms coupled to optical cavities. Its operation cost scales linearly with the number of qubits in the cluster state, and photon qubits are encoded such that single-qubit operations can be easily implemented by using linear optics. Robust optical one-way quantum computation can be performed since cluster states can be stored in atoms and then transferred to photons that can be easily operated and measured. Therefore, this proposal could help in performing robust large-scale optical one-way quantum computation.

  8. Scalable Brain Network Construction on White Matter Fibers.

    Science.gov (United States)

    Chung, Moo K; Adluru, Nagesh; Dalton, Kim M; Alexander, Andrew L; Davidson, Richard J

    2011-02-12

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing the brain structural network graphs out of a large number of white matter tracts. In this paper, we present a scalable iterative framework called the ε-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
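
    The sketch below shows one plausible reading of an ε-neighbor style construction: each tract contributes one edge, and a tract endpoint is merged with an existing node if it lies within ε of it, otherwise it becomes a new node. The search order, distance metric and data structures are assumptions for illustration, not taken verbatim from the paper.

    ```python
    # Hedged sketch of an epsilon-neighbor graph construction over tract endpoints.
    import numpy as np
    import networkx as nx

    def build_network(tracts, eps=5.0):
        """tracts: iterable of (start_xyz, end_xyz) pairs, coordinates in mm."""
        nodes = []            # 3D positions of graph nodes created so far
        graph = nx.Graph()

        def node_for(p):
            p = np.asarray(p, dtype=float)
            if nodes:
                d = np.linalg.norm(np.vstack(nodes) - p, axis=1)
                i = int(d.argmin())
                if d[i] <= eps:
                    return i                      # reuse an existing nearby node
            nodes.append(p)
            graph.add_node(len(nodes) - 1, pos=tuple(p))
            return len(nodes) - 1

        for start, end in tracts:
            graph.add_edge(node_for(start), node_for(end))
        return graph
    ```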

  9. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately ...

  10. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately ...

  11. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment

  12. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  13. A scalable parallel algorithm for multiple objective linear programs

    Science.gov (United States)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  14. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype, which achieves SCM using a standard 802.16-based testbed for scalable video transmissions. In particular, to implement the superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic the physical layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  15. Interface magnons. Magnetic superstructure

    International Nuclear Information System (INIS)

    Djafari-Rouhani, B.; Dobrzynski, L.

    1975-01-01

    The localized magnons at an interface between two Heisenberg ferromagnets are studied with a simple model. The effect of the coupling at the interface on the existence condition for the localized modes, the dispersion laws and the possible occurrence of magnetic superstructures due to soft modes are investigated. Finally, a comparison is made with the similar results obtained for interface phonons.

  16. Water at Interfaces

    DEFF Research Database (Denmark)

    Björneholm, Olle; Hansen, Martin Hangaard; Hodgson, Andrew

    2016-01-01

    The interfaces of neat water and aqueous solutions play a prominent role in many technological processes and in the environment. Examples of aqueous interfaces are ultrathin water films that cover most hydrophilic surfaces under ambient relative humidities, the liquid/solid interface which drives...

  17. User Interface History

    DEFF Research Database (Denmark)

    Jørgensen, Anker Helms; Myers, Brad A

    2008-01-01

    User Interfaces have been around as long as computers have existed, even well before the field of Human-Computer Interaction was established. Over the years, some papers on the history of Human-Computer Interaction and User Interfaces have appeared, primarily focusing on the graphical interface e...

  18. Graphical Interfaces for Simulation.

    Science.gov (United States)

    Hollan, J. D.; And Others

    This document presents a discussion of the development of a set of software tools to assist in the construction of interfaces to simulations and real-time systems. Presuppositions to the approach to interface design that was used are surveyed, the tools are described, and the conclusions drawn from these experiences in graphical interface design…

  19. A wireless, compact, and scalable bioimpedance measurement system for energy-efficient multichannel body sensor solutions

    International Nuclear Information System (INIS)

    Ramos, J; Ausín, J L; Lorido, A M; Redondo, F; Duque-Carrillo, J F

    2013-01-01

    In this paper, we present the design, realization and evaluation of a multichannel measurement system based on a cost-effective high-performance integrated circuit for electrical bioimpedance (EBI) measurements in the frequency range from 1 kHz to 1 MHz, and a low-cost commercially available radio frequency transceiver device, which provides reliable wireless communication. The resulting on-chip spectrometer provides high-performance EBI measurement capabilities and constitutes the basic node to build EBI wireless sensor networks (EBI-WSNs). The proposed EBI-WSN behaves as a high-performance wireless multichannel EBI spectrometer where the number of nodes, i.e., number of channels, is completely scalable to satisfy specific requirements of body sensor networks. One of its main advantages is its versatility, since each EBI node is independently configurable and capable of working simultaneously. A prototype of the EBI node leads to a very small printed circuit board of approximately 8 cm², including the chip antenna, which can operate several years on one 3-V coin cell battery. A specifically tailored graphical user interface (GUI) for EBI-WSN has also been designed and implemented in order to configure the operation of EBI nodes and the network topology. EBI analysis parameters, e.g., single-frequency or spectroscopy, time interval, analysis by EBI events, frequency and amplitude ranges of the excitation current, etc., are defined by the GUI.

  20. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    Science.gov (United States)

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining the techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
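
    The decoding principle can be illustrated with a toy numerical example. This is only a schematic of the CDMA idea (one orthogonal code per channel, correlation receiver) and ignores noise, relative time shifts and the actual Coulter pulse shapes used in the paper.

    ```python
    # Toy CDMA illustration: orthogonal Hadamard codes per channel, events add
    # linearly on one output line, and correlation recovers each channel.
    import numpy as np
    from scipy.linalg import hadamard

    codes = hadamard(4).astype(float)          # 4 mutually orthogonal +/-1 codes

    def encode(events):
        """events: amplitude per channel (0 = no particle). Returns summed output."""
        return events @ codes                  # superposition on a single output

    def decode(signal):
        """Correlate with each channel code; orthogonality isolates each channel."""
        return signal @ codes.T / codes.shape[1]

    mixed = encode(np.array([1.0, 0.0, 0.7, 0.0]))   # channels 1 and 3 overlap
    print(decode(mixed))                             # ~[1.0, 0.0, 0.7, 0.0]
    ```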

  1. Low-cost scalable quartz crystal microbalance array for environmental sensing

    Energy Technology Data Exchange (ETDEWEB)

    Anazagasty, Cristain [University of Puerto Rico]; Hianik, Tibor [Comenius University, Bratislava, Slovakia]; Ivanov, Ilia N [ORNL]

    2016-01-01

    The proliferation of environmental sensors for internet of things (IoT) applications has increased the need for low-cost platforms capable of accommodating multiple sensors. Quartz crystal microbalance (QCM) crystals coated with nanometer-thin sensor films are suitable for use in high-resolution (~1 ng) selective gas sensor applications. We demonstrate a scalable array for measuring the frequency response of six QCM sensors controlled by low-cost Arduino microcontrollers and a USB multiplexer. Gas pulses and data acquisition were controlled by a LabVIEW user interface. We test the sensor array by measuring the frequency shift of crystals coated with different compositions of polymer composites based on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) while the films are exposed to water vapor and oxygen inside a controlled environmental chamber. Our sensor array exhibits comparable performance to that of a commercial QCM system, while enabling high-throughput testing of six QCMs for under $1,000. We use deep neural network structures to process the sensor response and demonstrate that the QCM array is suitable for gas sensing, environmental monitoring, and electronic-nose applications.

  2. SINDBAD: a realistic multi-purpose and scalable X-ray simulation tool for NDT applications

    International Nuclear Information System (INIS)

    Tabary, J.; Hugonnard, P.; Mathy, F.

    2007-01-01

    The X-ray radiographic simulation software SINDBAD has been developed to support the design stage of radiographic systems and to evaluate the efficiency of image processing techniques, in both medical imaging and Non-Destructive Evaluation (NDE) industrial fields. This software can model any radiographic set-up, including the X-ray source, the beam interaction inside the object represented by its Computer Aided Design (CAD) model, and the imaging process in the detector. For each step of the virtual experimental bench, SINDBAD combines different modelling modules, accessed via Graphical User Interfaces (GUIs), to provide realistic synthetic images. In this paper, we present an overview of all the functionalities which are available in SINDBAD, with a complete description of all the physics taken into account in the models, as well as the CAD and GUI facilities available on many computing platforms. We underline the different modules usable for different applications, which make SINDBAD a multi-purpose and scalable X-ray simulation tool. (authors)

  3. Scalable 2D Mesoporous Silicon Nanosheets for High-Performance Lithium-Ion Battery Anode.

    Science.gov (United States)

    Chen, Song; Chen, Zhuo; Xu, Xingyan; Cao, Chuanbao; Xia, Min; Luo, Yunjun

    2018-03-01

    Constructing unique mesoporous 2D Si nanostructures to shorten the lithium-ion diffusion pathway, facilitate interfacial charge transfer, and enlarge the electrode-electrolyte interface offers exciting opportunities in future high-performance lithium-ion batteries. However, simultaneous realization of 2D and mesoporous structures for Si material is quite difficult due to its non-van der Waals structure. Here, the coexistence of both mesoporous and 2D ultrathin nanosheets in the Si anodes and considerably high surface area (381.6 m 2 g -1 ) are successfully achieved by a scalable and cost-efficient method. After being encapsulated with the homogeneous carbon layer, the Si/C nanocomposite anodes achieve outstanding reversible capacity, high cycle stability, and excellent rate capability. In particular, the reversible capacity reaches 1072.2 mA h g -1 at 4 A g -1 even after 500 cycles. The obvious enhancements can be attributed to the synergistic effect between the unique 2D mesoporous nanostructure and carbon capsulation. Furthermore, full-cell evaluations indicate that the unique Si/C nanostructures have a great potential in the next-generation lithium-ion battery. These findings not only greatly improve the electrochemical performances of Si anode, but also shine some light on designing the unique nanomaterials for various energy devices. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Trade policy and health: from conflicting interests to policy coherence.

    Science.gov (United States)

    Blouin, Chantal

    2007-03-01

    Policy incoherence at the interface between trade policy and health can take many forms, such as international trade commitments that strengthen protection of pharmaceutical patents, or promotion of health tourism that exacerbates the shortage of physicians in rural areas. Focusing on the national policy-making process, we make recommendations regarding five conditions that are necessary, but not sufficient, to ensure that international trade policies are coherent with national health objectives. These conditions are: space for dialogue and joint fact-finding; leadership by ministries of health; institutional mechanisms for coordination; meaningful engagement with stakeholders; and a strong evidence base.

  5. Quantization of interface currents

    Energy Technology Data Exchange (ETDEWEB)

    Kotani, Motoko [AIMR, Tohoku University, Sendai (Japan)]; Schulz-Baldes, Hermann [Department Mathematik, Universität Erlangen-Nürnberg, Erlangen (Germany)]; Villegas-Blas, Carlos [Instituto de Matematicas, Cuernavaca, UNAM, Cuernavaca (Mexico)]

    2014-12-15

    At the interface of two two-dimensional quantum systems, there may exist interface currents similar to edge currents in quantum Hall systems. It is proved that these interface currents are macroscopically quantized by an integer that is given by the difference of the Chern numbers of the two systems. It is also argued that at the interface between two time-reversal invariant systems with half-integer spin, one of which is trivial and the other non-trivial, there are dissipationless spin-polarized interface currents.

  6. Water at Interfaces.

    Science.gov (United States)

    Björneholm, Olle; Hansen, Martin H; Hodgson, Andrew; Liu, Li-Min; Limmer, David T; Michaelides, Angelos; Pedevilla, Philipp; Rossmeisl, Jan; Shen, Huaze; Tocci, Gabriele; Tyrode, Eric; Walz, Marie-Madeleine; Werner, Josephina; Bluhm, Hendrik

    2016-07-13

    The interfaces of neat water and aqueous solutions play a prominent role in many technological processes and in the environment. Examples of aqueous interfaces are ultrathin water films that cover most hydrophilic surfaces under ambient relative humidities, the liquid/solid interface which drives many electrochemical reactions, and the liquid/vapor interface, which governs the uptake and release of trace gases by the oceans and cloud droplets. In this article we review some of the recent experimental and theoretical advances in our knowledge of the properties of aqueous interfaces and discuss open questions and gaps in our understanding.

  7. Scalable modulation technology and the tradeoff of reach, spectral efficiency, and complexity

    Science.gov (United States)

    Bosco, Gabriella; Pilori, Dario; Poggiolini, Pierluigi; Carena, Andrea; Guiomar, Fernando

    2017-01-01

    Bandwidth and capacity demand in metro, regional, and long-haul networks is increasing at several tens of percent per year, driven by video streaming, cloud computing, social media and mobile applications. To sustain this traffic growth, an upgrade of the widely deployed 100-Gbit/s long-haul optical systems, based on the polarization-multiplexed quadrature phase-shift keying (PM-QPSK) modulation format associated with coherent detection and digital signal processing (DSP), is mandatory. In fact, optical transport techniques enabling a per-channel bit rate beyond 100 Gbit/s have recently been the object of intensive R&D activities, aimed at both improving the spectral efficiency and lowering the cost per bit in fiber transmission systems. In this invited contribution, we review the different available options to scale the per-channel bit rate to 400 Gbit/s and beyond, i.e. symbol-rate increase, use of higher-order quadrature amplitude modulation (QAM) formats and use of super-channels with DSP-enabled spectral shaping and advanced multiplexing technologies. In this analysis, trade-offs of system reach, spectral efficiency and transceiver complexity are addressed. Besides scalability, next generation optical networks will require a high degree of flexibility in the transponders, which should be able to dynamically adapt the transmission rate and bandwidth occupancy to the light path characteristics. In order to increase the flexibility of these transponders (often referred to as "flexponders"), several advanced modulation techniques have recently been proposed, among which sub-carrier multiplexing, hybrid formats (over time, frequency and polarization), and constellation shaping. We review these techniques, highlighting their limits and potential in terms of performance, complexity and flexibility.
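
    To make the symbol-rate/format trade-off concrete, here is a simple illustrative sizing; the numbers are a back-of-the-envelope example, not figures from the paper.

    ```latex
    % Illustrative sizing: PM-16QAM carries 2 polarizations x log2(16) = 8 bits per
    % symbol, so a net 400 Gbit/s channel needs a symbol rate of roughly
    \[
      R_{s} = \frac{400\ \text{Gbit/s}}{2 \cdot \log_{2} 16} = 50\ \text{GBd} ,
    \]
    % i.e. about 62.5 GBd once a 25% FEC overhead is included.
    ```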

  8. Coherent control of quantum dots

    DEFF Research Database (Denmark)

    Johansen, Jeppe; Lodahl, Peter; Hvam, Jørn Märcher

    In recent years much effort has been devoted to the use of semiconductor quantum dotsystems as building blocks for solid-state-based quantum logic devices. One importantparameter for such devices is the coherence time, which determines the number ofpossible quantum operations. From earlier...

  9. Coherent Radiation of Electron Cloud

    International Nuclear Information System (INIS)

    Heifets, S.

    2004-01-01

    The electron cloud in positron storage rings is pinched when a bunch passes by. For short bunches, the radiation due to acceleration of electrons of the cloud is coherent. Detection of such radiation can be used to measure the density of the cloud. The estimate of the power and the time structure of the radiated signal is given in this paper

  10. Asymmetric Penning trap coherent states

    International Nuclear Information System (INIS)

    Contreras-Astorga, Alonso; Fernandez, David J.

    2010-01-01

    By using a matrix technique, which allows the ladder operators to be identified directly, the coherent states of the asymmetric Penning trap are derived as eigenstates of the appropriate annihilation operators. They are compared with those obtained through the displacement operator method.

  11. Optimally cloned binary coherent states

    DEFF Research Database (Denmark)

    Mueller, C. R.; Leuchs, G.; Marquardt, Ch

    2017-01-01

    their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal...

  12. Coherent beam-beam effect

    International Nuclear Information System (INIS)

    Chao, A.W.; Keil, E.

    1979-06-01

    The stability of the coherent beam-beam effect between rigid bunches is studied analytically and numerically for a linear force by evaluating eigenvalues. For a realistic force, the stability is investigated by following the bunches for many revolutions. 4 refs., 13 figs., 2 tabs

  13. Optical coherent control in semiconductors

    DEFF Research Database (Denmark)

    Østergaard, John Erland; Vadim, Lyssenko; Hvam, Jørn Märcher

    2001-01-01

    of quantum control including the recent applications to semiconductors and nanostructures. We study the influence of inhomogeneous broadening in semiconductors on CC results. Photoluminescence (PL) and the coherent emission in four-wave mixing (FWM) is recorded after resonant excitation with phase...

  14. Dialogue Coherence: A Generation Framework

    NARCIS (Netherlands)

    Beun, R.J.; Eijk, R.M. van

    2007-01-01

    This paper presents a framework for the generation of coherent elementary conversational sequences at the speech act level. We will embrace the notion of a cooperative dialogue game in which two players produce speech acts to transfer relevant information with respect to their commitments.

  15. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    Science.gov (United States)

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  16. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i temporal scalability with three temporal layers, (ii spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.

  17. SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS

    Data.gov (United States)

    National Aeronautics and Space Administration — Scalable time series change detection for biomass monitoring using Gaussian process. Varun Chandola and Ranga Raju Vatsavai. Abstract: Biomass monitoring, ...

  18. On Scalability and Replicability of Smart Grid Projects—A Case Study

    Directory of Open Access Journals (Sweden)

    Lukas Sigrist

    2016-03-01

    This paper studies the scalability and replicability of smart grid projects. Currently, most smart grid projects are still in the R&D or demonstration phases. The full roll-out of the tested solutions requires a suitable degree of scalability and replicability to prevent project demonstrators from remaining local experimental exercises. Scalability and replicability are the preliminary requisites to perform scaling-up and replication successfully; therefore, scalability and replicability allow for, or at least reduce the barriers to, the growth and reuse of the results of project demonstrators. The paper proposes factors that influence and condition a project’s scalability and replicability. These factors involve technical, economic, regulatory and stakeholder-acceptance-related aspects, and they describe requirements for scalability and replicability. In order to assess and evaluate the identified scalability and replicability factors, data has been collected from European and national smart grid projects by means of a survey, reflecting the projects’ view and results. The evaluation of the factors allows the status quo of on-going projects with respect to scalability and replicability to be quantified, i.e., it provides feedback on to what extent projects take these factors into account and on whether the projects’ results and solutions are actually scalable and replicable.

  19. Ultracold molecules: vehicles to scalable quantum information processing

    International Nuclear Information System (INIS)

    Brickman Soderberg, Kathy-Anne; Gemelke, Nathan; Chin Cheng

    2009-01-01

    In this paper, we describe a novel scheme to implement scalable quantum information processing using Li-Cs molecular states to entangle ⁶Li and ¹³³Cs ultracold atoms held in independent optical lattices. The ⁶Li atoms will act as quantum bits to store information and ¹³³Cs atoms will serve as messenger bits that aid in quantum gate operations and mediate entanglement between distant qubit atoms. Each atomic species is held in a separate optical lattice and the atoms can be overlapped by translating the lattices with respect to each other. When the messenger and qubit atoms are overlapped, targeted single-spin operations and entangling operations can be performed by coupling the atomic states to a molecular state with radio-frequency pulses. By controlling the frequency and duration of the radio-frequency pulses, entanglement can be either created or swapped between a qubit-messenger pair. We estimate operation fidelities for entangling two distant qubits and discuss scalability of this scheme and constraints on the optical lattice lasers. Finally we demonstrate experimental control of the optical potentials sufficient to translate atoms in the lattice.

  20. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Science.gov (United States)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower usually is considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV K⁻¹ T⁻¹ at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
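
    The linear scaling of the voltage with wire length follows directly from the definition of the Nernst coefficient; the relation below is the standard textbook form, with symbols chosen here for illustration (N: Nernst coefficient, B: axial field, dT/dr: radial gradient, L: total wire length).

    ```latex
    % Transverse (Nernst) field and the voltage accumulated along a wire of length L,
    % wound so that the radial gradient dT/dr and the axial field B are both
    % perpendicular to the local wire direction:
    \[
      E_{\parallel} = N\, B\, \frac{dT}{dr} ,
      \qquad
      V = \int_{0}^{L} E_{\parallel}\, d\ell \;\approx\; N\, B\, \frac{dT}{dr}\, L .
    \]
    ```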

  1. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS), confirmed by data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  2. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications

    Science.gov (United States)

    Yu, Anthony C.; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M.; Sevit, Alex M.; Tibbitt, Mark W.; Acosta, Jesse D.; Zhang, Tony; Franzia, Paul W.; Langer, Robert; Appel, Eric A.

    2016-12-01

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer-nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.

  3. A highly scalable peptide-based assay system for proteomics.

    Directory of Open Access Journals (Sweden)

    Igor A Kozlov

    Full Text Available We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.

  4. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

    The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweep-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweep-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
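
    To see why a sweep-like recurrence can be recast as a prefix (scan) problem at all, consider the 1D toy below: the cell-to-cell update is an affine map, affine maps compose associatively, and any generic parallel scan (for example cyclic reduction) can therefore evaluate all partial compositions in logarithmic depth. This is a schematic illustration, not the radlib/SCEPTRE implementation; the serial loop here stands in for the parallel scan.

    ```python
    # 1D toy: psi[i] = a[i]*psi[i-1] + b[i] solved via an inclusive scan over
    # affine maps (a, b). Composition of affine maps is associative, which is
    # exactly the property a parallel prefix algorithm needs.
    import numpy as np

    def compose(f, g):
        """Affine-map composition: apply g first, then f (each map is a pair (a, b))."""
        a1, b1 = g
        a2, b2 = f
        return (a2 * a1, a2 * b1 + b2)

    def prefix_sweep(a, b, psi0):
        """Return all psi[i] via an inclusive scan over the compose() operator."""
        maps = list(zip(a, b))
        scanned = [maps[0]]
        # On a parallel machine this loop becomes a log-depth tree (cyclic reduction).
        for m in maps[1:]:
            scanned.append(compose(m, scanned[-1]))
        return np.array([ai * psi0 + bi for ai, bi in scanned])

    # Cross-check against the ordinary serial sweep.
    rng = np.random.default_rng(0)
    a, b = rng.random(8), rng.random(8)
    psi, serial = 1.0, []
    for ai, bi in zip(a, b):
        psi = ai * psi + bi
        serial.append(psi)
    assert np.allclose(prefix_sweep(a, b, 1.0), serial)
    ```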

  5. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) has recently become an indispensable part of 'Big Data', the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.

  6. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

    We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work for a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows including turbulence on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently but with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.

  7. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment with (and improve) our implementation as well as adapt it to new use cases.

  8. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Directory of Open Access Journals (Sweden)

    Zihao Yang

    2017-09-01

    Full Text Available The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/KT at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
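
    To make the stated linear length scaling concrete, a small back-of-the-envelope sketch follows. Apart from the quoted Nernst coefficient, all numbers are assumed placeholders, not values from the paper.

        # Rough estimate of the transverse Nernst voltage for a coiled wire, assuming
        # V ~ N_coeff * B * (dT/dr) * L for a wire of total length L in axial field B.
        N_coeff = -2.6e-6      # V/(K*T), coefficient quoted in the abstract
        B = 0.1                # T, assumed axial magnetic field
        dT_dr = 1.0e3          # K/m, assumed radial temperature gradient
        for L in (0.1, 1.0, 10.0):                 # wire length in metres
            print(L, N_coeff * B * dT_dr * L)      # voltage grows linearly with L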

  9. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders of magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
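
    To illustrate the voting structure that makes Hough-style detection data-parallel, a generic circle-Hough sketch is shown below. It is our own illustration; the paper uses a two-stage Hough variant for ellipses, which is more involved.

        # Minimal Hough voting sketch (circles, not the paper's two-stage ellipse
        # variant): each edge point votes for candidate centres at a known radius,
        # and the per-point votes are independent, hence trivially parallel.
        import numpy as np

        def hough_circle_centres(edge_points, radius, shape, n_angles=90):
            acc = np.zeros(shape, dtype=np.int32)
            thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            for (x, y) in edge_points:
                cx = np.round(x - radius * np.cos(thetas)).astype(int)
                cy = np.round(y - radius * np.sin(thetas)).astype(int)
                ok = (0 <= cx) & (cx < shape[0]) & (0 <= cy) & (cy < shape[1])
                np.add.at(acc, (cx[ok], cy[ok]), 1)
            return acc

        # Synthetic test: points on a circle of radius 20 centred at (50, 50).
        t = np.linspace(0, 2 * np.pi, 200)
        pts = np.c_[50 + 20 * np.cos(t), 50 + 20 * np.sin(t)]
        acc = hough_circle_centres(pts, 20, (100, 100))
        print(np.unravel_index(acc.argmax(), acc.shape))   # close to (50, 50)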

  10. On eliminating synchronous communication in molecular simulations to improve scalability

    Science.gov (United States)

    Straatsma, T. P.; Chavarría-Miranda, Daniel G.

    2013-12-01

    Molecular dynamics simulation, as a complementary tool to experimentation, has become an important methodology for the understanding and design of molecular systems as it provides access to properties that are difficult, impossible or prohibitively expensive to obtain experimentally. Many of the available software packages have been parallelized to take advantage of modern massively concurrent processing resources. The challenge in achieving parallel efficiency is commonly attributed to the fact that molecular dynamics algorithms are communication intensive. This paper illustrates how an appropriately chosen data distribution and asynchronous one-sided communication approach can be used to effectively deal with the data movement within the Global Arrays/ARMCI programming model framework. A new put_notify capability is presented here, allowing the implementation of the molecular dynamics algorithm without any explicit global or local synchronization or global data reduction operations. In addition, this push-data model is shown to very effectively allow hiding data communication behind computation. Rather than data movement or explicit global reductions, the implicit synchronization of the algorithm becomes the primary challenge for scalability. Without any explicit synchronous operations, the scalability of molecular simulations is shown to depend only on the ability to evenly balance computational load.
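
    The push-data-and-notify idea can be caricatured with plain threads as below. This is our own sketch of the pattern only; the paper's put_notify is a one-sided ARMCI/Global Arrays operation between nodes, which this single-process emulation does not reproduce.

        # Sketch of a push-data/notify pattern: the producer writes into the target's
        # buffer and sets a flag; no global barrier or reduction is involved.
        import threading

        buffer, ready = {}, threading.Event()

        def producer():
            buffer["halo"] = [1.0, 2.0, 3.0]   # "put" the data
            ready.set()                        # notify the consumer

        def consumer():
            ready.wait()                       # other work could overlap before this point
            print("received", buffer["halo"])

        t1, t2 = threading.Thread(target=producer), threading.Thread(target=consumer)
        t2.start(); t1.start(); t1.join(); t2.join()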

  11. Shape-changing interfaces:

    DEFF Research Database (Denmark)

    Rasmussen, Majken Kirkegård; Pedersen, Esben Warming; Petersen, Marianne Graves

    2015-01-01

    Shape change is increasingly used in physical user interfaces, both as input and output. Yet, the progress made and the key research questions for shape-changing interfaces are rarely analyzed systematically. We review a sample of existing work on shape-changing interfaces to address these shortcomings. We identify eight types of shape that are transformed in various ways to serve both functional and hedonic design purposes. Interaction with shape-changing interfaces is simple and rarely merges input and output. Three questions are discussed based on the review: (a) which design purposes may...

  12. Coherence for vectorial waves and majorization

    OpenAIRE

    Luis, Alfredo

    2016-01-01

    We show that majorization provides a powerful approach to the coherence conveyed by partially polarized transversal electromagnetic waves. Here we present the formalism, provide some examples and compare with standard measures of polarization and coherence of vectorial waves.

  13. Acquisition System and Detector Interface for Power Pulsed Detectors

    CERN Document Server

    Cornat, R

    2012-01-01

    A common DAQ system is being developed within the CALICE collaboration. It provides a flexible and scalable architecture based on Gigabit Ethernet and 8b/10b serial links in order to transmit slow control data, fast signals, or readout data. A detector interface (DIF) is used to connect detectors to the DAQ system, based on a single firmware shared among the collaboration but targeted at various physical implementations. The DIF allows packets of data to be built, stored and queued, as well as controlling the detectors, and provides USB and serial link connectivity. The overall architecture is foreseen to manage several hundred thousand channels.

  14. Solution-Processing of Organic Solar Cells: From In Situ Investigation to Scalable Manufacturing

    KAUST Repository

    Abdelsamie, Maged

    2016-12-05

    Photovoltaics provide a feasible route to fulfilling the substantial increase in demand for energy worldwide. Solution processable organic photovoltaics (OPVs) have attracted attention in the last decade because of the promise of low-cost manufacturing of sufficiently efficient devices at high throughput on large-area rigid or flexible substrates with potentially low energy and carbon footprints. In OPVs, the photoactive layer is made of a bulk heterojunction (BHJ) layer and is typically composed of a blend of electron-donating (D) and electron-accepting (A) materials which phase separate at the nanoscale and form a heterojunction at the D-A interface that plays a crucial role in the generation of charges. Despite the tremendous progress that has been made in increasing the efficiency of organic photovoltaics over the last few years, with power conversion efficiency increasing from 8% to 13% over the duration of this PhD dissertation, there have been numerous debates on the mechanisms of formation of the crucial BHJ layer and few clues about how to successfully transfer these lessons to scalable processes. This stems in large part from a lack of understanding of how BHJ layers form from solution. This lack of understanding makes it challenging to design BHJs and to control their formation in laboratory-based processes, such as spin-coating, let alone their successful transfer to scalable processes required for the manufacturing of organic solar cells. Consequently, the OPV community has in recent years sought out to better understand the key characteristics of state of the art lab-based organic solar cells and made efforts to shed light on how the BHJ forms in laboratory-based processes as well as in scalable processes. We take the view that understanding the formation of the solution-processed bulk heterojunction (BHJ) photoactive layer, where crucial photovoltaic processes take place, is one of the most crucial steps to developing strategies towards the

  15. Electron beam instrumentation techniques using coherent radiation

    International Nuclear Information System (INIS)

    Wang, D.X.

    1997-01-01

    Much progress has been made on coherent radiation research since coherent synchrotron radiation was first observed in 1989. The use of coherent radiation as a bunch length diagnostic tool has been studied by several groups. In this paper, brief introductions to coherent radiation and far-infrared measurement are given, the progress and status of their beam diagnostic application are reviewed, different techniques are described, and their advantages and limitations are discussed

  16. On P-coherent endomorphism rings

    Indian Academy of Sciences (India)

    A ring is called right P-coherent if every principal right ideal is finitely presented. Let M_R be a right R-module. We study the P-coherence of the endomorphism ring S of M_R. It is shown that S is a right P-coherent ring if and only if every endomorphism of M_R has a pseudokernel in add M_R; S is a left P-coherent ring if and ...

  17. On Radar Resolution in Coherent Change Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Bickel, Douglas L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    It is commonly observed that resolution plays a role in coherent change detection. Although this is the case, the relationship of the resolution in coherent change detection is not yet defined. In this document, we present an analytical method of evaluating this relationship using detection theory. Specifically, we examine the effect of resolution on receiver operating characteristic curves for coherent change detection.

  18. Some remarks on quantum coherence theory

    International Nuclear Information System (INIS)

    Burzynski, A.

    1982-01-01

    This paper is devoted to the basic topics connected with coherence in quantum mechanics and quantum theory of radiation. In particular the formalism of the normal ordered coherence functions in cases of one and many degrees of freedom is described in detail. A few examples illustrate the analysis of the coherence properties of the various quantum states of the field of radiation. (author)

  19. Coherence-driven argumentation to norm consensus

    NARCIS (Netherlands)

    Joseph, S.; Prakken, H.

    2009-01-01

    In this paper coherence-based models are proposed as an alternative to logic-based BDI and argumentation models for the reasoning of normative agents. A model is provided for how two coherence-based agents can deliberate on how to regulate a domain of interest. First a deductive coherence model

  20. Coherent states for polynomial su(2) algebra

    International Nuclear Information System (INIS)

    Sadiq, Muhammad; Inomata, Akira

    2007-01-01

    A class of generalized coherent states is constructed for a polynomial su(2) algebra in a group-free manner. As a special case, the coherent states for the cubic su(2) algebra are discussed. The states so constructed reduce to the usual SU(2) coherent states in the linear limit

  1. Propagation of coherent light pulses with PHASE

    Science.gov (United States)

    Bahrdt, J.; Flechsig, U.; Grizzoli, W.; Siewert, F.

    2014-09-01

    The current status of the software package PHASE for the propagation of coherent light pulses along a synchrotron radiation beamline is presented. PHASE is based on an asymptotic expansion of the Fresnel-Kirchhoff integral (stationary phase approximation) which is usually truncated at the 2nd order. The limits of this approximation as well as possible extensions to higher orders are discussed. The accuracy is benchmarked against a direct integration of the Fresnel-Kirchhoff integral. Long range slope errors of optical elements can be included by means of 8th order polynomials in the optical element coordinates w and l. Only recently, a method for the description of short range slope errors has been implemented. The accuracy of this method is evaluated and examples for realistic slope errors are given. PHASE can be run either from a built-in graphical user interface or from any script language. The latter method provides substantial flexibility. Optical elements including apertures can be combined. Complete wave packages can be propagated, as well. Fourier propagators are included in the package, thus, the user may choose between a variety of propagators. Several means to speed up the computation time were tested - among them are the parallelization in a multi core environment and the parallelization on a cluster.

  2. Reservation system with graphical user interface

    KAUST Repository

    Mohamed, Mahmoud A. Abdelhamid; Jamjoom, Hani T.; Podlaseck, Mark E.; Qu, Huiming; Shae, Zon-Yin; Sheopuri, Anshul

    2012-01-01

    Techniques for providing a reservation system are provided. The techniques include displaying a scalable visualization object, wherein the scalable visualization object comprises an expanded view element of the reservation system depicting

  3. Propagation of superconducting coherence via chiral quantum-Hall edge channels.

    Science.gov (United States)

    Park, Geon-Hyoung; Kim, Minsoo; Watanabe, Kenji; Taniguchi, Takashi; Lee, Hu-Jong

    2017-09-08

    Recently, there has been significant interest in superconducting coherence via chiral quantum-Hall (QH) edge channels at an interface between a two-dimensional normal conductor and a superconductor (N-S) in a strong transverse magnetic field. In the field range where the superconductivity and the QH state coexist, the coherent confinement of electron- and hole-like quasiparticles by the interplay of Andreev reflection and the QH effect leads to the formation of Andreev edge states (AES) along the N-S interface. Here, we report the electrical conductance characteristics via the AES formed in graphene-superconductor hybrid systems in a three-terminal configuration. This measurement configuration, involving the QH edge states outside a graphene-S interface, allows the detection of the longitudinal and QH conductance separately, excluding the bulk contribution. Convincing evidence for the superconducting coherence and its propagation via the chiral QH edge channels is provided by the conductance enhancement on both the upstream and the downstream sides of the superconducting electrode as well as in bias spectroscopy results below the superconducting critical temperature. Propagation of superconducting coherence via QH edge states was more evident as more edge channels participate in the Andreev process for high filling factors with reduced valley-mixing scattering.

  4. Scalable graphene production: perspectives and challenges of plasma applications

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g·h^-1·m^-2 was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various

  5. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC) through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% respective reductions for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by

  6. Scalable graphene production: perspectives and challenges of plasma applications.

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g·h^-1·m^-2 was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of

  7. Diffusion between evolving interfaces

    International Nuclear Information System (INIS)

    Juntunen, Janne; Merikoski, Juha

    2010-01-01

    Diffusion in an evolving environment is studied by continuous-time Monte Carlo simulations. Diffusion is modeled by continuous-time random walkers on a lattice, in a dynamic environment provided by bubbles between two one-dimensional interfaces driven symmetrically towards each other. For one-dimensional random walkers constrained by the interfaces, the bubble size distribution dominates diffusion. For two-dimensional random walkers, it is also controlled by the topography and dynamics of the interfaces. The results of the one-dimensional case are recovered in the limit where the interfaces are strongly driven. Even with simple hard-core repulsion between the interfaces and the particles, diffusion is found to depend strongly on the details of the dynamical rules of particles close to the interfaces.

  8. User interface support

    Science.gov (United States)

    Lewis, Clayton; Wilde, Nick

    1989-01-01

    Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.

  9. Complex Interfaces Under Change

    DEFF Research Database (Denmark)

    Rosbjerg, Dan

    The hydrosphere is dynamic across the major compartments of the Earth system: the atmosphere, the oceans and seas, the land surface water, and the groundwater within the strata below the two last compartments. The global geography of the hydrosphere essentially depends on thermodynamic and mechanical processes that develop within this structure. Water-related processes at the interfaces between the compartments are complex, depending both on the interface itself, and on the characteristics of the interfaced compartments. Various aspects of global change directly or indirectly impact these interfaces and interfaced compartments and processes. Climate, sea-level, oceanographic currents and hydrological processes are all affected, while anthropogenic changes are often intense in the geographic settings corresponding to such interfaces.

  10. Coherence in electron energy loss spectrometry

    International Nuclear Information System (INIS)

    Schattschneider, P.; Werner, W.S.M.

    2005-01-01

    Coherence effects in electron energy loss spectrometry (EELS) and in energy filtering are largely neglected although they occur frequently due to Bragg scattering in crystals. We discuss how coherence in the inelastically scattered wave field can be described by the mixed dynamic form factor (MDFF), and how it relates to the density matrix of the scattered electrons. Among the many aspects of 'inelastic coherence' are filtered high-resolution images, dipole-forbidden transitions, coherence in plasma excitations, errors in chemical microanalysis, coherent double plasmons, and circular dichroism

  11. Quantum coherence and correlations in quantum system

    Science.gov (United States)

    Xi, Zhengjun; Li, Yongming; Fan, Heng

    2015-01-01

    Criteria for measures quantifying quantum coherence, a unique property of quantum systems, have been proposed recently. In this paper, we first give an uncertainty-like expression relating the coherence and the entropy of a quantum system. This finding allows us to discuss the relations between entanglement and coherence. Further, we discuss in detail the relations among the coherence, the discord and the deficit in the bipartite quantum system. We show that the one-way quantum deficit is equal to the sum of the quantum discord and the relative entropy of coherence of the measured subsystem. PMID:26094795

  12. Theory of coherent resonance energy transfer

    International Nuclear Information System (INIS)

    Jang, Seogjoo; Cheng, Y.-C.; Reichman, David R.; Eaves, Joel D.

    2008-01-01

    A theory of coherent resonance energy transfer is developed combining the polaron transformation and a time-local quantum master equation formulation, which is valid for arbitrary spectral densities including common modes. The theory contains inhomogeneous terms accounting for nonequilibrium initial preparation effects and elucidates how quantum coherence and nonequilibrium effects manifest themselves in the coherent energy transfer dynamics beyond the weak resonance coupling limit of the Foerster and Dexter (FD) theory. Numerical tests show that quantum coherence can cause significant changes in steady state donor/acceptor populations from those predicted by the FD theory and illustrate delicate cooperation of nonequilibrium and quantum coherence effects on the transient population dynamics.

  13. Parent-martensite interface structure in ferrous systems

    International Nuclear Information System (INIS)

    Ma, X.; Pond, R.C.

    2007-01-01

    Recently, a Topological Model of martensitic transformations has been presented wherein the habit plane is a semi-coherent structure, and the transformation mechanism is shown explicitly to be diffusionless. This approach is used here to model martensitic transformations in ferrous alloys. The habit plane comprises coherent (1 1 1) γ parallel (0 1 1) α terraces where the coherency strains are accommodated by a network of dislocations, originating in the martensite phase, and disconnections (transformation dislocations). The disconnections can move conservatively across the interface, thereby effecting the transformation. Since the disconnections exhibit step character, the overall habit plane deviates from the terrace plane. A range of network geometries is predicted corresponding to orientation relationships varying from Nishiyama-Wasserman to Kurdjumov-Sachs. This range of solutions includes habit planes close to {2 9 5}, {5 7 5} and {1 2 1}, in good agreement with experimental observations in various ferrous alloys

  14. Refinement by interface instantiation

    DEFF Research Database (Denmark)

    Hallerstede, Stefan; Hoang, Thai Son

    2012-01-01

    be easily refined. Our first contribution hence is a proposal for a new construct called interface that encapsulates the external variables, along with a mechanism for interface instantiation. Using the new construct and mechanism, external variables can be refined consistently. Our second contribution is an approach for verifying the correctness of Event-B extensions using the supporting Rodin tool. We illustrate our approach by proving the correctness of interface instantiation.

  15. Universal computer interfaces

    CERN Document Server

    Dheere, RFBM

    1988-01-01

    Presents a survey of the latest developments in the field of the universal computer interface, resulting from a study of the world patent literature. Illustrating the state of the art today, the book ranges from basic interface structure, through parameters and common characteristics, to the most important industrial bus realizations. Recent technical enhancements are also included, with special emphasis devoted to the universal interface adapter circuit. Comprehensively indexed.

  16. The global coherence initiative: creating a coherent planetary standing wave.

    Science.gov (United States)

    McCraty, Rollin; Deyhle, Annette; Childre, Doc

    2012-03-01

    The much anticipated year of 2012 is now here. Amidst the predictions and cosmic alignments that many are aware of, one thing is for sure: it will be an interesting and exciting year as the speed of change continues to increase, bringing both chaos and great opportunity. One benchmark of these times is a shift in many people from a paradigm of competition to one of greater cooperation. All across the planet, increasing numbers of people are practicing heart-based living, and more groups are forming activities that support positive change and creative solutions for manifesting a better world. The Global Coherence Initiative (GCI) is a science-based, co-creative project to unite people in heart-focused care and intention. GCI is working in concert with other initiatives to realize the increased power of collective intention and consciousness. The convergence of several independent lines of evidence provides strong support for the existence of a global information field that connects all living systems and consciousness. Every cell in our bodies is bathed in an external and internal environment of fluctuating invisible magnetic forces that can affect virtually every cell and circuit in biological systems. Therefore, it should not be surprising that numerous physiological rhythms in humans and global collective behaviors are not only synchronized with solar and geomagnetic activity, but disruptions in these fields can create adverse effects on human health and behavior. The most likely mechanism for explaining how solar and geomagnetic influences affect human health and behavior is a coupling between the human nervous system and resonating geomagnetic frequencies, called Schumann resonances, which occur in the earth-ionosphere resonant cavity, and Alfvén waves. It is well established that these resonant frequencies directly overlap with those of the human brain and cardiovascular system. If all living systems are indeed interconnected and communicate with each other

  17. A Scalable Framework to Detect Personal Health Mentions on Twitter.

    Science.gov (United States)

    Yin, Zhijun; Fabbri, Daniel; Rosenbloom, S Trent; Malin, Bradley

    2015-06-05

    Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual's health. The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner (P<.001). For instance, more than 80% of the tweets about migraines (83/100) and allergies (85
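
    As a generic illustration of the kind of scalable text classifier with feature selection described above, a short sketch follows. The tiny inline tweets, labels, and model choices are placeholders of our own, not the study's corpus, feature set, or classifier.

        # Generic sketch of a health-mention classifier with feature selection (scikit-learn).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline

        tweets = ["my migraine is killing me today", "this traffic is a headache",
                  "diagnosed with hypertension last week", "that movie gave me cancer lol"]
        labels = [1, 0, 1, 0]   # 1 = personal health mention, 0 = figurative/other

        clf = Pipeline([
            ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # sparse word/bigram features
            ("select", SelectKBest(chi2, k=10)),              # keep the most informative features
            ("model", LogisticRegression(max_iter=1000)),
        ])
        clf.fit(tweets, labels)
        print(clf.predict(["my blood pressure has been high all week"]))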

  18. Final Report. Center for Scalable Application Development Software

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  19. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  20. Scalable Task Assignment for Heterogeneous Multi-Robot Teams

    Directory of Open Access Journals (Sweden)

    Paula García

    2013-02-01

    Full Text Available This work deals with the development of a dynamic task assignment strategy for heterogeneous multi-robot teams in typical real world scenarios. The strategy must be efficiently scalable to support problems of increasing complexity with minimum designer intervention. To this end, we have selected a very simple auction-based strategy, which has been implemented and analysed in a multi-robot cleaning problem that requires strong coordination and dynamic complex subtask organization. We will show that the selection of a simple auction strategy provides a linear computational cost increase with the number of robots that make up the team and allows the solving of highly complex assignment problems in dynamic conditions by means of a hierarchical sub-auction policy. To coordinate and control the team, a layered behaviour-based architecture has been applied that allows the reusing of the auction-based strategy to achieve different coordination levels.
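
    As a minimal illustration of the auction idea, a small sketch is given below. It is our own simplification; the paper's strategy adds hierarchical sub-auctions and dynamic reassignment on top of this basic mechanism, and the cost values here are placeholders.

        # Minimal single-item auction sketch: each free robot bids its estimated cost
        # for the current task, and the lowest bid wins.
        def auction_assign(costs):
            """costs[r][t] = cost for robot r to perform task t; one task per round."""
            n_robots, n_tasks = len(costs), len(costs[0])
            assignment, busy = {}, set()
            for t in range(n_tasks):
                bids = [(costs[r][t], r) for r in range(n_robots) if r not in busy]
                if not bids:
                    break                       # more tasks than free robots
                _, winner = min(bids)
                assignment[t] = winner
                busy.add(winner)
            return assignment

        costs = [[2.0, 9.0, 4.0],               # robot 0
                 [3.0, 1.0, 7.0],               # robot 1
                 [8.0, 2.0, 2.5]]               # robot 2
        print(auction_assign(costs))            # {0: 0, 1: 1, 2: 2}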

  1. A Practical and Scalable Tool to Find Overlaps between Sequences

    Directory of Open Access Journals (Sweden)

    Maan Haj Rachid

    2015-01-01

    Full Text Available The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment.
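
    For reference, the all-pairs suffix-prefix problem itself can be stated with a tiny brute-force sketch, shown below for clarity only; the paper's compact prefix tree computes the same overlap table in far better time and space. The reads are made-up examples.

        # Brute-force reference for the all-pairs suffix-prefix (overlap) problem.
        def overlap(s, t):
            """Length of the longest suffix of s that is also a prefix of t."""
            for k in range(min(len(s), len(t)), 0, -1):
                if s.endswith(t[:k]):
                    return k
            return 0

        reads = ["GATTACA", "ACAGTT", "GTTGAT"]
        table = {(i, j): overlap(reads[i], reads[j])
                 for i in range(len(reads)) for j in range(len(reads)) if i != j}
        print(table)   # e.g. "GATTACA" overlaps "ACAGTT" by "ACA", length 3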

  2. A Software and Hardware IPTV Architecture for Scalable DVB Distribution

    Directory of Open Access Journals (Sweden)

    Georg Acher

    2009-01-01

    Full Text Available Many standards and even more proprietary technologies deal with IP-based television (IPTV. But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a light weight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders.

  3. Smartphone based scalable reverse engineering by digital image correlation

    Science.gov (United States)

    Vidvans, Amey; Basu, Saurabh

    2018-03-01

    There is a need for scalable open source 3D reconstruction systems for reverse engineering. This is because most commercially available reconstruction systems are capital and resource intensive. To address this, a novel reconstruction technique is proposed. The technique involves digital image correlation based characterization of surface speeds followed by normalization with respect to angular speed during rigid body rotational motion of the specimen. Proof of concept of the same is demonstrated and validated using simulation and empirical characterization. Towards this, smart-phone imaging and inexpensive off the shelf components along with those fabricated additively using poly-lactic acid polymer with a standard 3D printer are used. Some sources of error in this reconstruction methodology are discussed. It is seen that high curvatures on the surface suppress accuracy of reconstruction. Reasons behind this are delineated in the nature of the correlation function. Theoretically achievable resolution during smart-phone based 3D reconstruction by digital image correlation is derived.
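
    The normalization step described above reduces, for rigid rotation, to r = v / omega for each tracked surface point, since the linear surface speed satisfies v = omega * r. The sketch below uses assumed calibration numbers and is our own illustration, not the paper's pipeline.

        # Recover local radius from DIC-measured surface speed during rigid rotation.
        omega = 2.0                                    # rad/s, measured angular speed
        pixel_size_mm = 0.05                           # mm per pixel (assumed camera calibration)
        frame_rate = 120.0                             # frames per second (assumed)

        speeds_px_per_frame = [4.2, 4.9, 5.6]          # DIC displacement per frame at 3 points
        for s in speeds_px_per_frame:
            v = s * pixel_size_mm * frame_rate         # surface speed in mm/s
            print("radius ~ %.2f mm" % (v / omega))    # r = v / omega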

  4. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and fine detailed fluids such as smoke with fast increasing vortex filaments and smoke particles. The authors propose a novel vortex filaments in grids scheme in which the uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  5. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack; Salihoglu, Semih; Widom, Jennifer; Olukotun, Kunle

    2014-01-01

    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.

  6. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack

    2014-01-01

    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.

  7. Optimization of Hierarchical Modulation for Use of Scalable Media

    Directory of Open Access Journals (Sweden)

    Heneghan Conor

    2010-01-01

    Full Text Available This paper studies Hierarchical Modulation, a transmission strategy for the approaching scalable multimedia over frequency-selective fading channels, for improving the perceptible quality. An optimization strategy for Hierarchical Modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications to make a free choice of the relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. A similar optimization can be used in the multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as implementation examples of this optimization strategy, and demonstrate savings in SNR and improvements in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.

  8. A Scalable Policy and SNMP Based Network Management Framework

    Institute of Scientific and Technical Information of China (English)

    LIU Su-ping; DING Yong-sheng

    2009-01-01

    Traditional SNMP-based network management cannot deal with the task of managing large-scale distributed networks, while policy-based management is one of the effective solutions for network and distributed systems management. However, cross-vendor hardware compatibility is one of the limitations of policy-based management. Devices in current networks mostly support SNMP rather than the Common Open Policy Service (COPS) protocol. By analyzing traditional network management and policy-based network management, a scalable network management framework is proposed. It combines the Internet Engineering Task Force (IETF) framework for policy-based management with SNMP-based network management. By interpreting and translating policy decisions into SNMP messages, policies can be executed on traditional SNMP-based devices.

  9. MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Directory of Open Access Journals (Sweden)

    Prem Prakash Jayaraman

    2014-05-01

    Full Text Available Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data hence reducing the efforts in developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability towards real world applications. The results of evaluation and lessons learned are presented and discussed in this paper.

  10. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    Directory of Open Access Journals (Sweden)

    Kae Nemoto

    2014-08-01

    Full Text Available Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  11. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    Science.gov (United States)

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

    Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Comparing their performance with biological systems, these machines are less efficient by a factor of one million to one billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  12. Implementation of the Timepix ASIC in the Scalable Readout System

    Energy Technology Data Exchange (ETDEWEB)

    Lupberger, M., E-mail: lupberger@physik.uni-bonn.de; Desch, K.; Kaminski, J.

    2016-09-11

    We report on the development of electronics hardware, FPGA firmware and software to provide a flexible multi-chip readout of the Timepix ASIC within the framework of the Scalable Readout System (SRS). The system features FPGA-based zero-suppression and the possibility to read out up to 4×8 chips with a single Front End Concentrator (FEC). By operating several FECs in parallel, in principle an arbitrary number of chips can be read out, exploiting the scaling features of SRS. Specifically, we tested the system with a setup consisting of 160 Timepix ASICs, operated as GridPix devices in a large TPC field cage in a 1 T magnetic field at a DESY test beam facility providing an electron beam of up to 6 GeV. We discuss the design choices, the dedicated hardware components, the FPGA firmware as well as the performance of the system in the test beam.

  13. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou

    2007-07-01

    Full Text Available Based on the Small-World model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes the virtual region and layer as its logical management units. In VRGF, the intra-virtual-region mode is pure P2P, while the inter-virtual-region mode is centralized. Therefore, VRGF is a decentralized framework with some P2P properties. Furthermore, VRGF is able to achieve satisfactory performance in organizing and locating resources at a small cost, and is well adapted to the complicated and dynamic features of scientific collaborations. We have implemented a demonstration VRGF-based Grid prototype, SDG.

  14. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

    Full Text Available Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation based preconditioners. It is also presented how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
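
    For intuition, a minimal one-level additive Schwarz preconditioner on a 1D Laplacian can be sketched as below. This is our own NumPy illustration of the overlapping-subdomain structure M^{-1} = sum_i R_i^T A_i^{-1} R_i only; the preconditioner evaluated in the paper is multilevel, distributed, and far more general.

        # One-level additive Schwarz on a small 1D Laplacian, used inside textbook PCG.
        import numpy as np

        n, nsub, overlap = 60, 4, 3
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)        # 1D Laplacian
        b = np.ones(n)

        size = n // nsub
        subdomains = [np.arange(max(0, i * size - overlap), min(n, (i + 1) * size + overlap))
                      for i in range(nsub)]

        def apply_M_inv(r):
            z = np.zeros_like(r)
            for idx in subdomains:                                   # independent local solves
                z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            return z

        def pcg(A, b, M_inv, iters=60):
            """Preconditioned conjugate gradients."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r); p = z.copy(); rz = r @ z
            for _ in range(iters):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < 1e-10:
                    break
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        x = pcg(A, b, apply_M_inv)
        print(np.linalg.norm(b - A @ x))      # small residual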

  15. A Secure and Scalable Data Communication Scheme in Smart Grids

    Directory of Open Access Journals (Sweden)

    Chunqiang Hu

    2018-01-01

    Full Text Available The concept of smart grid gained tremendous attention among researchers and utility providers in recent years. How to establish a secure communication among smart meters, utility companies, and the service providers is a challenging issue. In this paper, we present a communication architecture for smart grids and propose a scheme to guarantee the security and privacy of data communications among smart meters, utility companies, and data repositories by employing decentralized attribute based encryption. The architecture is highly scalable, which employs an access control Linear Secret Sharing Scheme (LSSS) matrix to achieve a role-based access control. The security analysis demonstrated that the scheme ensures security and privacy. The performance analysis shows that the scheme is efficient in terms of computational cost.
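
    For readers unfamiliar with linear secret sharing, the simplest LSSS instance is Shamir threshold sharing, sketched below. This is our own illustration only; the paper embeds an LSSS access matrix inside decentralized attribute-based encryption, which this sketch does not cover.

        # Shamir (k-of-n) secret sharing over a prime field: a minimal LSSS example.
        import random

        P = 2**61 - 1                       # prime modulus

        def share(secret, k, n):
            """Split 'secret' into n shares, any k of which reconstruct it."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 over the prime field."""
            secret = 0
            for j, (xj, yj) in enumerate(shares):
                num = den = 1
                for m, (xm, _) in enumerate(shares):
                    if m != j:
                        num = num * (-xm) % P
                        den = den * (xj - xm) % P
                secret = (secret + yj * num * pow(den, P - 2, P)) % P
            return secret

        shares = share(123456789, k=3, n=5)
        print(reconstruct(shares[:3]) == 123456789)   # True for any 3 of the 5 shares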

  16. A scalable implementation of RI-SCF on parallel computers

    International Nuclear Information System (INIS)

    Fruechtl, H.A.; Kendall, R.A.; Harrison, R.J.

    1996-01-01

    In order to avoid the integral bottleneck of conventional SCF calculations, the Resolution of the Identity (RI) method is used to obtain an approximate solution to the Hartree-Fock equations. In this approximation only three-center integrals are needed to build the Fock matrix. It has been implemented as part of the NWChem package of portable and scalable ab initio programs for parallel computers. Utilizing the V-approximation, both the Coulomb and exchange contribution to the Fock matrix can be calculated from a transformed set of three-center integrals which have to be precalculated and stored. A distributed in-core method as well as a disk based implementation have been programmed. Details of the implementation as well as the parallel programming tools used are described. We also give results and timings from benchmark calculations
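
    As a rough sketch of the Coulomb build under the V-approximation described above, the NumPy fragment below uses random placeholder "integrals"; the actual NWChem implementation precomputes, transforms, and distributes the three-center integrals, so this only shows the contraction pattern.

        # RI/V-approximation Coulomb (J) build: J_mn = sum_P (mn|P) [V^-1 gamma]_P,
        # with gamma_Q = sum_ls (Q|ls) D_ls.  Placeholder random tensors stand in
        # for the real two- and three-center integrals.
        import numpy as np

        nbf, naux = 10, 30
        rng = np.random.default_rng(0)
        B = rng.standard_normal((nbf, nbf, naux))        # (mu nu | P), placeholder
        B = 0.5 * (B + B.transpose(1, 0, 2))             # symmetric in mu, nu
        V = rng.standard_normal((naux, naux))
        V = V @ V.T + naux * np.eye(naux)                # SPD two-center metric (P|Q), placeholder
        D = rng.standard_normal((nbf, nbf)); D = 0.5 * (D + D.T)   # density matrix, placeholder

        gamma = np.einsum('lsQ,ls->Q', B, D)             # contract density with 3-center integrals
        c = np.linalg.solve(V, gamma)                    # apply the inverse metric
        J = np.einsum('mnP,P->mn', B, c)                 # assemble the Coulomb matrix
        print(J.shape)                                   # (10, 10)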

  17. Scalable Lunar Surface Networks and Adaptive Orbit Access

    Science.gov (United States)

    Wang, Xudong

    2015-01-01

    Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficiency power management, adaptive channel width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.

  18. Electromagnetic Interface Testing Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Electromagnetic Interface Testing facilitysupports such testing asEmissions, Field Strength, Mode Stirring, EMP Pulser, 4 Probe Monitoring/Leveling System, and...

  19. Operational resource theory of total quantum coherence

    Science.gov (United States)

    Yang, Si-ren; Yu, Chang-shui

    2018-01-01

    Quantum coherence is an essential feature of quantum mechanics and is an important physical resource in quantum information. Recently, the resource theory of quantum coherence has been established in parallel with that of entanglement. In a resource theory, a resource can be well defined given three ingredients: the free states, the resource, and the (restricted) free operations. In this paper, we study the resource theory of coherence in a different light, that is, we consider the total coherence defined by the basis-free coherence maximized over all potential bases. We define the distillable total coherence and the total coherence cost and, in both the asymptotic regime and the single-copy regime, show the reversible transformation between a state with a certain total coherence and the state with the unit reference total coherence. Furthermore, we demonstrate that the total coherence can also be completely converted, in equal amount, to the total correlation by the free operations. We also provide alternative understandings of the total coherence based on the entanglement and on the total correlation.
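
    For a concrete handle on the quantities involved, the sketch below computes the relative entropy of coherence in a fixed basis; maximizing it over all bases, which is one way to read the "total coherence" above, gives log2(d) - S(rho) for the relative-entropy measure. This is our own illustration and reading, not code or definitions from the paper.

        # Relative entropy of coherence C(rho) = S(diag(rho)) - S(rho) in a fixed basis,
        # and the basis-maximized value log2(d) - S(rho) for comparison.
        import numpy as np

        def von_neumann_entropy(rho):
            evals = np.linalg.eigvalsh(rho)
            evals = evals[evals > 1e-12]
            return float(-(evals * np.log2(evals)).sum())

        def rel_entropy_coherence(rho):
            diag = np.diag(np.diag(rho))             # dephased state in the computational basis
            return von_neumann_entropy(diag) - von_neumann_entropy(rho)

        plus = np.array([[0.5, 0.5], [0.5, 0.5]])    # |+><+|: one bit of coherence
        print(rel_entropy_coherence(plus))                       # ~1.0
        print(np.log2(2) - von_neumann_entropy(plus))            # ~1.0 (basis-maximized value)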

  20. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...