WorldWideScience

Sample records for scalable coherent interface

  1. Application of the Scalable Coherent Interface to Data Acquisition at LHC

    CERN Multimedia

    2002-01-01

RD24 : The RD24 activities in 1996 were dominated by test and integration of PCI-SCI bridges for VME-bus and for PCs for the 1996 milestones. In spite of the dispersion of RD24 membership into the ATLAS, ALICE and the proposed LHC-B experiments, collaboration and sharing of resources of SCI laboratories and equipment continued with excellent results and several doctoral theses. The availability of cheap PCI-SCI adapters has allowed construction of VME multicrate testbenches based on a variety of VME processors and workstations. Transparent memory-to-memory accesses between remote PCI buses over SCI have been established under the Linux, Lynx-OS and Windows-NT operating systems as a proof that scalable multicrate systems are ready to be implemented with off-the-shelf products. Commercial SCI-PCI adapters are based on a PCI-SCI ASIC from Dolphin. The FPGA-based PCI-SCI adapter, designed by CERN and LBL for data acquisition at LHC and STAR, allows addition of DAQ functions. The step from multicrate systems towa...

  2. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a “superatom,” is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of 2 orders of magnitude scalable superatoms utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.

  3. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Science.gov (United States)

Zeiher, Johannes; Schauß, Peter; Hild, Sebastian; Macrì, Tommaso; Bloch, Immanuel; Gross, Christian

    2015-07-01

    Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a "superatom," is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of 2 orders of magnitude scalable superatoms utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.
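The square-root scaling of the optical coupling reported in the abstract is the standard collective enhancement in a fully blockaded ensemble, where N atoms share a single excitation. A minimal sketch (the function name is illustrative, not from the paper):

```python
import math

def collective_rabi(omega_single: float, n_atoms: int) -> float:
    """Collective Rabi frequency of a fully blockaded N-atom ensemble:
    coupling to the shared single excitation is enhanced by sqrt(N)."""
    return math.sqrt(n_atoms) * omega_single

# 100 atoms enhance the single-atom coupling tenfold
print(collective_rabi(1.0, 100))
```

This is the scaling the authors verify experimentally by varying the atom number with single-site control over the Mott insulator.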

  4. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)
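In the simplest linear picture, the two-coupled-cavity network studied in the preliminary experiment can be modeled by a 2×2 mode-coupling matrix whose eigenvalues give the normal-mode splitting. A sketch with illustrative (not measured) parameters:

```python
import numpy as np

# Two identical cavities at common frequency omega0, coupled with strength g
# (arbitrary units; values are illustrative, not from the experiment)
omega0, g = 0.0, 1.0
H = np.array([[omega0, g],
              [g, omega0]])

# Normal modes split symmetrically to omega0 - g and omega0 + g
modes = np.linalg.eigvalsh(H)
print(modes)
```

The CQFC formalism generalizes this kind of linear network model to include the quantum noise ports and feedback connections between components.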

  5. Quantum Computing and Control by Optical Manipulation of Molecular Coherences: Towards Scalability

    Science.gov (United States)

    2007-09-14

    coherence. Also, significant progress has been made in approaching the single molecule limit in TFRCARS implementations - a crucial step in considering scalable quantum computing using the molecular Hilbert space and nonlinear optics.

  6. Interface induced spin-orbit interaction in silicon quantum dots and prospects of scalability

    Science.gov (United States)

    Ferdous, Rifat; Wai, Kok; Veldhorst, Menno; Hwang, Jason; Yang, Henry; Klimeck, Gerhard; Dzurak, Andrew; Rahman, Rajib

A scalable quantum computing architecture requires reproducibility of key qubit properties, such as resonance frequency and coherence time. Randomness in these properties would necessitate individual knowledge of each qubit in a quantum computer. Spin qubits hosted in silicon (Si) quantum dots (QDs) are promising as a potential building block for a large-scale quantum computer because of their long coherence times. The Stark shift of the electron g-factor in these QDs has been used to selectively address multiple qubits. From atomistic tight-binding studies, we investigated the effect of interface non-ideality on the Stark shift of the g-factor in a Si QD. We find that both the sign and magnitude of the Stark shift change depending on the location of a monoatomic step at the interface relative to the dot center. Thus the presence of interface steps in these devices will cause variability in the electron g-factor and its Stark shift based on the location of the qubit. This behavior will also cause varying sensitivity to charge noise from one qubit to another, which will randomize the dephasing times T2*. This predicted device-to-device variability was recently observed experimentally in three qubits fabricated at a Si/SiO2 interface, which validates the issues discussed.

  7. Coherent Josephson qubit suitable for scalable quantum integrated circuits.

    Science.gov (United States)

    Barends, R; Kelly, J; Megrant, A; Sank, D; Jeffrey, E; Chen, Y; Yin, Y; Chiaro, B; Mutus, J; Neill, C; O'Malley, P; Roushan, P; Wenner, J; White, T C; Cleland, A N; Martinis, John M

    2013-08-23

    We demonstrate a planar, tunable superconducting qubit with energy relaxation times up to 44 μs. This is achieved by using a geometry designed to both minimize radiative loss and reduce coupling to materials-related defects. At these levels of coherence, we find a fine structure in the qubit energy lifetime as a function of frequency, indicating the presence of a sparse population of incoherent, weakly coupled two-level defects. We elucidate this defect physics by experimentally varying the geometry and by a model analysis. Our "Xmon" qubit combines facile fabrication, straightforward connectivity, fast control, and long coherence, opening a viable route to constructing a chip-based quantum computer.

  8. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    Science.gov (United States)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  9. Goodbye to WIMPs: A Scalable Interface for ALMA Operations

    Science.gov (United States)

    Schwarz, J.; Pietriga, E.; Schilling, M.; Grosbol, P.

    2011-07-01

The operators of the ALMA Observatory will monitor and control more than 50 mm/submm radio antennas and their associated instrumentation from an operations site that is separated from this hardware by 35-50 km. Software that enables them to identify trouble spots and react to failures quickly in this environment will be critical to the safe and efficient functioning of the observatory. Early commissioning of ALMA uses an operator interface implemented with a standard window, icon, menu, pointing device (WIMP) toolkit. Early experience indicates that this paradigm will not scale well as the number of antennas approaches its full complement. Operators lose time as they manipulate overlapping or tabbed windows to drill down to detailed diagnostic data, losing a feeling for "where they are" in the process. The WIMP model reaches its limits when there is so much information to present to users that they cannot focus on details while maintaining a view from above. To simplify the operators' tasks and let them concentrate on the real issues at hand rather than continually re-organizing their use of screen space, we are replacing the existing top-level interface with a multi-scale interface that takes advantage of semantic zooming, dynamic network visualization and other advanced filtering, navigation and visualization features. Following the first of several planned participatory design workshops, we have developed prototypes to show how users' needs can be met with the kinds of navigation that become possible when the restrictions of the WIMP model are lifted. Cycles of design and implementation coupled with active user feedback will characterize this project up through deployment.

  10. Scalable wide-field optical coherence tomography-based angiography for in vivo imaging applications.

    Science.gov (United States)

    Xu, Jingjiang; Wei, Wei; Song, Shaozhen; Qi, Xiaoli; Wang, Ruikang K

    2016-05-01

Recent advances in optical coherence tomography (OCT)-based angiography have demonstrated a variety of biomedical applications in the diagnosis and therapeutic monitoring of diseases with vascular involvement. While promising, its imaging field of view (FOV) is still limited (typically less than 9 mm²), which slows its clinical acceptance. In this paper, we report a high-speed spectral-domain OCT operating at 1310 nm that enables a wide FOV of up to 750 mm². Using an optical microangiography (OMAG) algorithm, we are able to map vascular networks within living biological tissues. Thanks to a 2,048-pixel line-scan InGaAs camera operating at a 147 kHz scan rate, the system delivers a ranging depth of ~7.5 mm and provides wide-field OCT-based angiography in a single data acquisition. We implement two imaging modes (i.e., wide-field mode and high-resolution mode) in the OCT system, which gives a highly scalable FOV with flexible lateral resolution. We demonstrate scalable wide-field vascular imaging of multiple fingernail beds in humans and of the whole brain in mice with the skull left intact in a single 3D scan, promising new opportunities for wide-field OCT-based angiography in many clinical applications.
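For scale, the acquisition-time budget implied by the quoted 147 kHz A-line rate can be estimated with simple arithmetic (the lateral sampling grid below is an assumption for illustration, not a figure from the paper):

```python
a_line_rate_hz = 147_000   # line-scan camera rate quoted in the abstract
a_lines = 1000 * 1000      # assumed 1000 x 1000 lateral sampling grid

acq_time_s = a_lines / a_line_rate_hz
print(f"{acq_time_s:.1f} s per volumetric scan")
```

A denser grid over the full 750 mm² FOV would scale this time proportionally, which is why the scan rate matters for wide-field angiography.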

  11. Nanophotonic coherent light-matter interfaces based on rare-earth-doped crystals

    Science.gov (United States)

    Zhong, Tian; Kindem, Jonathan M.; Miyazono, Evan; Faraon, Andrei

    2015-09-01

Quantum light-matter interfaces connecting stationary qubits to photons will enable optical networks for quantum communications, precise global time keeping, photon switching and studies of fundamental physics. Rare-earth-ion-doped crystals are state-of-the-art materials for optical quantum memories and quantum transducers between optical photons, microwave photons and spin waves. Here we demonstrate coupling of an ensemble of neodymium rare-earth ions to photonic nanocavities fabricated in the yttrium orthosilicate host crystal. Cavity quantum electrodynamics effects including Purcell enhancement (F=42) and dipole-induced transparency are observed on the highly coherent 4I9/2-4F3/2 optical transition. Fluctuations in the cavity transmission due to statistical fine structure of the atomic density are measured, indicating operation at the quantum level. Coherent optical control of cavity-coupled rare-earth ions is performed via photon echoes. Long optical coherence times (T2 ~ 100 μs) and small inhomogeneous broadening are measured for the cavity-coupled rare-earth ions, thus demonstrating their potential for on-chip scalable quantum light-matter interfaces.
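The Purcell enhancement quoted above follows the standard cavity-QED expression F = (3/4π²)(λ/n)³ Q/V. A quick sketch with the mode volume expressed in cubic wavelengths (the example Q and V values are illustrative, not the paper's):

```python
import math

def purcell_factor(q: float, v_mode: float) -> float:
    """Purcell enhancement for cavity quality factor q and mode volume
    v_mode, with v_mode given in units of (lambda/n)^3."""
    return (3.0 / (4.0 * math.pi**2)) * q / v_mode

# A wavelength-scale mode (v_mode = 1) with Q = 10,000 gives F ~ 760
print(purcell_factor(10_000, 1.0))
```

Nanophotonic cavities achieve large F despite modest Q precisely because their mode volumes approach the (λ/n)³ limit.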

  12. A tunable waveguide-coupled cavity design for scalable interfaces to solid-state quantum emitters

    Directory of Open Access Journals (Sweden)

    Sara L. Mouradian

    2017-04-01

Photonic nanocavities in diamond have emerged as useful structures for interfacing photons and embedded atomic color centers, such as the nitrogen vacancy center. Here, we present a hybrid nanocavity design that enables (i) a loaded quality factor exceeding 50,000 (unloaded Q > 10⁶) with 75% of the enhanced emission collected into an underlying waveguide circuit, (ii) MEMS-based cavity spectral tuning without straining the diamond, and (iii) the use of a diamond waveguide with straight sidewalls to minimize surface defects and charge traps. This system addresses the need for scalable on-chip photonic interfaces to solid-state quantum emitters.
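The relation between the loaded and unloaded quality factors quoted in this design is the usual parallel combination of intrinsic and waveguide-coupling loss rates. A minimal sketch (the coupling Q below is chosen for illustration, not taken from the paper):

```python
def loaded_q(q_intrinsic: float, q_coupling: float) -> float:
    """Loaded Q from intrinsic and waveguide-coupling Q: loss rates
    (inverse Qs) add."""
    return 1.0 / (1.0 / q_intrinsic + 1.0 / q_coupling)

# An intrinsic Q of 1e6 with a coupling Q near 5.3e4 yields a loaded Q ~ 5e4,
# i.e. the cavity is strongly overcoupled to the collection waveguide
print(loaded_q(1e6, 5.3e4))
```

Overcoupling in this regime is what lets a large fraction of the emission (75% here) exit into the waveguide rather than being lost intrinsically.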

  13. Electronic structure and relative stability of the coherent and semi-coherent HfO2/III-V interfaces

    Science.gov (United States)

    Lahti, A.; Levämäki, H.; Mäkelä, J.; Tuominen, M.; Yasir, M.; Dahl, J.; Kuzmin, M.; Laukkanen, P.; Kokko, K.; Punkkinen, M. P. J.

    2018-01-01

III-V semiconductors are prominent alternatives to silicon in metal oxide semiconductor devices. Hafnium dioxide (HfO2) is a promising oxide with a high dielectric constant to replace silicon dioxide (SiO2). The potential of oxide/III-V semiconductor interfaces is diminished by a high density of defects leading to Fermi-level pinning, and the character of the harmful defects has been intensively debated. It is very important to understand the thermodynamics and atomic structures of the interfaces in order to interpret experiments and design methods to reduce the defect density. Various realistic models of HfO2/III-V(100) interfaces free of gap defect states are presented. Relative energies of several coherent and semi-coherent oxide/III-V semiconductor interfaces are determined for the first time. The coherent and semi-coherent interfaces represent the main interface types, based on Ga-O bridges and As (P) dimers, respectively.

  14. Flexible and scalable wavelength multicast of coherent optical OFDM with tolerance against pump phase-noise using reconfigurable coherent multi-carrier pumping.

    Science.gov (United States)

    Lu, Guo-Wei; Bo, Tianwai; Sakamoto, Takahide; Yamamoto, Naokatsu; Chan, Calvin Chun-Kit

    2016-10-03

Recently the ever-growing demand for dynamic and high-capacity services in optical networks has resulted in new challenges that require improved network agility and flexibility in order for network resources to become more "consumable" and dynamic, or elastic, in response to requests from higher network layers. Flexible and scalable wavelength conversion or multicast is one of the most important technologies needed for developing agility in the physical layer. This paper will investigate how, using a reconfigurable coherent multi-carrier as a pump, the multicast scalability and the flexibility in wavelength allocation of the converted signals can be effectively improved. Moreover, the coherence in the multiple carriers prevents the phase noise transformation from the local pump to the converted signals, which is imperative for the phase-noise-sensitive multi-level single- or multi-carrier modulated signal. To verify the feasibility of the proposed scheme, we experimentally demonstrate the wavelength multicast of coherent optical orthogonal frequency division multiplexing (CO-OFDM) signals using a reconfigurable coherent multi-carrier pump, showing flexibility in wavelength allocation, scalability in multicast, and tolerance against pump phase noise. Less than 0.5 dB and 1.8 dB power penalties at a bit-error rate (BER) of 10⁻³ are obtained for the converted CO-OFDM-quadrature phase-shift keying (QPSK) and CO-OFDM-16-ary quadrature amplitude modulation (16QAM) signals, respectively, even when using a distributed feedback laser (DFB) as a pump source. In contrast, with a free-running pumping scheme, the phase noise from DFB pumps severely deteriorates the CO-OFDM signals, resulting in a visible error floor at a BER of 10⁻² in the converted CO-OFDM-16QAM signals.

  15. Morphological characterization of dental prostheses interfaces using optical coherence tomography

    Science.gov (United States)

    Sinescu, Cosmin; Negrutiu, Meda L.; Ionita, Ciprian; Marsavina, Liviu; Negru, Radu; Caplescu, Cristiana; Bradu, Adrian; Topala, Florin; Rominu, Roxana O.; Petrescu, Emanuela; Leretter, Marius; Rominu, Mihai; Podoleanu, Adrian G.

    2010-03-01

Fixed partial prostheses such as integral ceramic, polymer, metal-ceramic or metal-polymer bridges are mainly used in the frontal part of the dental arch (especially the integral bridges). They have to withstand high masticatory stress as well as satisfy esthetic requirements. The masticatory stress may induce fractures of the bridges, which may be triggered by initial material defects or by alterations of the technological process. Fractures of these bridges lead to functional, esthetic and phonetic disturbances which finally render the prosthetic treatment inefficient. Dental interfaces represent one of the most significant aspects in the strength of dental prostheses under masticatory load. The purpose of this study is to evaluate the capability of optical coherence tomography (OCT) to characterize dental prosthesis interfaces. The materials used were several fixed partial prostheses: integral ceramic, polymer, metal-ceramic and metal-polymer bridges. It is important to produce both C-scans and B-scans of the defects in order to differentiate morphological aspects of the bridge infrastructures. The material defects observed with OCT were investigated with micro-CT in order to prove their existence and positions. In conclusion, it is important to have a non-invasive method to investigate dental prosthesis interfaces before the insertion of prostheses in the oral cavity.

  16. Ergatis: a web interface and scalable software system for bioinformatics workflows

    Science.gov (United States)

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634
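The core idea behind a workflow manager of this kind — run each analysis step only after the steps it depends on have finished — reduces to topological ordering of a dependency graph. A minimal illustration (the step names are hypothetical, and this is not the actual Ergatis API):

```python
from graphlib import TopologicalSorter

# Each pipeline step maps to the set of steps it depends on
pipeline = {
    "assemble": set(),
    "annotate": {"assemble"},   # e.g. prokaryotic genome annotation
    "compare":  {"annotate"},   # genome comparison needs annotations
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

A real system like Ergatis layers batch submission to a compute cluster, monitoring, and restartability on top of this ordering.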

  17. A Hardware-Efficient Scalable Spike Sorting Neural Signal Processor Module for Implantable High-Channel-Count Brain Machine Interfaces.

    Science.gov (United States)

    Yang, Yuning; Boling, Sam; Mason, Andrew J

    2017-08-01

Next-generation brain machine interfaces demand a high-channel-count neural recording system to wirelessly monitor the activity of thousands of neurons. A hardware-efficient neural signal processor (NSP) is greatly desirable to ease the data bandwidth bottleneck for a fully implantable wireless neural recording system. This paper demonstrates a complete multichannel spike sorting NSP module that incorporates all of the necessary spike detector, feature extractor, and spike classifier blocks. To meet high-channel-count and implantability demands, each block was designed to be highly hardware efficient and scalable while sharing resources efficiently among multiple channels. To process multiple channels in parallel, scalability analysis was performed, and the utilization of each block was optimized according to its input data statistics and the power, area and/or speed of each block. Based on this analysis, a prototype 32-channel spike sorting NSP scalable module was designed and tested on an FPGA using synthesized datasets over a wide range of signal-to-noise ratios. The design was mapped to 130 nm CMOS to achieve 0.75 μW power and 0.023 mm² area consumption per channel based on post-synthesis simulation results, which permits scalability of digital processing to 690 channels on a 4×4 mm² electrode array.
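The quoted channel count follows directly from the per-channel area budget. Simple arithmetic on the numbers in the abstract (the small margin below the raw quotient is presumably overhead left by the authors):

```python
area_per_channel_mm2 = 0.023   # post-synthesis area per channel (130 nm CMOS)
die_area_mm2 = 4 * 4           # footprint of a 4 x 4 mm^2 electrode array

channels = int(die_area_mm2 / area_per_channel_mm2)
print(channels)  # raw quotient ~695, consistent with the ~690 channels quoted
```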

  18. Detection of chemical interfaces in coherent anti-Stokes Raman scattering microscopy: Dk-CARS. I. Axial interfaces.

    Science.gov (United States)

    Gachet, David; Rigneault, Hervé

    2011-12-01

We develop a full vectorial theoretical investigation of chemical interface detection in conventional coherent anti-Stokes Raman scattering (CARS) microscopy. In Part I, we focus on the detection of axial interfaces (i.e., parallel to the optical axis) following a recent experimental demonstration of the concept [Phys. Rev. Lett. 104, 213905 (2010)]. By revisiting Young's double-slit experiment, we show that background-free microscopy and spectroscopy are achievable through angular analysis of the CARS far-field radiation pattern. This differential CARS in k-space (Dk-CARS) technique is interesting for fast detection of interfaces between molecularly different media. It may be adapted to other coherent and resonant scattering processes.

  19. Advances in clinical application of optical coherence tomography in vitreomacular interface disease

    Directory of Open Access Journals (Sweden)

    Xiao-Li Xing

    2013-08-01

Vitreomacular interface disease mainly includes vitreomacular traction syndrome, idiopathic epiretinal membrane and idiopathic macular hole. Optical coherence tomography (OCT), a noninvasive imaging technique that provides high-resolution cross-sectional images of tissue, has unique high-resolution, non-damaging characteristics and is therefore widely used clinically. For vitreomacular interface disease, OCT provides important information and reference value for diagnosis, differential diagnosis, condition monitoring, quantitative evaluation and choice of treatment. The characteristic anatomical morphology of vitreomacular interface disease on OCT images improves clinical understanding of how these diseases arise and develop. We review advances in the clinical application of OCT in vitreomacular interface disease.

  20. Spin Coherence at the Nanoscale: Polymer Surfaces and Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Epstein, Arthur J. [Professor

    2013-09-10

Breakthrough results were achieved during the reporting period in organic spintronics. (A) Giant magnetoresistance (GMR) was observed for the first time in a spin valve with an organic spacer; using rubrene as a prototype organic semiconductor, we demonstrated the ability of organic semiconductors to transport spin in GMR devices. (B) We discovered electrical bistability and a spin valve effect in a ferromagnet/organic semiconductor/ferromagnet heterojunction, and proposed a mechanism for switching between conducting phases along with its potential applications. (C) The ability of V(TCNE)x to inject spin into organic semiconductors such as rubrene was demonstrated for the first time, and the mechanisms of spin injection and transport from and into organic magnets, as well as through organic semiconductors, were elucidated. (D) In collaboration with the group of OSU Prof. Johnston-Halperin, we reported the successful extraction of spin-polarized current from a thin film of the organic-based room-temperature ferrimagnetic semiconductor V[TCNE]x and its subsequent injection into a GaAs/AlGaAs light-emitting diode (LED). Thus all basic steps for fabricating room-temperature, lightweight, flexible, all-organic spintronic devices were successfully performed. (E) A new synthesis/processing route for preparing V(TCNE)x, enabling control of interface and film thicknesses at the nanoscale, was developed at OSU. Preliminary results show these films are of higher quality and, importantly, substantially more air-stable than earlier V(TCNE)x films. In sum, the breakthrough results achieved in the past two years form the basis of a promising new technology, Multifunctional Flexible Organic-based Spintronics (MFOBS), which enables fabrication of full-function flexible spintronic devices that operate at room temperature.

  1. Study of lumineers' interfaces by means of optical coherence tomography

    Science.gov (United States)

    de Andrade Borges, Erica; Fernandes Cassimiro-Silva, Patrícia; Osório Fernandes, Luana; Leônidas Gomes, Anderson Stevens

    2015-06-01

OCT has been used to evaluate dental materials and is employed here to evaluate lumineers for the first time. Lumineers are used as esthetic indirect restorations; after wear and aging, several undesirable features such as gaps, bubbles and mismatch can appear, which would otherwise only be seen by invasive analysis. Spectral-domain OCT (SD-OCT, 930 nm central wavelength) was used to evaluate the lumineer-cement-tooth interface noninvasively. We analyzed 20 lumineer-tooth specimens that were prepared in bovine teeth and randomly allocated to 4 experimental groups (n=5) with two different cementation techniques and two different cementing agents (RelyX U200 and RelyX Veneer, 3M ESPE, with the adhesive recommended by the manufacturer). The lumineers were made of lithium disilicate and fabricated using a vacuum injection technique. The analysis was performed using 2D and 3D OCT images obtained before and after cementation and after thermal cycling to simulate thermal stress in the oral cavity. Initial measurements showed that SD-OCT was able to see through the 500 μm thick lumineer as delivered by the manufacturer, and internal stress was observed. Failures were found in the cementing process and also after aging simulation by thermal cycling. Adhesive failures such as bubbles, gaps and degradation of the cementation line are the natural precursors of other defects reported in several clinical follow-up studies (detachments, fractures and cracks). Bubble dimensions ranging from 146 μm to 1427 μm were measured, and OCT was validated as a precise investigative tool for evaluation of the lumineer-cement-tooth interface.

  2. Quantitative Chemically-Specific Coherent Diffractive Imaging of Buried Interfaces using a Tabletop EUV Nanoscope

    CERN Document Server

    Shanblatt, Elisabeth R; Gardner, Dennis F; Mancini, Giulia F; Karl, Robert M; Tanksalvala, Michael D; Bevis, Charles S; Vartanian, Victor H; Kapteyn, Henry C; Adams, Daniel E; Murnane, Margaret M

    2016-01-01

    Characterizing buried layers and interfaces is critical for a host of applications in nanoscience and nano-manufacturing. Here we demonstrate non-invasive, non-destructive imaging of buried interfaces using a tabletop, extreme ultraviolet (EUV), coherent diffractive imaging (CDI) nanoscope. Copper nanostructures inlaid in SiO2 are coated with 100 nm of aluminum, which is opaque to visible light and thick enough that neither optical microscopy nor atomic force microscopy can image the buried interfaces. Short wavelength (29 nm) high harmonic light can penetrate the aluminum layer, yielding high-contrast images of the buried structures. Moreover, differences in the absolute reflectivity of the interfaces before and after coating reveal the formation of interstitial diffusion and oxidation layers at the Al-Cu and Al-SiO2 boundaries. Finally, we show that EUV CDI provides a unique capability for quantitative, chemically-specific imaging of buried structures, and the material evolution that occurs at these buried ...

  3. Spectral Domain Optical Coherence Tomography in the Diagnosis and Management of Vitreoretinal Interface Pathologies

    OpenAIRE

    Barak, Yoreh; Ihnen, Mark A.; Schaal, Shlomit

    2012-01-01

    The introduction of spectral domain optical coherence tomography (SD-OCT) has enhanced Vitreoretinal Interface (VRI) imaging considerably and facilitated the diagnosis, followup, prognosis determination, and management of VRI-associated pathologies. HR-OCT became a common practical tool seen in almost every ophthalmology practice. Knowledge of SD-OCT image interpretation and recognition of pathologies are required for all ophthalmologists. This paper methodically reviews the normal aging proc...

  4. Application of color image processing and low-coherent optical computer tomography in evaluation of adhesive interfaces of dental restorations

    Science.gov (United States)

    Bessudnova, Nadezda O.; Shlyapnikova, Olga A.; Venig, Sergey B.; Genina, Elina A.; Sadovnikov, Alexandr V.

    2015-03-01

Durability of bonded interfaces between dentin and a polymer material in resin-based composite restorations remains a challenge in clinical dentistry. In the present study, the evolution of bonded interfaces in a biologically active environment is assessed in vivo. A novel in vivo method of visual diagnostics, which involves digital processing of color images of composite restorations and allows the evaluation of adhesive interface quality over time, has been developed and tested on a group of volunteers. However, the application of the method is limited to the analysis of superficial adhesive interfaces. Low-coherence optical computed tomography (OCT) has also been tested as a powerful non-invasive tool for in vivo, in situ clinical diagnostics of adhesive interfaces over time. In the long-term perspective, adhesive interface monitoring using standard methods of clinical diagnostics along with color image analysis and OCT could make it possible to objectively assess and predict the clinical longevity of composite resin-based restorations with adhesive interfaces.

  5. Spectral Domain Optical Coherence Tomography in the Diagnosis and Management of Vitreoretinal Interface Pathologies

    Directory of Open Access Journals (Sweden)

    Yoreh Barak

    2012-01-01

Full Text Available The introduction of spectral domain optical coherence tomography (SD-OCT) has enhanced Vitreoretinal Interface (VRI) imaging considerably and facilitated the diagnosis, followup, prognosis determination, and management of VRI-associated pathologies. HR-OCT became a common practical tool seen in almost every ophthalmology practice. Knowledge of SD-OCT image interpretation and recognition of pathologies are required for all ophthalmologists. This paper methodically reviews the normal aging process of the VRI and discusses several commonly encountered VRI pathologies. The role of SD-OCT imaging in VRI-associated disorders such as posterior vitreous detachment, vitreomacular traction syndrome, idiopathic epiretinal membranes, lamellar holes, pseudoholes, and full thickness macular holes is portrayed. Future perspectives of new OCT technologies based on SD-OCT are discussed.

  6. Theory of coherent transition radiation generated at a plasma-vacuum interface

    Energy Technology Data Exchange (ETDEWEB)

    Schroeder, Carl B.; Esarey, Eric; van Tilborg, Jeroen; Leemans, Wim P.

    2003-06-26

    Transition radiation generated by an electron beam, produced by a laser wakefield accelerator operating in the self-modulated regime, crossing the plasma-vacuum boundary is considered. The angular distributions and spectra are calculated for both the incoherent and coherent radiation. The effects of the longitudinal and transverse momentum distributions on the differential energy spectra are examined. Diffraction radiation from the finite transverse extent of the plasma is considered and shown to strongly modify the spectra and energy radiated for long wavelength radiation. This method of transition radiation generation has the capability of producing high peak power THz radiation, of order 100 (mu)J/pulse at the plasma-vacuum interface, which is several orders of magnitude beyond current state-of-the-art THz sources.

  7. Radiographic, microcomputer tomography, and optical coherence tomography investigations of ceramic interfaces

    Science.gov (United States)

    Sinescu, Cosmin; Negrutiu, Meda Lavinia; Ionita, Ciprian; Topala, Florin; Petrescu, Emanuela; Rominu, Roxana; Pop, Daniela Maria; Marsavina, Liviu; Negru, Radu; Bradu, Adrian; Rominu, Mihai; Podoleanu, Adrian Gh.

    2010-12-01

Imaging investigation of metal-ceramic crowns and fixed partial prostheses is a very important issue in modern dentistry. At present, in the dental office, it is difficult or even impossible to evaluate a metal-ceramic crown or bridge before seating it in the oral cavity. Possible ceramic fractures are due to small fracture lines or material defects inside the esthetic layers. Material and methods: In this study, 25 metal-ceramic crowns and fixed partial prostheses were investigated by the radiographic method (Rx), micro computed tomography (MicroCT), and optical coherence tomography (OCT) working in time domain at 1300 nm. The OCT system contains two interferometers and one scanner. For each incidence analyzed, a stack of 100 slices was obtained. These slices were used to build a 3D model of the ceramic interface. Results: Rx and MicroCT are very powerful instruments that provide a good characterization of the dental construct. It is important to note the reflections due to the metal infrastructure, which could affect the evaluation of metal-ceramic crowns and bridges. The OCT investigations can complete the imaging evaluation of the dental construct by offering important information where it is needed.

  8. Coherence effects in propagation through one-dimensional photonic bandgap structures with a rough glass interface

    NARCIS (Netherlands)

    Mandatori, Antonio; Bertolotti, Mario; Sibilia, Concita; Hoenders, Bert J.; Scalora, Michael

    2007-01-01

The effect of the coherence of a beam traveling through a photonic 1D structure coupled with a rough glass is studied. The analysis is made for the case of spatial coherence, showing the possibility of determining the coherence characteristics of the beam by an examination of the output field. We have

  9. Enhanced polarization by the coherent heterophase interface between polar and non-polar phases.

    Science.gov (United States)

    Kim, Gi-Yeop; Sung, Kil-Dong; Rhyim, Youngmok; Yoon, Seog-Young; Kim, Min-Soo; Jeong, Soon-Jong; Kim, Kwang-Ho; Ryu, Jungho; Kim, Sung-Dae; Choi, Si-Young

    2016-04-14

A piezoelectric composite containing the ferroelectric polar (Bi(Na0.8K0.2)0.5TiO3: f-BNKT) and the non-polar (0.94Bi(Na0.75K0.25)0.5TiO3-0.06BiAlO3: BNKT-BA) phases exhibits synergetic properties which combine the beneficial aspects of each phase, i.e., the high saturated polarization (Ps) of the polar phase and the low coercive field (Ec) of the non-polar phase. To understand the origin of this synergy in the polar/non-polar heterophase structure, comprehensive studies are conducted, including transmission electron microscopy (TEM) and finite element method (FEM) analyses. The TEM results show that the polar/non-polar composite has a core/shell structure in which the polar phase (core) is surrounded by the non-polar phase (shell). In situ electrical biasing TEM experiments show that the ferroelectric domains in the polar core are aligned even under an electric field of ∼1 kV/mm, which is much lower than the intrinsic coercive field (∼3 kV/mm). From the FEM analyses, we find that the enhanced polarization of the polar phase is promoted by an additional internal field at the phase boundary, which originates from the preferential polarization of the relaxor-like non-polar phase. From the present study, we conclude that the coherent interface between the polar and non-polar phases is a key factor in understanding the enhanced piezoelectric properties of the composite.

  10. Implementation of a scalable, web-based, automated clinical decision support risk-prediction tool for chronic kidney disease using C-CDA and application programming interfaces.

    Science.gov (United States)

    Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam

    2017-11-01

    Clinical decision support tools for risk prediction are readily available, but typically require workflow interruptions and manual data entry so are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We debugged a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569 533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support.
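The pipeline described above, extracting structured observations from a CCD and feeding them to a risk calculation served behind a noninterruptive alert, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the XML snippet is a flattened stand-in for a real namespaced C-CDA document, and the logistic risk function is a hypothetical placeholder for a validated kidney failure risk equation.

```python
# Sketch: pull a LOINC-coded observation out of a CCD-style document and
# compute a placeholder risk score. The snippet, LOINC code usage, and the
# risk formula are illustrative assumptions, not the deployed system.
import math
import xml.etree.ElementTree as ET

CCD_SNIPPET = """<observation>
  <code code="33914-3" displayName="eGFR"/>
  <value value="42" unit="mL/min"/>
</observation>"""

def extract_value(ccd_xml: str, loinc: str) -> float:
    """Return the numeric value of the observation carrying the given LOINC code."""
    root = ET.fromstring(ccd_xml)
    # A real CCD nests observations inside namespaced sections; a flat
    # snippet is searched here for brevity.
    for obs in root.iter("observation"):
        code = obs.find("code")
        if code is not None and code.get("code") == loinc:
            return float(obs.find("value").get("value"))
    raise KeyError(loinc)

def kidney_risk(egfr: float) -> float:
    """Hypothetical logistic score: lower eGFR maps to higher risk."""
    return 1.0 / (1.0 + math.exp(0.15 * (egfr - 45.0)))

egfr = extract_value(CCD_SNIPPET, "33914-3")
risk = kidney_risk(egfr)
```

In the deployed system such a score would be computed server-side for each CCD retrieved from the clinic schedule, with the EHR alert linking to the web application that displays the details.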

  11. A modular interface of IL-4 allows for scalable affinity without affecting specificity for the IL-4 receptor

    Directory of Open Access Journals (Sweden)

    Sebald Walter

    2006-04-01

Full Text Available Abstract Background Interleukin 4 (IL-4) is a key regulator of the immune system and an important factor in the development of allergic hypersensitivity. Together with interleukin 13 (IL-13), IL-4 plays an important role in exacerbating allergic and asthmatic symptoms. For signal transduction, both cytokines can utilise the same receptor, consisting of the IL-4Rα and the IL-13Rα1 chain, offering an explanation for their overlapping biological functions. Since both cytokine ligands share only moderate similarity on the amino acid sequence level, molecular recognition of the ligands by both receptor subunits is of great interest. IL-4 and IL-13 are interesting targets for allergy and asthma therapies. Knowledge of the binding mechanism will be important for the generation of either IL-4 or IL-13 specific drugs. Results We present a structure/function analysis of the IL-4 ligand-receptor interaction. Structural determination of a number of IL-4 variants together with in vitro binding studies show that IL-4 and its high-affinity receptor subunit IL-4Rα interact via a modular protein-protein interface consisting of three independently-acting interaction clusters. For high-affinity binding of wild-type IL-4 to its receptor IL-4Rα, only two of these clusters (i.e. cluster 1, centered around Glu9, and cluster 2, around Arg88) contribute significantly to the free binding energy. Mutating residues Thr13 or Phe82 located in cluster 3 to aspartate results in super-agonistic IL-4 variants. All three clusters are fully engaged in these variants, generating a three-fold higher binding affinity for IL-4Rα. Mutagenesis studies reveal that IL-13 utilizes the same main binding determinants, i.e. Glu11 (cluster 1) and Arg64 (cluster 2), suggesting that IL-13 also uses this modular protein interface architecture. Conclusion The modular architecture of the IL-4-IL-4Rα interface suggests a possible mechanism by which proteins might be able to generate binding affinity

  12. The Observation of the Structure of M23C6/γ Coherent Interface in the 100Mn13 High Carbon High Manganese Steel

    Science.gov (United States)

    Xu, Zhenfeng; Ding, Zhimin; Liang, Bo

    2018-01-01

The M23C6 carbides precipitate along the austenite grain boundary in the 100Mn13 high carbon high manganese steel after 1323 K (1050 °C) solution treatment and subsequent 748 K (475 °C) aging treatment. The grain boundary M23C6 carbides not only spread along the grain boundary and into the incoherent austenite grain, but also grow slowly into the coherent austenite grain. Building on optical microscopy, the M23C6/γ coherent interface was further investigated by transmission electron microscopy (TEM). The results show that the grain boundary M23C6 carbides have orientation relationships with only one of the adjacent austenite grains, with the corresponding planes and directions parallel: $(\bar{1}1\bar{1})_{M_{23}C_6} \, // \, (\bar{1}1\bar{1})_{\gamma}$, $(\bar{1}11)_{M_{23}C_6} \, // \, (\bar{1}11)_{\gamma}$, $[110]_{M_{23}C_6} \, // \, [110]_{\gamma}$. The flat M23C6/γ coherent interface lies on the low-indexed {111} crystal planes. Moreover, the M23C6/γ coherent interface carries embossments that stretch into the coherent austenite grain γ; dislocations are distributed in the embossments and at the coherent interface frontier. Based on these observations, the paper suggests that the embossments can promote the migration of the M23C6/γ coherent interface. In addition, analysis of the chemical composition of the experimental material and of the crystal structures of austenite and M23C6 indicates that the transformation can be completed through only slight diffusion of C atoms and a simple variant of the austenite unit cell.

  13. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

In computer science in general, and in particular in the fields of high performance computing and supercomputing, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it cannot only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of established areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, from small-scale hardware such as tablet computers, pads, and smartphones up to large tiled display walls. What interests us is not so much the hardware setup but the visualization algorithms behind these display systems, which scale from the average smartphone up to the largest gigapixel display walls.

  14. Evaluation of the stability of Boston type I keratoprosthesis-donor cornea interface using anterior segment optical coherence tomography.

    Science.gov (United States)

    Garcia, Julian P S; Ritterband, David C; Buxton, Douglas F; De la Cruz, Jose

    2010-09-01

To evaluate the anatomic stability of an implanted Boston type I keratoprosthesis (KPro)-donor cornea interface and assess the presence or absence of a potential space (gap) between the KPro front plate and donor cornea using anterior segment optical coherence tomography (AS-OCT). The presence of a gap would raise concerns about a possible pathway for the exchange of extraocular fluid with the anterior chamber. Fifteen eyes implanted with a Boston type I KPro were studied by the noncontact technique of AS-OCT (AC Cornea OCT prototype; OTI, Canada). All the KPro devices had been implanted at least 4 weeks before the study (mean: 7 months, range: 1-22 months). Eight eyes had aphakic KPros, and the other 7 had pseudophakic implants. Anesthetized eyes were imaged before and during pressure application using sterile cotton-tip applicators. Pressure was applied for 10 seconds on the nasal or temporal side of the eye. Images were analyzed for any possible changes in the KPro-donor cornea interface during the application of pressure. Of 15 eyes, 10 had the threaded front plate model with a T-shaped silhouette and corrugated sides, whereas 5 had the threadless type with a T-shaped silhouette and smooth sides on cross-sectional optical coherence tomography. Of the 15 eyes, 2 revealed a gap between the front plate and the surface of the donor cornea; the rest revealed no gaps. With pressure, none of the eyes, including the 2 with gaps, demonstrated any change in the KPro-donor cornea interface during dynamic imaging (eg, gaping or evidence of fluid escape along the KPro-donor cornea borders). In all eyes, the position of the titanium locking ring was visible and verified to be adequate. The implanted KPro-donor cornea interface appears to be dynamically stable on AS-OCT. A gap documented with this imaging tool showed neither gaping nor escape of anterior chamber fluid during dynamic cross-sectional imaging. Further studies will be needed to assess

  15. A Link between the Increase in Electroencephalographic Coherence and Performance Improvement in Operating a Brain-Computer Interface.

    Science.gov (United States)

    Angulo-Sherman, Irma Nayeli; Gutiérrez, David

    2015-01-01

    We study the relationship between electroencephalographic (EEG) coherence and accuracy in operating a brain-computer interface (BCI). In our case, the BCI is controlled through motor imagery. Hence, a number of volunteers were trained using different training paradigms: classical visual feedback, auditory stimulation, and functional electrical stimulation (FES). After each training session, the volunteers' accuracy in operating the BCI was assessed, and the event-related coherence (ErCoh) was calculated for all possible combinations of pairs of EEG sensors. After at least four training sessions, we searched for significant differences in accuracy and ErCoh using one-way analysis of variance (ANOVA) and multiple comparison tests. Our results show that there exists a high correlation between an increase in ErCoh and performance improvement, and this effect is mainly localized in the centrofrontal and centroparietal brain regions for the case of our motor imagery task. This result has a direct implication with the development of new techniques to evaluate BCI performance and the process of selecting a feedback modality that better enhances the volunteer's capacity to operate a BCI system.
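The coherence measure underlying ErCoh, the magnitude-squared coherence between pairs of EEG sensors, can be sketched with a Welch-style segment average. This is an illustrative NumPy implementation under stated assumptions: the segment length, synthetic signals, and channel names are placeholders, and the study's actual event-related coherence additionally references a baseline epoch.

```python
# Sketch: Welch-averaged magnitude-squared coherence between two channels,
# |<X Y*>|^2 / (<|X|^2> <|Y|^2>), averaged over non-overlapping segments.
# Parameters and signals below are illustrative, not the study's settings.
import numpy as np

def msc(x, y, nperseg=256):
    """Magnitude-squared coherence per frequency bin."""
    nseg = len(x) // nperseg
    X = np.fft.rfft(x[:nseg * nperseg].reshape(nseg, nperseg), axis=1)
    Y = np.fft.rfft(y[:nseg * nperseg].reshape(nseg, nperseg), axis=1)
    sxy = (X * np.conj(Y)).mean(axis=0)       # averaged cross-spectrum
    sxx = (np.abs(X) ** 2).mean(axis=0)       # averaged auto-spectra
    syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(sxy) ** 2 / (sxx * syy)

rng = np.random.default_rng(0)
common = rng.standard_normal(4096)            # shared "cortical" source
ch1 = common + 0.1 * rng.standard_normal(4096)
ch2 = common + 0.1 * rng.standard_normal(4096)
coh = msc(ch1, ch2)                           # high across frequencies
```

Averaging over several segments is essential: with a single segment the estimator is identically 1 for any pair of signals, so the number of segments trades frequency resolution against estimator variance.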

  16. A Link between the Increase in Electroencephalographic Coherence and Performance Improvement in Operating a Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Irma Nayeli Angulo-Sherman

    2015-01-01

Full Text Available We study the relationship between electroencephalographic (EEG) coherence and accuracy in operating a brain-computer interface (BCI). In our case, the BCI is controlled through motor imagery. Hence, a number of volunteers were trained using different training paradigms: classical visual feedback, auditory stimulation, and functional electrical stimulation (FES). After each training session, the volunteers' accuracy in operating the BCI was assessed, and the event-related coherence (ErCoh) was calculated for all possible combinations of pairs of EEG sensors. After at least four training sessions, we searched for significant differences in accuracy and ErCoh using one-way analysis of variance (ANOVA) and multiple comparison tests. Our results show that there exists a high correlation between an increase in ErCoh and performance improvement, and this effect is mainly localized in the centrofrontal and centroparietal brain regions for the case of our motor imagery task. This result has a direct implication with the development of new techniques to evaluate BCI performance and the process of selecting a feedback modality that better enhances the volunteer's capacity to operate a BCI system.

  17. Coherent-Interface-Assembled Ag2O-Anchored Nanofibrillated Cellulose Porous Aerogels for Radioactive Iodine Capture.

    Science.gov (United States)

    Lu, Yun; Liu, Hongwei; Gao, Runan; Xiao, Shaoliang; Zhang, Ming; Yin, Yafang; Wang, Siqun; Li, Jian; Yang, Dongjiang

    2016-10-26

Nanofibrillated cellulose (NFC) has received increasing attention in science and technology because of not only the availability of large amounts of cellulose in nature but also its unique structural and physical features. These high-aspect-ratio nanofibers have potential applications in water remediation and as a reinforcing scaffold in composites, coatings, and porous materials because of their fascinating properties. In this work, highly porous NFC aerogels were prepared based on tert-butanol freeze-drying of ultrasonically isolated bamboo NFC with 20-80 nm diameters. Then nonagglomerated 2-20-nm-diameter silver oxide (Ag2O) nanoparticles (NPs) were grown firmly onto the NFC scaffold with a high loading content of ∼500 wt % to fabricate Ag2O@NFC organic-inorganic composite aerogels (Ag2O@NFC). For the first time, the coherent interface and interaction mechanism between the cellulose Iβ nanofiber and Ag2O NPs are explored by high-resolution transmission electron microscopy and 3D electron tomography. Specifically, strong hydrogen bonding between Ag2O and NFC makes them grow together firmly along a coherent interface, where good lattice matching between specific crystal planes of Ag2O and NFC results in very small interfacial strain. The resulting Ag2O@NFC aerogels take full advantage of the properties of the 3D organic aerogel framework and inorganic NPs, such as large surface area, interconnected porous structures, and supreme mechanical properties. They open up a wide horizon for practical functional usage, for example, as a flexible superefficient adsorbent to capture I- ions from contaminated water and trap I2 vapor for safe disposal, as presented in this work. The viable binding mode between many types of inorganic NPs and organic NFC established here highlights new ways to investigate cellulose-based functional nanocomposites.

  18. Triboelectric Charging at the Nanostructured Solid/Liquid Interface for Area-Scalable Wave Energy Conversion and Its Use in Corrosion Protection.

    Science.gov (United States)

    Zhao, Xue Jiao; Zhu, Guang; Fan, You Jun; Li, Hua Yang; Wang, Zhong Lin

    2015-07-28

We report a flexible and area-scalable energy-harvesting technique for converting kinetic wave energy. Triboelectrification as a result of direct interaction between a dynamic wave and a large-area nanostructured solid surface produces an induced current among an array of electrodes. An integration method ensures that the induced current between any pair of electrodes can be constructively added up, which enables significant enhancement in output power and realizes area-scalable integration of electrode arrays. Internal and external factors that affect the electric output are comprehensively discussed. The produced electricity not only drives small electronics but also achieves effective impressed-current cathodic protection. This type of thin-film-based device is a potentially practical solution for on-site sustained power supply at coastal or off-shore sites wherever a dynamic wave is available. Potential applications include corrosion protection, pollution degradation, water desalination, and wireless sensing for marine surveillance.

  19. Visual analytics in scalable visualization environments

    OpenAIRE

    Yamaoka, So

    2011-01-01

Visual analytics is an interdisciplinary field that facilitates the analysis of large volumes of data through interactive visual interfaces. This dissertation focuses on the development of visual analytics techniques in scalable visualization environments. These scalable visualization environments offer a high-resolution, integrated virtual space, as well as a wide-open physical space that affords collaborative user interaction. At the same time, the sheer scale of these environments poses ...

  20. Interface Consistency

    DEFF Research Database (Denmark)

    Staunstrup, Jørgen

    1998-01-01

This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces, it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very...

  1. Boston type I keratoprosthesis-donor cornea interface evaluated by high-definition spectral-domain anterior segment optical coherence tomography

    Directory of Open Access Journals (Sweden)

    Alzaga Fernandez AG

    2012-08-01

Full Text Available Ana G Alzaga Fernandez,* Nathan M Radcliffe,* Kimberly C Sippel, Mark I Rosenblatt, Priyanka Sood, Christopher E Starr, Jessica B Ciralsky, Donald J D'Amico, Szilárd Kiss. Department of Ophthalmology, Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA. *These authors contributed equally to this work and both are considered principal authors. Background: The purpose of this study was to assess whether the resolution offered by two different, recently commercially available high-resolution, spectral-domain anterior segment optical coherence tomography (AS-OCT) instruments allows for detailed anatomic characterization of the critical device-donor cornea interface in eyes implanted with the Boston type I permanent keratoprosthesis. Methods: Eighteen eyes of 17 patients implanted with the Boston type I keratoprosthesis were included in this retrospective case series. All eyes were quantitatively evaluated using the Cirrus HD-OCT while a subset (five eyes) was also qualitatively imaged using the Spectralis Anterior Segment Module. Images from these instruments were analyzed for evidence of epithelial migration onto the anterior surface of the keratoprosthesis front plate, and presence of a vertical gap between the posterior surface of the front plate and the underlying carrier donor corneal tissue. Quantitative data was obtained utilizing the caliper function on the Cirrus HD-OCT. Results: The mean duration between AS-OCT imaging and keratoprosthesis placement was 29 months. As assessed by the Cirrus HD-OCT, 83% of eyes exhibited epithelial migration over the edge of the front plate. Fifty-six percent of the keratoprosthesis devices displayed good apposition of the device with the carrier corneal donor tissue. When a vertical gap was present (44% of eyes), the mean gap was 40 (range 8–104) microns. The Spectralis Anterior Segment Module also displayed sufficient resolution to allow for similar characterization of the device

  2. Measurement of quasi-ballistic heat transport across nanoscale interfaces using ultrafast coherent soft x-ray beams

    Energy Technology Data Exchange (ETDEWEB)

    Siemens, M.; Li, Q.; Yang, R.; Nelson, K.; Anderson, E.; Murnane, M.; Kapteyn, H.

    2009-03-02

Understanding heat transport on nanoscale dimensions is important for fundamental advances in nanoscience, as well as for practical applications such as thermal management in nano-electronics, thermoelectric devices, photovoltaics, and nanomanufacturing, and for nanoparticle thermal therapy. Here we report the first time-resolved measurements of heat transport across nanostructured interfaces. We observe the transition from a diffusive to a ballistic thermal transport regime, with a corresponding increase in the interface resistivity for line widths smaller than the phonon mean free path in the substrate. Resistivities more than three times higher than the bulk value are measured for the smallest line widths of 65 nm. Our findings are relevant to the modeling and design of heat transport in nanoscale engineered systems, including nanoelectronics, photovoltaics and thermoelectric devices.

  3. Quantitative Analysis of Lens Nuclear Density Using Optical Coherence Tomography (OCT with a Liquid Optics Interface: Correlation between OCT Images and LOCS III Grading

    Directory of Open Access Journals (Sweden)

    You Na Kim

    2016-01-01

Full Text Available Purpose. To quantify whole lens and nuclear lens densities using anterior-segment optical coherence tomography (OCT) with a liquid optics interface and evaluate their correlation with Lens Opacities Classification System III (LOCS III) lens grading and corrected distance visual acuity (BCVA). Methods. OCT images of the whole lens and lens nucleus of eyes with age-related nuclear cataract were analyzed using ImageJ software. The lens grade and nuclear density were represented in pixel intensity units (PIU) and correlations between PIU, BCVA, and LOCS III were assessed. Results. Forty-seven eyes were analyzed. The mean whole lens and lens nuclear densities were 26.99 ± 5.23 and 19.43 ± 6.15 PIU, respectively. A positive linear correlation was observed between lens opacities (R2 = 0.187, p<0.01) and nuclear density (R2 = 0.316, p<0.01) obtained from OCT images and LOCS III. Preoperative BCVA and LOCS III were also positively correlated (R2 = 0.454, p<0.01). Conclusions. Whole lens and lens nuclear densities obtained from OCT correlated with LOCS III. Nuclear density showed a higher positive correlation with LOCS III than whole lens density. OCT with a liquid optics interface is a potential quantitative method for lens grading and can aid in monitoring and managing age-related cataracts.

  4. Quantitative Analysis of Lens Nuclear Density Using Optical Coherence Tomography (OCT) with a Liquid Optics Interface: Correlation between OCT Images and LOCS III Grading.

    Science.gov (United States)

    Kim, You Na; Park, Jin Hyoung; Tchah, Hungwon

    2016-01-01

Purpose. To quantify whole lens and nuclear lens densities using anterior-segment optical coherence tomography (OCT) with a liquid optics interface and evaluate their correlation with Lens Opacities Classification System III (LOCS III) lens grading and corrected distance visual acuity (BCVA). Methods. OCT images of the whole lens and lens nucleus of eyes with age-related nuclear cataract were analyzed using ImageJ software. The lens grade and nuclear density were represented in pixel intensity units (PIU) and correlations between PIU, BCVA, and LOCS III were assessed. Results. Forty-seven eyes were analyzed. The mean whole lens and lens nuclear densities were 26.99 ± 5.23 and 19.43 ± 6.15 PIU, respectively. A positive linear correlation was observed between lens opacities (R(2) = 0.187, p < 0.01) and nuclear density (R(2) = 0.316, p < 0.01) obtained from OCT images and LOCS III. Preoperative BCVA and LOCS III were also positively correlated (R(2) = 0.454, p < 0.01). Conclusions. Whole lens and lens nuclear densities obtained from OCT correlated with LOCS III. Nuclear density showed a higher positive correlation with LOCS III than whole lens density. OCT with a liquid optics interface is a potential quantitative method for lens grading and can aid in monitoring and managing age-related cataracts.
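The quantification described above, averaging pixel intensities over a lens region of interest (the paper's PIU) and regressing them against LOCS III grades, can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions: the synthetic image, ROI bounds, and grade values are placeholders, not data from the study.

```python
# Sketch: mean ROI pixel intensity ("PIU") and R^2 of a linear fit against
# LOCS III grades. The image and grades below are synthetic placeholders.
import numpy as np

def roi_mean_intensity(image, r0, r1, c0, c1):
    """Mean pixel intensity inside a rectangular region of interest."""
    return float(image[r0:r1, c0:c1].mean())

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1.0 - resid.var() / y.var()

# Synthetic 8-bit-style OCT frame: a brighter nucleus inside a darker lens.
img = np.full((100, 100), 20.0)
img[40:60, 40:60] = 80.0
piu = roi_mean_intensity(img, 40, 60, 40, 60)   # 80.0

# Perfectly linear synthetic grades give R^2 = 1.
locs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
density = 10.0 + 4.0 * locs
r2 = r_squared(locs, density)                    # 1.0
```

In ImageJ terms this corresponds to drawing an ROI over the nucleus, reading its mean gray value, and fitting the per-eye means against the clinical grades.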

  5. Boston type I keratoprosthesis-donor cornea interface evaluated by high-definition spectral-domain anterior segment optical coherence tomography.

    Science.gov (United States)

    Fernandez, Ana G Alzaga; Radcliffe, Nathan M; Sippel, Kimberly C; Rosenblatt, Mark I; Sood, Priyanka; Starr, Christopher E; Ciralsky, Jessica B; D'Amico, Donald J; Kiss, Szilárd

    2012-01-01

    The purpose of this study was to assess whether the resolution offered by two different, recently commercially available high-resolution, spectral-domain anterior segment optical coherence tomography (AS-OCT) instruments allows for detailed anatomic characterization of the critical device-donor cornea interface in eyes implanted with the Boston type I permanent keratoprosthesis. Eighteen eyes of 17 patients implanted with the Boston type I keratoprosthesis were included in this retrospective case series. All eyes were quantitatively evaluated using the Cirrus HD-OCT while a subset (five eyes) was also qualitatively imaged using the Spectralis Anterior Segment Module. Images from these instruments were analyzed for evidence of epithelial migration onto the anterior surface of the keratoprosthesis front plate, and presence of a vertical gap between the posterior surface of the front plate and the underlying carrier donor corneal tissue. Quantitative data was obtained utilizing the caliper function on the Cirrus HD-OCT. The mean duration between AS-OCT imaging and keratoprosthesis placement was 29 months. As assessed by the Cirrus HD-OCT, 83% of eyes exhibited epithelial migration over the edge of the front plate. Fifty-six percent of the keratoprosthesis devices displayed good apposition of the device with the carrier corneal donor tissue. When a vertical gap was present (44% of eyes), the mean gap was 40 (range 8-104) microns. The Spectralis Anterior Segment Module also displayed sufficient resolution to allow for similar characterization of the device-donor cornea interface. Spectral-domain AS-OCT permits high resolution imaging of the keratoprosthesis device-donor cornea interface. Both the Cirrus HD-OCT and the Spectralis Anterior Segment module allowed for visualization of epithelial coverage of the device-donor cornea interface, as well as identification of physical gaps. These imaging modalities, by yielding information in regard to integration of the

  6. [Optical coherence tomography in eyes with senile retinoschisis : SD-OCT versus ultrasound examinations and assessment of the vitreoretinal interface].

    Science.gov (United States)

    Bringewatt, A; Burzer, S; Feucht, N; Maier, M

    2017-05-15

    In addition to ocular ultrasonography (US), spectral domain optical coherence tomography (SD-OCT) is available in order to diagnose senile retinoschisis (sRS). SD-OCT also allows for classification of posterior vitreous detachment (PVD) in healthy eyes. Reevaluation of the value and additional benefit of both imaging procedures. SD-OCT-based evaluation of PVD stages in sRS patients. Diagnostic results of 33 eyes in 26 patients with clinical suspicion of sRS were retrospectively analysed. All patients received an SD-OCT and a 10 MHz US examination of the region of interest (RoI). In 32 eyes the PVD stage was classified by SD-OCT using the description by Uchino et al. The vitreous position in peripheral SD-OCT scans with sRS was reviewed. SD-OCT confirmed sRS in 29 eyes. US examination identified sRS in 26 eyes. In 11 eyes, the examination results of the two methods differed. In 7 eyes sRS was identified by SD-OCT but not by US examination. US examination confirmed sRS in 4 eyes for which SD-OCT scans were not useful. Most cases of sRS were detected in temporally located retinal lesions. There was no significant difference between the results of both imaging procedures regarding the RoI (p = 0.64). SD-OCT provided additional information in 27 eyes. Four eyes did not present PVD. Early and intermediate stages of PVD were detected in 9 eyes, while 19 eyes showed complete PVD. In most cases, the vitreous could not be identified in the SD-OCT scans of the periphery. In clinical practice, neither SD-OCT nor US ensures an explicit finding of sRS in each eye with sRS. However, both methods positively complement one another and together they improve image-based diagnosis. All stages of PVD may be found in eyes with sRS. The contribution of the vitreous to the pathogenesis of sRS remains uncertain.

  7. Coherent vertical electron transport and interface roughness effects in AlGaN/GaN intersubband devices

    Science.gov (United States)

    Grier, A.; Valavanis, A.; Edmunds, C.; Shao, J.; Cooper, J. D.; Gardner, G.; Manfra, M. J.; Malis, O.; Indjin, D.; Ikonić, Z.; Harrison, P.

    2015-12-01

    We investigate electron transport in epitaxially grown nitride-based resonant tunneling diodes (RTDs) and superlattice sequential tunneling devices. A density-matrix model is developed, and shown to reproduce the experimentally measured features of the current-voltage curves, with its dephasing terms calculated from semi-classical scattering rates. Lifetime broadening effects are shown to have a significant influence in the experimental data. Additionally, it is shown that the interface roughness geometry has a large effect on current magnitude, peak-to-valley ratios and misalignment features; in some cases eliminating negative differential resistance entirely in RTDs. Sequential tunneling device characteristics are dominated by a parasitic current that is most likely to be caused by dislocations; however, excellent agreement between the simulated and experimentally measured tunneling current magnitude and alignment bias is demonstrated. This analysis of the effects of scattering lifetimes, contact doping and growth quality on electron transport highlights critical optimization parameters for the development of III-nitride unipolar electronic and optoelectronic devices.

  8. Improving imaging of the air-liquid interface in living mice by aberration-corrected optical coherence tomography (mOCT) (Conference Presentation)

    Science.gov (United States)

    Schulz-Hildebrandt, Hinnerk; Sauer, Benjamin; Reinholz, Fred; Pieper, Mario; Mall, Markus; König, Peter; Huettmann, Gereon

    2017-04-01

    Failure in mucociliary clearance is responsible for severe diseases like cystic fibrosis, primary ciliary dyskinesia or asthma. Visualizing mucus transport in vivo will help in understanding transport mechanisms as well as in developing and validating new therapeutic interventions. However, in-vivo imaging is complicated by the need for high spatial and temporal resolution. Recently, we developed microscopic optical coherence tomography (mOCT) for non-invasive imaging of the liquid-air interface in the intact murine trachea from its outside. Whereas an axial resolution of 1.5 µm is achieved by the spectral width of the supercontinuum light source, lateral resolution is limited by aberrations caused by the cylindrical shape of the trachea and optical inhomogeneities of the tissue. Therefore, we extended our mOCT with a deformable mirror to compensate for the probe-induced aberrations. Instead of using a wavefront sensor to measure aberrations, we harnessed optimization of the image quality to determine the correction parameters. With the aberration-corrected mOCT, ciliary function and mucus transport were measured in wild-type and βENaC-overexpressing mice, which served as a model for cystic fibrosis.

  9. SPRNG Scalable Parallel Random Number Generator Library

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-16

    This revision corrects some errors in SPRNG 1. Users of newer SPRNG versions can obtain the corrected files and build their version with it. This version also improves the scalability of some of the application-based tests in the SPRNG test suite. It also includes an interface to a parallel Mersenne Twister, so that if users install the Mersenne Twister, then they can test this generator with the SPRNG test suite and also use some SPRNG features with that generator.
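
    SPRNG itself is a C/C++ library; the snippet below only illustrates the underlying idea of giving each parallel task its own statistically independent random stream, using NumPy's SeedSequence spawning rather than the SPRNG API.

```python
import numpy as np

# A minimal sketch of the independent-streams idea behind SPRNG, using
# NumPy's SeedSequence.spawn (not the SPRNG API itself): each parallel
# task gets its own statistically independent generator.
root = np.random.SeedSequence(entropy=12345)
children = root.spawn(4)  # one child sequence per "process"
streams = [np.random.default_rng(child) for child in children]

samples = [rng.random(3) for rng in streams]
# Streams are independent: the same draws from different streams differ.
assert not np.allclose(samples[0], samples[1])
```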

  10. Accessible coherence and coherence distribution

    Science.gov (United States)

    Ma, Teng; Zhao, Ming-Jing; Zhang, Hai-Jun; Fei, Shao-Ming; Long, Gui-Lu

    2017-04-01

    The definition of accessible coherence is proposed. Through local measurement on the other subsystem and one-way classical communication, a subsystem can access more coherence than the coherence of its density matrix. Based on the local accessible coherence, the part that cannot be locally accessed is also studied, which we call the remaining coherence. We study how the bipartite coherence is distributed by partition for both the l1 norm coherence and the relative entropy coherence, and derive expressions for the local accessible coherence and the remaining coherence. We also study some examples to illustrate the distribution.
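
    The two coherence measures named in this record have simple closed forms; the following NumPy sketch evaluates both for a single-qubit state. These are the standard definitions, not code from the paper.

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm coherence: sum of absolute off-diagonal elements of rho."""
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

def relative_entropy_coherence(rho):
    """C_rel(rho) = S(rho_diag) - S(rho), with S the von Neumann entropy."""
    def entropy(m):
        vals = np.linalg.eigvalsh(m)
        vals = vals[vals > 1e-12]  # drop numerically zero eigenvalues
        return float(-np.sum(vals * np.log2(vals)))
    rho_diag = np.diag(np.diag(rho))
    return entropy(rho_diag) - entropy(rho)

# Maximally coherent qubit state |+><+|
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(l1_coherence(plus))               # 1.0
print(relative_entropy_coherence(plus))  # 1.0 (one bit)
```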

  11. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J.; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  12. Scalability limitations of VIA-based technologies in supporting MPI

    Energy Technology Data Exchange (ETDEWEB)

    BRIGHTWELL,RONALD B.; MACCABE,ARTHUR BERNARD

    2000-04-17

    This paper analyzes the scalability limitations of networking technologies based on the Virtual Interface Architecture (VIA) in supporting the runtime environment needed for an implementation of the Message Passing Interface. The authors present an overview of the important characteristics of VIA and an overview of the runtime system being developed as part of the Computational Plant (Cplant) project at Sandia National Laboratories. They discuss the characteristics of VIA that prevent implementations based on this system from meeting the scalability and performance requirements of Cplant.

  13. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  14. Using the scalable nonlinear equations solvers package

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W.D.; McInnes, L.C.; Smith, B.F.

    1995-02-01

    SNES (Scalable Nonlinear Equations Solvers) is a software package for the numerical solution of large-scale systems of nonlinear equations on both uniprocessors and parallel architectures. SNES also contains a component for the solution of unconstrained minimization problems, called SUMS (Scalable Unconstrained Minimization Solvers). Newton-like methods, which are known for their efficiency and robustness, constitute the core of the package. As part of the multilevel PETSc library, SNES incorporates many features and options from other parts of PETSc. In keeping with the spirit of the PETSc library, the nonlinear solution routines are data-structure-neutral, making them flexible and easily extensible. This user's guide contains a detailed description of uniprocessor usage of SNES, with some added comments regarding multiprocessor usage. At this time the parallel version is undergoing refinement and extension, as we work toward a common interface for the uniprocessor and parallel cases. Thus, forthcoming versions of the software will contain additional features, and changes to the parallel interface may result at any time. The new parallel version will employ the MPI (Message Passing Interface) standard for interprocessor communication. Since most of these details will be hidden, users will need to perform only minimal message-passing programming.
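
    The Newton-like core of SNES can be sketched independently of the PETSc interface. The following minimal NumPy implementation (illustrative only, not the SNES API) solves a small nonlinear system.

```python
import numpy as np

def newton_solve(f, jac, x0, tol=1e-10, max_iter=50):
    """Undamped Newton iteration for F(x) = 0, the core method in SNES.

    A plain NumPy sketch of the algorithm, not the PETSc/SNES interface.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x
        # Newton step: solve J(x) dx = F(x), then update x <- x - dx
        x = x - np.linalg.solve(jac(x), fx)
    return x

# Example system: x^2 + y^2 = 4 and x = y, whose solution is x = y = sqrt(2)
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
root = newton_solve(f, jac, [1.0, 2.0])
print(root)  # approximately [1.41421356, 1.41421356]
```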

  15. Scalable Reliable SD Erlang Design

    OpenAIRE

    Chechina, Natalia; Trinder, Phil; Ghaffari, Amir; Green, Rickard; Lundin, Kenneth; Virding, Robert

    2014-01-01

    This technical report presents the design of Scalable Distributed (SD) Erlang: a set of language-level changes that aims to enable Distributed Erlang to scale for server applications on commodity hardware with at most 100,000 cores. We cover a number of aspects, specifically anticipated architecture, anticipated failures, scalable data structures, and scalable computation. Two other components that guided us in the design of SD Erlang are design principles and typical Erlang applications. The...

  16. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  17. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  18. Volitional Control of Neuromagnetic Coherence

    Directory of Open Access Journals (Sweden)

    Matthew D Sacchet

    2012-12-01

    Full Text Available Coherence of neural activity between circumscribed brain regions has been implicated as an indicator of intracerebral communication in various cognitive processes. While neural activity can be volitionally controlled with neurofeedback, the volitional control of coherence has not yet been explored. Learned volitional control of coherence could elucidate mechanisms of associations between cortical areas and its cognitive correlates and may have clinical implications. Neural coherence may also provide a signal for brain-computer interfaces (BCI). In the present study we used the Weighted Overlapping Segment Averaging (WOSA) method to assess coherence between bilateral magnetoencephalograph (MEG) sensors during voluntary digit movement as a basis for BCI control. Participants controlled an onscreen cursor, with a success rate of 124 of 180 (68.9%, sign-test p < 0.001) and 84 out of 100 (84%, sign-test p < 0.001). The present findings suggest that neural coherence may be volitionally controlled and may have specific behavioral correlates.
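
    The WOSA coherence estimator used in this study is straightforward to sketch. The NumPy implementation below (illustrative, not the authors' code) averages Hann-windowed cross- and auto-spectra over overlapping segments and recovers high coherence at a frequency shared by two noisy signals.

```python
import numpy as np

def wosa_coherence(x, y, nperseg=256, noverlap=128):
    """Magnitude-squared coherence via Weighted Overlapping Segment Averaging.

    Hann-windowed overlapping segments are Fourier transformed, cross- and
    auto-spectra are summed across segments, and coherence is
    |Pxy|^2 / (Pxx * Pyy). A NumPy sketch of the WOSA estimator.
    """
    step = nperseg - noverlap
    window = np.hanning(nperseg)
    pxx = 0.0
    pyy = 0.0
    pxy = 0.0
    for start in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(window * x[start:start + nperseg])
        fy = np.fft.rfft(window * y[start:start + nperseg])
        pxx = pxx + np.abs(fx) ** 2
        pyy = pyy + np.abs(fy) ** 2
        pxy = pxy + fx * np.conj(fy)
    return np.abs(pxy) ** 2 / (pxx * pyy)

# Two signals sharing a 50 Hz component (fs = 1000 Hz) plus independent noise
rng = np.random.default_rng(0)
t = np.arange(4096) / 1000.0
x = 2.0 * np.sin(2 * np.pi * 50 * t) + rng.normal(size=t.size)
y = 2.0 * np.sin(2 * np.pi * 50 * t) + rng.normal(size=t.size)
coh = wosa_coherence(x, y)
freqs = np.fft.rfftfreq(256, d=1 / 1000.0)
print(coh[np.argmin(np.abs(freqs - 50.0))])  # near 1 at the shared frequency
```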

  19. Scalable and Hybrid Radio Resource Management for Future Wireless Networks

    DEFF Research Database (Denmark)

    Mino, E.; Luo, Jijun; Tragos, E.

    2007-01-01

    The concept of ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local area to wide area wireless networks. The integration in a unique radio system of cellular and local area type networks supposes a great advantage for the final user and for the operator, compared with the current situation, with disconnected systems, usually with different subscriptions, radio interfaces and terminals. To be a ubiquitous wireless system, the IST project WINNER II has defined three system modes. This contribution describes a proposal for scalable and hybrid radio resource management to efficiently integrate the different WINNER system modes.

  20. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. 
The empirical evaluation shows that the
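
    The frequency-threshold step at the heart of FSM can be illustrated with a toy example. The sketch below counts only single-edge subgraphs over a small invented graph database; real FSM systems grow such frequent seeds into larger subgraphs.

```python
from collections import Counter

# A toy illustration of the FSM frequency threshold: count single-edge
# subgraphs (pairs of node labels) across a graph database and keep those
# whose support meets the threshold.
graphs = [
    {("A", "B"), ("B", "C")},              # graph 1, edges as label pairs
    {("A", "B"), ("C", "D")},              # graph 2
    {("A", "B"), ("B", "C"), ("A", "C")},  # graph 3
]

def frequent_edges(graph_db, min_support):
    support = Counter()
    for g in graph_db:
        for edge in g:
            support[edge] += 1  # one count per graph containing the edge
    return {e for e, count in support.items() if count >= min_support}

print(frequent_edges(graphs, min_support=2))  # contains ('A', 'B') and ('B', 'C')
```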

  1. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    Full Text Available This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM. From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  2. Coherent interface structures and intergrain Josephson coupling in dense MgO/Mg{sub 2}Si/MgB{sub 2} nanocomposites

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Katsuya; Takahashi, Kazuyuki; Uchino, Takashi, E-mail: uchino@kobe-u.ac.jp [Department of Chemistry, Graduate School of Science, Kobe University, Nada, Kobe 657-8501 (Japan); Nagashima, Yukihito [Nippon Sheet Glass Co., Ltd., Konoike, Itami 664-8520 (Japan); Seto, Yusuke [Department of Planetology, Graduate School of Science, Kobe University, Nada, Kobe 657-8501 (Japan); Matsumoto, Megumi; Sakurai, Takahiro [Center for Support to Research and Education Activities, Kobe University, Nada, Kobe 657-8501 (Japan); Ohta, Hitoshi [Molecular Photoscience Research Center, Kobe University, Nada, Kobe 657-8501 (Japan)

    2016-07-07

    Many efforts are under way to control the structure of heterointerfaces in nanostructured composite materials for designing functionality and engineering applications. However, the fabrication of high-quality heterointerfaces is challenging because the crystal/crystal interface is usually the most defective part of nanocomposite materials. In this work, we show that fully dense insulator (MgO)/semiconductor (Mg{sub 2}Si)/superconductor (MgB{sub 2}) nanocomposites with atomically smooth and continuous interfaces, including epitaxial-like MgO/Mg{sub 2}Si interfaces, are obtained by solid-phase reaction between metallic magnesium and a borosilicate glass. The resulting nanocomposites exhibit a semiconductor-superconductor transition at 36 K owing to the MgB{sub 2} nanograins surrounded by the MgO/Mg{sub 2}Si matrix. This transition is followed by the intergrain phase-lock transition at ∼24 K due to the construction of a Josephson-coupled network, eventually leading to a near-zero resistance state at 17 K. The method not only provides a simple process to fabricate dense nanocomposites with high-quality interfaces, but also enables investigation of the electric and magnetic properties of embedded superconducting nanograins with good intergrain coupling.

  3. Scalable Gravity Offload System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A scalable gravity offload device simulates reduced gravity for the testing of various surface system elements such as mobile robots, excavators, habitats, and...

  4. Characterisation of dispersive systems using a coherer

    Directory of Open Access Journals (Sweden)

    Nikolić Pantelija M.

    2002-01-01

    Full Text Available The possibility of characterization of aluminium powders using a horizontal coherer has been considered. Al powders of known dimension were treated with a high frequency electromagnetic field or with a DC electric field, which were increased until a dielectric breakdown occurred. Using a multifunctional card PC-428 Electronic Design and a suitable interface between the coherer and PC, the activation time of the coherer was measured as a function of powder dimension and the distance between the coherer electrodes. It was also shown that the average dimension of powders of unknown size could be determined using the coherer.

  5. (Submitted) Scalable quantum circuit and control for a superconducting surface code

    NARCIS (Netherlands)

    Versluis, R.; Poletto, S.; Khammassi, N.; Haider, N.; Michalak, D.J.; Bruno, A.; Bertels, K.; DiCarlo, L.

    2016-01-01

    We present a scalable scheme for executing the error-correction cycle of a monolithic surface-code fabric composed of fast-flux-tuneable transmon qubits with nearest-neighbor coupling. An eight-qubit unit cell forms the basis for repeating both the quantum hardware and coherent control, enabling

  6. Coherent detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, C R [M/C 169-327, Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Church, S [Room 324 Varian Physics Bldg, 382 Via Pueblo Mall, Stanford, CA 94305-4060 (United States); Gaier, T [M/C 168-314, Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Lai, R [Northrop Grumman Corporation, Redondo Beach, CA 90278 (United States); Ruf, C [1533 Space Research Building, The University of Michigan, Ann Arbor, MI 48109-2143 (United States); Wollack, E, E-mail: charles.lawrence@jpl.nasa.go [NASA/GSFC, Code 665, Observational Cosmology Laboratory, Greenbelt, MD 20771 (United States)

    2009-03-01

    Coherent systems offer significant advantages in simplicity, testability, control of systematics, and cost. Although quantum noise sets the fundamental limit to their performance at high frequencies, recent breakthroughs suggest that near-quantum-limited noise up to 150 or even 200 GHz could be realized within a few years. If the demands of component separation can be met with frequencies below 200 GHz, coherent systems will be strong competitors for a space CMB polarization mission. The rapid development of digital correlator capability now makes space interferometers with many hundreds of elements possible. Given the advantages of coherent interferometers in suppressing systematic effects, such systems deserve serious study.
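
    The quantum noise limit this record refers to is the standard lower bound on an ideal coherent receiver's noise temperature, T_q = h*nu/k_B. A quick check of its value at the quoted frequencies:

```python
# Standard quantum limit on coherent-receiver noise temperature at the
# frequencies mentioned in the record (150 and 200 GHz).
H = 6.62607015e-34   # Planck constant, J*s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def quantum_noise_temperature(freq_hz):
    """T_q = h * nu / k_B, the quantum limit in kelvin."""
    return H * freq_hz / K_B

for ghz in (150, 200):
    print(f"{ghz} GHz -> {quantum_noise_temperature(ghz * 1e9):.2f} K")
# 150 GHz -> about 7.2 K; 200 GHz -> about 9.6 K
```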

  7. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  8. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  9. Scalability study of solid xenon

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces a large-scale optically transparent solid xenon.

  10. Coherence matrix of plasmonic beams

    DEFF Research Database (Denmark)

    Novitsky, Andrey; Lavrinenko, Andrei

    2013-01-01

    We consider monochromatic electromagnetic beams of surface plasmon-polaritons created at interfaces between dielectric media and metals. We theoretically study non-coherent superpositions of elementary surface waves and discuss their spectral degree of polarization, Stokes parameters, and the form...

  11. Scalable parallel communications

    Science.gov (United States)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth
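
    The striping idea behind this record, one application's stream dealt across several physical channels and reassembled at the receiver so that aggregate bandwidth scales with the channel count, can be sketched as follows (names and chunking are illustrative, not from the paper):

```python
def stripe(data: bytes, nchannels: int, chunk: int = 4) -> list[list[bytes]]:
    """Split data into chunks and deal them round-robin across channels."""
    channels = [[] for _ in range(nchannels)]
    for i in range(0, len(data), chunk):
        channels[(i // chunk) % nchannels].append(data[i:i + chunk])
    return channels

def reassemble(channels: list[list[bytes]]) -> bytes:
    """Interleave the per-channel chunk lists back into the original order."""
    out = []
    for round_idx in range(max(len(c) for c in channels)):
        for c in channels:
            if round_idx < len(c):
                out.append(c[round_idx])
    return b"".join(out)

msg = b"coarse-grain parallel networking"
assert reassemble(stripe(msg, 3)) == msg  # round-trip preserves the stream
```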

  12. Quality scalable video data stream

    OpenAIRE

    Wiegand, T.; Kirchhoffer, H.; Schwarz, H

    2008-01-01

    An apparatus for generating a quality-scalable video data stream (36) is described which comprises means (42) for coding a video signal (18) using block-wise transformation to obtain transform blocks (146, 148) of transformation coefficient values for a picture (140) of the video signal, a predetermined scan order (154, 156, 164, 166) with possible scan positions being defined among the transformation coefficient values within the transform blocks so that in each transform block, for each pos...

  13. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform to be able to determine the worst-case execution time statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability we argue to use multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared memory multi-processor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes...

  14. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide.

    Science.gov (United States)

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-03-08

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits), and nanoscale sensors based on individual color centers. Toward this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objective and lensed-fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. Subsequent lithographic process forms 800 nm tall nanopillars with 400-1400 nm diameters. We obtain high collection efficiency of up to 22 kcounts/s optical saturation rates from a single silicon vacancy center while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for scalable quantum photonics architecture relying on single photon sources and qubits.

  15. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide

    Science.gov (United States)

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G.; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-03-01

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits) and nanoscale sensors based on individual color centers. Towards this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objective and lensed-fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. Subsequent lithographic process forms 800 nm tall nanopillars with 400-1,400 nm diameters. We obtain high collection efficiency, up to 22 kcounts/s optical saturation rates from a single silicon vacancy center, while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for scalable quantum photonics architecture relying on single photon sources and qubits.

  16. Coherence and Sense of Coherence

    DEFF Research Database (Denmark)

    Dau, Susanne

    2014-01-01

    It is revealed that sense of coherence is both related to conditional matters as learning environments, structure, clarity and linkage but also preconditioned matters and prerequisites among participants related to experiences and convenience. It is stressed that this calls for continuous assessment and reflections upon these terms…

  17. Design of a Scalable Event Notification Service: Interface and Architecture

    National Research Council Canada - National Science Library

    Carzaniga, Antonio; Rosenblum, David S; Wolf, Alexander L

    1998-01-01

    Event-based distributed systems are programmed to operate in response to events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems...

  18. Coherent control of mesoscopic atomic ensembles for quantum information

    OpenAIRE

    Beterov, I. I.; Saffman, M.; Zhukov, V. P.; Tretyakov, D. B.; Entin, V. M.; Yakshina, E. A.; Ryabtsev, I. I.; Mansell, C. W.; MacCormick, C.; Bergamini, S.; Fedoruk, M. P.

    2013-01-01

    We discuss methods for coherently controlling mesoscopic atomic ensembles where the number of atoms varies randomly from one experimental run to the next. The proposed schemes are based on adiabatic passage and Rydberg blockade and can be used for implementation of a scalable quantum register formed by an array of randomly loaded optical dipole traps.

  19. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issue

  20. Flexible scalable photonic manufacturing method

    Science.gov (United States)

    Skunes, Timothy A.; Case, Steven K.

    2003-06-01

    A process for flexible, scalable photonic manufacturing is described. Optical components are actively pre-aligned and secured to precision mounts. In a subsequent operation, the mounted optical components are passively placed onto a substrate known as an Optical Circuit Board (OCB). The passive placement may be either manual for low volume applications or with a pick-and-place robot for high volume applications. Mating registration features on the component mounts and the OCB facilitate accurate optical alignment. New photonic circuits may be created by changing the layout of the OCB. Predicted yield data from Monte Carlo tolerance simulations for two fiber optic photonic circuits is presented.

  1. Coherence and Sense of Coherence

    DEFF Research Database (Denmark)

    Dau, Susanne

    2014-01-01

    Constraints in the implementation of models of blended learning can be explained by several causes, but in this paper, it is illustrated that lack of sense of coherence is a major factor of these constraints along with the referential whole of the perceived learning environments. The question examined is how activating of models of blended learning in undergraduate education for teacher and radiograph affects the knowledge development. This is approached by mixed methods. The empirical data consist of data from surveys as well as focus group interviews and some observation studies. These data are analyzed and interpreted through a critical hermeneutical process of prefiguration, configuration and re-figuration. The findings illustrate the significant importance of sense of coherence among participants as a condition for implementing new designs and new learning environments. It is revealed that sense of coherence is both related to conditional matters as learning environments, structure, clarity and linkage but also preconditioned matters and prerequisites among participants related to experiences and convenience. It is stressed that this calls for continuous assessment and reflections upon these terms…

  2. Coherence and Sense of Coherence

    DEFF Research Database (Denmark)

    Dau, Susanne

    2014-01-01

    Constraints in the implementation of models of blended learning can be explained by several causes, but in this paper, it is illustrated that lack of sense of coherence is a major factor of these constraints along with the referential whole of the perceived learning environments. The question examined is how activating of models of blended learning in undergraduate education for teacher and radiograph affects the knowledge development. This is approached by mixed methods. The empirical data consist of data from surveys as well as focus group interviews and some observation studies. These data are analyzed and interpreted through a critical hermeneutical process of prefiguration, configuration and re-figuration. The findings illustrate the significant importance of sense of coherence among participants as a condition for implementing new designs and new learning environments. It is revealed that sense of coherence is both related to conditional matters as learning environments, structure, clarity and linkage but also preconditioned matters and prerequisites among participants related to experiences and convenience. It is stressed that this calls for continuous assessment and reflections upon these terms…

  3. Scalable web services for the PSIPRED Protein Analysis Workbench.

    Science.gov (United States)

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.

  4. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    Data.gov (United States)

    National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...

  5. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within the video sequences allows for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that will allow for lowering the complexity increase the robustness of the video scalability are reviewed. Human visual system models are often used in establishing perceptual metrics that would evaluate quality of video. Combining of perceptual and compressive sensing approach outlined from recent investigations. The performance and the complexity of different scalability techniques are evaluated. Application of perceptual models to evaluation of the quality of compressive sensing scalability is considered in the near perceptually lossless case and to the appropriate coding schemes is reviewed.

  6. Scalable rendering on PC clusters

    Energy Technology Data Exchange (ETDEWEB)

    WYLIE,BRIAN N.; LEWIS,VASILY; SHIRLEY,DAVID NOYES; PAVLAKOS,CONSTANTINE

    2000-04-25

    This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), they can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL{reg_sign} with lighting, Gouraud shading, and individually specified triangles (not t-stripped).

  7. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
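    The wavelet-compression idea in this record can be illustrated with a toy multi-level Haar transform on a load-balance time series: small detail coefficients are zeroed, keeping the coarse trend while discarding per-timestep noise and volume. This is a generic illustration of the technique, not Libra's actual code.

```python
def haar_forward(x):
    """Multi-level (unnormalized) Haar transform of a power-of-two-length series:
    returns the overall mean plus detail bands, finest band first."""
    bands = []
    while len(x) > 1:
        avgs = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
        dets = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
        bands.append(dets)
        x = avgs
    return x[0], bands

def haar_inverse(mean, bands):
    cur = [mean]
    for details in reversed(bands):
        cur = [v for a, d in zip(cur, details) for v in (a + d, a - d)]
    return cur

def compress(series, threshold):
    """Zero out small detail coefficients; large-scale load-balance trends survive."""
    mean, bands = haar_forward(series)
    bands = [[d if abs(d) > threshold else 0.0 for d in band] for band in bands]
    return mean, bands

load = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0]   # per-timestep load on one task
assert haar_inverse(*haar_forward(load)) == load     # lossless round trip
approx = haar_inverse(*compress(load, threshold=1.5))  # smoothed, cheaper to store
```

    In practice the dissertation combines this kind of compression with statistical sampling across processes, so both the per-process and across-process data volumes shrink.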

  8. Scalable Creation of Long-Lived Multipartite Entanglement

    Science.gov (United States)

    Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.

    2017-10-01

    We demonstrate the deterministic generation of multipartite entanglement based on scalable methods. Four qubits are encoded in 40Ca+, stored in a microstructured segmented Paul trap. These qubits are sequentially entangled by laser-driven pairwise gate operations. Between these, the qubit register is dynamically reconfigured via ion shuttling operations, where ion crystals are separated and merged, and ions are moved in and out of a fixed laser interaction zone. A sequence consisting of three pairwise entangling gates yields a four-ion Greenberger-Horne-Zeilinger state |ψ ⟩=(1 /√{2 })(|0000 ⟩+|1111 ⟩) , and full quantum state tomography reveals a state fidelity of 94.4(3)%. We analyze the decoherence of this state and employ dynamic decoupling on the spatially distributed constituents to maintain 69(5)% coherence at a storage time of 1.1 sec.
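    The target state can be reproduced in a toy statevector simulation: a Hadamard on one qubit followed by a chain of three pairwise CNOTs yields the four-qubit Greenberger-Horne-Zeilinger state. This is a generic circuit-level illustration, not a model of the trap's laser-driven gates or shuttling.

```python
import math

def apply_h(state, q):
    """Hadamard on qubit q of a little-endian statevector."""
    s = 1.0 / math.sqrt(2.0)
    out = [0.0] * len(state)
    for i, amp in enumerate(state):
        if amp == 0.0:
            continue
        j = i ^ (1 << q)
        if (i >> q) & 1 == 0:
            out[i] += s * amp
            out[j] += s * amp
        else:
            out[j] += s * amp
            out[i] -= s * amp
    return out

def apply_cnot(state, control, target):
    """Flip the target bit of every basis state whose control bit is set."""
    out = [0.0] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << target) if (i >> control) & 1 else i
        out[j] += amp
    return out

n = 4
state = [0.0] * (1 << n)
state[0] = 1.0                      # |0000>
state = apply_h(state, 0)
for q in range(n - 1):              # three pairwise entangling gates
    state = apply_cnot(state, q, q + 1)

# amplitudes are now concentrated on |0000> and |1111>
assert abs(state[0] - 1 / math.sqrt(2)) < 1e-12
assert abs(state[15] - 1 / math.sqrt(2)) < 1e-12
```

    The experiment's sequence plays the same role, except each CNOT-equivalent gate is preceded by shuttling the right ion pair into the fixed laser interaction zone.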

  9. Scalable Quantum Circuit and Control for a Superconducting Surface Code

    Science.gov (United States)

    Versluis, R.; Poletto, S.; Khammassi, N.; Tarasinski, B.; Haider, N.; Michalak, D. J.; Bruno, A.; Bertels, K.; DiCarlo, L.

    2017-09-01

    We present a scalable scheme for executing the error-correction cycle of a monolithic surface-code fabric composed of fast-flux-tunable transmon qubits with nearest-neighbor coupling. An eight-qubit unit cell forms the basis for repeating both the quantum hardware and coherent control, enabling spatial multiplexing. This control uses three fixed frequencies for all single-qubit gates and a unique frequency-detuning pattern for each qubit in the cell. By pipelining the interaction and readout steps of ancilla-based X - and Z -type stabilizer measurements, we can engineer detuning patterns that avoid all second-order transmon-transmon interactions except those exploited in controlled-phase gates, regardless of fabric size. Our scheme is applicable to defect-based and planar logical qubits, including lattice surgery.

  10. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    DEFF Research Database (Denmark)

    Passas, Stavros; Karlsson, Sven

    2011-01-01

    Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that the existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating system that aims to improve the state of the art in scalability and reliability. We achieve scalability through limiting data sharing when possible, and through extensive use of lock-free data structures. Reliability is addressed with a careful re-design of the programming interface and structure of the operating system.

  11. Quality Scalability Aware Watermarking for Visual Content.

    Science.gov (United States)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not effecting the visual quality of host media. The proposed algorithm generates scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by proposing a new wavelet domain blind watermarking algorithm guided by a quantization based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods showing 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
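    The quantization-based embedding this record builds on can be sketched with scalar quantization index modulation: a coefficient is snapped onto one of two interleaved lattices chosen by the watermark bit, and a blind extractor simply asks which lattice is nearer. This is a textbook stand-in for the paper's quantization-guided binary tree; the step size and names are illustrative.

```python
def qim_embed(coeff, bit, delta=8.0):
    """Quantization index modulation: lattice k*delta for bit 0,
    lattice k*delta + delta/2 for bit 1."""
    offset = delta / 2.0 if bit else 0.0
    return round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    """Blind extraction: no host image needed, just the nearer lattice."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 1 if d1 < d0 else 0

coeffs = [13.2, -40.7, 25.1, 3.9]      # stand-ins for wavelet coefficients
bits = [1, 0, 1, 1]
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
noisy = [c + 1.5 for c in marked]      # distortion below delta/4 is survivable
assert [qim_extract(c) for c in noisy] == bits
```

    The distortion-robustness trade-off the paper organizes into nested coding atoms shows up here directly: a larger delta tolerates more compression noise but perturbs the host coefficients more.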

  12. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily… making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie.

  13. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations on inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research ranging from efficient join algorithms such as sort-merge join, to the use of efficient indices such as (Formula presented.)-tree, (Formula presented.)-tree and Bitmap. However, inequality joins have received little attention and queries containing such joins are notably very slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins which is then used to optimize multiple predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
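    The sorted-array idea can be sketched for a single predicate R.a < S.b: sort one side once, then binary-search each tuple of the other side, so every tuple beyond the found position matches. This is an illustration of the core trick only, not the paper's full algorithm (which also covers multiple predicates, bit-arrays, and incremental maintenance).

```python
import bisect

def inequality_join(r_vals, s_vals):
    """All pairs (r, s) with r < s via sort + binary search: roughly
    O((|R| + |S|) log |S| + output) instead of a quadratic nested loop."""
    s_sorted = sorted(s_vals)
    result = []
    for r in r_vals:
        start = bisect.bisect_right(s_sorted, r)  # first s strictly greater than r
        result.extend((r, s) for s in s_sorted[start:])
    return result

assert inequality_join([1, 3], [2, 3, 4]) == [(1, 2), (1, 3), (1, 4), (3, 4)]
```

    The payoff grows with relation size: the nested-loop plan compares every r with every s, while here each r costs one binary search plus its actual matches.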

  14. Scalable encryption using alpha rooting

    Science.gov (United States)

    Wharton, Eric J.; Panetta, Karen A.; Agaian, Sos S.

    2008-04-01

    Full and partial encryption methods are important for subscription based content providers, such as internet and cable TV pay channels. Providers need to be able to protect their products while at the same time being able to provide demonstrations to attract new customers without giving away the full value of the content. If an algorithm were introduced which could provide any level of full or partial encryption in a fast and cost effective manner, the applications to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present use of the measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When parameters are selected using the measure, the output image achieves a balance between protecting the important data in the image while still containing a good overall representation of the image. We will show results for this encryption method on a number of images, using histograms to evaluate the effectiveness of the encryption.
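    A minimal sketch of alpha rooting as a reversible transform, shown on a 1-D signal with a naive DFT: every transform magnitude is raised to the power alpha while the phase is kept, and applying 1/alpha to the result undoes it since (|X|^a)^(1/a) = |X|. The paper works on images and selects alpha via the Logarithmic AME; this only shows the core operation.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def alpha_root(signal, alpha):
    """Raise each transform magnitude to the power alpha, keep the phase."""
    out = []
    for X in dft(signal):
        mag, phase = abs(X), cmath.phase(X)
        out.append(mag ** alpha * cmath.exp(1j * phase))
    return idft(out)

row = [12.0, 40.0, 43.0, 50.0, 51.0, 44.0, 20.0, 10.0]   # one image row, say
scrambled = alpha_root(row, 0.6)                          # "encrypt"
restored = alpha_root(scrambled, 1 / 0.6)                 # "decrypt"
assert max(abs(a - b) for a, b in zip(row, restored)) < 1e-6
```

    Scalability of the encryption falls out of the single parameter: alpha near 1 leaves the content mostly recognizable (a demonstration copy), while alpha far from 1 distorts it heavily.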

  15. Scalable architecture for a room temperature solid-state quantum information processor.

    Science.gov (United States)

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  16. Scalable libraries for solving systems of nonlinear equations and unconstrained minimization problems.

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W. D.; McInnes, L. C.; Smith, B. F.

    1997-10-27

    Developing portable and scalable software for the solution of large-scale optimization problems presents many challenges that traditional libraries do not adequately meet. Using object-oriented design in conjunction with other innovative techniques, they address these issues within the SNES (Scalable Nonlinear Equation Solvers) and SUMS (Scalable Unconstrained Minimization Solvers) packages, which are part of the multilevel PETSCs (Portable, Extensible Tools for Scientific computation) library. This paper focuses on the authors design philosophy and its benefits in providing a uniform and versatile framework for developing optimization software and solving large-scale nonlinear problems. They also consider a three-dimensional anisotropic Ginzburg-Landau model as a representative application that exploits the packages' flexible interface with user-specified data structures and customized routines for function evaluation and preconditioning.
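    The callback-driven interface the record describes can be caricatured in a few lines: the solver owns the Newton iteration while the caller supplies the function-evaluation and Jacobian callbacks (and, in SNES proper, could likewise supply a preconditioner and custom data structures). SNES itself is a C library for large-scale systems, so this scalar sketch is purely conceptual.

```python
def newton_solve(residual, jacobian, x0, tol=1e-12, max_iter=50):
    """Minimal Newton iteration for a scalar F(x) = 0 with user-supplied
    residual and Jacobian callbacks."""
    x = x0
    for _ in range(max_iter):
        fx = residual(x)
        if abs(fx) < tol:
            break
        x -= fx / jacobian(x)
    return x

# user-specified callbacks: solve x**2 - 2 = 0
root = newton_solve(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
assert abs(root - 2.0 ** 0.5) < 1e-10
```

    The design point is the separation of concerns: the Ginzburg-Landau application in the paper plugs its own function evaluation and preconditioning into exactly this kind of fixed solver skeleton.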

  17. Finite Element Modeling on Scalable Parallel Computers

    Science.gov (United States)

    Cwik, T.; Zuffada, C.; Jamnejad, V.; Katz, D.

    1995-01-01

    A coupled finite element-integral equation was developed to model fields scattered from inhomogenous, three-dimensional objects of arbitrary shape. This paper outlines how to implement the software on a scalable parallel processor.

  18. Spin-Light Coherence for Single-Spin Measurement and Control in Diamond

    Science.gov (United States)

    Buckley, B. B.; Fuchs, G. D.; Bassett, L. C.; Awschalom, D. D.

    2010-11-01

    The exceptional spin coherence of nitrogen-vacancy centers in diamond motivates their function in emerging quantum technologies. Traditionally, the spin state of individual centers is measured optically and destructively. We demonstrate dispersive, single-spin coupling to light for both nondestructive spin measurement, through the Faraday effect, and coherent spin manipulation, through the optical Stark effect. These interactions can enable the coherent exchange of quantum information between single nitrogen-vacancy spins and light, facilitating coherent measurement, control, and entanglement that is scalable over large distances.

  19. Corfu: A Platform for Scalable Consistency

    OpenAIRE

    Wei, Michael

    2017-01-01

    Corfu is a platform for building systems which are extremely scalable, strongly consistent and robust. Unlike other systems which weaken guarantees to provide better performance, we have built Corfu with a resilient fabric tuned and engineered for scalability and strong consistency at its core: the Corfu shared log. On top of the Corfu log, we have built a layer of advanced data services which leverage the properties of the Corfu log. Today, Corfu is already replacing data platforms in commer...

  20. Fully scalable video coding in multicast applications

    Science.gov (United States)

    Lerouge, Sam; De Sutter, Robbie; Lambert, Peter; Van de Walle, Rik

    2004-01-01

    The increasing diversity of the characteristics of the terminals and networks that are used to access multimedia content through the internet introduces new challenges for the distribution of multimedia data. Scalable video coding will be one of the elementary solutions in this domain. This type of coding allows to adapt an encoded video sequence to the limitations of the network or the receiving device by means of very basic operations. Algorithms for creating fully scalable video streams, in which multiple types of scalability are offered at the same time, are becoming mature. On the other hand, research on applications that use such bitstreams is only recently emerging. In this paper, we introduce a mathematical model for describing such bitstreams. In addition, we show how we can model applications that use scalable bitstreams by means of definitions that are built on top of this model. In particular, we chose to describe a multicast protocol that is targeted at scalable bitstreams. This way, we will demonstrate that it is possible to define an abstract model for scalable bitstreams, that can be used as a tool for reasoning about such bitstreams and related applications.

  1. Temporal Coherence Strategies for Augmented Reality Labeling

    DEFF Research Database (Denmark)

    Madsen, Jacob Boesen; Tatzgern, Markus; Madsen, Claus B.

    2016-01-01

    Temporal coherence of annotations is an important factor in augmented reality user interfaces and for information visualization. In this paper, we empirically evaluate four different techniques for annotation. Based on these findings, we follow up with subjective evaluations in a second experiment...

  2. Highly scalable multichannel mesh electronics for stable chronic brain electrophysiology

    Science.gov (United States)

    Fu, Tian-Ming; Hong, Guosong; Viveros, Robert D.; Zhou, Tao

    2017-01-01

    Implantable electrical probes have led to advances in neuroscience, brain−machine interfaces, and treatment of neurological diseases, yet they remain limited in several key aspects. Ideally, an electrical probe should be capable of recording from large numbers of neurons across multiple local circuits and, importantly, allow stable tracking of the evolution of these neurons over the entire course of study. Silicon probes based on microfabrication can yield large-scale, high-density recording but face challenges of chronic gliosis and instability due to mechanical and structural mismatch with the brain. Ultraflexible mesh electronics, on the other hand, have demonstrated negligible chronic immune response and stable long-term brain monitoring at single-neuron level, although, to date, it has been limited to 16 channels. Here, we present a scalable scheme for highly multiplexed mesh electronics probes to bridge the gap between scalability and flexibility, where 32 to 128 channels per probe were implemented while the crucial brain-like structure and mechanics were maintained. Combining this mesh design with multisite injection, we demonstrate stable 128-channel local field potential and single-unit recordings from multiple brain regions in awake restrained mice over 4 mo. In addition, the newly integrated mesh is used to validate stable chronic recordings in freely behaving mice. This scalable scheme for mesh electronics together with demonstrated long-term stability represent important progress toward the realization of ideal implantable electrical probes allowing for mapping and tracking single-neuron level circuit changes associated with learning, aging, and neurodegenerative diseases. PMID:29109247

  3. Kinetic Interface

    DEFF Research Database (Denmark)

    2009-01-01

    A kinetic interface for orientation detection in a video training system is disclosed. The interface includes a balance platform instrumented with inertial motion sensors. The interface engages a participant's sense of balance in training exercises.

  4. Scheme for achieving coherent perfect absorption by anisotropic metamaterials

    KAUST Repository

    Zhang, Xiujuan

    2017-02-22

    We propose a unified scheme to achieve coherent perfect absorption of electromagnetic waves by anisotropic metamaterials. The scheme describes the condition on perfect absorption and offers an inverse design route based on effective medium theory in conjunction with retrieval method to determine practical metamaterial absorbers. The scheme is scalable to frequencies and applicable to various incident angles. Numerical simulations show that perfect absorption is achieved in the designed absorbers over a wide range of incident angles, verifying the scheme. By integrating these absorbers, we further propose an absorber to absorb energy from two coherent point sources.

  5. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.
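    The XML-based exchange between monitor and collector might look roughly like the sketch below; the element names and the aggregation step are invented for illustration, since the SSS specification defines its own formats.

```python
import xml.etree.ElementTree as ET

def node_report(hostname, metrics):
    """Serialize one node's readings as XML (a stand-in for what one
    monitored node would send over HTTP)."""
    node = ET.Element("node", name=hostname)
    for key, value in metrics.items():
        ET.SubElement(node, "metric", name=key).text = str(value)
    return ET.tostring(node, encoding="unicode")

def aggregate(reports):
    """Collector side: parse each node's XML and build one cluster-wide view."""
    view = {}
    for xml_text in reports:
        node = ET.fromstring(xml_text)
        view[node.get("name")] = {m.get("name"): m.text
                                  for m in node.findall("metric")}
    return view

reports = [node_report("n001", {"load": 0.42, "mem_free_mb": 512}),
           node_report("n002", {"load": 1.87, "mem_free_mb": 64})]
cluster = aggregate(reports)
assert cluster["n002"]["load"] == "1.87"
```

    Using self-describing XML over HTTP, as the thesis does, keeps each component independently replaceable, at the cost of some parsing overhead per report.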

  6. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Directory of Open Access Journals (Sweden)

    Giovanni Delussu

    Full Text Available This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
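
The driver layer with a common interface described above can be sketched in Python. The class and method names here are hypothetical and do not reproduce PyEHR's actual API; a real driver would wrap MongoDB or Elasticsearch rather than an in-memory list:

```python
from abc import ABC, abstractmethod

class Driver(ABC):
    """Common driver interface; concrete drivers wrap a specific NoSQL backend."""
    @abstractmethod
    def insert(self, record): ...
    @abstractmethod
    def search(self, **criteria): ...

class InMemoryDriver(Driver):
    """Stand-in backend used for illustration only."""
    def __init__(self):
        self._records = []
    def insert(self, record):
        self._records.append(record)
    def search(self, **criteria):
        # Exact-match filtering; real drivers translate criteria to backend queries.
        return [r for r in self._records
                if all(r.get(k) == v for k, v in criteria.items())]

db = InMemoryDriver()
db.insert({"archetype": "blood_pressure", "systolic": 120})
db.insert({"archetype": "blood_pressure", "systolic": 145})
print(len(db.search(archetype="blood_pressure")))  # 2
```

The point of the abstraction is that code above the driver layer never mentions a backend, so swapping MongoDB for Elasticsearch is a configuration change, not a rewrite.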

  7. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Science.gov (United States)

    Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  8. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data

    Science.gov (United States)

    Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191

  9. Microscopie "CARS" (Coherent anti-Stokes Raman scattering). Génération du signal au voisinage d'interfaces et à l'intérieur d'une cavité Fabry-Perot.

    OpenAIRE

    Gachet, D

    2007-01-01

    Coherent anti-Stokes Raman scattering (``CARS'') is a spectroscopic technique that gives access to intra-molecular vibrational information. It was first proposed as a contrast mechanism in microscopy in 1982, and was implemented in a convenient collinear configuration in 1999. Since then, signal generation in CARS microscopy has been studied in the literature for some simple configurations. In this PhD dissertation, we extend the CARS signal generation study in isotropic media using a f...

  10. Cohering power of quantum operations

    Energy Technology Data Exchange (ETDEWEB)

    Bu, Kaifeng, E-mail: bkf@zju.edu.cn [School of Mathematical Sciences, Zhejiang University, Hangzhou 310027 (China); Kumar, Asutosh, E-mail: asukumar@hri.res.in [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019 (India); Homi Bhabha National Institute, Anushaktinagar, Mumbai 400094 (India); Zhang, Lin, E-mail: linyz@zju.edu.cn [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Wu, Junde, E-mail: wjd@zju.edu.cn [School of Mathematical Sciences, Zhejiang University, Hangzhou 310027 (China)

    2017-05-18

    Highlights: • Quantum coherence. • Cohering power: production of quantum coherence by quantum operations. • Study of cohering power and generalized cohering power, and their comparison for different measures of quantum coherence. • Operational interpretation of cohering power. • Bound on the cohering power of a generic quantum operation. - Abstract: Quantum coherence and entanglement, which play a crucial role in quantum information processing tasks, are usually fragile under decoherence. Therefore, the production of quantum coherence by quantum operations is important for preserving quantum correlations, including entanglement. In this paper, we study cohering power, the ability of quantum operations to produce coherence. First, we provide an operational interpretation of cohering power. Then, we decompose a generic quantum operation into three basic operations, namely, unitary, appending and dismissal operations, and show that the cohering power of any quantum operation is upper bounded by that of the corresponding unitary operation. Furthermore, we compare the cohering power and generalized cohering power of quantum operations for different measures of coherence.

  11. Wanted: Scalable Tracers for Diffusion Measurements

    Science.gov (United States)

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586
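
The "mean diffusion coefficient as a function of size alone" that an ideal scalable tracer series yields can be illustrated with the textbook Stokes-Einstein relation for a rigid sphere in a simple liquid (a standard formula, not taken from this abstract):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein(radius_m, temperature_k=298.0, viscosity_pa_s=8.9e-4):
    """Diffusion coefficient D = k_B T / (6 pi eta r) of a sphere, in m^2/s.
    Default viscosity is that of water near room temperature."""
    return K_B * temperature_k / (6 * math.pi * viscosity_pa_s * radius_m)

# For a scalable tracer series, D depends on size alone, so doubling the
# hydrodynamic radius halves the predicted diffusion coefficient.
d_small = stokes_einstein(2e-9)  # 2 nm radius
d_large = stokes_einstein(4e-9)  # 4 nm radius
print(round(d_small / d_large, 6))  # 2.0
```

Deviations from this simple 1/r scaling, measured with chemically homologous tracers of varying size, are exactly what reveal hindered-diffusion mechanisms in crowded or heterogeneous media.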

  12. Scalable L-infinite coding of meshes.

    Science.gov (United States)

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in the L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
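
The difference between the two target metrics can be made concrete with a small sketch on hypothetical vertex data: the L-infinite distortion bounds the worst single-vertex displacement, while the mean-square error averages it away:

```python
def l_infinite_error(original, decoded):
    """Maximum per-coordinate displacement over all vertices (Chebyshev distance)."""
    return max(
        max(abs(a - b) for a, b in zip(v0, v1))
        for v0, v1 in zip(original, decoded)
    )

def mean_square_error(original, decoded):
    """Mean over vertices of the squared Euclidean displacement."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(v0, v1))
        for v0, v1 in zip(original, decoded)
    ) / len(original)

orig = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
dec  = [(0.0, 0.1, 0.0), (1.0, 1.0, 0.7)]
print(round(l_infinite_error(orig, dec), 6))  # 0.3 -- a bound on the local error
```

A codec that is scalable in the L-infinite sense guarantees that every decodable prefix of the stream keeps `l_infinite_error` below a predictable bound, which an MSE target cannot promise for any individual vertex.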

  13. Quantifying the Coherence between Coherent States

    Science.gov (United States)

    Tan, Kok Chuan; Volkoff, Tyler; Kwon, Hyukjoon; Jeong, Hyunseok

    2017-11-01

    In this Letter, we detail an orthogonalization procedure that allows for the quantification of the amount of coherence present in an arbitrary superposition of coherent states. The present construction is based on the quantum coherence resource theory introduced by Baumgratz, Cramer, and Plenio and the coherence resource monotone that we identify is found to characterize the nonclassicality traditionally analyzed via the Glauber-Sudarshan P distribution. This suggests that identical quantum resources underlie both quantum coherence in the discrete finite dimensional case and the nonclassicality of quantum light. We show that our construction belongs to a family of resource monotones within the framework of a resource theory of linear optics, thus establishing deeper connections between the class of incoherent operations in the finite dimensional regime and linear optical operations in the continuous variable regime.

  14. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering...... method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real...... and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well....

  15. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Quaglia, Davide

    2017-01-01

    The future smart power grid will consist of an unlimited number of smart devices that communicate with control units to maintain the grid’s sustainability, efficiency, and balancing. In order to build and verify such controllers over a large grid, a scalable simulation environment is needed....... This paper presents an open source smart grid simulator (SGSim). The simulator is based on open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...... and appliances. By using SGSim, different smart grid control strategies and protocols can be tested, validated and evaluated in a scalable environment....

  16. On Longitudinal Spectral Coherence

    DEFF Research Database (Denmark)

    Kristensen, Leif

    1979-01-01

    It is demonstrated that the longitudinal spectral coherence differs significantly from the transversal spectral coherence in its dependence on displacement and frequency. An expression for the longitudinal coherence is derived and it is shown how the scale of turbulence, the displacement between ...... observation sites and the turbulence intensity influence the results. The limitations of the theory are discussed....
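
For context, the (squared) spectral coherence between velocity records at two sites separated by a displacement D is conventionally defined from the cross-spectrum and the one-point spectra. This is the standard definition; the paper's own notation may differ:

```latex
\operatorname{coh}(D, f) \;=\; \frac{\left| S_{12}(D, f) \right|^{2}}{S_{1}(f)\, S_{2}(f)}
```

Here \(S_{12}\) is the cross-spectrum between the two observation sites and \(S_1\), \(S_2\) are their one-point spectra; the longitudinal case takes the displacement D along the mean-flow direction.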

  17. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    a long time to replicate, business model scalability can be cornered into four dimensions. In many corporate restructuring exercises and Mergers and Acquisitions there is a tendency to look for synergies in the form of cost reductions, lean workflows and market segments. However, this state of mind......This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale...

  18. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard to copy value propositions or value propositions that take......This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale...... will seldom lead to business model scalability capable of competing with digital disruption(s)....

  19. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of a given scalability type results in different kinds and/or levels of visual distortion, depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction through spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing, for each temporal segment, the scaling type that results in minimum visual distortion according to this objective function, given the content type of the temporal segment. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those scaled using a single scalability option over the whole sequence.
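
The selection step described above, picking the scalability type that minimizes a weighted sum of artifact scores for each temporal segment, can be sketched as follows. All scores and weights below are invented for illustration; the paper's trained weights and exact objective differ:

```python
# Hypothetical per-option artifact scores for one temporal segment
# (flatness, blockiness, blurriness, temporal jerkiness); lower is better.
OPTIONS = {
    "spatial":  {"flat": 0.2, "block": 0.1, "blur": 0.6, "jerk": 0.1},
    "temporal": {"flat": 0.1, "block": 0.1, "blur": 0.1, "jerk": 0.8},
    "snr":      {"flat": 0.3, "block": 0.5, "blur": 0.2, "jerk": 0.1},
}

def best_scaling(weights):
    """Return the scalability type minimizing the weighted distortion."""
    def cost(scores):
        return sum(weights[k] * scores[k] for k in weights)
    return min(OPTIONS, key=lambda opt: cost(OPTIONS[opt]))

# Weights trained per content type: a high-motion segment (e.g. a soccer
# long shot) penalizes temporal jerkiness, steering away from frame-rate scaling.
print(best_scaling({"flat": 1, "block": 1, "blur": 1, "jerk": 4}))  # spatial
```

Swapping in weights that penalize blurriness instead (as one might for a static close-up) makes temporal scaling win, which is the content dependence the paper exploits.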

  20. COHERENCE PROPERTIES OF ELECTROMAGNETIC RADIATION,

    Science.gov (United States)

    (*ELECTROMAGNETIC RADIATION, COHERENT SCATTERING), (*COHERENT SCATTERING, ELECTROMAGNETIC RADIATION), LIGHT, INTERFERENCE, INTENSITY, STATISTICAL FUNCTIONS, QUANTUM THEORY, BOSONS, INTERFEROMETERS, CHINA

  1. Quantum dot-micropillars: a bright source of coherent single photons

    DEFF Research Database (Denmark)

    Unsleber, Sebastian; He, Yu-Ming; Maier, Sebastian

    2016-01-01

    We present the efficient generation of coherent single photons based on quantum dots in micropillars. We utilize a scalable lithography scheme leading to quantum dot-micropillar devices with 74% extraction efficiency. Via pulsed strict resonant pumping, we show an indistinguishability...

  2. Scalable Detection and Isolation of Phishing

    NARCIS (Netherlands)

    Moreira Moura, Giovane; Pras, Aiko

    2009-01-01

    This paper presents a proposal for scalable detection and isolation of phishing. The main ideas are to move the protection from end users towards the network provider and to employ the novel bad neighborhood concept, in order to detect and isolate both phishing e-mail senders and phishing web

  3. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    . This paper presents an open source smart grid simulator (SGSim). The simulator is based on open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  4. Realization of a scalable airborne radar

    NARCIS (Netherlands)

    Halsema, D. van; Jongh, R.V. de; Es, J. van; Otten, M.P.G.; Vermeulen, B.C.B.; Liempt, L.J. van

    2008-01-01

    Modern airborne ground surveillance radar systems are increasingly based on Active Electronically Scanned Array (AESA) antennas. Efficient use of array technology and the need for radar solutions for various airborne platforms, manned and unmanned, leads to the design of scalable radar systems. The

  5. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  6. Subjective comparison of temporal and quality scalability

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; You, Junyong

    2011-01-01

    and quality scalability. The practical experiments with low-resolution video sequences show that, in general, distortion is a more crucial factor for the perceived subjective quality than frame rate. However, the results also depend on the content. Moreover, we discuss the role of other different influence...

  7. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    Science.gov (United States)

    West, Jeff; Yang, H. Q.

    2014-01-01

    There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid-engine-powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical-condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch-induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 cpus and no additional speedup was possible after 32 cpus. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured-mesh, pressure-based CFD program Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. On these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of cpu cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform, in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are avoided.
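
The saturation behavior cited in the 2010 review is what Amdahl's law predicts when a fixed fraction of the work is serialized. A minimal sketch (the 12% serial fraction below is illustrative, not a number from the review):

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: achievable speedup when serial_fraction of the work
    cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# A code whose speedup stalls around 8x no matter how many CPUs are added
# behaves as if roughly 12% of its work were serialized: the asymptotic
# speedup is capped at 1/0.12 ~ 8.3x.
print(round(amdahl_speedup(0.12, 32), 1))    # 6.8
print(round(amdahl_speedup(0.12, 4096), 1))  # 8.3
```

Achieving useful scaling on thousands of cores, as Loci-STREAM-VoF does, therefore requires driving the serialized portion (including parallel overhead) down to a fraction of a percent.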

  8. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. 
Computer: Any PC or
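
The usage model described above, independent and reproducible random streams that behave identically regardless of where the consumer runs, can be sketched with standard-library primitives. This is a conceptual sketch of the stream-per-worker idea only, not SPRNG's or GASPRNG's actual API:

```python
import random

def make_streams(global_seed, n_streams):
    """Create independent, reproducible PRNG streams, one per worker.
    A master generator deterministically derives each stream's seed,
    so (global_seed, stream_id) always yields the same sequence."""
    master = random.Random(global_seed)
    return [random.Random(master.getrandbits(64)) for _ in range(n_streams)]

# Results are reproducible no matter how work is scheduled across
# CPUs or GPUs, which is what lets GPU and CPU runs be cross-checked.
a = make_streams(42, 4)
b = make_streams(42, 4)
print(all(sa.random() == sb.random() for sa, sb in zip(a, b)))  # True
```

Libraries like SPRNG additionally guarantee statistical independence between streams by construction (e.g. via parameterized generators), which naive per-stream seeding does not formally provide.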

  9. Maximum Relative Entropy of Coherence: An Operational Coherence Measure

    Science.gov (United States)

    Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde

    2017-10-01

    The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
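
For reference, the central quantity can be stated compactly. Writing \(\mathcal{I}\) for the set of incoherent (diagonal) states, the max-relative-entropy measure of coherence is defined as (standard notation; precise conditions are in the paper):

```latex
C_{\max}(\rho) \;=\; \min_{\sigma \in \mathcal{I}} D_{\max}(\rho \,\|\, \sigma),
\qquad
D_{\max}(\rho \,\|\, \sigma) \;=\; \log \min \{\, \lambda \ge 0 : \rho \le \lambda \sigma \,\}.
```

The minimum relative entropy of coherence discussed at the end of the abstract is the analogous quantity with \(D_{\max}\) replaced by the min-relative entropy.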

  10. Plasmonic Antennas as Design Elements for Coherent Ultrafast Nanophotonics

    CERN Document Server

    Brinks, Daan; Hildner, Richard; van Hulst, Niek F

    2012-01-01

    Coherent broadband excitation of plasmons brings ultrafast photonics to the nanoscale. However, to fully leverage this potential for ultrafast nanophotonic applications, the capacity to engineer and control the ultrafast response of a plasmonic system at will is crucial. Here, we develop a framework for systematic control and measurement of ultrafast dynamics of near-field hotspots. We show deterministic design of the coherent response of plasmonic antennas at femtosecond timescales. Exploiting the emerging properties of coupled antenna configurations, we use the calibrated antennas to engineer two sought-after applications of ultrafast plasmonics: a subwavelength resolution phase shaper, and an ultrafast hotspot switch. Moreover, we demonstrate that mixing localized resonances of lossy plasmonic particles is the mechanism behind nanoscale coherent control. This simple, reproducible and scalable approach promises to transform ultrafast plasmonics into a straightforward tool for use in fields as diverse as roo...

  11. Compiler-Enforced Cache Coherence Using a Functional Language

    Directory of Open Access Journals (Sweden)

    Rich Wolski

    1996-01-01

    Full Text Available The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention; increase execution performance; and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared-memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.

  12. Scalable multi-GPU implementation of the MAGFLOW simulator

    Directory of Open Access Journals (Sweden)

    Giovanni Gallo

    2011-12-01

    Full Text Available We have developed a robust and scalable multi-GPU (Graphics Processing Unit) version of the cellular-automaton-based MAGFLOW lava simulator. The cellular automaton is partitioned into strips that are assigned to different GPUs, with minimal overlapping. For each GPU, a host thread is launched to manage allocation, deallocation, data transfer and kernel launches; the main host thread coordinates all of the GPUs, to ensure temporal coherence and data integrity. The overlapping borders and the maximum temporal step need to be exchanged among the GPUs at the beginning of every evolution of the cellular automaton; data transfers are asynchronous with respect to the computations, to cover the introduced overhead. It is not required to have GPUs of the same speed or capacity; the system runs flawlessly on homogeneous and heterogeneous hardware. The speed-up factor differs from the ideal one (#GPUs×) only by a constant overhead loss of about 4E−2 · T · #GPUs, with T the total simulation time.
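
The strip decomposition with overlapping borders described above reduces to simple index arithmetic. This is a sketch of the partitioning idea only; the actual simulator additionally manages GPU buffers and asynchronous halo transfers:

```python
def partition_strips(n_rows, n_gpus, halo=1):
    """Split a grid's rows into per-GPU strips. Each strip owns a contiguous
    block of rows but also *reads* `halo` border rows from each neighbor,
    which must be exchanged before every automaton evolution step.
    Returns (read_lo, read_hi, own_lo, own_hi) per GPU."""
    base, extra = divmod(n_rows, n_gpus)
    strips, start = [], 0
    for g in range(n_gpus):
        rows = base + (1 if g < extra else 0)      # balance the remainder
        read_lo = max(0, start - halo)             # halo from neighbor above
        read_hi = min(n_rows, start + rows + halo) # halo from neighbor below
        strips.append((read_lo, read_hi, start, start + rows))
        start += rows
    return strips

# 10 rows over 3 GPUs: each strip reads one extra border row per neighbor.
for read_lo, read_hi, own_lo, own_hi in partition_strips(10, 3):
    print(read_lo, read_hi, own_lo, own_hi)
```

Because only the halo rows cross device boundaries, the exchanged data volume is constant per step, which is consistent with the constant (rather than growing) overhead term reported above.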

  13. Application Coherency Manager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal describes an Application Coherency Manager that implements and manages the interdependencies of simulation, data, and platform information. It will...

  14. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

    Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research, based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, space-filling-curve-based adaptive data compression with a user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.
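
Space-filling-curve techniques like those mentioned above typically rely on Morton (Z-order) keys: interleaving the bits of the coordinates so that sorting cells by key keeps spatially nearby cells nearby in memory, which aids both compression and load balancing. A minimal sketch:

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of three non-negative integer coordinates
    into a single Z-order (Morton) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Sorting cells by Morton key orders them along the space-filling curve.
cells = [(3, 1, 0), (0, 0, 0), (1, 1, 1), (2, 0, 3)]
print(sorted(cells, key=lambda c: morton3d(*c)))
# [(0, 0, 0), (1, 1, 1), (3, 1, 0), (2, 0, 3)]
```

Partitioning the sorted curve into equal-length segments then yields compact, load-balanced spatial domains without any explicit geometry processing.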

  15. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    supports the PostgreSQL dialect of SQL. The prototype implementation is a compiler that translates CVL into SQL and stored procedures. (c) TileHeat is a framework and basic algorithm for partial materialization of hot tile sets for scalable map distribution. The framework predicts future map workloads......, there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...... goal. The results for Tileheat show that the prediction method offers a substantial improvement over the current method used by the Danish Geodata Agency. Thus, a large amount of computations can potentially be saved by this public institution, who is responsible for the distribution of government...

  16. A Scalability Model for ECS's Data Server

    Science.gov (United States)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to the Data Server. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.

  17. Stencil Lithography for Scalable Micro- and Nanomanufacturing

    Directory of Open Access Journals (Sweden)

    Ke Du

    2017-04-01

    Full Text Available In this paper, we review the current development of stencil lithography for scalable micro- and nanomanufacturing as a resistless and reusable patterning technique. We first introduce the motivation and advantages of stencil lithography for large-area micro- and nanopatterning. Then we review the progress of using rigid membranes such as SiNx and Si as stencil masks as well as stacking layers. We also review the current use of flexible membranes, including a compliant SiNx membrane with springs, polyimide film, a polydimethylsiloxane (PDMS) layer, and photoresist-based membranes, as stencil lithography masks to address problems such as blurring and non-planar surface patterning. Moreover, we discuss the dynamic stencil lithography technique, which significantly improves the patterning throughput and speed by moving the stencil over the target substrate during deposition. Lastly, we discuss the future advancement of stencil lithography as a resistless, reusable, scalable, and programmable nanolithography method.

  18. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; Van Renesse, Robbert

    2015-01-01

    Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  19. Stencil Lithography for Scalable Micro- and Nanomanufacturing

    OpenAIRE

    Ke Du; Junjun Ding; Yuyang Liu; Ishan Wathuthanthri; Chang-Hwan Choi

    2017-01-01

    In this paper, we review the current development of stencil lithography for scalable micro- and nanomanufacturing as a resistless and reusable patterning technique. We first introduce the motivation and advantages of stencil lithography for large-area micro- and nanopatterning. Then we review the progress of using rigid membranes such as SiNx and Si as stencil masks as well as stacking layers. We also review the current use of flexible membranes including a compliant SiNx membrane with spring...

  20. Scalable real space pseudopotential-density functional codes for materials applications

    Science.gov (United States)

    Chelikowsky, James R.; Lena, Charles; Schofield, Grady; Saad, Yousef; Deslippe, Jack; Yang, Chao

    2015-03-01

    Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs and clusters with and without spin polarization. Fully self-consistent solutions have been routinely obtained for systems with thousands of atoms. However, there are still systems where quantum mechanical accuracy is desired, but scalability proves to be a hindrance, such as large biological molecules or complex interfaces. We will present an overview of our work on new algorithms, which offer improved scalability by implementing another layer of parallelism, and by optimizing communication and memory management. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).

  1. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  2. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.
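The Delayed Initialization and Module Sharing ideas can be illustrated with a small sketch (all class and method names below are hypothetical, not taken from the DISP implementation): expensive per-communicator topology state is built lazily on first use, and the result is cached so communicators of the same size share one copy instead of each paying the setup cost at startup.

```python
class CollectiveModule:
    """Toy sketch of Delayed Initialization and Module Sharing
    (names hypothetical): topology setup is deferred until the first
    collective call needs it, and the built topology is shared between
    communicators of the same size."""

    _shared_topology = {}              # Module Sharing: one tree per size

    def __init__(self, comm_size):
        self.comm_size = comm_size
        self._tree = None              # Delayed Initialization: nothing built yet

    def tree(self):
        if self._tree is None:         # first collective call triggers setup
            cache = CollectiveModule._shared_topology
            if self.comm_size not in cache:
                cache[self.comm_size] = self._build_binomial_tree()
            self._tree = cache[self.comm_size]
        return self._tree

    def _build_binomial_tree(self):
        # Stand-in for the expensive topology setup: parent of each rank.
        return {child: (child - 1) // 2 for child in range(1, self.comm_size)}

m1, m2 = CollectiveModule(8), CollectiveModule(8)
assert m1._tree is None                # startup paid nothing for topology
print(m1.tree() is m2.tree())          # True: both modules share one cached tree
```

The same pattern generalizes to any per-communicator state whose construction dominates startup time.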

  3. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.

  4. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of multimedia dropped as a result of a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. A scalable streaming service also makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the multimedia data wasted when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if all packets of a layer are transmitted successfully, they cannot be decoded in the absence of their reference frames and layers. Therefore, a complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. To provide a high-quality scalable streaming service, we must choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
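The indirect-loss effect can be made concrete with a toy dependency model (the layer names and dependency graph below are hypothetical): a received layer is decodable only if every layer it references is itself decodable, so a single lost base-layer packet wastes enhancement layers that arrived intact.

```python
# Hypothetical scalable stream: each layer lists the layers it references.
deps = {
    "base": [],
    "enh1": ["base"],
    "enh2": ["enh1"],
}

def decodable(received, deps):
    """Return the subset of received layers that can actually be decoded:
    a layer decodes only if all of its reference layers decode. Layers
    received but not decodable are the 'indirect loss'."""
    ok = set()
    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for layer in received:
            if layer not in ok and all(d in ok for d in deps[layer]):
                ok.add(layer)
                changed = True
    return ok

# Losing only the base layer wastes both enhancement layers that arrived.
print(decodable({"enh1", "enh2"}, deps))   # nothing decodable
# Losing the middle layer wastes enh2 but the base still decodes.
print(decodable({"base", "enh2"}, deps))   # only the base survives
```

A deeper dependency chain magnifies indirect loss, which is the intuition behind the paper's finding that a simple layer structure outperforms a complicated one on an error-prone network.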

  5. Analysis of Technology for Compact Coherent Lidar

    Science.gov (United States)

    Amzajerdian, Farzin

    1997-01-01

Recent advances in the area of solid state and semiconductor lasers have created new possibilities for the development of compact and reliable coherent lidars for a wide range of applications. These applications include: Automated Rendezvous and Capture, wind shear and clear air turbulence detection, aircraft wake vortex detection, and automobile collision avoidance. The work performed by the UAH personnel under this Delivery Order concentrated on design and analyses of a compact coherent lidar system capable of measuring range and velocity of hard targets, and providing air mass velocity data. The following is the scope of this work. a. Investigate various laser sources and optical signal detection configurations in support of a compact and lightweight coherent laser radar to be developed for precision range and velocity measurements of hard and fuzzy targets. Through interaction with MSFC engineers, the most suitable laser source and signal detection technique that can provide a reliable compact and lightweight laser radar design will be selected. b. Analyze and specify the coherent laser radar system configuration and assist with its optical and electronic design efforts. Develop a system design including its optical layout design. Specify all optical components and provide the general requirements of the electronic subsystems including laser beam modulator and demodulator drivers, detector electronic interface, and the signal processor. c. Perform a thorough performance analysis to predict the system measurement range and accuracy. This analysis will utilize various coherent laser radar sensitivity formulations and different target models.

  6. Interface Realisms

    DEFF Research Database (Denmark)

    Pold, Søren

    2005-01-01

This article argues for seeing the interface as an important representational and aesthetic form with implications for postmodern culture and digital aesthetics. The interface emphasizes realism due in part to the desire for transparency in Human-Computer Interaction (HCI) and partly to the devel...

  7. Understanding Causal Coherence Relations

    NARCIS (Netherlands)

    Mulder, G.

    2008-01-01

    The research reported in this dissertation focuses on the cognitive processes and representations involved in understanding causal coherence relations in text. Coherence relations are the meaning relations between the information units in the text, such as Cause-Consequence. These relations can be

  8. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution, which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
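A heavily simplified NumPy sketch of the re-sampling idea described above: the ensemble is drawn afresh from the current Gaussian approximation rather than carried over, and the mean and covariance are updated with a standard Kalman analysis step. This illustrates the concept only; it is not the authors' implementation, and the observation operator, noise levels, and sizes below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def venkf_analysis(mean, cov, H, R, y, n_ens=50):
    """One simplified VEnKF-style analysis step: re-sample the ensemble
    from the Gaussian approximation N(mean, cov), then apply a standard
    Kalman update to the mean and covariance (illustrative sketch)."""
    # Re-sample members from the Gaussian approximation (the 'dynamic' part).
    ens = rng.multivariate_normal(mean, cov, size=n_ens)
    # Kalman gain computed from the Gaussian approximation, as in the EKF.
    S = H @ cov @ H.T + R
    K = cov @ H.T @ np.linalg.inv(S)
    # Standard analysis update of mean and covariance.
    mean_a = mean + K @ (y - H @ mean)
    cov_a = (np.eye(len(mean)) - K @ H) @ cov
    return mean_a, cov_a, ens

mean = np.zeros(2)
cov = np.eye(2)
H = np.array([[1.0, 0.0]])    # made-up operator: observe the first component
R = np.array([[0.5]])         # made-up observation noise
y = np.array([1.0])
mean_a, cov_a, ens = venkf_analysis(mean, cov, H, R, y)
print(mean_a)                 # posterior mean pulled toward the observation
```

Because only `mean` and `cov` persist between steps, the expensive model runs that produce them can be dispatched as fully independent parallel jobs, which is the isolation argument made above.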

  9. Reflection and transmission calculations in a multilayer structure with coherent, incoherent, and partially coherent interference, using the transmission line method.

    Science.gov (United States)

    Stathopoulos, N A; Savaidis, S P; Botsialas, A; Ioannidis, Z C; Georgiadou, D G; Vasilopoulou, M; Pagiatakis, G

    2015-02-20

A generalized transmission line method (TLM) that provides reflection and transmission calculations for a multilayer dielectric structure with coherent, partially coherent, and incoherent layers is presented. The method is demonstrated in two different application fields. The first application concerns the thickness measurement of the individual layers of an organic light-emitting diode. By using a fitting approach between experimental spectral reflectance measurements and the corresponding TLM calculations, it is shown that the thickness of the films can be estimated. The second application of the TLM concerns the calculation of the external quantum efficiency of an organic photovoltaic with partially coherent rough interfaces between the layers. Numerical results regarding the short-circuit photocurrent for different layer thicknesses and rough interfaces are provided, and the performance impact of the rough interface is discussed in detail.
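For the fully coherent special case, the transfer-matrix calculation behind such reflectance fits is compact enough to sketch. The toy below computes normal-incidence reflectance of a dielectric stack via the standard characteristic-matrix formalism; the layer indices and thicknesses are illustrative values, and the incoherent and partially coherent extensions that are the paper's actual contribution are omitted.

```python
import cmath
from math import pi

def reflectance(n0, layers, ns, wavelength):
    """Coherent normal-incidence reflectance of a dielectric stack.
    `layers` is a list of (refractive_index, thickness) pairs between an
    incident medium n0 and a substrate ns (characteristic-matrix method)."""
    # Accumulate the characteristic matrix of the whole stack.
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for n, d in layers:
        delta = 2 * pi * n * d / wavelength          # phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    B = m11 + m12 * ns
    C = m21 + m22 * ns
    r = (n0 * B - C) / (n0 * B + C)                  # amplitude reflectance
    return abs(r) ** 2

lam = 550e-9                  # illustrative design wavelength
n_film = 1.38                 # e.g. an MgF2-like coating (illustrative)
R_bare = reflectance(1.0, [], 1.52, lam)
R_coated = reflectance(1.0, [(n_film, lam / (4 * n_film))], 1.52, lam)
print(R_bare, R_coated)       # the quarter-wave film lowers the reflectance
```

Fitting film thicknesses, as in the OLED application above, amounts to adjusting the `d` values until computed spectra match measured ones.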

  10. Cooperative dissociations of misfit dislocations at bimetal interfaces

    Directory of Open Access Journals (Sweden)

    K. Liu

    2016-11-01

Full Text Available Using atomistic simulations, several semi-coherent cube-on-cube bimetal interfaces are comparatively investigated to unravel the combined effect of the character of misfit dislocations, the stacking fault energy difference between bimetal pairs, and their lattice mismatch on the dissociation of interfacial misfit dislocations. Different dissociation paths and features under loadings provide several unique deformation mechanisms that are critical for understanding interface strengthening. In particular, applied strains can cause either the formation of global interface coherency, through the migration of misfit dislocations from an interface to an adjoining crystal interior, or an alternate packing of stacking faults connected by stair-rod dislocations.

  11. Measuring coherence with entanglement concurrence

    Science.gov (United States)

    Qi, Xianfei; Gao, Ting; Yan, Fengli

    2017-07-01

Quantum coherence is a fundamental manifestation of the quantum superposition principle. Recently, Baumgratz et al (2014 Phys. Rev. Lett. 113 140401) presented a rigorous framework to quantify coherence from the viewpoint of physical resource theory. Here we propose a new valid quantum coherence measure, which is a convex roof measure, for a quantum system of arbitrary dimension, essentially using the generalized Gell-Mann matrices. Rigorous proof shows that the proposed coherence measure, coherence concurrence, fulfills all the requirements dictated by the resource theory of quantum coherence measures. Moreover, strong links between the resource frameworks of coherence concurrence and entanglement concurrence are derived, showing that any degree of coherence with respect to some reference basis can be converted to entanglement via incoherent operations. Our work provides a clear quantitative and operational connection between coherence and entanglement based on two kinds of concurrence. This new coherence measure, coherence concurrence, may also be beneficial to the study of quantum coherence.
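The paper's coherence concurrence is a convex-roof measure and is not reproduced here. As a much simpler illustration of basis-dependent coherence quantification in the Baumgratz et al framework, the l1-norm of coherence sums the magnitudes of the density matrix's off-diagonal entries in the chosen reference basis:

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm of coherence: sum of absolute off-diagonal entries of the
    density matrix in the reference basis. One of the valid measures in
    the Baumgratz et al framework; the paper's coherence concurrence is a
    different (convex-roof) measure, shown here only by analogy."""
    return float(np.sum(np.abs(rho)) - np.trace(np.abs(rho)))

# Maximally coherent qubit state |+> = (|0> + |1>)/sqrt(2) ...
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
# ... versus the incoherent mixture with the same populations.
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])
print(l1_coherence(plus), l1_coherence(mixed))   # 1.0 0.0
```

Both states have identical diagonals, so the difference between them is exactly what a coherence measure is built to detect.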

  12. Coherent structures in compressible free-shear-layer flows

    Energy Technology Data Exchange (ETDEWEB)

    Aeschliman, D.P.; Baty, R.S. [Sandia National Labs., Albuquerque, NM (United States). Engineering Sciences Center; Kennedy, C.A.; Chen, J.H. [Sandia National Labs., Livermore, CA (United States). Combustion and Physical Sciences Center

    1997-08-01

    Large scale coherent structures are intrinsic fluid mechanical characteristics of all free-shear flows, from incompressible to compressible, and laminar to fully turbulent. These quasi-periodic fluid structures, eddies of size comparable to the thickness of the shear layer, dominate the mixing process at the free-shear interface. As a result, large scale coherent structures greatly influence the operation and efficiency of many important commercial and defense technologies. Large scale coherent structures have been studied here in a research program that combines a synergistic blend of experiment, direct numerical simulation, and analysis. This report summarizes the work completed for this Sandia Laboratory-Directed Research and Development (LDRD) project.

  13. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two...

  14. Organic interfaces

    NARCIS (Netherlands)

    Poelman, W.A.; Tempelman, E.

    2014-01-01

    This paper deals with the consequences for product designers resulting from the replacement of traditional interfaces by responsive materials. Part 1 presents a theoretical framework regarding a new paradigm for man-machine interfacing. Part 2 provides an analysis of the opportunities offered by new

  15. Fluid Interfaces

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius

    2001-01-01

Fluid interaction, interaction by the user with the system that causes few breakdowns, is essential to many user interfaces. We present two concrete software systems that try to support fluid interaction for different work practices. Furthermore, we present specificity, generality, and minimality as design goals for fluid interfaces.

  16. Coherent Polariton Laser

    Science.gov (United States)

    Kim, Seonghoon; Zhang, Bo; Wang, Zhaorong; Fischer, Julian; Brodbeck, Sebastian; Kamp, Martin; Schneider, Christian; Höfling, Sven; Deng, Hui

    2016-01-01

The semiconductor polariton laser promises a new source of coherent light which, compared to conventional semiconductor photon lasers, has an input-energy threshold orders of magnitude lower. However, intensity stability, a defining feature of a coherent state, has remained poor. Intensity noise many times the shot noise of a coherent state has persisted, attributed to multiple mechanisms that are difficult to separate in conventional polariton systems. The large intensity noise, in turn, limits the phase coherence. Thus, the capability of the polariton laser as a source of coherent light is limited. Here, we demonstrate a polariton laser with shot-noise-limited intensity stability, as expected from a fully coherent state. This stability is achieved by using an optical cavity with high mode selectivity to enforce single-mode lasing, suppress condensate depletion, and establish gain saturation. Moreover, the absence of spurious intensity fluctuations enables the measurement of a transition from exponential to Gaussian decay of the phase coherence of the polariton laser. It suggests large self-interaction energies in the polariton condensate, exceeding the laser bandwidth. Such strong interactions are unique to matter-wave lasers and important for nonlinear polariton devices. The results will guide future development of polariton lasers and nonlinear polariton devices.

  17. Coherent Polariton Laser

    Directory of Open Access Journals (Sweden)

    Seonghoon Kim

    2016-03-01

Full Text Available The semiconductor polariton laser promises a new source of coherent light which, compared to conventional semiconductor photon lasers, has an input-energy threshold orders of magnitude lower. However, intensity stability, a defining feature of a coherent state, has remained poor. Intensity noise many times the shot noise of a coherent state has persisted, attributed to multiple mechanisms that are difficult to separate in conventional polariton systems. The large intensity noise, in turn, limits the phase coherence. Thus, the capability of the polariton laser as a source of coherent light is limited. Here, we demonstrate a polariton laser with shot-noise-limited intensity stability, as expected from a fully coherent state. This stability is achieved by using an optical cavity with high mode selectivity to enforce single-mode lasing, suppress condensate depletion, and establish gain saturation. Moreover, the absence of spurious intensity fluctuations enables the measurement of a transition from exponential to Gaussian decay of the phase coherence of the polariton laser. It suggests large self-interaction energies in the polariton condensate, exceeding the laser bandwidth. Such strong interactions are unique to matter-wave lasers and important for nonlinear polariton devices. The results will guide future development of polariton lasers and nonlinear polariton devices.

  18. An open, interoperable, and scalable prehospital information technology network architecture.

    Science.gov (United States)

    Landman, Adam B; Rokos, Ivan C; Burns, Kevin; Van Gelder, Carin M; Fisher, Roger M; Dunford, James V; Cone, David C; Bogucki, Sandy

    2011-01-01

    Some of the most intractable challenges in prehospital medicine include response time optimization, inefficiencies at the emergency medical services (EMS)-emergency department (ED) interface, and the ability to correlate field interventions with patient outcomes. Information technology (IT) can address these and other concerns by ensuring that system and patient information is received when and where it is needed, is fully integrated with prior and subsequent patient information, and is securely archived. Some EMS agencies have begun adopting information technologies, such as wireless transmission of 12-lead electrocardiograms, but few agencies have developed a comprehensive plan for management of their prehospital information and integration with other electronic medical records. This perspective article highlights the challenges and limitations of integrating IT elements without a strategic plan, and proposes an open, interoperable, and scalable prehospital information technology (PHIT) architecture. The two core components of this PHIT architecture are 1) routers with broadband network connectivity to share data between ambulance devices and EMS system information services and 2) an electronic patient care report to organize and archive all electronic prehospital data. To successfully implement this comprehensive PHIT architecture, data and technology requirements must be based on best available evidence, and the system must adhere to health data standards as well as privacy and security regulations. Recent federal legislation prioritizing health information technology may position federal agencies to help design and fund PHIT architectures.

  19. The Scalable Reasoning System: Lightweight Visualization for Distributed Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Pike, William A.; Bruce, Joseph R.; Baddeley, Robert L.; Best, Daniel M.; Franklin, Lyndsey; May, Richard A.; Rice, Douglas M.; Riensche, Roderick M.; Younkin, Katarina

    2009-03-01

    A central challenge in visual analytics is the creation of accessible, widely distributable analysis applications that bring the benefits of visual discovery to as broad a user base as possible. Moreover, to support the role of visualization in the knowledge creation process, it is advantageous to allow users to describe the reasoning strategies they employ while interacting with analytic environments. We introduce an application suite called the Scalable Reasoning System (SRS), which provides web-based and mobile interfaces for visual analysis. The service-oriented analytic framework that underlies SRS provides a platform for deploying pervasive visual analytic environments across an enterprise. SRS represents a “lightweight” approach to visual analytics whereby thin client analytic applications can be rapidly deployed in a platform-agnostic fashion. Client applications support multiple coordinated views while giving analysts the ability to record evidence, assumptions, hypotheses and other reasoning artifacts. We describe the capabilities of SRS in the context of a real-world deployment at a regional law enforcement organization.

  20. Combined Scalable Video Coding Method for Wireless Transmission

    Directory of Open Access Journals (Sweden)

    Achmad Affandi

    2011-08-01

Full Text Available Mobile video streaming is one of the multimedia services that has developed very rapidly. Recently, bandwidth utilization for wireless transmission has become the main problem in the field of multimedia communications. In this research, we offer a combination of scalable methods as the most attractive solution to this problem. A scalable method for wireless communication should adapt to the input video sequence. The standard ITU (International Telecommunication Union) Joint Scalable Video Model (JSVM) is employed to produce a combined scalable video coding (CSVC) method that matches the required quality of video streaming services for wireless transmission. The investigation in this paper shows that the combined scalable technique outperforms the non-scalable one in bit rate utilization at a given layer.

  1. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build-up on artificial cornea devices can lead to serious complications, including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates; however, none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructured polymer surfaces, 2) assessed the potential of poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls, 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved PMMA artificial cornea device, and 4) developed scalable fabrication protocols for implementation of antibacterial nanopatterned surfaces on thermoplastic polyurethane materials, commonly used in catheter tubing. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation by certain pathogenic bacteria.

  2. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  3. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor's substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  4. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    representation. Our technique requires no modification of our dependence tests, is agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our...... parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in....
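A minimal illustration of the loop pattern the record refers to (my example, not code from the paper): a conditional induction variable (CIV) is a counter that advances only on some iterations, so the write subscripts it produces are not an affine function of the loop index, yet they remain strictly increasing, which is the monotonicity property a CIV analysis can exploit to prove the writes never overlap.

```python
def compact_positive(a):
    """Copy the positive entries of `a` into a dense prefix of an output array."""
    out = [0] * len(a)
    k = 0  # conditional induction variable: incremented only on some paths
    for i in range(len(a)):
        if a[i] > 0:
            out[k] = a[i]  # write subscript is the CIV `k`, not the index `i`
            k += 1
    return out[:k]

print(compact_positive([3, -1, 4, -1, 5]))  # [3, 4, 5]
```

Because successive values of `k` are distinct, the writes to `out` are independent, and a compiler that can prove this (e.g. via a prefix-sum of the conditions) may parallelize the loop.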

  5. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  6. Microprocessor interfacing

    CERN Document Server

    Vears, R E

    2014-01-01

    Microprocessor Interfacing provides the coverage of the Business and Technician Education Council level NIII unit in Microprocessor Interfacing (syllabus U86/335). Composed of seven chapters, the book explains the foundation in microprocessor interfacing techniques in hardware and software that can be used for problem identification and solving. The book focuses on the 6502, Z80, and 6800/02 microprocessor families. The technique starts with signal conditioning, filtering, and cleaning before the signal can be processed. The signal conversion, from analog to digital or vice versa, is expl
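The analog-to-digital step mentioned above can be sketched numerically (the reference voltage and resolution below are assumed example values, not the book's): an n-bit ADC maps a conditioned 0..Vref input to one of 2^n quantization levels.

```python
def adc_read(voltage, vref=5.0, bits=8):
    """Ideal n-bit ADC: quantize a 0..vref input to an integer code."""
    full_scale = (1 << bits) - 1          # 255 for an 8-bit converter
    code = int(voltage / vref * full_scale + 0.5)  # round to nearest level
    return max(0, min(full_scale, code))  # clamp out-of-range inputs

print(adc_read(2.5))  # mid-scale on an 8-bit, 5 V ADC -> 128
```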

  7. Scalable Engineering of Quantum Optical Information Processing Architectures (SEQUOIA)

    Science.gov (United States)

    2016-12-13

    Scalable architecture for LOQC and cluster-state quantum computing (ballistic or non-ballistic), with parametric nonlinearities (Kerr, chi-2, …). Final R&D Status Report for “Scalable Engineering of Quantum Optical Information-Processing Architectures (SEQUOIA),” 13 December 2016; contract number W31-P4Q-15-C-0045.

  8. Optical Coherence Tomography

    DEFF Research Database (Denmark)

    Fercher, A.F.; Andersen, Peter E.

    2017-01-01

    Optical coherence tomography (OCT) is a technique that is used to peer inside a body noninvasively. Tissue structure defined by tissue absorption and scattering coefficients, and the speed of blood flow, are derived from the characteristics of light remitted by the body. Singly backscattered light...... detected by partial coherence interferometry (PCI) is used to synthesize the tomographic image coded in false colors. A prerequisite of this technique is a low time-coherent but high space-coherent light source, for example, a superluminescent diode or a supercontinuum source. Alternatively, the imaging...... technique can be realized by using ultrafast wavelength scanning light sources. For tissue imaging, the light source wavelengths are restricted to the red and near-infrared (NIR) region from about 600 to 1300 nm, the so-called therapeutic window, where absorption (μa ≈ 0.01 mm⁻¹) is small enough. Transverse...
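The need for a low time-coherence source follows from a standard textbook relation (not taken from this record): the axial resolution of OCT equals the round-trip coherence length of the source, δz = (2 ln 2 / π) · λ₀² / Δλ for a Gaussian spectrum, so broader bandwidth means finer depth resolution.

```python
import math

def oct_axial_resolution_um(center_wavelength_nm, bandwidth_nm):
    """Axial resolution (micrometres) for a Gaussian-spectrum OCT source."""
    delta_z_nm = (2 * math.log(2) / math.pi) * center_wavelength_nm**2 / bandwidth_nm
    return delta_z_nm / 1000.0

# A superluminescent diode at 840 nm with 50 nm bandwidth (assumed values):
print(round(oct_axial_resolution_um(840, 50), 1))  # ~6.2 um
```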

  9. Coherence in Industrial Transformation

    DEFF Research Database (Denmark)

    Jørgensen, Ulrik; Lauridsen, Erik Hagelskjær

    2003-01-01

    The notion of coherence is used to illustrate the general finding, that the impact of environmental management systems and environmental policy is highly dependent of the context and interrelatedness of the systems, procedures and regimes established in society....

  10. VCSEL Based Coherent PONs

    DEFF Research Database (Denmark)

    Jensen, Jesper Bevensee; Rodes, Roberto; Caballero Jambrina, Antonio

    2014-01-01

    We present a review of research performed in the area of coherent access technologies employing vertical cavity surface emitting lasers (VCSELs). Experimental demonstrations of optical transmission over a passive fiber link with coherent detection using VCSEL local oscillators and directly...... modulated VCSEL transmitters at bit rates up to 10 Gbps in the C-band as well as in the O-band are presented. The broad linewidth and frequency chirp associated with directly modulated VCSELs are utilized in an envelope detection receiver scheme which is demonstrated digitally (off-line) as well as analog...... (real-time). Additionally, it is shown that in the optical front-end of a coherent receiver for access networks, the 90 ° hybrid can be replaced by a 3-dB coupler. The achieved results show that VCSELs are attractive light source candidates for transmitter as well as local oscillator for coherent...

  11. Coherent combination of high-power, zigzag slab lasers

    Science.gov (United States)

    Goodno, G. D.; Komine, H.; McNaught, S. J.; Weiss, S. B.; Redmond, S.; Long, W.; Simpson, R.; Cheung, E. C.; Howland, D.; Epp, P.; Weber, M.; McClellan, M.; Sollee, J.; Injeyan, H.

    2006-05-01

    We demonstrate a scalable architecture for a high-power, high-brightness, solid-state laser based on coherent combinations of master oscillator power amplifier chains. A common master oscillator injects a sequence of multikilowatt Nd:YAG zigzag slab amplifiers. Adaptive optics correct the wavefront of each amplified beamlet. The beamlets are tiled side by side and actively phase locked to form a single output beam. The laser produces 19 kW with beam quality <2× diffraction limited. To the best of our knowledge, this is the brightest cw solid-state laser demonstrated to date.
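The payoff of active phase locking can be illustrated with a simple tiled-aperture model (my sketch with assumed error figures, not the authors' control code): for N equal-amplitude beamlets the on-axis combined intensity is |Σₖ exp(i·φₖ)|², which for small residual rms phase error σ (radians) degrades roughly as exp(−σ²).

```python
import math
import random

def combining_efficiency(phase_errors_rad):
    """On-axis combining efficiency of equal-amplitude tiled beamlets."""
    n = len(phase_errors_rad)
    re = sum(math.cos(p) for p in phase_errors_rad)
    im = sum(math.sin(p) for p in phase_errors_rad)
    return (re * re + im * im) / n**2

random.seed(0)
sigma = 0.1  # ~lambda/63 rms residual piston error, an assumed figure
errors = [random.gauss(0.0, sigma) for _ in range(1000)]
print(combining_efficiency(errors))  # close to exp(-0.01) ~ 0.99
```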

  12. Manufacturing Interfaces

    NARCIS (Netherlands)

    van Houten, Frederikus J.A.M.

    1992-01-01

    The paper identifies the changing needs and requirements with respect to the interfacing of manufacturing functions. It considers the manufacturing system, its components and their relationships from the technological and logistic point of view, against the background of concurrent engineering.

  13. Coherence and chaos

    Energy Technology Data Exchange (ETDEWEB)

    Sudarshan, E.C.G.

    1993-12-31

    The annihilation operator for the harmonic oscillator is a weighted shift operator and can be realized on a family of overcomplete coherent states. Shift operators arise in dynamical maps of systems exhibiting deterministic chaos. Generalized coherent states, called harmonious states, realize these maps in a simple manner. By analytic continuation the spectral family can be altered, thus furnishing an alternative perspective on resonant scattering. Singular distributions are necessary to reproduce the rich structure of chaotic and scattering systems.
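The weighted-shift structure is easy to check numerically (my illustration, standard quantum mechanics rather than code from the paper): in the number basis a|n⟩ = √n |n−1⟩, and a coherent state |α⟩ is an eigenvector of this shift with eigenvalue α.

```python
import math

def annihilate(psi):
    """Apply the weighted shift a|n> = sqrt(n)|n-1> to a number-basis vector."""
    return [math.sqrt(n) * psi[n] for n in range(1, len(psi))] + [0.0]

def coherent_state(alpha, dim):
    """Truncated coherent-state amplitudes c_n = e^{-a^2/2} a^n / sqrt(n!)."""
    c = [math.exp(-alpha**2 / 2)]
    for n in range(1, dim):
        c.append(c[-1] * alpha / math.sqrt(n))
    return c

psi = coherent_state(0.5, 30)
apsi = annihilate(psi)
# Eigenvalue relation a|alpha> = alpha|alpha>, up to truncation of the basis:
print(max(abs(apsi[n] - 0.5 * psi[n]) for n in range(29)))  # ~0
```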

  14. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploit the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups and each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on-demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literatures, and community annotations. Taken together, such architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, thus helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational researches.
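The community-module architecture can be sketched schematically (module names and interfaces below are my invention, not IC4R's actual code): each data type is served by an independently maintained module, and the integrator queries all registered modules on demand.

```python
MODULES = {}

def register(data_type):
    """Decorator that registers a community module for one data type."""
    def deco(fn):
        MODULES[data_type] = fn
        return fn
    return deco

@register("expression")
def expression_module(gene):
    return {"gene": gene, "source": "RNA-Seq expression profiles"}

@register("variation")
def variation_module(gene):
    return {"gene": gene, "source": "resequencing-based genomic variations"}

def integrate(gene):
    """Aggregate every registered module's on-demand answer for one gene."""
    return {data_type: fn(gene) for data_type, fn in MODULES.items()}

print(sorted(integrate("OsABC1")))  # ['expression', 'variation']
```

New data types are added by registering a new module, without touching the integrator, which is what makes the scheme scalable and cheap to maintain.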

  15. Using MPI to Implement Scalable Libraries

    Science.gov (United States)

    Lusk, Ewing

    MPI is an instantiation of a general-purpose programming model, and high-performance implementations of the MPI standard have provided scalability for a wide range of applications. Ease of use was not an explicit goal of the MPI design process, which emphasized completeness, portability, and performance. Thus it is not surprising that MPI is occasionally criticized for being inconvenient to use and thus a drag on software developer productivity. One approach to the productivity issue is to use MPI to implement simpler programming models. Such models may limit the range of parallel algorithms that can be expressed, yet provide sufficient generality to benefit a significant number of applications, even from different domains.We illustrate this concept with the ADLB (Asynchronous, Dynamic Load-Balancing) library, which can be used to express manager/worker algorithms in such a way that their execution is scalable, even on the largestmachines. ADLB makes sophisticated use ofMPI functionality while providing an extremely simple API for the application programmer.We will describe it in the context of solving Sudoku puzzles and a nuclear physics Monte Carlo application currently running on tens of thousands of processors.
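The manager/worker pattern behind ADLB can be sketched with a toy shared work pool (this put/get interface is my illustration in the library's spirit, not ADLB's actual API, and it uses threads rather than MPI): workers put newly discovered work units and get pending ones until the pool drains.

```python
import queue
import threading

work = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    """Repeatedly take a work unit; either split it or record a unit result."""
    while True:
        try:
            n = work.get(timeout=0.2)
        except queue.Empty:
            return  # pool drained
        if n > 1:
            work.put(n // 2)        # "solve" by splitting into subproblems
            work.put(n - n // 2)
        else:
            with lock:
                results.append(n)
        work.task_done()

work.put(10)
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results))  # 10: every unit of work accounted for
```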

  16. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, which automatically ensures a divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributed the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
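For context, this is the O(N²) direct evaluation that the FMM accelerates (a minimal sketch with a singular kernel and my sign convention; real vortex methods use smoothed cutoff kernels): the induced velocity is u(xᵢ) = Σⱼ K(xᵢ − xⱼ) × αⱼ with the Biot-Savart kernel K(r) = −r / (4π|r|³).

```python
import math

def cross(a, b):
    """3-D cross product of two tuples."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def biot_savart(targets, sources, strengths):
    """Direct O(N*M) Biot-Savart summation over vortex particles."""
    out = []
    for x in targets:
        u = [0.0, 0.0, 0.0]
        for y, alpha in zip(sources, strengths):
            r = (x[0] - y[0], x[1] - y[1], x[2] - y[2])
            d = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
            if d == 0.0:
                continue  # skip the self-interaction singularity
            k = -1.0 / (4.0 * math.pi * d**3)
            c = cross(r, alpha)
            for i in range(3):
                u[i] += k * c[i]
        out.append(tuple(u))
    return out

# One vortex particle with strength along z induces an azimuthal velocity:
[u] = biot_savart([(1.0, 0.0, 0.0)], [(0.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)])
print(u)  # (0, 1/(4*pi), 0) with this sign convention
```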

  17. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564
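The memory-mapping idea translates directly to a small sketch (the binary file layout here is my assumption, not the paper's format): store the edge list as packed 32-bit integer pairs on disk and let the operating system page edges in on demand, so a graph larger than RAM can still be streamed over.

```python
import mmap
import os
import struct
import tempfile

# Write a tiny edge list as packed little-endian (u, v) uint32 pairs.
edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
path = os.path.join(tempfile.mkdtemp(), "edges.bin")
with open(path, "wb") as f:
    for u, v in edges:
        f.write(struct.pack("<II", u, v))

def out_degrees(path, num_nodes):
    """Single pass over a memory-mapped edge list; the OS pages data in lazily."""
    deg = [0] * num_nodes
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for off in range(0, len(mm), 8):
                u, _v = struct.unpack_from("<II", mm, off)
                deg[u] += 1
    return deg

print(out_degrees(path, 3))  # [2, 1, 1]
```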

  18. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Full Text Available Recently, positioning services have been getting more attention, not only within the research community but also from service providers. From the service providers' point of view, a positioning service that works seamlessly in all environments, for example, indoor, dense urban, and rural, has huge potential to open new markets. However, such a system not only needs to provide accurate position estimates but also has to be scalable and resistant to fake positioning requests. In previous works we have proposed a modular system able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on available radio signals and currently consists of three positioning modules: GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation, allowing higher scalability of the modular system so that positioning services can be provided to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users require position estimates, since positioning error is affected by the response time of the positioning server.
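The module-selection step can be sketched as follows (the priority order and module names are my assumptions about such a system, not the authors' published logic): choose the positioning module from the radio signals currently available, preferring the most accurate source.

```python
# Assumed preference order: GPS outdoors, Wi-Fi indoors, GSM as fallback.
PRIORITY = ["GPS", "Wi-Fi", "GSM"]

def select_module(available_signals):
    """Pick the highest-priority positioning module with a usable signal."""
    for module in PRIORITY:
        if module in available_signals:
            return module
    raise RuntimeError("no positioning source available")

print(select_module({"GSM", "Wi-Fi"}))  # Wi-Fi
print(select_module({"GSM"}))           # GSM
```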

  19. An Open Infrastructure for Scalable, Reconfigurable Analysis

    Energy Technology Data Exchange (ETDEWEB)

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in-situ, when available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed and generic components support scalable aggregation and analysis of this data with little additional effort.

  20. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours while the original serial code would have needed decades of execution time. Copyright 2011 ACM.
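The scaling figures quoted above fit together arithmetically: parallel efficiency relative to a P₀-core baseline is E = (T_{P₀}·P₀)/(T_P·P), so at 94% efficiency, 65,536 cores deliver a speedup over the 256-core run of 0.94 × (65536/256) = 240.64×.

```python
def relative_speedup(base_cores, cores, efficiency):
    """Speedup over the baseline run implied by a given parallel efficiency."""
    return efficiency * cores / base_cores

print(relative_speedup(256, 65536, 0.94))  # 240.64
```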

  1. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    Directory of Open Access Journals (Sweden)

    Antonio José Calderón

    2016-03-01

    Full Text Available In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts. The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level.

  2. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    Science.gov (United States)

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630

  3. Interferometric visibility and coherence

    Science.gov (United States)

    Biswas, Tanmoy; García Díaz, María; Winter, Andreas

    2017-07-01

    Recently, the basic concept of quantum coherence (or superposition) has gained a lot of renewed attention, after Baumgratz et al. (Phys. Rev. Lett. 113, 140401. (doi:10.1103/PhysRevLett.113.140401)), following Åberg (http://arxiv.org/abs/quant-ph/0612146), have proposed a resource theoretic approach to quantify it. This has resulted in a large number of papers and preprints exploring various coherence monotones, and debating possible forms for the resource theory. Here, we take the view that the operational foundation of coherence in a state, be it quantum or otherwise wave mechanical, lies in the observation of interference effects. Our approach here is to consider an idealized multi-path interferometer, with a suitable detector, in such a way that the visibility of the interference pattern provides a quantitative expression of the amount of coherence in a given probe state. We present a general framework of deriving coherence measures from visibility, and demonstrate it by analysing several concrete visibility parameters, recovering some known coherence measures and obtaining some new ones.
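The paper's premise, that visibility quantifies coherence, can be checked in a toy two-path interferometer (my construction, not the authors' framework): for a pure state c₀|path0⟩ + c₁|path1⟩ the detected intensity is I(φ) = |c₀ + c₁e^{iφ}|², and the fringe visibility V = (I_max − I_min)/(I_max + I_min) = 2|c₀||c₁| is exactly the off-diagonal (l1-norm) coherence of the state.

```python
import math

def visibility(c0, c1, steps=1000):
    """Fringe visibility of a two-path interferometer with real amplitudes."""
    intensities = []
    for k in range(steps):
        phi = 2 * math.pi * k / steps
        re = c0 + c1 * math.cos(phi)
        im = c1 * math.sin(phi)
        intensities.append(re * re + im * im)
    i_max, i_min = max(intensities), min(intensities)
    return (i_max - i_min) / (i_max + i_min)

c0 = c1 = 1 / math.sqrt(2)                                  # equal superposition
print(round(visibility(c0, c1), 3))                         # 1.0: full coherence
print(round(visibility(math.sqrt(0.9), math.sqrt(0.1)), 3)) # 0.6: partial coherence
```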

  4. A universal quantum information processor for scalable quantum communication and networks.

    Science.gov (United States)

    Yang, Xihua; Xue, Bolin; Zhang, Junxiang; Zhu, Shiyao

    2014-10-15

    Entanglement provides an essential resource for quantum computation, quantum communication, and quantum networks. How to conveniently and efficiently realize the generation, distribution, storage, retrieval, and control of multipartite entanglement is the basic requirement for realistic quantum information processing. Here, we present a theoretical proposal to efficiently and conveniently achieve a universal quantum information processor (QIP) via atomic coherence in an atomic ensemble. The atomic coherence, produced through electromagnetically induced transparency (EIT) in the Λ-type configuration, acts as the QIP and has full functions of quantum beam splitter, quantum frequency converter, quantum entangler, and quantum repeater. By employing EIT-based nondegenerate four-wave mixing processes, the generation, exchange, distribution, and manipulation of light-light, atom-light, and atom-atom multipartite entanglement can be efficiently and flexibly achieved in a deterministic way with only coherent light fields. This method greatly facilitates the operations in quantum information processing, and holds promising applications in realistic scalable quantum communication and quantum networks.

  5. Efficient quantum computing using coherent photon conversion.

    Science.gov (United States)

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting

  6. Stimulated coherent transition radiation

    Energy Technology Data Exchange (ETDEWEB)

    Hung-chi Lihn

    1996-03-01

    Coherent radiation emitted from a relativistic electron bunch consists of wavelengths longer than or comparable to the bunch length. The intensity of this radiation outnumbers that of its incoherent counterpart, which extends to wavelengths shorter than the bunch length, by a factor equal to the number of electrons in the bunch. In typical accelerators, this factor is about 8 to 11 orders of magnitude. The spectrum of the coherent radiation is determined by the Fourier transform of the electron bunch distribution and, therefore, contains information about the bunch distribution. Coherent transition radiation emitted from subpicosecond electron bunches at the Stanford SUNSHINE facility is observed in the far-infrared regime through a room-temperature pyroelectric bolometer and characterized through electron bunch-length studies. To measure the bunch length, a new frequency-resolved subpicosecond bunch-length measuring system is developed. This system uses a far-infrared Michelson interferometer to measure the spectrum of coherent transition radiation through optical autocorrelation with resolution far better than existing time-resolved methods. Hence, the radiation spectrum and the bunch length are deduced from the autocorrelation measurement. To study the stimulation of coherent transition radiation, a special cavity named BRAICER is invented. Far-infrared light pulses of coherent transition radiation emitted from electron bunches are delayed and circulated in the cavity to coincide with subsequent incoming electron bunches. This coincidence of light pulses with electron bunches enables the light to do work on electrons, and thus stimulates more radiated energy. The possibilities of extending the bunch-length measuring system to measure the three-dimensional bunch distribution and making the BRAICER cavity a broadband, high-intensity, coherent, far-infrared light source are also discussed.
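The relation behind this diagnostic is the standard textbook one (not the thesis' analysis code; bunch parameters below are assumed): for N electrons with longitudinal profile S(t), the radiated intensity is I(ω) = I₁(ω)·[N + N(N−1)|f(ω)|²], where f is the Fourier transform of S. For a Gaussian bunch of rms length σ_t, |f(ω)| = exp(−(ωσ_t)²/2), so wavelengths longer than the bunch radiate coherently, enhanced by a factor of roughly N over the incoherent part.

```python
import math

def coherent_enhancement(n_electrons, wavelength_m, sigma_t_s, c=3.0e8):
    """Ratio of coherent to incoherent intensity for a Gaussian bunch."""
    w = 2 * math.pi * c / wavelength_m
    form_factor = math.exp(-((w * sigma_t_s) ** 2) / 2)
    incoherent = n_electrons
    coherent = n_electrons * (n_electrons - 1) * form_factor**2
    return coherent / incoherent

N = 1e9            # electrons per bunch (assumed)
sigma_t = 0.3e-12  # 0.3 ps rms bunch length (assumed)
print(coherent_enhancement(N, 1.0e-3, sigma_t) > 1e6)  # far-IR: coherent dominates
print(coherent_enhancement(N, 1.0e-6, sigma_t) < 1.0)  # optical: incoherent only
```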

  7. Network selection, Information filtering and Scalable computation

    Science.gov (United States)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a ``decomposition and combination'' strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.

  8. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  9. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need of redesigning the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show scalability giving a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with the previous parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
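The main scaling knob, varying the number of computed DCT coefficients, can be illustrated with a toy 1-D transform (my example, not the paper's encoder): computing only the first k coefficients of a block trades reconstruction quality for complexity, and smooth blocks survive heavy truncation.

```python
import math

def dct(block, k):
    """First k coefficients of an orthonormal DCT-II of a 1-D block."""
    n = len(block)
    out = []
    for u in range(k):
        s = sum(block[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                for x in range(n))
        scale = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct(coeffs, n):
    """Inverse transform from a (possibly truncated) coefficient list."""
    out = []
    for x in range(n):
        s = 0.0
        for u, c in enumerate(coeffs):
            scale = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            s += scale * c * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
        out.append(s)
    return out

block = [10, 11, 12, 13, 14, 15, 16, 17]  # a smooth 8-sample block
for k in (2, 8):
    err = max(abs(a - b) for a, b in zip(block, idct(dct(block, k), 8)))
    print(k, round(err, 4))  # error shrinks to ~0 as k approaches 8
```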

  10. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward, DNF. We call it Scalable DNF, S-DNF, and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay...
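The packet-combining idea at the relay can be illustrated with plain XOR network coding (a simplified stand-in for the paper's DeNoise-and-Forward scheme, which combines signals at the physical layer): the relay broadcasts A⊕B once, and each end node recovers the other's packet using the copy it already holds.

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

packet_a = b"hello"  # sent by node A
packet_b = b"world"  # sent by node B
relayed = xor_bytes(packet_a, packet_b)  # one combined broadcast from the relay

print(xor_bytes(relayed, packet_a))  # b'world'  (node A recovers B's packet)
print(xor_bytes(relayed, packet_b))  # b'hello'  (node B recovers A's packet)
```

One broadcast serves both directions, which is the bandwidth saving that makes such relaying schemes attractive.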

  11. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  12. SAR image effects on coherence and coherence estimation.

    Energy Technology Data Exchange (ETDEWEB)

    Bickel, Douglas Lloyd

    2014-01-01

    Radar coherence is an important concept for imaging radar systems such as synthetic aperture radar (SAR). This document quantifies some of the effects in SAR which modify the coherence. Although these effects can disrupt the coherence within a single SAR image, this report will focus on the coherence between separate images, such as for coherent change detection (CCD) processing. There have been other presentations on aspects of this material in the past. The intent of this report is to bring various issues that affect the coherence together in a single report to support radar engineers in making decisions about these matters.
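The quantity at stake is the standard sample coherence estimator used in CCD processing (textbook definition, not code from the report): for co-registered complex images s₁, s₂, γ = |Σ s₁s₂*| / √(Σ|s₁|² · Σ|s₂|²), which lies in [0, 1].

```python
import random

def coherence(s1, s2):
    """Sample coherence magnitude between two complex image patches."""
    num = abs(sum(a * b.conjugate() for a, b in zip(s1, s2)))
    den = (sum(abs(a)**2 for a in s1) * sum(abs(b)**2 for b in s2)) ** 0.5
    return num / den

random.seed(1)
s1 = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]
noise = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]

print(coherence(s1, s1))                # 1.0: identical (unchanged) scenes
print(round(coherence(s1, noise), 2))   # near 0: fully decorrelated scenes
```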

  13. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
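    Contribution (c) rests on the near-optimality of greedy selection for submodular objectives. The following coverage-style sketch illustrates that greedy mechanism only; BASSET's actual proximity-based objective is defined precisely in the paper.

```python
def greedy_gateways(paths, k):
    """Greedy submodular maximization: pick up to k nodes that together
    intersect (cover) as many source-target paths as possible.
    `paths` is a list of node sets, one per candidate path.
    The greedy rule achieves a (1 - 1/e) approximation guarantee."""
    chosen, covered = [], set()
    candidates = set().union(*paths)
    for _ in range(k):
        best, best_gain = None, 0
        for node in candidates - set(chosen):
            gain = sum(1 for i, p in enumerate(paths)
                       if i not in covered and node in p)
            if gain > best_gain:
                best, best_gain = node, gain
        if best is None:          # no remaining node adds coverage
            break
        chosen.append(best)
        covered |= {i for i, p in enumerate(paths) if best in p}
    return chosen

# Toy example: three paths toward the target; node "b" lies on two of them.
paths = [{"a", "b"}, {"b", "c"}, {"d"}]
print(greedy_gateways(paths, 2))  # ['b', 'd']
```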

  14. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer, and how strategic partners are leveraged in this value creation, delivery, and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer, and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses...

  15. Towards scalable Byzantine fault-tolerant replication

    Science.gov (United States)

    Zbierski, Maciej

    2017-08-01

    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.

  16. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing requires redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
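    The atomic selection and aggregation operators can be sketched on a plain node/edge representation. This is an illustrative reading of the algebra, not the authors' implementation; the node attributes and grouping key are invented for the example.

```python
def select(nodes, edges, pred):
    """Selection operator: keep nodes whose attributes satisfy pred,
    together with the edges they induce."""
    kept = {n: a for n, a in nodes.items() if pred(a)}
    return kept, [(u, v) for u, v in edges if u in kept and v in kept]

def aggregate(nodes, edges, key):
    """Aggregation operator: merge nodes that share key(attrs) into
    supernodes, collapsing parallel edges between groups."""
    group = {n: key(a) for n, a in nodes.items()}
    super_nodes = {g: {"size": sum(1 for v in group.values() if v == g)}
                   for g in sorted(set(group.values()))}
    super_edges = {(group[u], group[v]) for u, v in edges if group[u] != group[v]}
    return super_nodes, sorted(super_edges)

# Hypothetical attributed graph: four people in two departments.
nodes = {1: {"dept": "A"}, 2: {"dept": "A"}, 3: {"dept": "B"}, 4: {"dept": "B"}}
edges = [(1, 2), (2, 3), (3, 4)]
print(aggregate(nodes, edges, key=lambda a: a["dept"]))
# ({'A': {'size': 2}, 'B': {'size': 2}}, [('A', 'B')])
```

Composing the two operators (select, then aggregate) is exactly the kind of documented, replayable analysis pipeline the algebra is meant to support.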

  17. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    foreground layers is merited. (2) The typical map-making professional has changed from a GIS specialist to a busy person with map making as a secondary skill. Today, thematic maps are produced by journalists, aid workers, amateur data enthusiasts, and scientists alike. Therefore it is crucial that this diverse group of map makers is provided with easy-to-use and expressive thematic map design tools. Such tools should support customized selection of data for maps in scenarios where developer time is a scarce resource. (3) The Web provides access to massive data repositories for thematic maps... based on an access log of recent requests. The results show that Glossy SQL and CVL can be used to compute cartographic selection by processing one or more complex queries in a relational database. The scalability of the approach has been verified up to half a million objects in the database. Furthermore...

  18. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Science.gov (United States)

    Tizon, Nicolas; Pesquet-Popescu, Béatrice

    2008-12-01

    This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media-aware network element. The transport channel considered is a dedicated channel subject to long-run variations in parameters (bitrate, loss rate). Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods, and they demonstrate how ROI coding combined with SNR scalability further improves visual quality.

  19. Designing Interfaces

    CERN Document Server

    Tidwell, Jenifer

    2010-01-01

    Despite all of the UI toolkits available today, it's still not easy to design good application interfaces. This bestselling book is one of the few reliable sources to help you navigate through the maze of design options. By capturing UI best practices and reusable ideas as design patterns, Designing Interfaces provides solutions to common design problems that you can tailor to the situation at hand. This updated edition includes patterns for mobile apps and social media, as well as web applications and desktop software. Each pattern contains full-color examples and practical design advice.

  20. Scalable collision detection using p-partition fronts on many-core processors.

    Science.gov (United States)

    Zhang, Xinyu; Kim, Young J

    2014-03-01

    We present a new parallel algorithm for collision detection using many-core computing platforms of CPUs or GPUs. Based on the notion of a p-partition front, our algorithm is able to evenly partition and distribute the workload of BVH traversal among multiple processing cores without the need for dynamic balancing, while minimizing the memory overhead inherent to the state-of-the-art parallel collision detection algorithms. We demonstrate the scalability of our algorithm on different benchmarking scenarios with and without using temporal coherence, including dynamic simulation of rigid bodies, cloth simulation, and random collision courses. In these experiments, we observe nearly linear performance improvement in terms of the number of processing cores on the CPUs and GPUs.

  1. Maintaining Web Cache Coherency

    Directory of Open Access Journals (Sweden)

    2000-01-01

    Full Text Available Document coherency is a challenging problem for Web caching. Once documents are cached throughout the Internet, it is often difficult to keep them coherent with the origin document without generating new traffic that could load the international backbone and overload popular servers. Several solutions have been proposed to solve this problem; among them, two categories have been widely discussed: strong document coherency and weak document coherency. The cost and the efficiency of the two categories are still a controversial issue: while some studies find strong coherency far too expensive to be used in the Web context, others find it can be maintained at a low cost. The accuracy of these analyses depends largely on how the document updating process is approximated. In this study, we compare some of the coherence methods proposed for Web caching. Among other points, we study the side effects of these methods on Internet traffic. The ultimate goal is to study cache behavior under several conditions, covering some of the factors that play an important role in Web cache performance evaluation, and to quantify their impact on simulation accuracy. The results presented in this study show differences in the outcome of simulating a Web cache depending on the workload being used and on the probability distribution used to approximate updates to the cached documents. Each experiment shows two case studies that outline the impact of the considered parameter on the performance of the cache.
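    The trade-off between the two categories can be illustrated with a toy simulation: strong coherency validates every request with the origin (no staleness, maximum traffic), while weak TTL-based coherency reduces origin contacts at the cost of possibly stale responses. This is a schematic model only; real policies (e.g. HTTP validators) add further subtleties.

```python
def simulate(update_times, request_times, ttl=None):
    """Returns (origin_contacts, stale_responses) for one cached document.
    ttl=None models strong coherency: every request revalidates with the
    origin, so nothing stale is ever served. A numeric ttl models weak
    coherency: cached copies are served blindly until they expire."""
    origin, stale = 0, 0
    fetched_at, version = None, -1

    def current_version(t):
        # how many origin updates have happened by time t
        return sum(1 for u in update_times if u <= t)

    for t in request_times:
        fresh = fetched_at is not None and ttl is not None and t - fetched_at < ttl
        if fresh:
            if version != current_version(t):
                stale += 1          # served from cache after the origin changed
        else:
            origin += 1             # fetch/validate with the origin server
            fetched_at, version = t, current_version(t)
    return origin, stale

# One origin update at t=5, requests at t=0..9.
updates, requests = [5], list(range(10))
print(simulate(updates, requests, ttl=None))  # strong coherency: (10, 0)
print(simulate(updates, requests, ttl=4))     # weak coherency:   (3, 3)
```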

  2. Coherent light microscopy

    CERN Document Server

    Ferraro, Pietro; Zalevsky, Zeev

    2011-01-01

    This book deals with the latest achievements in the field of optical coherent microscopy. While many other books exist on microscopy and imaging, this book provides a unique resource dedicated solely to this subject. Similarly, many books describe applications of holography, interferometry and speckle to metrology but do not focus on their use for microscopy. The coherent light microscopy reference provided here does not focus on the experimental mechanics of such techniques but instead is meant to provide a users manual to illustrate the strengths and capabilities of developing techniques.

  3. Qubit lattice coherence induced by electromagnetic pulses in superconducting metamaterials

    Science.gov (United States)

    Ivić, Z.; Lazarides, N.; Tsironis, G. P.

    2016-01-01

    Quantum bits (qubits) are at the heart of quantum information processing schemes. Currently, solid-state qubits, and in particular the superconducting ones, seem to satisfy the requirements for being the building blocks of viable quantum computers, since they exhibit relatively long coherence times, extremely low dissipation, and scalability. The possibility of achieving quantum coherence in macroscopic circuits comprising Josephson junctions, envisioned by Leggett in the 1980s, was demonstrated for the first time in a charge qubit; since then, the exploitation of macroscopic quantum effects in low-capacitance Josephson junction circuits has allowed for the realization of several kinds of superconducting qubits. Furthermore, coupling between qubits has been successfully achieved and was followed by the construction of multiple-qubit logic gates and the implementation of several algorithms. Here it is demonstrated that induced qubit lattice coherence as well as two remarkable quantum coherent optical phenomena, i.e., self-induced transparency and Dicke-type superradiance, may occur during light-pulse propagation in quantum metamaterials comprising superconducting charge qubits. The generated qubit lattice pulse forms a compound “quantum breather” that propagates in synchrony with the electromagnetic pulse. The experimental confirmation of such effects in superconducting quantum metamaterials may open a new pathway to potentially powerful quantum computing. PMID:27403780

  4. Testing Interfaces

    DEFF Research Database (Denmark)

    Holbøll, Joachim T.; Henriksen, Mogens; Nilson, Jesper K.

    1999-01-01

    The wide use of combinations of solid insulating materials has introduced problems at the interfaces between components. The most common insulating materials are cross-linked polyethylene (XLPE), silicone rubber (SIR) and ethylene-propylene rubbers (EPR). Assemblies of these materials...

  5. A scalable and continuous-upgradable optical wireless and wired convergent access network.

    Science.gov (United States)

    Sung, J Y; Cheng, K T; Chow, C W; Yeh, C H; Pan, C-L

    2014-06-02

    In this work, a scalable and continuously upgradable convergent optical access network is proposed. By using a multi-wavelength coherent comb source and a programmable waveshaper at the central office (CO), optical millimeter-wave (mm-wave) signals of different frequencies (from baseband to > 100 GHz) can be generated. Hence, it provides a scalable and continuously upgradable solution for end-users who need 60 GHz wireless services now and > 100 GHz wireless services in the future. During an upgrade, users only need to upgrade their optical networking unit (ONU). A programmable waveshaper is used to select the suitable optical tones with a wavelength separation equal to the desired mm-wave frequency, while the CO remains intact. The centralized characteristics of the proposed system make it easy to add any new service and end-user, and the centralized control of the wavelengths makes the system more stable. A wired data rate of 17.45 Gb/s and a W-band wireless data rate of up to 3.36 Gb/s were demonstrated after transmission over 40 km of single-mode fiber (SMF).
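    The waveshaper-based upgrade path reduces to picking two comb lines whose frequency separation equals the desired mm-wave carrier; moving a user from a 60 GHz to a >100 GHz service changes only which lines are passed. The sketch below assumes a hypothetical 20 GHz comb spacing, not the demonstrated system's parameters.

```python
def pick_tones(comb_spacing_ghz, n_tones, target_mmw_ghz):
    """Return the indices of two comb lines whose separation equals the
    desired mm-wave carrier; heterodyning them at the receiver's
    photodetector generates that carrier."""
    if target_mmw_ghz % comb_spacing_ghz:
        raise ValueError("target must be a multiple of the comb spacing")
    k = int(target_mmw_ghz // comb_spacing_ghz)
    if k >= n_tones:
        raise ValueError("comb has too few lines for this separation")
    return 0, k  # tones 0 and k beat to k * spacing

# With a 20 GHz comb, tones 0 and 3 beat to 60 GHz; upgrading the same
# comb to a 120 GHz service only changes the waveshaper selection.
print(pick_tones(20, 8, 60))   # (0, 3)
print(pick_tones(20, 8, 120))  # (0, 6)
```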

  6. Consistency in use through model based user interface development

    OpenAIRE

    Trapp, M.; Schmettow, M.

    2006-01-01

    In dynamic environments envisioned under the concept of Ambient Intelligence, the consistency of user interfaces is of particular importance. To address this, the variability of the environment has to be transformed into a coherent user experience. In this paper we explain several dimensions of consistency and present our ideas and recent results on achieving adaptive and consistent user interfaces by exploiting the technology of model-driven user interface development.

  7. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    Science.gov (United States)

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for
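    The parametric helix that both codes implement can be sketched directly. This is a minimal stand-in for the AWK/C++ tools: in the actual workflow, bulk-glass atoms lying within a cutoff distance of such a curve would be retained to carve out the nanospring.

```python
import math

def helix_points(radius, pitch, turns, points_per_turn=36):
    """Sample (x, y, z) coordinates along the parametric helix
    x = r cos t, y = r sin t, z = (pitch / 2*pi) * t."""
    pts = []
    n = int(turns * points_per_turn)
    for i in range(n + 1):
        t = 2 * math.pi * i / points_per_turn
        pts.append((radius * math.cos(t),
                    radius * math.sin(t),
                    pitch * t / (2 * math.pi)))
    return pts

# A hypothetical nanospring model: 2 nm radius, 4 nm pitch, 3 full turns.
pts = helix_points(radius=2.0, pitch=4.0, turns=3)
print(len(pts), round(pts[-1][2], 2))  # 109 12.0  (3 turns * 4 nm pitch)
```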

  8. NASA's Earth Observing Data and Information System - Supporting Interoperability through a Scalable Architecture (Invited)

    Science.gov (United States)

    Mitchell, A. E.; Lowe, D. R.; Murphy, K. J.; Ramapriyan, H. K.

    2013-12-01

    Initiated in 1990, NASA's Earth Observing System Data and Information System (EOSDIS) is currently a petabyte-scale archive of data designed to receive, process, distribute and archive several terabytes of science data per day from NASA's Earth science missions. Comprised of 12 discipline specific data centers collocated with centers of science discipline expertise, EOSDIS manages over 6800 data products from many science disciplines and sources. NASA supports global climate change research by providing scalable open application layers to the EOSDIS distributed information framework. This allows many other value-added services to access NASA's vast Earth Science Collection and allows EOSDIS to interoperate with data archives from other domestic and international organizations. EOSDIS is committed to NASA's Data Policy of full and open sharing of Earth science data. As metadata is used in all aspects of NASA's Earth science data lifecycle, EOSDIS provides a spatial and temporal metadata registry and order broker called the EOS Clearing House (ECHO) that allows efficient search and access of cross domain data and services through the Reverb Client and Application Programmer Interfaces (APIs). Another core metadata component of EOSDIS is NASA's Global Change Master Directory (GCMD) which represents more than 25,000 Earth science data set and service descriptions from all over the world, covering subject areas within the Earth and environmental sciences. With inputs from the ECHO, GCMD and Soil Moisture Active Passive (SMAP) mission metadata models, EOSDIS is developing a NASA ISO 19115 Best Practices Convention. Adoption of an international metadata standard enables a far greater level of interoperability among national and international data products. 
NASA recently concluded a 'Metadata Harmony Study' of EOSDIS metadata capabilities/processes of ECHO and NASA's Global Change Master Directory (GCMD), to evaluate opportunities for improved data access and use, reduce

  9. Dental Optical Coherence Tomography

    Directory of Open Access Journals (Sweden)

    Kun-Feng Lin

    2013-07-01

    Full Text Available This review paper describes the applications of dental optical coherence tomography (OCT in oral tissue images, caries, periodontal disease and oral cancer. The background of OCT, including basic theory, system setup, light sources, spatial resolution and system limitations, is provided. The comparisons between OCT and other clinical oral diagnostic methods are also discussed.

  10. Optical Coherence Tomography

    DEFF Research Database (Denmark)

    Andersen, Peter E.

    2015-01-01

    Optical coherence tomography (OCT) is a noninvasive imaging technique that provides real-time two- and three-dimensional images of scattering samples with micrometer resolution. Mapping the local reflectivity, OCT visualizes the morphology of the sample, in real time or at video rate. In addition...

  11. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  12. Coherent x-ray optics

    CERN Document Server

    Paganin, David M

    2006-01-01

    'Coherent X-Ray Optics' gives a thorough treatment of the rapidly expanding field of coherent x-ray optics, which has recently experienced something of a renaissance with the availability of third-generation synchrotron sources.

  13. A generic interface to reduce the efficiency-stability-cost gap of perovskite solar cells

    Science.gov (United States)

    Hou, Yi; Du, Xiaoyan; Scheiner, Simon; McMeekin, David P.; Wang, Zhiping; Li, Ning; Killian, Manuela S.; Chen, Haiwei; Richter, Moses; Levchuk, Ievgen; Schrenker, Nadine; Spiecker, Erdmann; Stubhan, Tobias; Luechinger, Norman A.; Hirsch, Andreas; Schmuki, Patrik; Steinrück, Hans-Peter; Fink, Rainer H.; Halik, Marcus; Snaith, Henry J.; Brabec, Christoph J.

    2017-12-01

    A major bottleneck delaying the further commercialization of thin-film solar cells based on hybrid organohalide lead perovskites is interface loss in state-of-the-art devices. We present a generic interface architecture that combines solution-processed, reliable, and cost-efficient hole-transporting materials without compromising efficiency, stability, or scalability of perovskite solar cells. Tantalum-doped tungsten oxide (Ta-WOx)/conjugated polymer multilayers offer a surprisingly small interface barrier and form quasi-ohmic contacts universally with various scalable conjugated polymers. In a simple device with regular planar architecture and a self-assembled monolayer, Ta-WOx–doped interface–based perovskite solar cells achieve maximum efficiencies of 21.2% and offer more than 1000 hours of light stability. By eliminating additional ionic dopants, these findings open up the entire class of organics as scalable hole-transporting materials for perovskite solar cells.

  14. Coherent states in quantum mechanics

    CERN Document Server

    Rodrigues, R D L; Fernandes, D

    2001-01-01

    We present a review of coherent states in non-relativistic quantum mechanics, analysing the quantum oscillators in the coherent states. The coherent states obtained via a displacement operator that acts on the ground-state wave function of the oscillator, and the connection with Quantum Optics implemented by Glauber, have also been considered. A possible generalization to the construction of new coherent states is pointed out.

  15. Interface learning

    DEFF Research Database (Denmark)

    Thorhauge, Sally

    2014-01-01

    "Interface learning - New goals for museum and upper secondary school collaboration" investigates and analyzes the learning that takes place when museums and upper secondary schools in Denmark work together in local partnerships to develop and carry out school-related, museum-based coursework for students. The research focuses on the learning that the students experience in the interface of the two learning environments: the formal learning environment of the upper secondary school and the informal learning environment of the museum. Focus is also on the learning that the teachers and museum professionals experience as a result of their collaboration. The dissertation demonstrates how a given partnership’s collaboration affects the students’ learning experiences when they are doing the coursework. The dissertation presents findings that museum-school partnerships can use in order to develop...

  16. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  17. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence followed by compression using EBCOT generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.

  18. Scalable quantum information processing with photons and atoms

    Science.gov (United States)

    Pan, Jian-Wei

    Over the past three decades, the promises of super-fast quantum computing and secure quantum cryptography have spurred a world-wide interest in quantum information, generating fascinating quantum technologies for coherent manipulation of individual quantum systems. However, the distance of fiber-based quantum communications is limited due to intrinsic fiber loss and degradation of entanglement quality. Moreover, probabilistic single-photon and entanglement sources demand exponentially increased overheads for scalable quantum information processing. To overcome these problems, we are taking two paths in parallel: quantum repeaters and satellites. We used the decoy-state QKD protocol to close the loophole of imperfect photon sources, and used the measurement-device-independent QKD protocol to close the loophole of imperfect photon detectors--the two main loopholes in quantum cryptography. Based on these techniques, we are now building the world's biggest quantum secure communication backbone, from Beijing to Shanghai, with a distance exceeding 2000 km. Meanwhile, we are developing practically useful quantum repeaters that combine entanglement swapping, entanglement purification, and quantum memory for ultra-long-distance quantum communication. The second line is satellite-based global quantum communication, taking advantage of the negligible photon loss and decoherence in the atmosphere. We realized teleportation and entanglement distribution over 100 km, and later on a rapidly moving platform. We are also making efforts toward the generation of multiphoton entanglement and its use in teleportation of multiple properties of a single quantum particle, topological error correction, quantum algorithms for solving systems of linear equations, and machine learning. Finally, I will talk about our recent experiments on quantum simulations with ultracold atoms. On the one hand, by applying an optical Raman lattice technique, we realized a two-dimensional spin-orbit (SO

  19. Scalable, remote administration of Windows NT.

    Energy Technology Data Exchange (ETDEWEB)

    Gomberg, M.; Stacey, C.; Sayre, J.

    1999-06-08

    In the UNIX community there is an overwhelming perception that NT is impossible to manage remotely and that NT administration doesn't scale. This was essentially true with earlier versions of the operating system. Even today, out of the box, NT is difficult to manage remotely. Many tools, however, now make remote management of NT not only possible, but under some circumstances very easy. In this paper we discuss how we at Argonne's Mathematics and Computer Science Division manage all our NT machines remotely from a single console, with minimum locally installed software overhead. We also present NetReg, which is a locally developed tool for scalable registry management. NetReg allows us to apply a registry change to a specified set of machines. It is a command line utility that can be run in either interactive or batch mode and is written in Perl for Win32, taking heavy advantage of the Win32::TieRegistry module.

  20. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.

    2015-02-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as Alter, or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and it is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.
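    A minimal example of a CIV, and of the kind of summary that recovers parallelism, can be sketched as follows. This is an illustration of the general idea, not the paper's compiler representation: the write index below advances only when a branch is taken, so it is not an affine function of the loop index, yet an exclusive prefix sum of the branch condition reproduces its value on every iteration, which is the basis for a parallel (scan-based) execution.

```python
def pack_positives(a):
    """civ is a conditional induction variable: it advances only on
    iterations where the branch is taken, so the subscript out[civ]
    cannot be expressed as an affine formula in the loop index i."""
    out = [0] * len(a)
    civ = 0
    for i in range(len(a)):
        if a[i] > 0:
            out[civ] = a[i]
            civ += 1
    return out[:civ]

def civ_offsets(a):
    """Exclusive prefix sum of the branch condition: iteration i writes
    slot offsets[i] whenever a[i] > 0, which is exactly civ's value
    there. Since the values are strictly increasing across taken
    branches, the writes never collide."""
    offsets, total = [], 0
    for x in a:
        offsets.append(total)
        total += 1 if x > 0 else 0
    return offsets

data = [3, -1, 4, -1, 5]
print(pack_positives(data))  # [3, 4, 5]
print(civ_offsets(data))     # [0, 1, 1, 2, 2]
```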

  1. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas

    2016-01-01

    Full Text Available A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged into 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1, of size 103.5×103.5 mm2, operates in the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm2 is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WiMAX, and LTE devices with port upgradability.

  2. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
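
    To make the likelihood-based recovery concrete, the following sketch (illustrative, stdlib-only Python, not the paper's linear-time parallel algorithm) scores a candidate block labeling under a Bernoulli stochastic block model, with the block-edge probabilities set to their maximum-likelihood estimates:

```python
import math
from itertools import combinations

def sbm_log_likelihood(nodes, edges, labels):
    """Bernoulli SBM log-likelihood of a labeling, with block-edge
    probabilities at their ML estimates:
    p_rs = (observed edges between blocks r,s) / (possible pairs)."""
    edge_set = {frozenset(e) for e in edges}
    pairs = {}   # (r, s) -> (possible pairs, observed edges)
    for u, v in combinations(nodes, 2):
        key = tuple(sorted((labels[u], labels[v])))
        poss, obs = pairs.get(key, (0, 0))
        pairs[key] = (poss + 1, obs + (frozenset((u, v)) in edge_set))
    ll = 0.0
    for poss, obs in pairs.values():
        p = obs / poss
        if 0 < p < 1:
            ll += obs * math.log(p) + (poss - obs) * math.log(1 - p)
        # p == 0 or p == 1 contributes exactly 0 to the log-likelihood
    return ll
```

A maximum-likelihood approach searches for the labeling maximizing this score; the paper's contribution is doing that search in time linear in the number of edges, in parallel.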

  3. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    Rijshouwer EJC

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs of more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as a part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.
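
    Why bank conflicts matter for multi-bank address generation can be shown with a toy model. The sketch below assumes a simple mapping (bank = address mod 8) and a plain row-column block interleaver; the real machine's address generators and conflict resolver are considerably more sophisticated.

```python
def block_interleaver_addresses(rows, cols):
    """Read addresses of a rows x cols block interleaver: data written
    row-by-row is read column-by-column."""
    return [r * cols + c for c in range(cols) for r in range(rows)]

def bank_conflicts(addresses, lanes=8, banks=8):
    """Group addresses into 'lanes' issued per clock cycle and count
    cycles in which two lanes target the same single-port bank
    (bank = address mod banks), which would force a stall."""
    conflicts = 0
    for i in range(0, len(addresses) - lanes + 1, lanes):
        cycle = [a % banks for a in addresses[i:i + lanes]]
        if len(set(cycle)) < len(cycle):
            conflicts += 1
    return conflicts
```

With a column count coprime to the number of banks, the 8 addresses of each cycle land in 8 distinct banks; with cols equal to the bank count, every cycle collides, which is exactly the situation a hardware conflict resolver must absorb.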

  4. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as the multi-streaming and multi-homing capabilities, that permit robust and efficient transport of the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  5. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complex systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene × environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  6. Scalability and interoperability within glideinWMS

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, D.; /Wisconsin U., Madison; Sfiligoi, I.; /Fermilab; Padhi, S.; /UC, San Diego; Frey, J.; /Wisconsin U., Madison; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  7. Gabor domain optical coherence microscopy

    Science.gov (United States)

    Murali, Supraja

    Time domain Optical Coherence Tomography (TD-OCT), first reported in 1991, makes use of the low temporal coherence properties of a NIR broadband laser to create depth sectioning of up to 2 mm under the surface using optical interferometry and point-to-point scanning. Prior and ongoing work in OCT in the research community has concentrated on improving axial resolution through the development of broadband sources and speed of image acquisition through new techniques such as Spectral domain OCT (SD-OCT). In SD-OCT, an entire depth scan is acquired at once with a low numerical aperture (NA) objective lens focused at a fixed point within the sample. In this imaging geometry, a longer depth of focus is achieved at the expense of lateral resolution, which is typically limited to 10 to 20 μm. Optical Coherence Microscopy (OCM), introduced in 1994, combined the advantages of high axial resolution obtained in OCT with high lateral resolution obtained by increasing the NA of the microscope placed in the sample arm. However, OCM presented trade-offs caused by the inverse quadratic relationship between the NA and the DOF of the optics used. For applications requiring high lateral resolution, such as cancer diagnostics, several solutions have been proposed, including the periodic manual re-focusing of the objective lens in the time domain as well as the spectral domain C-mode configuration, in order to overcome the loss in lateral resolution outside the DOF. In this research, we report, for the first time, high speed, sub-cellular imaging (lateral resolution of 2 μm) in OCM using a Gabor domain image processing algorithm with a custom designed and fabricated dynamic focus microscope interfaced to a Ti:Sa femtosecond laser centered at 800 nm within an SD-OCM configuration. It is envisioned that this technology will provide a non-invasive replacement for the current practice of multiple biopsies for skin cancer diagnosis. The research reported here presents three important advances

  8. Optical Coherence and Quantum Optics

    CERN Document Server

    Mandel, Leonard

    1995-01-01

    This book presents a systematic account of optical coherence theory within the framework of classical optics, as applied to such topics as radiation from sources of different states of coherence, foundations of radiometry, effects of source coherence on the spectra of radiated fields, coherence theory of laser modes, and scattering of partially coherent light by random media. The book starts with a full mathematical introduction to the subject area and each chapter concludes with a set of exercises. The authors are renowned scientists and have made substantial contributions to many of the topics.

  9. ARC Code TI: Block-GP: Scalable Gaussian Process Regression

    Data.gov (United States)

    National Aeronautics and Space Administration — Block GP is a Gaussian Process regression framework for multimodal data, that can be an order of magnitude more scalable than existing state-of-the-art nonlinear...

  10. Scalable pattern recognition algorithms applications in computational biology and bioinformatics

    CERN Document Server

    Maji, Pradipta

    2014-01-01

    Reviews the development of scalable pattern recognition algorithms for computational biology and bioinformatics Includes numerous examples and experimental results to support the theoretical concepts described Concludes each chapter with directions for future research and a comprehensive bibliography

  11. Scalability of telecom cloud architectures for live-TV distribution

    OpenAIRE

    Asensio Carmona, Adrian; Contreras, Luis Miguel; Ruiz Ramírez, Marc; López Álvarez, Victor; Velasco Esteban, Luis Domingo

    2015-01-01

    A hierarchical distributed telecom cloud architecture for live-TV distribution exploiting flexgrid networking and SBVTs is proposed. Its scalability is compared to that of a centralized architecture. Cost savings as high as 32% are shown. Peer Reviewed

  12. Evaluating the Scalability of Enterprise JavaBeans Technology

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yan (Jenny); Gorton, Ian; Liu, Anna; Chen, Shiping; Paul A Strooper; Pornsiri Muenchaisri

    2002-12-04

    One of the major problems in building large-scale distributed systems is to anticipate the performance of the eventual solution before it has been built. This problem is especially germane to Internet-based e-business applications, where failure to provide high performance and scalability can lead to application and business failure. The fundamental software engineering problem is compounded by many factors, including individual application diversity, software architecture trade-offs, COTS component integration requirements, and differences in performance of various software and hardware infrastructures. In this paper, we describe the results of an empirical investigation into the scalability of a widely used distributed component technology, Enterprise JavaBeans (EJB). A benchmark application is developed and tested to measure the performance of a system as both the client load and component infrastructure are scaled up. A scalability metric from the literature is then applied to analyze the scalability of the EJB component infrastructure under two different architectural solutions.
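
    A scalability metric of the kind applied in such studies can be sketched as follows. This uses the Jogalekar-Woodside productivity formulation with an illustrative response-time value function; the exact metric and value function used in the paper may differ.

```python
def productivity(throughput, response_time, cost, target_rt=1.0):
    """Jogalekar-Woodside style productivity: throughput times the value
    of the delivered response time, per unit cost. The value function
    1/(1 + rt/target) is an illustrative choice, not the paper's."""
    value = 1.0 / (1.0 + response_time / target_rt)
    return throughput * value / cost

def scalability(m1, m2):
    """Scalability between two scale configurations, each given as
    (throughput, response_time, cost). A ratio near or above 1 means
    the system scales well from configuration m1 to m2."""
    return productivity(*m2) / productivity(*m1)
```

Doubling both throughput and cost at constant response time yields a ratio of 1.0 (perfect linear scaling); a ratio below 1 signals that added infrastructure is not paying for itself.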

  13. Scalable RFCMOS Model for 90 nm Technology

    Directory of Open Access Journals (Sweden)

    Ah Fatt Tong

    2011-01-01

    Full Text Available This paper presents the formation of the parasitic components that exist in the RF MOSFET structure during its high-frequency operation. The parasitic components are extracted from the transistor's S-parameter measurement, and their geometry dependence is studied with respect to the layout structure. Physical geometry equations are proposed to represent these parasitic components, and by implementing them into the RF model, a scalable RFCMOS model that is valid up to 49.85 GHz is demonstrated. A new verification technique is proposed to verify the quality of the developed scalable RFCMOS model. The proposed technique can shorten the verification time of the scalable RFCMOS model and ensure that the coded scalable model file is error-free and thus more reliable to use.

  14. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding, by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame-by-frame encoding. Comparing lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% in bits compared to JPEG-LS and H.264 Intra-frame lossless coding, and does so while remaining scalable-to-lossless.
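
    The key property of a reversible integer transform, exact reconstruction despite integer rounding, can be shown with the one-dimensional S-transform (a lifting-based integer Haar transform). This is a minimal stand-in for the codec's reversible integer DCT, not the codec's actual transform.

```python
def forward_s_transform(pair):
    """Lifting-based integer Haar (S-) transform of a sample pair.
    The floor in (d >> 1) is undone exactly by the inverse."""
    x0, x1 = pair
    d = x1 - x0          # detail (difference)
    a = x0 + (d >> 1)    # approximation (floored average)
    return a, d

def inverse_s_transform(coeffs):
    """Exact inverse: recovers the original integers bit-for-bit,
    which is what enables scalable-to-lossless coding."""
    a, d = coeffs
    x0 = a - (d >> 1)
    x1 = x0 + d
    return x0, x1
```

Because each lifting step adds a deterministically rounded function of the other channel, the rounding can be subtracted back out exactly, so lossy layers can be refined all the way to lossless.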

  15. Improving the Performance Scalability of the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, Arthur [Lawrence Livermore National Laboratory (LLNL); Worley, Patrick H [ORNL

    2012-01-01

    The Community Atmosphere Model (CAM), which serves as the atmosphere component of the Community Climate System Model (CCSM), is the most computationally expensive CCSM component in typical configurations. On current and next-generation leadership class computing systems, the performance of CAM is tied to its parallel scalability. Improving performance scalability in CAM has been a challenge, due largely to algorithmic restrictions necessitated by the polar singularities in its latitude-longitude computational grid. Nevertheless, through a combination of exploiting additional parallelism, implementing improved communication protocols, and eliminating scalability bottlenecks, we have been able to more than double the maximum throughput rate of CAM on production platforms. We describe these improvements and present results on the Cray XT5 and IBM BG/P. The approaches taken are not specific to CAM and may inform similar scalability enhancement activities for other codes.

  16. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  17. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Moerman Ingrid

    2007-01-01

    Full Text Available Scalable multiple-description (MD) coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ). Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.
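
    The flavor of multiple-description scalar quantization can be sketched with two staggered uniform quantizers: either description alone gives a coarse reconstruction, while both together intersect to a cell half as wide. This is a plain, non-embedded illustration; EMDSQ additionally embeds the descriptions across bitplanes.

```python
import math

def encode_md(x, delta=1.0):
    """Two staggered uniform quantizers produce two descriptions of x."""
    i = math.floor(x / delta)          # description 1 (aligned grid)
    j = math.floor(x / delta + 0.5)    # description 2 (grid offset by delta/2)
    return i, j

def decode_side1(i, delta=1.0):
    """Reconstruction from description 1 alone: error <= delta/2."""
    return (i + 0.5) * delta

def decode_side2(j, delta=1.0):
    """Reconstruction from description 2 alone: error <= delta/2."""
    return j * delta

def decode_central(i, j, delta=1.0):
    """Central decoder: intersect the two cells; error <= delta/4."""
    lo = max(i * delta, (j - 0.5) * delta)
    hi = min((i + 1) * delta, (j + 0.5) * delta)
    return (lo + hi) / 2
```

Losing either description therefore degrades quality gracefully instead of catastrophically, which is the robustness property exploited for streaming over error-prone channels.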

  18. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Augustin I. Gavrilescu

    2007-02-01

    Full Text Available Scalable multiple-description (MD) coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ). Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.

  19. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    Science.gov (United States)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass, and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  20. SDC: Scalable description coding for adaptive streaming media

    OpenAIRE

    Quinlan, Jason J.; Zahran, Ahmed H.; Sreenan, Cormac J.

    2012-01-01

    Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) represent well-known techniques for video compression with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique that balances the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-percei...

  1. Coherent branching feature bisimulation

    Directory of Open Access Journals (Sweden)

    Tessa Belder

    2015-04-01

    Full Text Available Progress in the behavioral analysis of software product lines at the family level benefits from further development of the underlying semantical theory. Here, we propose a behavioral equivalence for feature transition systems (FTS) generalizing branching bisimulation for labeled transition systems (LTS). We prove that branching feature bisimulation for an FTS of a family of products coincides with branching bisimulation for the LTS projection of each of the individual products. For a restricted notion of coherent branching feature bisimulation we furthermore present a minimization algorithm and show its correctness. Although the minimization problem for coherent branching feature bisimulation is shown to be intractable, application of the algorithm in the setting of a small case study results in a significant speed-up of model checking of behavioral properties.
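
    The underlying notion can be made concrete on the simpler, non-feature case: strong bisimulation classes of an LTS computed by naive partition refinement. This sketch is an illustration of the classical relation that the paper generalizes, not the paper's branching-feature algorithm.

```python
def bisimulation_classes(states, transitions):
    """Naive partition refinement for strong bisimulation on an LTS.
    transitions: iterable of (source, label, target) triples.
    Returns a dict mapping each state to its equivalence-class id."""
    part = {s: 0 for s in states}
    while True:
        # A state's signature: its current class plus the set of
        # (label, target-class) pairs of its outgoing transitions.
        sig = {s: (part[s],
                   frozenset((a, part[t]) for (u, a, t) in transitions if u == s))
               for s in states}
        ids, new = {}, {}
        for s in states:
            new[s] = ids.setdefault(sig[s], len(ids))
        # Refinement only splits classes; a stable class count is a fixpoint.
        if len(ids) == len(set(part.values())):
            return new
        part = new
```

Because each round can only split classes and the class count is bounded by the number of states, the loop terminates; the result equates exactly the states no observer can distinguish by labeled moves.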

  2. Ultrafast coherent nanoscopy

    Science.gov (United States)

    Chen, Xue-Wen; Mohammadi, Ahmad; Baradaran Ghasemi, Amir Hossein; Agio, Mario

    2013-10-01

    The dramatic advances of nanotechnology experienced in recent years enabled us to fabricate optical nanostructures or nano-antennas that greatly enhance the conversion of localised electromagnetic energy into radiation and vice versa. Nano-antennas offer the required improvements in terms of bandwidth, interaction strength and resolution for combining ultrafast spectroscopy, nano-optics and quantum optics to fundamentally push forward the possibility of the coherent optical access on individual nanostructures or even molecules above cryogenic temperatures, where dephasing processes typically occur at very short time scales. In this context, we discuss recent progress in the theoretical description of light-matter interaction at the nanoscale and related experimental findings. Moreover, we present concrete examples in support of our vision and propose a series of experiments that aim at exploring novel promising regimes of optical coherence and quantum optics in advanced spectroscopy. We envisage extensions to ultrafast and nonlinear phenomena, especially in the direction of multidimensional nanoscopy.

  3. Optical Coherence Microscopy

    Science.gov (United States)

    Gelikonov, Grigory V.; Gelikonov, Valentin M.; Ksenofontov, Sergey U.; Morosov, Andrey N.; Myakov, Alexey V.; Potapov, Yury P.; Saposhnikova, Veronika V.; Sergeeva, Ekaterina A.; Shabanov, Dmitry V.; Shakhova, Natalia M.; Zagainova, Elena V.

    This chapter presents the practical embodiment of two types of optical coherence microscope (OCM) modality that differ by probing method. The development and creation of a compact OCM device for imaging internal structures of biological tissue at the cellular level is presented. Ultrahigh axial resolution of 3.4 μm and lateral resolution of 3.9 μm within tissue was attained by combining broadband radiations of two spectrally shifted SLDs and implementing the dynamic focus concept, which allows in-depth scanning of a coherence gate and beam waist synchronously. This OCM prototype is portable and easy to operate; creation of a remote optical probe was feasible due to use of polarization maintaining fiber. The chapter also discusses the results of a theoretical investigation of OCM axial and lateral resolution degradation caused by light scattering in biological tissue. We demonstrate the first OCM images of biological objects using examples of plant and human tissue ex vivo.

  4. Scalable Track Detection in SAR CCD Images

    Energy Technology Data Exchange (ETDEWEB)

    Chow, James G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Quach, Tu-Thach [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
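
    The F-score used to compare the labelings is the harmonic mean of pixel precision and recall; a minimal computation from confusion counts:

```python
def f_score(tp, fp, fn):
    """F1 score from true-positive, false-positive and false-negative
    pixel counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```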

  5. Spectral coherence in windturbine wakes

    Energy Technology Data Exchange (ETDEWEB)

    Hojstrup, J. [Riso National Lab., Roskilde (Denmark)

    1996-12-31

    This paper describes an experiment at a Danish wind farm to investigate the lateral and vertical coherences in the nonequilibrium turbulence of a wind turbine wake. Two meteorological masts were instrumented for measuring profiles of mean speed, turbulence, and temperature. Results are provided graphically for turbulence intensities, velocity spectra, lateral coherence, and vertical coherence. The turbulence was somewhat influenced by the wake, or possibly from aggregated wakes further upstream, even at 14.5 diameters. Lateral coherence (separation 5m) seemed to be unaffected by the wake at 7.5 diameters, but the flow was less coherent in the near wake. The wake appeared to have little influence on vertical coherence (separation 13m). Simple, conventional models for coherence appeared to be adequate descriptions for wake turbulence except for the near wake situation. 3 refs., 7 figs., 1 tab.
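
    The lateral and vertical coherences referred to here are the magnitude-squared coherence, gamma^2(f) = |Sxy(f)|^2 / (Sxx(f) Syy(f)), estimated by averaging cross- and auto-spectra over segments. A naive DFT-based sketch follows (illustrative only; production estimators use Welch's method with windowing and overlap):

```python
import cmath

def dft(x):
    """Plain O(n^2) discrete Fourier transform (illustration, not FFT)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def coherence(x, y, seg_len):
    """Magnitude-squared coherence gamma^2(f) = |Sxy|^2 / (Sxx * Syy),
    with spectra averaged over non-overlapping segments of seg_len."""
    segs = len(x) // seg_len
    sxx = [0.0] * seg_len
    syy = [0.0] * seg_len
    sxy = [0j] * seg_len
    for k in range(segs):
        X = dft(x[k * seg_len:(k + 1) * seg_len])
        Y = dft(y[k * seg_len:(k + 1) * seg_len])
        for f in range(seg_len):
            sxx[f] += abs(X[f]) ** 2
            syy[f] += abs(Y[f]) ** 2
            sxy[f] += X[f] * Y[f].conjugate()
    return [abs(sxy[f]) ** 2 / (sxx[f] * syy[f]) if sxx[f] and syy[f] else 0.0
            for f in range(seg_len)]
```

Without segment averaging the estimate is identically 1, which is why multi-segment (or multi-record) averaging is essential in measurements like the wake experiment above.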

  6. The Puzzle of Coherence

    DEFF Research Database (Denmark)

    Andersen, Anne Bendix; Frederiksen, Kirsten; Beedholm, Kirsten

    2016-01-01

    Background: During the past decade, politicians and healthcare providers have strived to create a coherent healthcare system across primary and secondary healthcare sectors in Denmark. Nevertheless, elderly patients with chronic diseases (EPCD) continue to report experiences of poor-quality care ... to an acute care ward to discharge and later in meetings with healthcare providers in general practice, outpatient clinics, home care and physiotherapy. Furthermore, field observations were conducted in general practice, home care and rehabilitation settings. Research design: An explorative design based ...

  7. Optical Coherency Matrix Tomography

    Science.gov (United States)

    2015-10-19

    Esat Kondakci, Ayman F. Abouraddy & Bahaa E. A. Saleh. The coherence of an optical beam having multiple degrees of freedom (DoFs) is described by a ... measurement yields a real number I_lm (projection l for polarization and m for the spatial DoF) corresponding to the projection of a tomographic ... hermiticity, and semi-positive-definiteness of G ≥ 0. We portray the real and imaginary components of G using the standard visualization from quantum state ...

  8. The Puzzle of Coherence

    DEFF Research Database (Denmark)

    Andersen, Anne Bendix; Frederiksen, Kirsten; Beedholm, Kirsten

    2016-01-01

    During the past decade, politicians and health care providers have strived to create a coherent health care system across primary and secondary health care systems in Denmark. Nevertheless, elderly patients with chronic diseases (EPCD) continue to report experiences of poor-quality care and lack ... both nationally and internationally in preparation of health agreements, implementation of new collaboration forms among health care providers, and in improvement of delegation and transfer of information and assignments across sectors in health care.

  9. 'Interfaces' 4

    Directory of Open Access Journals (Sweden)

    Paolo Borsa

    2018-01-01

    Full Text Available Issue No. 4 is the first open issue of Interfaces: A Journal of Medieval European Literatures. It contains contributions by Henry Bainton (12th-century historiography), Lucie Doležalová (parabiblical texts and the canon), Máire Ní Mhaonaigh (Irish literary culture in Latin and Irish), Isabel Varillas Sánchez (legends of composition of canonical texts, Septuaginta), Wim Verbaal (letter collections, Bernard of Clairvaux), and Jonas Wellendorf (canons of skaldic poets in the 12th/13th century), preceded by a brief Introduction by the editors.

  10. Neurofeedback training of alpha-band coherence enhances motor performance.

    Science.gov (United States)

    Mottaz, Anais; Solcà, Marco; Magnin, Cécile; Corbet, Tiffany; Schnider, Armin; Guggisberg, Adrian G

    2015-09-01

    Neurofeedback training of motor cortex activations with brain-computer interface systems can enhance recovery in stroke patients. Here we propose a new approach which trains resting-state functional connectivity associated with motor performance instead of activations related to movements. Ten healthy subjects and one stroke patient trained alpha-band coherence between their hand motor area and the rest of the brain using neurofeedback with source functional connectivity analysis and visual feedback. Seven out of ten healthy subjects were able to increase alpha-band coherence between the hand motor cortex and the rest of the brain in a single session. The patient with chronic stroke learned to enhance alpha-band coherence of his affected primary motor cortex in 7 neurofeedback sessions applied over one month. Coherence increased specifically in the targeted motor cortex and in alpha frequencies. This increase was associated with clinically meaningful and lasting improvement of motor function after stroke. These results provide proof of concept that neurofeedback training of alpha-band coherence is feasible and behaviorally useful. The study presents evidence for a role of alpha-band coherence in motor learning and may lead to new strategies for rehabilitation. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities decontamination and decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  12. Coherency Sensitive Hashing.

    Science.gov (United States)

    Korman, Simon; Avidan, Shai

    2016-06-01

    Coherency Sensitive Hashing (CSH) extends Locality Sensitivity Hashing (LSH) and PatchMatch to quickly find matching patches between two images. LSH relies on hashing, which maps similar patches to the same bin, in order to find matching patches. PatchMatch, on the other hand, relies on the observation that images are coherent, to propagate good matches to their neighbors in the image plane, using random patch assignment to seed the initial matching. CSH relies on hashing to seed the initial patch matching and on image coherence to propagate good matches. In addition, hashing lets it propagate information between patches with similar appearance (i.e., map to the same bin). This way, information is propagated much faster because it can use similarity in appearance space or neighborhood in the image plane. As a result, CSH is at least three to four times faster than PatchMatch and more accurate, especially in textured regions, where reconstruction artifacts are most noticeable to the human eye. We verified CSH on a new, large scale, data set of 133 image pairs and experimented on several extensions, including: k nearest neighbor search, the addition of rotation and matching three dimensional patches in videos.
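
    The hashing half of the idea can be illustrated with a minimal sign-of-projection LSH for patches: patches whose projections agree in sign fall in the same bin and become match candidates. This is illustrative only; CSH itself uses different projection kernels and adds the coherence-based propagation step described above.

```python
def lsh_signature(patch, projections):
    """LSH bin key for a patch: the sign pattern of a few linear
    projections. Similar patches tend to agree on every sign and
    therefore map to the same bin."""
    return tuple(1 if sum(p * x for p, x in zip(proj, patch)) >= 0 else 0
                 for proj in projections)

def build_bins(patches, projections):
    """Hash every patch (by index) into its LSH bin; patches sharing a
    bin are candidate matches to be verified and propagated."""
    bins = {}
    for idx, patch in enumerate(patches):
        bins.setdefault(lsh_signature(patch, projections), []).append(idx)
    return bins
```

In CSH, candidates harvested from such bins seed the search, and good matches are then propagated to neighboring patches in the image plane as in PatchMatch.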

  13. Scalable persistent identifier systems for dynamic datasets

    Science.gov (United States)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.

    2016-12-01

    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades, and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic, frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides a means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
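
A minimal sketch of pattern-based resolution with view selection, in Python; the pattern list, URLs, and view names are hypothetical examples, not the UNSDI or IGSN production rules.

```python
import re

# Hypothetical pattern hierarchy: more specific patterns are tried first, and a
# view name (e.g. from a querystring parameter) selects among representations.
PATTERNS = [
    (r"^igsn/(?P<sample>[A-Z0-9.]+)$", {
        "html": "https://example.org/sample/{sample}",
        "json": "https://example.org/sample/{sample}.json",
    }),
    (r"^dataset/(?P<ds>\w+)/record/(?P<rec>\d+)$", {
        "html": "https://example.org/{ds}/records/{rec}",
    }),
]

def resolve(identifier, view="html"):
    """Resolve an identifier against the pattern list; None if no pattern
    matches or no pattern provides the requested view."""
    for pattern, views in PATTERNS:
        m = re.match(pattern, identifier)
        if m and view in views:
            return views[view].format(**m.groupdict())
    return None
```

Because resolution is driven by patterns rather than per-object registration, newly generated data objects resolve without any registry update.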

  14. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine clusters. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
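
Iteration as a first-class construct is what lets such a program express graph traversal; the canonical example, transitive closure as a semi-naive fixpoint over a relational join, can be sketched in plain Python (an illustration of the computational model, not MyriaL syntax):

```python
def transitive_closure(edges):
    """Iterated relational join to a fixpoint: reachable = edges UNION
    (reachable JOIN edges), re-joining only newly derived pairs (semi-naive)."""
    reachable = set(edges)
    frontier = set(edges)
    while frontier:
        # Join the newly derived pairs with the base edge relation.
        new = {(a, c) for (a, b) in frontier for (b2, c) in edges if b == b2}
        frontier = new - reachable
        reachable |= frontier
    return reachable
```

Each iteration is itself a data-parallel join, so the whole loop parallelizes the same way an ordinary query does.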

  15. Physical principles for scalable neural recording.

    Science.gov (United States)

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. 
The use of embedded local recording and
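
The scale of the challenge is easy to appreciate with a back-of-envelope estimate; the numbers below are assumed round figures chosen for illustration, not values taken from the paper.

```python
# Illustrative back-of-envelope calculation with assumed round numbers:
neurons = 75e6          # order of magnitude for a mouse brain
sample_rate = 10e3      # Hz, enough to resolve ~millisecond spike timing
bits_per_sample = 10    # assumed ADC resolution

raw_rate_bps = neurons * sample_rate * bits_per_sample
raw_rate_tbps = raw_rate_bps / 1e12   # terabits per second
```

A raw rate of several terabits per second is far beyond any single radio-frequency link, which is why spatially multiplexed infrared or ultrasonic channels are attractive.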

  16. Memory-Scalable GPU Spatial Hierarchy Construction.

    Science.gov (United States)

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm, for kd-trees, automatically balances the level of parallelism against intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
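
The PBFS discipline can be sketched schematically on the CPU side; the queue cap, median split, and leaf size below are illustrative stand-ins, not the paper's GPU kd-tree/BVH builders.

```python
from collections import deque

def build_pbfs(root_items, max_inflight=4, leaf_size=2):
    """Partial-BFS build: expand nodes breadth-first, but never take more than
    max_inflight open nodes per batch; in the out-of-core variant, completed
    nodes of a batch would be dumped to host memory before the next batch."""
    nodes, queue = [], deque([sorted(root_items)])
    while queue:
        batch = [queue.popleft() for _ in range(min(max_inflight, len(queue)))]
        for items in batch:
            if len(items) <= leaf_size:
                nodes.append(("leaf", items))
            else:
                mid = len(items) // 2          # median split on the sort order
                nodes.append(("inner", len(items)))
                queue.append(items[:mid])
                queue.append(items[mid:])
    return nodes

nodes = build_pbfs(list(range(8)))
```

Capping the number of in-flight nodes is what bounds peak intermediate memory while still exposing a full batch of parallel work per iteration.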

  17. Simple, Scalable, Script-Based Science Processor (S4P)

    Science.gov (United States)

    Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution in the EOS Data Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR IKM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. The GES DAAC's Simple, Scalable, Script-based Science Processor is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files. 
The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the
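
A minimal, single-pass sketch of the assembly-line idea in Python (S4P's stationmaster is a Perl daemon; the handler, directory names, and the exact work-order naming used here are illustrative assumptions):

```python
import pathlib
import tempfile

def run_station(station: pathlib.Path, downstream: pathlib.Path, handler):
    """One polling pass of a hypothetical stationmaster: every work order
    (a plain ASCII file) found in `station` is handled, and a new work
    order is written to the downstream station's directory."""
    downstream.mkdir(exist_ok=True)
    for order in sorted(station.glob("DO.*")):
        result = handler(order.read_text())
        (downstream / order.name.replace("DO.", "DO.next.")).write_text(result)
        order.unlink()  # the work order has been consumed

root = pathlib.Path(tempfile.mkdtemp())
station_a, station_b = root / "a", root / "b"
station_a.mkdir()
(station_a / "DO.granule1").write_text("input=granule1")
run_station(station_a, station_b, lambda text: text + "\nstatus=done")
```

Because stations are just directories and work orders just files, the whole pipeline can be inspected, replayed, or repaired with ordinary shell tools.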

  18. Trident: scalable compute archives: workflows, visualization, and analysis

    Science.gov (United States)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era requires new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise even for novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application work flows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using NodeJS, AngularJS, and HighCharts JavaScript libraries among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple work flow execution framework to integrate, deploy, and execute pipelines and applications (2) a progress service to monitor work flows and sub

  19. Quantum coherence versus quantum uncertainty

    Science.gov (United States)

    Luo, Shunlong; Sun, Yuan

    2017-08-01

    The notion of measurement is of both foundational and instrumental significance in quantum mechanics, and coherence destroyed by measurements (decoherence) lies at the very heart of quantum to classical transition. Qualitative aspects of this spirit have been widely recognized and analyzed ever since the inception of quantum theory. However, axiomatic and quantitative investigations of coherence are attracting great interest only recently with several figures of merit for coherence introduced [Baumgratz, Cramer, and Plenio, Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401]. While these resource theoretic approaches have many appealing and intuitive features, they rely crucially on various notions of incoherent operations which are sophisticated, subtle, and not uniquely defined, as have been critically assessed [Chitambar and Gour, Phys. Rev. Lett. 117, 030401 (2016), 10.1103/PhysRevLett.117.030401]. In this paper, we elaborate on the idea that coherence and quantum uncertainty are dual viewpoints of the same quantum substrate, and address coherence quantification by identifying coherence of a state (with respect to a measurement) with quantum uncertainty of a measurement (with respect to a state). Consequently, coherence measures may be set into correspondence with measures of quantum uncertainty. In particular, we take average quantum Fisher information as a measure of quantum uncertainty, and introduce the corresponding measure of coherence, which is demonstrated to exhibit desirable properties. Implications for interpreting quantum purity as maximal coherence, and quantum discord as minimal coherence, are illustrated.

  20. Directly measuring the concurrence of two-atom state via detecting coherent lights

    Science.gov (United States)

    Chen, Li; Yang, Ming; Zhang, Li-Hua; Cao, Zhuo-Liang

    2017-11-01

    Concurrence is an important parameter for quantifying quantum entanglement, but usually state tomography must be performed before quantification. In this paper we propose a scheme, based on cavity-assisted atom–light interaction, to measure the concurrence of two-atom pure states and the Collins–Gisin state directly, without tomography. The concurrence of the atomic states is encoded in the output coherent optical beams after they interact with the cavities and the atoms therein, so detection applied to the output coherent optical beams yields the concurrence of the atomic states. This scheme provides an alternative method for directly measuring atomic entanglement by detecting coherent light, rather than measuring the atomic systems, which greatly simplifies the realization complexity of the direct measurement of atomic entanglement. In addition, as the cavity-assisted atom–light interaction used here is robust and scalable in realistic applications, the current scheme may be realized in the near future.
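
For reference, the quantity the scheme reads out of the light is, for a two-qubit pure state, the standard Wootters concurrence C(psi) = |<psi| (sigma_y x sigma_y) |psi*>|, which is easy to evaluate directly:

```python
import numpy as np

def concurrence_pure(psi):
    """Wootters concurrence of a two-qubit pure state:
    C = |<psi| (sigma_y tensor sigma_y) |psi*>|."""
    sy = np.array([[0, -1j], [1j, 0]])
    return abs(psi.conj() @ (np.kron(sy, sy) @ psi.conj()))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # maximally entangled state
product = np.array([1, 0, 0, 0], dtype=complex)  # separable state |00>
```

C = 1 for a Bell state and C = 0 for a product state, the two extremes the optical readout must distinguish.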

  1. Fiber-Coupled Diamond Quantum Nanophotonic Interface

    Science.gov (United States)

    Burek, Michael J.; Meuwly, Charles; Evans, Ruffin E.; Bhaskar, Mihir K.; Sipahigil, Alp; Meesala, Srujan; Machielse, Bartholomeus; Sukachev, Denis D.; Nguyen, Christian T.; Pacheco, Jose L.; Bielejec, Edward; Lukin, Mikhail D.; Lončar, Marko

    2017-08-01

    Color centers in diamond provide a promising platform for quantum optics in the solid state, with coherent optical transitions and long-lived electron and nuclear spins. Building upon recent demonstrations of nanophotonic waveguides and optical cavities in single-crystal diamond, we now demonstrate on-chip diamond nanophotonics with a high-efficiency fiber-optical interface achieving >90% power coupling at visible wavelengths. We use this approach to demonstrate a bright source of narrow-band single photons based on a silicon-vacancy color center embedded within a waveguide-coupled diamond photonic crystal cavity. Our fiber-coupled diamond quantum nanophotonic interface results in a high flux (approximately 38 kHz) of coherent, near-Fourier-limited single photons.

  2. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes, neural connectivity maps of the brain, using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems, reads to parallel disk arrays and writes to solid-state storage, to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.

  3. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes— neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
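
One common way to partition a spatial index across cluster nodes (an illustration of the general technique, not necessarily this project's exact scheme) is to linearize voxel coordinates with a Z-order (Morton) key and range-partition the keys:

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into a Z-order key; nearby voxels get
    nearby keys, so contiguous key ranges are spatially compact partitions."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key
```

Range queries over an image cuboid then touch only a small number of key ranges, and hence a small number of cluster nodes.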

  4. Fast and scalable image auto-tagging

    CERN Document Server

    Frejaville, Camille; Lepetit, Vincent

    Inside Invenio, the web-based integrated system for handling digital libraries developed at CERN, there is a media module, enabling users to upload photos and videos. Especially in CDS, the Invenio instance used at CERN, people use this digital library to upload pictures of official events that took place at CERN. However, so far, there was no way of tagging what’s inside these photos. This project is meant to solve the problem of tagging persons in a photo in an easy and fast way. First, by implementing a complete tagging interface that allows the user to square parts of the photo, resize them, move them and give them a name. Second, by running face detection so that squares already appear on faces and the user just has to fill the title field. Finally, by running a face recognition system that learned from previous tags created by users. In this report, we will show how we implemented the tagging interface, how we improved the existing face detector to make it more efficient, which face detection methods ...

  5. GSKY: A scalable distributed geospatial data server on the cloud

    Science.gov (United States)

    Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben

    2017-04-01

    Earth systems, environmental and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. The ability to combine information coming from different geospatial collections is in increasing demand within the scientific community, and requires managing and manipulating data with different formats and performing operations such as map reprojections, resampling and other transformations. Due to the large data volume inherent in these collections, storing multiple copies of them is unfeasible, so such data manipulation must be performed on the fly using efficient, high-performance techniques. Ideally this should be done using a trusted data service and common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door for such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), has over 10 Petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server called GSKY (pronounced [jee-skee]). GSKY supports on-demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes, providing a scalable analysis framework that can adapt to serve a large number of concurrent users. Typical geospatial workflows, such as handling different file formats and data types or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. 
This is achieved by decoupling the data ingestion and indexing process as
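
As a toy illustration of the kind of on-the-fly transformation such a server performs, the forward spherical Web Mercator projection (EPSG:4326 geographic coordinates to EPSG:3857 metres) can be written directly:

```python
import math

R = 6378137.0  # WGS84 semi-major axis, used as the Web Mercator sphere radius

def lonlat_to_webmercator(lon_deg, lat_deg):
    """Reproject geographic coordinates (EPSG:4326) to Web Mercator metres
    (EPSG:3857) using the standard spherical formulas."""
    x = math.radians(lon_deg) * R
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * R
    return x, y
```

A production server would of course delegate this to a projection library and handle the full catalogue of coordinate reference systems, but the per-pixel cost of such transformations is why doing them on demand, close to the data, matters.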

  6. Coherent orthogonal polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Celeghini, E., E-mail: celeghini@fi.infn.it [Dipartimento di Fisica, Università di Firenze and INFN–Sezione di Firenze, I50019 Sesto Fiorentino, Firenze (Italy); Olmo, M.A. del, E-mail: olmo@fta.uva.es [Departamento de Física Teórica and IMUVA, Universidad de Valladolid, E-47005, Valladolid (Spain)

    2013-08-15

    We discuss a fundamental characteristic of orthogonal polynomials, like the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we include thus–in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions–Lie algebra theory. We start here from the square integrable functions on the open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second order Casimir C gives rise to the second order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl–Heisenberg algebra h(1) with C=0 for Hermite polynomials and su(1,1) with C=−1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials the Lie algebra is extended both to the whole space of the L² functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L² and, in particular, generalized coherent polynomials are thus obtained. -- Highlights: •Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. •Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. •2nd order Casimir originates a 2nd order differential equation that defines
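
The h(1) case can be made concrete with the standard textbook realization on Hermite functions (a summary of well-known formulas, not an excerpt from the paper):

```latex
% Hermite functions \psi_n(x) = (2^n n! \sqrt{\pi})^{-1/2} H_n(x)\, e^{-x^2/2}
% carry a realization of the Weyl--Heisenberg algebra h(1):
a = \tfrac{1}{\sqrt{2}}\Bigl(x + \tfrac{d}{dx}\Bigr), \qquad
a^\dagger = \tfrac{1}{\sqrt{2}}\Bigl(x - \tfrac{d}{dx}\Bigr), \qquad
[a, a^\dagger] = 1,
\qquad
a\,\psi_n = \sqrt{n}\,\psi_{n-1}, \qquad
a^\dagger\,\psi_n = \sqrt{n+1}\,\psi_{n+1}.
```

The second-order operator built from the ladder pair, a†a = ½(−d²/dx² + x² − 1), then reproduces the Hermite differential equation, which is exactly the pattern the abstract describes: the differential recurrence relations encode the algebra, and the second-order invariant recovers the defining equation of the family.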

  7. Autocorrelation low coherence interferometry

    Science.gov (United States)

    Modell, Mark D.; Ryabukho, Vladimir; Lyakin, Dmitry; Lychagov, Vladislav; Vitkin, Edward; Itzkan, Irving; Perelman, Lev T.

    2008-04-01

    This paper describes the development of a new modality of optical low coherence interferometry (LCI) called autocorrelation LCI (ALCI). The ALCI system employs a Michelson interferometer to measure the longitudinal autocorrelation properties of the sample optical field and does not require a reference beam. As a result, there are no restrictions on the distance between the sample and the ALCI system; moreover, this distance can even change during the measurements. We report experiments using a proof-of-principle ALCI system on a multilayer phantom consisting of three surfaces defining two regions of different refractive indices. The experimental data are in excellent agreement with the predictions of the theoretical model.

  8. Optical coherence refractometry.

    Science.gov (United States)

    Tomlins, Peter H; Woolliams, Peter; Hart, Christian; Beaumont, Andrew; Tedaldi, Matthew

    2008-10-01

    We introduce a novel approach to refractometry using a low coherence interferometer at multiple angles of incidence. We show that for plane parallel samples it is possible to measure their phase refractive index rather than the group index that is usually measured by interferometric methods. This is a significant development because it enables bulk refractive index measurement of scattering and soft samples, not relying on surface measurements that can be prone to error. Our technique is also noncontact and compatible with in situ refractive index measurements. Here, we demonstrate this new technique on a pure silica test piece and a highly scattering resin slab, comparing the results with standard critical angle refractometry.

  9. The Puzzle of Coherence

    DEFF Research Database (Denmark)

    Andersen, Anne Bendix; Frederiksen, Kirsten; Beedholm, Kirsten

    2016-01-01

    During the past decade, politicians and health care providers have strived to create a coherent health care system across primary and secondary health care systems in Denmark. Nevertheless, elderly patients with chronic diseases (EPCD) continue to report experiences of poor-quality care and lack...... in general practice, outpatient clinics, home care and physiotherapy. Furthermore, field observations are conducted in general practice, home care and rehabilitation settings. Perspectives Knowledge about the practice of cross-sectorial collaboration is crucial to the future planning of collaborating...

  10. Diffraction coherence in optics

    CERN Document Server

    Françon, M; Green, L L

    2013-01-01

    Diffraction: Coherence in Optics presents a detailed account of the course on Fraunhofer diffraction phenomena, studied at the Faculty of Science in Paris. The publication first elaborates on Huygens' principle and diffraction phenomena for a monochromatic point source and diffraction by an aperture of simple form. Discussions focus on diffraction at infinity and at a finite distance, simplified expressions for the field, calculation of the path difference, diffraction by a rectangular aperture, narrow slit, and circular aperture, and distribution of luminous flux in the Airy spot. The book th

  11. Coherent laser beam combining

    CERN Document Server

    Brignon, Arnaud

    2013-01-01

    Recently, the improvement of diode pumping in solid state lasers and the development of double clad fiber lasers have made it possible to maintain excellent laser beam quality with single mode fibers. However, the fiber output power is often limited below a power damage threshold. Coherent laser beam combining (CLBC) brings a solution to these limitations by identifying the most efficient architectures and allowing for excellent spectral and spatial quality. This knowledge will become critical for the design of the next generation of high-power lasers and is of major interest to many industrial, environme

  12. PetClaw: A scalable parallel nonlinear wave propagation solver for Python

    KAUST Repository

    Alghamdi, Amal

    2011-01-01

    We present PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation. PetClaw unifies two well-known scientific computing packages, Clawpack and PETSc, using Python interfaces into both. We rely on Clawpack to provide the infrastructure and kernels for time-dependent nonlinear wave propagation. Similarly, we rely on PETSc to manage distributed data arrays and the communication between them. We describe both the implementation and performance of PetClaw as well as our challenges and accomplishments in scaling a Python-based code to tens of thousands of cores on the BlueGene/P architecture. The capabilities of PetClaw are demonstrated through application to a novel problem involving elastic waves in a heterogeneous medium. Very finely resolved simulations are used to demonstrate the suppression of shock formation in this system.
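
The kind of kernel Clawpack supplies can be illustrated with a first-order upwind update for the scalar advection equation; this is a self-contained NumPy sketch, not PetClaw code.

```python
import numpy as np

def upwind_advection(u, c, dx, dt, steps):
    """First-order upwind finite-volume update for u_t + c u_x = 0 (c > 0)
    on a periodic grid; np.roll(u, 1) supplies the left-neighbour cell."""
    nu = c * dt / dx                      # CFL number, must satisfy nu <= 1
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))
    return u

u0 = np.zeros(10)
u0[2] = 1.0
u1 = upwind_advection(u0, c=1.0, dx=0.1, dt=0.1, steps=3)  # CFL = 1
```

At a CFL number of exactly 1 the scheme shifts the profile one cell per step, which makes the update easy to verify; in PetClaw the analogous kernels run on PETSc-managed distributed arrays.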

  13. OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials

    Science.gov (United States)

    Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu

    The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
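
The modular design described above separates the Monte Carlo driver from the energy evaluator; a minimal classical example (a 2D Ising model with a Metropolis sweep, illustrative parameters, not OWL code) shows the interface: swapping `energy` for an ab initio callback would leave the driver untouched.

```python
import math
import random

def energy(spins, L):
    """Nearest-neighbour Ising energy on an L x L periodic lattice (J = 1),
    counting each bond once via the right and down neighbours."""
    E = 0
    for i in range(L):
        for j in range(L):
            E -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return E

def metropolis_sweep(spins, L, T, rng):
    """One Metropolis sweep: flip each spin with probability min(1, exp(-dE/T))."""
    for i in range(L):
        for j in range(L):
            dE = 2 * spins[i][j] * (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                                    + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1

Lsize = 8
rng = random.Random(0)
spins = [[rng.choice((-1, 1)) for _ in range(Lsize)] for _ in range(Lsize)]
for _ in range(20):
    metropolis_sweep(spins, Lsize, T=1.5, rng=rng)
```

Multi-level parallelism enters naturally: independent walkers (or Wang-Landau windows) run in parallel, while each energy evaluation can itself be parallel.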

  14. Modelling heterogeneous interfaces for solar water splitting

    Science.gov (United States)

    Pham, Tuan Anh; Ping, Yuan; Galli, Giulia

    2017-04-01

    The generation of hydrogen from water and sunlight offers a promising approach for producing scalable and sustainable carbon-free energy. The key to a successful solar-to-fuel technology is the design of efficient, long-lasting and low-cost photoelectrochemical cells, which are responsible for absorbing sunlight and driving water splitting reactions. To this end, a detailed understanding and control of heterogeneous interfaces between photoabsorbers, electrolytes and catalysts present in photoelectrochemical cells is essential. Here we review recent progress and open challenges in predicting physicochemical properties of heterogeneous interfaces for solar water splitting applications using first-principles-based approaches, and highlight the key role of these calculations in interpreting increasingly complex experiments.

  15. Optical noise and temporal coherence

    Science.gov (United States)

    Chavel, P.

    1980-08-01

    Previous articles have been devoted to the study of optical noise as a function of spatial coherence. The present one completes this study by considering temporal coherence. Noise arising from defects in the pupil plane and affecting the high spatial frequencies of an image is notably reduced by white-light illumination. Temporal coherence has little effect on noise arising from defects in the object plane. However, impulse noise due to small isolated defects is reduced in size. Physical arguments are presented to explain these phenomena and a mathematical study of partially coherent imaging in the presence of random defects is given.

  16. Measuring Quantum Coherence with Entanglement.

    Science.gov (United States)

    Streltsov, Alexander; Singh, Uttam; Dhar, Himadri Shekhar; Bera, Manabendra Nath; Adesso, Gerardo

    2015-07-10

    Quantum coherence is an essential ingredient in quantum information processing and plays a central role in emergent fields such as nanoscale thermodynamics and quantum biology. However, our understanding and quantitative characterization of coherence as an operational resource are still very limited. Here we show that any degree of coherence with respect to some reference basis can be converted to entanglement via incoherent operations. This finding allows us to define a novel general class of measures of coherence for a quantum system of arbitrary dimension, in terms of the maximum bipartite entanglement that can be generated via incoherent operations applied to the system and an incoherent ancilla. The resulting measures are proven to be valid coherence monotones satisfying all the requirements dictated by the resource theory of quantum coherence. We demonstrate the usefulness of our approach by proving that the fidelity-based geometric measure of coherence is a full convex coherence monotone, and deriving a closed formula for it on arbitrary single-qubit states. Our work provides a clear quantitative and operational connection between coherence and entanglement, two landmark manifestations of quantum theory and both key enablers for quantum technologies.
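
    For context on the measures discussed above, the simplest coherence monotone in this resource theory, the l1 norm of coherence, is just the sum of the absolute off-diagonal elements of the density matrix in the reference basis. The sketch below implements that standard definition, not the paper's entanglement-based construction.

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm of coherence: sum of |rho_ij| over off-diagonal entries of a
    density matrix. Standard resource-theory monotone (not the paper's
    entanglement-based measure, which optimizes over incoherent operations)."""
    rho = np.asarray(rho, dtype=complex)
    off_diag = rho - np.diag(np.diag(rho))
    return float(np.abs(off_diag).sum())
```

    The maximally coherent qubit state |+><+| (all matrix entries 1/2) has C_l1 = 1, while any diagonal, i.e. incoherent, state has C_l1 = 0.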

  17. Building a Community Infrastructure for Scalable On-Line Performance Analysis Tools around Open|Speedshop

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton

    2014-06-30

    Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them requires more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining the necessary understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as “pipelines” of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted for petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match with a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user set of tools. This set of

  18. Electrokinetics of scalable, electric-field-assisted fabrication of vertically aligned carbon-nanotube/polymer composites

    Science.gov (United States)

    Castellano, Richard J.; Akin, Cevat; Giraldo, Gabriel; Kim, Sangil; Fornasiero, Francesco; Shan, Jerry W.

    2015-06-01

    Composite thin films incorporating vertically aligned carbon nanotubes (VACNTs) offer promise for a variety of applications where the vertical alignment of the CNTs is critical to meet performance requirements, e.g., highly permeable membranes, thermal interfaces, dry adhesives, and films with anisotropic electrical conductivity. However, current VACNT fabrication techniques are complex and difficult to scale up. Here, we describe a solution-based, electric-field-assisted approach as a cost-effective and scalable method to produce large-area VACNT composites. Multiwall carbon nanotubes are dispersed in a polymeric matrix, aligned with an alternating-current (AC) electric field, and electrophoretically concentrated to one side of the thin film with a direct-current (DC) component of the electric field. This approach enables the fabrication of highly concentrated, individually aligned nanotube composites from suspensions of very dilute volume fraction (ϕ = 4 × 10⁻⁴). We experimentally investigate the basic electrokinetics of nanotube alignment under AC electric fields, and show that simple models can adequately predict the rate and degree of nanotube alignment using classical expressions for the induced dipole moment, hydrodynamic drag, and the effects of Brownian motion. The composite AC + DC field also introduces complex fluid motion associated with AC electro-osmosis and the electrochemistry of the fluid/electrode interface. We experimentally probe the electric-field parameters behind these electrokinetic phenomena, and demonstrate, with suitable choices of processing parameters, the ability to scalably produce large-area composites containing VACNTs at number densities up to 10¹⁰ nanotubes/cm². This VACNT number density exceeds that of previous electric-field-fabricated composites by an order of magnitude, and the surface-area coverage of the 40 nm VACNTs is comparable to that of chemical-vapor-deposition-grown arrays of smaller-diameter nanotubes.

  19. Diagnostic Imaging Of The Vitreous By Optical Coherence Tomography

    OpenAIRE

    Itakura H

    2013-01-01

    Recently, a new treatment for vitreoretinal interface diseases, intravitreal injection of an enzymatic vitreolysis agent, has begun to come into use. This treatment releases the adhesion between the incompletely detached vitreous and the retina. Because the vitreous is transparent, it is difficult to observe the relationship between the vitreous and the retina using only a slit-lamp microscope, so optical coherence tomography (OCT) is necessary when deciding the indication for vitreous injection....

  20. Influence of physiological coherence training on sense of coherence ...

    African Journals Online (AJOL)

    The goal of this study was to examine the influence of physiological coherence training, using the emWave2 apparatus on sense of coherence and zone perceptions. A within group, pre-test and post-test, outcome evaluative design was employed to assess changes in physiological and psychological variables.

  1. Extending the POSIX I/O interface: a parallel file system perspective.

    Energy Technology Data Exchange (ETDEWEB)

    Vilayannur, M.; Lang, S.; Ross, R.; Klundt, R.; Ward, L.; Mathematics and Computer Science; VMWare, Inc.; SNL

    2008-12-11

    The POSIX interface does not lend itself well to enabling good performance for high-end applications. Extensions are needed in the POSIX I/O interface so that high-concurrency HPC applications running on top of parallel file systems perform well. This paper presents the rationale, design, and evaluation of a reference implementation of a subset of the POSIX I/O interfaces on a widely used parallel file system (PVFS) on clusters. Experimental results on a set of micro-benchmarks confirm that the extensions to the POSIX interface greatly improve scalability and performance.

  2. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended into the proposed scalable scenario. A novel interlayer intra/interprediction is added to reduce the number of bits required by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate reduction of 20.50% can be achieved compared with simulcast encoding. The proposed technique achieves a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, these significant rate savings confirm that the proposed method achieves better performance.
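
    The Bjøntegaard delta rate figures quoted above are conventionally computed by fitting cubic polynomials to log-rate versus PSNR and integrating the gap between the fitted curves over the overlapping quality range. The sketch below follows that standard procedure; it is an assumption that the paper used the same script, so treat it as the common method rather than the authors' exact tool.

```python
import numpy as np

def bd_rate(psnr_ref, rate_ref, psnr_test, rate_test):
    """Bjøntegaard delta rate (%) between two rate-distortion curves: fit
    cubic polynomials to log10(rate) as a function of PSNR, then average the
    gap over the overlapping PSNR range. Sketch of the standard procedure."""
    p_ref = np.polyfit(psnr_ref, np.log10(rate_ref), 3)
    p_test = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    # integrate each fitted log-rate curve over the shared PSNR interval
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return float((10.0 ** avg_diff - 1.0) * 100.0)
```

    A negative return value means the test codec needs less rate at equal quality; for example, a curve whose rates are uniformly 20% higher yields a BD-rate of +20%.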

  3. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    Energy Technology Data Exchange (ETDEWEB)

    Masalma, Yahya [Universidad del Turabo; Jiao, Yu [ORNL

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate samples. The Sobol sequence was chosen to avoid clustering effects in the generated samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested. The obtained results demonstrate the scalability and accuracy of the implemented algorithm. The algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
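
    Low-discrepancy sampling of the kind described can be shown compactly. Generating Sobol points requires tabulated direction numbers, so the sketch below substitutes a Halton sequence (bases 2 and 3), which shares the low-discrepancy property; it is an illustrative stand-in, not the report's implementation.

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput low-discrepancy sequence in `base`:
    reflect the base-`base` digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton_2d(n_points):
    """2D Halton points (bases 2 and 3), a simpler low-discrepancy cousin of
    the Sobol sequence used in the report."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n_points + 1)]

def qmc_integrate(f, n_points):
    """Quasi-Monte Carlo estimate of the integral of f over the unit square."""
    return sum(f(x, y) for x, y in halton_2d(n_points)) / n_points
```

    Because the points fill the domain evenly, the estimate converges roughly as O((log N)^2 / N) rather than the O(1/sqrt(N)) of plain Monte Carlo.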

  4. A coherent synchrotron X-ray microradiology investigation of bubble and droplet coalescence.

    Science.gov (United States)

    Weon, B M; Je, J H; Hwu, Y; Margaritondo, G

    2008-11-01

    A quantitative application of microradiology with coherent X-rays to the real-time study of microbubble and microdroplet coalescence phenomena, with specific emphasis on the size relations in three-body events, is presented. The results illustrate the remarkable effectiveness of coherent X-ray imaging in delineating interfaces in multiphase systems, in accurately measuring their geometric properties and in monitoring their dynamics.

  5. Current parallel I/O limitations to scalable data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability which we have encountered when applying our otherwise optimally scalable parallel statistical analysis tool kit to large data sets distributed across the parallel file system of the current premier DOE computational facility. It presents our study of the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics tool kit [PTBM11]. To this end, we tested it on the Jaguar-pf DOE/ORNL peta-scale platform using large combustion simulation data under a variety of process counts and domain decomposition scenarios. We have recalled the foundations of the parallel statistical analysis tool kit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows and achieving an optimal design for scalable parallel implementations. We have briefly reviewed those earlier results and publications which allow us to conclude that we have achieved both goals. However, we have further established that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. To address these parallel I/O limitations, we will investigate the use of the Adaptable I/O System (ADIOS) [LZL+10] to improve I/O performance while maintaining flexibility for a variety of I/O options, such as MPI I/O and POSIX I/O. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf.
Simulation code being developed on these systems will also use ADIOS to output the data thereby making it easier for other systems, such as ours, to

  6. Scalable synthesis and energy applications of defect engineered nanomaterials

    Science.gov (United States)

    Karakaya, Mehmet

    Nanomaterials and nanotechnologies have attracted a great deal of attention over the past few decades due to their novel physical properties, such as high aspect ratio, surface morphology, and impurities, which lead to unique chemical, optical and electronic properties. The awareness of the importance of nanomaterials has motivated researchers to develop nanomaterial growth techniques to better control nanostructure properties, such as size and surface morphology, that may alter their fundamental behavior. Carbon nanotubes (CNTs) are among the most promising materials, with their rigidity, strength, elasticity and electrical conductivity, for future applications. Despite the excellent properties explored by abundant research, a major challenge remains in introducing them into the macroscopic world for practical applications. This thesis first gives a brief overview of CNTs; it then examines the mechanical and oil-absorption properties of macro-scale CNT assemblies, followed by CNT energy-storage applications and, finally, fundamental studies of defect-introduced graphene systems. Chapter Two focuses on helically coiled carbon nanotube (HCNT) foams in compression. Similarly to other foams, HCNT foams exhibit preconditioning effects in response to cyclic loading; however, their fundamental deformation mechanisms are unique. Bulk HCNT foams exhibit super-compressibility and recover more than 90% of large compressive strains (up to 80%). When subjected to striker impacts, HCNT foams mitigate impact stresses more effectively than other CNT foams comprised of non-helical CNTs (~50% improvement). The unique mechanical properties we revealed demonstrate that HCNT foams are ideally suited for applications in packaging, impact protection, and vibration mitigation.
The third chapter describes a simple method for the scalable synthesis of three-dimensional, elastic, and recyclable multi-walled carbon nanotube (MWCNT) based light weight bucky-aerogels (BAGs) that are

  7. A coherent Ising machine for 2000-node optimization problems.

    Science.gov (United States)

    Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-Ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki

    2016-11-04

    The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph. Copyright © 2016, American Association for the Advancement of Science.
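
    The classical simulated-annealing baseline that the coherent Ising machine is benchmarked against can be sketched for MAX-CUT directly in the ±1 spin formulation: an edge contributes to the cut when its endpoints carry opposite spins. This is an illustrative baseline, not the authors' benchmark code, and the cooling schedule is a common default rather than the one used in the paper.

```python
import math, random

def maxcut_anneal(edges, n, steps=20000, t0=2.0, t1=0.01, seed=1):
    """Simulated annealing for MAX-CUT on spins s_i in {-1, +1}; an edge is
    cut when its endpoints have opposite spins. Illustrative baseline with a
    geometric cooling schedule, not the paper's benchmark implementation."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    cut = lambda sv: sum(1 for a, b in edges if sv[a] != sv[b])
    best, best_cut = s[:], cut(s)
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)        # geometric cooling
        i = rng.randrange(n)
        delta = 0                                   # change in cut if spin i flips
        for a, b in edges:
            if i in (a, b):
                j = b if a == i else a
                delta += 1 if s[i] == s[j] else -1
        # accept improving flips always, worsening flips thermally
        if delta >= 0 or rng.random() < math.exp(delta / t):
            s[i] = -s[i]
            c = cut(s)
            if c > best_cut:
                best, best_cut = s[:], c
    return best, best_cut
```

    On a 4-node ring the optimum alternates spins around the cycle and cuts all four edges; the annealer finds it almost immediately, whereas the paper's 2000-node complete graphs are where a hardware Ising machine becomes interesting.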

  8. Thermal transport across metal silicide-silicon interfaces: An experimental comparison between epitaxial and nonepitaxial interfaces

    Science.gov (United States)

    Ye, Ning; Feser, Joseph P.; Sadasivam, Sridhar; Fisher, Timothy S.; Wang, Tianshi; Ni, Chaoying; Janotti, Anderson

    2017-02-01

    Silicides are used extensively in nano- and microdevices due to their low electrical resistivity, low contact resistance to silicon, and their process compatibility. In this work, the thermal interface conductances of TiSi2, CoSi2, NiSi, and PtSi are studied using time-domain thermoreflectance. Exploiting the fact that most silicides formed on Si(111) substrates grow epitaxially, while most silicides on Si(100) do not, we study the effect of epitaxy, and show that for a wide variety of interfaces there is no dependence of interface conductance on the detailed structure of the interface. In particular, there is no difference in the thermal interface conductance between epitaxial and nonepitaxial silicide/silicon interfaces, nor between epitaxial interfaces with different interface orientations. While these silicide-based interfaces yield the highest reported interface conductances of any known interface with silicon, none of the interfaces studied are found to operate close to the phonon radiation limit, indicating that phonon transmission coefficients are nonunity in all cases and yet remain insensitive to interfacial structure. In the case of CoSi2, a comparison is made with detailed computational models using (1) full-dispersion diffuse mismatch modeling (DMM) including the effect of near-interfacial strain, and (2) an atomistic Green's function (AGF) approach that integrates near-interface changes in the interatomic force constants obtained through density functional perturbation theory. Above 100 K, the AGF approach significantly underpredicts interface conductance suggesting that energy transport does not occur purely by coherent transmission of phonons, even for epitaxial interfaces.
The full-dispersion DMM closely predicts the experimentally observed interface conductances for CoSi2, NiSi, and TiSi2 interfaces, while it remains an open question whether inelastic scattering, cross-interfacial electron-phonon coupling, or other mechanisms could also account for

  9. Natural product synthesis in the age of scalability.

    Science.gov (United States)

    Kuttruff, Christian A; Eastgate, Martin D; Baran, Phil S

    2014-04-01

    The ability to procure useful quantities of a molecule by simple, scalable routes is emerging as an important goal in natural product synthesis. Approaches to molecules that yield substantial material enable collaborative investigations (such as SAR studies or eventual commercial production) and inherently spur innovation in chemistry. As such, when evaluating a natural product synthesis, scalability is becoming an increasingly important factor. In this Highlight, we discuss recent examples of natural product synthesis from our laboratory and others, where the preparation of gram-scale quantities of a target compound or a key intermediate allowed for a deeper understanding of biological activities or enabled further investigational collaborations.

  10. Providing scalable system software for high-end simulations

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D. [Sandia National Labs., Albuquerque, NM (United States)

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  11. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide. As a matter of fact, smart meters produce large volumes of data; thus, it is important for smart meter data management and analytics systems to process petabytes of data. Benchmarking and testing of these systems require scalable data; however, it can...... be challenging to get large data sets due to privacy and/or data protection regulations. This paper presents a scalable smart meter data generator using Spark that can generate realistic data sets. The proposed data generator is based on a supervised machine learning method that can generate data of any size......
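
    A toy version of such a generator conveys the idea, though the paper's generator is driven by a supervised machine-learning model rather than a fixed profile. The morning/evening peaks, standby load, and noise amplitude below are all hypothetical parameters chosen only to give the output a realistic shape.

```python
import math, random

def generate_meter_readings(days, seed=42):
    """Synthetic hourly smart-meter readings in kWh: a standby load plus two
    Gaussian-shaped daily peaks and uniform noise. Hypothetical profile
    parameters; the paper's generator learns consumption patterns from data."""
    rng = random.Random(seed)
    readings = []
    for h in range(days * 24):
        hour = h % 24
        base = 0.3                                         # standby load
        peaks = (0.8 * math.exp(-((hour - 8) ** 2) / 8.0)      # morning peak
                 + 1.2 * math.exp(-((hour - 19) ** 2) / 8.0))  # evening peak
        readings.append(max(0.0, base + peaks + rng.uniform(-0.05, 0.05)))
    return readings
```

    Scaling this out with Spark would amount to mapping the generator over a distributed range of (meter id, day) keys, which is the parallelization axis the paper exploits.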

  12. Optimally cloned binary coherent states

    DEFF Research Database (Denmark)

    Mueller, C. R.; Leuchs, G.; Marquardt, Ch

    2017-01-01

    Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous-variable states to map binary phase-shift keyed coherent states onto the Bloch sphere and to derive...

  13. Scalable polylithic on-package integratable apparatus and method

    Energy Technology Data Exchange (ETDEWEB)

    Khare, Surhud; Somasekhar, Dinesh; Borkar, Shekhar Y.

    2017-12-05

    Described is an apparatus which comprises: a first die including: a processing core; a crossbar switch coupled to the processing core; and a first edge interface coupled to the crossbar switch; and a second die including: a first edge interface positioned at a periphery of the second die and coupled to the first edge interface of the first die, wherein the first edge interface of the first die and the first edge interface of the second die are positioned across each other; a clock synchronization circuit coupled to the second edge interface; and a memory interface coupled to the clock synchronization circuit.

  14. Evolution equation for quantum coherence.

    Science.gov (United States)

    Hu, Ming-Liang; Fan, Heng

    2016-07-07

    The estimation of the decoherence process of an open quantum system is of both theoretical significance and experimental appeal. Practically, decoherence can be easily estimated if the coherence evolution satisfies some simple relations. We introduce a framework for studying the evolution equation of coherence. Based on this framework, we prove a simple factorization relation (FR) for the l1 norm of coherence and identify the sets of quantum channels for which this FR holds. By using this FR, we further determine the condition on the transformation matrix of the quantum channel which can support permanent freezing of the l1 norm of coherence. We finally reveal the universality of this FR by showing that it holds for many other related coherence and quantum correlation measures.

  15. Coherence and correspondence in medicine

    Directory of Open Access Journals (Sweden)

    Thomas G. Tape

    2009-03-01

    Full Text Available Many controversies in medical science can be framed as tension between a coherence approach (which seeks logic and explanation and a correspondence approach (which emphasizes empirical correctness. In many instances, a coherence-based theory leads to an understanding of disease that is not supported by empirical evidence. Physicians and patients alike tend to favor the coherence approach even in the face of strong, contradictory correspondence evidence. Examples include the management of atrial fibrillation, treatment of acute bronchitis, and the use of Vitamin E to prevent heart disease. Despite the frequent occurrence of controversy stemming from coherence-correspondence conflicts, medical professionals are generally unaware of these terms and the philosophical traditions that underlie them. Learning about the coherence-correspondence distinction and using the best of both approaches could not only help reconcile controversy but also lead to striking advances in medical science.

  16. Heterogeneous scalable framework for multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Karla Vanessa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture where the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications to interact directly with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.

  17. Open Core Protocol (OCP) Clock Domain Crossing Interfaces

    DEFF Research Database (Denmark)

    Herlev, Mathias; Poulsen, Christian Keis; Sparsø, Jens

    2014-01-01

    The open core protocol (OCP) is an openly licensed configurable and scalable interface protocol for on-chip subsystem communications. The protocol defines read and write transactions from a master towards a slave across a point-to-point connection and the protocol assumes a single common clock...... these control signals are passed across the clock-domain boundary and synchronized it may add significant latency to the duration of a transaction. Our interface designs avoid this and synchronize only a single signal transition in each direction during a read or a write transaction. While the problem...... of synchronizing a simple streaming interface is well described in the literature and often solved using bi-synchronous FIFOs we found surprisingly little published material addressing synchronization of bus-style read-write transaction interfaces....

  18. Human-computer interface including haptically controlled interactions

    Science.gov (United States)

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
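
    The force-to-scroll mapping described can be sketched as a simple transfer function with a deadband at the haptic boundary. All thresholds, gains and caps below are hypothetical illustration values; the patent specifies only that the scroll rate is related to the magnitude of the applied force.

```python
def scroll_rate(force, boundary_force=0.2, gain=40.0, max_rate=120.0):
    """Map force applied against a haptic boundary to a signed scroll rate
    (lines/sec). Below the boundary threshold nothing scrolls; above it the
    rate grows linearly with the excess force, up to a cap. Parameters are
    hypothetical; the patent only relates rate to force magnitude."""
    excess = abs(force) - boundary_force
    if excess <= 0:
        return 0.0                     # inside the deadband: cursor rests on boundary
    rate = min(gain * excess, max_rate)
    return rate if force > 0 else -rate
```

    The deadband is what lets the user feel the boundary without triggering scrolling, while the cap keeps a hard push from flinging the document.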

  19. Optical Coherence Tomography

    DEFF Research Database (Denmark)

    Mogensen, Mette; Themstrup, Lotte; Banzhaf, Christina

    2014-01-01

    Optical coherence tomography (OCT) has developed rapidly since its first realisation in medicine and is currently an emerging technology in the diagnosis of skin disease. OCT is an interferometric technique that detects reflected and backscattered light from tissue and is often described...... as the optical analogue to ultrasound. The inherent safety of the technology allows for in vivo use of OCT in patients. The main strength of OCT is the depth resolution. In dermatology, most OCT research has focused on non-melanoma skin cancer (NMSC) and non-invasive monitoring of morphological changes...... in a number of skin diseases based on pattern recognition, and studies have found good agreement between OCT images and histopathological architecture. OCT has shown high accuracy in distinguishing lesions from normal skin, which is of great importance in identifying tumour borders or residual neoplastic...

  20. Quantum information and coherence

    CERN Document Server

    Öhberg, Patrik

    2014-01-01

    This book offers an introduction to ten key topics in quantum information science and quantum coherent phenomena, aimed at graduate-student level. The chapters cover some of the most recent developments in this dynamic research field where theoretical and experimental physics, combined with computer science, provide a fascinating arena for groundbreaking new concepts in information processing. The book addresses both the theoretical and experimental aspects of the subject, and clearly demonstrates how progress in experimental techniques has stimulated a great deal of theoretical effort and vice versa. Experiments are shifting from simply preparing and measuring quantum states to controlling and manipulating them, and the book outlines how the first real applications, notably quantum key distribution for secure communication, are starting to emerge. The chapters cover quantum retrodiction, ultracold quantum gases in optical lattices, optomechanics, quantum algorithms, quantum key distribution, quantum cont...

  1. Temporal Coherence Strategies for Augmented Reality Labeling.

    Science.gov (United States)

    Madsen, Jacob Boesen; Tatzgern, Markus; Madsen, Claus B; Schmalstieg, Dieter; Kalkofen, Denis

    2016-04-01

    Temporal coherence of annotations is an important factor in augmented reality user interfaces and for information visualization. In this paper, we empirically evaluate four different annotation techniques. Based on these findings, we follow up with subjective evaluations in a second experiment. Results show that presenting annotations in object space or image space leads to a significant difference in task performance. Furthermore, there is a significant interaction between rendering space and update frequency of annotations. Participants improve significantly in locating annotations when annotations are presented in object space and the view-management update rate is limited. In a follow-up experiment, participants appear to be more satisfied with a limited update rate in comparison to a continuous update rate of the view management system.

  2. Experimental detection of quantum coherent evolution through the violation of Leggett-Garg-type inequalities.

    Science.gov (United States)

    Zhou, Zong-Quan; Huelga, Susana F; Li, Chuan-Feng; Guo, Guang-Can

    2015-09-11

    We discuss the use of inequalities of the Leggett-Garg type (LGtI) to witness quantum coherence and present the first experimental violation of this type of inequalities using a light-matter interfaced system. By separately benchmarking the Markovian character of the evolution and the translational invariance of the conditional probabilities, the observed violation of a LGtI is attributed to the quantum coherent character of the process. These results provide a general method to benchmark "quantumness" when the absence of memory effects can be independently certified and confirm the persistence of quantum coherent features within systems of increasing complexity.
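
For context, the simplest member of the Leggett-Garg family is the three-time inequality (standard textbook form; the specific LGtI violated in the paper may differ in detail):

```latex
K_3 \;=\; C_{21} + C_{32} - C_{31} \;\le\; 1,
\qquad
C_{ij} \;=\; \langle Q(t_i)\,Q(t_j)\rangle ,
```

where $Q(t)=\pm 1$ is a dichotomic observable measured at times $t_1<t_2<t_3$. Macrorealism together with non-invasive measurability bounds $K_3$ by 1, while a coherently evolving two-level system can reach $K_3 = 3/2$.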

  3. Scalable Track Initiation for Optical Space Surveillance

    Science.gov (United States)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    least cubic and commonly quartic or higher. Therefore, practical implementations require attention to the scalability of the algorithms, when one is dealing with the very large number of observations from large surveillance telescopes. We address two broad categories of algorithms. The first category includes and extends the classical methods of Laplace and Gauss, as well as the more modern method of Gooding, in which one solves explicitly for the apparent range to the target in terms of the given data. In particular, recent ideas offered by Mortari and Karimi allow us to construct a family of range-solution methods that can be scaled to many processors efficiently. We find that the orbit solutions (data association hypotheses) can be ranked by means of a concept we call persistence, in which a simple statistical measure of likelihood is based on the frequency of occurrence of combinations of observations in consistent orbit solutions. Of course, range-solution methods can be expected to perform poorly if the orbit solutions of most interest are not well conditioned. The second category of algorithms addresses this difficulty. Instead of solving for range, these methods attach a set of range hypotheses to each measured line of sight. Then all pair-wise combinations of observations are considered and the family of Lambert problems is solved for each pair. These algorithms also have polynomial complexity, though now the complexity is quadratic in the number of observations and also quadratic in the number of range hypotheses. We offer a novel type of admissible-region analysis, constructing partitions of the orbital element space and deriving rigorous upper and lower bounds on the possible values of the range for each partition. This analysis allows us to parallelize with respect to the element partitions and to reduce the number of range hypotheses that have to be considered in each processor simply by making the partitions smaller. Naturally, there are many ways to
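
The quadratic scaling described above can be illustrated with a toy count of the Lambert problems generated by pair-wise association. This is an illustrative sketch only; the function name and parameters are assumptions, not the authors' code:

```python
from itertools import combinations

def count_lambert_problems(n_obs: int, n_range_hyp: int) -> int:
    """Each unordered pair of observations, with one range hypothesis
    attached to each of the two lines of sight, yields one Lambert
    problem: C(n_obs, 2) * n_range_hyp**2 in total."""
    n_pairs = sum(1 for _ in combinations(range(n_obs), 2))  # C(n_obs, 2)
    return n_pairs * n_range_hyp ** 2

# Quadratic in both the number of observations and the number of
# range hypotheses per line of sight:
print(count_lambert_problems(100, 10))  # → 495000
```

Shrinking the element-space partitions reduces `n_range_hyp` per processor, which cuts the per-processor workload quadratically, consistent with the parallelization strategy described above.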

  4. Two-mode Nonlinear Coherent States

    OpenAIRE

    Wang, Xiao-Guang

    2000-01-01

    Two-mode nonlinear coherent states are introduced in this paper. The pair coherent states and the two-mode Perelomov coherent states are special cases of the two-mode nonlinear coherent states. The exponential form of the two-mode nonlinear coherent states is given. The photon-added or photon-subtracted two-mode nonlinear coherent states are found to be two-mode nonlinear coherent states with different nonlinear functions. The parity coherent states are introduced as examples of two-mode nonl...
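
As standard background (textbook definitions, not reproduced from the paper itself): single-mode nonlinear coherent states are right eigenstates of a deformed annihilation operator, and the pair coherent states mentioned above are a two-mode analogue:

```latex
% Single-mode nonlinear coherent state, f a deformation function:
f(\hat{n})\,\hat{a}\,|z,f\rangle = z\,|z,f\rangle .
% Pair coherent states (Agarwal): joint eigenstates of the two-mode
% annihilation product and of the photon-number difference:
\hat{a}\hat{b}\,|\zeta,q\rangle = \zeta\,|\zeta,q\rangle ,\qquad
\bigl(\hat{a}^{\dagger}\hat{a}-\hat{b}^{\dagger}\hat{b}\bigr)\,|\zeta,q\rangle = q\,|\zeta,q\rangle .
```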

  5. Quicksilver: Middleware for Scalable Self-Regenerative Systems

    Science.gov (United States)

    2006-04-01

    standard best practice in the area, and hence helped us identify problems that can be justified in terms of real user needs. Our own group may write a...semantics, generally lack efficient, scalable implementations. Systems approaches usually lack a precise formal specification, limiting the

  6. Scalable learning of probabilistic latent models for collaborative filtering

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2015-01-01

    Collaborative filtering has emerged as a popular way of making user recommendations, but with the increasing sizes of the underlying databases scalability is becoming a crucial issue. In this paper we focus on a recently proposed probabilistic collaborative filtering model that explicitly...
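
The truncated abstract does not specify the model, so as generic background, a latent-factor recommender of the kind commonly used in scalable collaborative filtering can be sketched as follows. This is an illustrative stand-in trained by plain SGD, not the probabilistic model or algorithm of the paper:

```python
import random

def train_mf(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02, epochs=500):
    """Minimal latent-factor model: predict r(u, i) ~ P[u] . Q[i],
    fitted by stochastic gradient descent on observed ratings."""
    rng = random.Random(0)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)  # regularized updates
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# Toy data: (user, item, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.5)]
P, Q = train_mf(ratings, n_users=2, n_items=2)
pred = sum(P[0][f] * Q[0][f] for f in range(2))
print(round(pred, 2))  # close to the observed rating 5.0
```

Because each SGD step touches only one rating, the cost per pass is linear in the number of observed ratings, which is the property that makes such factorizations attractive at scale.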

  7. PSOM2—partitioning-based scalable ontology matching using ...

    Indian Academy of Sciences (India)

    B Sathiya

    2017-11-16

    Abstract. The growth and use of semantic web has led to a drastic increase in the size, heterogeneity and number of ontologies that are available on the web. Correspondingly, scalable ontology matching algorithms that will eliminate the heterogeneity among large ontologies have become a necessity.

  8. Cognition-inspired Descriptors for Scalable Cover Song Retrieval

    NARCIS (Netherlands)

    van Balen, J.M.H.; Bountouridis, D.; Wiering, F.; Veltkamp, R.C.

    2014-01-01

    Inspired by representations used in music cognition studies and computational musicology, we propose three simple and interpretable descriptors for use in mid- to high-level computational analysis of musical audio and applications in content-based retrieval. We also argue that the task of scalable

  9. Scalable Directed Self-Assembly Using Ultrasound Waves

    Science.gov (United States)

    2015-09-04

    at Aberdeen Proving Grounds (APG), to discuss a possible collaboration. The idea is to integrate the ultrasound directed self-assembly technique ...difference between the ultrasound technology studied in this project, and other directed self-assembly techniques is its scalability and...deliverable: A scientific tool to predict particle organization, pattern, and orientation, based on the operating and design parameters of the ultrasound

  10. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    Science.gov (United States)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
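
The core GA iteration described above (average the one-dimensional subspaces, flipping each direction's sign to agree with the current estimate) can be sketched in two dimensions. This is a simplified illustration of the idea, not the paper's implementation; weighting and convergence handling follow plausible but assumed choices:

```python
import math
import random

def grassmann_average(points, iters=20):
    """Estimate the average 1-D subspace of (approximately) zero-mean 2-D
    data: repeatedly average unit directions, with signs chosen so each
    direction agrees with the current estimate, weighting by vector norm."""
    dirs, weights = [], []
    for x, y in points:
        n = math.hypot(x, y)
        if n > 0.0:
            dirs.append((x / n, y / n))
            weights.append(n)
    qx, qy = dirs[0]  # initialize from the first observation's direction
    for _ in range(iters):
        sx = sy = 0.0
        for (ux, uy), w in zip(dirs, weights):
            s = 1.0 if ux * qx + uy * qy >= 0.0 else -1.0  # sign alignment
            sx += s * w * ux
            sy += s * w * uy
        n = math.hypot(sx, sy)
        qx, qy = sx / n, sy / n
    return qx, qy

random.seed(1)
# Data spread along the x-axis with small y-noise: the dominant
# subspace is the x-axis.
pts = [(random.uniform(-1, 1), random.gauss(0, 0.05)) for _ in range(200)]
qx, qy = grassmann_average(pts)
print(abs(qx))  # close to 1: recovered subspace is near the x-axis
```

Each iteration is a single weighted sum over the data, which is the source of the linear complexity and easy scalability claimed in the abstract.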

  11. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/O's. The low

  12. Coilable Crystalline Fiber (CCF) Lasers and their Scalability

    Science.gov (United States)

    2014-03-01

    highly power scalable, nearly diffraction-limited output laser. References: 1. Snitzer, E. Optical Maser Action of Nd3+ in a Barium Crown Glass ...lasers, but their composition (glass) poses significant disadvantages in pump absorption, gain, and thermal conductivity. All-crystalline fiber lasers

  13. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms were proposed to solve this problem, at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  14. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    This paper addresses the problem of scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  15. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…
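
The bootstrap alternative the abstract points to can be sketched generically: resample respondents with replacement and recompute the statistic to approximate its sampling distribution. The statistic below is a simple placeholder (mean proportion-correct), not Mokken's actual H formula, and all names are illustrative:

```python
import random

def bootstrap_distribution(data, statistic, n_boot=1000, seed=0):
    """Nonparametric bootstrap: resample rows (respondents) with
    replacement and recompute the statistic for each resample."""
    rng = random.Random(seed)
    n = len(data)
    return [statistic([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]

def mean_score(rows):
    """Placeholder statistic standing in for scalability coefficient H."""
    return sum(sum(r) / len(r) for r in rows) / len(rows)

# Toy 0/1 item-response matrix: 8 respondents x 4 items.
data = [[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1],
        [0, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 0, 0, 0]]
dist = sorted(bootstrap_distribution(data, mean_score))
ci = (dist[25], dist[975])  # approximate 95% percentile interval
print(len(dist), ci)
```

Replacing `mean_score` with an implementation of H yields a percentile confidence interval for H without the restrictive asymptotic assumptions mentioned above.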

  16. Coherence Properties of the LCLS

    Energy Technology Data Exchange (ETDEWEB)

    Ocko, Samuel

    2010-08-25

    The LINAC Coherent Light Source (LCLS), an X-ray free-electron laser (FEL) based on the self-amplified spontaneous emission principle, has recently come on-line. For many users it is desirable to have an idea of the level of transverse coherence of the X-ray beam produced. In this paper, we analyze the output of GENESIS simulations of electrons traveling through the FEL. We first test the validity of an approach that ignores the details of how the beam was produced and instead, by assuming a Gaussian-Schell model of transverse coherence, predicts the level of transverse coherence simply by looking at the beam radius at several longitudinal slices. We then develop a Markov chain Monte Carlo approach to calculating the degree of transverse coherence, which offers a ~100-fold speedup compared to the brute-force algorithm previously in use. We find the beam highly coherent. Using a similar Markov chain Monte Carlo approach, we estimate the reasonability of assuming the beam to have a Gaussian-Schell model of transverse coherence, with inconclusive results.
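
For reference, the Gaussian-Schell model assumed in the analysis factorizes the cross-spectral density into a Gaussian intensity profile and a Gaussian spectral degree of coherence (standard textbook form; the symbols are generic, not the paper's notation):

```latex
W(x_1, x_2) \;=\; \sqrt{S(x_1)\,S(x_2)}\;\mu(x_1 - x_2),
\qquad
S(x) = A\,e^{-x^2/(2\sigma_s^2)},
\qquad
\mu(\Delta x) = e^{-\Delta x^2/(2\xi^2)},
```

where $\sigma_s$ is the beam (intensity) width and $\xi$ the transverse coherence length; their ratio $\xi/\sigma_s$ controls the global degree of transverse coherence that the slice-by-slice beam-radius fit is meant to recover.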

  17. Oceanotron, Scalable Server for Marine Observations

    Science.gov (United States)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To consolidate the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, more generally, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OpeNDAP, ...), the server is designed to manage plugins: - StorageUnits: which enable reading of specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format). - FrontDesks: which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OpenDAP). In between, a third type of plugin may be inserted: - TransformationUnits: which enable ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to meters under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron frontdesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner-interoperability level makes it possible to capitalize ocean business expertise in software development without being indentured to
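
The plugin decomposition described above (StorageUnits reading repository formats, optional TransformationUnits, FrontDesks serving protocols, all exchanging a shared observation model) can be sketched as follows. This is an illustrative Python sketch only: oceanotron itself is implemented in Java, and every class and method name here is an assumption, not its real API:

```python
from abc import ABC, abstractmethod

class StorageUnit(ABC):
    """Reads one repository format into the shared observation model."""
    @abstractmethod
    def read(self, query: dict) -> list: ...

class TransformationUnit(ABC):
    """Optional domain transformation, e.g. pressure (dB) -> depth (m)."""
    @abstractmethod
    def apply(self, features: list) -> list: ...

class FrontDesk(ABC):
    """Answers one interoperability protocol (WMS, SOS, OPeNDAP, ...)."""
    @abstractmethod
    def respond(self, features: list) -> str: ...

class InMemoryStorage(StorageUnit):        # stand-in for a netCDF reader
    def __init__(self, rows):
        self.rows = rows
    def read(self, query):
        return [r for r in self.rows if r["var"] == query["var"]]

class PressureToDepth(TransformationUnit):  # crude 1 dB ~ 1 m conversion
    def apply(self, features):
        out = []
        for f in features:
            g = dict(f)
            g["depth_m"] = g.pop("pressure_db")
            out.append(g)
        return out

class TextFrontDesk(FrontDesk):             # stand-in for a WMS/SOS front end
    def respond(self, features):
        return "\n".join(f"{f['var']} @ {f['depth_m']} m" for f in features)

# Wiring: storage -> transformation -> front desk, via the shared model.
storage = InMemoryStorage([{"var": "TEMP", "pressure_db": 10.0},
                           {"var": "PSAL", "pressure_db": 5.0}])
desk = TextFrontDesk()
out = desk.respond(PressureToDepth().apply(storage.read({"var": "TEMP"})))
print(out)  # → TEMP @ 10.0 m
```

The design point is that partners implement only the abstract interfaces; because every plugin speaks the same feature model, any StorageUnit can be combined with any FrontDesk.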

  18. International workshop on phase retrieval and coherent scattering. Coherence 2005

    Energy Technology Data Exchange (ETDEWEB)

    Nugent, K.A.; Fienup, J.R.; Van Dyck, D.; Van Aert, S.; Weitkamp, T.; Diaz, A.; Pfeiffer, F.; Cloetens, P.; Stampanoni, M.; Bunk, O.; David, C.; Bronnikov, A.V.; Shen, Q.; Xiao, X.; Gureyev, T.E.; Nesterets, Ya.I.; Paganin, D.M.; Wilkins, S.W.; Mokso, R.; Cloetens, P.; Ludwig, W.; Hignette, O.; Maire, E.; Faulkner, H.M.L.; Rodenburg, J.M.; Wu, X.; Liu, H.; Grubel, G.; Ludwig, K.F.; Livet, F.; Bley, F.; Simon, J.P.; Caudron, R.; Le Bolloc'h, D.; Moussaid, A.; Gutt, C.; Sprung, M.; Madsen, A.; Tolan, M.; Sinha, S.K.; Scheffold, F.; Schurtenberger, P.; Robert, A.; Madsen, A.; Falus, P.; Borthwick, M.A.; Mochrie, S.G.J.; Livet, F.; Sutton, M.D.; Ehrburger-Dolle, F.; Bley, F.; Geissler, E.; Sikharulidze, I.; Jeu, W.H. de; Lurio, L.B.; Hu, X.; Jiao, X.; Jiang, Z.; Naryanan, S.; Sinha, S.K.; Lal, J.; Robinson, I.K.; Chapman, H.N.; Barty, A.; Beetz, T.; Cui, C.; Hajdu, J.; Hau-Riege, S.P.; He, H.; Stadler, L.M.; Sepiol, B.; Harder, R.; Robinson, I.K.; Zontone, F.; Vogl, G.; Howells, M.; London, R.; Marchesini, S.; Shapiro, D.; Spence, J.C.H.; Weierstall, U.; Eisebitt, S.; Shapiro, D.; Lima, E.; Elser, V.; Howells, M.R.; Huang, X.; Jacobsen, C.; Kirz, J.; Miao, H.; Neiman, A.; Sayre, D.; Thibault, P.; Vartanyants, I.A.; Robinson, I.K.; Onken, J.D.; Pfeifer, M.A.; Williams, G.J.; Pfeiffer, F.; Metzger, H.; Zhong, Z.; Bauer, G.; Nishino, Y.; Miao, J.; Kohmura, Y.; Yamamoto, M.; Takahashi, Y.; Koike, K.; Ebisuzaki, T.; Ishikawa, T.; Spence, J.C.H.; Doak, B

    2005-07-01

    The contributions of the participants have been organized into 3 topics: 1) phase retrieval methods, 2) X-ray photon correlation spectroscopy, and 3) coherent diffraction imaging. This document gathers the abstracts of the presentations and of the posters.

  19. Radiation Tolerant Interfaces: Influence of Local Stoichiometry at the Misfit Dislocation on Radiation Damage Resistance of Metal/Oxide Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Shutthanandan, Vaithiyalingam [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland WA 99352 USA; Choudhury, Samrat [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos NM 87545 USA; Manandhar, Sandeep [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland WA 99352 USA; Kaspar, Tiffany C. [Physical and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland WA 99352 USA; Wang, Chongmin [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland WA 99352 USA; Devaraj, Arun [Physical and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland WA 99352 USA; Wirth, Brian D. [Department of Nuclear Engineering, University of Tennessee, Knoxville TN 37996 USA; Thevuthasan, Suntharampilli [Physical and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland WA 99352 USA; Hoagland, Richard G. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos NM 87545 USA; Dholabhai, Pratik P. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos NM 87545 USA; Uberuaga, Blas P. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos NM 87545 USA; Kurtz, Richard J. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland WA 99352 USA

    2017-04-24

    To understand how variations in interface properties such as misfit-dislocation density and local chemistry affect radiation-induced defect absorption and recombination, we have explored a model system of CrxV1-x alloy epitaxial films deposited on MgO single crystals. By controlling film composition, the lattice mismatch with MgO was adjusted so that the misfit-dislocation density varies at the interface. These interfaces were exposed to irradiation, and in situ results show that the Cr film, with a semi-coherent interface, withstands irradiation, while the V film, which has a similar semi-coherent interface, showed the largest damage. Theoretical calculations indicate that, unlike at metal/metal interfaces, the misfit-dislocation density does not dominate radiation damage tolerance at metal/oxide interfaces. Rather, the stoichiometry, and the precise location of the misfit dislocation relative to the interface, drives defect behavior. Together, these results demonstrate the sensitivity of defect recombination to interfacial chemistry and provide new avenues for engineering radiation-tolerant nanomaterials.

  20. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite that has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to have a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data that were collected showed distinct differences in the gait dynamics. The data were used to perform the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that gait on the Ossur Total Knee was overall more asymmetric than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion that caused a large step time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements in adapting to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to better emulate the dynamics of the subject. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  1. n-X-Coherent Rings

    OpenAIRE

    Bennis, Driss

    2010-01-01

    This paper unifies several generalizations of coherent rings in one notion. Namely, we introduce $n$-$\mathscr{X}$-coherent rings, where $\mathscr{X}$ is a class of modules and $n$ is a positive integer, as those rings for which the subclass $\mathscr{X}_n$ of $n$-presented modules of $\mathscr{X}$ is not empty, and every module in $\mathscr{X}_n$ is $n+1$-presented. Then, for each particular class $\mathscr{X}$ of modules, we find correspondent relative coherent rings. Our main aim is to sho...

  2. Optimally cloned binary coherent states

    Science.gov (United States)

    Müller, C. R.; Leuchs, G.; Marquardt, Ch.; Andersen, U. L.

    2017-10-01

    Binary coherent state alphabets can be represented in a two-dimensional Hilbert space. We capitalize on this formal connection between the otherwise distinct domains of qubits and continuous variable states to map binary phase-shift keyed coherent states onto the Bloch sphere and to derive their quantum-optimal clones. We analyze the Wigner function and the cumulants of the clones, and we conclude that optimal cloning of binary coherent states requires a nonlinearity above second order. We propose several practical and near-optimal cloning schemes and compare their cloning fidelity to the optimal cloner.
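
As standard background on the two-dimensional embedding mentioned above (textbook relations, not taken from the paper): the two alphabet states $|\alpha\rangle$ and $|-\alpha\rangle$ are non-orthogonal, and their even/odd superpositions provide an orthonormal qubit basis for the Bloch-sphere mapping:

```latex
\langle \alpha \,|\, -\alpha \rangle = e^{-2|\alpha|^2},
\qquad
|\pm\rangle = \frac{|\alpha\rangle \pm |-\alpha\rangle}
                   {\sqrt{2\,\bigl(1 \pm e^{-2|\alpha|^2}\bigr)}} .
```

For large $|\alpha|$ the overlap vanishes and the alphabet approaches an orthogonal qubit pair; for small $|\alpha|$ the states are nearly indistinguishable, which is the regime where cloning performance is most constrained.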

  3. Optical coherent control in semiconductors

    DEFF Research Database (Denmark)

    Østergaard, John Erland; Vadim, Lyssenko; Hvam, Jørn Märcher

    2001-01-01

    The developments with coherent control (CC) techniques in optical spectroscopy have recently demonstrated population control and coherence manipulations when the induced optical phase is explored with phase-locked laser pulses. These and other developments have been guiding the new research field...... of quantum control including the recent applications to semiconductors and nanostructures. We study the influence of inhomogeneous broadening in semiconductors on CC results. Photoluminescence (PL) and the coherent emission in four-wave mixing (FWM) is recorded after resonant excitation with phase-locked...

  4. Coherent control of quantum dots

    DEFF Research Database (Denmark)

    Johansen, Jeppe; Lodahl, Peter; Hvam, Jørn Märcher

    In recent years much effort has been devoted to the use of semiconductor quantum dot systems as building blocks for solid-state-based quantum logic devices. One important parameter for such devices is the coherence time, which determines the number of possible quantum operations. From earlier...... measurements the coherence time of the self-assembled quantum dots (QDs) has been reported to be limited by the spontaneous emission rate at cryogenic temperatures [1]. In this project we propose to alter the coherence time of QDs by taking advantage of a recent technique on modifying spontaneous emission rates......

  5. PyClaw: Accessible, Extensible, Scalable Tools for Wave Propagation Problems

    KAUST Repository

    Ketcheson, David I.

    2012-08-15

    Development of scientific software involves tradeoffs between ease of use, generality, and performance. We describe the design of a general hyperbolic PDE solver that can be operated with the convenience of MATLAB yet achieves efficiency near that of hand-coded Fortran and scales to the largest supercomputers. This is achieved by using Python for most of the code while employing automatically wrapped Fortran kernels for computationally intensive routines, and using Python bindings to interface with a parallel computing library and other numerical packages. The software described here is PyClaw, a Python-based structured grid solver for general systems of hyperbolic PDEs [K. T. Mandli et al., PyClaw Software, Version 1.0, http://numerics.kaust.edu.sa/pyclaw/ (2011)]. PyClaw provides a powerful and intuitive interface to the algorithms of the existing Fortran codes Clawpack and SharpClaw, simplifying code development and use while providing massive parallelism and scalable solvers via the PETSc library. The package is further augmented by use of PyWENO for generation of efficient high-order weighted essentially nonoscillatory reconstruction code. The simplicity, capability, and performance of this approach are demonstrated through application to example problems in shallow water flow, compressible flow, and elasticity.

  6. Doppler Optical Coherence Tomography

    Science.gov (United States)

    Leitgeb, Rainer A.; Werkmeister, René M.; Blatter, Cedric; Schmetterer, Leopold

    2014-01-01

    Optical Coherence Tomography (OCT) has revolutionized ophthalmology. Since its introduction in the early 1990s it has continuously improved in terms of speed, resolution and sensitivity. The technique has also seen a variety of extensions aiming to assess functional aspects of the tissue in addition to morphology. One of these approaches is Doppler OCT (DOCT), which aims to visualize and quantify blood flow. Such extensions were already implemented in time domain systems, but have gained importance with the introduction of Fourier domain OCT. Nowadays phase-sensitive detection techniques are most widely used to extract blood velocity and blood flow from tissues. A common problem with the technique is that the Doppler angle is not known, and several approaches have been realized to obtain absolute velocity and flow data from the retina. Additional studies are required to elucidate which of these techniques is most promising. In recent years, however, several groups have shown that data can be obtained with high validity and reproducibility. In addition, several groups have published values for total retinal blood flow. Another promising application relates to non-invasive angiography. As compared to standard techniques such as fluorescein and indocyanine-green angiography, the technique offers two major advantages: no dye is required, and depth resolution is provided. As such, Doppler OCT has the potential to improve our abilities to diagnose and monitor ocular vascular diseases. PMID:24704352
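
The phase-sensitive detection mentioned above recovers flow velocity from the measured Doppler frequency shift; in standard notation (generic form, not specific to this review):

```latex
v \;=\; \frac{f_D \,\lambda_0}{2\,n\,\cos\theta},
```

where $f_D$ is the detected Doppler shift, $\lambda_0$ the center wavelength, $n$ the tissue refractive index, and $\theta$ the angle between beam and flow. The unknown angle $\theta$ is exactly the calibration problem the review discusses when converting measured shifts into absolute velocity and flow.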

  7. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of the multimedia scalable video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper is focused on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SVC...

  8. Quantum coherence of cosmological perturbations

    Science.gov (United States)

    Giovannini, Massimo

    2017-11-01

    In this paper, the degrees of quantum coherence of cosmological perturbations of different spins are computed in the large-scale limit and compared with the standard results holding for a single mode of the electromagnetic field in an optical cavity. The degree of second-order coherence of curvature inhomogeneities (and, more generally, of the scalar modes of the geometry) reproduces faithfully the optical limit. For the vector and tensor fluctuations, the numerical values of the normalized degrees of second-order coherence in the zero time-delay limit are always larger than unity (which is the Poisson benchmark value) but differ from the corresponding expressions obtainable in the framework of the single-mode approximation. General lessons are drawn on the quantum coherence of large-scale cosmological fluctuations.
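
The normalized degree of second-order coherence at zero time delay referenced above is, in standard quantum-optics notation:

```latex
g^{(2)}(0) \;=\; \frac{\langle \hat{a}^{\dagger}\hat{a}^{\dagger}\hat{a}\hat{a}\rangle}
                      {\langle \hat{a}^{\dagger}\hat{a}\rangle^{2}} ,
```

with $g^{(2)}(0)=1$ the Poisson (coherent-state) benchmark; values larger than unity, as found here for the vector and tensor fluctuations, indicate super-Poissonian (bunched) statistics.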

  9. Coherent Control of Bond Making

    CERN Document Server

    Levin, Liat; Rybak, Leonid; Kosloff, Ronnie; Koch, Christiane P; Amitay, Zohar

    2014-01-01

    We demonstrate for the first time coherent control of bond making, a milestone on the way to coherent control of photo-induced bimolecular chemical reactions. In strong-field multiphoton femtosecond photoassociation experiments, we find the yield of detected magnesium dimer molecules to be enhanced for positively chirped pulses and suppressed for negatively chirped pulses. Our ab initio model shows that control is achieved by purification via Franck-Condon filtering combined with chirp-dependent Raman transitions. Experimental closed-loop phase optimization using a learning algorithm yields an improved pulse that utilizes vibrational coherent dynamics in addition to chirp-dependent Raman transitions. Our results show that coherent control of binary photo-reactions is feasible even under thermal conditions.

  10. Coherent exciton-polariton devices

    Science.gov (United States)

    Fraser, Michael D.

    2017-09-01

    The Bose-Einstein condensate of exciton-polaritons has emerged as a unique, coherent system for the study of non-equilibrium, macroscopically coherent Bose gases, while the full confinement of this coherent state to a semiconductor chip has also generated considerable interest in developing novel applications employing the polariton condensate, possibly even at room temperature. Such devices include low-threshold lasers, precision inertial sensors, and circuits based on superfluidity with ultra-fast non-linear elements. While the demonstration and development of such devices are at an early stage, rapid progress is being made. In this review, an overview of the exciton-polariton condensate system and the established and emerging material systems and fabrication techniques are presented, followed by a critical, in-depth assessment of the ability of the coherent polariton system to deliver on its promise of devices offering either new functionality and/or room-temperature operation.

  11. Coherent Addressing of Individual Neutral Atoms in a 3D Optical Lattice.

    Science.gov (United States)

    Wang, Yang; Zhang, Xianli; Corcovilos, Theodore A; Kumar, Aishwarya; Weiss, David S

    2015-07-24

    We demonstrate arbitrary coherent addressing of individual neutral atoms in a 5×5×5 array formed by an optical lattice. Addressing is accomplished using rapidly reconfigurable crossed laser beams to selectively ac Stark shift target atoms, so that only target atoms are resonant with state-changing microwaves. The effect of these targeted single qubit gates on the quantum information stored in nontargeted atoms is smaller than 3×10^{-3} in state fidelity. This is an important step along the path of converting the scalability promise of neutral atoms into reality.

  12. XCAN — A coherent amplification network of femtosecond fiber chirped-pulse amplifiers

    Science.gov (United States)

    Daniault, L.; Bellanger, S.; Le Dortz, J.; Bourderionnet, J.; Lallier, É.; Larat, C.; Antier-Murgey, M.; Chanteloup, J.-C.; Brignon, A.; Simon-Boisson, C.; Mourou, G.

    2015-10-01

    The XCAN collaboration program between the Ecole Polytechnique and Thales aims at developing a laser system based on the coherent combination of several tens of laser beams produced through a network of amplifying optical fibers [1]. As a first step, this project aims to demonstrate the scalability of a combining architecture in the femtosecond regime, providing high peak power at high repetition rate and high efficiency. The initial system will comprise 61 individually phased beams, aiming to provide 10 mJ, 350 fs pulses at 50 kHz.

  13. Cavity-based architecture to preserve quantum coherence and entanglement.

    Science.gov (United States)

    Man, Zhong-Xiao; Xia, Yun-Jie; Lo Franco, Rosario

    2015-09-09

    Quantum technology relies on the utilization of resources, like quantum coherence and entanglement, which allow quantum information and computation processing. This achievement is however jeopardized by the detrimental effects of the environment surrounding any quantum system, so that finding strategies to protect quantum resources is essential. Non-Markovian and structured environments are useful tools to this end. Here we show how a simple environmental architecture made of two coupled lossy cavities enables a switch between Markovian and non-Markovian regimes for the dynamics of a qubit embedded in one of the cavities. Furthermore, qubit coherence can be indefinitely preserved if the cavity without the qubit is perfect. We then focus on entanglement control of two independent qubits locally subject to such an engineered environment and discuss its feasibility in the framework of circuit quantum electrodynamics. With up-to-date experimental parameters, we show that our architecture allows entanglement lifetimes orders of magnitude longer than the spontaneous lifetime without local cavity couplings. This cavity-based architecture is straightforwardly extendable to many qubits for scalability.

  14. Unsupervised Discovery of Coherent Structures in Spatiotemporal Systems

    Science.gov (United States)

    Rupe, A.; Kashinath, K.; Prabhat, M.; Crutchfield, J. P.

    2016-12-01

    Coherent structures are ubiquitous in spatiotemporal systems far from equilibrium. These structures provide concise descriptions of the system and its dynamics, and there is often interest in these structures themselves. We present a novel method for automated detection and labeling of coherent structures that can be applied universally to spatiotemporal systems with local dynamics. Adapted from its original development in strictly temporal systems, computational mechanics is a method of inferring a hidden-state model that extracts and quantifies structure in the data. Significantly, the structures discovered by computational mechanics are not always readily identifiable from the raw data. Computational mechanics has been successfully applied to identify known structures in cellular automata. Here, we demonstrate the method on two-dimensional DNS hydrodynamic models of vortex shedding. Ultimately, we hope to analyze large-scale climate data for automated detection of extreme weather systems and other (possibly hidden) climatological structures. Current capabilities of the Computational Mechanics in Python (CMPy) software package allow for data parallelization using Berkeley Lab's Cori system, but significant scalability challenges remain to reach more realistic climate simulations. High-resolution simulations produce terabyte-size intermediates that require fully-distributed execution.

  15. Fractals, Coherence and Brain Dynamics

    Science.gov (United States)

    Vitiello, Giuseppe

    2010-11-01

    I show that the self-similarity property of deterministic fractals provides a direct connection with the space of entire analytic functions. Fractals are thus described in terms of coherent states in the Fock-Bargmann representation. Conversely, my discussion also provides insights on the geometrical properties of coherent states: it allows one to recognize, in some specific sense, fractal properties of coherent states. In particular, the relation is exhibited between fractals and q-deformed coherent states. The connection with the squeezed coherent states is also displayed. In this connection, the non-commutative geometry arising from the fractal relation with squeezed coherent states is discussed and the fractal spectral properties are identified. I also briefly discuss the description of neuro-phenomenological data in terms of squeezed coherent states provided by the dissipative model of the brain and consider the fact that laboratory observations have shown evidence that self-similarity characterizes the brain background activity. This suggests that a connection can be established between brain dynamics and the fractal self-similarity properties on the basis of the relation discussed in this report between fractals and squeezed coherent states. Finally, I do not consider in this paper the so-called random fractals, namely those fractals obtained by randomization processes introduced in their iterative generation. Since self-similarity is still a characterizing property in many such random fractals, my conjecture is that in such cases too there must exist a connection with the coherent state algebraic structure. In condensed matter physics, in many cases the generation by the microscopic dynamics of some kind of coherent states is involved in the process of the emergence of mesoscopic/macroscopic patterns. The discussion presented in this paper suggests that also fractal generation may provide an example of emergence of global features, namely long range

  16. Coherent states with elliptical polarization

    OpenAIRE

    Colavita, E.; Hacyan, S.

    2004-01-01

    Coherent states of the two dimensional harmonic oscillator are constructed as superpositions of energy and angular momentum eigenstates. It is shown that these states are Gaussian wave-packets moving along a classical trajectory, with a well-defined elliptical polarization. They are coherent correlated states with respect to the usual Cartesian position and momentum operators. A set of creation and annihilation operators is defined in polar coordinates, and it is shown that these same states ...

  17. Interface mobility from interface random walk

    Science.gov (United States)

    Trautt, Zachary; Upmanyu, Moneesh; Karma, Alain

    2007-03-01

    Computational studies aimed at extracting interface mobilities require driving forces orders of magnitude higher than those occurring experimentally. We present a computational methodology that extracts the absolute interface mobility in the zero driving force limit by monitoring the one-dimensional random walk of the mean interface position along the interface normal. The method exploits a fluctuation-dissipation relation similar to the Stokes-Einstein relation, which relates the diffusion coefficient of this Brownian-like random walk to the interface mobility. Atomic-scale simulations of grain boundaries in model crystalline systems validate the theoretical predictions, and also highlight the profound effect of impurities. The generality of this technique combined with its inherent spatial-temporal efficiency should allow computational studies to effectively complement experiments in understanding interface kinetics in diverse material systems.
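
    The fluctuation-dissipation idea in this abstract can be illustrated numerically. Below is a minimal Python sketch, assuming the Einstein-form relation M = D·A/(k_B·T) often quoted for this method (with A the interface area); it estimates the diffusion coefficient D from the mean-square displacement of simulated one-dimensional random walks of the interface position. All parameter values are illustrative, not from the paper.

```python
import random

def estimate_diffusion(walks, dt):
    """Estimate D from <x^2(t)> = 2*D*t using the endpoint MSD
    averaged over many independent 1-D random walks."""
    n_steps = len(walks[0]) - 1
    msd_final = sum(w[-1] ** 2 for w in walks) / len(walks)
    return msd_final / (2.0 * n_steps * dt)

random.seed(42)
dt, step = 1.0, 1.0
walks = []
for _ in range(2000):                      # many independent "interfaces"
    x, traj = 0.0, [0.0]
    for _ in range(500):                   # unbiased Brownian-like walk
        x += random.choice((-step, step))
        traj.append(x)
    walks.append(traj)

D = estimate_diffusion(walks, dt)          # true value: step^2/(2*dt) = 0.5
# Hypothetical numbers for the Einstein-form relation M = D*A/(kB*T):
kB, T, A = 1.380649e-23, 1000.0, 1e-16     # J/K, K, m^2 (illustrative)
M = D * A / (kB * T)
print(D, M)
```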

  18. Interface solutions for interface side effects?

    Directory of Open Access Journals (Sweden)

    Stoffregen Thomas A.

    2011-12-01

    Human-computer interfaces often give rise to a variety of side effects, including eyestrain, headache, fatigue, and motion sickness (aka cybersickness or simulator sickness). We might hope that improvements in interface design would tend to reduce these side effects. Unfortunately, history reveals just the opposite: the incidence and severity of motion sickness (for example) is positively related to the progressive sophistication of display technology and systems. In this presentation, I enquire about the future of interface technologies in relation to side effects. I review the types of side effects that occur and what is known about the causes of interface side effects. I suggest new ways of understanding relations between interface technologies and side effects, and new ways to approach the problem of interface side effects.

  19. Acquisition Order of Coherence Relations in Turkish

    Science.gov (United States)

    Demirgunes, Sercan

    2015-01-01

    Coherence as one of the criteria for textuality is the main element of a well-produced text. In the literature, there are many studies on the classification of coherence relations. Although there are different classifications on coherence relations, similar findings are reported regarding the acquisition order of coherence relations in different…

  20. Library of graphic symbols for power equipment in the scalable vector graphics format

    Directory of Open Access Journals (Sweden)

    A.G. Yuferov

    2016-03-01

    This paper describes the results of developing and using a library of graphic symbols for components of power equipment under the state standards GOST 21.403-80 “Power Equipment” and GOST 2.789-74 “Heat Exchangers”. The library is implemented in the SVG (Scalable Vector Graphics) format. The obtained solutions are in line with well-known work on creating libraries of parametric symbol fragments for elements of diagrams and drawings in design systems for various industrial applications. The SVG format is intended for use in web applications, so the creation of SVG codes for power equipment under GOST 21.403-80 and GOST 2.789-74 is an essential stage in the development of web programs for the thermodynamic optimization of power plants. One of the major arguments in favor of the SVG format is that it can be integrated with code: in process control systems built on a web platform, scalable vector graphics provides a dynamic user interface, functional mimic panels, and changeability of their components depending on the availability and status of equipment. Another important reason for adopting the SVG format is that it is becoming the basis (recommended for the time being, and mandatory in future) for electronic document management in the sphere of design documentation, as part of international efforts to standardize and harmonize data-exchange formats. The effectiveness of the SVG format has been demonstrated in a specific power-equipment arrangement context. The library is intended for solving production problems involving analysis of power-plant thermal circuits and for training power-engineering students. The library and related materials are publicly available through the Internet. A number of proposals on the future evolution of the library have been formulated.
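
    The parametric-fragment idea behind such a symbol library can be sketched in a few lines of Python that emit SVG text. The geometry below (a rectangle with a diagonal) is a generic stand-in, not the actual GOST 2.789-74 symbol shape, and the function and id names are hypothetical.

```python
def heat_exchanger_symbol(x, y, w, h, symbol_id="hx1"):
    """Return a parametric SVG fragment for a heat-exchanger-like symbol
    (rectangle with a diagonal; a stand-in, not real GOST geometry)."""
    return (
        f'<g id="{symbol_id}">'
        f'<rect x="{x}" y="{y}" width="{w}" height="{h}" '
        f'fill="none" stroke="black"/>'
        f'<line x1="{x}" y1="{y}" x2="{x + w}" y2="{y + h}" stroke="black"/>'
        f'</g>'
    )

# Assemble a tiny mimic-panel-style document from parametric fragments.
svg = ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">'
       + heat_exchanger_symbol(10, 10, 60, 40)
       + heat_exchanger_symbol(110, 10, 60, 40, symbol_id="hx2")
       + '</svg>')
print(svg)
```

    Because the fragments are plain text, a web front end can later rewrite attributes (stroke color, visibility) to reflect equipment status, which is the dynamic-mimic-panel use case the abstract describes.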

  1. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-08-01

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.
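
    The compile-time decision described above can be illustrated with a toy pass. The escape rule below (a memory line is treated as shared only if it is handed to a hypothetical spawn() primitive) and the "store.nc" mnemonic are illustrative assumptions, not the actual analysis of the described method.

```python
# Toy compile-time pass: mark stores to provably thread-local memory
# lines as "store.nc" (store-without-coherence-action). The escape
# analysis here is deliberately simplistic: a variable is considered
# shared only if it is passed to a (hypothetical) spawn() call.

def classify_stores(instructions):
    shared = set()
    for op, *args in instructions:
        if op == "spawn":          # values handed to another thread escape
            shared.update(args)
    marked = []
    for op, *args in instructions:
        if op == "store":
            var = args[0]
            kind = "store" if var in shared else "store.nc"
            marked.append((kind, var))
    return marked

program = [
    ("store", "local_buf"),        # never escapes: no coherence action needed
    ("spawn", "shared_buf"),       # shared_buf escapes to another thread
    ("store", "shared_buf"),       # must keep the normal coherence action
]
print(classify_stores(program))
```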

  2. Store operations to maintain cache coherence

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Nair, Ravi; Ohmacht, Martin

    2017-09-12

    In one embodiment, a computer-implemented method includes encountering a store operation during a compile-time of a program, where the store operation is applicable to a memory line. It is determined, by a computer processor, that no cache coherence action is necessary for the store operation. A store-without-coherence-action instruction is generated for the store operation, responsive to determining that no cache coherence action is necessary. The store-without-coherence-action instruction specifies that the store operation is to be performed without a cache coherence action, and cache coherence is maintained upon execution of the store-without-coherence-action instruction.

  3. Optical coherence tomography in the diagnosis of peripheral retinal degenerations

    Directory of Open Access Journals (Sweden)

    O. G. Pozdeyeva

    2013-01-01

    Purpose: To study the capabilities of optical coherence tomography (RTVue-100, OPTOVUE, USA) in the evaluation of peripheral retinal degenerations, vitreoretinal adhesions and the adjacent vitreous body, as well as the measurement of morphometric data. Methods: The study included 189 patients (239 eyes) with peripheral retinal degeneration. 77 men and 112 women aged 18 to 84 underwent an ophthalmologic examination from November 2012 to October 2013. The peripheral retina was visualized with the help of optical coherence tomography (RTVue-100, USA). Fundus photography was carried out using a Nikon NF505-AF (Japan) fundus camera. All patients were examined with a Goldmann lens. Results: Optical coherence tomography was used to evaluate different kinds of peripheral retinal degenerations, such as lattice and snail-track degeneration, isolated retinal tears, cystoid retinal degeneration, pathological hyperpigmentation, retinoschisis and cobblestone degeneration. The following morphometric data were studied: dimensions of the lesion (average length), retinal thickness along the edge of the lesion, retinal thickness at the base of the lesion, and the vitreoretinal interface. Conclusion: Optical coherence tomography is a promising in vivo visualization method, useful in the evaluation of peripheral retinal degenerations, vitreoretinal adhesions and tractions. It also provides a comprehensive protocolling and monitoring system, and will enable ophthalmologists to better define laser and surgical treatment indications and evaluate therapy effectiveness.

  4. Coherent states and applications in mathematical physics

    CERN Document Server

    Combescure, Monique

    2012-01-01

    This book presents the various types of coherent states introduced and studied in the physics and mathematics literature and describes their properties together with application to quantum physics problems. It is intended to serve as a compendium on coherent states and their applications for physicists and mathematicians, stretching from the basic mathematical structures of generalized coherent states in the sense of Perelomov via the semiclassical evolution of coherent states to various specific examples of coherent states (hydrogen atom, quantum oscillator, ...).

  5. Coherent communication with continuous quantum variables

    Science.gov (United States)

    Wilde, Mark M.; Krovi, Hari; Brun, Todd A.

    2007-06-01

    The coherent bit (cobit) channel is a resource intermediate between classical and quantum communication. It produces coherent versions of teleportation and superdense coding. We extend the cobit channel to continuous variables by providing a definition of the coherent nat (conat) channel. We construct several coherent protocols that use both a position-quadrature and a momentum-quadrature conat channel with finite squeezing. Finally, we show that the quality of squeezing diminishes through successive compositions of coherent teleportation and superdense coding.

  6. Experimental generation of optical coherence lattices

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yahong; Cai, Yangjian, E-mail: serpo@dal.ca, E-mail: yangjiancai@suda.edu.cn [College of Physics, Optoelectronics and Energy and Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou 215006 (China); Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province and Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, Suzhou 215006 (China); Ponomarenko, Sergey A., E-mail: serpo@dal.ca, E-mail: yangjiancai@suda.edu.cn [Department of Electrical and Computer Engineering, Dalhousie University, Halifax, Nova Scotia B3J 2X4 (Canada)

    2016-08-08

    We report experimental generation and measurement of recently introduced optical coherence lattices. The presented optical coherence lattice realization technique hinges on a superposition of mutually uncorrelated partially coherent Schell-model beams with tailored coherence properties. We show theoretically that information can be encoded into and, in principle, recovered from the lattice degree of coherence. Our results can find applications to image transmission and optical encryption.

  7. Scalable graphene coatings for enhanced condensation heat transfer.

    Science.gov (United States)

    Preston, Daniel J; Mafra, Daniela L; Miljkovic, Nenad; Kong, Jing; Wang, Evelyn N

    2015-05-13

    Water vapor condensation is commonly observed in nature and routinely used as an effective means of transferring heat with dropwise condensation on nonwetting surfaces exhibiting heat transfer improvement compared to filmwise condensation on wetting surfaces. However, state-of-the-art techniques to promote dropwise condensation rely on functional hydrophobic coatings that either have challenges with chemical stability or are so thick that any potential heat transfer improvement is negated due to the added thermal resistance of the coating. In this work, we show the effectiveness of ultrathin scalable chemical vapor deposited (CVD) graphene coatings to promote dropwise condensation while offering robust chemical stability and maintaining low thermal resistance. Heat transfer enhancements of 4× were demonstrated compared to filmwise condensation, and the robustness of these CVD coatings was superior to typical hydrophobic monolayer coatings. Our results indicate that graphene is a promising surface coating to promote dropwise condensation of water in industrial conditions with the potential for scalable application via CVD.

  8. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies the four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation, • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets, • Biomedical is a vast field with select subtopics addressed from scanning methodologies to structural applications to biological applications, • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  9. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).
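
    The unequal look-ahead idea can be sketched as a priority queue that grants base-layer packets a larger look-ahead margin than enhancement-layer packets, so they are dispatched earlier relative to their deadlines. This is an illustrative reading of the abstract, not the paper's exact algorithm, and all parameter values are hypothetical.

```python
import heapq

def schedule(packets, lookahead_base=3, lookahead_enh=1):
    """Order packets by (deadline - per-layer look-ahead margin):
    base-layer packets get a larger margin and so are sent earlier.
    Illustrative sketch, not the paper's actual algorithm."""
    pq = []
    for seq, (layer, deadline) in enumerate(packets):
        margin = lookahead_base if layer == "base" else lookahead_enh
        heapq.heappush(pq, (deadline - margin, seq, layer))
    order = []
    while pq:
        _, seq, layer = heapq.heappop(pq)
        order.append((seq, layer))
    return order

# Two frames, each with a base and an enhancement packet (layer, deadline):
pkts = [("enh", 10), ("base", 10), ("enh", 12), ("base", 12)]
print(schedule(pkts))
```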

  10. Scalable metagenomic taxonomy classification using a reference genome database.

    Science.gov (United States)

    Ames, Sasha K; Hysom, David A; Gardner, Shea N; Lloyd, G Scott; Gokhale, Maya B; Allen, Jonathan E

    2013-09-15

    Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take contents of the sample. The software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat. Contact: allen99@llnl.gov. Supplementary data are available at Bioinformatics online.
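
    The off-line taxonomy/genome index idea can be illustrated with a toy k-mer index: build a map from k-mers to taxa once, then classify reads by majority vote over their k-mers. Real tools such as LMAT use far more compact and sophisticated data structures; the genomes, names, and k value below are invented for illustration.

```python
from collections import Counter

def build_index(genomes, k=4):
    """Off-line step: map each k-mer to the set of taxa containing it
    (toy version of a taxonomy/genome index)."""
    index = {}
    for taxon, seq in genomes.items():
        for i in range(len(seq) - k + 1):
            index.setdefault(seq[i:i + k], set()).add(taxon)
    return index

def classify(read, index, k=4):
    """On-line step: majority vote over the read's k-mers."""
    votes = Counter()
    for i in range(len(read) - k + 1):
        for taxon in index.get(read[i:i + k], ()):
            votes[taxon] += 1
    return votes.most_common(1)[0][0] if votes else None

genomes = {"E.coli": "ACGTACGTGG", "Phage": "TTTTGGGGCC"}  # hypothetical
idx = build_index(genomes)
print(classify("ACGTACG", idx))
```

    Shifting the index construction off-line is what makes the on-line classification step scale: each read costs only a handful of hash lookups.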

  11. Potential of Scalable Vector Graphics (SVG) for Ocean Science Research

    Science.gov (United States)

    Sears, J. R.

    2002-12-01

    Scalable Vector Graphics (SVG), a graphic format encoded in Extensible Markup Language (XML), is a recent W3C standard. SVG is text-based and platform-neutral, allowing interoperability and a rich array of features that offer significant promise for the presentation and publication of ocean and earth science research. This presentation (a) provides a brief introduction to SVG with real-world examples; (b) reviews browsers, editors, and other SVG tools; and (c) talks about some of the more powerful capabilities of SVG that might be important for ocean and earth science data presentation, such as searchability, animation and scripting, interactivity, accessibility, dynamic SVG, layers, scalability, SVG Text, SVG Audio, server-side SVG, and embedding metadata and data. A list of useful SVG resources is also given.

  12. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer

    2013-03-01

    The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
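
    The triple/pattern model underlying RDF and SPARQL can be illustrated without an RDF library. The sketch below is a pure-Python stand-in for SPARQL basic-graph-pattern matching over sensor triples; the predicates and sensor data are invented for illustration, and a real deployment would use an RDF store and SPARQL engine.

```python
# Minimal triple-pattern matching in the spirit of SPARQL basic graph
# patterns. Strings starting with "?" act as variables.

TRIPLES = [
    ("sensor1", "rdf:type", "ssn:Sensor"),
    ("sensor1", "obs:temperature", "21.5"),
    ("sensor2", "rdf:type", "ssn:Sensor"),
    ("sensor2", "obs:temperature", "3.2"),
]

def match(pattern, triples):
    """Yield variable bindings for one (s, p, o) pattern."""
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val
            elif pat != val:
                break
        else:
            yield binding

# Analogue of: SELECT ?s WHERE { ?s obs:temperature ?t . FILTER(?t > 10) }
hot = [b["?s"] for b in match(("?s", "obs:temperature", "?t"), TRIPLES)
       if float(b["?t"]) > 10]
print(hot)
```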

  13. Scalable, flexible and high resolution patterning of CVD graphene.

    Science.gov (United States)

    Hofmann, Mario; Hsieh, Ya-Ping; Hsu, Allen L; Kong, Jing

    2014-01-07

    The unique properties of graphene make it a promising material for interconnects in flexible and transparent electronics. To increase the commercial impact of graphene in those applications, a scalable and economical method for producing graphene patterns is required. The direct synthesis of graphene from an area-selectively passivated catalyst substrate can generate patterned graphene of high quality. We here present a solution-based method for producing patterned passivation layers. Various deposition methods, such as ink-jet deposition and microcontact printing, were explored that can satisfy application demands for low cost, high resolution and scalable production of patterned graphene. The demonstrated high quality and nanometer precision of grown graphene establish the potential of this synthesis approach for future commercial applications of graphene. Finally, the ability to transfer high resolution graphene patterns onto complex three-dimensional surfaces affords the vision of graphene-based interconnects in novel electronics.

  14. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scaling. It was found that the speed-up results for the small systems were better than for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  15. Interface Simulation Distances

    Directory of Open Access Journals (Sweden)

    Pavol Černý

    2012-10-01

    The classical (boolean) notion of refinement for behavioral interfaces of system components is the alternating refinement preorder. In this paper, we define a distance for interfaces, called interface simulation distance. It makes the alternating refinement preorder quantitative by, intuitively, tolerating errors (while counting them) in the alternating simulation game. We show that the interface simulation distance satisfies the triangle inequality, that the distance between two interfaces does not increase under parallel composition with a third interface, and that the distance between two interfaces can be bounded from above and below by distances between abstractions of the two interfaces. We illustrate the framework, and the properties of the distances under composition of interfaces, with two case studies.

  16. Refinement by interface instantiation

    DEFF Research Database (Denmark)

    Hallerstede, Stefan; Hoang, Thai Son

    2012-01-01

    Decomposition is a technique to separate the design of a complex system into smaller sub-models, which improves scalability and team development. In the shared-variable decomposition approach for Event-B sub-models share external variables and communicate through external events which cannot...

  17. Coherent optical DFT-spread OFDM transmission using orthogonal band multiplexing.

    Science.gov (United States)

    Yang, Qi; He, Zhixue; Yang, Zhu; Yu, Shaohua; Yi, Xingwen; Shieh, William

    2012-01-30

    Coherent optical OFDM (CO-OFDM) combined with orthogonal band multiplexing provides a scalable and flexible solution for achieving ultra-high data rates. Among many CO-OFDM implementations, digital Fourier transform spread (DFT-S) CO-OFDM is proposed to mitigate fiber nonlinearity in long-haul transmission. In this paper, we first illustrate the principle of DFT-S OFDM. We then experimentally evaluate the performance of coherent optical DFT-S OFDM in a band-multiplexed transmission system. Compared with conventional clipping methods, DFT-S OFDM can reduce the OFDM peak-to-average power ratio (PAPR) value without suffering from the interference of the neighboring bands. With the benefit of much reduced PAPR, we successfully demonstrate 1.45 Tb/s DFT-S OFDM over 480 km SSMF transmission.
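
    The PAPR benefit of DFT spreading can be checked numerically. In the fully spread case (spreading size equal to the subcarrier count), the spreading DFT and the transmitter IDFT cancel, so the transmit signal inherits the constant envelope of the QPSK constellation; plain OFDM, by contrast, sums many subcarriers and develops large peaks. This toy sketch (an O(n²) DFT, 64 subcarriers, illustrative parameters) is a numeric illustration of the principle, not the paper's experimental system.

```python
import cmath
import random

def dft(x, inverse=False):
    """Naive O(n^2) DFT / IDFT (IDFT normalized by 1/n)."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def papr(signal):
    """Peak-to-average power ratio (linear, not dB)."""
    powers = [abs(v) ** 2 for v in signal]
    return max(powers) / (sum(powers) / len(powers))

random.seed(7)
qpsk = [complex(random.choice((-1, 1)), random.choice((-1, 1))) / 2 ** 0.5
        for _ in range(64)]

ofdm_time = dft(qpsk, inverse=True)        # plain OFDM: IDFT of data symbols
dfts_time = dft(dft(qpsk), inverse=True)   # fully DFT-spread: DFT then IDFT

print(papr(ofdm_time), papr(dfts_time))
```

    With full spreading the two transforms cancel and the PAPR collapses to 1 (constant-modulus QPSK), while the plain OFDM signal shows a markedly higher PAPR.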

  18. Integrated generation of complex optical quantum states and their coherent control

    Science.gov (United States)

    Roztocki, Piotr; Kues, Michael; Reimer, Christian; Romero Cortés, Luis; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T.; Little, Brent E.; Moss, David J.; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2018-01-01

    Complex optical quantum states based on entangled photons are essential for investigations of fundamental physics and are at the heart of applications in quantum information science. Recently, integrated photonics has become a leading platform for the compact, cost-efficient, and stable generation and processing of optical quantum states. However, on-chip sources are currently limited to basic two-dimensional (qubit) two-photon states, whereas scaling the state complexity requires access to states composed of several (system with at least one hundred dimensions. Moreover, using off-the-shelf telecommunications components, we introduce a platform for the coherent manipulation and control of frequency-entangled quDit states. Our results suggest that microcavity-based entangled photon state generation and the coherent control of states using accessible telecommunications infrastructure introduce a powerful and scalable platform for quantum information science.

  19. Scalable Deployment of Advanced Building Energy Management Systems

    Science.gov (United States)

    2013-06-01

    …rooms, classrooms, a quarterdeck with a two-story atrium and office spaces, and a large cafeteria/galley. Buildings 7113 and 7114 are functionally similar (including barracks, classrooms, cafeteria, etc.) and share a common central chilled-water plant. Building 7230: The drill hall (Building …) … scalability of the proposed approach, and expanded the capabilities developed for a single building to a building campus at Naval Station Great Lakes.

  20. Scalable, Self Aligned Printing of Flexible Graphene Micro Supercapacitors (Postprint)

    Science.gov (United States)

    2017-05-11

    …(reduced graphene oxide: 0.4 mF cm−2) [11,39–41] prepared by conventional microfabrication techniques, the printed MSCs offer distinct advantages in … (AFRL-RX-WP-JA-2017-0318; authors include Woo Jin Hyun, Chang-Hyun Kim, …)

  1. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-16

    Technology: Permanent Magnet Brushless DC machine • Model: Self-generating torque-speed-efficiency map • Future improvements: Induction machine ...system to the standard driveline – Example: BAS System – 3 kW system ISG Block, Rev. 2.0 Revision 2.0 • Four quadrant • PM Brushless Machine • Speed...and systems engineering. • Scope: Scalable, generic MATLAB/Simulink models in three areas: – Electromechanical machines (Integrated Starter

  2. Scalable privacy-preserving big data aggregation mechanism

    OpenAIRE

    Dapeng Wu; Boran Yang; Ruyan Wang

    2016-01-01

    As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according...

  3. Fast & scalable pattern transfer via block copolymer nanolithography

    DEFF Research Database (Denmark)

    Li, Tao; Wang, Zhongli; Schulte, Lars

    2015-01-01

    A fully scalable and efficient pattern transfer process based on block copolymer (BCP) self-assembling directly on various substrates is demonstrated. PS-rich and PDMS-rich poly(styrene-b-dimethylsiloxane) (PS-b-PDMS) copolymers are used to give monolayer sphere morphology after spin-casting of s...... on long range lateral order, including fabrication of substrates for catalysis, solar cells, sensors, ultrafiltration membranes and templating of semiconductors or metals....

  4. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    Directory of Open Access Journals (Sweden)

    Jacob R. Hauser

    2016-09-01

    Full Text Available 2-Cyanobenzothiazoles (CBTs) are useful building blocks for: (1) luciferin derivatives for bioluminescent imaging; and (2) handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT), which has an amine handle for straight-forward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO), and discuss its advantages for scale-up over previously reported routes.

  5. Scalable Cluster-based Routing in Large Wireless Sensor Networks

    OpenAIRE

    Jiandong Li; Xuelian Cai; Jin Yang; Lina Zhu

    2012-01-01

    Large control overhead is the leading factor limiting the scalability of wireless sensor networks (WSNs). Clustering network nodes is an efficient solution, and Passive Clustering (PC) is one of the most efficient clustering methods. In this letter, we propose an improved PC-based route building scheme, named Route Reply (RREP) Broadcast with Passive Clustering (in short RBPC). Through broadcasting RREP packets on an expanding ring to build route, sensor nodes cache their route to the sink no...

  6. Semantic Models for Scalable Search in the Internet of Things

    OpenAIRE

    Dennis Pfisterer; Kay Römer; Richard Mietz; Sven Groppe

    2013-01-01

    The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper...

  7. Wavespace-Based Coherent Deconvolution

    Science.gov (United States)

    Bahr, Christopher J.; Cattafesta, Louis N., III

    2012-01-01

    Array deconvolution is commonly used in aeroacoustic analysis to remove the influence of a microphone array's point spread function from a conventional beamforming map. Unfortunately, the majority of deconvolution algorithms assume that the acoustic sources in a measurement are incoherent, which can be problematic for some aeroacoustic phenomena with coherent, spatially-distributed characteristics. While several algorithms have been proposed to handle coherent sources, some are computationally intractable for many problems while others require restrictive assumptions about the source field. Newer generalized inverse techniques hold promise, but are still under investigation for general use. An alternate coherent deconvolution method is proposed based on a wavespace transformation of the array data. Wavespace analysis offers advantages over curved-wave array processing, such as providing an explicit shift-invariance in the convolution of the array sampling function with the acoustic wave field. However, usage of the wavespace transformation assumes the acoustic wave field is accurately approximated as a superposition of plane wave fields, regardless of true wavefront curvature. The wavespace technique leverages Fourier transforms to quickly evaluate a shift-invariant convolution. The method is derived for and applied to ideal incoherent and coherent plane wave fields to demonstrate its ability to determine magnitude and relative phase of multiple coherent sources. Multi-scale processing is explored as a means of accelerating solution convergence. A case with a spherical wave front is evaluated. Finally, a trailing edge noise experiment case is considered. Results show the method successfully deconvolves incoherent, partially-coherent, and coherent plane wave fields to a degree necessary for quantitative evaluation. Curved wave front cases warrant further investigation. A potential extension to nearfield beamforming is proposed.
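The computational point the abstract leverages, namely that a shift-invariant convolution becomes a pointwise product in the transform domain, can be illustrated with a minimal sketch. This is plain Python with a naive DFT for self-containment and implies nothing about the authors' actual implementation:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (an FFT library would be O(n log n))."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

def idft(X):
    """Inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * m * k / n) for k in range(n)) / n
            for m in range(n)]

def circ_conv_direct(a, b):
    """Circular convolution by direct summation: O(n^2) per output vector."""
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

def circ_conv_spectral(a, b):
    """Same convolution evaluated as a pointwise product of spectra."""
    prod = [X * Y for X, Y in zip(dft(a), dft(b))]
    return [c.real for c in idft(prod)]
```

Both routines return the same result; in the wavespace method this equivalence is what lets Fourier transforms evaluate the convolution of the array sampling function with the plane-wave field quickly.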

  8. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.
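One way to picture the per-sensor decision in coverage-maintenance schemes of this kind (a generic sponsored-coverage test, not SCOM's actual rule, which is defined in the paper) is: a sensor may sleep if every point of its sensing disk is also covered by some active neighbor. A grid-sampled sketch:

```python
import math

def is_redundant(sensor, neighbors, radius, step=0.25):
    """Return True if every sampled point of `sensor`'s sensing disk is
    also within `radius` of at least one neighbor (so the sensor may sleep)."""
    x0, y0 = sensor
    x = x0 - radius
    while x <= x0 + radius:
        y = y0 - radius
        while y <= y0 + radius:
            point = (x, y)
            if math.dist(point, sensor) <= radius:  # point lies in the disk
                if not any(math.dist(point, n) <= radius for n in neighbors):
                    return False  # found an uncovered point: sensor must stay on
            y += step
        x += step
    return True
```

The grid step trades accuracy for computation; density-scalable schemes such as SCOM are precisely about avoiding per-neighbor costs that grow with deployment density.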

  9. Design and Implementation of Ceph: A Scalable Distributed File System

    Energy Technology Data Exchange (ETDEWEB)

    Weil, S A; Brandt, S A; Miller, E L; Long, D E; Maltzahn, C

    2006-04-19

    File system designers continue to look to new architectures to improve scalability. Object-based storage diverges from server-based (e.g. NFS) and SAN-based storage systems by coupling processors and memory with disk drives, delegating low-level allocation to object storage devices (OSDs) and decoupling I/O (read/write) from metadata (file open/close) operations. Even recent object-based systems inherit decades-old architectural choices going back to early UNIX file systems, however, limiting their ability to effectively scale to hundreds of petabytes. We present Ceph, a distributed file system that provides excellent performance and reliability with unprecedented scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable OSDs. We leverage OSD intelligence to distribute data replication, failure detection and recovery with semi-autonomous OSDs running a specialized local object storage file system (EBOFS). Finally, Ceph is built around a dynamic distributed metadata management cluster that provides extremely efficient metadata management that seamlessly adapts to a wide range of general purpose and scientific computing file system workloads. We present performance measurements under a variety of workloads that show superior I/O performance and scalable metadata management (more than a quarter million metadata ops/sec).
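The flavor of replacing allocation tables with a deterministic placement function can be conveyed with highest-random-weight (rendezvous) hashing, used here as a simplified stand-in for CRUSH; the real algorithm additionally models failure domains, device weights, and cluster maps:

```python
import hashlib

def place(obj_id, osds, replicas=2):
    """Deterministically map an object to `replicas` OSDs with no lookup table:
    rank every OSD by hash(object, osd) and keep the top-ranked ones.
    Any client computes the same placement independently."""
    def score(osd):
        return hashlib.sha256(f"{obj_id}:{osd}".encode()).hexdigest()
    return sorted(osds, key=score, reverse=True)[:replicas]
```

A useful property, shared with CRUSH, is minimal data movement: removing an OSD that an object was not placed on leaves that object's placement unchanged.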

  10. Event metadata records as a testbed for scalable data mining

    Science.gov (United States)

    van Gemmeren, P.; Malon, D.

    2010-04-01

    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
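Because the TAG schema is fixed and flat, each record can be serialized as a fixed-width binary row, which is essentially what makes export to column-oriented formats such as HDF5 straightforward. A stdlib-only sketch with a hypothetical four-field schema (the real TAG schema is far richer):

```python
import struct

# Hypothetical flat TAG schema for illustration:
# (run_number, event_number, n_tracks, missing_et) as three uint32 + one float32
TAG_FORMAT = "<IIIf"

def pack_tags(records):
    """Serialize an iterable of TAG tuples into a flat fixed-width byte blob."""
    return b"".join(struct.pack(TAG_FORMAT, *r) for r in records)

def unpack_tags(blob):
    """Recover the list of TAG tuples from the blob; record boundaries are implicit."""
    size = struct.calcsize(TAG_FORMAT)
    return [struct.unpack(TAG_FORMAT, blob[i:i + size])
            for i in range(0, len(blob), size)]
```

Fixed-width rows mean a mining job can seek to the N-th record in O(1), which is what makes very-large-scale parallel scans over such exports simple to partition.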

  11. Scalable Dynamic Instrumentation for BlueGene/L

    Energy Technology Data Exchange (ETDEWEB)

    Schulz, M; Ahn, D; Bernat, A; de Supinski, B R; Ko, S Y; Lee, G; Rountree, B

    2005-09-08

    Dynamic binary instrumentation for performance analysis on new, large scale architectures such as the IBM Blue Gene/L system (BG/L) poses new challenges. Their scale--with potentially hundreds of thousands of compute nodes--requires new, more scalable mechanisms to deploy and to organize binary instrumentation and to collect the resulting data gathered by the inserted probes. Further, many of these new machines don't support full operating systems on the compute nodes; rather, they rely on light-weight custom compute kernels that do not support daemon-based implementations. We describe the design and current status of a new implementation of the DPCL (Dynamic Probe Class Library) API for BG/L. DPCL provides an easy to use layer for dynamic instrumentation on parallel MPI applications based on the DynInst dynamic instrumentation mechanism for sequential platforms. Our work includes modifying DynInst to control instrumentation from remote I/O nodes and porting DPCL's communication to use MRNet, a scalable data reduction network for collecting performance data. We describe extensions to the DPCL API that support instrumentation of task subsets and aggregation of collected performance data. Overall, our implementation provides a scalable infrastructure that provides efficient binary instrumentation on BG/L.

  12. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.
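A core building block behind "reliable group ordered delivery" is a sequence-numbered holdback queue: messages that arrive out of order are buffered until the gap fills, then delivered in order. A minimal single-source sketch (the InterGroup protocols themselves are considerably more involved, handling membership change and wide-area reliability):

```python
class OrderedReceiver:
    """Deliver messages in sequence-number order, buffering out-of-order arrivals."""

    def __init__(self):
        self.next_seq = 0      # next sequence number we may deliver
        self.holdback = {}     # seq -> message, awaiting delivery
        self.delivered = []    # messages handed to the application, in order

    def receive(self, seq, msg):
        self.holdback[seq] = msg
        # drain the holdback queue as long as the next expected message is present
        while self.next_seq in self.holdback:
            self.delivered.append(self.holdback.pop(self.next_seq))
            self.next_seq += 1
```

A receiver-oriented scheme, as described above, would let each receiver choose whether to pay this buffering cost or accept unordered delivery.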

  13. Scalable MPEG-4 Encoder on FPGA Multiprocessor SOC

    Directory of Open Access Journals (Sweden)

    Kulmala Ari

    2006-01-01

    Full Text Available High computational requirements combined with rapidly evolving video coding algorithms and standards are a great challenge for contemporary encoder implementations. Rapid specification changes prefer full programmability and configurability both for software and hardware. This paper presents a novel scalable MPEG-4 video encoder on an FPGA-based multiprocessor system-on-chip (MPSOC). The MPSOC architecture is truly scalable and is based on a vendor-independent intellectual property (IP) block interconnection network. The scalability in video encoding is achieved by spatial parallelization where images are divided to horizontal slices. A case design is presented with up to four synthesized processors on an Altera Stratix 1S40 device. A truly portable ANSI-C implementation that supports an arbitrary number of processors gives 11 QCIF frames/s at 50 MHz without processor specific optimizations. The parallelization efficiency is 97% for two processors and 93% with three. The FPGA utilization is 70%, requiring 28 797 logic elements. The implementation effort is significantly lower compared to traditional multiprocessor implementations.
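The spatial parallelization described above, dividing each frame into horizontal slices, amounts to a balanced partition of image rows across processors. An illustrative sketch of such a partition (not the paper's code):

```python
def horizontal_slices(height, n_proc):
    """Split `height` image rows into `n_proc` contiguous horizontal slices,
    each differing in size by at most one row. Returns (start, end) row ranges."""
    base, extra = divmod(height, n_proc)
    slices, start = [], 0
    for i in range(n_proc):
        rows = base + (1 if i < extra else 0)  # first `extra` slices get one more row
        slices.append((start, start + rows))
        start += rows
    return slices
```

For a QCIF frame (144 rows) on four processors this yields four 36-row slices; near-equal slice sizes are what keep the reported parallelization efficiency high.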

  14. Scalable MPEG-4 Encoder on FPGA Multiprocessor SOC

    Directory of Open Access Journals (Sweden)

    Marko Hännikäinen

    2006-10-01

    Full Text Available High computational requirements combined with rapidly evolving video coding algorithms and standards are a great challenge for contemporary encoder implementations. Rapid specification changes prefer full programmability and configurability both for software and hardware. This paper presents a novel scalable MPEG-4 video encoder on an FPGA-based multiprocessor system-on-chip (MPSOC). The MPSOC architecture is truly scalable and is based on a vendor-independent intellectual property (IP) block interconnection network. The scalability in video encoding is achieved by spatial parallelization where images are divided to horizontal slices. A case design is presented with up to four synthesized processors on an Altera Stratix 1S40 device. A truly portable ANSI-C implementation that supports an arbitrary number of processors gives 11 QCIF frames/s at 50 MHz without processor specific optimizations. The parallelization efficiency is 97% for two processors and 93% with three. The FPGA utilization is 70%, requiring 28 797 logic elements. The implementation effort is significantly lower compared to traditional multiprocessor implementations.

  15. Scalability improvements to NRLMOL for DFT calculations of large molecules

    Science.gov (United States)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For the electronic structure calculations, the memory and computation time grow rapidly with the number of atoms. Memory requirements for these calculations scale as N², where N is the number of atoms. While the recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and poor scalability of the electronic structure code hinder their efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and the use of linear algebra using sparse and distributed matrices. These developments along with other related development now allow ground state density functional calculations using up to 25,000 basis functions and the excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
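The sparse-storage idea mentioned above is typified by the compressed sparse row (CSR) layout: only nonzero matrix entries are kept, so memory drops from N² entries to the number of nonzeros. A small pure-Python sketch of the format and a matrix-vector product (NRLMOL itself is a Fortran code; this only illustrates the storage scheme):

```python
def dense_to_csr(matrix):
    """Convert a dense row-major matrix to CSR arrays (data, indices, indptr)."""
    data, indices, indptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)       # nonzero values, row by row
                indices.append(j)    # their column indices
        indptr.append(len(data))     # where each row's nonzeros end
    return data, indices, indptr

def csr_matvec(csr, x):
    """y = A @ x using only the stored nonzeros."""
    data, indices, indptr = csr
    return [sum(data[k] * x[indices[k]] for k in range(indptr[i], indptr[i + 1]))
            for i in range(len(indptr) - 1)]
```

Distributing such arrays across nodes is what relaxes the per-node memory limit the thesis identifies.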

  16. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|² + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
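A direct-summation iteration of force-directed layout makes the O(|V|²) bottleneck explicit: the doubly nested repulsion loop is exactly the n-body sum an FMM approximates in O(|V|), while the spring term is already O(|E|). An illustrative 2-D sketch with arbitrary constants (not the paper's implementation):

```python
def layout_step(pos, edges, dt=0.01, k_rep=0.1, k_spring=0.5):
    """One explicit-Euler step of force-directed layout on 2-D positions."""
    n = len(pos)
    forces = [[0.0, 0.0] for _ in range(n)]
    # Pairwise repulsion: the O(|V|^2) term an FMM would accelerate to O(|V|).
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            d2 = dx * dx + dy * dy or 1e-9  # avoid division by zero
            forces[i][0] += k_rep * dx / d2
            forces[i][1] += k_rep * dy / d2
    # Springs along edges: O(|E|), attracts connected vertices.
    for a, b in edges:
        dx = pos[b][0] - pos[a][0]
        dy = pos[b][1] - pos[a][1]
        forces[a][0] += k_spring * dx
        forces[a][1] += k_spring * dy
        forces[b][0] -= k_spring * dx
        forces[b][1] -= k_spring * dy
    return [[p[0] + dt * f[0], p[1] + dt * f[1]] for p, f in zip(pos, forces)]
```

Replacing only the repulsion loop with a multipole evaluation leaves the rest of the algorithm untouched, which is why an FMM library slots in so naturally.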

  17. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC) standard is the well-known Laplacian pyramid (LP). An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
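The Laplacian-pyramid decomposition this work builds on can be sketched in one dimension with a two-tap averaging downsampler and a sample-repeat upsampler: the enhancement layer is the residual the base layer cannot represent, and synthesis is exact. This is a toy sketch, not the SVC filters:

```python
def downsample(x):
    """Halve the resolution by averaging adjacent pairs (even-length input)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x):
    """Double the resolution by repeating each sample."""
    out = []
    for v in x:
        out += [v, v]
    return out

def lp_analyze(x):
    """One Laplacian-pyramid level: low-resolution base layer + residual enhancement layer."""
    base = downsample(x)
    residual = [a - b for a, b in zip(x, upsample(base))]
    return base, residual

def lp_synthesize(base, residual):
    """Exact reconstruction: upsampled base plus residual."""
    return [a + b for a, b in zip(upsample(base), residual)]
```

With such non-biorthogonal filters the base and residual signals share low-frequency content, which is precisely the interlayer correlation the proposed structures aim to remove.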

  18. Event metadata records as a testbed for scalable data mining

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D, E-mail: gemmeren@anl.go [Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2010-04-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
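As a concrete instance of the one-pass stream algorithms mentioned above, Welford's method computes the mean and variance of an event-level quantity in a single pass with O(1) memory, a natural fit for insertion into a processing and distribution chain (illustrative only; not tied to any ATLAS tool):

```python
class RunningStats:
    """Welford's one-pass algorithm: mean and variance without storing the stream."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)  # numerically stable update

    @property
    def variance(self):
        """Population variance of the values seen so far."""
        return self.m2 / self.n if self.n else 0.0
```

At a 200 Hz record rate such accumulators can run inline, summarizing a quantity over millions of TAGs without a second pass over the data.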

  19. Optical Coherence Tomography in Spontaneous Resolution of Vitreomacular Traction Syndrome

    Directory of Open Access Journals (Sweden)

    Kuo-Hsuan Hung

    2010-06-01

    Full Text Available Vitreomacular traction syndrome (VTS) is a vitreoretinal interface abnormality. The disorder is caused by incomplete posterior vitreous detachment with persistent traction on the macula that produces symptoms and decreased vision. Most symptomatic eyes with VTS undergo a further decrease in visual acuity. Spontaneous complete vitreomacular separation occurs infrequently in eyes with VTS. Surgical intervention may be considered if severe metamorphopsia and decreased visual quality occur. Herein, we report 2 typical cases of idiopathic VTS with spontaneous resolution of vitreo-retinal traction demonstrated by optical coherence tomography. Optical coherence tomography is a sensitive and useful tool for the confirmation of diagnosis and for the serial anatomical evaluation of patients with VTS.

  20. High-Speed Coherent Raman Fingerprint Imaging of Biological Tissues

    CERN Document Server

    Camp, Charles H; Heddleston, John M; Hartshorn, Christopher M; Walker, Angela R Hight; Rich, Jeremy N; Lathia, Justin D; Cicerone, Marcus T

    2014-01-01

    We have developed a coherent Raman imaging platform using broadband coherent anti-Stokes Raman scattering (BCARS) that provides an unprecedented combination of speed, sensitivity, and spectral breadth. The system utilizes a unique configuration of laser sources that probes the Raman spectrum over 3,000 cm$^{-1}$ and generates an especially strong response in the typically weak Raman "fingerprint" region through heterodyne amplification of the anti-Stokes photons with a large nonresonant background (NRB) while maintaining high spectral resolution of $<$ 13 cm$^{-1}$. For histology and pathology, this system shows promise in highlighting major tissue components in a non-destructive, label-free manner. We demonstrate high-speed chemical imaging in two- and three-dimensional views of healthy murine liver and pancreas tissues and interfaces between xenograft brain tumors and the surrounding healthy brain matter.

  1. Logarithmic coherence: Operational interpretation of ℓ1-norm coherence

    Science.gov (United States)

    Rana, Swapan; Parashar, Preeti; Winter, Andreas; Lewenstein, Maciej

    2017-11-01

    We show that the distillable coherence—which is equal to the relative entropy of coherence—is, up to a constant factor, always bounded by the ℓ1-norm measure of coherence (defined as the sum of absolute values of the off-diagonal elements). Thus the latter plays a similar role to that of logarithmic negativity in entanglement theory, and this is the best operational interpretation from a resource-theoretic viewpoint. Consequently the two measures are intimately connected to another operational measure, the robustness of coherence. We find also relationships between these measures, which are tight for general states, and the tightest possible for pure and qubit states. For a given robustness, we construct a state having minimum distillable coherence.
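The two measures compared above are easy to state concretely: C_ℓ1 sums the absolute off-diagonal elements of the density matrix, while for a pure state the distillable (relative-entropy) coherence reduces to the Shannon entropy of the populations in the reference basis. A small numeric sketch using the qubit |+⟩ state, where the two coincide:

```python
import math

def l1_coherence(rho):
    """C_l1(rho) = sum of |rho_ij| over off-diagonal entries."""
    n = len(rho)
    return sum(abs(rho[i][j]) for i in range(n) for j in range(n) if i != j)

def shannon_entropy(p):
    """Entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def distillable_coherence_pure(amplitudes):
    """For a pure state |psi> = sum_i a_i |i>, the relative entropy of
    coherence equals the entropy of the populations {|a_i|^2}."""
    return shannon_entropy([abs(a) ** 2 for a in amplitudes])
```

For |+⟩ both measures equal 1 bit; in general the distillable coherence never exceeds C_ℓ1, consistent with the bound discussed in the abstract.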

  2. Coherent states in quantum physics

    CERN Document Server

    Gazeau, Jean-Pierre

    2009-01-01

    This self-contained introduction discusses the evolution of the notion of coherent states, from the early works of Schrödinger to the most recent advances, including signal analysis. An integrated and modern approach to the utility of coherent states in many different branches of physics, it strikes a balance between mathematical and physical descriptions. Split into two parts, the first introduces readers to the most familiar coherent states, their origin, their construction, and their application and relevance to various selected domains of physics. Part II, mostly based on recent original results, is devoted to the question of quantization of various sets through coherent states, and shows the link to procedures in signal analysis. (Wiley-VCH, 23 September 2009, 360 pages; print ISBN 9783527407095, eISBN 9783527628292.)

  3. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance maintained over a long period of time becomes a source of ensuring business continuity by companies. An ontological being enabling the adoption of such assumptions is such a business model that has the ability to generate results in every possible market situation and, moreover, it has the features of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. Being a factor ensuring more work and more efficient work with an increasing number of components, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  4. Magnetosheath-cusp interface

    Directory of Open Access Journals (Sweden)

    S. Savin

    2004-01-01

    Full Text Available We advance the achievements of Interball-1 and other contemporary missions in exploration of the magnetosheath-cusp interface. Extensive discussion of published results is accompanied by presentation of new data from a case study and a comparison of those data within the broader context of three-year magnetopause (MP) crossings by Interball-1. Multi-spacecraft boundary layer studies reveal that in ∼80% of the cases the interaction of the magnetosheath (MSH) flow with the high latitude MP produces a layer containing strong nonlinear turbulence, called the turbulent boundary layer (TBL). The TBL contains wave trains with flows at approximately the Alfvén speed along field lines and "diamagnetic bubbles" with small magnetic fields inside. A comparison of the multi-point measurements obtained on 29 May 1996 with a global MHD model indicates that three types of populating processes should be operative: large-scale (∼ a few RE) anti-parallel merging at sites remote from the cusp; medium-scale (a few thousand km) local TBL-merging of fields that are anti-parallel on average; and small-scale (a few hundred km) bursty reconnection of fluctuating magnetic fields, representing a continuous mechanism for MSH plasma inflow into the magnetosphere, which could dominate in quasi-steady cases. The lowest frequency (∼1–2 mHz) TBL fluctuations are traced throughout the magnetosheath from the post-bow shock region up to the inner magnetopause border. The resonance of these fluctuations with dayside flux tubes might provide an effective correlative link for the entire dayside region of the solar wind interaction with the magnetopause and cusp ionosphere. The TBL disturbances are characterized by kinked, double-sloped wave power spectra and, most probably, three-wave cascading. Both elliptical polarization and nearly Alfvénic phase velocities with characteristic dispersion indicate the kinetic Alfvénic nature of the TBL waves. The three-wave phase coupling could effectively

  5. Instantaneous Liquid Interfaces

    OpenAIRE

    Willard, Adam P.; Chandler, David

    2009-01-01

    We describe and illustrate a simple procedure for identifying a liquid interface from atomic coordinates. In particular, a coarse grained density field is constructed, and the interface is defined as a constant density surface for this coarse grained field. In applications to a molecular dynamics simulation of liquid water, it is shown that this procedure provides instructive and useful pictures of liquid-vapor interfaces and of liquid-protein interfaces.

  6. Microcomputer interfacing and applications

    CERN Document Server

    Mustafa, M A

    1990-01-01

    This is the applications guide to interfacing microcomputers. It offers practical non-mathematical solutions to interfacing problems in many applications including data acquisition and control. Emphasis is given to the definition of the objectives of the interface, then comparing possible solutions and producing the best interface for every situation. Dr Mustafa A Mustafa is a senior designer of control equipment and has written many technical articles and papers on the subject of computers and their application to control engineering.

  7. Water at Interfaces

    DEFF Research Database (Denmark)

    Björneholm, Olle; Hansen, Martin Hangaard; Hodgson, Andrew

    2016-01-01

    The interfaces of neat water and aqueous solutions play a prominent role in many technological processes and in the environment. Examples of aqueous interfaces are ultrathin water films that cover most hydrophilic surfaces under ambient relative humidities, the liquid/solid interface which drives...

  8. Strong Scalability Study of Distributed Memory Parallel Markov Random Fields Using Graph Partitioning

    Science.gov (United States)

    Heinemann, Colleen

    the Message Passing Interface (MPI), with the MRF algorithm introduces the possibility of exploring how much parallel processing power should be allocated for the problem. A scalability study was run using the distributed-memory parallel MRF-based framework applied to a 2-dimensional ceramic composite dataset taken from the Berkeley microCT. Using such a dataset provides a basis for comparison with additional, potentially larger and more complex, datasets. The scaling study specifically targets the scaling of the optimization process of the MRF algorithm on 1, 2, 4, 12, 24, 48, 96, 192, 384, and 768 processes. The results of the scalability study show that, for this combination of algorithm and dataset, the scaling is not linear: speedup does not grow in proportion to the number of processes dedicated to the problem. Rather, running at extremely high concurrencies does not deliver benefits commensurate with the allocated resources. The reasons are discussed in additional detail. Despite being less efficient at high concurrencies, the presented framework still makes it possible to assess the 3-dimensional architecture of materials and to measure the structures involved. Its distributed-memory parallel design makes it the first scalable, general-purpose, easily configurable image analysis framework allowing pattern detection on different imaging modalities across multiple scales, which the community has long lacked.
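For reference, strong-scaling results like these are conventionally summarized as speedup and parallel efficiency. The sketch below uses invented wall-clock times (not the study's measurements) purely to show the bookkeeping.

```python
# Hypothetical timings (seconds) at a subset of the process counts used
# in the study; replace with measured values in practice.
procs = [1, 2, 4, 12, 24, 48, 96]
times = [1000.0, 520.0, 270.0, 100.0, 58.0, 38.0, 31.0]

speedup = [times[0] / t for t in times]               # S(p) = T(1) / T(p)
efficiency = [s / p for s, p in zip(speedup, procs)]  # E(p) = S(p) / p

for p, s, e in zip(procs, speedup, efficiency):
    print(f"p={p:3d}  speedup={s:6.1f}  efficiency={e:.2f}")
# Efficiency drops well below 1.0 at high concurrency, mirroring the
# sublinear scaling reported above.
```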

  9. Gate-Sensing Coherent Charge Oscillations in a Silicon Field-Effect Transistor.

    Science.gov (United States)

    Gonzalez-Zalba, M Fernando; Shevchenko, Sergey N; Barraud, Sylvain; Johansson, J Robert; Ferguson, Andrew J; Nori, Franco; Betz, Andreas C

    2016-03-09

    Quantum mechanical effects induced by the miniaturization of complementary metal-oxide-semiconductor (CMOS) technology hamper the performance and scalability prospects of field-effect transistors. However, those quantum effects, such as tunneling and coherence, can be harnessed to use existing CMOS technology for quantum information processing. Here, we report the observation of coherent charge oscillations in a double quantum dot formed in a silicon nanowire transistor detected via its dispersive interaction with a radio frequency resonant circuit coupled via the gate. Differential capacitance changes at the interdot charge transitions allow us to monitor the state of the system in the strong-driving regime where we observe the emergence of Landau-Zener-Stückelberg-Majorana interference on the phase response of the resonator. A theoretical analysis of the dispersive signal demonstrates that quantum and tunneling capacitance changes must be included to describe the qubit-resonator interaction. Furthermore, a Fourier analysis of the interference pattern reveals a charge coherence time, T2 ≈ 100 ps. Our results demonstrate charge coherent control and readout in a simple silicon transistor and open up the possibility to implement charge and spin qubits in existing CMOS technology.

  10. Direct Global Measurements of Tropospheric Winds Employing a Simplified Coherent Laser Radar using Fully Scalable Technology and Technique

    Science.gov (United States)

    Kavaya, Michael J.; Spiers, Gary D.; Lobl, Elena S.; Rothermel, Jeff; Keller, Vernon W.

    1996-01-01

    Innovative designs of a space-based laser remote sensing 'wind machine' are presented. These designs seek compatibility with the traditionally conflicting constraints of high scientific value and low total mission cost. Mission cost is reduced by moving to smaller, lighter, more off-the-shelf instrument designs which can be accommodated on smaller launch vehicles.

  11. Low-power, transparent optical network interface for high bandwidth off-chip interconnects.

    Science.gov (United States)

    Liboiron-Ladouceur, Odile; Wang, Howard; Garg, Ajay S; Bergman, Keren

    2009-04-13

    The recent emergence of multicore architectures and chip multiprocessors (CMPs) has accelerated the bandwidth requirements in high-performance processors for both on-chip and off-chip interconnects. For next generation computing clusters, the delivery of scalable power efficient off-chip communications to each compute node has emerged as a key bottleneck to realizing the full computational performance of these systems. The power dissipation is dominated by the off-chip interface and the necessity to drive high-speed signals over long distances. We present a scalable photonic network interface approach that fully exploits the bandwidth capacity offered by optical interconnects while offering significant power savings over traditional E/O and O/E approaches. The power-efficient interface optically aggregates electronic serial data streams into a multiple WDM channel packet structure at time-of-flight latencies. We demonstrate a scalable optical network interface with 70% improvement in power efficiency for a complete end-to-end PCI Express data transfer.

  12. Quantization of interface currents

    Energy Technology Data Exchange (ETDEWEB)

    Kotani, Motoko [AIMR, Tohoku University, Sendai (Japan); Schulz-Baldes, Hermann [Department Mathematik, Universität Erlangen-Nürnberg, Erlangen (Germany); Villegas-Blas, Carlos [Instituto de Matematicas, Cuernavaca, UNAM, Cuernavaca (Mexico)

    2014-12-15

    At the interface of two two-dimensional quantum systems, there may exist interface currents similar to edge currents in quantum Hall systems. It is proved that these interface currents are macroscopically quantized by an integer that is given by the difference of the Chern numbers of the two systems. It is also argued that at the interface between two time-reversal invariant systems with half-integer spin, one of which is trivial and the other non-trivial, there are dissipationless spin-polarized interface currents.
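In symbols (notation mine, written by analogy with edge-current quantization in quantum Hall systems; the paper's precise statement is operator-theoretic):

```latex
% Schematic form of the quantization result: the interface conductance is
% fixed by the mismatch of the bulk topological invariants on the two sides.
\[
  \sigma_{\mathrm{interface}}
  \;=\; \frac{e^{2}}{h}\,\bigl(\mathrm{Ch}(P_{1}) - \mathrm{Ch}(P_{2})\bigr),
  \qquad \mathrm{Ch}(P_{i}) \in \mathbb{Z},
\]
% where P_1 and P_2 denote the Fermi projections of the two bulk systems.
```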

  13. Overcoming the drawback of lower sense margin in tunnel FET based dynamic memory along with enhanced charge retention and scalability

    Science.gov (United States)

    Navlakha, Nupur; Kranti, Abhinav

    2017-11-01

    The work reports on the use of a planar tri-gate tunnel field effect transistor (TFET) to operate as dynamic memory at 85 °C with an enhanced sense margin (SM). Two symmetric gates (G1) aligned to the source at a partial region of intrinsic film result in better electrostatic control that regulates the read mechanism based on band-to-band tunneling, while the other gate (G2), positioned adjacent to the first front gate, is responsible for charge storage and sustenance. The proposed architecture results in an enhanced SM of ∼1.2 μA μm⁻¹ along with a longer retention time (RT) of ∼1.8 s at 85 °C, for a total length of 600 nm. The double gate architecture towards the source increases the tunneling current and also reduces short channel effects, enhancing SM and scalability, thereby overcoming the critical bottleneck faced by TFET based dynamic memories. The work also discusses the impact of overlap/underlap and interface charges on the performance of TFET based dynamic memory. Insights into device operation demonstrate that the choice of appropriate architecture and biases not only limits the trade-off between SM and RT, but also results in improved scalability, with drain voltage and total length being scaled down to 0.8 V and 115 nm, respectively.

  14. Scalable Earth-observation Analytics for Geoscientists: Spacetime Extensions to the Array Database SciDB

    Science.gov (United States)

    Appel, Marius; Lahn, Florian; Pebesma, Edzer; Buytaert, Wouter; Moulds, Simon

    2016-04-01

    Today's amount of freely available data requires scientists to spend large parts of their work on data management. This is especially true in environmental sciences when working with large remote sensing datasets, such as obtained from earth-observation satellites like the Sentinel fleet. Many frameworks like SpatialHadoop or Apache Spark address the scalability but target programmers rather than data analysts, and are not dedicated to imagery or array data. In this work, we use the open-source data management and analytics system SciDB to bring large earth-observation datasets closer to analysts. Its underlying data representation as multidimensional arrays fits naturally to earth-observation datasets, distributes storage and computational load over multiple instances by multidimensional chunking, and also enables efficient time-series based analyses, which is usually difficult using file- or tile-based approaches. Existing interfaces to R and Python furthermore allow for scalable analytics with relatively little learning effort. However, interfacing SciDB and file-based earth-observation datasets that come as tiled temporal snapshots requires a lot of manual bookkeeping during ingestion, and SciDB natively only supports loading data from CSV-like and custom binary formatted files, which currently limits its practical use in earth-observation analytics. To make it easier to work with large multi-temporal datasets in SciDB, we developed software tools that enrich SciDB with earth observation metadata and allow working with commonly used file formats: (i) the SciDB extension library scidb4geo simplifies working with spatiotemporal arrays by adding relevant metadata to the database, and (ii) the Geospatial Data Abstraction Library (GDAL) driver implementation scidb4gdal allows ingestion and export of remote sensing imagery from and to a large number of file formats. Using added metadata on temporal resolution and coverage, the GDAL driver supports time-based ingestion of

  15. Infrastructure and interfaces for large-scale numerical software.

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.; Gropp, W. D.; Hovland, P. D.; McInnes, L. C.; Smith, B. F.

    1999-06-10

    The complexity of large-scale scientific simulations often necessitates the combined use of multiple software packages developed by different groups in areas such as adaptive mesh manipulations, scalable algebraic solvers, and optimization. Historically, these packages have been combined by using custom code. This practice inhibits experimentation with and comparison of multiple tools that provide similar functionality through different implementations. The ALICE project, a collaborative effort among researchers at Argonne National Laboratory, is exploring the use of component-based software engineering to provide better interoperability among numerical toolkits. They discuss some initial experiences in developing an infrastructure and interfaces for high-performance numerical computing.

  16. Acquisition System and Detector Interface for Power Pulsed Detectors

    CERN Document Server

    Cornat, R

    2012-01-01

    A common DAQ system is being developed within the CALICE collaboration. It provides a flexible and scalable architecture based on Gigabit Ethernet and 8b/10b serial links in order to transmit slow control data, fast signals, or readout data. A detector interface (DIF) is used to connect detectors to the DAQ system, based on a single firmware shared among the collaboration but targeted at various physical implementations. The DIF builds, stores, and queues packets of data, and also controls the detectors, providing USB and serial-link connectivity. The overall architecture is foreseen to manage several hundred thousand channels.

  17. Interference due to coherence swapping

    Indian Academy of Sciences (India)

    In quantum interference (first order) the important requirement is the coherence of a quantum state, which usually we tend to associate with a particle if it has come from a single source and is made to pass through a double slit or through a suitable device such as a beam splitter (as in a Mach–Zehnder interferometer).

  18. Coherent state quantization of quaternions

    Energy Technology Data Exchange (ETDEWEB)

    Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com [Department of Mathematics and Statistics, University of Jaffna, Thirunelveli (Sri Lanka); Thirulogasanthar, K., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com [Department of Computer Science and Software Engineering, Concordia University, 1455 De Maisonneuve Blvd. West, Montreal, Quebec H3G 1M8 (Canada)

    2015-08-15

    Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic version of the harmonic oscillator and Weyl-Heisenberg algebra are also obtained.

  19. Optical Coherence and Quantum Optics

    Science.gov (United States)

    Mandel, Leonard; Wolf, Emil

    1995-09-01

    The advent of lasers in the 1960s led to the development of many new fields in optical physics. This book is a systematic treatment of one of these fields--the broad area that deals with the coherence and fluctuation of light. The authors begin with a review of probability theory and random processes, and follow this with a thorough discussion of optical coherence theory within the framework of classical optics. They next treat the theory of photoelectric detection of light and photoelectric correlation. They then discuss in some detail quantum systems and effects. The book closes with two chapters devoted to laser theory and one on the quantum theory of nonlinear optics. The sound introduction to coherence theory and the quantum nature of light and the chapter-end exercises will appeal to graduate students and newcomers to the field. Researchers will find much of interest in the new results on coherence-induced spectral line shifts, nonclassical states of light, higher-order squeezing, and quantum effects of down-conversion. Written by two of the world's most highly regarded optical physicists, this book is required reading of all physicists and engineers working in optics.

  20. Coherent source radius in ppbar collisions

    OpenAIRE

    Zhang, Q. H.; Li, X. Q.

    1997-01-01

    We use a recently derived result to extract from two-pion interferometry data from $p\\bar{p}$ collisions the radius of the coherent component in the source. We find a coherent source radius of about $2 fm$.

  1. Coherent states, wavelets, and their generalizations

    CERN Document Server

    Ali, Syed Twareque; Gazeau, Jean-Pierre

    2014-01-01

    This second edition is fully updated, covering in particular new types of coherent states (the so-called Gazeau-Klauder coherent states, nonlinear coherent states, squeezed states, as used now routinely in quantum optics) and various generalizations of wavelets (wavelets on manifolds, curvelets, shearlets, etc.). In addition, it contains a new chapter on coherent state quantization and the related probabilistic aspects. As a survey of the theory of coherent states, wavelets, and some of their generalizations, it emphasizes mathematical principles, subsuming the theories of both wavelets and coherent states into a single analytic structure. The approach allows the user to take a classical-like view of quantum states in physics.   Starting from the standard theory of coherent states over Lie groups, the authors generalize the formalism by associating coherent states to group representations that are square integrable over a homogeneous space; a further step allows one to dispense with the group context altoget...

  2. Coherence for vectorial waves and majorization

    OpenAIRE

    Luis, Alfredo

    2016-01-01

    We show that majorization provides a powerful approach to the coherence conveyed by partially polarized transversal electromagnetic waves. Here we present the formalism, provide some examples and compare with standard measures of polarization and coherence of vectorial waves.

  3. Trial prospector: matching patients with cancer research studies using an automated and scalable approach.

    Science.gov (United States)

    Sahoo, Satya S; Tao, Shiqiang; Parchman, Andrew; Luo, Zhihui; Cui, Licong; Mergler, Patrick; Lanese, Robert; Barnholtz-Sloan, Jill S; Meropol, Neal J; Zhang, Guo-Qiang

    2014-01-01

    Cancer is responsible for approximately 7.6 million deaths per year worldwide. A 2012 survey in the United Kingdom found dramatic improvement in survival rates for childhood cancer because of increased participation in clinical trials. Unfortunately, overall patient participation in cancer clinical studies is low. A key logistical barrier to patient and physician participation is the time required for identification of appropriate clinical trials for individual patients. We introduce the Trial Prospector tool that supports end-to-end management of cancer clinical trial recruitment workflow with (a) structured entry of trial eligibility criteria, (b) automated extraction of patient data from multiple sources, (c) a scalable matching algorithm, and (d) interactive user interface (UI) for physicians with both matching results and a detailed explanation of causes for ineligibility of available trials. We report the results from deployment of Trial Prospector at the National Cancer Institute (NCI)-designated Case Comprehensive Cancer Center (Case CCC) with 1,367 clinical trial eligibility evaluations performed with 100% accuracy.
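The matching step can be pictured as checking structured eligibility criteria against extracted patient fields. The toy sketch below (field names, operators, and example values are invented, not Trial Prospector's actual schema or algorithm) shows the shape of such a matcher:

```python
def matches(patient, criteria):
    """Naive eligibility check: each criterion is a (field, op, value)
    triple and all of them must hold for the patient record."""
    ops = {">=": lambda a, b: a >= b,
           "<=": lambda a, b: a <= b,
           "==": lambda a, b: a == b}
    return all(ops[op](patient[field], value) for field, op, value in criteria)

# Hypothetical trial criteria and patient record:
trial = [("age", ">=", 18), ("diagnosis", "==", "glioma"), ("ecog", "<=", 2)]
patient = {"age": 54, "diagnosis": "glioma", "ecog": 1}
print(matches(patient, trial))  # True
```

A real system additionally has to explain *why* a trial was excluded, which is why the UI described above reports per-criterion causes of ineligibility.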

  4. A wireless, compact, and scalable bioimpedance measurement system for energy-efficient multichannel body sensor solutions

    Science.gov (United States)

    Ramos, J.; Ausín, J. L.; Lorido, A. M.; Redondo, F.; Duque-Carrillo, J. F.

    2013-04-01

    In this paper, we present the design, realization and evaluation of a multichannel measurement system based on a cost-effective high-performance integrated circuit for electrical bioimpedance (EBI) measurements in the frequency range from 1 kHz to 1 MHz, and a low-cost commercially available radio frequency transceiver device, which provides reliable wireless communication. The resulting on-chip spectrometer provides high-performance EBI measurement capabilities and constitutes the basic node to build EBI wireless sensor networks (EBI-WSNs). The proposed EBI-WSN behaves as a high-performance wireless multichannel EBI spectrometer where the number of nodes, i.e., number of channels, is completely scalable to satisfy specific requirements of body sensor networks. One of its main advantages is its versatility, since each EBI node is independently configurable and capable of working simultaneously. A prototype of the EBI node leads to a very small printed circuit board of approximately 8 cm² including chip-antenna, which can operate for several years on one 3-V coin cell battery. A specifically tailored graphical user interface (GUI) for EBI-WSN has also been designed and implemented in order to configure the operation of EBI nodes and the network topology. EBI analysis parameters, e.g., single-frequency or spectroscopy, time interval, analysis by EBI events, frequency and amplitude ranges of the excitation current, etc., are defined by the GUI.

  5. WESTPA: an interoperable, highly scalable software package for weighted ensemble simulation and analysis.

    Science.gov (United States)

    Zwier, Matthew C; Adelman, Joshua L; Kaus, Joseph W; Pratt, Adam J; Wong, Kim F; Rego, Nicholas B; Suárez, Ernesto; Lettieri, Steven; Wang, David W; Grabe, Michael; Zuckerman, Daniel M; Chong, Lillian T

    2015-02-10

    The weighted ensemble (WE) path sampling approach orchestrates an ensemble of parallel calculations with intermittent communication to enhance the sampling of rare events, such as molecular associations or conformational changes in proteins or peptides. Trajectories are replicated and pruned in a way that focuses computational effort on underexplored regions of configuration space while maintaining rigorous kinetics. To enable the simulation of rare events at any scale (e.g., atomistic, cellular), we have developed an open-source, interoperable, and highly scalable software package for the execution and analysis of WE simulations: WESTPA (The Weighted Ensemble Simulation Toolkit with Parallelization and Analysis). WESTPA scales to thousands of CPU cores and includes a suite of analysis tools that have been implemented in a massively parallel fashion. The software has been designed to interface conveniently with any dynamics engine and has already been used with a variety of molecular dynamics (e.g., GROMACS, NAMD, OpenMM, AMBER) and cell-modeling packages (e.g., BioNetGen, MCell). WESTPA has been in production use for over a year, and its utility has been demonstrated for a broad set of problems, ranging from atomically detailed host–guest associations to nonspatial chemical kinetics of cellular signaling networks. The following describes the design and features of WESTPA, including the facilities it provides for running WE simulations and storing and analyzing WE simulation data, as well as examples of input and output.
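The replicate-and-prune step at the heart of WE can be sketched in a few lines. This is a generic split/merge resampler that keeps a fixed number of trajectory "walkers" per bin while conserving total probability weight, in the spirit of the algorithm; it is not WESTPA's actual implementation.

```python
import random

def resample_bin(walkers, target):
    """Split/merge walkers in one bin to exactly `target` walkers,
    conserving total weight. `walkers` is a list of (state, weight) pairs."""
    total = sum(w for _, w in walkers)
    while len(walkers) < target:            # replicate: split the heaviest
        walkers.sort(key=lambda sw: sw[1])
        state, w = walkers.pop()
        walkers += [(state, w / 2), (state, w / 2)]
    while len(walkers) > target:            # prune: merge the two lightest
        walkers.sort(key=lambda sw: sw[1])
        (s1, w1), (s2, w2) = walkers.pop(0), walkers.pop(0)
        keep = s1 if random.random() < w1 / (w1 + w2) else s2
        walkers.insert(0, (keep, w1 + w2))  # survivor inherits combined weight
    assert abs(sum(w for _, w in walkers) - total) < 1e-12
    return walkers

print(resample_bin([("a", 0.5), ("b", 0.25)], target=4))  # 4 walkers, weight 0.75
```

Choosing the merge survivor with probability proportional to its weight is what keeps the kinetics statistically unbiased.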

  6. Low-cost scalable quartz crystal microbalance array for environmental sensing

    Energy Technology Data Exchange (ETDEWEB)

    Anazagasty, Cristain [University of Puerto Rico; Hianik, Tibor [Comenius University, Bratislava, Slovakia; Ivanov, Ilia N [ORNL

    2016-01-01

    Proliferation of environmental sensors for internet of things (IoT) applications has increased the need for low-cost platforms capable of accommodating multiple sensors. Quartz crystal microbalance (QCM) crystals coated with nanometer-thin sensor films are suitable for use in high-resolution (~1 ng) selective gas sensor applications. We demonstrate a scalable array for measuring frequency response of six QCM sensors controlled by low-cost Arduino microcontrollers and a USB multiplexer. Gas pulses and data acquisition were controlled by a LabVIEW user interface. We test the sensor array by measuring the frequency shift of crystals coated with different compositions of polymer composites based on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) while films are exposed to water vapor and oxygen inside a controlled environmental chamber. Our sensor array exhibits comparable performance to that of a commercial QCM system, while enabling high-throughput 6 QCM testing for under $1,000. We use deep neural network structures to process sensor response and demonstrate that the QCM array is suitable for gas sensing, environmental monitoring, and electronic-nose applications.
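The ~1 ng resolution quoted above follows from the standard QCM mass-frequency relation (the Sauerbrey equation). The abstract does not state the crystal parameters, so the 5 MHz fundamental frequency below is an assumption for illustration.

```python
RHO_Q = 2.648     # quartz density, g/cm^3
MU_Q = 2.947e11   # quartz shear modulus, g/(cm*s^2)

def sauerbrey_mass(delta_f_hz, f0_hz, area_cm2):
    """Mass change (grams) implied by a frequency shift, via the Sauerbrey
    equation: delta_f = -(2 f0^2 / sqrt(rho_q * mu_q)) * delta_m / A."""
    cf = 2.0 * f0_hz**2 / (RHO_Q * MU_Q) ** 0.5  # mass sensitivity, Hz*cm^2/g
    return -delta_f_hz * area_cm2 / cf

# A 1 Hz drop on an assumed 5 MHz crystal of 1 cm^2 active area:
m = sauerbrey_mass(-1.0, 5e6, 1.0)
print(f"{m * 1e9:.1f} ng")  # on the order of tens of nanograms per Hz
```

This is why sub-Hz frequency resolution translates into the nanogram-scale mass resolution the array targets.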

  7. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...

  8. A scalable pairwise class interaction framework for multidimensional classification

    DEFF Research Database (Denmark)

    Arias, Jacinto; Gámez, Jose A.; Nielsen, Thomas Dyhre

    2016-01-01

    We present a general framework for multidimensional classification that captures the pairwise interactions between class variables. The pairwise class interactions are encoded using a collection of base classifiers (Phase 1), for which the class predictions are combined in a Markov random field...... inference methods in the second phase. We describe the basic framework and its main properties, as well as strategies for ensuring the scalability of the framework. We include a detailed experimental evaluation based on a range of publicly available databases. Here we analyze the overall performance...

  9. SAR++: A Multi-Channel Scalable and Reconfigurable SAR System

    DEFF Research Database (Denmark)

    Høeg, Flemming; Christensen, Erik Lintz

    2002-01-01

    SAR++ is a technology program aiming at developing know-how and technology needed to design the next generation civilian SAR systems. Technology has reached a state, which allows major parts of the digital subsystem to be built using commercial off-the-shelf (COTS) components. A design goal...... is to design a modular, scalable and reconfigurable SAR system using such components, in order to ensure maximum flexibility for the users of the actual system and for future system updates. Having these aspects in mind the SAR++ system is presented with focus on the digital subsystem architecture...

  10. Scalable brain network construction on white matter fibers

    Science.gov (United States)

    Chung, Moo K.; Adluru, Nagesh; Dalton, Kim M.; Alexander, Andrew L.; Davidson, Richard J.

    2011-03-01

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing the brain structural network graphs out of a large number of white matter tracts. In this paper, we present a scalable iterative framework called the ɛ-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
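One plausible reading of the construction, in miniature: each tract contributes an edge between the nodes nearest its two endpoints, and an endpoint farther than ɛ from every existing node spawns a new one. This brute-force sketch illustrates only the grouping idea, not the paper's iterative scalability machinery.

```python
import math

def eps_neighbor_graph(tracts, eps):
    """Build a graph from fiber tract endpoints: an endpoint within `eps`
    of an existing node reuses that node, otherwise it becomes a new node.
    `tracts` is a list of (endpoint_a, endpoint_b) 3D coordinate pairs."""
    nodes, edges = [], set()

    def node_for(p):
        for i, q in enumerate(nodes):
            if math.dist(p, q) <= eps:  # close enough: merge into node i
                return i
        nodes.append(p)                 # otherwise create a fresh node
        return len(nodes) - 1

    for a, b in tracts:
        edges.add(tuple(sorted((node_for(a), node_for(b)))))
    return nodes, edges

# Two nearly parallel "tracts" whose endpoints lie within eps of each other:
tracts = [((0, 0, 0), (10, 0, 0)), ((0.5, 0, 0), (10.2, 0, 0))]
nodes, edges = eps_neighbor_graph(tracts, eps=1.0)
print(len(nodes), len(edges))  # 2 1 — nearby endpoints collapse to shared nodes
```

With half a million tracts a spatial index (e.g. a grid or k-d tree) would replace the linear scan over nodes.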

  11. Using overlay network architectures for scalable video distribution

    Science.gov (United States)

    Patrikakis, Charalampos Z.; Despotopoulos, Yannis; Fafali, Paraskevi; Cha, Jihun; Kim, Kyuheon

    2004-11-01

    Within the last years, the enormous growth of Internet based communication as well as the rapid increase of available processing power has led to the widespread use of multimedia streaming as a means to convey information. This work aims at providing an open architecture designed to support scalable streaming to a large number of clients using application layer multicast. The architecture is based on media relay nodes that can be deployed transparently to any existing media distribution scheme, which can support media streamed using the RTP and RTSP protocols. The architecture is based on overlay networks at application level, featuring rate adaptation mechanisms for responding to network congestion.

  12. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has...... been developed. Hybris is a prototype rendering architecture which can be tailored to many specific 3D graphics applications and implemented in various ways. Parallel software implementations for both single- and multi-processor Windows 2000 systems have been demonstrated. Working hardware...... as a case study and an application of the Hybris graphics architecture....

  13. A Scalable Framework to Detect Personal Health Mentions on Twitter.

    Science.gov (United States)

    Yin, Zhijun; Fabbri, Daniel; Rosenbloom, S Trent; Malin, Bradley

    2015-06-05

    Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual's health. The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner. These results show that personal health mentions can be detected on Twitter in a scalable manner; the mentions correspond to the health issues of the Twitter users.

  14. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating point decoder.
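The protograph replication can be made concrete: each edge of the small base graph becomes a Z×Z cyclic-shift permutation block in the lifted parity-check matrix. The base matrix and shift values below are arbitrary illustrations, not the (4096,2048) code's actual design, where shifts are chosen to maximize girth.

```python
import numpy as np

def lift(base, Z, shifts):
    """Expand an (r x n) 0/1 protograph matrix `base` by a factor Z:
    each 1 becomes a Z x Z cyclic-shift permutation block (shift taken
    from `shifts`), and each 0 becomes a Z x Z zero block."""
    r, n = base.shape
    H = np.zeros((r * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(r):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i][j], axis=1)
    return H

base = np.array([[1, 1, 0],
                 [0, 1, 1]])                    # toy (2 x 3) protograph
H = lift(base, Z=4, shifts=[[0, 1, 0], [0, 2, 3]])
print(H.shape)  # (8, 12): row/column degrees of the protograph are preserved
```

Because every lifted row keeps the degree of its protograph row, one decoder datapath per protograph node can be time-shared across all Z copies, which is what makes the architecture scale with Z.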

  15. Scalable implementation of boson sampling with trapped ions.

    Science.gov (United States)

    Shen, C; Zhang, Z; Duan, L-M

    2014-02-07

    Boson sampling solves a classically intractable problem by sampling from a probability distribution given by matrix permanents. We propose a scalable implementation of boson sampling using local transverse phonon modes of trapped ions to encode the bosons. The proposed scheme allows deterministic preparation and high-efficiency readout of the bosons in the Fock states and universal mode mixing. With the state-of-the-art trapped ion technology, it is feasible to realize boson sampling with tens of bosons by this scheme, which would outperform the most powerful classical computers and constitute an effective disproof of the famous extended Church-Turing thesis.
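The classical intractability rests on the matrix permanent, whose best known exact algorithms take exponential time. As an illustration of the quantity being sampled (not part of the proposed trapped-ion scheme itself), Ryser's inclusion-exclusion formula computes the permanent of an n x n matrix:

```python
from itertools import combinations

def permanent(M):
    """Matrix permanent via Ryser's inclusion-exclusion formula.
    Exponential in n, which is exactly why boson sampling is believed
    hard to simulate classically."""
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

M = [[1, 2], [3, 4]]
print(permanent(M))  # perm = 1*4 + 2*3, prints 10.0
```

For tens of bosons, as targeted by the proposal, the corresponding permanents already lie beyond the practical reach of classical computation.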

  16. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  17. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16-based testbed for scalable video transmissions. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  18. On P-coherent endomorphism rings

    Indian Academy of Sciences (India)

    A ring is called right P-coherent if every principal right ideal is finitely presented. Let M_R be a right R-module. We study the P-coherence of the endomorphism ring S of M_R. It is shown that S is a right P-coherent ring if and only if every endomorphism of M_R has a pseudokernel in add M_R; S is a left P-coherent ring if and ...

  19. Scalable modulation technology and the tradeoff of reach, spectral efficiency, and complexity

    Science.gov (United States)

    Bosco, Gabriella; Pilori, Dario; Poggiolini, Pierluigi; Carena, Andrea; Guiomar, Fernando

    2017-01-01

    Bandwidth and capacity demand in metro, regional, and long-haul networks is increasing at several tens of percent per year, driven by video streaming, cloud computing, social media and mobile applications. To sustain this traffic growth, an upgrade of the widely deployed 100-Gbit/s long-haul optical systems, based on polarization multiplexed quadrature phase-shift keying (PM-QPSK) modulation format associated with coherent detection and digital signal processing (DSP), is mandatory. In fact, optical transport techniques enabling a per-channel bit rate beyond 100 Gbit/s have recently been the object of intensive R and D activities, aimed at both improving the spectral efficiency and lowering the cost per bit in fiber transmission systems. In this invited contribution, we review the different available options to scale the per-channel bit-rate to 400 Gbit/s and beyond, i.e. symbol-rate increase, use of higher-order quadrature amplitude modulation (QAM) modulation formats and use of super-channels with DSP-enabled spectral shaping and advanced multiplexing technologies. In this analysis, trade-offs of system reach, spectral efficiency and transceiver complexity are addressed. Besides scalability, next generation optical networks will require a high degree of flexibility in the transponders, which should be able to dynamically adapt the transmission rate and bandwidth occupancy to the light path characteristics. In order to increase the flexibility of these transponders (often referred to as "flexponders"), several advanced modulation techniques have recently been proposed, among which sub-carrier multiplexing, hybrid formats (over time, frequency and polarization), and constellation shaping. We review these techniques, highlighting their limits and potential in terms of performance, complexity and flexibility.

  20. On Radar Resolution in Coherent Change Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Bickel, Douglas L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    It is commonly observed that resolution plays a role in coherent change detection; however, the relationship between resolution and coherent change detection performance has not yet been defined. In this document, we present an analytical method for evaluating this relationship using detection theory. Specifically, we examine the effect of resolution on receiver operating characteristic curves for coherent change detection.

  1. Operator properties of generalized coherent state systems

    Indian Academy of Sciences (India)

    The main properties of standard quantum mechanical coherent states and the two generalizations of Klauder and of Perelomov are reviewed. For a system of generalized coherent states in the latter sense, necessary and sufficient conditions for the existence of a diagonal coherent state representation for all Hilbert-Schmidt ...

  2. Zp-graded charge coherent states

    Science.gov (United States)

    Chung, Won Sang

    2014-06-01

    A new kind of charge coherent state, called a Zp-graded charge coherent state, is constructed by using the complex solutions of the equation q^p = 1. The p-1 charge operators are also explicitly constructed. We explicitly investigate some nonclassical properties of the Z3-graded charge coherent state.

  3. Coherent states for the Legendre oscillator

    OpenAIRE

    Borzov, V. V.; Damaskinsky, E. V.

    2003-01-01

    A new oscillator-like system, called the Legendre oscillator, is introduced in this note. Two families of coherent states (coherent states as eigenvectors of the annihilation operator, and the Klauder-Gazeau temporally stable coherent states) are defined and investigated for this oscillator.

  4. Propagation of superconducting coherence via chiral quantum-Hall edge channels.

    Science.gov (United States)

    Park, Geon-Hyoung; Kim, Minsoo; Watanabe, Kenji; Taniguchi, Takashi; Lee, Hu-Jong

    2017-09-08

    Recently, there has been significant interest in superconducting coherence via chiral quantum-Hall (QH) edge channels at an interface between a two-dimensional normal conductor and a superconductor (N-S) in a strong transverse magnetic field. In the field range where the superconductivity and the QH state coexist, the coherent confinement of electron- and hole-like quasiparticles by the interplay of Andreev reflection and the QH effect leads to the formation of Andreev edge states (AES) along the N-S interface. Here, we report the electrical conductance characteristics via the AES formed in graphene-superconductor hybrid systems in a three-terminal configuration. This measurement configuration, involving the QH edge states outside a graphene-S interface, allows the detection of the longitudinal and QH conductance separately, excluding the bulk contribution. Convincing evidence for the superconducting coherence and its propagation via the chiral QH edge channels is provided by the conductance enhancement on both the upstream and the downstream sides of the superconducting electrode as well as in bias spectroscopy results below the superconducting critical temperature. Propagation of superconducting coherence via QH edge states was more evident as more edge channels participate in the Andreev process for high filling factors with reduced valley-mixing scattering.

  5. Generation and Use of Coherent Transition Radiation from Short Electron Bunches

    Energy Technology Data Exchange (ETDEWEB)

    Settakorn, Chitrlada

    2001-08-28

    When accelerated, an electron bunch emits coherent radiation at wavelengths longer than or comparable to the bunch length. The coherent radiation intensity scales with the square of the number of electrons per bunch, and its spectrum is determined by the squared Fourier transform of the electron bunch distribution. At the SUNSHINE (Stanford University Short Intense Electron Source) facility, electron bunches can be generated as short as σ_z = 36 μm (120 femtosecond duration), and such bunches can emit coherent radiation in the far-infrared. Since a typical electron population in a bunch is 10^8-10^9, the coherent radiation intensity is much higher than that of incoherent radiation, as well as that of a conventional far-infrared radiation source. This work concentrates on coherent transition and diffraction radiation from short electron bunches as a potential high-intensity far-infrared radiation source and as a tool for sub-picosecond electron bunch length measurements. Coherent transition radiation generated from a 25 MeV beam at a vacuum-metal interface is characterized. Such a high-intensity radiation source allows far-infrared spectroscopy to be conducted conveniently with a Michelson interferometer and a room-temperature detector. Measurements of the refractive index of silicon are described to demonstrate the possibilities of far-infrared spectroscopy using coherent transition radiation. Coherent diffraction radiation, which is closely related to coherent transition radiation, can be considered another potential FIR radiation source. Since the perturbation of the electron beam by the radiation generation is relatively small, it has the advantage of being a nondestructive radiation source.
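The N² scaling and the Fourier-transform relation mentioned above are conventionally written with a longitudinal form factor; the expressions below are the standard textbook form (with S(z) the normalized longitudinal bunch density), not formulas quoted from the thesis itself:

```latex
% Spectral intensity radiated by a bunch of N electrons, given the
% single-electron spectrum I_1(\omega) and the bunch form factor f(\omega):
I_N(\omega) = I_1(\omega)\,\bigl[\,N + N(N-1)\,|f(\omega)|^2\,\bigr],
\qquad
f(\omega) = \int S(z)\, e^{i\omega z / c}\, dz .
% For a Gaussian bunch of rms length \sigma_z:
|f(\omega)|^2 = e^{-\omega^2 \sigma_z^2 / c^2}
```

For wavelengths long compared with σ_z the form factor approaches unity, and the N(N-1) term dominates over the incoherent N term, which is why bunches of 10^8-10^9 electrons radiate so intensely in the far-infrared.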

  6. ANALYZING AVIATION SAFETY REPORTS: FROM TOPIC MODELING TO SCALABLE MULTI-LABEL CLASSIFICATION

    Data.gov (United States)

    National Aeronautics and Space Administration — ANALYZING AVIATION SAFETY REPORTS: FROM TOPIC MODELING TO SCALABLE MULTI-LABEL CLASSIFICATION AMRUDIN AGOVIC*, HANHUAI SHAN, AND ARINDAM BANERJEE Abstract. The...

  7. Implementing a hardware-friendly wavelet entropy codec for scalable video

    Science.gov (United States)

    Eeckhaut, Hendrik; Christiaens, Mark; Devos, Harald; Stroobandt, Dirk

    2005-11-01

    In the RESUME project (Reconfigurable Embedded Systems for Use in Multimedia Environments) we explore the benefits of an implementation of scalable multimedia applications using reconfigurable hardware by building an FPGA implementation of a scalable wavelet-based video decoder. The term "scalable" refers to a design that can easily accommodate changes in quality of service with minimal computational overhead. This is important for portable devices that have different Quality of Service (QoS) requirements and have varying power restrictions. The scalable video decoder consists of three major blocks: a Wavelet Entropy Decoder (WED), an Inverse Discrete Wavelet Transformer (IDWT) and a Motion Compensator (MC). The WED decodes entropy encoded parts of the video stream into wavelet transformed frames. These frames are decoded bit layer by bit layer; the more bit layers are decoded, the higher the image quality (scalability in image quality). Resolution scalability is obtained as an inherent property of the IDWT. Finally, frame-rate scalability is achieved through hierarchical motion compensation. In this article we present the results of our investigation into the hardware implementation of such a scalable video codec. In particular, we found that the implementation of the entropy codec is a significant bottleneck. We present an alternative, hardware-friendly algorithm for entropy coding with excellent data locality (both temporal and spatial), streaming capabilities, a high degree of parallelism, a smaller memory footprint and state-of-the-art compression while maintaining all required scalability properties. These claims are supported by an effective hardware implementation on an FPGA.

  8. SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS

    Data.gov (United States)

    National Aeronautics and Space Administration — SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS VARUN CHANDOLA AND RANGA RAJU VATSAVAI Abstract. Biomass monitoring,...

  9. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.

  10. Solution-Processing of Organic Solar Cells: From In Situ Investigation to Scalable Manufacturing

    KAUST Repository

    Abdelsamie, Maged

    2016-12-05

    Photovoltaics provide a feasible route to meeting the substantial increase in worldwide energy demand. Solution-processable organic photovoltaics (OPVs) have attracted attention in the last decade because of the promise of low-cost manufacturing of sufficiently efficient devices at high throughput on large-area rigid or flexible substrates with potentially low energy and carbon footprints. In OPVs, the photoactive layer is a bulk heterojunction (BHJ) layer, typically composed of a blend of electron-donating (D) and electron-accepting (A) materials which phase separate at the nanoscale and form a heterojunction at the D-A interface that plays a crucial role in the generation of charges. Despite the tremendous progress that has been made in increasing the efficiency of organic photovoltaics over the last few years, with power conversion efficiency increasing from 8% to 13% over the duration of this PhD dissertation, there have been numerous debates on the mechanisms of formation of the crucial BHJ layer and few clues about how to successfully transfer these lessons to scalable processes. This stems in large part from a lack of understanding of how BHJ layers form from solution, which makes it challenging to design BHJs and to control their formation in laboratory-based processes, such as spin-coating, let alone to transfer them successfully to the scalable processes required for the manufacturing of organic solar cells. Consequently, the OPV community has in recent years sought to better understand the key characteristics of state-of-the-art lab-based organic solar cells and made efforts to shed light on how the BHJ forms in laboratory-based processes as well as in scalable processes.
    We take the view that understanding the formation of the solution-processed bulk heterojunction (BHJ) photoactive layer, where crucial photovoltaic processes take place, is one of the most crucial steps to developing strategies towards the

  11. Scalable and Fault Tolerant Failure Detection and Consensus

    Energy Technology Data Exchange (ETDEWEB)

    Katti, Amogh [University of Reading, UK; Di Fatta, Giuseppe [University of Reading, UK; Naughton III, Thomas J [ORNL; Engelmann, Christian [ORNL

    2015-01-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
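The paper's algorithms gossip knowledge of failed processes until all alive processes agree; as a minimal, hypothetical sketch of why the number of Gossip cycles scales logarithmically with system size (this is not the paper's consensus algorithm), the toy below spreads a single fact by push-gossip:

```python
import random

def gossip_rounds_to_consensus(n, seed=0):
    """Toy push-gossip: process 0 learns a fact (e.g. a detected failure);
    each round, every informed process tells one uniformly random peer.
    Returns the number of rounds until all n processes are informed."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        for _ in list(informed):
            informed.add(rng.randrange(n))  # push to a random peer
        rounds += 1
    return rounds

# The informed set can at most double per round, so rounds grow as O(log n).
for n in (16, 256, 4096):
    print(n, gossip_rounds_to_consensus(n))
```

A real MPI_Comm_shrink implementation must additionally agree on the *set* of failed processes and tolerate failures during the gossip itself, but the logarithmic spreading behavior is the same mechanism reported in the paper's results.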

  12. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs, demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment (and improve) our implementation as well as adapt it to new use cases.

  13. Performance and Scalability Evaluation of the Ceph Parallel File System

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Nelson, Mark [Inktank Storage, Inc.; Oral, H Sarp [ORNL; Settlemyer, Bradley W [ORNL; Atchley, Scott [ORNL; Caldwell, Blake A [ORNL; Hill, Jason J [ORNL

    2013-01-01

    Ceph is an open-source and emerging parallel distributed file and storage system technology. By design, Ceph assumes running on unreliable and commodity storage and network hardware and provides reliability and fault-tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation is performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved code quality, scalability, and performance. These changes should also benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development that shows great promise.

  14. Towards Scalable Strain Gauge-Based Joint Torque Sensors.

    Science.gov (United States)

    Khan, Hamza; D'Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G; Cuschieri, Alfred; Semini, Claudio

    2017-08-18

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) confirmed by data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR).

  15. Developing a scalable artificial photosynthesis technology through nanomaterials by design.

    Science.gov (United States)

    Lewis, Nathan S

    2016-12-06

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  16. A highly scalable peptide-based assay system for proteomics.

    Directory of Open Access Journals (Sweden)

    Igor A Kozlov

    We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.

  17. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.

  18. The dust acoustic waves in three dimensional scalable complex plasma

    CERN Document Server

    Zhukhovitskii, D I

    2015-01-01

    Dust acoustic waves in the bulk of a dust cloud in complex plasma of low pressure gas discharge under microgravity conditions are considered. The dust component of complex plasma is assumed to be a scalable system that conforms to the ionization equation of state (IEOS) developed in our previous study. We find singular points of this IEOS that determine the behavior of the sound velocity in different regions of the cloud. The fluid approach is utilized to deduce the wave equation that includes the neutral drag term. It is shown that the sound velocity is fully defined by the particle compressibility, which is calculated on the basis of the scalable IEOS. The sound velocities and damping rates calculated for different 3D complex plasmas in both ac and dc discharges demonstrate a good correlation with experimental data that are within the limits of validity of the theory. The theory provides interpretation for the observed independence of the sound velocity on the coordinate and for a weak dependence on the particle ...
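The statement that the sound velocity is fully defined by the particle compressibility corresponds to the standard fluid relation written below in generic form (not the paper's specific IEOS); here m_d and n_d are the dust particle mass and number density, and κ_s the adiabatic compressibility:

```latex
c_s^2 = \left(\frac{\partial P}{\partial \rho}\right)_{\!s}
      = \frac{1}{\rho\,\kappa_s}
      = \frac{1}{m_d\, n_d\, \kappa_s},
\qquad
\kappa_s = \frac{1}{\rho}\left(\frac{\partial \rho}{\partial P}\right)_{\!s}
```

Evaluating κ_s from the scalable IEOS at each point of the cloud then yields the spatial profile of c_s discussed in the abstract.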

  19. Developing a scalable artificial photosynthesis technology through nanomaterials by design

    Science.gov (United States)

    Lewis, Nathan S.

    2016-12-01

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  20. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Directory of Open Access Journals (Sweden)

    Zihao Yang

    2017-09-01

    The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/KT at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
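The reported linear scaling of voltage with wire length can be sketched with the simple transverse-field relation E = N·B·(dT/dr); the numbers below are hypothetical illustrations (only the coefficient magnitude is taken from the abstract), not the paper's measured operating point:

```python
def nernst_voltage(N, B, dT_dr, length):
    """Azimuthal Nernst field E = N * B * dT/dr, integrated along a
    coiled wire of total length `length` (all quantities in SI units)."""
    return N * B * dT_dr * length

# Illustrative inputs (hypothetical except the coefficient magnitude):
N = 2.6e-6    # Nernst coefficient, V/(K*T), magnitude reported for Galfenol
B = 0.5       # axial magnetic field, T
dT_dr = 1e3   # radial temperature gradient, K/m
L = 2.0       # total coiled wire length, m
print(nernst_voltage(N, B, dT_dr, L))  # ≈ 2.6 mV
```

Doubling the number of turns doubles L and hence the open-circuit voltage, which is the scalability argument made in the abstract.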

  1. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Science.gov (United States)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/KT at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.

  2. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work on a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows, including turbulence, on heterogeneous architectures; it distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieves excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. The algorithm scales to large clusters, showing both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.
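The Biot-Savart kernel named above is, in its direct form, an O(N²) sum over all vortex particles; this is the computation the FMM reduces to near-linear cost. A minimal direct-summation sketch with a regularized (vortex-blob) kernel and made-up test particles, not the paper's implementation:

```python
import numpy as np

# Direct O(N^2) Biot-Savart summation over vortex particles -- the kernel
# that an FMM accelerates. A regularized (blob) kernel avoids the singular
# self-interaction; positions and strengths are arbitrary test data.

def biot_savart_direct(x, gamma, eps=1e-3):
    """Velocity induced at each particle by all vortex particles.
    x: (N,3) positions, gamma: (N,3) vector circulation strengths."""
    u = np.zeros_like(x)
    for i in range(len(x)):
        r = x[i] - x                        # (N,3) separation vectors
        r2 = np.sum(r * r, axis=1) + eps**2 # regularized squared distance
        kern = 1.0 / (4 * np.pi * r2**1.5)  # (N,) kernel weights
        u[i] = np.sum(np.cross(gamma, r) * kern[:, None], axis=0)
    return u

rng = np.random.default_rng(0)
x = rng.random((100, 3))          # particle positions
g = rng.random((100, 3)) - 0.5    # particle strengths
u = biot_savart_direct(x, g)
print(u.shape)
```

The quadratic loop is exactly what makes billion-particle runs infeasible without the hierarchical far-field approximation described in the abstract.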

  3. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performs better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) confirmed by data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  4. Advanced aerosense display interfaces

    Science.gov (United States)

    Hopper, Darrel G.; Meyer, Frederick M.

    1998-09-01

High-resolution display technologies are being developed to meet the ever-increasing demand for realistic detail. The requirement for ever more visual information exceeds the capacity of fielded aerospace display interfaces. In this paper we begin an exploration of display interfaces and evolving aerospace requirements. Current and evolving standards for avionics, commercial, and flat panel displays are summarized and compared to near-term goals for military and aerospace applications. Aerospace and military requirements prior to 2005, up to UXGA and digital HDTV resolution, can be met by using commercial interface standard developments. Advanced aerospace applications demand yet higher resolutions (2560 × 2048 color pixels, 5120 × 4096 color pixels at 85 Hz, etc.) and necessitate the initiation of discussion herein of an 'ultra digital interface standard (UDIS)' which includes 'smart interface' features such as large memory and a blazingly fast resizing microcomputer. Interface capacity, I_T, increased about 10^5-fold from 1973 to 1998; 10^2 more is needed for UDIS.

  5. Universal computer interfaces

    CERN Document Server

    Dheere, RFBM

    1988-01-01

    Presents a survey of the latest developments in the field of the universal computer interface, resulting from a study of the world patent literature. Illustrating the state of the art today, the book ranges from basic interface structure, through parameters and common characteristics, to the most important industrial bus realizations. Recent technical enhancements are also included, with special emphasis devoted to the universal interface adapter circuit. Comprehensively indexed.

  6. Popeye Project: ROV interface

    Energy Technology Data Exchange (ETDEWEB)

    Scates, C.R.; Hernandez, D.A.; Hickok, D.D.

    1996-12-31

    This paper discusses the Remote Operated Vehicle (ROV) interface with the Popeye Project Subsea System. It describes the ROV-related plans, design philosophies, intervention tasks, tooling/equipment requirements, testing activities, and offshore installation experiences. Early identification and continuous consideration of the ROV interfaces significantly improved the overall efficiency of equipment designs and offshore operations. The Popeye Project helped advance the technology and standardization of ROV interfaces for deep water subsea production systems.

  7. Analysis of coherent structures during the 2009 CABINEX field campaign: Implications for atmospheric chemistry

    Science.gov (United States)

    Pressley, S. N.; Steiner, A. L.; Chung, S. H.; Edburg, S. L.; Jones, E.; Botros, A.

    2010-12-01

Intermittent coherent structures are an important component of turbulent exchange of mass, momentum, and energy at the biosphere-atmosphere interface. Specifically, above forested canopies, coherent structures can be responsible for a large fraction of the exchange of trace gases and aerosols between the sub-canopy (ground surface), the canopy, and the atmosphere. This study quantifies the coherent structures and associated turbulence intensity at the canopy interface for the Community Atmosphere-Biosphere Interactions Experiment (CABINEX) field campaign (July 1 - Aug 10, 2009) at the University of Michigan Biological Station (UMBS), and determines the effect of coherent structures on canopy air-parcel residence times and their importance for atmospheric chemistry. Two different methods of analysis are used to estimate the coherent exchange: 1) wavelet analysis and 2) quadrant-hole (Q-H) analysis (also referred to as conditional sampling). Wavelet analysis uses wavelet transforms to detect non-periodic signals with a variable duration. Using temperature ramp structures, the timing and magnitude of individual coherent 'events' can be evaluated over the duration of the campaign. Conversely, the Q-H analysis detects 'events' when |u'w'| ≥ H × u_rms w_rms, where H is a threshold parameter, u is the stream-wise velocity and w is the vertical velocity. Events are primarily comprised of high momentum air penetrating into the canopy (sweeps, u' > 0; w' < 0). Results from both techniques are compared under varying stability classes, and the number of events, total duration, and contribution to the total flux are analyzed for the full campaign. The contribution of coherent structures to the total canopy-atmosphere exchange is similar between the two methods, despite a greater number of events estimated from the Q-H analysis. These analyses improve the quantification of canopy mixing time at the UMBS site during CABINEX, and will aid in interpreting in-canopy processes
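The quadrant-hole criterion described above is straightforward to apply to a velocity time series. A sketch on synthetic data; the threshold H and the Gaussian surrogate series are assumptions standing in for real sonic-anemometer records:

```python
import numpy as np

# Quadrant-hole (conditional sampling) event detection as described in the
# abstract: an 'event' occurs when |u'w'| >= H * u_rms * w_rms. Sweeps are
# quadrant-4 events (u' > 0, w' < 0). Synthetic Gaussian series stand in
# for measured stream-wise (u) and vertical (w) velocities.

rng = np.random.default_rng(1)
u = 2.0 + 0.5 * rng.standard_normal(10_000)  # stream-wise velocity, m/s
w = 0.2 * rng.standard_normal(10_000)        # vertical velocity, m/s
H = 4.0                                      # hole-size threshold (assumed)

up, wp = u - u.mean(), w - w.mean()          # fluctuations u', w'
u_rms, w_rms = up.std(), wp.std()

events = np.abs(up * wp) >= H * u_rms * w_rms  # Q-H detection criterion
sweeps = events & (up > 0) & (wp < 0)          # high-momentum air moving down

print(f"{events.sum()} events, of which {sweeps.sum()} sweeps")
```

Varying H trades sensitivity against selectivity, which is why the abstract reports more events from Q-H analysis than from the wavelet method.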

  8. Stress Relaxation in a Perfect Nanocrystal by Coherent Ejection of Lattice Layers

    Science.gov (United States)

    Chaudhuri, Abhishek; Sengupta, Surajit; Rao, Madan

    2005-12-01

    We show that a small crystal trapped within a potential well and in contact with its own fluid responds to large compressive stresses by a novel mechanism—the transfer of complete lattice layers across the solid-fluid interface. Further, when the solid is impacted by a momentum impulse set up in the fluid, a coherently ejected lattice layer carries away a definite quantity of energy and momentum, resulting in a sharp peak in the calculated phonon absorption spectrum. Apart from its relevance to studies of stability and failure of small sized solids, such coherent nanospallation may be used to make atomic wires or monolayer films.

  9. Quantum coherence and correlations in quantum system

    Science.gov (United States)

    Xi, Zhengjun; Li, Yongming; Fan, Heng

    2015-01-01

Criteria for measures quantifying quantum coherence, a unique property of quantum systems, have been proposed recently. In this paper, we first give an uncertainty-like expression relating the coherence and the entropy of a quantum system. This finding allows us to discuss the relations between entanglement and coherence. Further, we discuss in detail the relations among the coherence, the discord and the deficit in bipartite quantum systems. We show that the one-way quantum deficit is equal to the sum of the quantum discord and the relative entropy of coherence of the measured subsystem. PMID:26094795
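A standard quantifier in this literature is the relative entropy of coherence, C_r(ρ) = S(Δ(ρ)) − S(ρ), where S is the von Neumann entropy and Δ dephases ρ in the chosen incoherent basis. A minimal numerical sketch; the example state is ours, not the paper's:

```python
import numpy as np

# Relative entropy of coherence: C_r(rho) = S(diag(rho)) - S(rho), where
# diag(rho) keeps only the diagonal of rho in the incoherent basis.

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(evals * np.log2(evals)))

def rel_entropy_coherence(rho):
    dephased = np.diag(np.diag(rho))      # fully dephased state Delta(rho)
    return vn_entropy(dephased) - vn_entropy(rho)

plus = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|, maximally coherent qubit
print(rel_entropy_coherence(plus))          # ~1.0 bit
```

For any diagonal (incoherent) state the two entropies coincide and the measure vanishes, as required of a coherence monotone.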

  10. Electromagnetic Interface Testing Facility

    Data.gov (United States)

Federal Laboratory Consortium — The Electromagnetic Interface Testing facility supports such testing as Emissions, Field Strength, Mode Stirring, EMP Pulser, 4 Probe Monitoring/Leveling System, and...

  11. Coherence number as a discrete quantum resource

    Science.gov (United States)

    Chin, Seungbeom

    2017-10-01

We introduce a discrete coherence monotone named the coherence number, which is a generalization of the coherence rank to mixed states. After defining the coherence number in a manner similar to that of the Schmidt number in entanglement theory, we present a necessary and sufficient condition on the coherence number for a coherent state to be convertible to an entangled state of nonzero k-concurrence (a member of the generalized concurrence family with 2 ≤ k ≤ d). As an application of the coherence number to a practical quantum system, Grover's search algorithm for N items is considered. We show that the coherence number remains N and falls abruptly when the success probability of the searching process becomes maximal. This phenomenon motivates us to analyze the depletion pattern of C_c(N) (the last member of the generalized coherence concurrence, nonzero when the coherence number is N), which turns out to be an optimal resource for the process since it is completely consumed to finish the searching task. The generalization of the original Grover algorithm to arbitrary (mixed) initial states is also discussed, which reveals the boundary condition for the coherence to be monotonically decreasing under the process.
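For pure states the coherence number reduces to the coherence rank: the count of nonzero amplitudes in the incoherent basis. A sketch of this count along Grover iterations, with arbitrary illustrative size and marked index; it shows the rank staying at N throughout the amplification, consistent with the abrupt fall described above occurring only when the state collapses onto the marked item:

```python
import numpy as np

# Coherence rank (nonzero amplitudes of a pure state) tracked through
# Grover iterations. N and the marked index are arbitrary test choices.

N = 64
marked = 7
psi = np.full(N, 1 / np.sqrt(N))        # uniform superposition, rank N

def grover_step(psi):
    psi = psi.copy()
    psi[marked] *= -1                    # oracle: phase-flip the marked item
    return 2 * psi.mean() - psi          # diffusion: inversion about the mean

steps = int(round(np.pi / 4 * np.sqrt(N)))  # ~optimal iteration count
for _ in range(steps):
    psi = grover_step(psi)

rank = int(np.sum(np.abs(psi) > 1e-9))
print(f"success prob {abs(psi[marked])**2:.3f}, coherence rank {rank}")
```

Even at near-unit success probability the unmarked amplitudes are small but nonzero, so the rank stays N until the very end of the search.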

  12. Operational resource theory of total quantum coherence

    Science.gov (United States)

    Yang, Si-ren; Yu, Chang-shui

    2018-01-01

Quantum coherence is an essential feature of quantum mechanics and an important physical resource in quantum information. Recently, the resource theory of quantum coherence has been established in parallel with that of entanglement. In a resource theory, a resource is well defined given three ingredients: the free states, the resource states, and the (restricted) free operations. In this paper, we study the resource theory of coherence in a different light; that is, we consider the total coherence, defined as the basis-free coherence maximized over all potential bases. We define the distillable total coherence and the total coherence cost, and in both the asymptotic and single-copy regimes show the reversible transformation between a state with a certain total coherence and a state with the unit reference total coherence. Furthermore, we demonstrate that the total coherence can be completely converted to total correlation in equal amount by the free operations. We also provide alternative understandings of the total coherence based on the entanglement and on the total correlation, respectively.

  13. The global coherence initiative: creating a coherent planetary standing wave.

    Science.gov (United States)

    McCraty, Rollin; Deyhle, Annette; Childre, Doc

    2012-03-01

The much anticipated year of 2012 is now here. Amidst the predictions and cosmic alignments that many are aware of, one thing is for sure: it will be an interesting and exciting year as the speed of change continues to increase, bringing both chaos and great opportunity. One benchmark of these times is a shift in many people from a paradigm of competition to one of greater cooperation. All across the planet, increasing numbers of people are practicing heart-based living, and more groups are forming activities that support positive change and creative solutions for manifesting a better world. The Global Coherence Initiative (GCI) is a science-based, co-creative project to unite people in heart-focused care and intention. GCI is working in concert with other initiatives to realize the increased power of collective intention and consciousness. The convergence of several independent lines of evidence provides strong support for the existence of a global information field that connects all living systems and consciousness. Every cell in our bodies is bathed in an external and internal environment of fluctuating invisible magnetic forces that can affect virtually every cell and circuit in biological systems. Therefore, it should not be surprising that numerous physiological rhythms in humans and global collective behaviors are not only synchronized with solar and geomagnetic activity, but that disruptions in these fields can create adverse effects on human health and behavior. The most likely mechanism for explaining how solar and geomagnetic influences affect human health and behavior is a coupling between the human nervous system and resonating geomagnetic frequencies, called Schumann resonances, which occur in the earth-ionosphere resonant cavity, and Alfvén waves. It is well established that these resonant frequencies directly overlap with those of the human brain and cardiovascular system. If all living systems are indeed interconnected and communicate with each other

  14. Quantum coherences of indistinguishable particles

    Science.gov (United States)

    Sperling, Jan; Perez-Leija, Armando; Busch, Kurt; Walmsley, Ian A.

    2017-09-01

    We study different notions of quantum correlations in multipartite systems of distinguishable and indistinguishable particles. Based on the definition of quantum coherence for a single particle, we consider two possible extensions of this concept to the many-particle scenario and determine the influence of the exchange symmetry. Moreover, we characterize the relation of multiparticle coherence to the entanglement of the compound quantum system. To support our general treatment with examples, we consider the quantum correlations of a collection of qudits. The impact of local and global quantum superpositions on the different forms of quantum correlations is discussed. For differently correlated states in the bipartite and multipartite scenarios, we provide a comprehensive characterization of the various forms and origins of quantum correlations.

  15. Entropic cohering power in quantum operations

    Science.gov (United States)

    Xi, Zhengjun; Hu, Ming-Liang; Li, Yongming; Fan, Heng

    2018-02-01

Coherence is a basic feature of quantum systems and a common necessary condition for quantum correlations. It is also an important physical resource in quantum information processing. In this paper, using relative entropy, we consider a more general definition of the cohering power of quantum operations. First, we calculate the cohering power of unitary quantum operations and show that the amount of distributed coherence caused by non-unitary quantum operations cannot exceed the quantum-incoherent relative entropy between the system of interest and its environment. We then find that the difference between the distributed coherence and the cohering power is larger than the quantum-incoherent relative entropy. As an application, we consider the distributed coherence caused by purification.
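For a unitary, the relative-entropy cohering power can be computed as the maximum coherence generated from incoherent basis inputs: since U|i⟩ is pure, its relative entropy of coherence is just the Shannon entropy of the i-th column of |U|². A sketch under that standard definition, not a formula specific to this paper; the Hadamard example is ours:

```python
import numpy as np

# Relative-entropy cohering power of a unitary U, computed as the maximum
# over incoherent basis inputs |i> of the coherence of the output U|i>.
# For pure outputs this is the Shannon entropy of the column |U_ji|^2.

def cohering_power(U):
    probs = np.abs(U)**2                  # column i: output populations for |i>
    logs = np.log2(probs, where=probs > 0, out=np.zeros_like(probs))
    h = -np.sum(probs * logs, axis=0)     # Shannon entropy of each column
    return float(h.max())                 # maximize over basis inputs

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
print(cohering_power(H))                        # ~1.0 bit
print(cohering_power(np.eye(2)))                # 0.0 -- identity creates none
```

Any diagonal (incoherent) unitary has zero cohering power under this definition, while the Hadamard saturates the single-qubit maximum of one bit.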

  16. Coherent Communications, Imaging and Targeting

    Energy Technology Data Exchange (ETDEWEB)

Stappaerts, E; Baker, K; Gavel, D; Wilks, S; Olivier, S; Brase, J

    2003-10-03

    Laboratory and field demonstration results obtained as part of the DARPA-sponsored Coherent Communications, Imaging and Targeting (CCIT) program are reviewed. The CCIT concept uses a Phase Conjugation Engine based on a quadrature receiver array, a hologram processor and a spatial light modulator (SLM) for high-speed, digital beam control. Progress on the enabling MEMS SLM, being developed by a consortium consisting of LLNL, academic institutions and small businesses, is presented.

  17. Neuronal avalanches and coherence potentials

    Science.gov (United States)

    Plenz, D.

    2012-05-01

    The mammalian cortex consists of a vast network of weakly interacting excitable cells called neurons. Neurons must synchronize their activities in order to trigger activity in neighboring neurons. Moreover, interactions must be carefully regulated to remain weak (but not too weak) such that cascades of active neuronal groups avoid explosive growth yet allow for activity propagation over long-distances. Such a balance is robustly realized for neuronal avalanches, which are defined as cortical activity cascades that follow precise power laws. In experiments, scale-invariant neuronal avalanche dynamics have been observed during spontaneous cortical activity in isolated preparations in vitro as well as in the ongoing cortical activity of awake animals and in humans. Theory, models, and experiments suggest that neuronal avalanches are the signature of brain function near criticality at which the cortex optimally responds to inputs and maximizes its information capacity. Importantly, avalanche dynamics allow for the emergence of a subset of avalanches, the coherence potentials. They emerge when the synchronization of a local neuronal group exceeds a local threshold, at which the system spawns replicas of the local group activity at distant network sites. The functional importance of coherence potentials will be discussed in the context of propagating structures, such as gliders in balanced cellular automata. Gliders constitute local population dynamics that replicate in space after a finite number of generations and are thought to provide cellular automata with universal computation. Avalanches and coherence potentials are proposed to constitute a modern framework of cortical synchronization dynamics that underlies brain function.

  18. The Portals 4.0 network programming interface.

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin; Wheeler, Kyle Bruce; Hemmert, Karl Scott; Riesen, Rolf E.; Underwood, Keith Douglas; Maccabe, Arthur Bernard; Hudson, Trammell B.

    2012-11-01

This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  19. The portals 4.0.1 network programming interface.

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin; Wheeler, Kyle Bruce; Hemmert, Karl Scott; Riesen, Rolf E.; Underwood, Keith Douglas; Maccabe, Arthur Bernard; Hudson, Trammell B.

    2013-04-01

This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  20. The Portals 4.1 Network Programming Interface

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Brian; Brightwell, Ronald B.; Grant, Ryan; Hemmert, Karl Scott; Pedretti, Kevin; Wheeler, Kyle; Underwood, Keith D; Riesen, Rolf; Maccabe, Arthur B.; Hudson, Trammel

    2017-04-01

This report presents a specification for the Portals 4 network programming interface. Portals 4 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4 is well suited to massively parallel processing and embedded systems. Portals 4 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  1. Quantum repeaters with imperfect memories: Cost and scalability

    Science.gov (United States)

    Razavi, M.; Piani, M.; Lütkenhaus, N.

    2009-09-01

Memory dephasing and its impact on the rate of entanglement generation in quantum repeaters are addressed. For systems that rely on probabilistic schemes for entanglement distribution and connection, we estimate the maximum achievable rate per employed memory for our optimized partial nesting protocol when a large number of memories are used in each node. The above rate scales polynomially with distance, L, if quantum memories with infinitely long coherence times are available or if we employ a fully fault-tolerant scheme. For memories with finite coherence times and no fault-tolerant protection, the above rate, even in the optimistic case, degrades exponentially in L, regardless of the employed purification scheme. It decays, at best, exponentially in L if no purification is used.

  2. Interface colloidal robotic manipulator

    Science.gov (United States)

    Aronson, Igor; Snezhko, Oleksiy

    2015-08-04

    A magnetic colloidal system confined at the interface between two immiscible liquids and energized by an alternating magnetic field dynamically self-assembles into localized asters and arrays of asters. The colloidal system exhibits locomotion and shape change. By controlling a small external magnetic field applied parallel to the interface, structures can capture, transport, and position target particles.

  3. Interfaces in nanoscale photovoltaics

    NARCIS (Netherlands)

    Öner, S.Z.

    2016-01-01

    This thesis deals with material interfaces in nanoscale photovoltaics. Interface properties between the absorbing semiconductor and other employed materials are crucial for an efficient solar cell. While the optical properties are largely unaffected by a few nanometer thin layer, the electronic

  4. Designing the Instructional Interface.

    Science.gov (United States)

    Lohr, L. L.

    2000-01-01

    Designing the instructional interface is a challenging endeavor requiring knowledge and skills in instructional and visual design, psychology, human-factors, ergonomic research, computer science, and editorial design. This paper describes the instructional interface, the challenges of its development, and an instructional systems approach to its…

  5. User Interface Technology Survey.

    Science.gov (United States)

    1987-04-01

Interface can be manufactured. The user interface builder may be provided with tools to enhance the building block set, e.g., an icon and font editor to add...ity and easy extensibility of the command set. It supports command history, execution of previous commands, and editing of commands. Through the

  6. Interface, a dispersed architecture

    NARCIS (Netherlands)

    Vissers, C.A.

    1976-01-01

    Past and current specification techniques use timing diagrams and written text to describe the phenomenology of an interface. This paper treats an interface as the architecture of a number of processes, which are dispersed over the related system parts and the message path. This approach yields a

  7. Modeling and Measurement of Spatial Coherence for Normal Incidence Seafloor Scattering

    Science.gov (United States)

    Brown, Daniel C.

    A small body of literature exists regarding the spatial coherence of the incoherent (or non-specular) component of the field scattered from the sea floor. Within this literature, the seafloor is described using simple models that consider only one or two properties that determine the spatial coherence. Additionally, the literature has focused on describing the average spatial coherence over an ensemble of seafloor realizations. The variability of the coherence that is observed for individual pings has been described neither theoretically nor through experimental observation. This research has extended the existing models for the mean spatial coherence to include a broader range of physical processes that determine the coherence of the field scattered from the seafloor near normal incidence. In particular, the effects of sensor directivity, seafloor slope, sediment scattering strength, interface transmission coefficient, sediment attenuation coefficient, sediment layer thickness, and temporal windowing have been explored. This is accomplished through the development of a model for the spatial coherence that is based upon the van Cittert-Zernike theorem. The results of this modeling show that in many realistic scenarios it is a combination of multiple parameters that determine the observed spatial coherence. This represents a significant extension in understanding that reaches beyond those processes that have been described previously. In particular, the application of a temporal window was found to have a significant impact on the spatial coherence in many scenarios. This research provides the first documentation of this effect. In addition to the modeling effort, an experiment was conducted where the spatial coherence is measured for scattering from the lake bed at Seneca Lake, New York. In this experiment, the spatial coherence of the scattered field was measured over a number of pings. This data set was used to form an ensemble that compares favorably to

  8. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to

  9. Scalable nanostructuring on polymer by a SiC stamp: optical and wetting effects

    DEFF Research Database (Denmark)

    Argyraki, Aikaterini; Lu, Weifang; Petersen, Paul Michael

    2015-01-01

    A method for fabricating scalable antireflective nanostructures on polymer surfaces (polycarbonate) is demonstrated. The transition from small scale fabrication of nanostructures to a scalable replication technique can be quite challenging. In this work, an area per print corresponding to a 2-inch...

  10. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  11. Scalable Multifunction Active Phased Array Systems: from concept to implementation; 2006BU1-IS

    NARCIS (Netherlands)

    LaMana, M.; Huizing, A.

    2006-01-01

    The SMRF (Scalable Multifunction Radio Frequency Systems) concept has been launched in the WEAG (Western European Armament Group) context, recently restructured into the EDA (European Defence Agency). A derived concept is introduced here, namely the SMRF-APAS (Scalable Multifunction Radio

  12. A NEaT Design for reliable and scalable network stacks

    NARCIS (Netherlands)

    Hruby, Tomas; Giuffrida, Cristiano; Sambuc, Lionel; Bos, Herbert; Tanenbaum, Andrew S.

    2016-01-01

    Operating systems provide a wide range of services, which are crucial for the increasingly high reliability and scalability demands of modern applications. Providing both reliability and scalability at the same time is hard. Commodity OS architectures simply lack the design abstractions to do so for

  13. Entanglement and topological interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Brehm, E.; Brunner, I.; Jaud, D.; Schmidt-Colinet, C. [Arnold Sommerfeld Center, Ludwig-Maximilians-Universitaet, Theresienstrasse 37, 80333, Muenchen (Germany)

    2016-06-15

    In this paper we consider entanglement entropies in two-dimensional conformal field theories in the presence of topological interfaces. Tracing over one side of the interface, the leading term of the entropy remains unchanged. The interface however adds a subleading contribution, which can be interpreted as a relative (Kullback-Leibler) entropy with respect to the situation with no defect inserted. Reinterpreting boundaries as topological interfaces of a chiral half of the full theory, we rederive the left/right entanglement entropy in analogy with the interface case. We discuss WZW models and toroidal bosonic theories as examples. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  14. User Interface History

    DEFF Research Database (Denmark)

    Jørgensen, Anker Helms; Myers, Brad A

    2008-01-01

User Interfaces have been around as long as computers have existed, even well before the field of Human-Computer Interaction was established. Over the years, some papers on the history of Human-Computer Interaction and User Interfaces have appeared, primarily focusing on the graphical interface era ... and early visionaries such as Bush, Engelbart and Kay. With the User Interface being a decisive factor in the proliferation of computers in society and since it has become a cultural phenomenon, it is time to paint a more comprehensive picture of its history. This SIG will investigate the possibilities ... of launching a concerted effort towards creating a History of User Interfaces. ...

  15. After Rigid Interfaces

    DEFF Research Database (Denmark)

    Troiano, Giovanni Maria

    Deformable and shape-changing interfaces are rapidly emerging in the field of human-computer interaction (HCI). Deformable interfaces provide users with newer input possibilities such as bending, squeezing, or stretching, which were impossible to achieve with rigid interfaces. Shape...... sensors in the five preferred objects and programmed them for controlling sounds with computer software. Finally, we ran a performance study where six musicians performed music with deformable interfaces at their studios. Results from the performance study show that musicians systematically map......, Transformation, Adaptation and Physicalization. In synthesis, the work presented in this thesis shows (1) implications of usefulness for deformable interfaces and how their new input modalities can redefine the way users interact with computers, and (2) how a systematic understanding of conventional design...

  16. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide the infrastructure CAF relies on, implementing new language and runtime features, producing an open-source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through benchmarks. The report details the research, development, findings, and conclusions from this work.

  17. Optimization of Hierarchical Modulation for Use of Scalable Media

    Directory of Open Access Journals (Sweden)

    Heneghan Conor

    2010-01-01

Full Text Available This paper studies the Hierarchical Modulation, a transmission strategy of the approaching scalable multimedia over frequency-selective fading channel for improving the perceptible quality. An optimization strategy for Hierarchical Modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications to make a free choice of relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. The similar optimization can be used in multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as the implementation example of this optimization strategy, and demonstrate savings in SNR and improvement in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.

  18. Hierarchical Sets: Analyzing Pangenome Structure through Scalable Set Visualizations

    DEFF Research Database (Denmark)

    Pedersen, Thomas Lin

    2017-01-01

    information to increase in knowledge. As the pangenome data structure is essentially a collection of sets we explore the potential for scalable set visualization as a tool for pangenome analysis. We present a new hierarchical clustering algorithm based on set arithmetics that optimizes the intersection sizes...... along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence pattern do...... of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https...
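The set-arithmetic clustering idea described in the record can be sketched as a greedy agglomeration that always merges the two clusters sharing the largest intersection, so the "core" along each branch stays as large as possible. This is a hypothetical illustration with invented genome names, not the hierarchicalSets algorithm itself.

```python
# Greedy set-based agglomerative clustering (illustrative sketch only):
# merge the pair of clusters with the largest intersection; each merged
# cluster's "core" is the intersection of its children, mirroring how a
# pangenome core shrinks up the evolutionary hierarchy.

def cluster_sets(named_sets):
    """named_sets: dict mapping genome name -> set of gene families.
    Returns a nested-tuple dendrogram of genome names."""
    clusters = [(name, s) for name, s in named_sets.items()]
    while len(clusters) > 1:
        # pick the pair with the largest intersection ("core") size
        i, j = max(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: len(clusters[ab[0]][1] & clusters[ab[1]][1]),
        )
        (na, sa), (nb, sb) = clusters[i], clusters[j]
        merged = ((na, nb), sa & sb)  # core of the merged cluster
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters[0][0]

genomes = {
    "E1": {"a", "b", "c", "d"},   # invented toy gene-family sets
    "E2": {"a", "b", "c", "e"},
    "S1": {"a", "f", "g"},
}
tree = cluster_sets(genomes)
print(tree)  # E1 and E2 merge first (core size 3), then S1 joins
```

The closely related genomes pair up first because their intersection is largest, which is exactly the property the composite dendrogram and icicle plot visualize.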

  19. Scalable Spectrum Sharing Mechanism for Local Area Networks Deployment

    DEFF Research Database (Denmark)

    Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan Zsolt

    2010-01-01

The availability on the market of powerful and lightweight mobile devices has led to a fast diffusion of mobile services for end users and the trend is shifting from voice based services to multimedia contents distribution. The current access networks are, however, able to support relatively low...... data rates and with limited Quality of Service (QoS). In order to extend the access to high data rate services to wireless users, the International Telecommunication Union (ITU) established new requirements for future wireless communication technologies of up to 1Gbps in low mobility and up to 100Mbps...... management (RRM) functionalities in a CR framework, able to minimize the inter-OLA interferences. A Game Theory-inspired scalable algorithm is introduced to enable a distributed resource allocation in competitive radio environments. The proof-of-concept simulation results demonstrate the effectiveness......

  20. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey

    2008-02-29

Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  1. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

    Full Text Available Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation based preconditioners. It is also presented how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.

  2. Scalable Fabrication of 2D Semiconducting Crystals for Future Electronics

    Directory of Open Access Journals (Sweden)

    Jiantong Li

    2015-12-01

Full Text Available Two-dimensional (2D) layered materials are anticipated to be promising for future electronics. However, their electronic applications are severely restricted by the availability of such materials with high quality and at a large scale. In this review, we introduce systematically versatile scalable synthesis techniques in the literature for high-crystallinity large-area 2D semiconducting materials, especially transition metal dichalcogenides, and 2D material-based advanced structures, such as 2D alloys, 2D heterostructures and 2D material devices engineered at the wafer scale. Systematic comparison among different techniques is conducted with respect to device performance. The present status and the perspective for future electronics are discussed.

  3. MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Directory of Open Access Journals (Sweden)

    Prem Prakash Jayaraman

    2014-05-01

Full Text Available Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data hence reducing the efforts in developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability towards real world applications. The results of evaluation and lessons learned are presented and discussed in this paper.

  4. A Modular, Scalable, Extensible, and Transparent Optical Packet Buffer

    Science.gov (United States)

    Small, Benjamin A.; Shacham, Assaf; Bergman, Keren

    2007-04-01

    We introduce a novel optical packet switching buffer architecture that is composed of multiple building-block modules, allowing for a large degree of scalability. The buffer supports independent and simultaneous read and write processes without packet rejection or misordering and can be considered a fully functional packet buffer. It can easily be programmed to support two prioritization schemes: first-in first-out (FIFO) and last-in first-out (LIFO). Because the system leverages semiconductor optical amplifiers as switching elements, wideband packets can be routed transparently. The operation of the system is discussed with illustrative packet sequences, which are then verified on an actual implementation composed of conventional fiber-optic componentry.
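The two prioritization schemes the buffer supports are easy to state in software. The sketch below models only the buffer semantics (independent writes and reads, FIFO vs. LIFO ordering), not the optical hardware; the class and method names are invented for illustration.

```python
from collections import deque

# Illustrative packet-buffer semantics: writes enqueue, reads dequeue in
# either first-in first-out (FIFO) or last-in first-out (LIFO) order.
class PacketBuffer:
    def __init__(self, mode="FIFO"):
        assert mode in ("FIFO", "LIFO")
        self.mode = mode
        self._q = deque()

    def write(self, packet):
        self._q.append(packet)       # no packet rejection on write

    def read(self):
        if not self._q:
            return None              # buffer empty
        return self._q.popleft() if self.mode == "FIFO" else self._q.pop()

fifo = PacketBuffer("FIFO")
lifo = PacketBuffer("LIFO")
for p in ("p1", "p2", "p3"):
    fifo.write(p)
    lifo.write(p)

print(fifo.read(), lifo.read())  # → p1 p3
```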

  5. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and fine detailed fluids such as smoke with fast increasing vortex filaments and smoke particles. The authors propose a novel vortex filaments in grids scheme in which the uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  6. Scalable, ultra-resistant structural colors based on network metamaterials

    CERN Document Server

    Galinski, Henning; Dong, Hao; Gongora, Juan S Totero; Favaro, Grégory; Döbeli, Max; Spolenak, Ralph; Fratalocchi, Andrea; Capasso, Federico

    2016-01-01

Structural colours have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realise robust colours with a scalable fabrication technique is still lacking, hampering the realisation of practical applications with this platform. Here we develop a new approach based on large scale network metamaterials, which combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how sub-wavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero (ENZ) regions generated in the metallic network, manifesting the formation of highly saturated structural colours that cover a wide portion of the spectrum. Ellipsometry measurements report the efficient observation of these colours even at angles of $70$ degrees. The network-like architecture of these nanoma...

  7. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    Directory of Open Access Journals (Sweden)

    Kae Nemoto

    2014-08-01

    Full Text Available Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  8. A Practical and Scalable Tool to Find Overlaps between Sequences

    Science.gov (United States)

    Haj Rachid, Maan

    2015-01-01

    The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment. PMID:25961045

  9. A Practical and Scalable Tool to Find Overlaps between Sequences

    Directory of Open Access Journals (Sweden)

    Maan Haj Rachid

    2015-01-01

    Full Text Available The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment.
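The all-pairs suffix-prefix problem can be stated with a naive quadratic sketch: for every ordered pair of strings (s, t), find the longest suffix of s that is also a prefix of t. The paper's compact prefix tree and parallel implementation are far more efficient; the code below only illustrates what is being computed, on invented reads.

```python
# Naive all-pairs suffix-prefix (illustration of the problem, not the
# paper's compact-prefix-tree algorithm).

def overlap(s, t):
    """Length of the longest suffix of s that is also a prefix of t."""
    for k in range(min(len(s), len(t)), 0, -1):
        if s[-k:] == t[:k]:
            return k
    return 0

def all_pairs_suffix_prefix(reads):
    return {(a, b): overlap(a, b) for a in reads for b in reads if a != b}

reads = ["AACG", "ACGT", "GTTA"]   # toy sequencing reads
ov = all_pairs_suffix_prefix(reads)
print(ov[("AACG", "ACGT")])  # → 3  (suffix "ACG" is a prefix of "ACGT")
```

This brute force costs O(n² L²) for n reads of length L, which is exactly why index structures such as the paper's compact prefix tree matter for next-generation sequencing volumes.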

  10. CloudETL: Scalable Dimensional ETL for Hadoop and Hive

    DEFF Research Database (Denmark)

    Xiufeng, Liu; Thomsen, Christian; Pedersen, Torben Bach

Extract-Transform-Load (ETL) programs process data from sources into data warehouses (DWs). Due to the rapid growth of data volumes, there is an increasing demand for systems that can scale on demand. Recently, much attention has been given to MapReduce which is a framework for highly parallel...... handling of massive data sets in cloud environments. The MapReduce-based Hive has been proposed as a DBMS-like system for DWs and provides good and scalable analytical features. It is, however, still challenging to do proper dimensional ETL processing with Hive; for example, UPDATEs are not supported which...... makes handling of slowly changing dimensions (SCDs) very difficult. To remedy this, we here present the cloud-enabled ETL framework CloudETL. CloudETL uses the open source MapReduce implementation Hadoop to parallelize the ETL execution and to process data into Hive. The user defines the ETL process...

  11. A Software and Hardware IPTV Architecture for Scalable DVB Distribution

    Directory of Open Access Journals (Sweden)

    Georg Acher

    2009-01-01

Full Text Available Many standards and even more proprietary technologies deal with IP-based television (IPTV). But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a lightweight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders.

  12. Adaptive Streaming of Scalable Videos over P2PTV

    Directory of Open Access Journals (Sweden)

    Youssef Lahbabi

    2015-01-01

Full Text Available In this paper, we propose a new Scalable Video Coding (SVC) quality-adaptive peer-to-peer television (P2PTV) system executed at the peers and at the network. The quality adaptation mechanisms are developed as follows: on one hand, the Layer Level Initialization (LLI) is used for adapting the video quality with the static resources at the peers in order to avoid long startup times. On the other hand, the Layer Level Adjustment (LLA) is invoked periodically to adjust the SVC layer to the fluctuation of the network conditions with the aim of predicting the possible stalls before their occurrence. Our results demonstrate that our mechanisms allow quickly adapting the video quality to various system changes while providing the best Quality of Experience (QoE) that matches the current resources of the peer devices and the instantaneous throughput available at the network.

  13. Final Report. Center for Scalable Application Development Software

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-26

The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  14. Scalable and Flexible SLA Management Approach for Cloud

    Directory of Open Access Journals (Sweden)

    SHAUKAT MEHMOOD

    2017-01-01

Full Text Available Cloud Computing is a cutting-edge technology in the market nowadays. In a Cloud Computing environment, customers pay to use computing resources. Resource allocation is a primary task in a cloud environment. The significance of resource allocation and availability increases manyfold because the income of the cloud depends on how efficiently it provides the rented services to the clients. An SLA (Service Level Agreement) is signed between the Cloud Services Provider and the Cloud Services Consumer to maintain the stipulated QoS (Quality of Service). It is noted that SLAs are violated for several reasons, including system malfunctions and changes in workload conditions. Elastic and adaptive approaches are required to prevent SLA violations. We propose a novel application-level monitoring scheme to prevent SLA violations. It is based on elastic and scalable characteristics, and it is easy to deploy and use.

  15. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou

    2007-07-01

Full Text Available Based on the Small-World model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes the virtual region and layer as its logical management units. In VRGF, the intra-virtual-region mode is pure P2P, and the inter-virtual-region mode is centralized. VRGF is therefore a decentralized framework with some P2P properties. Furthermore, VRGF achieves satisfactory performance in resource organization and location at a small cost, and is well adapted to the complicated and dynamic features of scientific collaborations. We have implemented a VRGF-based demonstration Grid prototype, SDG.

  16. MSDLSR: Margin Scalable Discriminative Least Squares Regression for Multicategory Classification.

    Science.gov (United States)

    Wang, Lingfeng; Zhang, Xu-Yao; Pan, Chunhong

    2016-12-01

In this brief, we propose a new margin scalable discriminative least squares regression (MSDLSR) model for multicategory classification. The main motivation behind MSDLSR is to explicitly control the margin of the DLSR model. We first prove that DLSR is a relaxation of the traditional L2-support vector machine. Based on this fact, we further provide a theorem on the margin of DLSR. With this theorem, we add an explicit constraint on DLSR to restrict the number of zeros of dragging values, so as to control the margin of DLSR. The new model is called MSDLSR. Theoretically, we analyze the determination of the margin and support vectors of MSDLSR. Extensive experiments illustrate that our method outperforms the current state-of-the-art approaches on various machine learning and real-world data sets.

  17. Optimization of Hierarchical Modulation for Use of Scalable Media

    Science.gov (United States)

    Liu, Yongheng; Heneghan, Conor

    2010-12-01

    This paper studies the Hierarchical Modulation, a transmission strategy of the approaching scalable multimedia over frequency-selective fading channel for improving the perceptible quality. An optimization strategy for Hierarchical Modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications to make a free choice of relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. The similar optimization can be used in multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as the implementation example of this optimization strategy, and demonstrate savings in SNR and improvement in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.
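Hierarchical modulation can be illustrated with a toy 16-QAM mapper in which the two HP bits select the constellation quadrant and the two LP bits select the point within it; a spacing parameter greater than one enlarges the quadrant separation so the HP stream survives lower SNR. The function and parameter names below are assumptions for illustration, not from the paper.

```python
# Toy hierarchical 16-QAM mapper (illustrative; parameter names assumed):
# HP bits set the coarse quadrant, LP bits the fine offset within it.

def hier_qam16_symbol(hp_bits, lp_bits, alpha=2.0):
    """Map (2 HP bits, 2 LP bits) to a complex symbol; alpha > 1 widens the
    quadrant spacing, trading LP robustness for HP robustness."""
    # Bit -> coordinate mapping: 0 -> +1, 1 -> -1 on each axis.
    hi = complex(1 - 2 * hp_bits[0], 1 - 2 * hp_bits[1])  # quadrant (HP)
    lo = complex(1 - 2 * lp_bits[0], 1 - 2 * lp_bits[1])  # fine position (LP)
    return alpha * hi + lo

sym = hier_qam16_symbol((0, 0), (1, 1), alpha=2.0)
print(sym)  # → (1+1j): quadrant centre 2+2j plus LP offset -1-1j
```

A receiver at low SNR can still recover the HP bits from the sign of the real and imaginary parts, while the LP bits require resolving the finer offset, which is the priority split the optimization in the record exploits.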

  18. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack

    2014-01-01

Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.

  19. Scalable load-balance measurement for SPMD codes

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D

    2008-08-05

    Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
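The wavelet-compression idea behind this measurement technique can be sketched with a one-level Haar transform on per-rank load samples: smooth load profiles concentrate energy in a few coefficients, so small detail coefficients can be dropped with low reconstruction error. This is a pure-Python, single-level toy on invented data; the paper's implementation is parallel and multi-level.

```python
# One-level orthonormal Haar transform: averages capture the coarse load
# profile, details capture fine per-pair imbalance. Thresholding the details
# compresses the data while bounding reconstruction error.

def haar_step(x):
    """One Haar level; len(x) must be even."""
    s = 2 ** -0.5
    avg = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    det = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return avg, det

def inverse_haar_step(avg, det):
    s = 2 ** -0.5
    out = []
    for a, d in zip(avg, det):
        out += [s * (a + d), s * (a - d)]
    return out

loads = [10.0, 10.1, 9.9, 10.0, 20.0, 20.2, 19.8, 20.0]   # toy per-rank work
avg, det = haar_step(loads)
det_kept = [d if abs(d) > 0.2 else 0.0 for d in det]       # drop tiny details
approx = inverse_haar_step(avg, det_kept)
max_err = max(abs(a - b) for a, b in zip(loads, approx))
print(max_err)  # small: only fine-grained detail was discarded
```

Because each dropped detail only smooths one pair of neighbouring samples toward their mean, the total measured work is preserved exactly, which matters when the reconstruction is used to diagnose imbalance.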

  20. Quantitative coherence witness for finite dimensional states

    Science.gov (United States)

    Ren, Huizhong; Lin, Anni; He, Siying; Hu, Xueyuan

    2017-12-01

We define the stringent coherence witness as an observable whose mean value vanishes for all incoherent states but is nonzero for some coherent states. Such witnesses are proved to exist for states of any finite dimension. Not only is the witness efficient in testing whether a state is coherent, but its mean value can also quantitatively reveal the amount of coherence. For an unknown state, the modulus of the mean value of a normalized witness provides a tight lower bound on the l1-norm of coherence. When we have some prior knowledge of a state, the optimal witness, which has the maximal mean value, is derived. It is proved that for any finite-dimensional state, the mean value of the optimal witness, which we call the witnessed coherence, equals the l1-norm of coherence. In the case that the witness is fixed and incoherent operations are allowed, the maximal mean value can reach the witnessed coherence if and only if certain relations between the fixed witness and the initial state are satisfied. Our results provide a way to directly measure the coherence of arbitrary finite-dimensional states and an operational interpretation of the l1-norm of coherence.
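The l1-norm of coherence that the witness bounds is the sum of the moduli of the off-diagonal density-matrix elements in the incoherent (reference) basis, C_l1(ρ) = Σ_{i≠j} |ρ_ij|; it vanishes exactly on diagonal (incoherent) states. A minimal check on invented 2x2 examples:

```python
# l1-norm of coherence: sum of |rho_ij| over off-diagonal entries.
# Zero for diagonal (incoherent) states, maximal for |+><+| on a qubit.

def l1_coherence(rho):
    n = len(rho)
    return sum(abs(rho[i][j]) for i in range(n) for j in range(n) if i != j)

# Maximally coherent qubit state |+><+|:
plus = [[0.5, 0.5],
        [0.5, 0.5]]
# Incoherent (diagonal) state:
diag = [[0.7, 0.0],
        [0.0, 0.3]]

print(l1_coherence(plus), l1_coherence(diag))  # → 1.0 0.0
```

The record's witnessed coherence equals this quantity for the optimal witness, so the sketch shows the target of the measurement, not the witness construction itself.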