WorldWideScience

Sample records for scalable spectrum sharing

  1. Scalable Spectrum Sharing Mechanism for Local Area Networks Deployment

    DEFF Research Database (Denmark)

    Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan Zsolt

    2010-01-01

    The availability on the market of powerful and lightweight mobile devices has led to a fast diffusion of mobile services for end users, and the trend is shifting from voice-based services to multimedia content distribution. The current access networks are, however, able to support relatively low... data rates and with limited Quality of Service (QoS). In order to extend the access to high data rate services to wireless users, the International Telecommunication Union (ITU) established new requirements for future wireless communication technologies of up to 1 Gbps in low mobility and up to 100 Mbps... management (RRM) functionalities in a CR framework, able to minimize the inter-OLA interferences. A Game Theory-inspired scalable algorithm is introduced to enable distributed resource allocation in competitive radio environments. The proof-of-concept simulation results demonstrate the effectiveness...

  2. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  3. 5G Spectrum Sharing

    OpenAIRE

    Nekovee, Maziar; Rudd, Richard

    2017-01-01

    In this paper an overview is given of the current status of 5G industry standards, spectrum allocation and use cases, followed by initial investigations of new opportunities for spectrum sharing in 5G using cognitive radio techniques, considering both licensed and unlicensed scenarios. A particular attention is given to sharing millimeter-wave frequencies, which are of prominent importance for 5G.

  4. Dynamic Spectrum Sharing among Femtocells

    DEFF Research Database (Denmark)

    Da Costa, Gustavo Wagner Oliveira

    2012-01-01

    The ever-growing demand for mobile broadband is leading to an imminent spectrum scarcity. In order to cope with such a situation, dynamic spectrum sharing and the widespread deployment of small cells (femtocells) are promising solutions. Delivering such a vision is not short of challenges. Massive...

  5. Imitation-based Social Spectrum Sharing

    OpenAIRE

    Chen, Xu; Huang, Jianwei

    2014-01-01

    Dynamic spectrum sharing is a promising technology for improving the spectrum utilization. In this paper, we study how secondary users can share the spectrum in a distributed fashion based on social imitations. The imitation-based mechanism leverages the social intelligence of the secondary user crowd and only requires a low computational power for each individual user. We introduce the information sharing graph to model the social information sharing relationship among the secondary users. W...

  6. Spectrum sharing for future mobile cellular systems

    OpenAIRE

    Bennis, M.

    2009-01-01

    Spectrum sharing has become a high-priority research area over the past few years. The motivation behind this lies in the fact that the limited spectrum is currently inefficiently utilized. As recognized by the World Radiocommunication Conference (WRC-07), the amount of identified spectrum is not large enough to support large bandwidths for a substantial number of operators. Therefore, it is paramount for future mobile cellular systems to share the frequency spectrum and coexist ...

  7. Spectrum Sharing Radar: Coexistence via Xampling

    OpenAIRE

    Cohen, Deborah; Mishra, Kumar Vijay; Eldar, Yonina C.

    2016-01-01

    This paper presents a spectrum sharing technology enabling interference-free operation of a surveillance radar and communication transmissions over a common spectrum. A cognitive radio receiver senses the spectrum using low sampling and processing rates. The radar is a cognitive system that employs a Xampling-based receiver and transmits in several narrow bands. Our main contribution is the alliance of two previous ideas, cognitive radio (CRo) and cognitive radar (CRr), and their adaptation to solve the spectr...

  8. Opportunistic spectrum sharing in cognitive radio networks

    CERN Document Server

    Wang, Zhe

    2015-01-01

    This Springer Brief investigates spectrum sharing with limited channel feedback in various cognitive radio systems, such as point-to-point, broadcast scheduling and ad-hoc networks. The design aim is to optimally allocate the secondary resources to improve the throughput of secondary users while maintaining a certain quality of service for primary users. The analytical results of optimal resource allocation are derived via optimization theory and are verified by the numerical results. The results demonstrate the secondary performance is significantly improved by limited feedback and is further improved by more feedback bits, more secondary receivers and more primary side information.

  9. Direct sequence spread spectrum CDMA in shared spectrum applications

    Science.gov (United States)

    Schilling, Donald L.; Milstein, Laurence B.; Pickholtz, Raymond L.; Miller, Frank

    1991-01-01

    Personal Communication Network (PCN) is an entirely wireless communication system with the capability of accessing the wired telephone system to reach anyone possessing only a wired telephone. It is expected to compete with the existing mobile cellular system, which connects directly to the wired telephone system. While many PCN systems employ TDMA technology, the PCN system described here uses Broadband CDMA (BCDMA(sup SM)), which is capable of sharing the spectrum with other users and which is extremely resistant to fading caused by multipath.

  10. Coalition Formation and Spectrum Sharing of Cooperative Spectrum Sensing Participants.

    Science.gov (United States)

    Zhensheng Jiang; Wei Yuan; Leung, Henry; Xinge You; Qi Zheng

    2017-05-01

    In cognitive radio networks, self-interested secondary users (SUs) desire to maximize their own throughput. They compete with each other for transmit time once the absence of primary users (PUs) is detected. To satisfy the requirement of PU protection, on the other hand, they have to form some coalitions and cooperate to conduct spectrum sensing. Such a dilemma of SUs between competition and cooperation motivates us to study two interesting issues: 1) how to appropriately form some coalitions for cooperative spectrum sensing (CSS) and 2) how to share transmit time among SUs. We jointly consider these two issues, and propose a noncooperative game model with 2-D strategies. The first dimension determines coalition formation, and the second indicates transmit time allocation. Considering the complexity of solving this game, we decompose the game into two more tractable ones: one deals with the formation of CSS coalitions, and the other focuses on the allocation of transmit time. We characterize the Nash equilibria (NEs) of both games, and show that the combination of these two NEs corresponds to the NE of the original game. We also develop a distributed algorithm to achieve a desirable NE of the original game. When this NE is achieved, the SUs obtain a Dhp-stable coalition structure and a fair transmit time allocation. Numerical results verify our analyses, and demonstrate the effectiveness of our algorithm.

  11. A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes

    Science.gov (United States)

    Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants and any k or more than k participants have the ability to reconstruct it. The scalability means that the amount of information in the reconstructed image scales in proportion to the number of the participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer does not have a flexible control strategy to adjust it to meet the demand of different applications. Besides, almost all existing SSIS schemes are not applicable under noise circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on a brand-new technique, called compressed sensing, which has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme has the property of flexibility, which means that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme. PMID:28072851
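The (k, n) threshold behavior described above, where any k of n shares suffice to reconstruct the secret, can be illustrated with classic Shamir secret sharing over a prime field. This is a minimal stand-in for intuition only; the paper's SSIS scheme operates on images and is built on compressed sensing rather than polynomial interpolation.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the point (i, f(i)) for i = 1..n.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers f(0), the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With k = 3 and n = 5, any three shares recover the secret, while two shares reveal nothing about it.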

  13. Market-driven spectrum sharing in cognitive radio

    CERN Document Server

    Yi, Changyan

    2016-01-01

    This brief focuses on the current research on mechanism design for dynamic spectrum sharing in cognitive radio (CR) networks. Along with a review of CR architectures and characteristics, this brief presents the motivations, significances and unique challenges of implementing algorithmic mechanism design for encouraging both primary spectrum owners and secondary spectrum users to participate in dynamic spectrum sharing. The brief then focuses on recent advances in mechanism design in CR networks. With an emphasis on dealing with the uncertain spectrum availabilities, mechanisms based on spectrum recall, two-stage spectrum sharing and online spectrum allocation are introduced with the support of theoretic analyses and numerical illustrations. The brief concludes with a discussion of potential research directions and interests, which will motivate further studies on mechanism design for wireless communications. This brief is concise and approachable for researchers, professionals and advanced-level students in w...

  14. Spectrum Sharing Models in Cognitive Radio

    OpenAIRE

    Rifà Pous, Helena; Rifà Coma, Josep

    2011-01-01

    Spectrum scarcity demands thinking of new ways to manage the distribution of radio frequency bands so that their use is more effective. The emerging technology that can enable this paradigm shift is cognitive radio. Different models for organizing and managing cognitive radios have emerged, all with specific strategic purposes. In this article we review the spectrum allocation patterns of cognitive radio networks and analyse the common basis of each model. We expose the vulne...

  15. 78 FR 64200 - Innovative Spectrum Sharing Technology Day Event

    Science.gov (United States)

    2013-10-28

    ... wireless technologies and applications. President Obama, supported by the President's Council of Advisors... initiatives aimed at satisfying the nation's surging demand for wireless services, devices and applications... National Telecommunications and Information Administration Innovative Spectrum Sharing Technology Day Event...

  16. Inter-operator spectrum sharing from a game theoretical perspective

    OpenAIRE

    Samson Lasaulce; Mehdi Bennis; Merouane Debbah

    2009-01-01

    In this paper, we address the problem of spectrum sharing where competitive operators coexist in the same frequency band. First, we model this problem as a strategic non-cooperative game where operators simultaneously share the spectrum according to the Nash Equilibrium (NE). Given a set of channel realizations, several Nash equilibria exist, which renders the outcome of the game unpredictable. Then, in a cognitive context with the presence of primary and secondary o...

  17. COMAS: A Cooperative Multiagent Architecture for Spectrum Sharing

    Directory of Open Access Journals (Sweden)

    Mir Usama

    2010-01-01

    Static spectrum allocation is a major problem in recent wireless network domains. Generally, these allocations lead to inefficient usage, creating empty spectrum holes or white spaces. Thus, some alternatives must be ensured in order to mitigate the current spectrum scarcity. An effective technology to ensure dynamic spectrum usage is cognitive radio, which seeks out unutilized spectrum portions opportunistically and shares them with neighboring devices. However, since users generally have limited knowledge about their environment, we claim that cooperative behavior can provide them with the necessary information to solve global issues. Therefore, in this paper, we develop a novel approach for spectrum allocation using a multiagent system that enables cognitive radio devices to work cooperatively with their neighboring licensed (or primary) user devices in order to utilize the available spectrum dynamically. The fundamental aspect of our approach is the deployment of an agent on each device which cooperates with its neighboring agents in order to achieve better spectrum sharing. Considering the concurrent, distributed, and autonomous nature of the proposed approach, Petri nets are adopted to model the cooperative behaviors of primary and cognitive radio users. Our simulation results show that the proposed solution achieves good performance in terms of spectrum access while sustaining lower communication overhead.

  18. A scalable and pragmatic method for the safe sharing of high-quality health data.

    Science.gov (United States)

    Prasser, Fabian; Kohlmayer, Florian; Spengler, Helmut; Kuhn, Klaus

    2017-03-23

    The sharing of sensitive personal health data is an important aspect of biomedical research. Methods of data de-identification are often used in this process to trade the granularity of data off against privacy risks. However, traditional approaches, such as HIPAA Safe Harbor or k-anonymization, often fail to provide data with sufficient quality. Alternatively, data can be de-identified only to a degree which still allows it to be used as required, e.g. to carry out specific analyses. Controlled environments, which restrict the ways recipients can interact with the data, can then be used to cope with residual risks. The contributions of this article are twofold. Firstly, we present a method for implementing controlled data sharing environments and analyze its privacy properties. Secondly, we present a de-identification method which is specifically suited for sanitizing health data which is to be shared in such environments. Traditional de-identification methods control the uniqueness of records in a dataset. The basic idea of our approach is to reduce the probability that a record in a dataset has characteristics which are unique within the underlying population. As the characteristics of the population are typically not known, we have implemented a pragmatic solution in which properties of the population are modeled with statistical methods. We have further developed an accompanying process for evaluating and validating the degree of protection provided. The results of an extensive experimental evaluation show that our approach enables the safe sharing of high-quality data and that it is highly scalable.
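The shift from sample uniqueness to estimated population uniqueness can be sketched in a few lines. This uses a deliberately crude estimator (scaling sample frequencies by the sampling fraction), not the statistical population model the authors developed; the function name, quasi-identifier handling, and threshold are illustrative assumptions.

```python
from collections import Counter

def risky_records(records, quasi_ids, sampling_fraction, threshold=1):
    """Flag records whose quasi-identifier combination is estimated to be
    (near-)unique in the underlying population.

    Crude estimator: a combination seen f times in a sample drawn with
    the given sampling fraction is expected to occur about f / fraction
    times in the population.
    """
    key = lambda r: tuple(r[q] for q in quasi_ids)
    counts = Counter(key(r) for r in records)
    return [r for r in records
            if counts[key(r)] / sampling_fraction <= threshold]
```

A record that is unique in the sample may still be safe if its combination is common in the population; the point of the paper's approach is to protect only those records estimated to be population-rare.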

  19. Performance analysis of distributed beamforming in a spectrum sharing system

    KAUST Repository

    Yang, Liang

    2012-09-01

    In this paper, we consider a distributed beamforming scheme (DBF) in a spectrum sharing system where multiple secondary users share the spectrum with the licensed primary users under an interference temperature constraint. We assume that DBF is applied at the secondary users. We first consider optimal beamforming and compare it with the user selection scheme in terms of the outage probability and bit-error rate performance. Since perfect feedback is difficult to obtain, we then investigate a limited feedback DBF scheme and develop an outage probability analysis for a random vector quantization (RVQ) design algorithm. Numerical results are provided to illustrate our mathematical formalism and verify our analysis. © 2012 IEEE.

  20. Inter-Operator Spectrum Sharing from a Game Theoretical Perspective

    Science.gov (United States)

    Bennis, Mehdi; Lasaulce, Samson; Debbah, Merouane

    2009-12-01

    We address the problem of spectrum sharing where competitive operators coexist in the same frequency band. First, we model this problem as a strategic non-cooperative game where operators simultaneously share the spectrum according to the Nash Equilibrium (NE). Given a set of channel realizations, several Nash equilibria exist which renders the outcome of the game unpredictable. Then, in a cognitive context with the presence of primary and secondary operators, the inter-operator spectrum sharing problem is reformulated as a Stackelberg game using hierarchy where the primary operator is the leader. The Stackelberg Equilibrium (SE) is reached where the best response of the secondary operator is taken into account upon maximizing the primary operator's utility function. Moreover, an extension to the multiple operators spectrum sharing problem is given. It is shown that the Stackelberg approach yields better payoffs for operators compared to the classical water-filling approach. Finally, we assess the goodness of the proposed distributed approach by comparing its performance to the centralized approach.

  1. Inter-Operator Spectrum Sharing from a Game Theoretical Perspective

    Directory of Open Access Journals (Sweden)

    Samson Lasaulce

    2009-01-01

    We address the problem of spectrum sharing where competitive operators coexist in the same frequency band. First, we model this problem as a strategic non-cooperative game where operators simultaneously share the spectrum according to the Nash Equilibrium (NE). Given a set of channel realizations, several Nash equilibria exist, which renders the outcome of the game unpredictable. Then, in a cognitive context with the presence of primary and secondary operators, the inter-operator spectrum sharing problem is reformulated as a Stackelberg game using hierarchy where the primary operator is the leader. The Stackelberg Equilibrium (SE) is reached where the best response of the secondary operator is taken into account upon maximizing the primary operator's utility function. Moreover, an extension to the multiple-operator spectrum sharing problem is given. It is shown that the Stackelberg approach yields better payoffs for operators compared to the classical water-filling approach. Finally, we assess the goodness of the proposed distributed approach by comparing its performance to the centralized approach.
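The non-cooperative water-filling baseline that the Stackelberg approach is compared against can be sketched for two operators and two channels: each operator repeatedly water-fills its power against the interference created by the other, and a fixed point of this best-response dynamic is a Nash equilibrium. The gain values and the bisection-based water-filling are illustrative assumptions; convergence is only guaranteed here under weak cross-interference.

```python
import numpy as np

def waterfill(gains, interference, power):
    """Single-user water-filling: pour `power` over channels whose
    effective noise floor is (noise + interference) / gain (noise = 1)."""
    base = (1.0 + interference) / gains
    lo, hi = 0.0, base.max() + power  # bracket the water level
    for _ in range(60):
        mu = (lo + hi) / 2
        if np.maximum(mu - base, 0).sum() > power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - base, 0)

def iterative_waterfilling(g1, g2, c12, c21, power, iters=50):
    """Best-response dynamics for two operators: each water-fills against
    the interference the other creates (cross gains c12, c21)."""
    p1 = np.full_like(g1, power / len(g1))
    p2 = np.full_like(g2, power / len(g2))
    for _ in range(iters):
        p1 = waterfill(g1, c21 * p2, power)
        p2 = waterfill(g2, c12 * p1, power)
    return p1, p2
```

At the fixed point each operator concentrates power on its stronger channel, which is exactly the kind of outcome whose unpredictability (across channel realizations) motivates the hierarchical Stackelberg reformulation.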

  2. On multiuser switched diversity transmission for spectrum sharing systems

    KAUST Repository

    Qaraqe, Marwa

    2012-01-01

    In this paper, we develop multiuser access schemes for spectrum sharing systems whereby secondary users share the spectrum with primary users. In particular, we devise two schemes for selecting the user among those that satisfy the interference constraints and achieve an acceptable signal-to-noise ratio (SNR) level. The first scheme selects the user with the maximum SNR at the receiver, whereas in the second scheme the users are scanned in a sequential manner until an acceptable user is found. In addition, we consider two power adaptive settings. In the on/off power adaptive setting, the users transmit based on whether the interference constraint is met or not, while in the full power adaptive setting, the users vary their transmission power to satisfy the interference constraint. Finally, we present numerical results for our proposed algorithms in which we show the trade-off between the average spectral efficiency and average feedback load of both schemes. © 2012 ICST.
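The two selection schemes can be contrasted in a few lines: a full scan that picks the maximum-SNR qualifying user, versus a sequential scan that stops at the first acceptable user, trading spectral efficiency for feedback load. The `qualifies` predicate standing in for the interference and SNR constraints is an illustrative assumption.

```python
def select_max_snr(snrs, qualifies):
    """Scheme 1: full scan; pick the qualifying user with the largest SNR.
    Feedback cost: every user must report."""
    candidates = [i for i, s in enumerate(snrs) if qualifies(i, s)]
    return max(candidates, key=lambda i: snrs[i]) if candidates else None

def select_switched(snrs, qualifies):
    """Scheme 2: sequential scan; stop at the first acceptable user.
    Feedback cost: only the users probed so far report."""
    for i, s in enumerate(snrs):
        if qualifies(i, s):
            return i
    return None
```

The first scheme maximizes spectral efficiency; the second sacrifices some of it to reduce the average number of users that must feed back their channel state.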

  3. Opportunistic Cognitive Relaying: A Win-Win Spectrum Sharing Scheme

    Directory of Open Access Journals (Sweden)

    Luo Haiyan

    2010-01-01

    A cost-effective spectrum sharing architecture is proposed to enable the legacy noncognitive secondary system to coexist with the primary system. Specifically, we suggest installing a few intermediate nodes, namely the cognitive relays, to conduct the spectrum sensing and coordinate the spectrum access. To achieve the goal of win-win between the primary and secondary systems, the cognitive relay may act as a cooperator for both of them, and an Opportunistic Cognitive Relaying (OCR) scheme is specially devised. In this scheme, the cognitive relay opportunistically switches among three different working modes, that is, Relay for Primary Link (RPL), Relay for Secondary Link (RSL), or Relay for Neither of the Links (RNL), respectively, based on the channel-dependent observation of both systems. In addition, the transmit powers for the cognitive relay and the secondary transmitter in each mode are optimally determined by maximizing the transmission rate of the secondary system while keeping or even reducing the outage probability of the primary system. Simulation results validate the efficiency of the proposed spectrum sharing scheme.

  4. Interference-aware random beam selection for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.

    2012-09-01

    Spectrum sharing systems have been introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this paper, we develop interference-aware random beam selection schemes that provide enhanced throughput for the secondary link under the condition that the interference observed at the primary link is within a predetermined acceptable value. For a secondary transmitter equipped with multiple antennas, our schemes select a random beam, among a set of power- optimized orthogonal random beams, that maximizes the capacity of the secondary link while satisfying the interference constraint at the primary receiver for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the signal-to-noise and interference ratio (SINR) statistics as well as the capacity of the secondary link. Finally, we present numerical results that study the effect of system parameters including number of beams and the maximum transmission power on the capacity of the secondary link attained using the proposed schemes. © 2012 IEEE.
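A minimal sketch of the beam selection idea described above: generate orthonormal random beams, scale the power of each so the interference seen at the primary receiver stays below the cap, and pick the beam giving the highest secondary rate. The single-antenna receivers, the channel model, and all variable names are simplifying assumptions; the paper's schemes additionally distinguish different levels of interference feedback.

```python
import numpy as np

def select_beam(h_s, h_p, p_max, i_max, n_beams=4, seed=0):
    """Among orthonormal random beams (n_beams <= len(h_p)), power-scale
    each so interference at the primary receiver stays below i_max, then
    pick the beam giving the highest secondary rate."""
    rng = np.random.default_rng(seed)
    nt = h_p.shape[0]
    # Orthonormal random beams via QR of a complex Gaussian matrix.
    q, _ = np.linalg.qr(rng.standard_normal((nt, n_beams))
                        + 1j * rng.standard_normal((nt, n_beams)))
    best_rate, best = -1.0, None
    for k in range(n_beams):
        w = q[:, k]
        interf_per_watt = abs(h_p.conj() @ w) ** 2
        p = min(p_max, i_max / interf_per_watt)  # power-optimized beam
        rate = np.log2(1 + p * abs(h_s.conj() @ w) ** 2)
        if rate > best_rate:
            best_rate, best = rate, (k, p)
    return best, best_rate
```

Each candidate beam is individually power-optimized before comparison, so the interference constraint at the primary receiver is met no matter which beam wins.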

  5. Joint switched multi-spectrum and transmit antenna diversity for spectrum sharing systems

    KAUST Repository

    Sayed, Mostafa M.

    2013-10-01

    In spectrum sharing systems, a secondary user (SU) is allowed to share the spectrum with a primary (licensed) network under the condition that the interference observed at the receivers of the primary users (PU-Rxs) is below a predetermined level. In this paper, we consider a secondary network comprised of a secondary transmitter (SU-Tx) equipped with multiple antennas and a single-antenna secondary receiver (SU-Rx) sharing the same spectrum with multiple primary users (PUs), each with a distinct spectrum. We develop transmit antenna diversity schemes at the SU-Tx that exploit the multi-spectrum diversity provided by the existence of multiple PUs so as to optimize the signal-to-noise ratio (SNR) at the SU-Rx. In particular, assuming bounded transmit power at the SU-Tx, we develop switched selection schemes that select the primary spectrum and the SU-Tx transmit antenna that maintain the SNR at the SU-Rx above a specific threshold. Assuming Rayleigh fading channels and binary phase-shift keying (BPSK) transmission, we derive the average bit-error-rate (BER) and average feedback load expressions for the proposed schemes. For the sake of comparison, we also derive a BER expression for the optimal selection scheme that selects the best antenna/spectrum pair that maximizes the SNR at the SU-Rx, at the expense of a higher feedback load and switching complexity. Finally, we show that our analytical results are in perfect agreement with the simulation results. © 2013 IEEE.

  6. Performance analysis of distributed beamforming in a spectrum sharing system

    KAUST Repository

    Yang, Liang

    2013-05-01

    In this paper, we consider a distributed beamforming scheme (DBF) in a spectrum sharing system where multiple secondary users share the spectrum with some licensed primary users under an interference temperature constraint. We assume that the DBF is applied at the secondary users. We first consider optimal beamforming and compare it with the user selection scheme in terms of the outage probability and bit error rate performance metrics. Since perfect feedback is difficult to obtain, we then investigate a limited feedback DBF scheme and develop an analysis for a random vector quantization design algorithm. Specifically, the approximate statistics functions of the squared inner product between the optimal and quantized vectors are derived. With these statistics, we analyze the outage performance. Furthermore, the effects of channel estimation error and number of primary users on the system performance are investigated. Finally, optimal power adaptation and cochannel interference are considered and analyzed. Numerical and simulation results are provided to illustrate our mathematical formalism and verify our analysis. © 2012 IEEE.
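The random vector quantization (RVQ) step at the heart of the limited-feedback scheme can be sketched as follows: both ends share a random codebook of unit-norm vectors, and the receiver feeds back only the index whose direction best matches the channel. The squared inner product returned here is the quantity whose statistics the paper analyzes; the codebook-generation details and seeding are illustrative assumptions.

```python
import numpy as np

def rvq_quantize(h, bits, seed=0):
    """Random vector quantization of the channel direction of h.
    Returns (feedback index, chosen codeword, squared inner product)."""
    rng = np.random.default_rng(seed)
    n = h.shape[0]
    # Shared random codebook of 2**bits unit-norm complex vectors.
    book = (rng.standard_normal((2 ** bits, n))
            + 1j * rng.standard_normal((2 ** bits, n)))
    book /= np.linalg.norm(book, axis=1, keepdims=True)
    h_dir = h / np.linalg.norm(h)
    # Squared inner product: 1 means perfect alignment, 0 orthogonal.
    fit = np.abs(book.conj() @ h_dir) ** 2
    idx = int(np.argmax(fit))
    return idx, book[idx], float(fit[idx])
```

Only `bits` bits cross the feedback link per channel realization; adding feedback bits enlarges the codebook and, on average, tightens the alignment between the quantized and optimal beamforming vectors.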

  7. Joint opportunistic beam and spectrum selection schemes for spectrum sharing systems with limited feedback

    KAUST Repository

    Sayed, Mostafa M.

    2014-11-01

    Spectrum sharing systems have been introduced to alleviate the problem of spectrum scarcity by allowing an unlicensed secondary user (SU) to share the spectrum with a licensed primary user (PU) under acceptable interference levels to the primary receiver (PU-Rx). In this paper, we consider a secondary link composed of a secondary transmitter (SU-Tx) equipped with multiple antennas and a single-antenna secondary receiver (SU-Rx). The secondary link is allowed to share the spectrum with a primary network composed of multiple PUs communicating over distinct frequency spectra with a primary base station. We develop a transmission scheme where the SU-Tx initially broadcasts a set of random beams over all the available primary spectra for which the PU-Rx sends back the index of the spectrum with the minimum interference level, as well as information that describes the interference value, for each beam. Based on the feedback information on the PU-Rx, the SU-Tx adapts the transmitted beams and then resends the new beams over the best primary spectrum for each beam to the SU-Rx. The SU-Rx selects the beam that maximizes the received signal-to-interference-plus-noise ratio (SINR) to be used in transmission over the next frame. We consider three cases for the level of feedback information describing the interference level. In the first case, the interference level is described by both its magnitude and phase; in the second case, only the magnitude is considered; and in the third case, we focus on a q-bit description of its magnitude. In the latter case, we propose a technique to find the optimal quantizer thresholds in a mean-square-error sense. We also develop a statistical analysis for the SINR statistics and the capacity and bit error rate of the secondary link and present numerical results that study the impact of the different system parameters.

  8. 77 FR 18793 - Spectrum Sharing Innovation Test-Bed Pilot Program

    Science.gov (United States)

    2012-03-28

    ... employing spectrum sensing and/or geo- location techniques to share spectrum with land mobile radio (LMR...) evaluate its radio frequency environment using spectrum sensing, geo-location, or a combination of spectrum... completion of Phase I, NTIA will evaluate the DSA spectrum sensing and/or geo- location capabilities of the...

  9. Energy efficient cross layer design for spectrum sharing systems

    KAUST Repository

    Alabbasi, Abdulrahman

    2016-10-06

    We propose a cross layer design that optimizes the energy efficiency of spectrum sharing systems. The energy per good bit (EPG) is considered as an energy efficiency metric. We optimize the secondary user's transmission power and media access frame length to minimize the EPG metric. We protect the primary user transmission via an outage probability constraint. The non-convex targeted problem is optimized by utilizing generalized convexity theory and verifying the strictly pseudo-convex structure of the problem. Analytical results for the optimal power and frame length are derived. We also use these results in proposing an algorithm which guarantees the existence of a global optimal solution. Selected numerical results show the improvement of the proposed system compared to other systems. © 2016 IEEE.
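The pseudo-convex power optimization can be illustrated with a toy EPG model, EPG(p) = (p + p_circuit) / log2(1 + g·p), in which the primary-protection outage constraint is abstracted into a hard power cap p_cap. The model, the grid search, and all parameter names are illustrative assumptions standing in for the paper's analytical solution.

```python
import numpy as np

def optimal_power(g, p_circuit, p_cap, grid=10000):
    """Minimize EPG(p) = (p + p_circuit) / log2(1 + g*p) over transmit
    powers that respect the primary-protection cap p_cap.  The ratio is
    pseudo-convex in p, so a fine grid search finds the global minimizer."""
    p = np.linspace(1e-6, p_cap, grid)
    epg = (p + p_circuit) / np.log2(1 + g * p)
    i = int(np.argmin(epg))
    return p[i], epg[i]
```

For g = 1 and p_circuit = 1, setting the derivative of the ratio to zero gives ln(1 + p) = 1, i.e. an unconstrained optimum at p = e − 1 ≈ 1.72; tightening p_cap below that value makes the cap bind, mirroring how the outage constraint shapes the feasible set in the paper.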

  10. On the performance of spectrum sharing systems with multiple antennas

    KAUST Repository

    Yang, Liang

    2012-01-01

    In this paper, we study the capacity of spectrum sharing (SS) multiple-input multiple-output (MIMO) systems over Rayleigh fading channels. More specifically, we present closed-form capacity formulas for such systems with and without optimal power and rate adaptation. A lower bound on the capacity is also derived to characterize the scaling law of the capacity. Results show that increasing the number of antennas has a negative effect on the system capacity in the low signal-to-noise ratio (SNR) regime and the scaling law at high SNR is similar to that of conventional MIMO systems. In addition, a lower bound on the capacity of the SS keyhole MIMO channels is analyzed. We also present a capacity analysis of SS MIMO maximal ratio combining (MRC) systems and the results show that the capacity of such systems always decreases as the number of antennas increases. Numerical results are finally given to illustrate our analysis. © 2012 ICST.
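The negative effect of extra transmit antennas at low interference temperature can be reproduced with a small Monte Carlo sketch: the cap Q is shared through the interference channel, whose norm grows with the antenna count, so the allowed transmit power shrinks. The equal-power allocation, single-antenna primary receiver, and channel model are simplifying assumptions, not the paper's closed-form analysis.

```python
import numpy as np

def ss_mimo_capacity(nt, nr, q, p_max=np.inf, trials=2000, seed=0):
    """Monte Carlo ergodic capacity of a spectrum sharing MIMO link whose
    transmit power is capped so interference at a single-antenna primary
    receiver stays below the interference temperature q."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        h_p = (rng.standard_normal(nt)
               + 1j * rng.standard_normal(nt)) / np.sqrt(2)
        # Equal power per antenna, scaled to meet the interference cap.
        p = min(p_max, q / (np.linalg.norm(h_p) ** 2))
        s = np.linalg.svd(H, compute_uv=False)
        total += np.sum(np.log2(1 + p / nt * s ** 2))
    return total / trials
```

Since E[1/||h_p||^2] scales like 1/(nt − 1), doubling the transmit antennas cuts the average permitted power, which dominates at low Q; at high Q the cap rarely binds and the usual MIMO scaling returns.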

  11. Inferring demographic history from a spectrum of shared haplotype lengths.

    Directory of Open Access Journals (Sweden)

    Kelley Harris

    2013-06-01

    There has been much recent excitement about the use of genetics to elucidate ancestral history and demography. Whole genome data from humans and other species are revealing complex stories of divergence and admixture that were left undiscovered by previous smaller data sets. A central challenge is to estimate the timing of past admixture and divergence events, for example the time at which Neanderthals exchanged genetic material with humans and the time at which modern humans left Africa. Here, we present a method for using sequence data to jointly estimate the timing and magnitude of past admixture events, along with population divergence times and changes in effective population size. We infer demography from a collection of pairwise sequence alignments by summarizing their length distribution of tracts of identity by state (IBS) and maximizing an analytic composite likelihood derived from a Markovian coalescent approximation. Recent gene flow between populations leaves behind long tracts of identity by descent (IBD), and these tracts give our method power by influencing the distribution of shared IBS tracts. In simulated data, we accurately infer the timing and strength of admixture events, population size changes, and divergence times over a variety of ancient and recent time scales. Using the same technique, we analyze deeply sequenced trio parents from the 1000 Genomes project. The data show evidence of extensive gene flow between Africa and Europe after the time of divergence, as well as substructure and gene flow among ancestral hominids. In particular, we infer that recent African-European gene flow and ancient ghost admixture into Europe are both necessary to explain the spectrum of IBS sharing in the trios, rejecting simpler models that contain less population structure.
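The summary statistic itself, the length distribution of maximal IBS tracts between two aligned sequences, is simple to compute; the paper's contribution is the coalescent-based composite likelihood fitted to that spectrum, which this sketch does not attempt.

```python
def ibs_tract_lengths(seq_a, seq_b):
    """Lengths of maximal runs where two aligned sequences are identical
    by state (IBS); a mismatch ends the current tract."""
    lengths, run = [], 0
    for a, b in zip(seq_a, seq_b):
        if a == b:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths
```

Tabulating these lengths over many pairwise alignments yields the IBS spectrum; recent gene flow inflates the long-tract tail, which is what gives the method its power to date admixture events.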

  12. Capacity analysis of spectrum sharing spatial multiplexing MIMO systems

    KAUST Repository

    Yang, Liang

    2014-12-01

    This paper considers a spectrum sharing (SS) multiple-input multiple-output (MIMO) system operating in a Rayleigh fading environment. First, the capacity of a single-user SS spatial multiplexing system is investigated in two scenarios that assume different receivers. To explicitly show the capacity scaling law of SS MIMO systems, some approximate capacity expressions for the two scenarios are derived. Next, we extend our analysis to a multiuser system with zero-forcing (ZF) receivers under spatially-independent scheduling and analyze the sum-rate. Furthermore, we provide an asymptotic sum-rate analysis to investigate the effects of different parameters on the multiuser diversity gain. Our results show that the secondary system with a smaller number of transmit antennas Nt and a larger number of receive antennas Nr can achieve higher capacity at lower interference temperature Q, but at high Q the capacity follows the scaling law of conventional MIMO systems. However, for a ZF SS spatial multiplexing system, the secondary system with small Nt and large Nr can achieve the highest capacity throughout the entire region of Q. For a ZF SS spatial multiplexing system with scheduling, the asymptotic sum-rate scales like Ntlog2(Q(KNtNp-1)/Nt), where Np denotes the number of antennas of the primary receiver and K represents the number of secondary transmitters.
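
The capacity behavior described above can be reproduced numerically. A Monte Carlo sketch under simplifying assumptions (the interference temperature Q is treated as a cap on total transmit power, split equally across transmit antennas; `ergodic_capacity` is an illustrative name, not from the paper):

```python
import numpy as np

def ergodic_capacity(nt, nr, q_lin, trials=2000, seed=0):
    """Monte Carlo ergodic capacity (bits/s/Hz) of an nt x nr Rayleigh MIMO
    link whose total transmit power is capped at the interference
    temperature q_lin (linear scale), equal power per transmit antenna."""
    rng = np.random.default_rng(seed)
    cap = 0.0
    for _ in range(trials):
        # i.i.d. unit-variance complex Gaussian channel matrix
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        m = np.eye(nr) + (q_lin / nt) * h @ h.conj().T
        cap += np.log2(np.linalg.det(m).real)
    return cap / trials

# Capacity grows with Q; more receive antennas help throughout.
c_low = ergodic_capacity(2, 4, 0.1)
c_high = ergodic_capacity(2, 4, 10.0)
```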

  13. Switch and examine transmit diversity for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.

    2011-06-01

    In this paper, we develop a switch and examine transmit diversity algorithm for spectrum sharing cognitive networks. We consider a cognitive network composed of a primary link that employs a constant-rate, constant-power transmission scheme with an automatic repeat request (ARQ) protocol, while the secondary link is composed of a fixed-power multiple-antenna secondary transmitter and a single-antenna receiver. Our objective is to develop a low-complexity transmit diversity algorithm at the secondary transmitter that maximizes the performance of the secondary link in terms of effective throughput while maintaining a predetermined maximum loss in the packet rate of the primary link. In achieving this objective, we develop an algorithm that selects the best antenna, which maintains the quality of the secondary link in terms of signal-to-noise ratio above a specific threshold, based on overhearing the acknowledgment (ACK) and negative acknowledgment (NACK) feedback messages transmitted over the primary link. We also develop closed-form expressions for the bit error rates and the effective throughput of the secondary link. © 2011 IEEE.
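
The switch-and-examine rule itself is compact: keep the current antenna while it clears the SNR threshold, otherwise examine the next one, and settle for the last antenna examined if none qualifies. This is an illustrative abstraction; in the paper the per-antenna quality is inferred from overheard ACK/NACK feedback rather than given directly:

```python
def switch_and_examine(snr_per_antenna, threshold, start=0):
    """Switch-and-examine selection: accept the first antenna (in cyclic
    order from `start`) whose SNR clears the threshold; if none does,
    remain on the last antenna examined."""
    n = len(snr_per_antenna)
    idx = start
    for step in range(n):
        idx = (start + step) % n
        if snr_per_antenna[idx] >= threshold:
            return idx
    return idx  # all below threshold: stay on the last antenna examined

choice = switch_and_examine([2.0, 9.0, 1.0], threshold=5.0)
```

Note the low-complexity trade-off this encodes: with SNRs `[7.0, 9.0, 1.0]` the scheme stays on antenna 0, which is acceptable but not optimal, avoiding a full scan of all antennas.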

  14. The Spectrum Sharing in Cognitive Radio Networks Based on Competitive Price Game

    Directory of Open Access Journals (Sweden)

    Y. B. Li

    2012-09-01

    Full Text Available The competitive price game model is used to analyze the spectrum sharing problem in cognitive radio networks, and the spectrum sharing problem with constraints on the available spectrum resource from primary users is further discussed in this paper. The Rockafeller multiplier method is applied to deal with the constraints on the available licensed spectrum resource, and an improved profit function is achieved, which can be used to measure the impact of shared-spectrum price strategies on the system profit. However, in the competitive spectrum sharing problem of a practical cognitive radio network, primary users have to determine the price of the shared spectrum without knowledge of the other primary users' price strategies. Thus a fast gradient iterative calculation method for the equilibrium price is proposed, requiring only knowledge of the shared-spectrum price strategies from the last cycle. Through adaptive iteration in the direction of the largest gradient of the improved profit function, the equilibrium price strategies can be achieved rapidly. It also avoids the predefinition of the adjustment factor according to the parameters of the communication system, as required in the conventional linear iteration method. Simulation results show that the proposed competitive price spectrum sharing model can be applied in cognitive radio networks with constraints on the available licensed spectrum, and it has better convergence performance.
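
The gradient iteration can be illustrated with a simple stand-in profit function (a linear price-competition model, not the paper's Rockafeller-augmented profit): each primary user climbs its own profit gradient using only the other's price from the previous cycle.

```python
def profit_gradient(p_i, p_j, a=10.0, b=2.0, c=1.0):
    """d(profit_i)/d(p_i) for the stand-in profit
    profit_i = p_i * (a - b*p_i + c*p_j)."""
    return a - 2.0 * b * p_i + c * p_j

def iterate_prices(p=(1.0, 1.0), step=0.1, iters=500):
    """Each user updates its price along its own gradient, knowing only
    the other user's price from the previous cycle."""
    p1, p2 = p
    for _ in range(iters):
        g1 = profit_gradient(p1, p2)
        g2 = profit_gradient(p2, p1)
        p1, p2 = p1 + step * g1, p2 + step * g2
    return p1, p2

p1, p2 = iterate_prices()
# Symmetric equilibrium of this stand-in model: p* = a / (2b - c) = 10/3
```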

  15. Joint random beam and spectrum selection for spectrum sharing systems with partial channel state information

    KAUST Repository

    Abdallah, Mohamed M.

    2013-11-01

    In this work, we develop joint interference-aware random beam and spectrum selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed at the primary receiver is below a predetermined acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a set of primary links, each composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes jointly select a beam, among a set of power-optimized random beams, as well as the primary spectrum that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint. In particular, we consider the case where the interference level is described by a q-bit description of its magnitude, whereby we propose a technique to find the optimal quantizer thresholds in a mean square error (MSE) sense. © 2013 IEEE.
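
For the q-bit interference description, MSE-optimal levels and thresholds can be approached with the classic Lloyd iteration, sketched here on synthetic exponential interference magnitudes. The paper derives its thresholds analytically; this is only an illustration of the MSE criterion:

```python
import random

def lloyd_max(samples, bits=2, iters=50):
    """Lloyd iteration: alternate nearest-level assignment and centroid
    updates to approach the MSE-optimal 2**bits quantizer levels; the
    decision thresholds are the midpoints between adjacent levels."""
    levels = sorted(random.sample(samples, 2 ** bits))
    for _ in range(iters):
        buckets = [[] for _ in levels]
        for x in samples:
            j = min(range(len(levels)), key=lambda k: abs(x - levels[k]))
            buckets[j].append(x)
        levels = [sum(b) / len(b) if b else lv for b, lv in zip(buckets, levels)]
        levels.sort()
    thresholds = [(levels[k] + levels[k + 1]) / 2 for k in range(len(levels) - 1)]
    return levels, thresholds

random.seed(1)
data = [random.expovariate(1.0) for _ in range(2000)]  # stand-in interference magnitudes
levels, thresholds = lloyd_max(data, bits=2)
```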

  16. An accessible, scalable ecosystem for enabling and sharing diverse mass spectrometry imaging analyses.

    Science.gov (United States)

    Fischer, Curt R; Ruebel, Oliver; Bowen, Benjamin P

    2016-01-01

    Mass spectrometry imaging (MSI) is used in an increasing number of biological applications. Typical MSI datasets contain unique, high-resolution mass spectra from tens of thousands of spatial locations, resulting in raw data sizes of tens of gigabytes per sample. In this paper, we review technical progress that is enabling new biological applications and that is driving an increase in the complexity and size of MSI data. Handling such data often requires specialized computational infrastructure, software, and expertise. OpenMSI, our recently described platform, makes it easy to explore and share MSI datasets via the web - even when larger than 50 GB. Here we describe the integration of OpenMSI with IPython notebooks for transparent, sharable, and replicable MSI research. An advantage of this approach is that users do not have to share raw data along with analyses; instead, data is retrieved via OpenMSI's web API. The IPython notebook interface provides a low-barrier entry point for data manipulation that is accessible for scientists without extensive computational training. Via these notebooks, analyses can be easily shared without requiring any data movement. We provide example notebooks for several common MSI analysis types including data normalization, plotting, clustering, and classification, and image registration. Published by Elsevier Inc.
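
As a flavor of the analyses such notebooks cover, here is a total-ion-current (TIC) normalization of a toy spectra matrix in plain Python. This is an illustrative sketch, not OpenMSI's actual API:

```python
def tic_normalize(spectra):
    """Total-ion-current normalization: scale each pixel's spectrum so its
    summed intensity is 1, making pixels comparable across the image."""
    out = []
    for spectrum in spectra:
        tic = sum(spectrum)
        out.append([x / tic if tic else 0.0 for x in spectrum])
    return out

# Two pixels, three m/z bins each.
pixels = [[2.0, 2.0, 4.0], [1.0, 0.0, 1.0]]
normalized = tic_normalize(pixels)
```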

  17. An accessible, scalable ecosystem for enabling and sharing diverse mass spectrometry imaging analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, CR; Ruebel, O; Bowen, BP

    2016-01-01

    Mass spectrometry imaging (MSI) is used in an increasing number of biological applications. Typical MSI datasets contain unique, high-resolution mass spectra from tens of thousands of spatial locations, resulting in raw data sizes of tens of gigabytes per sample. In this paper, we review technical progress that is enabling new biological applications and that is driving an increase in the complexity and size of MSI data. Handling such data often requires specialized computational infrastructure, software, and expertise. OpenMSI, our recently described platform, makes it easy to explore and share MSI datasets via the web - even when larger than 50 GB. Here we describe the integration of OpenMSI with IPython notebooks for transparent, sharable, and replicable MSI research. An advantage of this approach is that users do not have to share raw data along with analyses; instead, data is retrieved via OpenMSI's web API. The IPython notebook interface provides a low-barrier entry point for data manipulation that is accessible for scientists without extensive computational training. Via these notebooks, analyses can be easily shared without requiring any data movement. We provide example notebooks for several common MSI analysis types including data normalization, plotting, clustering, and classification, and image registration.

  18. Achievable capacity of a spectrum sharing system over hyper fading channels

    KAUST Repository

    Ekin, Sabit

    2009-11-01

    Cognitive radio with spectrum sharing feature is a promising technique to address the spectrum under-utilization problem in dynamically changing environments. In this paper, the achievable capacity gain of spectrum sharing systems over dynamic fading environments is studied. For the analysis, a theoretical fading model called the hyper fading model, suited to the dynamic nature of the cognitive radio channel, is proposed. Closed-form expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the signal-to-noise ratio (SNR) for secondary users in spectrum sharing systems are derived. In addition, the capacity gains achievable with spectrum sharing systems in high and low power regions are obtained. Numerical simulations are performed to study the effects of different fading figures, average powers, interference temperature, and number of secondary users on the achievable capacity.
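
A special case of such SNR statistics is easy to check numerically: with Rayleigh fading on both the secondary link and the secondary-to-primary link (unit-mean exponential power gains X and Y, unit noise) and a peak interference constraint Q, the SU transmits at Q/Y and its SNR is Z = QX/Y, whose CDF is F(z) = z/(z+Q). This Rayleigh case is a simplified stand-in for the paper's hyper fading model:

```python
import random

def snr_cdf_mc(z, q, trials=200_000, seed=7):
    """Monte Carlo CDF of the SU receive SNR Z = q*X/Y, with X ~ |h|^2 and
    Y ~ |g|^2 unit-mean exponentials (Rayleigh links, unit noise) and the
    SU transmitting at q/Y to meet a peak interference constraint q."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.expovariate(1.0)
        y = rng.expovariate(1.0)
        if q * x / y <= z:
            hits += 1
    return hits / trials

def snr_cdf_exact(z, q):
    """Closed form for this Rayleigh special case: F(z) = z / (z + q)."""
    return z / (z + q)
```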

  19. Resource allocation in shared spectrum access communications for operators with diverse service requirements

    Science.gov (United States)

    Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki

    2016-12-01

    In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to address both inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely the subcarrier gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput. However, the fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider resource allocation problem with users' dissimilar service requirements, where the operator supports users with delay constraint and non-delay constraint service requirements, simultaneously. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose less-complex and efficient suboptimal solution based on formulating exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. Extensive simulation performance analysis has been carried out that validates the efficiency of the proposed solution.
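
The subcarrier gain-based sharing idea can be sketched as a greedy pass over subcarriers: each goes to the operator with the best channel gain that still has quota left, which captures fairness via quotas while favoring good channels. The quota mechanism and function name here are illustrative assumptions, not the paper's exact algorithm:

```python
def subcarrier_gain_sharing(gains, quotas):
    """Greedy subcarrier gain-based sharing: visit subcarriers in order and
    give each to the operator with the highest gain whose quota remains."""
    n_ops = len(quotas)
    remaining = list(quotas)
    assignment = []
    for sc_gains in gains:  # one list of per-operator gains per subcarrier
        order = sorted(range(n_ops), key=lambda o: -sc_gains[o])
        for o in order:
            if remaining[o] > 0:
                assignment.append(o)
                remaining[o] -= 1
                break
        else:
            assignment.append(None)  # every operator's quota is exhausted
    return assignment

# Two operators, quota of two subcarriers each, four subcarriers.
plan = subcarrier_gain_sharing(
    [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.6, 0.5]], [2, 2])
```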

  20. Distributed opportunistic spectrum sharing in cognitive radio networks

    KAUST Repository

    Hawa, Mohammed

    2016-05-19

    In cases where the licensed radio spectrum is underutilized, cognitive radio technology enables cognitive devices to sense and then dynamically access this scarce resource making the most out of it. In this work, we introduce a simple and intuitive, yet powerful and efficient, technique that allows opportunistic channel access in cognitive radio systems in a completely distributed fashion. Our proposed method achieves very high values of spectrum utilization and throughput. It also minimizes interference between cognitive base stations and the primary users licensed to use the spectrum. The algorithm responds quickly and efficiently to variations in the network parameters and also achieves a high degree of fairness between cognitive base stations. © 2016 John Wiley & Sons, Ltd.
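
The core opportunistic-access decision can be reduced to each cognitive base station independently picking the least-interfered channel not occupied by a primary user. This is an illustrative reduction of the distributed scheme, not the paper's full algorithm:

```python
def pick_channel(sensed_power, primary_busy):
    """A cognitive base station picks the least-interfered channel that is
    not occupied by a primary user; returns None if all channels are busy."""
    free = [ch for ch, busy in enumerate(primary_busy) if not busy]
    if not free:
        return None
    return min(free, key=lambda ch: sensed_power[ch])

# Three channels; channel 0 holds a primary user, channel 2 is the
# quietest of the remaining two.
choice = pick_channel([0.1, 0.8, 0.3], [True, False, False])
```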

  1. Mimo radar waveform design for spectrum sharing with cellular systems a Matlab based approach

    CERN Document Server

    Khawar, Awais; Clancy, T Charles

    2016-01-01

    This book discusses spectrum sharing between cellular systems and radars. The book addresses a novel way to design radar waveforms that can enable spectrum sharing between radars and communication systems, without causing interference to communication systems, and at the same time achieving radar objectives of target detection, estimation, and tracking. The book includes a MATLAB-based approach, which provides reader with a way to learn, experiment, compare, and build on top of existing algorithms.

  2. Feasibility of Spectrum Sharing Between Airborne Weather Radar and Wireless Local Area Networks

    OpenAIRE

    Zarookian, Ruffy

    2007-01-01

    Emerging technologies such as wireless local area networks and cellular telephones have dramatically increased the use of wireless communications services within the last 10 years. The shortage of available spectrum exists due to increasing demand for wireless services and current spectrum allocation regulations. To alleviate this shortage, research aims to improve spectral efficiency and to allow spectrum sharing between separately managed and non-coordinating communications systems. T...

  3. A Game-Theoretic Approach for Opportunistic Spectrum Sharing in Cognitive Radio Networks with Incomplete Information

    Science.gov (United States)

    Tan, Xuesong Jonathan; Li, Liang; Guo, Wei

    One important issue in cognitive transmission is for multiple secondary users to dynamically acquire spare spectrum from the single primary user. The existing spectrum sharing scheme adopts a deterministic Cournot game to formulate this problem, of which the solution is the Nash equilibrium. This formulation is based on two implicit assumptions. First, each secondary user is willing to fully exchange transmission parameters with all others and hence knows their complete information. Second, the unused spectrum of the primary user for spectrum sharing is always larger than the total frequency demand of all secondary users at the Nash equilibrium. However, both assumptions may not be true in general. To remedy this, the present paper considers a more realistic assumption of incomplete information, i.e., each secondary user may choose to conceal its private information to achieve higher transmission benefit. Following this assumption and given that the unused bandwidth of the primary user is large enough, we adopt a probabilistic Cournot game to formulate an opportunistic spectrum sharing scheme for maximizing the total benefit of all secondary users. Bayesian equilibrium is considered as the solution of this game. Moreover, we prove that a secondary user can improve its expected benefit by actively hiding its transmission parameters and increasing their variance. On the other hand, when the unused spectrum of the primary user is smaller than the maximal total frequency demand of all secondary users at the Bayesian equilibrium, we formulate a constrained optimization problem for the primary user to maximize its profit in spectrum sharing and revise the proposed spectrum sharing scheme to solve this problem heuristically. This provides a unified approach to overcome the aforementioned two limitations of the existing spectrum sharing scheme.
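
The complete-information baseline, a deterministic Cournot game among secondary users, has a closed-form Nash equilibrium that best-response iteration recovers. A sketch with an illustrative linear pricing model (the parameters a and c are arbitrary, not from the paper):

```python
def best_response(b_other, a=12.0, c=3.0):
    """Best response in a two-user Cournot spectrum demand game with utility
    u_i = b_i*(a - b_i - b_other) - c*b_i  (linear pricing, unit cost c)."""
    return max(0.0, (a - c - b_other) / 2.0)

def find_equilibrium(iters=100):
    """Iterate simultaneous best responses until the demands settle."""
    b1 = b2 = 0.0
    for _ in range(iters):
        b1, b2 = best_response(b2), best_response(b1)
    return b1, b2

b1, b2 = find_equilibrium()
# Closed-form Cournot-Nash demand for this model: b_i = (a - c) / 3 = 3
```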

  4. Multi-spectrum and transmit-antenna switched diversity schemes for spectrum sharing systems: A performance analysis

    KAUST Repository

    Sayed, Mostafa

    2012-12-01

    In spectrum sharing systems, a secondary user (SU) is allowed to share the spectrum with a primary licensed user under the condition that the interference at the primary user receiver (PU-Rx) is below a predetermined threshold. Joint primary spectrum and transmit antenna selection diversity schemes can be utilized as an efficient way to meet the quality of service (QoS) demands of the SUs while satisfying the interference constraint. In this paper, we consider a secondary link comprised of a secondary transmitter (SU-Tx) equipped with multiple antennas and a single-antenna secondary receiver (SU-Rx) sharing the same spectrum with a number of primary users (PUs) operating at distinct spectra. We present a performance analysis for two primary spectrum and transmit antenna switched selection schemes with different feedback requirements. In particular, assuming Rayleigh fading and BPSK transmission, we derive approximate BER expressions for the presented schemes. For the sake of comparison, we also derive a closed-form BER expression for the optimal selection scheme that selects the best pair in terms of the SU-Rx signal-to-noise ratio (SNR), which has the disadvantage of high feedback requirements. Finally, our results are verified with numerical simulations. © 2012 IEEE.
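
The contrast between switched selection and optimal best-pair selection can be sketched as scanning (spectrum, antenna) pairs in a fixed order and stopping at the first acceptable one. This is illustrative; the paper's two schemes differ in their exact feedback details:

```python
def switched_selection(snr, threshold):
    """Scan (spectrum, antenna) pairs in a fixed order and stop at the first
    pair whose SU-Rx SNR clears the threshold (low feedback); fall back to
    the best pair seen if none qualifies."""
    best = max(snr, key=snr.get)
    for pair in sorted(snr):
        if snr[pair] >= threshold:
            return pair
    return best

snrs = {("s1", "a1"): 3.0, ("s1", "a2"): 8.0,
        ("s2", "a1"): 11.0, ("s2", "a2"): 5.0}
pick = switched_selection(snrs, threshold=7.0)
# Stops at ("s1", "a2"): acceptable, though ("s2", "a1") would be optimal.
```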

  5. Exploring the Spectrum of Dynamic Scheduling Algorithms for Scalable Distributed-Memory Ray Tracing.

    Science.gov (United States)

    Navrátil, Paul A; Childs, Hank; Fussell, Donald S; Lin, Calvin

    2014-06-01

    This paper extends and evaluates a family of dynamic ray scheduling algorithms that can be performed in-situ on large distributed memory parallel computers. The key idea is to consider both ray state and data accesses when scheduling ray computations. We compare three instances of this family of algorithms against two traditional statically scheduled schemes. We show that our dynamic scheduling approach can render data sets that are larger than aggregate system memory and that cannot be rendered by existing statically scheduled ray tracers. For smaller problems that fit in aggregate memory but are larger than typical shared memory, our dynamic approach is competitive with the best static scheduling algorithm.
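
The key idea of scheduling by both ray state and data accesses can be caricatured as a queue-depth rule: process next whichever data block has the most rays queued against it, so each expensive data load amortizes over the most ray work. This is an illustrative reduction, not the paper's full family of algorithms:

```python
def schedule_by_queue_depth(ray_queues):
    """Pick the data block with the deepest queue of pending rays, so the
    cost of loading that block is amortized over the most ray work."""
    return max(ray_queues, key=lambda block: len(ray_queues[block]))

# Pending ray IDs queued against three data blocks.
queues = {"block_a": [1, 2], "block_b": [3, 4, 5], "block_c": [6]}
next_block = schedule_by_queue_depth(queues)
```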

  6. A cognitive gateway-based spectrum sharing method in downlink round robin scheduling of LTE system

    Science.gov (United States)

    Deng, Hongyu; Wu, Cheng; Wang, Yiming

    2017-07-01

    A key technique in LTE is the efficient allocation of radio spectrum resources. The traditional Round Robin (RR) scheduling scheme may leave many resource residues when allocating resources. When the resource block groups (RBGs) cannot be divided evenly among the users in the current transmission time interval (TTI), and such a situation lasts for a long time, spectrum utilization is greatly decreased. In this paper, a novel spectrum allocation scheme based on a cognitive gateway (CG) is proposed, in which LTE spectrum utilization and the CG's throughput are greatly increased by allocating the idle resource blocks in the shared TTIs of the LTE system to the CG. Our simulation results show that this spectrum resource sharing method can improve LTE spectral utilization and increase the CG's throughput as well as network use time.
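
The residue problem and the proposed remedy can be sketched directly: divide the RBGs round-robin among the users and hand the leftover blocks to the cognitive gateway (function and variable names are illustrative):

```python
def rr_allocate(n_rbgs, users):
    """Round-robin RBG allocation: each user gets an equal share; the
    leftover RBGs (the residue) go to the cognitive gateway."""
    if not users:
        return {}, list(range(n_rbgs))
    per_user = n_rbgs // len(users)
    alloc, nxt = {}, 0
    for u in users:
        alloc[u] = list(range(nxt, nxt + per_user))
        nxt += per_user
    leftover_for_cg = list(range(nxt, n_rbgs))
    return alloc, leftover_for_cg

alloc, cg_rbgs = rr_allocate(17, ["u1", "u2", "u3", "u4", "u5"])
# 17 RBGs, 5 users -> 3 RBGs each; 2 residual RBGs go to the cognitive gateway.
```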

  7. A fully distributed method for dynamic spectrum sharing in femtocells

    DEFF Research Database (Denmark)

    Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan

    2012-01-01

    The traffic in cellular networks has been growing at an accelerated rate. In order to meet the rising demand for large data volumes, shrinking the cell size may be the only viable option. In fact, locally deployed small cells, namely picocells and femtocells, will certainly play a major role...... in meeting the IMT-Advanced requirements for the next generation of cellular networks. Notwithstanding, several aspects of femtocell deployment are very challenging, especially in closed subscriber group femtocells: massive deployment, user definition of access point location and high density. When...... such characteristics are combined the traditional network planning and optimization of cellular networks fails to be cost effective. Therefore, a greater deal of automation is needed in femtocells. In particular, this paper proposes a novel method for autonomous selection of spectrum/ channels in femtocells...

  8. Capacity limits of spectrum-sharing systems over hyper-fading channels

    KAUST Repository

    Ekin, Sabit

    2011-01-20

    Cognitive radio (CR) with spectrum-sharing feature is a promising technique to address the spectrum under-utilization problem in dynamically changing environments. In this paper, the achievable capacity gain of spectrum-sharing systems over dynamic fading environments is studied. To perform a general analysis, a theoretical fading model called the hyper-fading model, suited to the dynamic nature of the CR channel, is proposed. Closed-form expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the signal-to-noise ratio (SNR) for secondary users (SUs) in spectrum-sharing systems are derived. In addition, the capacity gains achievable with spectrum-sharing systems in high and low power regions are obtained. The effects of different fading figures, average fading powers, interference temperatures, peak powers of secondary transmitters, and numbers of SUs on the achievable capacity are investigated. The analytical and simulation results show that the fading figure of the channel between SUs and primary base-station (PBS), which describes the diversity of the channel, does not contribute significantly to the system performance gain. © 2011 John Wiley & Sons, Ltd.

  9. Wireless communication and spectrum sharing for public safety in the United States.

    Science.gov (United States)

    Kapucu, Naim; Haupt, Brittany; Yuksel, Murat

    2016-01-01

    With the vast number of fragmented, independent public safety wireless communication systems, the United States is encountering major challenges in enhancing interoperability and effectively managing costs while sharing the limited availability of critical spectrum. The traditional hierarchical approach of emergency management does not always allow for needed flexibility and is not a mandate. A national system would reduce equipment needs, increase effectiveness, and enrich the quality and coordination of response; however, it is dependent on integrating the commercial market. This article discusses the components of an ideal national wireless public safety system, along with key policies regulating wireless communication and spectrum sharing for public safety and the challenges for implementation.

  10. Spectrum and Infrastructure Sharing in Wireless Mobile Networks: Advantages and Risks

    Directory of Open Access Journals (Sweden)

    Mugdim Bublin

    2008-07-01

    Full Text Available In recent times, spectrum and infrastructure sharing has been gaining more and more importance due to high spectrum license costs and the expensive infrastructure needed for modern high-bandwidth wireless communications. In this paper the advantages and disadvantages of spectrum and infrastructure sharing are analyzed by analytical models and simulations. Results show that operators could significantly reduce their costs, increase capacity and improve network quality by sharing their infrastructure and spectrum. Using Game Theory, it is shown how operators could "protect themselves" against non-cooperative behaviour of other operators.

  11. On the expiration date of spectrum sharing in mobile cellular networks

    NARCIS (Netherlands)

    Janssen, T.; Litjens, R.; Sowerby, K.W.

    2014-01-01

    Driven by a combination of flat lining revenues and an explosive growth in the mobile data traffic and hence the need for network resources, mobile operators consider infrastructure-and spectrum sharing as a means to reduce operational costs. We develop and apply an assessment approach to quantify

  12. Interference-aware random beam selection schemes for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed

    2012-10-19

    Spectrum sharing systems have been recently introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a predetermined acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a primary link composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes select a beam, among a set of power-optimized random beams, that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis of the SINR statistics as well as the capacity and bit error rate (BER) of the secondary link.
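
The selection rule can be sketched as follows: among random beams (plain random unit-norm beams here, rather than the paper's power-optimized ones), keep only those whose induced interference at the primary receiver stays below the limit, and pick the feasible beam with the best secondary SNR. All names are illustrative:

```python
import math, random

def random_beam(nt, rng):
    """A random unit-norm complex beamforming vector of length nt."""
    v = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(nt)]
    nrm = math.sqrt(sum(abs(x) ** 2 for x in v))
    return [x / nrm for x in v]

def select_beam(h_sec, h_pri, beams, power, i_max, noise=1.0):
    """Among candidate beams, pick the one maximizing the secondary SNR
    while keeping the interference at the primary receiver below i_max."""
    best, best_snr = None, -1.0
    for k, w in enumerate(beams):
        gain_s = abs(sum(wi * hi for wi, hi in zip(w, h_sec))) ** 2
        gain_p = abs(sum(wi * hi for wi, hi in zip(w, h_pri))) ** 2
        if power * gain_p > i_max:
            continue  # violates the primary interference constraint
        snr = power * gain_s / noise
        if snr > best_snr:
            best, best_snr = k, snr
    return best, best_snr

rng = random.Random(3)
beams = [random_beam(2, rng) for _ in range(4)]
idx, snr = select_beam([1.0, 0.5], [0.3, 0.2], beams, power=1.0, i_max=0.2)
```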

  13. Simultaneous wireless information and power transfer for spectrum sharing in cognitive radio communication systems

    KAUST Repository

    Benkhelifa, Fatma

    2016-07-26

    In this paper, we consider the simultaneous wireless information and power transfer for the spectrum sharing (SS) in cognitive radio (CR) systems with a multi-antenna energy harvesting (EH) primary receiver (PR). The PR uses the antenna switching (AS) technique that assigns a subset of the PR's antennas to harvest the energy from the radio frequency (RF) signals sent by the secondary transmitter (ST), and assigns the rest of the PR's antennas to decode the information data. In this context, the primary network allows the secondary network to use the spectrum as long as the interference induced by the secondary transmitter (ST)'s signals is beneficial for the energy harvesting process at the PR side. The objective of this work is to show that the spectrum sharing is beneficial for both the SR and PR sides and leads to a win-win situation. To illustrate the incentive of the spectrum sharing cognitive system, we evaluate the mutual outage probability (MOP) introduced in [1] which declares an outage event if the PR or the secondary receiver (SR) is in an outage. Through the simulation results, we show that the performance of our system in terms of the MOP is always better than the performance of the system in the absence of ST and improves as the ST-PR interference increases. © 2016 IEEE.

  14. Outage Analysis of Spectrum-Sharing over M-Block Fading with Sensing Information

    KAUST Repository

    Alabbasi, Abdulrahman

    2016-07-13

    Future wireless technologies, such as 5G, are expected to support real-time applications with high data throughput, e.g., holographic meetings. From a bandwidth perspective, cognitive radio is a promising technology to enhance the system’s throughput via sharing the licensed spectrum. From a delay perspective, it is well known that increasing the number of decoding blocks will improve the system robustness against errors, while increasing the delay. Therefore, optimally allocating the resources to determine the tradeoff of tuning the length of decoding blocks while sharing the spectrum is a critical challenge for future wireless systems. In this work, we minimize the targeted outage probability over the block-fading channels while utilizing the spectrum-sharing concept. The secondary user’s outage region and the corresponding optimal power are derived over two-block and M-block fading channels. We propose two suboptimal power strategies and derive the associated asymptotic lower and upper bounds on the outage probability with tractable expressions. These bounds allow us to derive the exact diversity order of the secondary user’s outage probability. To further enhance the system’s performance, we also investigate the impact of including the sensing information on the outage problem. The outage problem is then solved via proposing an alternating optimization algorithm, which utilizes the verified strict quasiconvex structure of the problem. Selected numerical results are presented to characterize the system’s behavior and show the improvements of several sharing concepts.
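
The role of the number of decoding blocks M can be checked with a quick Monte Carlo sketch over Rayleigh block fading. This is a simplified stand-in for the paper's spectrum-sharing setting (no interference constraint, fixed transmit power):

```python
import math, random

def outage_prob_mc(m_blocks, rate, power, trials=20000, seed=11):
    """Monte Carlo outage over M-block Rayleigh fading: a codeword spans M
    equal blocks and is in outage when the average per-block mutual
    information falls below the target rate."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        mi = sum(math.log2(1.0 + power * rng.expovariate(1.0))
                 for _ in range(m_blocks)) / m_blocks
        if mi < rate:
            outages += 1
    return outages / trials

p1 = outage_prob_mc(1, rate=1.0, power=2.0)
p4 = outage_prob_mc(4, rate=1.0, power=2.0)
# More blocks average out deep fades, lowering the outage probability.
```

For M = 1 this has the closed form P(log2(1 + 2X) < 1) = P(X < 0.5) = 1 − e^(−0.5), a handy sanity check on the simulation.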

  15. Spectrum sharing in cognitive radio networks--an auction-based approach.

    Science.gov (United States)

    Wang, Xinbing; Li, Zheng; Xu, Pengchao; Xu, Youyun; Gao, Xinbo; Chen, Hsiao-Hwa

    2010-06-01

    Cognitive radio is emerging as a promising technique to improve the utilization of the radio frequency spectrum. In this paper, we consider the problem of spectrum sharing among primary (or "licensed") users (PUs) and secondary (or "unlicensed") users (SUs). We formulate the problem based on bandwidth auction, in which each SU makes a bid for the amount of spectrum and each PU may assign the spectrum among the SUs by itself according to the information from the SUs without degrading its own performance. We show that the auction is a noncooperative game and that Nash equilibrium (NE) can be its solution. We first consider a single-PU network to investigate the existence and uniqueness of the NE and further discuss the fairness among the SUs under given conditions. Then, we present a dynamic updating algorithm in which each SU achieves NE in a distributed manner. The stability condition of the dynamic behavior for this spectrum-sharing scheme is studied. The discussion is generalized to the case in which there are multiple PUs in the network, where the properties of the NE are shown under appropriate conditions. Simulations were used to evaluate the system performance and verify the effectiveness of the proposed algorithm.
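
The dynamic updating algorithm can be illustrated with a proportional-share auction stand-in (not the paper's exact mechanism): each SU is allocated spectrum in proportion to its bid, values its share logarithmically, pays its bid, and climbs its own utility gradient using only the published bid total:

```python
def bid_update(bids, values, step=0.05, iters=1000):
    """Distributed bid updating for a proportional-share auction: SU i gets
    share s_i = b_i / sum(b), earns v_i * log(s_i) - b_i, and follows its
    own utility gradient v_i*(B - b_i)/(b_i*B) - 1 using only the total B."""
    b = list(bids)
    for _ in range(iters):
        total = sum(b)
        grads = [values[i] * (total - b[i]) / (b[i] * total) - 1.0
                 for i in range(len(b))]
        b = [max(1e-6, bi + step * g) for bi, g in zip(b, grads)]
    return b

b1, b2 = bid_update([1.0, 1.0], [4.0, 4.0])
# Symmetric equilibrium of this stand-in model: b_i = v_i / 2 = 2
```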

  16. On the performance of spectrum sharing systems with two-way relaying and multiuser diversity

    KAUST Repository

    Yang, Liang

    2012-08-01

    In this letter, we consider a spectrum sharing network with two-way relaying and multi-user diversity. More specifically, one secondary transmitter with the best channel quality is selected and splits its partial power to relay its received signals to the primary users by using the amplify-and-forward relaying protocol. We derive a tight approximation for the resulting outage probability. Based on this formula, the performance of the spectral sharing region and the cell coverage are analyzed. Numerical results are given to verify our analysis and are discussed to illustrate the advantages of our newly proposed scheme. © 1997-2012 IEEE.

  17. Two-way cooperative AF relaying in spectrum-sharing systems: Enhancing cell-edge performance

    KAUST Repository

    Xia, Minghua

    2012-09-01

    In this contribution, the two-way cooperative amplify-and-forward (AF) relaying technique is integrated into spectrum-sharing wireless systems to improve the spectral efficiency of secondary users (SUs). In order to share the available spectrum resources originally dedicated to primary users (PUs), the transmit power of a SU is optimized with respect to the average tolerable interference power at primary receivers. By analyzing outage probability and achievable data rate at the base station and at a cell-edge SU, our results reveal that the uplink performance is dominated by the average tolerable interference power at primary receivers, while the downlink always behaves like conventional one-way AF relaying and its performance is dominated by the average signal-to-noise ratio (SNR). These important findings provide fresh perspectives for system designers to improve spectral efficiency of secondary users in next-generation broadband spectrum-sharing wireless systems. © 2012 IEEE.

  18. Exploring the Cognitive Foundations of the Shared Attention Mechanism: Evidence for a Relationship between Self-Categorization and Shared Attention across the Autism Spectrum

    Science.gov (United States)

    Skorich, Daniel P.; Gash, Tahlia B.; Stalker, Katie L.; Zheng, Lidan; Haslam, S. Alexander

    2017-01-01

    The social difficulties of autism spectrum disorder (ASD) are typically explained as a disruption in the Shared Attention Mechanism (SAM) sub-component of the theory of mind (ToM) system. In the current paper, we explore the hypothesis that SAM's capacity to construct the self-other-object relations necessary for shared-attention arises from a…

  19. CBRS Spectrum Sharing between LTE-U and WiFi: A Multiarmed Bandit Approach

    Directory of Open Access Journals (Sweden)

    Imtiaz Parvez

    2016-01-01

    Full Text Available The surge of mobile devices such as smartphones and tablets requires additional capacity. To achieve ubiquitous and high-data-rate Internet connectivity, effective sharing and utilization of the wireless spectrum are of critical importance. In this paper, we consider the use of unlicensed LTE (LTE-U) technology in the 3.5 GHz Citizens Broadband Radio Service (CBRS) band and develop a multiarmed bandit (MAB) based spectrum sharing technique for smooth coexistence with WiFi. In particular, we consider LTE-U operating as a General Authorized Access (GAA) user, whereby MAB is used to adaptively optimize the transmission duty cycle of LTE-U transmissions. Additionally, we incorporate downlink power control, which yields high energy efficiency and interference suppression. Simulation results demonstrate a significant improvement in the aggregate capacity (approximately 33%) and cell-edge throughput of coexisting LTE-U and WiFi networks for different base station and user densities.
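
The duty-cycle optimization can be sketched as a classic multi-armed bandit with one arm per candidate duty cycle. The reward model below (LTE-U capacity rising and WiFi capacity falling with the duty cycle) is a made-up stand-in for the paper's coexistence simulator, and epsilon-greedy is just one standard MAB policy:

```python
import random

def reward(duty_cycle, rng):
    # Hypothetical noisy reward: LTE-U capacity grows with its duty cycle while
    # WiFi capacity shrinks, so the aggregate peaks at an intermediate value.
    lte = duty_cycle
    wifi = (1.0 - duty_cycle) ** 0.5
    return lte + wifi + rng.gauss(0.0, 0.05)

def epsilon_greedy(arms, rounds=5000, eps=0.1, seed=1):
    # Multi-armed bandit over candidate duty cycles (each arm = one duty cycle).
    rng = random.Random(seed)
    counts = [0] * len(arms)
    means = [0.0] * len(arms)
    for _ in range(rounds):
        if rng.random() < eps:
            a = rng.randrange(len(arms))                       # explore
        else:
            a = max(range(len(arms)), key=lambda i: means[i])  # exploit
        r = reward(arms[a], rng)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]                 # incremental mean update
    return arms[max(range(len(arms)), key=lambda i: means[i])]

arms = [0.1, 0.3, 0.5, 0.7, 0.9]
best = epsilon_greedy(arms)
```

Under this toy reward, the bandit converges toward the intermediate duty cycles that balance LTE-U and WiFi throughput rather than the extremes.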

  20. Spectrum sharing in cognitive radio networks medium access control protocol based approach

    CERN Document Server

    Pandit, Shweta

    2017-01-01

    This book discusses the use of spectrum sharing techniques in cognitive radio technology, in order to address the problem of spectrum scarcity for future wireless communications. The authors describe a cognitive radio medium access control (MAC) protocol with which throughput maximization has been achieved. The discussion also includes the use of this MAC protocol in imperfect sensing scenarios and its effect on the performance of cognitive radio systems. The authors also discuss how energy efficiency has been maximized in this system by applying a simple algorithm for optimizing the transmit power of the cognitive user. The study of channel fading for the cognitive and licensed users, and of the power adaptation policy under peak transmit power and interference power constraints, is also presented in this book.

  1. Spectrum Sharing Based on a Bertrand Game in Cognitive Radio Sensor Networks.

    Science.gov (United States)

    Zeng, Biqing; Zhang, Chi; Hu, Pianpian; Wang, Shengyu

    2017-01-07

    In the study of power control and allocation based on pricing, the utility of secondary users is usually studied from the perspective of the signal-to-noise ratio. Studying secondary user utility from the perspective of communication demand can both help secondary users meet their maximum communication needs and maximize the utilization of spectrum resources; however, research in this area is lacking. From the viewpoint of meeting network communication demand, this paper therefore designs a two-stage model to solve the spectrum leasing and allocation problem in cognitive radio sensor networks (CRSNs). In the first stage, the secondary base station collects the secondary network communication requirements and rents spectrum resources from several primary base stations, using the Bertrand game to model the transaction behavior of the primary and secondary base stations. In the second stage, the subcarrier and power allocation problem of the secondary base station is formulated as a nonlinear programming problem and solved based on Nash bargaining. The simulation results show that the proposed model can satisfy the communication requirements of each user in a fair and efficient way compared to other spectrum sharing schemes.
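
The first-stage price competition can be illustrated with a textbook Bertrand undercutting dynamic, here in integer price units to avoid floating-point drift. This is a stylized sketch, not the paper's two-stage CRSN model: each primary base station undercuts its rival by one price step while the price still covers its own cost, and with equal costs the price is driven down to marginal cost:

```python
def bertrand_prices(cost_a, cost_b, step=1, p0=100, rounds=1000):
    # Stylized Bertrand undercutting between two primary base stations selling
    # spectrum in integer price units (e.g. cents). The secondary base station
    # buys from the cheaper seller, so each seller undercuts the rival by one
    # step as long as the price stays at or above its own cost.
    pa, pb = p0, p0
    for _ in range(rounds):
        pa = max(cost_a, pb - step)  # seller A best-responds to B's price
        pb = max(cost_b, pa - step)  # seller B best-responds to A's new price
    return pa, pb
```

With equal costs the prices collapse to marginal cost (the classic Bertrand outcome); with unequal costs the cheaper seller settles one step below the rival's cost and captures the sale.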

  2. Performance analysis of switch-based multiuser scheduling schemes with adaptive modulation in spectrum sharing systems

    KAUST Repository

    Qaraqe, Marwa

    2014-04-01

    This paper focuses on the development of multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, two scheduling schemes are proposed for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme focuses on optimizing the average spectral efficiency by selecting the user that reports the best channel quality. In order to alleviate the relatively high feedback required by the first scheme, a second scheme based on the concept of switched diversity is proposed, where the base station (BS) scans the secondary users in a sequential manner until a user whose channel quality is above an acceptable predetermined threshold is found. We develop expressions for the statistics of the signal-to-interference and noise ratio as well as the average spectral efficiency, average feedback load, and the delay at the secondary BS. We then present numerical results for the effect of the number of users and the interference constraint on the optimal switching threshold and the system performance and show that our analysis results are in perfect agreement with the numerical results. © 2014 John Wiley & Sons, Ltd.
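
The switched-diversity scan in the second scheme can be sketched directly: the BS probes users one by one and stops at the first whose channel quality clears the switching threshold, so the feedback load is roughly geometric in the qualification probability. The i.i.d. unit-mean Rayleigh fading assumption (exponential SNR) below is ours:

```python
import random

def switched_scan(snrs, threshold):
    # Sequentially probe users; stop at the first whose SNR exceeds the switching
    # threshold (switched diversity). If no user qualifies, fall back to the best
    # user seen. Returns (chosen index, number of feedback messages consumed).
    best = 0
    for k, g in enumerate(snrs):
        if g >= threshold:
            return k, k + 1
        if g > snrs[best]:
            best = k
    return best, len(snrs)

def avg_feedback(n_users, threshold, trials=20000, seed=3):
    # Monte Carlo estimate of the average feedback load (probed users per slot)
    # under i.i.d. exponential SNRs with unit mean.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        snrs = [rng.expovariate(1.0) for _ in range(n_users)]
        total += switched_scan(snrs, threshold)[1]
    return total / trials
```

For 4 users and threshold 0.5, the qualification probability is exp(-0.5) ≈ 0.61, so the expected feedback load (1 - q^4)/p ≈ 1.6 probes per slot, far below the 4 reports the best-user scheme would need.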

  3. Joint multiuser switched diversity and adaptive modulation schemes for spectrum sharing systems

    KAUST Repository

    Qaraqe, Marwa

    2012-12-01

    In this paper, we develop multiuser access schemes for spectrum sharing systems whereby secondary users are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. In particular, we devise two schemes for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme selects the user that reports the best channel quality. In order to alleviate the high feedback load associated with the first scheme, we develop a second scheme based on the concept of switched diversity where the base station scans the users in a sequential manner until an acceptable user is found. In addition to these two selection schemes, we consider two power adaptive settings at the secondary users based on the amount of interference available at the secondary transmitter. In the On/Off power setting, users are allowed to transmit based on whether the interference constraint is met or not, while in the full power adaptive setting, the users are allowed to vary their transmission power to satisfy the interference constraint. Finally, we present numerical results for our proposed algorithms where we show the trade-off between the average spectral efficiency and average feedback load for both schemes. © 2012 IEEE.

  4. Shared familial transmission of autism spectrum and attention-deficit/hyperactivity disorders.

    Science.gov (United States)

    Musser, Erica D; Hawkey, Elizabeth; Kachan-Liu, Svetlana S; Lees, Paul; Roullet, Jean-Baptiste; Goddard, Katrina; Steiner, Robert D; Nigg, Joel T

    2014-07-01

    To determine whether familial transmission is shared between autism spectrum disorders and attention-deficit/hyperactivity disorder, we assessed the prevalence, rates of comorbidity, and familial transmission of both disorders in a large population-based sample of children during a recent 7-year period. Study participants included all children born to parents with the Kaiser Permanente Northwest (KPNW) Health Plan between 1 January 1998 and 31 December 2004 (n = 35,073). Children and mothers with physician-identified autism spectrum disorders (ASD) and/or attention-deficit/hyperactivity disorder (ADHD) were identified via electronic medical records maintained for all KPNW members. Among children aged 6-12 years, prevalence was 2.0% for ADHD and 0.8% for ASD; within those groups, 0.2% of the full sample (19% of the ASD sample and 9.6% of the ADHD sample) had co-occurring ASD and ADHD, when all children were included. When mothers had a diagnosis of ADHD, first-born offspring were at 6-fold risk of ADHD alone (OR = 5.02, p …). Autism spectrum disorders share familial transmission with ADHD. ADHD and ASD have a partially overlapping diathesis. © 2014 The Authors. Journal of Child Psychology and Psychiatry. © 2014 Association for Child and Adolescent Mental Health.

  5. A Unified Framework for the Ergodic Capacity of Spectrum Sharing Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2012-12-29

    We consider a spectrum sharing communication scenario in which a primary and a secondary user communicate simultaneously with their respective destinations using the same frequency carrier. Both the optimal power profile and the ergodic capacity are derived for fading channels, under average transmit power and instantaneous interference outage constraints. Unlike previous studies, we assume that the secondary user has only a noisy version of the cross-link and secondary-link Channel State Information (CSI). After deriving the capacity in this case, we provide an ergodic capacity generalization, through a unified expression, that encompasses several previously studied spectrum sharing settings. In addition, we provide an asymptotic capacity analysis at high and low signal-to-noise ratio (SNR). Numerical results, applied to independent Rayleigh fading channels, show that in the low-SNR regime only the secondary channel estimation matters, with no effect of the cross link on the capacity, whereas in the high-SNR regime the capacity is driven by the cross-link CSI. Furthermore, a practical on-off power allocation scheme is proposed and is shown, through numerical results, to achieve the full capacity at high and low SNR regimes and suboptimal rates in the medium SNR regime.
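
The on-off power allocation mentioned at the end of the abstract can be sketched with a small Monte Carlo experiment. The scheme below (transmit only when the secondary's own channel gain clears a threshold, boosting power in the "on" slots so the average power budget is preserved) is a simplified stand-in that ignores the cross-link interference constraint; it still shows the low-SNR gain over always-on transmission for unit-mean Rayleigh fading:

```python
import math
import random

def onoff_ergodic_capacity(p_avg, gain_th, trials=50000, seed=7):
    # On-off power allocation sketch: the SU transmits only when its channel gain
    # exceeds gain_th, spending the saved power on the 'on' slots so the long-run
    # average power stays at p_avg. Unit-mean exponential (Rayleigh) gain assumed.
    rng = random.Random(seed)
    p_on = math.exp(-gain_th)     # P(gain >= gain_th) for unit-mean exponential gain
    p_tx = p_avg / p_on           # boosted instantaneous power in 'on' slots
    total = 0.0
    for _ in range(trials):
        g = rng.expovariate(1.0)
        if g >= gain_th:
            total += math.log2(1.0 + p_tx * g)
    return total / trials         # ergodic capacity in bits/s/Hz

cap_onoff = onoff_ergodic_capacity(0.1, gain_th=1.0)
cap_const = onoff_ergodic_capacity(0.1, gain_th=0.0)  # threshold 0 = always-on baseline
```

At an average power of 0.1 (a low-SNR operating point), concentrating power on good fading states noticeably beats constant-power transmission, consistent with the abstract's low-SNR observations.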

  6. Interference-Aware Spectrum Sharing Techniques for Next Generation Wireless Networks

    KAUST Repository

    Qaraqe, Marwa Khalid

    2011-11-20

    Background: Reliable high-speed data communication that supports multimedia applications for both indoor and outdoor mobile users is a fundamental requirement for next generation wireless networks and requires a dense deployment of physically coexisting network architectures. Due to the limited spectrum availability, a novel interference-aware spectrum-sharing concept is introduced where networks that suffer from congested spectrum (secondary networks) are allowed to share the spectrum with other networks with available spectrum (primary networks) under the condition that limited interference occurs to the primary networks. Objective: Multiple antennas and adaptive rate can be utilized as power-efficient techniques for improving the data rate of the secondary link while satisfying the interference constraint of the primary link, by allowing the secondary user to adapt its transmitting antenna, power, and rate according to the channel state information. Methods: Two adaptive schemes are proposed using multiple-antenna transmit diversity and adaptive modulation in order to increase the spectral efficiency of the secondary link while maintaining minimum interference with the primary. Both the switching efficient scheme (SES) and the bandwidth efficient scheme (BES) use the scan-and-wait combining antenna technique (SWC), where there is a secondary transmission only when a branch with acceptable performance is found; else the data is buffered. Results: In both schemes the constellation size and the selected transmit branch are determined to minimize the average number of switches and achieve the highest spectral efficiency given a minimum bit-error-rate (BER), fading conditions, and a peak interference constraint. For delay-sensitive applications, two schemes using power control are used: SES-PC and BES-PC. In these schemes the secondary transmitter sends data using a nominal power level, which is optimized to minimize the average delay. Several numerical examples show

  7. Underlay Spectrum Sharing Techniques with In-Band Full-Duplex Systems using Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed

    2016-10-26

    Sharing the spectrum with in-band full-duplex (FD) primary users (PUs) is a challenging and interesting problem in the underlay cognitive radio (CR) systems. The self-interference introduced at the primary network may dramatically impede the secondary user (SU) opportunity to access the spectrum. To tackle this problem, we use the so-called improper Gaussian signaling. Particularly, we assume the downlink transmission of a SU that uses improper Gaussian signaling while the FD PU pair implements the regular proper Gaussian signaling. First, we derive a closed form expression and an upper bound for the SU and PUs outage probabilities, respectively. Second, we optimize the SU signal parameters to minimize its outage probability while maintaining the required PUs quality-of-service based on the average channel state information (CSI). Moreover, we provide the conditions to reap merits from employing improper Gaussian signaling at the SU. Third, we design the SU signal parameters based on perfect knowledge of its direct link instantaneous CSI and investigate all benefits that can be achieved at both the SU and PUs. Finally, we provide some numerical results that demonstrate the advantages of using improper Gaussian signaling to access the spectrum of the FD PUs.

  8. Sharing the Licensed Spectrum of Full-Duplex Systems using Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed

    2016-01-06

    Sharing the spectrum with in-band full-duplex (FD) primary users (PU) is a challenging and interesting problem in underlay cognitive radio (CR) systems. The self-interference introduced at the primary network may dramatically impede the secondary user (SU) opportunity to access the spectrum. In this work, we attempt to tackle this problem through the use of so-called improper Gaussian signaling (IGS). Such a signaling technique has demonstrated its superiority in improving the overall performance in interference-limited networks. Particularly, we assume a system with an SU pair working in half-duplex mode that uses IGS while the FD PU pair implements regular proper Gaussian signaling techniques. First, we derive a closed-form expression for the SU outage probability while maintaining the required PU quality-of-service based on the average channel state information. Finally, we provide some numerical results that validate the tightness of the PU outage probability bound and demonstrate the advantage of employing IGS at the SU in order to access the spectrum of the FD PU.

  9. Sharing the Licensed Spectrum of Full-Duplex Systems Using Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed

    2015-12-01

    Sharing the spectrum with in-band full-duplex (FD) primary users (PU) is a challenging and interesting problem in the underlay cognitive radio (CR) systems. The self-interference introduced at the primary network may dramatically impede the secondary user (SU) opportunity to access the spectrum. In this work, we attempt to tackle this problem through the use of the so-called improper Gaussian signaling. Such a signaling technique has demonstrated its superiority in improving the overall performance in interference limited networks. Particularly, we assume a system with a SU pair working in half-duplex mode that uses improper Gaussian signaling while the FD PU pair implements the regular proper Gaussian signaling techniques. First, we derive a closed form expression for the SU outage probability and an upper bound for the PU outage probability. Then, we optimize the SU signal parameters to minimize its outage probability while maintaining the required PU quality-of-service based on the average channel state information. Finally, we provide some numerical results that validate the tightness of the PU outage probability bound and demonstrate the advantage of employing the improper Gaussian signaling to the SU in order to access the spectrum of the FD PU.

  10. On the capacity of multiple cognitive links through common relay under spectrum-sharing constraints

    KAUST Repository

    Yang, Yuli

    2011-06-01

    In this paper, we consider an underlay cognitive relaying network consisting of multiple secondary users and introduce a cooperative transmission protocol using a common relay to help with the communications between all secondary source-destination pairs for higher throughput and lower realization complexity. A whole relay-assisted transmission procedure is composed of a multiple-access phase and a broadcast phase, where the relay is equipped with multiple antennas, and the secondary sources and destinations are single-antenna nodes. Considering the spectrum-sharing constraints on the secondary sources and the relay, we analyze the capacity behavior of the underlay cognitive relaying network under study. The corresponding numerical results provide a convenient tool for the presented network design and substantiate a distinguishing feature of the introduced design: multiple secondary users' communications do not rely on multiple relays, hence allowing for a more efficient use of the radio resources. © 2011 IEEE.

  11. Price-Based Resource Allocation for Spectrum-Sharing Femtocell Networks: A Stackelberg Game Approach

    CERN Document Server

    Kang, Xin; Motani, Mehul

    2011-01-01

    This paper investigates price-based resource allocation strategies for the uplink transmission of a spectrum-sharing femtocell network, in which a central macrocell is underlaid with distributed femtocells, all operating over the same frequency band as the macrocell. Assuming that the macrocell base station (MBS) protects itself by pricing the interference from the femtocell users, a Stackelberg game is formulated to study the joint utility maximization of the macrocell and the femtocells subject to a maximum tolerable interference power constraint at the MBS. In particular, two practical femtocell channel models, a sparsely deployed scenario for rural areas and a densely deployed scenario for urban areas, are investigated. For each scenario, two pricing schemes, uniform pricing and non-uniform pricing, are proposed. Then, the Stackelberg equilibria for these proposed games are studied, and an effective distributed interference price bargaining algorithm with guaranteed convergence is proposed for the uniform-...

  12. Capacity of spectrum sharing Cognitive Radio systems over Nakagami fading channels at low SNR

    KAUST Repository

    Sboui, Lokman

    2013-06-01

    In this paper, we study the ergodic capacity of Cognitive Radio (CR) spectrum sharing systems at low power regime. We focus on Nakagami fading channels. We formally define the low power regime and present closed form expressions of the capacity in the low power regime under various types of interference and/or power constraints, depending on the available channel state information (CSI) of the cross link (CL) between the secondary user transmitter and the primary user receiver. We explicitly characterize two regimes where either the interference constraint or the power constraint dictates the optimal power profile. Our framework also highlights the effects of different fading parameters on the secondary link ergodic capacity. Interestingly, we show that the low power regime analysis provides a specific insight on the capacity behavior of CR that has not been reported by previous studies. © 2013 IEEE.

  13. Exact Outage Performance Analysis of Multiuser Multi-relay Spectrum Sharing Cognitive Networks

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-04-01

    Full Text Available In this paper, we investigate the outage performance of dual-hop multiuser multi-relay cognitive radio networks under spectrum sharing constraints. Using an efficient relay-destination selection scheme, exact and asymptotic closed-form expressions for the outage probability are derived. These expressions indicate that the achieved diversity order is determined only by the number of secondary user (SU) relays and destinations, and is equal to M+N (where M and N are the numbers of destination nodes and relay nodes, respectively). Further, we find that the coding gain of the SU network is affected by the interference threshold $\bar{I}$ at the primary user (PU) receiver. Specifically, as the interference threshold increases, the coding gain of the considered network approaches that of the multiuser multi-relay system in the non-cognitive network. Finally, our study is corroborated by representative numerical examples.

  14. Spectrum sharing opportunities of full-duplex systems using improper Gaussian signaling

    KAUST Repository

    Gaafar, Mohamed

    2015-08-01

    Sharing the licensed spectrum of full-duplex (FD) primary users (PU) brings strict limitations on underlay cognitive radio operation. In particular, the self-interference may overwhelm the PU receiver and limit the opportunity of secondary users (SU) to access the spectrum. Improper Gaussian signaling (IGS) has demonstrated its superiority in improving the performance of interference channel systems. Throughout this paper, we assume a FD PU pair that uses proper Gaussian signaling (PGS), and a half-duplex SU pair that uses IGS. The objective is to maximize the SU instantaneous achievable rate while meeting the PU quality-of-service. To this end, we propose a simplified algorithm that optimizes the SU signal parameters, i.e., the transmit power and the circularity coefficient, which is a measure of the degree of impropriety of the SU signal, to achieve the design objective. Numerical results show the merits of adopting IGS compared with PGS for the SU, especially in the presence of weak PU direct channels and/or strong SU interference channels.

  15. Secondary access based on sensing and primary ARQ feedback in spectrum sharing systems

    KAUST Repository

    Hamza, Doha R.

    2012-04-01

    In the context of primary/secondary spectrum sharing, we propose a randomized secondary access strategy with access probabilities that are a function of both the primary automatic repeat request (ARQ) feedback and the spectrum sensing outcome. The primary terminal operates in a time slotted fashion and is active only when it has a packet to send. The primary receiver can send a positive acknowledgment (ACK) when the received packet is decoded correctly. Lack of ARQ feedback is interpreted as erroneous reception or inactivity. We call this the explicit ACK scheme. The primary receiver may also send a negative acknowledgment (NACK) when the packet is received in error. Lack of ARQ feedback is interpreted as an ACK or no-transmission. This is called the explicit NACK scheme. Under both schemes, when the primary feedback is interpreted as a NACK, the secondary user assumes that there will be retransmission in the next slot and accesses the channel with a certain probability. When the primary feedback is interpreted as an ACK, the secondary user accesses the channel with either one of two probabilities based on the sensing outcome. Under these settings, we find the three optimal access probabilities via maximizing the secondary throughput given a constraint on the primary throughput. We compare the performance of the explicit ACK and explicit NACK schemes and contrast them with schemes based on either sensing or primary ARQ feedback only. © 2012 IEEE.
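
The access logic can be sketched as a mapping from (interpreted feedback, sensing outcome) to one of three access probabilities, plus a toy slot simulation showing the throughput trade-off the optimization navigates. The activity, detection, and collision model below is our simplification, not the paper's analysis, and the parameter names are ours:

```python
import random

def access_probability(feedback, sensing_busy, p_nack, p_ack_busy, p_ack_free):
    # Pick one of the scheme's three access probabilities from the interpreted
    # primary ARQ feedback and the sensing outcome. The probability values are
    # free design parameters found by the throughput optimization in the paper.
    if feedback == 'NACK':
        return p_nack                          # a retransmission is expected next slot
    return p_ack_busy if sensing_busy else p_ack_free

def simulate(p_nack, p_ack_busy, p_ack_free, rho=0.6, p_detect=0.9,
             slots=20000, seed=5):
    # Toy slot-level model: the primary is active w.p. rho, sensing flags a busy
    # channel w.p. p_detect when the primary transmits (and false-alarms w.p.
    # 1 - p_detect when idle), and a secondary access overlapping a primary
    # transmission collides, producing a NACK. Returns packets/slot throughputs.
    rng = random.Random(seed)
    feedback, pu_ok, su_ok = 'ACK', 0, 0
    for _ in range(slots):
        primary_on = rng.random() < rho
        sensing_busy = rng.random() < (p_detect if primary_on else 1.0 - p_detect)
        su_on = rng.random() < access_probability(feedback, sensing_busy,
                                                  p_nack, p_ack_busy, p_ack_free)
        if primary_on and su_on:
            feedback = 'NACK'                  # collision: primary packet lost
        elif primary_on:
            feedback, pu_ok = 'ACK', pu_ok + 1
        else:
            feedback = 'ACK'                   # idle primary slot
            su_ok += 1 if su_on else 0
    return pu_ok / slots, su_ok / slots
```

Conservative probabilities (access only after an ACK and an idle sensing outcome) preserve most of the primary throughput, while always accessing maximizes secondary transmissions but destroys the primary link, which is the constraint the paper's optimization enforces.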

  16. On the coexistence of primary and secondary users in spectrum-sharing broadcast channels

    KAUST Repository

    Yang, Yuli

    2013-06-01

    In this paper, we consider a broadcast channel in spectrum-sharing networks, where the base station schedules licensed primary users (PUs) and cognitive secondary users (SUs) simultaneously. Based on such a framework, we present a transmission strategy in the light of dirty paper coding. In order to guarantee the PUs' quality of service (QoS) in the broadcasting, the base station chooses codewords for the users by taking into account that the codewords pertaining to SUs can be pre-subtracted from those pertaining to PUs, as if there were no interference from the secondary's data to the primary's data. For the purpose of performance evaluation, taking capacity behavior and bit error rate (BER) as metrics, we study the achievable data rate regions for both types of users with the introduced design, and analyze the BER performance in corresponding systems implemented with hierarchical modulation. Numerical results substantiate that, with flexible management of the spectrum resources, our proposed scheme provides more communication opportunities for SUs while maintaining the PUs' QoS at an acceptable level. © 2013 IEEE.

  17. Implementation of a Shared Data Repository and Common Data Dictionary for Fetal Alcohol Spectrum Disorders Research

    Science.gov (United States)

    Arenson, Andrew D.; Bakhireva, Ludmila; Chambers, Christina D.; Deximo, Christina; Foroud, Tatiana; Jacobson, Joseph L.; Jacobson, Sandra W.; Jones, Kenneth Lyons; Mattson, Sarah N.; May, Philip A.; Moore, Elizabeth; Ogle, Kimberly; Riley, Edward P.; Robinson, Luther K.; Rogers, Jeffrey; Streissguth, Ann P.; Tavares, Michel; Urbanski, Joseph; Yezerets, Yelena; Surya, Radha; Stewart, Craig A.; Barnett, William K.

    2010-01-01

    Many previous attempts by fetal alcohol spectrum disorders researchers to compare data across multiple prospective and retrospective human studies have failed due to both structural differences in the collected data as well as difficulty in coming to agreement on the precise meaning of the terminology used to describe the collected data. Although some groups of researchers have an established track record of successfully integrating data, attempts to integrate data more broadly amongst different groups of researchers have generally faltered. Lack of tools to help researchers share and integrate data has also hampered data analysis. This situation has delayed improving diagnosis, intervention, and treatment before and after birth. We worked with various researchers and research programs in the Collaborative Initiative on Fetal Alcohol Spectrum Disorders (CI-FASD) to develop a set of common data dictionaries to describe the data to be collected, including definitions of terms and specification of allowable values. The resulting data dictionaries were the basis for creating a central data repository (CI-FASD Central Repository) and software tools to input and query data. Data entry restrictions ensure that only data which conform to the data dictionaries reach the CI-FASD Central Repository. The result is an effective system for centralized and unified management of the data collected and analyzed by the initiative, including a secure, long-term data repository. CI-FASD researchers are able to integrate and analyze data of different types, collected using multiple methods, and collected from multiple populations, and data are retained for future reuse in a secure, robust repository. PMID:20036486

  18. Cooperative AF Relaying in Spectrum-Sharing Systems: Outage Probability Analysis under Co-Channel Interferences and Relay Selection

    KAUST Repository

    Xia, Minghua

    2012-11-01

    For cooperative amplify-and-forward (AF) relaying in spectrum-sharing wireless systems, secondary users share spectrum resources originally licensed to primary users to communicate with each other and, thus, the transmit power of secondary transmitters is strictly limited by the tolerable interference powers at primary receivers. Furthermore, the received signals at a relay and at a secondary receiver inevitably suffer interference from the signals of primary transmitters. These co-channel interferences (CCIs) from concurrent primary transmission can significantly degrade the performance of secondary transmission. This paper studies the effect of CCIs on the outage probability of the secondary link in a spectrum-sharing environment. In particular, in order to compensate for the performance loss due to CCIs, the transmit powers of a secondary transmitter and its relaying node are respectively optimized with respect to both the tolerable interference powers at the primary receivers and the CCIs from the primary transmitters. Moreover, when multiple relays are available, the technique of opportunistic relay selection is exploited to further improve system performance with low implementation complexity. By analyzing lower and upper bounds on the outage probability of the secondary system, this study reveals that it is the tolerable interference powers at primary receivers that dominate the system performance, rather than the CCIs from primary transmitters. System designers will benefit from this result in planning and designing next-generation broadband spectrum-sharing systems.

  19. Using Aromatherapy Massage to Increase Shared Attention Behaviours in Children with Autistic Spectrum Disorders and Severe Learning Difficulties

    Science.gov (United States)

    Solomons, Steve

    2005-01-01

    Children with autistic spectrum disorders (ASD) characteristically display a lack of shared attention behaviours and the lack of these behaviours impacts on their ability to develop social interactions and relationships with others. Steve Solomons, assistant headteacher at Rectory Paddock School and Research Unit in the London Borough of Bromley,…

  20. Effects of a Social Skills Intervention on Children with Autism Spectrum Disorder and Peers with Shared Deficits

    Science.gov (United States)

    Radley, Keith C.; O'Handley, Roderick D.; Battaglia, Allison A.; Lum, John D. K.; Dadakhodjaeva, Komila; Ford, William B.; McHugh, Melissa B.

    2017-01-01

    The current study evaluated the effects of the "Superheroes Social Skills" program (Jenson et al. 2011) in promoting accurate demonstration of target social skills in training and generalization conditions in young children with autism spectrum disorder (ASD) and peers with shared social deficits. Three preschool-age children with ASD…

  1. Delay analysis of a point-to-multipoint spectrum sharing network with CSI based power allocation

    KAUST Repository

    Khan, Fahd Ahmed

    2012-10-01

    In this paper, we analyse the delay performance of a point-to-multipoint cognitive radio network which is sharing the spectrum with a point-to-multipoint primary network. The channels are assumed to be independent but not identically distributed, with Nakagami-m fading. A constraint on the peak transmit power of the secondary user transmitter (SU-Tx) is considered in addition to the peak interference power constraint. Based on these constraints, a power allocation scheme which requires knowledge of the instantaneous channel state information (CSI) of the interference links is derived. The SU-Tx is assumed to be equipped with a buffer and is modelled using the M/G/1 queueing model. Closed-form expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the packet transmission time are derived. Using the PDF, expressions for the moments of the transmission time are obtained. In addition, using the moments, expressions for performance measures such as the total average waiting time of packets and the average number of packets waiting in the buffer of the SU-Tx are obtained. Numerical simulations corroborate the theoretical results. © 2012 IEEE.
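
Since the SU-Tx buffer is modelled as an M/G/1 queue, the average waiting time follows from the first two moments of the transmission time via the standard Pollaczek-Khinchine formula. The sketch below takes those moments as given inputs; the paper derives them from the CSI-based power allocation, so the values used here are placeholders:

```python
def mg1_avg_wait(lam, es, es2):
    # Mean queueing delay of an M/G/1 queue via the Pollaczek-Khinchine formula:
    #   W = lam * E[S^2] / (2 * (1 - rho)),  rho = lam * E[S] < 1,
    # where S is the packet transmission time and lam the Poisson arrival rate.
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization rho must be < 1")
    return lam * es2 / (2.0 * (1.0 - rho))

# M/M/1 sanity check: exponential service with mean 1/mu has E[S^2] = 2/mu^2,
# so the formula reduces to the textbook result W = rho / (mu - lam).
w = mg1_avg_wait(0.5, 1.0, 2.0)  # lam = 0.5, mu = 1
```

For lam = 0.5 and unit-mean exponential service, both the formula above and the M/M/1 result rho/(mu - lam) give a mean wait of 1 slot, which is a quick consistency check before plugging in the paper's derived moments.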

  2. Full-Duplex opportunistic relay selection in future spectrum-sharing networks

    KAUST Repository

    Khafagy, Mohammad Galal

    2015-06-01

    We propose and analyze the performance of full-duplex relay selection in primary/secondary spectrum-sharing networks. Contrary to half-duplex relaying, full-duplex relaying (FDR) enables simultaneous listening/forwarding at the secondary relay, thereby allowing for a higher spectral efficiency. However, since the source and relay simultaneously transmit in FDR, their superimposed signal at the primary receiver should now satisfy the existing interference constraint which can considerably limit the secondary network throughput. In this regard, relay selection can offer an adequate solution to boost the secondary throughput while satisfying the imposed interference limit. We first analyze the performance of opportunistic relay selection among a cluster of full-duplex decode-and-forward relays with self-interference by deriving the exact cumulative distribution function of its end-to-end signal-to-noise ratio. Second, we evaluate the end-to-end performance of relay selection with interference constraints due to the presence of a primary receiver. Finally, the presented exact theoretical findings are verified by numerical simulations.
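    One common way to model the end-to-end SNR of a full-duplex decode-and-forward relay is to treat residual self-interference as noise on the first hop, so opportunistic selection picks the relay with the largest bottleneck SNR. A hedged sketch under that assumption (the paper's exact model and constraint handling may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def select_fd_relay(gamma_sr, gamma_rd, gamma_rr):
    """Pick the full-duplex DF relay with the largest end-to-end SNR.
    For relay k, the first hop is limited by residual self-interference:
        SINR_1[k] = gamma_sr[k] / (1 + gamma_rr[k])
    and the DF end-to-end SNR is min(SINR_1[k], gamma_rd[k])."""
    e2e = np.minimum(gamma_sr / (1.0 + gamma_rr), gamma_rd)
    best = int(np.argmax(e2e))
    return best, e2e[best]

K = 5
gamma_sr = rng.exponential(10.0, K)   # source->relay SNRs
gamma_rd = rng.exponential(10.0, K)   # relay->destination SNRs
gamma_rr = rng.exponential(1.0, K)    # residual self-interference levels

best, snr = select_fd_relay(gamma_sr, gamma_rd, gamma_rr)
print(f"selected relay {best} with end-to-end SNR {snr:.2f}")
```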

  3. Adaptive transmission schemes for MISO spectrum sharing systems: Tradeoffs and performance analysis

    KAUST Repository

    Bouida, Zied

    2014-10-01

    In this paper, we propose a number of adaptive transmission techniques in order to improve the performance of the secondary link in a spectrum sharing system. We first introduce the concept of minimum-selection maximum ratio transmission (MS-MRT) as an adaptive variation of the existing maximum ratio transmission (MRT) technique. While in MRT all available antennas are used for transmission, MS-MRT uses the minimum subset of antennas verifying both the interference constraint (IC) to the primary user and the bit error rate (BER) requirements. Similar to MRT, MS-MRT assumes that perfect channel state information (CSI) is available at the secondary transmitter (ST), which makes this scheme challenging from a practical point of view. To overcome this challenge, we propose another transmission technique based on orthogonal space-time block codes with transmit antenna selection (TAS). This technique uses the full-rate full-diversity Alamouti scheme in order to maximize the secondary link's transmission rate. The performance of these techniques is analyzed in terms of the average spectral efficiency (ASE), average number of transmit antennas, average delay, average BER, and outage performance. To motivate these analytical results, the tradeoffs offered by the proposed schemes are summarized and then demonstrated through several numerical examples.

  4. A genome scan for loci shared by autism spectrum disorder and language impairment.

    Science.gov (United States)

    Bartlett, Christopher W; Hou, Liping; Flax, Judy F; Hare, Abby; Cheong, Soo Yeon; Fermano, Zena; Zimmerman-Bier, Barbie; Cartwright, Charles; Azaro, Marco A; Buyske, Steven; Brzustowicz, Linda M

    2014-01-01

    The authors conducted a genetic linkage study of families that have both autism spectrum disorder (ASD) and language-impaired probands to find common communication impairment loci. The hypothesis was that these families have a high genetic loading for impairments in language ability, thus influencing the language and communication deficits of the family members with ASD. Comprehensive behavioral phenotyping of the families also enabled linkage analysis of quantitative measures, including normal, subclinical, and disordered variation in all family members for the three general autism symptom domains: social, communication, and compulsive behaviors. The primary linkage analysis coded persons with either ASD or specific language impairment as "affected." The secondary linkage analysis consisted of quantitative metrics of autism-associated behaviors capturing normal to clinically severe variation, measured in all family members. Linkage to language phenotypes was established at two novel chromosomal loci, 15q23-26 and 16p12. The secondary analysis of normal and disordered quantitative variation in social and compulsive behaviors established linkage to two loci for social behaviors (at 14q and 15q) and one locus for repetitive behaviors (at 13q). These data indicate shared etiology of ASD and specific language impairment at two novel loci. Additionally, nonlanguage phenotypes based on social aloofness and rigid personality traits showed compelling evidence for linkage in this study group. Further genetic mapping is warranted at these loci.

  5. On the Impact of Closed Access and Users Identities in Spectrum-Shared Overlaid Wireless Networks

    KAUST Repository

    Radaydeh, Redha M.

    2016-03-28

    This paper develops analytical models to investigate the impact of various operation terms and parameters on the downlink performance of spectrum-shared overlaid networks under closed-access small-cell deployment. It is considered that closed-access small cells (i.e., femtocells) cannot reuse available channels and can serve only active authorized user equipments (UEs). On the other hand, the macrocell base station can unconditionally reuse available channels to serve active macrocell UEs. The analysis characterizes UEs' identities, their likelihoods of being active, and their likelihoods of initiating interference. Moreover, it quantifies interference sources observed from effective femtocells considering their over-loaded and under-loaded cell scenarios. The developed results characterizing an active UE's performance and the impact of the number of available channels are thoroughly examined. The obtained results are generally applicable for any performance measure and any network channel models. Numerical and simulation examples are presented to clarify the main outcomes of this paper. © 2015 IEEE.

  6. Achievable Rate of Spectrum Sharing Cognitive Radio Multiple-Antenna Channels

    KAUST Repository

    Sboui, Lokman

    2015-04-28

    We investigate the spectral efficiency gain of an uplink Cognitive Radio (CR) Multiple-Input Multiple-Output (MIMO) system in which the Secondary User (SU) is allowed to share the spectrum with the Primary User (PU) using a specific precoding scheme to communicate with a common receiver. The proposed scheme exploits, at the same time, the free eigenmodes of the primary channel after a space alignment procedure and the interference threshold tolerated by the PU. At the common receiver, we adopt a Successive Interference Cancellation (SIC) technique to eliminate the effect of the detected primary signal transmitted through the exploited eigenmodes. Furthermore, we analyze the effects of SIC inaccuracy and imperfect CSI estimation on the PU and SU throughputs. Numerical results show that our proposed scheme considerably enhances the cognitive achievable rate. For instance, in the case of perfect detection of the PU signal, the CR rate remains non-zero at high Signal-to-Noise Ratio (SNR), which is usually impossible when only a space alignment technique is employed. We show that a modified water-filling power allocation policy at the PU can increase the secondary rate with a marginal degradation of the primary rate. Finally, we investigate the behavior of the PU and SU rates through the study of the achievable rate region.

  7. Adaptive rate transmission for spectrum sharing system with quantized channel state information

    KAUST Repository

    Abdallah, Mohamed M.

    2011-03-01

    The capacity of a secondary link in spectrum sharing systems has been recently investigated in fading environments. In particular, the secondary transmitter is allowed to adapt its power and rate to maximize its capacity subject to the constraint of maximum interference level allowed at the primary receiver. In most of the literature, it was assumed that estimates of the channel state information (CSI) of the secondary link and the interference level are made available at the secondary transmitter via infinite-resolution feedback links between the secondary/primary receivers and the secondary transmitter. However, the assumption of infinite-resolution feedback links is not always practical, as it requires an excessive amount of bandwidth. In this paper, we develop a framework for optimizing the performance of the secondary link in terms of the average spectral efficiency assuming quantized CSI available at the secondary transmitter. We develop a computationally efficient algorithm for optimally quantizing the CSI and finding the optimal power and rate employed at the cognitive transmitter for each quantized CSI level so as to maximize the average spectral efficiency. Our results quantify the number of bits required to represent the CSI that is sufficient to achieve almost the maximum average spectral efficiency attained using full knowledge of the CSI for Rayleigh fading channels. © 2011 IEEE.
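    Once the quantizer and the per-level power/rate pairs have been optimized offline, runtime operation reduces to a threshold lookup. A minimal sketch of that lookup (the thresholds, powers, and rates below are made-up placeholders, not the paper's optimized values):

```python
import numpy as np

def quantize_csi(gain, thresholds):
    """Map a channel power gain to one of len(thresholds)+1 feedback
    levels; with B feedback bits, len(thresholds) = 2**B - 1."""
    return int(np.searchsorted(thresholds, gain))

def adapt(gain, thresholds, powers, rates):
    """Look up the (power, rate) pair chosen offline for each level."""
    level = quantize_csi(gain, thresholds)
    return powers[level], rates[level]

# Hypothetical 2-bit quantizer: 3 thresholds -> 4 regions.
thresholds = [0.2, 0.7, 1.5]
powers = [0.0, 0.4, 0.8, 1.0]   # no transmission in the worst region
rates  = [0.0, 1.0, 2.0, 3.0]   # bits/s/Hz per region

p, r = adapt(0.9, thresholds, powers, rates)
print(p, r)   # 0.8 2.0
```

    With 2 feedback bits, only the 2-bit region index crosses the feedback link, rather than an infinite-resolution channel estimate.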

  8. Proactive Spectrum Sharing for SWIPT in MIMO Cognitive Radio Systems Using Antenna Switching Technique

    KAUST Repository

    Benkhelifa, Fatma

    2017-04-24

    In this paper, we consider the simultaneous wireless information and power transfer (SWIPT) for the spectrum sharing (SS) in a multiple-input multiple-output (MIMO) cognitive radio (CR) network. The secondary transmitter (ST) selects only one antenna which maximizes the received signal-to-noise ratio (SNR) at the secondary receiver (SR) and minimizes the interference induced at the primary receiver (PR). Moreover, PR is an energy harvesting (EH) node using the antenna switching (AS) which assigns a subset of its antennas to harvest the energy and assigns the rest to decode its information data. The objective of this work is to show that the SS is advantageous for both SR and PR sides and leads to a win-win situation. To illustrate the incentive of the SS in CR network, we evaluate the energy and data performance metrics in terms of the average harvested energy, the power outage, and the mutual outage probability (MOP) which declares a data outage event if the PR or SR is in an outage. We present some special cases and asymptotic results of the derived analytic results. Through the simulation results, we show the impact of various simulation parameters and the benefits due to the presence of ST.

  9. Achievable rate of spectrum sharing cognitive radio systems over fading channels at low-power regime

    KAUST Repository

    Sboui, Lokman

    2014-11-01

    We study the achievable rate of cognitive radio (CR) spectrum sharing systems at the low-power regime for general fading channels and then for Nakagami fading. We formally define the low-power regime and present the corresponding closed-form expressions of the achievable rate lower bound under various types of interference and/or power constraints, depending on the available channel state information of the cross link (CL) between the secondary-user transmitter and the primary-user receiver. We explicitly characterize two regimes where either the interference constraint or the power constraint dictates the optimal power profile. Our framework also highlights the effects of different fading parameters on the secondary link (SL) ergodic achievable rate. We also study more realistic scenarios with either 1-bit quantized channel feedback from the CL alone or 2-bit feedback from both the CL and the SL, propose simple power control schemes, and show that they achieve the aforementioned rate in the low-power regime. Interestingly, we show that the low-power regime analysis provides a specific insight into the maximum achievable rate behavior of CR that has not been reported by previous studies.

  10. Achievable rate of cognitive radio spectrum sharing MIMO channel with space alignment and interference temperature precoding

    KAUST Repository

    Sboui, Lokman

    2013-06-01

    In this paper, we investigate the spectral efficiency gain of an uplink Cognitive Radio (CR) Multiple-Input Multiple-Output (MIMO) system in which the Secondary/unlicensed User (SU) is allowed to share the spectrum with the Primary/licensed User (PU) using a specific precoding scheme to communicate with a common receiver. The proposed scheme exploits at the same time the free eigenmodes of the primary channel after a space alignment procedure and the interference threshold tolerated by the PU. In our work, we study the maximum achievable rate of the CR node after deriving an optimal power allocation with respect to outage interference and average power constraints. We then study a protection protocol that considers a fixed interference threshold. Applied to Rayleigh fading channels, we show, through numerical results, that our proposed scheme considerably enhances the cognitive achievable rate. For instance, in the case of perfect detection of the PU signal, after applying Successive Interference Cancellation (SIC), the CR rate remains non-zero at high Signal-to-Noise Ratio (SNR), which is usually impossible when only a space alignment technique is used. In addition, we show that the rate gain is proportional to the allowed interference threshold by providing a fixed rate even in the high SNR range. © 2013 IEEE.

  11. End-to-end performance of cooperative relaying in spectrum-sharing systems with quality of service requirements

    KAUST Repository

    Asghari, Vahid Reza

    2011-07-01

    We propose adopting a cooperative relaying technique in spectrum-sharing cognitive radio (CR) systems to more effectively and efficiently utilize available transmission resources, such as power, rate, and bandwidth, while adhering to the quality of service (QoS) requirements of the licensed (primary) users of the shared spectrum band. In particular, we first consider that the cognitive (secondary) user's communication is assisted by an intermediate relay that implements the decode-and-forward (DF) technique onto the secondary user's relayed signal to help with communication between the corresponding source and the destination nodes. In this context, we obtain first-order statistics pertaining to the first- and second-hop transmission channels, and then, we investigate the end-to-end performance of the proposed spectrum-sharing cooperative relaying system under resource constraints defined to assure that the primary QoS is unaffected. Specifically, we investigate the overall average bit error rate (BER), ergodic capacity, and outage probability of the secondary's communication subject to appropriate constraints on the interference power at the primary receivers. We then consider a general scenario where a cluster of relays is available between the secondary source and destination nodes. In this case, making use of the partial relay selection method, we generalize our results for the single-relay scheme and obtain the end-to-end performance of the cooperative spectrum-sharing system with a cluster of L available relays. Finally, we examine our theoretical results through simulations and comparisons, illustrating the overall performance of the proposed spectrum-sharing cooperative system and quantify its advantages for different operating scenarios and conditions. © 2011 IEEE.

  12. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general, and in the field of high performance computing and supercomputing in particular, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it is not limited to one very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of established areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small-scale hardware such as tablet computers, pads, smart-phones, etc. up to large tiled display walls. What interests us most is not so much the hardware setup but rather the visualization algorithms behind these display systems that scale from the average smart phone up to the largest gigapixel display walls.

  13. Delay performance of a broadcast spectrum sharing network in Nakagami-m fading

    KAUST Repository

    Khan, Fahd Ahmed

    2014-03-01

    In this paper, we analyze the delay performance of a point-to-multipoint secondary network (P2M-SN), which is concurrently sharing the spectrum with a point-to-multipoint primary network (P2M-PN). The channel is assumed to be independent but not identically distributed (i.n.i.d.) and has Nakagami-m fading. A constraint on the peak transmit power of the secondary-user transmitter (SU-Tx) is considered, in addition to the peak interference power constraint. The SU-Tx is assumed to be equipped with a buffer and is modeled using the M/G/1 queueing model. The performance of this system is analyzed for two scenarios: 1) P2M-SN does not experience interference from the primary network (denoted by P2M-SN-NI), and 2) P2M-SN does experience interference from the primary network (denoted by P2M-SN-WI). The performance of both P2M-SN-NI and P2M-SN-WI is analyzed in terms of the packet transmission time, and the closed-form cumulative distribution function (CDF) of the packet transmission time is derived for both scenarios. Furthermore, by utilizing the concept of timeout, an exact closed-form expression for the outage probability of the P2M-SN-NI is obtained. In addition, an accurate approximation for the outage probability of the P2M-SN-WI is also derived. Furthermore, for the P2M-SN-NI, the analytic expressions for the total average waiting time (TAW-time) of packets and the average number of packets waiting in the buffer of the SU-Tx are also derived. Numerical simulations are also performed to validate the derived analytical results. © 1967-2012 IEEE.
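    For an M/G/1 buffer like the one modelling the SU-Tx here, the mean waiting time follows from the Pollaczek-Khinchine formula once the first two moments of the (packet transmission) service time are known. A small self-contained check, not tied to the paper's specific fading-dependent service distribution:

```python
def mg1_mean_wait(arrival_rate, s_mean, s_second_moment):
    """Pollaczek-Khinchine formula for the mean waiting time in an
    M/G/1 queue: W = lambda * E[S^2] / (2 * (1 - rho)), rho = lambda * E[S]."""
    rho = arrival_rate * s_mean
    assert rho < 1, "queue is unstable"
    return arrival_rate * s_second_moment / (2.0 * (1.0 - rho))

def mg1_mean_queue_len(arrival_rate, s_mean, s_second_moment):
    """Little's law: average number of packets waiting = lambda * W."""
    return arrival_rate * mg1_mean_wait(arrival_rate, s_mean, s_second_moment)

# Sanity check with exponential service of mean 0.5 => E[S^2] = 2 * 0.5**2 = 0.5,
# which reduces to the M/M/1 result W = rho * E[S] / (1 - rho) = 0.5.
w = mg1_mean_wait(arrival_rate=1.0, s_mean=0.5, s_second_moment=0.5)
print(w)  # 0.5
```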

  14. A Strategic Bargaining Game for a Spectrum Sharing Scheme in Cognitive Radio-Based Heterogeneous Wireless Sensor Networks.

    Science.gov (United States)

    Mao, Yuxing; Cheng, Tao; Zhao, Huiyuan; Shen, Na

    2017-11-27

    In Wireless Sensor Networks (WSNs), unlicensed users, that is, sensor nodes, have excessively exploited the unlicensed radio spectrum. Through Cognitive Radio (CR), licensed radio spectra, which are owned by licensed users, can be partly or entirely shared with unlicensed users. This paper proposes a strategic bargaining spectrum-sharing scheme for a CR-based heterogeneous WSN (HWSN). The sensors of HWSNs are heterogeneous and operate in different wireless environments, which leads to various signal-to-noise ratios (SNRs) for the same or different licensed users. Unlicensed users bargain with licensed users over the spectrum price. In each round of bargaining, licensed users are allowed to adaptively adjust their spectrum price to maximize their profits. Each unlicensed user then makes its best response, informing licensed users whether it is "bargaining" or "warning". Through finite rounds of bargaining, this scheme obtains a Nash bargaining solution (NBS) under which all licensed and unlicensed users reach an agreement. The simulation results demonstrate that the proposed scheme can quickly find an NBS and that all players in the game prefer to be honest. The proposed scheme outperforms existing schemes, within a certain range, in terms of fairness and trade success probability.
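    The round-by-round price adjustment can be pictured with a deliberately simplified toy: a seller raises its asking price until the buyer's surplus would turn negative, at which point the buyer's "warning" fixes the agreement price. This is purely illustrative of finite-round bargaining with best responses, not the paper's actual game or its NBS computation (all names and numbers are ours):

```python
def bargain(valuation, cost, step=1, max_rounds=100):
    """Toy alternating-price bargaining in integer credits: the licensed
    user raises its asking price while the unlicensed user's surplus
    stays non-negative; once a further raise would be rejected (a
    'warning'), the current price is the agreement."""
    price = cost
    for rounds in range(1, max_rounds + 1):
        if price + step <= valuation:
            price += step          # seller adjusts its price upward
        else:
            return price, rounds   # buyer would reject a higher price
    return price, max_rounds

price, rounds = bargain(valuation=10, cost=3)
print(price, rounds)  # 10 8
```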

  15. 76 FR 81991 - National Spectrum Sharing Research Experimentation, Validation, Verification, Demonstration and...

    Science.gov (United States)

    2011-12-29

    ... AGENCY: The National Coordination Office (NCO) for Networking and Information Technology Research and... academia will collaboratively define the concept and requirements of national level spectrum research... issued by the National Coordination Office (NCO) for the Networking and Information Technology Research...

  16. Sum-rate analysis of spectrum sharing spatial multiplexing MIMO systems with zero-forcing and multiuser diversity

    KAUST Repository

    Yang, Liang

    2013-06-01

    This paper considers a multiuser spectrum sharing (SS) multiple-input multiple-output (MIMO) system with zero-forcing (ZF) operating in a Rayleigh fading environment. We provide an asymptotic sum-rate analysis to investigate the effects of different parameters on the multiuser diversity gain. For a ZF SS spatial multiplexing system with scheduling, the asymptotic sum-rate scales like N_t log2(Q(K^(1/(N_t N_p)) - 1)/N_t), where N_p denotes the number of antennas at the primary receiver, Q is the interference temperature, and K represents the number of secondary transmitters. © 2013 IEEE.
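    Taking the scaling law to be N_t * log2(Q * (K^(1/(N_t*N_p)) - 1) / N_t), a quick numerical check shows how the multiuser diversity gain grows, slowly, with the number of secondary transmitters K (parameter values below are arbitrary examples):

```python
import math

def zf_ss_sum_rate_scaling(n_t, n_p, k, q):
    """Asymptotic sum-rate scaling of the ZF spectrum-sharing system:
    N_t * log2(Q * (K**(1/(N_t*N_p)) - 1) / N_t)."""
    return n_t * math.log2(q * (k ** (1.0 / (n_t * n_p)) - 1.0) / n_t)

# The gain grows only logarithmically in K (via the (N_t*N_p)-th root),
# and a larger N_p at the primary receiver slows the growth further.
for k in (10, 100, 1000):
    print(k, round(zf_ss_sum_rate_scaling(n_t=2, n_p=2, k=k, q=10.0), 2))
```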

  17. Shared heritability of attention-deficit/hyperactivity disorder and autism spectrum disorder

    NARCIS (Netherlands)

    Rommelse, N.N.J.; Franke, B.; Geurts, H.M.; Hartman, C.A.; Buitelaar, J.K.

    2010-01-01

    Attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) are both highly heritable neurodevelopmental disorders. Evidence indicates both disorders co-occur with a high frequency, in 20-50% of children with ADHD meeting criteria for ASD and in 30-80% of ASD children meeting

  20. Examining Shared and Unique Aspects of Social Anxiety Disorder and Autism Spectrum Disorder Using Factor Analysis

    Science.gov (United States)

    White, Susan W.; Bray, Bethany C.; Ollendick, Thomas H.

    2012-01-01

    Social Anxiety Disorder (SAD) and Autism Spectrum Disorders (ASD) are fairly common psychiatric conditions that impair the functioning of otherwise healthy young adults. Given that the two conditions frequently co-occur, measurement of the characteristics unique to each condition is critical. This study evaluated the structure and construct…

  1. Features of the broader autism phenotype in people with epilepsy support shared mechanisms between epilepsy and autism spectrum disorder.

    Science.gov (United States)

    Richard, Annie E; Scheffer, Ingrid E; Wilson, Sarah J

    2017-04-01

    Richard, A.E., I.E. Scheffer and S.J. Wilson. Features of the broader autism phenotype in people with epilepsy support shared mechanisms between epilepsy and autism spectrum disorder. NEUROSCI BIOBEHAV REV 21(1) XXX-XXX, 2016. To inform on mechanisms underlying the comorbidity of epilepsy and autism spectrum disorder (ASD), we conducted meta-analyses to test whether impaired facial emotion recognition (FER) and theory of mind (ToM), key phenotypic traits of ASD, are more common in people with epilepsy (PWE) than controls. We contrasted these findings with those of relatives of individuals with ASD (ASD-relatives) compared to controls. Furthermore, we examined the relationship of demographic (age, IQ, sex) and epilepsy-related factors (epilepsy onset age, duration, seizure laterality and origin) to FER and ToM. Thirty-one eligible studies of PWE (including 1449 individuals: 77% with temporal lobe epilepsy), and 22 of ASD-relatives (N=1295) were identified by a systematic database search. Analyses revealed reduced FER and ToM in PWE compared to controls (p<0.001), but only reduced ToM in ASD-relatives (p<0.001). ToM was poorer in PWE than ASD-relatives. Only weak associations were found between FER and ToM and epilepsy-related factors. These findings suggest shared mechanisms between epilepsy and ASD, independent of intellectual disability. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Spectrum

    DEFF Research Database (Denmark)

    Høgfeldt Hansen, Leif

    2016-01-01

    The publication functions as a process description of the development and construction of an urban furniture SPECTRUM in the city of Gwangju, Republic of Korea. It is used as the catalogue for the exhibition of Spectrum.

  3. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua

    2012-06-01

    Since the electromagnetic spectrum resource is becoming more and more scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and improves only slowly with increasing average SNR. Furthermore, a larger Nakagami-m fading parameter on the interference channels slightly deteriorates system performance. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and they are shown to be efficient tools for exact evaluation of system performance.

  4. Inclusion of children with autism spectrum disorders through shared peer activity

    Directory of Open Access Journals (Sweden)

    Stephen Von Tetzchner

    2013-11-01

    Full Text Available at http://dx.doi.org/10.5902/1984686X9830. Inclusion may be defined as having a full and active part in the life of the mainstream kindergarten or school. There are professional, political and ethical reasons for striving for inclusion and there are different approaches to how inclusive education and training of children with autism spectrum disorders (ASD) should be organized. The basis for the illustrative case excerpts presented here is a blend of social constructivism, event cognition and ecological psychology. Children with ASD vary widely and intervention has to be based on knowledge about development, learning and autism in general, as well as knowledge about the individual child and his or her proximal environment or ecology. Many children with ASD need some one-to-one education but participation in child-managed activities and events is a core element of true inclusion. The case excerpts illustrate principles for how this may be achieved.

  5. Adaptive multi-channel downlink assignment for overloaded spectrum-shared multi-antenna overlaid cellular networks

    KAUST Repository

    Radaydeh, Redha Mahmoud

    2012-10-19

    Overlaid cellular technology has been considered as a promising candidate to enhance the capacity and extend the coverage of cellular networks, particularly indoors. The deployment of small cells (e.g. femtocells and/or picocells) in an overlaid setup is expected to reduce the operational power and to function satisfactorily with the existing cellular architecture. Among the possible deployments of small-cell access points is to manage many of them to serve specific spatial locations, while reusing the available spectrum universally. This contribution considers the aforementioned scenario with the objective to serve as many active users as possible when the available downlink spectrum is overloaded. The case study is motivated by the importance of realizing universal resource sharing in overlaid networks, while reducing the load of distributing available resources, satisfying downlink multi-channel assignment, controlling the aggregate level of interference, and maintaining desired design/operation requirements. These objectives need to be achieved in distributed manner in each spatial space with as low processing load as possible when the feedback links are capacity-limited, multiple small-cell access points can be shared, and data exchange between access points can not be coordinated. This contribution is summarized as follows. An adaptive downlink multi-channel assignment scheme when multiple co-channel and shared small-cell access points are allocated to serve active users is proposed. It is assumed that the deployed access points employ isotropic antenna arrays of arbitrary sizes, operate using the open-access strategy, and transmit on shared physical channels simultaneously. Moreover, each active user can be served by a single transmit channel per each access point at a time, and can sense the concurrent interference level associated with each transmit antenna channel non-coherently. The proposed scheme aims to identify a suitable subset of transmit channels
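    A distributed assignment of the kind sketched in the abstract can be caricatured as each access point picking, per user, the channel with the lowest non-coherently sensed interference, subject to an aggregate-interference threshold. This is only an illustrative reading, not the contribution's actual scheme (all names and numbers are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def assign_channels(sensed_interference, threshold):
    """For each access point (row), pick the transmit channel (column)
    with the lowest sensed interference, accepting it only if that
    level stays below the aggregate-interference threshold."""
    choice = sensed_interference.argmin(axis=1)
    ok = sensed_interference[np.arange(len(choice)), choice] < threshold
    return [int(c) if good else None for c, good in zip(choice, ok)]

# 3 shared access points, 4 shared physical channels (made-up numbers).
interference = rng.exponential(1.0, size=(3, 4))
print(assign_channels(interference, threshold=2.0))
```

    A `None` entry models an access point deferring transmission because every sensed channel would violate the interference limit.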

  6. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research.

    Science.gov (United States)

    Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila

    2015-11-01

    Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage
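    The core idea of distributed iterative multivariate models, sites exchanging only aggregate statistics while patient-level data stays local, can be sketched with federated logistic regression: each site returns a gradient on its own records and a coordinator sums them, reproducing what a pooled fit would estimate. This is a generic sketch of the technique, not SCANNER's actual web-service implementation (all names and the synthetic data are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def local_gradient(x, y, w):
    """Each site computes a logistic-loss gradient on its own patients;
    only this aggregate (never patient-level data) leaves the site."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return x.T @ (p - y)

def federated_logistic_fit(sites, dim, lr=0.5, iters=300):
    """Coordinator sums per-site gradients each iteration (map-reduce
    style) and takes a gradient step, matching a pooled analysis."""
    w = np.zeros(dim)
    n = sum(len(y) for _, y in sites)
    for _ in range(iters):
        grad = sum(local_gradient(x, y, w) for x, y in sites)
        w -= lr * grad / n
    return w

# Two hypothetical sites with synthetic data drawn from known coefficients.
w_true = np.array([1.0, -2.0])
sites = []
for _ in range(2):
    x = rng.normal(size=(500, 2))
    y = (rng.random(500) < 1 / (1 + np.exp(-x @ w_true))).astype(float)
    sites.append((x, y))

w_hat = federated_logistic_fit(sites, dim=2)
print(np.round(w_hat, 1))  # close to the true coefficients [1.0, -2.0]
```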

  7. A Genome-scan for Loci Shared by Autism Spectrum Disorder and Language Impairment

    Science.gov (United States)

    Bartlett, Christopher W.; Hou, Liping; Flax, Judy F.; Hare, Abby; Cheong, Soo Yeon; Fermano, Zena; Zimmerman-Bier, Barbie; Cartwright, Charles; Azaro, Marco A.; Buyske, Steven; Brzustowicz, Linda M.

    2014-01-01

    Objective The authors conducted the first genetic linkage study of families that segregate both autism and specific language impairment to find common communication impairment loci. The hypothesis was that these families have a high genetic loading for impairments in language ability, thus influencing the language and communication deficits of the family members with autism. Comprehensive behavioral phenotyping of the families also enabled linkage analysis of quantitative measures, including normal, subclinical and disordered variation in all family members for the three general autism symptom domains: social, communication, and compulsive behaviors. Method The primary linkage analysis coded persons with either autism or specific language impairment as “affected” with language impairment. The secondary linkage analysis consisted of quantitative metrics of autism-associated behaviors capturing normal to clinically severe variation, measured in all family members. Results Linkage to language phenotypes was established at two novel chromosomal loci, 15q23-26 and 16p12. The secondary analysis of normal and disordered quantitative variation in social and compulsive behaviors established linkage to two loci for social behaviors (at 14q and 15q) and one locus for repetitive behaviors (at 13q). Conclusion These data indicate shared etiology of autism and specific language impairment at two novel loci. Additionally, non-language phenotypes based on social aloofness and rigid personality traits showed compelling evidence for linkage in this sample. Further genetic mapping is warranted at these loci. PMID:24170272

  8. Scalable privacy-preserving data sharing methodology for genome-wide association studies: an application to iDASH healthcare privacy protection challenge.

    Science.gov (United States)

    Yu, Fei; Ji, Zhanglong

    2014-01-01

    In response to the growing interest in genome-wide association study (GWAS) data privacy, the Integrating Data for Analysis, Anonymization and SHaring (iDASH) center organized the iDASH Healthcare Privacy Protection Challenge, with the aim of investigating the effectiveness of applying privacy-preserving methodologies to human genetic data. This paper is based on a submission to the iDASH Healthcare Privacy Protection Challenge. We apply privacy-preserving methods that are adapted from Uhler et al. 2013 and Yu et al. 2014 to the challenge's data and analyze the data utility after the data are perturbed by the privacy-preserving methods. Major contributions of this paper include a new interpretation of the χ2 statistic in a GWAS setting and new results about the Hamming distance score, a key component of one of the privacy-preserving methods.
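    As background for the χ2 statistic discussed above, the standard per-SNP allelic test is a Pearson chi-square over a 2x2 table of allele counts in cases and controls. A minimal sketch (the function and counts are illustrative, not the challenge's code):

```python
import numpy as np

def allelic_chi2(case_alleles, control_alleles):
    """Pearson chi-square statistic for a 2x2 table of allele counts
    (rows: case/control, columns: minor/major allele), the per-SNP
    statistic typically released as GWAS summary data."""
    table = np.array([case_alleles, control_alleles], dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

chi2 = allelic_chi2([30, 70], [10, 90])  # -> 12.5
```

    Privacy mechanisms of the kind studied in the paper perturb either the counts or the released statistics, and the utility question is how much such perturbation moves values like the one computed here.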

  9. Corfu: A Platform for Scalable Consistency

    OpenAIRE

    Wei, Michael

    2017-01-01

    Corfu is a platform for building systems which are extremely scalable, strongly consistent and robust. Unlike other systems which weaken guarantees to provide better performance, we have built Corfu with a resilient fabric tuned and engineered for scalability and strong consistency at its core: the Corfu shared log. On top of the Corfu log, we have built a layer of advanced data services which leverage the properties of the Corfu log. Today, Corfu is already replacing data platforms in commer...

  10. Statistical Delay QoS Provisioning for Energy-Efficient Spectrum-Sharing Based Wireless Ad Hoc Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yichen Wang

    2016-01-01

    In this paper, we develop a statistical delay quality-of-service (QoS) provisioning framework for the energy-efficient spectrum-sharing based wireless ad hoc sensor network (WAHSN), which is characterized by the delay-bound violation probability. Based on the established delay QoS provisioning framework, we formulate a nonconvex optimization problem that aims at maximizing the average energy efficiency of the sensor node in the WAHSN while meeting the primary user's (PU's) statistical delay QoS requirement and satisfying the sensor node's average transmission rate, average transmitting power, and peak transmitting power constraints. By employing the theories of fractional programming, convex hull, and probabilistic transmission, we convert the original fractional-structured nonconvex problem into an additively structured parametric convex problem and obtain the optimal power allocation strategy under a given parameter via the Lagrangian method. Finally, we derive the optimal average energy efficiency and the corresponding optimal power allocation scheme by employing the Dinkelbach method. Simulation results show that the derived optimal power allocation strategy can be dynamically adjusted based on the PU's delay QoS requirement as well as the channel conditions. The impact of the PU's delay QoS requirement on the sensor node's energy efficiency is also illustrated.
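    The Dinkelbach method named above turns a fractional program max f(p)/g(p) into a sequence of parametric problems max f(p) - q·g(p), updating q to the achieved ratio until the parametric optimum reaches zero. A minimal numerical sketch with an illustrative rate/power model (grid search stands in for the paper's Lagrangian inner solution, and the constants are assumptions):

```python
import numpy as np

def dinkelbach_ee(h=2.0, n0=1.0, p_circuit=0.5, p_max=4.0, tol=1e-9):
    """Dinkelbach iteration for  max_p rate(p)/power(p)  with
    rate = log2(1 + h*p/n0) and power = p_circuit + p.  The inner
    parametric problem max_p rate(p) - q*power(p) is solved by
    grid search here for clarity."""
    grid = np.linspace(0.0, p_max, 100001)
    rate = np.log2(1.0 + h * grid / n0)
    power = p_circuit + grid
    q = 0.0
    while True:
        k = int(np.argmax(rate - q * power))
        f_val = rate[k] - q * power[k]
        if f_val < tol:
            return q, grid[k]  # optimal energy efficiency and power
        q = rate[k] / power[k]

q_opt, p_opt = dinkelbach_ee()
```

    Each iteration strictly increases q, and the loop stops once no power level can beat the current ratio, which is the fixed-point characterization the method relies on.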

  11. Spectrum Sharing between Cooperative Relay and Ad-hoc Networks: Dynamic Transmissions under Computation and Signaling Limitations

    CERN Document Server

    Sun, Yin; Li, Yunzhou; Zhou, Shidong; Xu, Xibin

    2010-01-01

    This paper studies a spectrum sharing scenario between an uplink cognitive relay network (CRN) and some nearby low power ad-hoc networks. In particular, the dynamic resource allocation of the CRN is analyzed, which aims to minimize the average interfering time with the ad-hoc networks subject to a minimal average uplink throughput constraint. A long term average rate formula is considered, which is achieved by a half-duplex decode-and-forward (DF) relay strategy with multi-channel transmissions. Both the source and relay are allowed to queue their data, by which they can adjust the transmission rates flexibly based on sensing and predicting the channel state and ad-hoc traffic. The dynamic resource allocation of the CRN is formulated as a non-convex stochastic optimization problem. By carefully analyzing the optimal transmission time scheduling, it is reduced to a stochastic convex optimization problem and solved by the dual optimization method. The signaling and computation processes are designed carefully t...

  12. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issue

  13. Evidence for shared deficits in identifying emotions from faces and from voices in autism spectrum disorders and specific language impairment.

    Science.gov (United States)

    Taylor, Lauren J; Maybery, Murray T; Grayndler, Luke; Whitehouse, Andrew J O

    2015-07-01

    While autism spectrum disorder (ASD) and specific language impairment (SLI) have traditionally been conceptualized as distinct disorders, recent findings indicate that the boundaries between these two conditions are not clear-cut. While considerable research has investigated overlap in the linguistic characteristics of ASD and SLI, relatively less research has explored possible overlap in the socio-cognitive domain, particularly in terms of the emotion recognition abilities of these two groups of children. The aim was to investigate facial and vocal emotion recognition in children with ASD, children with SLI and typically developing (TD) children. The ASD group was subdivided into those with 'normal' (ALN) and those with 'impaired' (ALI) language to explore the extent to which language ability influenced performance on the emotion recognition task. Twenty-nine children with ASD (17 ALN and 12 ALI), 18 children with SLI and 66 TD children completed visual and auditory versions of an emotion recognition task. For the visual version of the task, the participants saw photographs of people expressing one of six emotions (happy, sad, scared, angry, surprised, disgusted) on the whole face. For the auditory modality, the participants heard a neutral sentence that conveyed one of the six emotional expressions in the tone of the voice. In both conditions, the children were required to indicate how the person they could see/hear was feeling by selecting a cartoon face that was presented on the computer screen. The results showed that all clinical groups were less accurate than the TD children when identifying emotions on the face and in the voice. While the ALN children were less accurate than the TD children only when identifying expressions that require inferring another's mental state (surprise, disgust), the ALI and the SLI children were less accurate than the TD children when identifying the basic (happy, sad, scared, angry) as well as the inferred

  14. 77 FR 41956 - Relocation of and Spectrum Sharing by Federal Government Stations-Technical Panel and Dispute...

    Science.gov (United States)

    2012-07-17

    ... information. FOR FURTHER INFORMATION CONTACT: Milton Brown, NTIA, (202) 482-1816. SUPPLEMENTARY INFORMATION...., as amended by the Middle Class Tax Relief and Job Creation Act of 2012, Public Law 112-96, Title VI.../publications/tenyearplan_11152010.pdf . \\3\\ Commercial Spectrum Enhancement Act (CSEA), Public Law 108- 494...

  15. Evidence for Shared Deficits in Identifying Emotions from Faces and from Voices in Autism Spectrum Disorders and Specific Language Impairment

    Science.gov (United States)

    Taylor, Lauren J.; Maybery, Murray T.; Grayndler, Luke; Whitehouse, Andrew J. O.

    2015-01-01

    Background: While autism spectrum disorder (ASD) and specific language impairment (SLI) have traditionally been conceptualized as distinct disorders, recent findings indicate that the boundaries between these two conditions are not clear-cut. While considerable research has investigated overlap in the linguistic characteristics of ASD and SLI,…

  16. Integrative analysis of genetic data sets reveals a shared innate immune component in autism spectrum disorder and its co-morbidities.

    Science.gov (United States)

    Nazeen, Sumaiya; Palmer, Nathan P; Berger, Bonnie; Kohane, Isaac S

    2016-11-14

    Autism spectrum disorder (ASD) is a common neurodevelopmental disorder that tends to co-occur with other diseases, including asthma, inflammatory bowel disease, infections, cerebral palsy, dilated cardiomyopathy, muscular dystrophy, and schizophrenia. However, the molecular basis of this co-occurrence, and whether it is due to a shared component that influences both pathophysiology and environmental triggering of illness, has not been elucidated. To address this, we deploy a three-tiered transcriptomic meta-analysis that functions at the gene, pathway, and disease levels across ASD and its co-morbidities. Our analysis reveals a novel shared innate immune component between ASD and all but three of its co-morbidities that were examined. In particular, we find that the Toll-like receptor signaling and the chemokine signaling pathways, which are key pathways in the innate immune response, have the highest shared statistical significance. Moreover, the disease genes that overlap these two innate immunity pathways can be used to classify the cases of ASD and its co-morbidities vs. controls with at least 70 % accuracy. This finding suggests that a neuropsychiatric condition and the majority of its non-brain-related co-morbidities share a dysregulated signal that serves as not only a common genetic basis for the diseases but also as a link to environmental triggers. It also raises the possibility that treatment and/or prophylaxis used for disorders of innate immunity may be successfully used for ASD patients with immune-related phenotypes.

  17. Shared decision making and motivational interviewing: achieving patient-centered care across the spectrum of health care problems

    NARCIS (Netherlands)

    Elwyn, G.; Dehlendorf, C.; Epstein, R.M.; Marrin, K.; White, J.; Frosch, D.L.

    2014-01-01

    Patient-centered care requires different approaches depending on the clinical situation. Motivational interviewing and shared decision making provide practical and well-described methods to accomplish patient-centered care in the context of situations where medical evidence supports specific

  18. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J.; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  19. Power allocation and achievable data rate in spectrum-sharing channels under adaptive primary service outage constraints

    KAUST Repository

    Yang, Yuli

    2012-09-01

    In this paper, we focus on a cognitive radio network where adaptive modulation is adopted in primary links. The gap between the primary user (PU)'s received signal-to-noise ratio (SNR) and the lower SNR boundary of the modulation mode that is being used provides an interference-tolerable zone. Based on this gap, a secondary user (SU) has an increased opportunity to access the licensed spectrum and to determine the transmit power it should use to keep the PU's quality-of-service (QoS) unaffected. However, since the SU cannot obtain perfect information on the PU's received SNR, it has to choose an SNR point between the lower and upper boundaries of the PU's current modulation mode as if this point were the real SNR received by the PU. Considering this issue, in order to quantify the effect of the SU's transmissions on the PU's QoS, we define the PU's service outage probability and obtain its closed-form expressions by taking into account whether the peak transmit power constraint is imposed on the secondary's transmission or not. Subsequently, we derive the SU's achievable data rate in closed form for counterpart scenarios. Numerical results provided here quantify the relation between the PU's service outage probability and the SU's achievable data rate, which further demonstrate that the higher the peak transmit power a secondary transmitter can support, the better performance the cognitive radio network can achieve. © 2012 IEEE.
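    The service outage notion above can be illustrated numerically: outage occurs when secondary interference pushes the PU's SINR below the lower SNR boundary of its current modulation mode. A Monte Carlo sketch under assumed Rayleigh fading (illustrative gains and scales, not the paper's closed-form expressions):

```python
import numpy as np

def pu_outage_probability(snr_threshold_db=10.0, su_power=0.5,
                          n=200_000, seed=0):
    """Monte Carlo estimate of the primary user's service outage
    probability: the fraction of fading realizations in which the
    PU's SINR falls below its modulation mode's lower boundary.
    Channel gains are illustrative exponential (Rayleigh-power) draws."""
    rng = np.random.default_rng(seed)
    g_pu = rng.exponential(scale=10.0, size=n)   # PU desired-link gain
    g_int = rng.exponential(scale=1.0, size=n)   # SU-to-PU interference gain
    sinr = g_pu / (1.0 + su_power * g_int)
    threshold = 10 ** (snr_threshold_db / 10.0)
    return float(np.mean(sinr < threshold))

p_no_su = pu_outage_probability(su_power=0.0)
p_with_su = pu_outage_probability(su_power=1.0)
```

    Raising the SU transmit power raises the outage estimate, which is exactly the trade-off the paper quantifies against the SU's achievable data rate.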

  20. Large scale fusion of gray matter and resting-state functional MRI reveals common and shared biological markers across the psychosis spectrum in the B-SNIP cohort

    Directory of Open Access Journals (Sweden)

    Zheng eWang

    2015-12-01

    To investigate whether aberrant interactions between brain structure and function present similarly or differently across probands with psychotic illnesses (schizophrenia (SZ), schizoaffective disorder (SAD), and bipolar I disorder with psychosis (BP)) and whether these deficits are shared with their first-degree non-psychotic relatives, a total of 1199 subjects were assessed, including 220 SZ, 147 SAD, 180 psychotic BP, 150 first-degree relatives of SZ, 126 SAD relatives, 134 BP relatives and 242 healthy controls. All subjects underwent structural MRI (sMRI) and resting-state functional MRI (rs-fMRI) scanning. Joint independent component analysis (jICA) was used to fuse sMRI gray matter (GM) and rs-fMRI amplitude of low frequency fluctuations (ALFF) data to identify the relationship between the two modalities. Joint ICA revealed two significantly fused components. The association between functional brain alteration in a prefrontal-striatal-thalamic-cerebellar network and structural abnormalities in the default mode network (DMN) was found to be common across psychotic diagnoses and correlated with cognitive function, social function and Schizo-Bipolar Scale (SBS) scores. The fused alteration in the temporal lobe was unique to SZ and SAD. The above effects were not seen in any relative group (including those with cluster-A personality). Using a multivariate fused approach involving two widely used imaging markers, we demonstrate both shared and distinct biological traits across the psychosis spectrum. Further, our results suggest that the above traits are psychosis biomarkers rather than endophenotypes.

  1. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  2. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  3. Targeting the Binding Interface on a Shared Receptor Subunit of a Cytokine Family Enables the Inhibition of Multiple Member Cytokines with Selectable Target Spectrum*

    Science.gov (United States)

    Nata, Toshie; Basheer, Asjad; Cocchi, Fiorenza; van Besien, Richard; Massoud, Raya; Jacobson, Steven; Azimi, Nazli; Tagaya, Yutaka

    2015-01-01

    The common γ molecule (γc) is a shared signaling receptor subunit used by six γc-cytokines. These cytokines play crucial roles in the differentiation of the mature immune system and are involved in many human diseases. Moreover, recent studies suggest that multiple γc-cytokines are pathogenically involved in a single disease, thus making the shared γc molecule a logical target for therapeutic intervention. However, the current therapeutic strategies seem to lack options to treat such cases, partly because of the lack of appropriate neutralizing antibodies recognizing the γc and, more importantly, because of the inherent and practical limitations in the use of monoclonal antibodies. By targeting the binding interface of the γc and cytokines, we successfully designed peptides that not only inhibit multiple γc-cytokines but also have a selectable target spectrum. Notably, the lead peptide inhibited three γc-cytokines without affecting the other three or non-γc-cytokines. Biological and mutational analyses of our peptide provide new insights into our current understanding of the structural aspects of the binding of γc-cytokines to the γc molecule. Furthermore, we provide evidence that our peptide, when conjugated to polyethylene glycol to gain stability in vivo, efficiently blocks the action of one of the target cytokines in animal models. Collectively, our technology can be expanded to target various combinations of γc-cytokines and thereby will provide a novel strategy to the current anti-cytokine therapies against immune, inflammatory, and malignant diseases. PMID:26183780

  4. Scalable Reliable SD Erlang Design

    OpenAIRE

    Chechina, Natalia; Trinder, Phil; Ghaffari, Amir; Green, Rickard; Lundin, Kenneth; Virding, Robert

    2014-01-01

    This technical report presents the design of Scalable Distributed (SD) Erlang: a set of language-level changes that aims to enable Distributed Erlang to scale for server applications on commodity hardware with at most 100,000 cores. We cover a number of aspects, specifically anticipated architecture, anticipated failures, scalable data structures, and scalable computation. Two other components that guided us in the design of SD Erlang are design principles and typical Erlang applications. The...

  5. Ultraviolet photovoltaics: Share the spectrum

    Science.gov (United States)

    Milliron, Delia J.

    2017-08-01

    Electrically controlled windows require power to switch between transparent and tinted states. Now, an ultraviolet light-harvesting solar cell can power smart windows without compromising their control over heat and light.

  6. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water- splitting electrolyzer and systems that can include a plurality of membrane-free water- splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  7. A scalable synthesis of highly stable and water dispersible Ag 44(SR)30 nanoclusters

    KAUST Repository

    AbdulHalim, Lina G.

    2013-01-01

    We report the synthesis of atomically monodisperse thiol-protected silver nanoclusters [Ag44(SR)30]m (SR = 5-mercapto-2-nitrobenzoic acid), in which the product nanocluster is highly stable, in contrast to previous preparation methods. The method is one-pot, scalable, and produces nanoclusters that are stable in aqueous solution for at least 9 months at room temperature under ambient conditions, with very little degradation to their unique UV-Vis optical absorption spectrum. The composition, size, and monodispersity were determined by electrospray ionization mass spectrometry and analytical ultracentrifugation. The produced nanoclusters are likely to be in a superatom charge-state of m = 4-, due to the fact that their optical absorption spectrum shares most of the unique features of the intense and broadly absorbing nanoparticles identified as [Ag44(SR)30]4- by Harkness et al. (Nanoscale, 2012, 4, 4269). A protocol to transfer the nanoclusters to organic solvents is also described. Using the disperse nanoclusters in organic media, we fabricated solid-state films of [Ag44(SR)30]m that retained all the distinct features of the optical absorption spectrum of the nanoclusters in solution. The films were studied by X-ray diffraction and photoelectron spectroscopy in order to investigate their crystallinity, atomic composition and valence band structure. The stability, scalability, and the film fabrication method demonstrated in this work pave the way towards the crystallization of [Ag44(SR)30]m and its full structural determination by single crystal X-ray diffraction. Moreover, due to their unique and attractive optical properties with multiple optical transitions, we anticipate these clusters to find practical applications in light-harvesting, such as photovoltaics and photocatalysis, which have been hindered so far by the instability of previous generations of the cluster. © 2013 The Royal Society of Chemistry.

  8. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the
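    As a concrete anchor for the FSM task described above, here is its single-edge base case: counting the support of edge-label patterns across a graph database, where support is the number of graphs containing at least one match. Full miners grow such seeds edge by edge, which is where the match explosion (and the dissertation's minimal-match idea) matters. The data layout below is an illustrative assumption:

```python
from collections import Counter

def frequent_edges(graphs, min_support):
    """Single-edge frequent subgraph mining: an edge pattern is a
    sorted pair of node labels, counted at most once per graph,
    matching the standard FSM support definition."""
    support = Counter()
    for labels, edges in graphs:
        patterns = {tuple(sorted((labels[u], labels[v]))) for u, v in edges}
        support.update(patterns)
    return {p: s for p, s in support.items() if s >= min_support}

# Two small labeled graphs: a node-label dict plus an edge list
graphs = [
    ({0: "A", 1: "B", 2: "C"}, [(0, 1), (1, 2)]),
    ({0: "A", 1: "B"}, [(0, 1)]),
]
result = frequent_edges(graphs, min_support=2)  # {("A", "B"): 2}
```

    The per-graph set comprehension is the crucial detail: it deduplicates matches within a graph, so support counts graphs, not embeddings.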

  9. Introduction: Power-Sharing in Africa

    OpenAIRE

    Mehler, Andreas

    2009-01-01

    Introduction to the Featured Topic "Power-Sharing in Africa", Africa Spectrum, Vol. 44, No. 3 (2009).

  10. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    This article describes the field of scalable nanomanufacturing, its importance and need, and its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM). From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper discusses SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  11. Scalable Gravity Offload System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A scalable gravity offload device simulates reduced gravity for the testing of various surface system elements such as mobile robots, excavators, habitats, and...

  12. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
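    The grouping of "close-by" requests can be made scalable by hashing pickup points into spatial grid cells instead of comparing all pairs of requests. A minimal sketch of that idea; the cell size and cab capacity are illustrative parameters, not the paper's algorithms:

```python
from collections import defaultdict

def group_trips(requests, cell_size=0.01, capacity=4):
    """Grid-based trip grouping: bucket pickup points into cells in
    one pass, then fill shared cabs per cell.  Hashing by cell avoids
    the pairwise distance computations that limit scalability."""
    cells = defaultdict(list)
    for rid, (lat, lon) in requests:
        cells[(int(lat // cell_size), int(lon // cell_size))].append(rid)
    groups = []
    for riders in cells.values():
        for i in range(0, len(riders), capacity):
            groups.append(riders[i:i + capacity])
    return groups

# Requests 1 and 2 are close together; request 3 is far away
requests = [(1, (0.001, 0.002)), (2, (0.003, 0.004)), (3, (0.5, 0.5))]
groups = group_trips(requests)
```

    This is one pass over the requests with O(1) work per request, which is what lets grouping scale to city-sized workloads; the trade-off is that two nearby requests on opposite sides of a cell boundary are not grouped.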

  13. C-share: Optical circuits sharing for software-defined data-centers [arXiv

    DEFF Research Database (Denmark)

    Ben-Itzhak, Yaniv; Caba, Cosmin Marius; Schour, Liran

    2016-01-01

    are based on separated packet and circuit planes which do not have the ability to utilize an optical circuit with flows that do not arrive from or are not delivered to switches directly connected to the circuit’s end-points. Moreover, current SDN-based elephant flow rerouting methods require a forwarding rule...... for each flow, which raises scalability issues. In this paper, we present C-Share - a practical, scalable SDN-based circuit sharing solution for data center networks. C-Share inherently enables elephant flows to share optical circuits by exploiting a flat upper tier network topology. C-Share is based...... on a scalable and decoupled SDN-based elephant flow rerouting method comprising elephant flow detection, tagging and identification, which is utilized by using a prevalent network sampling method (e.g., sFlow). C-Share requires only a single OpenFlow rule for each optical circuit, and therefore significantly......

  14. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  15. Scalable Deployment of Advanced Building Energy Management Systems

    Science.gov (United States)

    2013-06-01

    rooms, classrooms, a quarterdeck with a two-story atrium and office spaces, and a large cafeteria/galley. Buildings 7113 and 7114 are functionally...similar (include barracks, classroom, cafeteria, etc.) and share a common central chilled water plant. 3.1.1 Building 7230 The drill hall (Building...scalability of the proposed approach, and expanded the capabilities developed for a single building to a building campus at Naval Station Great Lakes

  16. Scalability study of solid xenon

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces a large-scale optically transparent solid xenon.

  17. Network selection, Information filtering and Scalable computation

    Science.gov (United States)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a ``decomposition and combination'' strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods scale to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.

  18. Assured Resource Sharing in Ad-Hoc Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Gail-Joon [Arizona State Univ., Tempe, AZ (United States)

    2015-12-19

    The project seeks an innovative framework to enable users to access and selectively share resources in distributed environments, enhancing the scalability of information sharing. We have investigated secure sharing & assurance approaches for ad-hoc collaboration, focused on Grids, Clouds, and ad-hoc network environments.

  19. Quality scalable video data stream

    OpenAIRE

    Wiegand, T.; Kirchhoffer, H.; Schwarz, H

    2008-01-01

    An apparatus for generating a quality-scalable video data stream (36) is described which comprises means (42) for coding a video signal (18) using block-wise transformation to obtain transform blocks (146, 148) of transformation coefficient values for a picture (140) of the video signal, a predetermined scan order (154, 156, 164, 166) with possible scan positions being defined among the transformation coefficient values within the transform blocks so that in each transform block, for each pos...

  20. Autism Spectrum Disorder (ASD)

    Science.gov (United States)

    ... Autism spectrum disorder (ASD) is a developmental disability that ... interview about being fathers of sons who have autism. ...

  1. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform to be able to determine the worst-case execution time statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability we argue to use multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared memory multi-processor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes...

  2. On the scalability of uncoordinated multiple access for the Internet of Things

    KAUST Repository

    Chisci, Giovanni

    2017-11-16

    The Internet of things (IoT) will entail a massive number of wireless connections with sporadic traffic patterns. To support the IoT traffic, several technologies are evolving to support low power wide area (LPWA) wireless communications. However, LPWA networks rely on variations of uncoordinated spectrum access, either for data transmissions or scheduling requests, thus imposing a scalability problem on the IoT. This paper presents a novel spatiotemporal model to study the scalability of the ALOHA medium access. In particular, the developed mathematical model relies on stochastic geometry and queueing theory to account for spatial and temporal attributes of the IoT. To this end, the scalability of ALOHA is characterized by the percentile of IoT devices that can be served while keeping their queues stable. The results highlight the scalability problem of ALOHA and quantify the extent to which ALOHA can scale in terms of number of devices, traffic requirements, and transmission rate.
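
    The queue-stability criterion the record describes can be illustrated with a deliberately simplified model: plain slotted ALOHA on a collision channel, not the paper's stochastic-geometry/SINR formulation. The function names and parameters below are mine; a device's queue stays stable only while its per-slot success probability exceeds its packet arrival rate.

```python
# Toy slotted-ALOHA stability sketch (collision channel, not the paper's
# spatiotemporal model). A tagged device transmitting with probability p
# succeeds only if none of the other n-1 contending devices transmit.
def success_prob(n: int, p: float) -> float:
    """Per-slot success probability for one of n contending devices."""
    return p * (1.0 - p) ** (n - 1)

def max_stable_devices(arrival_rate: float, p: float) -> int:
    """Largest n for which each queue remains stable, i.e. the per-slot
    service probability still exceeds the per-slot arrival rate."""
    n = 1
    while success_prob(n + 1, p) > arrival_rate:
        n += 1
    return n
```

    With p = 0.1 and one packet every 100 slots on average, this toy model supports only a few tens of devices, mirroring the scalability problem the record highlights.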

  3. Flexible scalable photonic manufacturing method

    Science.gov (United States)

    Skunes, Timothy A.; Case, Steven K.

    2003-06-01

    A process for flexible, scalable photonic manufacturing is described. Optical components are actively pre-aligned and secured to precision mounts. In a subsequent operation, the mounted optical components are passively placed onto a substrate known as an Optical Circuit Board (OCB). The passive placement may be either manual for low volume applications or with a pick-and-place robot for high volume applications. Mating registration features on the component mounts and the OCB facilitate accurate optical alignment. New photonic circuits may be created by changing the layout of the OCB. Predicted yield data from Monte Carlo tolerance simulations for two fiber optic photonic circuits is presented.

  4. Energy-Efficient Spectrum Sensing for Cognitive Radio Networks

    NARCIS (Netherlands)

    Maleki, S.

    2013-01-01

    Dynamic spectrum access employing cognitive radios has been proposed, in order to opportunistically use underutilized spectrum portions of a heavily licensed electromagnetic spectrum. Cognitive radios opportunistically share the spectrum, while avoiding any harmful interference to the primary

  5. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    Data.gov (United States)

    National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...
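
    The record proposes a more scalable variant of classical Matching Pursuit Decomposition; the variant itself is not described, but the classical MPD baseline it starts from can be sketched as follows (function name is mine; assumes unit-norm dictionary atoms as columns):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Classical Matching Pursuit: greedily decompose `signal` onto the
    unit-norm atoms (columns) of `dictionary`, one atom per iteration."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual          # correlation with every atom
        k = np.argmax(np.abs(corr))             # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]  # remove its contribution
    return coeffs, residual
```

    The per-iteration cost is dominated by the full correlation step, which is exactly what scalable variants try to avoid.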

  6. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within the video sequences allows for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that will allow for lowering the complexity increase the robustness of the video scalability are reviewed. Human visual system models are often used in establishing perceptual metrics that would evaluate quality of video. Combining of perceptual and compressive sensing approach outlined from recent investigations. The performance and the complexity of different scalability techniques are evaluated. Application of perceptual models to evaluation of the quality of compressive sensing scalability is considered in the near perceptually lossless case and to the appropriate coding schemes is reviewed.

  7. Scalable rendering on PC clusters

    Energy Technology Data Exchange (ETDEWEB)

    WYLIE,BRIAN N.; LEWIS,VASILY; SHIRLEY,DAVID NOYES; PAVLAKOS,CONSTANTINE

    2000-04-25

    This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), these libraries can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL® with lighting, Gouraud shading, and individually specified triangles (not t-stripped).
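
    As a hedged illustration of the sort-last technique the record mentions, the sketch below (function name `z_composite` is mine) merges two nodes' partial renderings with a per-pixel depth test. Real systems composite across many nodes, typically via binary-swap, and on graphics hardware rather than in Python.

```python
def z_composite(img_a, img_b):
    """One sort-last compositing step: merge two partial renderings pixel
    by pixel, keeping whichever fragment is closer to the camera (smaller
    depth). Each image is a list of (depth, colour) pixels."""
    return [a if a[0] <= b[0] else b for a, b in zip(img_a, img_b)]
```

    Repeating this pairwise merge over all rendering nodes yields the final image, which is why sort-last scales well with polygon count but pays a fixed per-pixel compositing cost.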

  8. Sharing Economy

    DEFF Research Database (Denmark)

    Marton, Attila; Constantiou, Ioanna; Thoma, Antonela

    Despite the hype the notion of the sharing economy is surrounded by, our understanding of sharing is surprisingly undertheorized. In this paper, we make a first step towards remedying this state of affairs by analysing sharing as a social practice. Based on a multiple-case study, we analyse...

  9. Sharing City

    DEFF Research Database (Denmark)

    This magazine offers an insight into the growing commercial innovation, civic movements, and political narratives surrounding sharing economy services, solutions and organisational types. It presents a cross-section of the manifold sharing economy services and solutions that can be found in Denmark....... Solutions of sharing that seek to improve our cities and local communities in both urban and rural environments. 24 sharing economy organisations and businesses addressing urban and rural issues are portrayed, along with seven Danish municipalities that have explored the potentials of sharing economy....... Moreover, 15 thought-leading experts - professionals and academics - have been invited to give their perspective on sharing economy for cities. This magazine touches upon aspects of the sharing economy such as mobility, communities, sustainability, business development, and urban-rural relations....

  10. Highly Scalable Multiplication for Distributed Sparse Multivariate Polynomials on Many-core Systems

    OpenAIRE

    Gastineau, Mickael; Laskar, Jacques

    2013-01-01

    We present a highly scalable algorithm for multiplying sparse multivariate polynomials represented in a distributed format. This algorithm targets not only shared memory multicore computers, but also computer clusters or specialized hardware attached to a host computer, such as graphics processing units or many-core coprocessors. The scalability on a large number of cores is ensured by the lack of synchronizations, locks and false sharing during the main parallel step.
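
    The "distributed format" for sparse multivariate polynomials can be sketched as a map from exponent tuples to coefficients. The sequential multiplication below (function name is mine) shows only the representation and the core product step; the record's contribution is the lock-free parallelization of this loop, which is not reproduced here.

```python
from collections import defaultdict

def poly_mul(p, q):
    """Multiply two sparse multivariate polynomials stored in distributed
    form as {exponent-tuple: coefficient} dicts, e.g. {(1, 0): 2} == 2*x."""
    out = defaultdict(int)
    for ea, ca in p.items():
        for eb, cb in q.items():
            # exponents add component-wise, coefficients multiply
            out[tuple(a + b for a, b in zip(ea, eb))] += ca * cb
    return {e: c for e, c in out.items() if c != 0}
```

    For example, squaring x + y yields x^2 + 2xy + y^2 without ever materialising zero terms.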

  11. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.

  12. Sharing code.

    Science.gov (United States)

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  13. MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Directory of Open Access Journals (Sweden)

    Prem Prakash Jayaraman

    2014-05-01

    Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data, hence reducing the effort in developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability towards real-world applications. The results of evaluation and lessons learned are presented and discussed in this paper.

  14. Sharing City

    DEFF Research Database (Denmark)

    This magazine offers an insight into the growing commercial innovation, civic movements, and political narratives surrounding sharing economy services, solutions and organisational types. It presents a cross-section of the manifold sharing economy services and solutions that can be found in Denmark....... Moreover, 15 thought-leading experts - professionals and academics - have been invited to give their perspective on sharing economy for cities. This magazine touches upon aspects of the sharing economy such as mobility, communities, sustainability, business development, and urban-rural relations....

  15. Quality Scalability Aware Watermarking for Visual Content.

    Science.gov (United States)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios without affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by proposing a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.

  16. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    DEFF Research Database (Denmark)

    Passas, Stavros; Karlsson, Sven

    2011-01-01

    Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that the existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating system that aims to improve the state of the art in scalability and reliability. We achieve scalability through limiting data sharing when possible, and through extensive use of lock-free data structures. Reliability is addressed with a careful re-design of the programming interface and structure of the operating system.

  17. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbit......, making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie....
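
    The core of the Grassmann average can be sketched as a fixed-point iteration that averages the spans (sign-aligned directions) of the data points instead of their covariance. The simplified, unweighted version below (function name is mine) omits the paper's trimming (TGA) and scaling machinery:

```python
import numpy as np

def grassmann_average(X, n_iter=20, seed=0):
    """Leading 1D Grassmann average of the rows of X: repeatedly sign-align
    every point with the current direction q and re-average."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        signs = np.sign(X @ q)          # align each point's span with q
        signs[signs == 0] = 1.0
        q_new = (signs[:, None] * X).mean(axis=0)
        q = q_new / np.linalg.norm(q_new)
    return q
```

    Because each step only needs signs and a mean, the iteration touches the data once per pass, which is what makes the approach attractive for very large datasets.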

  18. File sharing

    NARCIS (Netherlands)

    van Eijk, N.

    2011-01-01

    ‘File sharing’ has become generally accepted on the Internet. Users share files for downloading music, films, games, software etc. In this note, we have a closer look at the definition of file sharing, the legal and policy-based context as well as enforcement issues. The economic and cultural

  19. Overlay Spectrum Sharing using Improper Gaussian Signaling

    KAUST Repository

    Amin, Osama

    2016-11-30

    The improper Gaussian signaling (IGS) scheme has recently been shown to provide performance improvements in interference-limited networks as opposed to the conventional proper Gaussian signaling (PGS) scheme. In this paper, we implement the IGS scheme in an overlay cognitive radio system, where the secondary transmitter broadcasts a mixture of two different signals. The first signal is selected from the PGS scheme to match the primary message transmission. On the other hand, the second signal is chosen to be from the IGS scheme in order to reduce the interference effect on the primary receiver. We then optimally design the overlay cognitive radio to maximize the secondary link achievable rate while satisfying the primary network quality of service requirements. In particular, we consider full and partial channel knowledge scenarios and derive the feasibility conditions of operating the overlay cognitive radio systems. Moreover, we derive the superiority conditions of the IGS schemes over the PGS schemes, supported with closed-form expressions for the corresponding power distribution and circularity coefficient parameters. Simulation results are provided to support our theoretical derivations.

  20. Shared leadership

    DEFF Research Database (Denmark)

    Ulhøi, John Parm; Müller, Sabine

    2012-01-01

    The aim of this paper is twofold. First, this paper will comprehensively review the conceptual and empirical literature to identify the critical underlying mechanisms which enable shared or collective leadership. Second, this article identifies the antecedents and outcomes of shared leadership...... according to the literature review to develop a re-conceptualised and synthesized framework for managing the organizational issues associated with shared leadership on various organizational levels. The paper rectifies this by identifying the critical factors and mechanisms which enable shared leadership...... and its antecedents and outcomes, and to develop a re-conceptualized and synthesized framework of shared leadership. The paper closes with a brief discussion of avenues for future research and implications for managers....

  1. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations using inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research, ranging from efficient join algorithms such as sort-merge join, to the use of efficient indices such as the (Formula presented.)-tree, (Formula presented.)-tree and Bitmap. However, inequality joins have received little attention and queries containing such joins are notoriously slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins, which is then used to optimize multiple-predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
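
    The sorted-array idea behind fast inequality joins can be sketched in a few lines (function name and tuple layout are mine, not the paper's API): sort one side once, then answer each `left.value < right.value` probe with a binary search instead of a full scan.

```python
from bisect import bisect_right

def ineq_join(left, right):
    """Join two lists of (id, value) pairs on left.value < right.value,
    using a sorted array plus binary search instead of a nested loop."""
    right_sorted = sorted(right, key=lambda t: t[1])
    values = [v for _, v in right_sorted]
    result = []
    for lid, lv in left:
        # every right tuple strictly greater than lv matches
        for rid, _ in right_sorted[bisect_right(values, lv):]:
            result.append((lid, rid))
    return result
```

    Locating the matching range costs O(log n) per probe; the remaining cost is proportional to the output size, which is unavoidable when the join result itself is large.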

  2. Scalable encryption using alpha rooting

    Science.gov (United States)

    Wharton, Eric J.; Panetta, Karen A.; Agaian, Sos S.

    2008-04-01

    Full and partial encryption methods are important for subscription based content providers, such as internet and cable TV pay channels. Providers need to be able to protect their products while at the same time being able to provide demonstrations to attract new customers without giving away the full value of the content. If an algorithm were introduced which could provide any level of full or partial encryption in a fast and cost effective manner, the applications to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present use of the measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When parameters are selected using the measure, the output image achieves a balance between protecting the important data in the image while still containing a good overall representation of the image. We will show results for this encryption method on a number of images, using histograms to evaluate the effectiveness of the encryption.
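
    Alpha rooting itself is simple to state: in a transform domain, raise each coefficient's magnitude to a power alpha while keeping its phase. The sketch below (function name is mine) applies it in the 2D Fourier domain; the paper's parameter selection via the Logarithmic AME measure is not reproduced.

```python
import numpy as np

def alpha_root(image, alpha):
    """Alpha rooting in the 2D Fourier domain: raise every coefficient's
    magnitude to the power `alpha` while preserving its phase."""
    F = np.fft.fft2(image.astype(float))
    F_mod = np.abs(F) ** alpha * np.exp(1j * np.angle(F))
    return np.fft.ifft2(F_mod).real
```

    alpha = 1 leaves the image unchanged, and applying alpha followed by 1/alpha recovers it, which is what makes the scheme usable for scalable (reversible) encryption.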

  3. Finite Element Modeling on Scalable Parallel Computers

    Science.gov (United States)

    Cwik, T.; Zuffada, C.; Jamnejad, V.; Katz, D.

    1995-01-01

    A coupled finite element-integral equation was developed to model fields scattered from inhomogenous, three-dimensional objects of arbitrary shape. This paper outlines how to implement the software on a scalable parallel processor.

  4. Visual analytics in scalable visualization environments

    OpenAIRE

    Yamaoka, So

    2011-01-01

    Visual analytics is an interdisciplinary field that facilitates the analysis of the large volume of data through interactive visual interface. This dissertation focuses on the development of visual analytics techniques in scalable visualization environments. These scalable visualization environments offer a high-resolution, integrated virtual space, as well as a wide-open physical space that affords collaborative user interaction. At the same time, the sheer scale of these environments poses ...

  5. Fully scalable video coding in multicast applications

    Science.gov (United States)

    Lerouge, Sam; De Sutter, Robbie; Lambert, Peter; Van de Walle, Rik

    2004-01-01

    The increasing diversity of the characteristics of the terminals and networks that are used to access multimedia content through the internet introduces new challenges for the distribution of multimedia data. Scalable video coding will be one of the elementary solutions in this domain. This type of coding makes it possible to adapt an encoded video sequence to the limitations of the network or the receiving device by means of very basic operations. Algorithms for creating fully scalable video streams, in which multiple types of scalability are offered at the same time, are becoming mature. On the other hand, research on applications that use such bitstreams is only recently emerging. In this paper, we introduce a mathematical model for describing such bitstreams. In addition, we show how we can model applications that use scalable bitstreams by means of definitions that are built on top of this model. In particular, we chose to describe a multicast protocol that is targeted at scalable bitstreams. This way, we will demonstrate that it is possible to define an abstract model for scalable bitstreams, that can be used as a tool for reasoning about such bitstreams and related applications.

  6. Autism Spectrum Disorder (ASD): Related Topics

    Science.gov (United States)

    ... Q: Do vaccines cause autism spectrum disorder (ASD)? A: Many studies that have ... whether there is a relationship between vaccines and autism spectrum disorder (ASD). To date, the studies continue ...

  7. Knowledge Sharing

    DEFF Research Database (Denmark)

    Holdt Christensen, Peter

    The concept of knowledge management has, indeed, become a buzzword that every single organization is expected to practice and live by. Knowledge management is about managing the organization's knowledge for the common good of the organization - but practicing knowledge management is not as simple...... as that. This article focuses on knowledge sharing as the process seeking to reduce the resources spent on reinventing the wheel. The article introduces the concept of time sensitiveness; i.e. that knowledge is either urgently needed, or not that urgently needed. Furthermore, knowledge sharing...... is considered as either a push or pull system. Four strategies for sharing knowledge - help, post-it, manuals and meetings, and advice - are introduced. Each strategy requires different channels for sharing knowledge. An empirical analysis in a production facility highlights how the strategies can be practiced....

  8. Scalable, ultra-resistant structural colors based on network metamaterials

    CERN Document Server

    Galinski, Henning; Dong, Hao; Gongora, Juan S Totero; Favaro, Grégory; Döbeli, Max; Spolenak, Ralph; Fratalocchi, Andrea; Capasso, Federico

    2016-01-01

    Structural colours have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realise robust colours with a scalable fabrication technique is still lacking, hampering the realisation of practical applications with this platform. Here we develop a new approach based on large scale network metamaterials, which combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how sub-wavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero (ENZ) regions generated in the metallic network, manifesting the formation of highly saturated structural colours that cover a wide portion of the spectrum. Ellipsometry measurements report the efficient observation of these colours even at angles of 70 degrees. The network-like architecture of these nanoma...

  9. Sharing Death

    DEFF Research Database (Denmark)

    Sandvik, Kjetil; Refslund Christensen, Dorthe

    (s) displaying photographs, poetry, stories and expressions of grief and longing. They take part in expressions of empathy for others by lighting candles for other people's loved ones, they share their personal experiences in different chatrooms and the site offers services such as a calendar displaying anniversaries...... allowing users to create unique and editable profiles, add personal content and share it with other people in your network(s) AND systems for publishing your own life: becoming visible to others, being connected and being observed. More and more sites turn up on the Internet that facilitate the process...

  10. Wanted: Scalable Tracers for Diffusion Measurements

    Science.gov (United States)

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional (3D) diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make, say, “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586
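
    One of the correlations the record alludes to is the Stokes-Einstein relation: for a spherical tracer, D = kT / (6·pi·eta·r), so for scalable tracers of constant density D scales as 1/r, i.e. as M^(-1/3). A minimal sketch (function name and default values are mine; defaults assume water at 298 K):

```python
import math

def stokes_einstein_D(radius_m, T=298.0, eta=8.9e-4):
    """Diffusion coefficient (m^2/s) of a sphere of radius `radius_m` (m)
    in a fluid of viscosity `eta` (Pa*s) at temperature T (K)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (6.0 * math.pi * eta * radius_m)
```

    For a 1 nm tracer in water this gives a diffusion coefficient on the order of 2-3 x 10^-10 m^2/s, and doubling the radius exactly halves D, the scaling behaviour that a series of scalable tracers is meant to isolate.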

  11. Scalable L-infinite coding of meshes.

    Science.gov (United States)

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
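
    The L-infinite versus L-2 distinction at the heart of the record can be shown with the simplest possible example (function names are mine; this is plain uniform quantization, not the paper's wavelet codec): quantizing every coordinate with step q guarantees a maximum error of q/2 on each vertex, a bound that an MSE target cannot provide.

```python
def quantize(vertices, step):
    """Uniform scalar quantization of vertex coordinates; every coordinate's
    reconstruction error is bounded by step/2 (an L-infinite guarantee)."""
    return [[round(c / step) * step for c in v] for v in vertices]

def linf_error(a, b):
    """Maximum absolute coordinate difference between two vertex lists."""
    return max(abs(x - y) for va, vb in zip(a, b) for x, y in zip(va, vb))
```

    The wavelet-based codec in the record refines this idea so that every truncation point of the code-stream comes with its own predictable L-infinite bound.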

  12. Shared Language

    Science.gov (United States)

    Bochicchio, Daniel; Cole, Shelbi; Ostien, Deborah; Rodriguez, Vanessa; Staples, Megan; Susla, Patricia; Truxaw, Mary

    2009-01-01

    This article describes a process by which seven educators collaboratively engaged in developing a shared language to describe the mathematics pedagogy used to guide whole-class discussions as well as the products of their work. Suggestions are made for how others might engage in similarly productive professional development activities. (Contains 3…

  13. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey

    2008-02-29

Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  14. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering...... method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real...... and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well....

  15. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Quaglia, Davide

    2017-01-01

    The future smart power grid will consist of an unlimited number of smart devices that communicate with control units to maintain the grid’s sustainability, efficiency, and balancing. In order to build and verify such controllers over a large grid, a scalable simulation environment is needed....... This paper presents an open source smart grid simulator (SGSim). The simulator is based on open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...... and appliances. By using SGSim, different smart grid control strategies and protocols can be tested, validated and evaluated in a scalable environment....

  16. Autism Spectrum Disorder - Multiple Languages

    Science.gov (United States)

MedlinePlus health topic page linking to autism spectrum disorder information in multiple languages.

  17. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    a long time to replicate, business model scalability can be cornered into four dimensions. In many corporate restructuring exercises and Mergers and Acquisitions there is a tendency to look for synergies in the form of cost reductions, lean workflows and market segments. However, this state of mind......This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale...

  18. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard to copy value propositions or value propositions that take......This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale...... will seldom lead to business model scalability capable of competing with digital disruption(s)....

  19. Spectrum database as a service (SDaaS) for broadband innovation and efficient spectrum utilization

    CSIR Research Space (South Africa)

    Mfupe, L

    2013-10-01

Full Text Available Broadband innovations for future wireless networks involve the rapid sharing of radio frequency (RF) spectrum resources by means of dynamic spectrum access (DSA) techniques to satisfy an immediate portfolio of heterogeneous demands for wireless IP services...

  20. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of different scalability-type results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this end, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction by spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment that results in minimum visual distortion according to this objective function given the content type of temporal segments. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those that are scaled using a single scalability option over the whole sequence.
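The selection step described above reduces to minimizing a weighted artifact score per temporal segment. A minimal sketch follows; the weights and artifact measurements here are entirely hypothetical (the paper trains its weights through subjective tests, which are not reproduced).

```python
# Hypothetical artifact scores per scaling option and hypothetical weights;
# the actual values in the paper come from a subjective training procedure.
def segment_distortion(artifacts, weights):
    """Weighted sum of the four artifact measures (flatness, blockiness,
    blurriness, temporal jerkiness) for one candidate scaling option."""
    return sum(weights[k] * artifacts[k] for k in weights)

def best_scaling_type(options, weights):
    """Choose the scaling option with minimum predicted visual distortion."""
    return min(options, key=lambda name: segment_distortion(options[name], weights))

weights = {"flatness": 0.2, "blockiness": 0.3, "blurriness": 0.3, "jerkiness": 0.2}
options = {
    "spatial":  {"flatness": 0.1, "blockiness": 0.2, "blurriness": 0.6, "jerkiness": 0.1},
    "temporal": {"flatness": 0.1, "blockiness": 0.1, "blurriness": 0.1, "jerkiness": 0.8},
    "SNR":      {"flatness": 0.3, "blockiness": 0.5, "blurriness": 0.2, "jerkiness": 0.1},
}
print(best_scaling_type(options, weights))  # temporal
```

With content-dependent weights (e.g., a high jerkiness weight for fast-motion shots), a different option would win for the same artifact scores, which is the point of making the selection content-aware.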

  1. Scalable Detection and Isolation of Phishing

    NARCIS (Netherlands)

    Moreira Moura, Giovane; Pras, Aiko

    2009-01-01

    This paper presents a proposal for scalable detection and isolation of phishing. The main ideas are to move the protection from end users towards the network provider and to employ the novel bad neighborhood concept, in order to detect and isolate both phishing e-mail senders and phishing web

  2. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    . This paper presents an open source smart grid simulator (SGSim). The simulator is based on open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  3. Realization of a scalable airborne radar

    NARCIS (Netherlands)

    Halsema, D. van; Jongh, R.V. de; Es, J. van; Otten, M.P.G.; Vermeulen, B.C.B.; Liempt, L.J. van

    2008-01-01

    Modern airborne ground surveillance radar systems are increasingly based on Active Electronically Scanned Array (AESA) antennas. Efficient use of array technology and the need for radar solutions for various airborne platforms, manned and unmanned, leads to the design of scalable radar systems. The

  4. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  5. Subjective comparison of temporal and quality scalability

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; You, Junyong

    2011-01-01

and quality scalability. The practical experiments with low resolution video sequences show that in general, distortion is a more crucial factor for the perceived subjective quality than frame rate. However, the results also depend on the content. Moreover, we discuss the role of other different influence...

  6. Fabrication of Scalable Indoor Light Energy Harvester and Study for Agricultural IoT Applications

    Science.gov (United States)

    Watanabe, M.; Nakamura, A.; Kunii, A.; Kusano, K.; Futagawa, M.

    2015-12-01

A scalable indoor light energy harvester was fabricated by microelectromechanical system (MEMS) and printing hybrid technology and evaluated for agricultural IoT applications under different environmental input power density conditions, such as outdoor farming under the sun, greenhouse farming under scattered lighting, and a plant factory under LEDs. We fabricated and evaluated a dye-sensitized-type solar cell (DSC) as a low cost and “scalable” optical harvester device. We developed a transparent conductive oxide (TCO)-less process with a honeycomb metal mesh substrate fabricated by MEMS technology. In terms of the electrical and optical properties, we achieved scalable harvester output power by cell area sizing. Second, we evaluated the dependence of the input power scalable characteristics on the input light intensity, spectrum distribution, and light inlet direction angle, because harvested environmental input power is unstable. The TiO2 fabrication relied on nanoimprint technology, which was designed for optical optimization and fabrication, and we confirmed that the harvesters are robust to a variety of environments. Finally, we studied optical energy harvesting applications for agricultural IoT systems. These scalable indoor light harvesters could be used in many applications and situations in smart agriculture.

  7. Participation and Sharing Economy: The Spanish Case of #Compartirmola

    OpenAIRE

    Martínez-Polo, Josep; Martínez Sánchez,Jesús; Noguera Vivo, José Manuel

    2016-01-01

Sharing economy, collaborative consumption or Participation Economy is changing the economy as we know it. The tipping point at this moment is the fact that the Internet is giving us the chance to build this new economy under scalable and massive models. This paper is focused on a case study about a Spanish online movement called #compartirmola (“sharing is cool”), which is analyzed to identify forty collaborative initiatives from Spanish companies. The mixed quantitative and qualitative...

  8. Zellweger Spectrum

    Science.gov (United States)

The Zellweger spectrum comprises three disorders: Zellweger syndrome (ZS), neonatal adrenoleukodystrophy (NALD), and infantile Refsum disease (IRD). ...

  9. GenePING: secure, scalable management of personal genomic data

    Directory of Open Access Journals (Sweden)

    Kohane Isaac S

    2006-04-01

Full Text Available Abstract Background Patient genomic data are rapidly becoming part of clinical decision making. Within a few years, full genome expression profiling and genotyping will be affordable enough to perform on every individual. The management of such sizeable, yet fine-grained, data in compliance with privacy laws and best practices presents significant security and scalability challenges. Results We present the design and implementation of GenePING, an extension to the PING personal health record system that supports secure storage of large, genome-sized datasets, as well as efficient sharing and retrieval of individual datapoints (e.g. SNPs, rare mutations, gene expression levels). Even with full access to the raw GenePING storage, an attacker cannot discover any stored genomic datapoint on any single patient. Given a large-enough number of patient records, an attacker cannot discover which data corresponds to which patient, or even the size of a given patient's record. The computational overhead of GenePING's security features is a small constant, making the system usable, even in emergency care, on today's hardware. Conclusion GenePING is the first personal health record management system to support the efficient and secure storage and sharing of large genomic datasets. GenePING is available online at http://ping.chip.org/genepinghtml, licensed under the LGPL.

  10. Scalability Issues for Remote Sensing Infrastructure: A Case Study.

    Science.gov (United States)

    Liu, Yang; Picard, Sean; Williamson, Carey

    2017-04-29

    For the past decade, a team of University of Calgary researchers has operated a large "sensor Web" to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system's memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.

  11. Scalability Issues for Remote Sensing Infrastructure: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2017-04-01

Full Text Available For the past decade, a team of University of Calgary researchers has operated a large “sensor Web” to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system’s memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.

  12. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, spacefilling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.

  13. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    supports the PostgreSQL dialect of SQL. The prototype implementation is a compiler that translates CVL into SQL and stored procedures. (c) TileHeat is a framework and basic algorithm for partial materialization of hot tile sets for scalable map distribution. The framework predicts future map workloads......, there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...... goal. The results for Tileheat show that the prediction method offers a substantial improvement over the current method used by the Danish Geodata Agency. Thus, a large amount of computations can potentially be saved by this public institution, who is responsible for the distribution of government...

  14. A Scalability Model for ECS's Data Server

    Science.gov (United States)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents in four chapters a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes if the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The approaches in the report include a summary of the architecture of ECS's Data server as well as a high level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the data server and the methodology used to solve it.
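The report's actual Data Server model is not reproduced here, but scalability analyses of this kind ("will the planned number of processors keep response time bounded as the workload grows?") are commonly built from multi-server queueing formulas. The following sketch uses a standard M/M/m (Erlang-C) calculation with hypothetical rates, purely to illustrate the style of analysis.

```python
import math

def mmm_metrics(arrival_rate, service_rate, servers):
    """Utilization and mean response time for an M/M/m queue via the
    Erlang-C formula -- a generic building block for capacity planning,
    not the specific ECS Data Server model from the report."""
    rho = arrival_rate / (servers * service_rate)
    assert rho < 1, "system is unstable at this load"
    a = arrival_rate / service_rate  # offered load in Erlangs
    # Erlang C: probability that an arriving request must queue
    summ = sum(a**k / math.factorial(k) for k in range(servers))
    last = a**servers / (math.factorial(servers) * (1 - rho))
    p_wait = last / (summ + last)
    wait = p_wait / (servers * service_rate - arrival_rate)
    return rho, wait + 1 / service_rate  # utilization, mean response time

# Doubling servers at the same per-server utilization shrinks queueing
# delay -- the economy of scale a scalability model tries to quantify.
rho1, r1 = mmm_metrics(arrival_rate=8.0, service_rate=1.0, servers=10)
rho2, r2 = mmm_metrics(arrival_rate=16.0, service_rate=1.0, servers=20)
```

Here both configurations run at 80% utilization, yet the larger one has the lower mean response time, which is why such models must be solved rather than eyeballed from utilization alone.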

  15. Stencil Lithography for Scalable Micro- and Nanomanufacturing

    Directory of Open Access Journals (Sweden)

    Ke Du

    2017-04-01

    Full Text Available In this paper, we review the current development of stencil lithography for scalable micro- and nanomanufacturing as a resistless and reusable patterning technique. We first introduce the motivation and advantages of stencil lithography for large-area micro- and nanopatterning. Then we review the progress of using rigid membranes such as SiNx and Si as stencil masks as well as stacking layers. We also review the current use of flexible membranes including a compliant SiNx membrane with springs, polyimide film, polydimethylsiloxane (PDMS layer, and photoresist-based membranes as stencil lithography masks to address problems such as blurring and non-planar surface patterning. Moreover, we discuss the dynamic stencil lithography technique, which significantly improves the patterning throughput and speed by moving the stencil over the target substrate during deposition. Lastly, we discuss the future advancement of stencil lithography for a resistless, reusable, scalable, and programmable nanolithography method.

  16. SPRNG Scalable Parallel Random Number Generator LIbrary

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-16

    This revision corrects some errors in SPRNG 1. Users of newer SPRNG versions can obtain the corrected files and build their version with it. This version also improves the scalability of some of the application-based tests in the SPRNG test suite. It also includes an interface to a parallel Mersenne Twister, so that if users install the Mersenne Twister, then they can test this generator with the SPRNG test suite and also use some SPRNG features with that generator.
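SPRNG itself is a C/C++ library, but the core idea it provides, giving each parallel task its own independent, reproducible random stream, can be illustrated in a few lines with NumPy's seed-spawning mechanism (this is an analogy, not SPRNG's API).

```python
import numpy as np

# One root seed deterministically spawns independent child sequences,
# one per parallel task -- the same contract a scalable parallel RNG
# library such as SPRNG offers to Monte Carlo codes.
root = np.random.SeedSequence(42)
children = root.spawn(4)                      # one child per task
streams = [np.random.default_rng(s) for s in children]

draws = [rng.random(3) for rng in streams]    # each task draws privately
```

Because spawning is deterministic, rerunning the program (or re-spawning on another node) reproduces exactly the same per-task streams without any communication between tasks.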

  17. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

    Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; Renesse, Robbert,

    2015-01-01

Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  18. Stencil Lithography for Scalable Micro- and Nanomanufacturing

    OpenAIRE

    Ke Du; Junjun Ding; Yuyang Liu; Ishan Wathuthanthri; Chang-Hwan Choi

    2017-01-01

    In this paper, we review the current development of stencil lithography for scalable micro- and nanomanufacturing as a resistless and reusable patterning technique. We first introduce the motivation and advantages of stencil lithography for large-area micro- and nanopatterning. Then we review the progress of using rigid membranes such as SiNx and Si as stencil masks as well as stacking layers. We also review the current use of flexible membranes including a compliant SiNx membrane with spring...

  19. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  20. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
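For readers unfamiliar with the base algorithm being parallelized, here is a minimal sequential 2D RRT sketch (not the paper's parallel algorithms). Note the linear-scan nearest-neighbor search: this is exactly the O(n) per-iteration cost whose global version limits scalability and which the paper's subdivision schemes localize. All parameters below are illustrative.

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=1.0, iters=2000, goal_tol=1.0, seed=1):
    """Minimal single-tree RRT in 2D: sample a random point, extend the
    nearest tree node toward it by `step`, stop once the goal is within
    `goal_tol`. Returns the path from start to goal, or None."""
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        q = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        # linear-scan nearest neighbor: the scalability bottleneck
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], q))
        near = nodes[i]
        d = math.dist(near, q)
        if d == 0:
            continue
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        if not is_free(new):
            continue  # collision: discard this extension
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1  # walk back to the root
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Obstacle-free 10x10 workspace (is_free always True) as a toy example.
path = rrt((0.0, 0.0), (9.0, 9.0), lambda p: True, ((0, 10), (0, 10)))
```

The paper's second algorithm effectively runs many such loops in parallel, one per radial region, so each nearest-neighbor scan touches only local nodes before the subtrees are connected.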

  1. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. Also, a scalable streaming service makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded as a result of the absence of reference frames and layers. Therefore, the complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. For providing a high-quality scalable streaming service, we choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
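The "indirect loss" effect described above (a lost reference layer invalidating layers that were received intact) can be made concrete with a small dependency check. The layer names and dependency chain below are hypothetical; this is not the paper's numerical model.

```python
def decodable(received, deps):
    """A layer is decodable only if it was received AND every layer it
    references is itself decodable -- so a single loss propagates down
    the dependency chain (the 'indirect loss' effect)."""
    memo = {}
    def ok(layer):
        if layer not in memo:
            memo[layer] = layer in received and all(ok(d) for d in deps.get(layer, []))
        return memo[layer]
    return {l for l in received if ok(l)}

# Hypothetical stream: base layer L0, enhancement L1 refs L0, L2 refs L1.
deps = {"L1": ["L0"], "L2": ["L1"]}
print(decodable({"L0", "L1", "L2"}, deps))  # all three layers decode
print(decodable({"L1", "L2"}, deps))        # losing L0 invalidates everything
```

A flatter dependency structure (e.g., L1 and L2 both referencing only L0) loses less from the same packet drop, which is why the paper finds that simpler layer relationships outperform complicated ones on error-prone networks.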

  2. Sharing values, sharing a vision

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-31

Teamwork, partnership and shared values emerged as recurring themes at the Third Technology Transfer/Communications Conference. The program drew about 100 participants who sat through a packed two days to find ways for their laboratories and facilities to better help American business and the economy. Co-hosts were the Lawrence Livermore National Laboratory and the Lawrence Berkeley Laboratory, where most meetings took place. The conference followed traditions established at the First Technology Transfer/Communications Conference, conceived of and hosted by the Pacific Northwest Laboratory in May 1992 in Richland, Washington, and the second conference, hosted by the National Renewable Energy Laboratory in January 1993 in Golden, Colorado. As at the other conferences, participants at the third session represented the fields of technology transfer, public affairs and communications. They came from Department of Energy headquarters and DOE offices, laboratories and production facilities. Contained in this report are the keynote address, panel discussion, workshops, and presentations on technology transfer.

  3. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small, which results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter (EKF). Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution, which makes VEnKF a dynamic assimilation method. After this, a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother (VEnKS). In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble, but now using past iterations as surrogate observations, until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from scalability issues similar to 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
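
    The re-sampling idea at the heart of VEnKF can be sketched in a few lines: keep a Gaussian approximation (mean, covariance) of the state error, update it with each batch of observations as in the EKF, and draw a fresh ensemble from the updated Gaussian. The sketch below is illustrative only, assuming a linear observation operator and toy dimensions; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def venkf_cycle(mean, cov, y, H, R, n_ens):
    """One illustrative analysis cycle: update the Gaussian approximation
    with observations y (Kalman update, as in the EKF), then re-sample a
    fresh ensemble from the updated distribution."""
    S = H @ cov @ H.T + R                      # innovation covariance
    K = cov @ H.T @ np.linalg.inv(S)           # Kalman gain
    mean = mean + K @ (y - H @ mean)
    cov = (np.eye(len(mean)) - K @ H) @ cov
    # Re-sample ensemble members from the updated Gaussian
    ens = rng.multivariate_normal(mean, cov, size=n_ens)
    return mean, cov, ens

# Tiny 2-state example with one observed component
mean0, cov0 = np.zeros(2), np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
mean1, cov1, ens = venkf_cycle(mean0, cov0, np.array([1.0]), H, R, n_ens=20)
```

    The observation pulls the first state component toward 1 while the unobserved component keeps its prior, and the new 20-member ensemble is drawn around that posterior.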

  4. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  5. Combined Scalable Video Coding Method for Wireless Transmission

    Directory of Open Access Journals (Sweden)

    Achmad Affandi

    2011-08-01

    Full Text Available Mobile video streaming is one of the multimedia services that has developed very rapidly. Recently, bandwidth utilization for wireless transmission has been the main problem in the field of multimedia communications. In this research, we offer a combination of scalable methods as an attractive solution to this problem. A scalable method for wireless communication should adapt to the input video sequence. The ITU (International Telecommunication Union) standard Joint Scalable Video Model (JSVM) is employed to produce a combined scalable video coding (CSVC) method that matches the required quality of video streaming services for wireless transmission. The investigation in this paper shows that the combined scalable technique outperforms the non-scalable one in bit-rate utilization at a given layer.

  6. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm buildup on artificial cornea devices can lead to serious complications, including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface-topographical antimicrobial coating. Various surface structures, including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin, are promising anti-biofilm candidates; however, none meets the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructured polymer surfaces; 2) assessed the potential of poly(methyl methacrylate) nanopillars to kill or prevent biofilm formation by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls; 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved PMMA artificial cornea device; and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on thermoplastic polyurethane materials, commonly used in catheter tubing. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation by certain pathogenic bacteria.

  7. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  8. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor's substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  9. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our...... parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in....
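
    To make the object of the analysis concrete, here is a hypothetical example (not from the paper) of a loop whose output subscript is governed by a conditional induction variable, together with the prefix-sum rewriting that makes such loops parallelizable:

```python
import numpy as np

def filter_sequential(a):
    """Classic CIV loop: 'k' advances only when the condition holds, so
    the subscript out[k] is not an affine function of the loop index."""
    out = [0] * len(a)
    k = 0  # conditional induction variable
    for x in a:
        if x > 0:
            out[k] = x
            k += 1
    return out[:k]

def filter_parallel(a):
    """Equivalent data-parallel form: an exclusive prefix sum over the
    condition gives each iteration its write position independently."""
    a = np.asarray(a)
    mask = a > 0
    pos = np.cumsum(mask) - mask          # exclusive prefix sum
    out = np.empty(int(mask.sum()), dtype=a.dtype)
    out[pos[mask]] = a[mask]
    return out.tolist()
```

    Both versions compute the same compaction; the second exposes all iterations to parallel execution once the prefix sum is available.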

  10. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  11. Scalable and versatile graphene functionalized with the Mannich condensate.

    Science.gov (United States)

    Liao, Ruijuan; Tang, Zhenghai; Lin, Tengfei; Guo, Baochun

    2013-03-01

    The functionalized graphene (JTPG) is fabricated by chemical conversion of graphene oxide (GO), using tea polyphenols (TP) as the reducer and stabilizer, followed by further derivatization through the Mannich reaction between the pyrogallol groups on TP and Jeffamine M-2070. JTPG exhibits solubility in a broad spectrum of solvents, long-term stability, and single-layered dispersion in water and organic solvents, which are substantiated by AFM, TEM, and XRD. The paper-like JTPG hybrids prepared by vacuum-assisted filtration exhibit an unusual combination of high toughness (tensile strength of ~275 MPa and break strain of ~8%) and high electrical conductivity (~700 S/m). JTPG is also very promising for the fabrication of polymer/graphene composites, due to its excellent solubility in solvents of low boiling point and low toxicity. Accordingly, as an example, nitrile rubber/JTPG composites are fabricated by solution compounding in acetone. The resulting composite shows a low percolation threshold at 0.23 vol.% graphene. This versatility in both dispersibility and performance, together with the scalable process, enables a new way to scale up the fabrication of graphene-based polymer composites and hybrids with high performance.

  12. Scalable, ultra-resistant structural colors based on network metamaterials

    KAUST Repository

    Galinski, Henning

    2017-05-05

    Structural colors have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realize robust colors with a scalable fabrication technique is still lacking, hampering the realization of practical applications with this platform. Here, we develop a new approach based on large-scale network metamaterials that combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how subwavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero regions generated in the metallic network, generating the formation of saturated structural colors that cover a wide portion of the spectrum. Ellipsometry measurements support the efficient observation of these colors, even at angles of 70°. The network-like architecture of these nanomaterials allows for high mechanical resistance, which is quantified in a series of nano-scratch tests. With such remarkable properties, these metastructures represent a robust design technology for real-world, large-scale commercial applications.

  13. Scalable Engineering of Quantum Optical Information Processing Architectures (SEQUOIA)

    Science.gov (United States)

    2016-12-13

    Final R&D status report, 13 December 2016, for "Scalable Engineering of Quantum Optical Information-Processing Architectures (SEQUOIA)", contract number W31-P4Q-15-C-0045. The report covers a scalable architecture for LOQC and cluster-state quantum computing (ballistic or non-ballistic) with parametric nonlinearities (Kerr, chi-2...

  14. An open, interoperable, and scalable prehospital information technology network architecture.

    Science.gov (United States)

    Landman, Adam B; Rokos, Ivan C; Burns, Kevin; Van Gelder, Carin M; Fisher, Roger M; Dunford, James V; Cone, David C; Bogucki, Sandy

    2011-01-01

    Some of the most intractable challenges in prehospital medicine include response time optimization, inefficiencies at the emergency medical services (EMS)-emergency department (ED) interface, and the ability to correlate field interventions with patient outcomes. Information technology (IT) can address these and other concerns by ensuring that system and patient information is received when and where it is needed, is fully integrated with prior and subsequent patient information, and is securely archived. Some EMS agencies have begun adopting information technologies, such as wireless transmission of 12-lead electrocardiograms, but few agencies have developed a comprehensive plan for management of their prehospital information and integration with other electronic medical records. This perspective article highlights the challenges and limitations of integrating IT elements without a strategic plan, and proposes an open, interoperable, and scalable prehospital information technology (PHIT) architecture. The two core components of this PHIT architecture are 1) routers with broadband network connectivity to share data between ambulance devices and EMS system information services and 2) an electronic patient care report to organize and archive all electronic prehospital data. To successfully implement this comprehensive PHIT architecture, data and technology requirements must be based on best available evidence, and the system must adhere to health data standards as well as privacy and security regulations. Recent federal legislation prioritizing health information technology may position federal agencies to help design and fund PHIT architectures.

  15. Parallel peak pruning for scalable SMP contour tree computation

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Hamish A. [Univ. of Leeds (United Kingdom); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Sewell, Christopher M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahrens, James P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-09

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speedup in OpenMP and up to 50x speedup in NVIDIA Thrust.

  16. Scalable Multicore Motion Planning Using Lock-Free Concurrency.

    Science.gov (United States)

    Ichnowski, Jeffrey; Alterovitz, Ron

    2014-10-01

    We present PRRT (Parallel RRT) and PRRT* (Parallel RRT*), sampling-based methods for feasible and optimal motion planning designed for modern multicore CPUs. We parallelize RRT and RRT* such that all threads concurrently build a single motion planning tree. Parallelization in this manner requires that data structures, such as the nearest neighbor search tree and the motion planning tree, are safely shared across multiple threads. Rather than rely on traditional locks which can result in slowdowns due to lock contention, we introduce algorithms based on lock-free concurrency using atomic operations. We further improve scalability by using partition-based sampling (which shrinks each core's working data set to improve cache efficiency) and parallel work-saving (in reducing the number of rewiring steps performed in PRRT*). Because PRRT and PRRT* are CPU-based, they can be directly integrated with existing libraries. We demonstrate that PRRT and PRRT* scale well as core counts increase, in some cases exhibiting superlinear speedup, for scenarios such as the Alpha Puzzle and Cubicles scenarios and the Aldebaran Nao robot performing a 2-handed task.
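
    For readers unfamiliar with the underlying algorithm, a minimal serial RRT looks like the sketch below; PRRT's contribution is to let many threads run this loop concurrently on shared lock-free data structures, which this 2-D toy (all names and parameters are illustrative) does not attempt to reproduce:

```python
import math, random

def rrt(start, goal, is_free, bounds, step=0.5, iters=4000, goal_tol=0.5, seed=1):
    """Minimal serial RRT in 2-D: repeatedly sample a point, extend the
    nearest tree node toward it, and stop when a node lands near the goal."""
    random.seed(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        q = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Nearest neighbor by linear scan (PRRT uses a concurrent kd-tree)
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], q))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), q)
        new = (nx + step * (q[0] - nx) / d, ny + step * (q[1] - ny) / d) if d > step else q
        if is_free(new):
            parent[len(nodes)] = i
            nodes.append(new)
            if math.dist(new, goal) < goal_tol:
                # Walk parents back to the root to recover the path
                path, j = [], len(nodes) - 1
                while j is not None:
                    path.append(nodes[j])
                    j = parent[j]
                return path[::-1]
    return None

# Obstacle-free toy workspace
path = rrt((0.0, 0.0), (5.0, 5.0), lambda p: True, [(0.0, 6.0), (0.0, 6.0)])
```

    In PRRT the nodes list, parent links, and nearest-neighbor structure are shared across threads and updated with atomic operations instead of locks.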

  17. Scalable Parallel Density-based Clustering and Applications

    Science.gov (United States)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise. These algorithms have several applications requiring high performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelizing these algorithms is extremely challenging, as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the data access sequentiality and to achieve high parallelism, we develop new parallel algorithms, both for DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on a shared memory architecture and up to 5,765 using 8,192 cores on a distributed memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results of comparable quality to the classical algorithms.
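
    The connection between DBSCAN and connected components that the parallel algorithm exploits can be illustrated with a small union-find sketch; this is a toy serial version for intuition, not the authors' parallel code:

```python
import numpy as np

def dbscan_union_find(X, eps, min_pts):
    """DBSCAN phrased as connected components: union core points that lie
    within eps of each other, then attach border points to a core cluster."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neigh = d <= eps
    core = neigh.sum(axis=1) >= min_pts   # neighbor counts include the point itself

    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):                     # components over core points
        if core[i]:
            for j in range(i + 1, n):
                if core[j] and neigh[i, j]:
                    union(i, j)

    labels = np.full(n, -1)                # -1 marks noise
    roots = {}
    for i in range(n):
        if core[i]:
            labels[i] = roots.setdefault(find(i), len(roots))
    for i in range(n):                     # border points join a nearby core cluster
        if not core[i]:
            for j in range(n):
                if core[j] and neigh[i, j]:
                    labels[i] = labels[j]
                    break
    return labels

X = np.array([[0, 0], [0, 0.5], [0.5, 0],
              [10, 10], [10, 10.5], [10.5, 10],
              [50, 50]], dtype=float)
labels = dbscan_union_find(X, eps=1.0, min_pts=3)
```

    The two tight triples form two clusters and the distant point is labeled noise; the union-find structure is exactly what parallel connected-components algorithms distribute across cores.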

  18. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few 10s of gigaops, data archived in HSMs in a few 10s of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and the IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to 10s of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  19. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data, as well as low costs for database update and maintenance, thus helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational research.

  20. Using MPI to Implement Scalable Libraries

    Science.gov (United States)

    Lusk, Ewing

    MPI is an instantiation of a general-purpose programming model, and high-performance implementations of the MPI standard have provided scalability for a wide range of applications. Ease of use was not an explicit goal of the MPI design process, which emphasized completeness, portability, and performance. Thus it is not surprising that MPI is occasionally criticized for being inconvenient to use and thus a drag on software developer productivity. One approach to the productivity issue is to use MPI to implement simpler programming models. Such models may limit the range of parallel algorithms that can be expressed, yet provide sufficient generality to benefit a significant number of applications, even from different domains. We illustrate this concept with the ADLB (Asynchronous, Dynamic Load-Balancing) library, which can be used to express manager/worker algorithms in such a way that their execution is scalable, even on the largest machines. ADLB makes sophisticated use of MPI functionality while providing an extremely simple API for the application programmer. We will describe it in the context of solving Sudoku puzzles and a nuclear physics Monte Carlo application currently running on tens of thousands of processors.
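
    The manager/worker pattern that ADLB expresses over MPI can be sketched with a shared work pool; the thread-based version below is only an analogy for the put/get work-pool idea, not ADLB's actual API or its MPI implementation:

```python
import threading, queue

def run_pool(tasks, work_fn, n_workers=4):
    """Manager/worker sketch: a shared queue plays the role of ADLB's
    work pool; workers repeatedly 'get' work and 'put' results."""
    work, results = queue.Queue(), queue.Queue()
    for t in tasks:
        work.put(t)

    def worker():
        while True:
            try:
                t = work.get_nowait()
            except queue.Empty:
                return                     # pool drained, worker exits
            results.put(work_fn(t))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads: th.start()
    for th in threads: th.join()
    return sorted(results.queue)           # sort: completion order is nondeterministic

# e.g. square 0..9 across 4 workers
print(run_pool(range(10), lambda x: x * x))  # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

    In ADLB the "queue" is distributed across dedicated server processes and the put/get calls are MPI operations, which is what lets the same pattern scale to tens of thousands of processors.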

  1. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, automatically ensuring a divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures that can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. The algorithm scales to large clusters, showing both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
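
    The kernel the FMM accelerates here is the direct O(N^2) Biot-Savart sum over vortex particles. A smoothed direct evaluation can be sketched as below; the sign convention and Gaussian-free smoothing are illustrative choices, not the paper's reformulation:

```python
import numpy as np

def biot_savart_direct(x, alpha, delta=1e-3):
    """Direct O(N^2) Biot-Savart sum (the brute-force baseline for FMM):
    u_i = sum_j alpha_j x r_ij / (4 pi (|r_ij|^2 + delta^2)^(3/2)),
    with r_ij = x_i - x_j and a small smoothing radius delta that also
    regularizes the singular self-interaction term."""
    r = x[:, None, :] - x[None, :, :]                 # (N, N, 3) pairwise offsets
    r2 = (r ** 2).sum(-1) + delta ** 2
    k = 1.0 / (4.0 * np.pi * r2 ** 1.5)               # smoothed 1/|r|^3
    cross = np.cross(alpha[None, :, :], r)            # alpha_j x r_ij
    return (k[..., None] * cross).sum(axis=1)         # induced velocity, (N, 3)

rng = np.random.default_rng(3)
x = rng.standard_normal((5, 3))        # particle positions
alpha = rng.standard_normal((5, 3))    # vortex strengths
u = biot_savart_direct(x, alpha)
```

    The FMM replaces this dense all-pairs sum with a hierarchical far-field expansion, reducing the cost from O(N^2) to roughly O(N).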

  2. Using the scalable nonlinear equations solvers package

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W.D.; McInnes, L.C.; Smith, B.F.

    1995-02-01

    SNES (Scalable Nonlinear Equations Solvers) is a software package for the numerical solution of large-scale systems of nonlinear equations on both uniprocessors and parallel architectures. SNES also contains a component for the solution of unconstrained minimization problems, called SUMS (Scalable Unconstrained Minimization Solvers). Newton-like methods, which are known for their efficiency and robustness, constitute the core of the package. As part of the multilevel PETSc library, SNES incorporates many features and options from other parts of PETSc. In keeping with the spirit of the PETSc library, the nonlinear solution routines are data-structure-neutral, making them flexible and easily extensible. This users guide contains a detailed description of uniprocessor usage of SNES, with some added comments regarding multiprocessor usage. At this time the parallel version is undergoing refinement and extension, as we work toward a common interface for the uniprocessor and parallel cases. Thus, forthcoming versions of the software will contain additional features, and changes to the parallel interface may occur at any time. The new parallel version will employ the MPI (Message Passing Interface) standard for interprocessor communication. Since most of these details will be hidden, users will need to perform only minimal message-passing programming.
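
    The Newton-like core of SNES can be illustrated with a bare-bones Newton iteration for a small nonlinear system; this is a sketch of the textbook method, not the PETSc API (which adds line searches, trust regions, and data-structure-neutral callbacks):

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_it=50):
    """Bare-bones Newton iteration for F(x) = 0: at each step solve the
    linearized system J(x) dx = F(x) and update x <- x - dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        x = x - np.linalg.solve(J(x), f)
    return x

# e.g. intersect the circle x^2 + y^2 = 4 with the line y = x
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-1.0, 1.0]])
root = newton(F, J, [1.0, 0.5])
print(root)  # → close to [sqrt(2), sqrt(2)]
```

    SNES wraps exactly this loop behind a solver object, letting users supply F and J (or approximate J) while the library handles the linear solves and globalization.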

  3. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scaling up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ MacBook Pro. Through creating a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device. PMID:25859564
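
    The memory-mapping idea is easy to demonstrate: store the edge list in a binary file and map it, so the operating system pages in only the parts the computation touches. A minimal sketch (the file layout and the degree computation are illustrative, not the authors' app):

```python
import os
import tempfile
import numpy as np

# Write a tiny edge list (source, target) to disk as packed int32 pairs
edges = np.array([[0, 1], [0, 2], [1, 2], [2, 3]], dtype=np.int32)
path = os.path.join(tempfile.mkdtemp(), "edges.bin")
edges.tofile(path)

# Memory-map the file: no bulk load into RAM, pages fault in on access
mapped = np.memmap(path, dtype=np.int32, mode="r").reshape(-1, 2)
out_degree = np.bincount(mapped[:, 0], minlength=4)
print(out_degree.tolist())  # → [2, 1, 1, 0]
```

    On a real graph with hundreds of millions of edges the same pattern works unchanged, because the mapping, not the computation, decides which pages ever enter memory.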

  4. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Full Text Available Recently, positioning services have been receiving more attention, not only within the research community but also from service providers. From the service providers' point of view, a positioning service that can work seamlessly in all environments, for example, indoor, dense urban, and rural, has huge potential to open new markets. However, such a system must not only provide accurate position estimates but also be scalable and resistant to fake positioning requests. In previous work we proposed a modular system which is able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on the available radio signals. The system currently consists of three positioning modules: GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation, allowing higher scalability of the modular system and thus the provision of positioning services to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users require position estimates, since positioning error is affected by the response time of the positioning server.
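
    The module-selection step can be pictured as a simple priority rule over the currently available radio signals; the module names and their ordering below are illustrative assumptions, not the paper's actual selection algorithm:

```python
def select_module(signals):
    """Pick a positioning module from the available radio signals.
    Illustrative priority: GPS when it has a fix, else Wi-Fi, else GSM."""
    priority = ["GPS", "WiFi", "GSM"]
    for module in priority:
        if signals.get(module, False):
            return module
    return None  # no usable signal: no position estimate possible

print(select_module({"GPS": False, "WiFi": True, "GSM": True}))  # → WiFi
```

    A real implementation would weigh expected accuracy and response time per environment rather than a fixed ordering, which is precisely what the modular system automates.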

  5. An Open Infrastructure for Scalable, Reconfigurable Analysis

    Energy Technology Data Exchange (ETDEWEB)

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores, so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in situ, where available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed, and generic components support scalable aggregation and analysis of this data with little additional effort.

  6. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours while the original serial code would have needed decades of execution time. Copyright 2011 ACM.
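
    Relative parallel efficiency, as quoted above, is the measured speedup divided by the ideal core-count ratio. The arithmetic can be checked in two lines; the timing numbers here are illustrative, not measurements from the paper:

```python
def parallel_efficiency(t_base, p_base, t_big, p_big):
    """Relative parallel efficiency: speedup from p_base to p_big cores
    divided by the ideal factor p_big / p_base."""
    speedup = t_base / t_big
    return speedup / (p_big / p_base)

# Illustrative: 94% efficiency going from 256 to 65,536 cores means the
# run got ~240x faster rather than the ideal 256x.
print(round(parallel_efficiency(1.0, 256, 1.0 / 240.6, 65536), 2))  # → 0.94
```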

  7. Sharing Economy vs Sharing Cultures? Designing for social, economic and environmental good

    Directory of Open Access Journals (Sweden)

    Ann Light

    2015-05-01

    Full Text Available This paper explores the story behind a crowdfunding service as an example of sharing technology. Research in a small neighborhood of London showed how locally-developed initiatives can differ in tone, scale, ambition and practice from those getting attention in the so-called sharing economy. In local accounts, we see an emphasis on organizing together to create shared spaces for collaborative use of resources and joint ownership of projects and places. By contrast, many global business models feature significant elements of renting, leasing and hiring, and focus only on resource management, sometimes at the expense of community growth. The service we discuss is based in the area we studied and has a collective model of sharing, but hopes to be part of the new global movement. We use this hybridity to problematize issues of culture, place and scalability in developing sharing resources and addressing sustainability concerns. We relate this to the motivation, rhetoric and design choices of other local sharing enterprises and other global sharing economy initiatives, arguing, in conclusion, that there is no single sharing economy, but a variety of new cultures being fostered.

  8. A Scalable Object-Based Architecture

    NARCIS (Netherlands)

    Hummel, S.F.; Kaashoek, M.F.; Tanenbaum, A.S.

    1991-01-01

    Although large-scale shared-memory multiprocessors are believed to be easier to program than disjoint-memory multicomputers with similar numbers of processors, they have proven harder to build. To date, the efficiency of software implementations of virtual shared-memory (VSM) on multicomputers with

  9. Scheduling Heterogeneous Wireless Systems for Efficient Spectrum Access

    Directory of Open Access Journals (Sweden)

    Lichun Bao

    2010-01-01

    Full Text Available The spectrum scarcity problem has emerged in recent years due to unbalanced utilization of RF (radio frequency) bands in the current state of wireless spectrum allocations. Spectrum access scheduling addresses challenges arising from spectrum sharing by interleaving the channel access among multiple wireless systems in a TDMA fashion. Different from cognitive radio approaches, which are opportunistic and noncollaborative in general, spectrum access scheduling proactively structures and interleaves the channel access pattern of heterogeneous wireless systems through collaborative designs, by implementing a crucial architectural component, the base stations, on software defined radios (SDRs). We discuss our system design choices for spectrum sharing from multiple perspectives and then present the mechanisms for spectrum sharing and coexistence of GPRS+WiMAX and GPRS+WiFi as use cases, respectively. Simulations were carried out to show that spectrum access scheduling is an alternative, feasible, and promising approach to the spectrum scarcity problem.
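
    The TDMA interleaving idea can be sketched as a weighted slot assignment. This toy scheduler (a smoothed weighted round-robin of my own devising, not the paper's algorithm) gives each coexisting system a share of slots proportional to its demand:

```python
def build_tdma_schedule(demands, n_slots):
    """demands: {system: weight}. Assigns each TDMA slot to the system
    with the highest accumulated credit (smoothed weighted round-robin),
    so airtime shares converge to the demand weights."""
    total = float(sum(demands.values()))
    credit = {s: 0.0 for s in demands}
    schedule = []
    for _ in range(n_slots):
        for s in credit:
            credit[s] += demands[s] / total  # earn credit each slot
        winner = max(credit, key=credit.get)
        credit[winner] -= 1.0                # pay one slot of credit
        schedule.append(winner)
    return schedule

# A 3:1 demand split over four slots yields three WiMAX slots and one GPRS slot.
print(build_tdma_schedule({"WiMAX": 3, "GPRS": 1}, 4))
```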

  10. Towards Reliable, Scalable, and Energy Efficient Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2017-11-01

    The cognitive radio (CR) concept is expected to be adopted along with many technologies to meet the requirements of the next generation of wireless and mobile systems, the 5G. Consequently, it is important to determine the performance of the CR systems with respect to these requirements. In this thesis, after briefly describing the 5G requirements, we present three main directions in which we aim to enhance the CR performance. The first direction is the reliability. We study the achievable rate of a multiple-input multiple-output (MIMO) relay-assisted CR under two scenarios: an unmanned aerial vehicle (UAV) one-way relaying (OWR) and a fixed two-way relaying (TWR). We propose special linear precoding schemes that enable the secondary user (SU) to take advantage of the primary-free channel eigenmodes. We study the SU rate sensitivity to the relay power, the relay gain, the UAV altitude, the number of antennas and the line of sight availability. The second direction is the scalability. We first study a multiple access channel (MAC) with multiple SUs scenario. We propose a particular linear precoding and SUs selection scheme maximizing their sum-rate. We show that the proposed scheme provides a significant sum-rate improvement as the number of SUs increases. Secondly, we expand our scalability study to cognitive cellular networks. We propose a low-complexity algorithm for base station activation/deactivation and dynamic spectrum management maximizing the profits of primary and secondary networks subject to green constraints. We show that our proposed algorithms achieve performance close to those obtained with the exhaustive search method. The third direction is the energy efficiency (EE). We present a novel power allocation scheme based on maximizing the EE of both single-input and single-output (SISO) and MIMO systems. We solve a non-convex problem and derive explicit expressions of the corresponding optimal power. When the instantaneous channel is not available, we

  11. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
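
    The Amdahl's-law analysis mentioned above takes only a few lines to reproduce; the serial-fraction values below are illustrative, not figures from the paper:

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Amdahl's law: attainable speedup on n_cores for a program in
    which serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With no serial part, 12 cores give the ideal 12x speedup;
# even a 5% serial fraction caps 12 cores well below that.
print(amdahl_speedup(0.0, 12))
print(amdahl_speedup(0.05, 12))
```

    The observed 12-fold improvement on 12 cores corresponds to a near-zero serial fraction in this model.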

  13. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  14. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need of redesigning the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show that scalability gives a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but the other modules are designed such that they scale with these parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.

  15. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward, DNF. We call it Scalable DNF, S-DNF, and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay...

  16. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  17. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill), or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that the objective is sub-modular and thus can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
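
    The near-optimal guarantee that submodularity buys comes from the classic greedy strategy, sketched here generically with a coverage-style toy objective (not BASSET's actual proximity function):

```python
def greedy_max(candidates, f, k):
    """Greedily grow a set of size k maximizing a monotone submodular
    set function f; achieves a (1 - 1/e) approximation of the optimum."""
    chosen = set()
    for _ in range(min(k, len(candidates))):
        gain = lambda c: f(chosen | {c}) - f(chosen)  # marginal gain
        best = max((c for c in candidates if c not in chosen), key=gain)
        chosen.add(best)
    return chosen

# Toy objective: how many target nodes a set of gateways covers.
covers = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
f = lambda s: len(set().union(*(covers[g] for g in s))) if s else 0
print(greedy_max(list(covers), f, 2))
```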

  18. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer and how strategic partners are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer, and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses...

  19. Towards scalable Byzantine fault-tolerant replication

    Science.gov (United States)

    Zbierski, Maciej

    2017-08-01

    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.

  20. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
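
    A minimal sketch of the two atomic operators named above, selection and aggregation, over an attribute-labeled graph; the dict/edge-list representation is an assumption of this sketch, not the paper's formalism:

```python
def select(nodes, edges, pred):
    """Selection: keep only nodes whose attributes satisfy pred,
    together with the edges between surviving nodes."""
    kept = {n for n, attrs in nodes.items() if pred(attrs)}
    return ({n: nodes[n] for n in kept},
            [(u, v) for u, v in edges if u in kept and v in kept])

def aggregate(nodes, edges, key):
    """Aggregation: merge nodes sharing key(attrs) into super-nodes;
    edges between distinct groups collapse into single super-edges."""
    group = {n: key(attrs) for n, attrs in nodes.items()}
    super_nodes = set(group.values())
    super_edges = sorted({(group[u], group[v])
                          for u, v in edges if group[u] != group[v]})
    return super_nodes, super_edges

nodes = {1: {"kind": "A", "w": 5}, 2: {"kind": "A", "w": 1}, 3: {"kind": "B", "w": 2}}
edges = [(1, 2), (2, 3)]
print(select(nodes, edges, lambda a: a["w"] > 1))
print(aggregate(nodes, edges, lambda a: a["kind"]))
```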

  1. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    foreground layers is merited. (2) The typical map making professional has changed from a GIS specialist to a busy person with map making as a secondary skill. Today, thematic maps are produced by journalists, aid workers, amateur data enthusiasts, and scientists alike. Therefore it is crucial that this diverse group of map makers is provided with easy-to-use and expressive thematic map design tools. Such tools should support customized selection of data for maps in scenarios where developer time is a scarce resource. (3) The Web provides access to massive data repositories for thematic maps... based on an access log of recent requests. The results show that Glossy SQL and CVL can be used to compute cartographic selection by processing one or more complex queries in a relational database. The scalability of the approach has been verified up to half a million objects in the database. Furthermore...

  2. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Science.gov (United States)

    Tizon, Nicolas; Pesquet-Popescu, Béatrice

    2008-12-01

    This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media aware network element. The transport channel concerned is a dedicated channel subject to parameter (bitrate, loss rate) variations over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods, and they demonstrate how ROI coding combined with SNR scalability further improves the visual quality.
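
    The core of such layer-aware scheduling can be sketched very simply: under a bitrate budget, base-layer packets are always preferred and enhancement-layer packets are dropped first. The packet model and budget below are illustrative assumptions, not the paper's algorithm:

```python
def schedule_packets(packets, budget):
    """packets: list of (layer, size) with layer 0 = base layer.
    Sends packets in order of importance (lowest layer first)
    until the bitrate budget is exhausted."""
    sent, used = [], 0
    for layer, size in sorted(packets, key=lambda p: p[0]):
        if used + size <= budget:
            sent.append((layer, size))
            used += size
    return sent

# Base and first enhancement layer fit; the top enhancement layer is dropped.
print(schedule_packets([(2, 50), (0, 40), (1, 30)], budget=80))
```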

  3. Scalable privacy-preserving data sharing methodology for genome-wide association studies.

    Science.gov (United States)

    Yu, Fei; Fienberg, Stephen E; Slavković, Aleksandra B; Uhler, Caroline

    2014-08-01

    The protection of privacy of individual-level information in genome-wide association study (GWAS) databases has been a major concern of researchers following the publication of "an attack" on GWAS data by Homer et al. (2008). Traditional statistical methods for confidentiality and privacy protection of statistical databases do not scale well to deal with GWAS data, especially in terms of guarantees regarding protection from linkage to external information. The more recent concept of differential privacy, introduced by the cryptographic community, is an approach that provides a rigorous definition of privacy with meaningful privacy guarantees in the presence of arbitrary external information, although the guarantees may come at a serious price in terms of data utility. Building on such notions, Uhler et al. (2013) proposed new methods to release aggregate GWAS data without compromising an individual's privacy. We extend the methods developed in Uhler et al. (2013) for releasing differentially-private χ²-statistics by allowing for an arbitrary number of cases and controls, and for releasing differentially-private allelic test statistics. We also provide a new interpretation by assuming the controls' data are known, which is a realistic assumption because some GWAS use publicly available data as controls. We assess the performance of the proposed methods through a risk-utility analysis on a real data set consisting of DNA samples collected by the Wellcome Trust Case Control Consortium and compare the methods with the differentially-private release mechanism proposed by Johnson and Shmatikov (2013). Copyright © 2014 Elsevier Inc. All rights reserved.
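
    The Laplace mechanism underlying such differentially private releases can be sketched in a few lines. The sensitivity of a GWAS χ² statistic depends on the case/control counts (derived in Uhler et al., 2013), so the sensitivity value passed here is purely illustrative:

```python
import math
import random

_rng = random.Random(0)  # fixed seed so the sketch is reproducible

def dp_release(stat, sensitivity, epsilon):
    """epsilon-differentially private release of a statistic via the
    Laplace mechanism: add Laplace noise of scale sensitivity/epsilon."""
    u = _rng.random() - 0.5  # uniform on (-0.5, 0.5)
    scale = sensitivity / epsilon
    # inverse-CDF sampling of Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return stat + noise

# Smaller epsilon (stronger privacy) means noisier releases.
print(dp_release(stat=12.3, sensitivity=1.0, epsilon=0.5))
```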

  4. A Scalable QoS-Aware VoD Resource Sharing Scheme for Next Generation Networks

    Science.gov (United States)

    Huang, Chenn-Jung; Luo, Yun-Cheng; Chen, Chun-Hua; Hu, Kai-Wen

    In the network-aware concept, applications are aware of network conditions and are adaptable to the varying environment to achieve acceptable and predictable performance. In this work, a solution for video on demand service that integrates wireless and wired networks by using the network-aware concept is proposed to reduce the blocking probability and dropping probability of mobile requests. A fuzzy logic inference system is employed to select appropriate cache relay nodes to cache published video streams and distribute them to different peers through service oriented architecture (SOA). SIP-based control protocol and IMS standard are adopted to ensure the possibility of heterogeneous communication and provide a framework for delivering real-time multimedia services over an IP-based network to ensure interoperability, roaming, and end-to-end session management. The experimental results demonstrate the effectiveness and practicability of the proposed work.

  5. A Hardware-Efficient Scalable Spike Sorting Neural Signal Processor Module for Implantable High-Channel-Count Brain Machine Interfaces.

    Science.gov (United States)

    Yang, Yuning; Boling, Sam; Mason, Andrew J

    2017-08-01

    Next-generation brain machine interfaces demand a high-channel-count neural recording system to wirelessly monitor activities of thousands of neurons. A hardware efficient neural signal processor (NSP) is greatly desirable to ease the data bandwidth bottleneck for a fully implantable wireless neural recording system. This paper demonstrates a complete multichannel spike sorting NSP module that incorporates all of the necessary spike detector, feature extractor, and spike classifier blocks. To meet high-channel-count and implantability demands, each block was designed to be highly hardware efficient and scalable while sharing resources efficiently among multiple channels. To process multiple channels in parallel, scalability analysis was performed, and the utilization of each block was optimized according to its input data statistics and the power, area and/or speed of each block. Based on this analysis, a prototype 32-channel spike sorting NSP scalable module was designed and tested on an FPGA using synthesized datasets over a wide range of signal to noise ratios. The design was mapped to 130 nm CMOS to achieve 0.75 μW of power and 0.023 mm² of area per channel based on post synthesis simulation results, which permits scalability of digital processing to 690 channels on a 4×4 mm² electrode array.

  6. Share your Sweets

    DEFF Research Database (Denmark)

    Byrnit, Jill; Høgh-Olesen, Henrik; Makransky, Guido

    2015-01-01

    All over the world, humans (Homo sapiens) display resource-sharing behavior, and common patterns of sharing seem to exist across cultures. Humans are not the only primates to share, and observations from the wild have long documented food sharing behavior in our closest phylogenetic relatives...

  7. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  8. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three dimensional spatio-temporal decomposition of the video sequence, followed by compression using EBCOT, generates an SNR and resolution scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.

  9. Spectrum war

    DEFF Research Database (Denmark)

    Henten, Anders; Tadayoni, Reza; Windekilde, Iwona Maria

    a conflict in access to the valuable spectrum resources allocated to TV broadcast that has existed for many years and has intensified in different phases of technological development, and the second being an obvious conflict of interest between the different stakeholders within the mobile...

  10. Sagnac secret sharing over telecom fiber networks.

    Science.gov (United States)

    Bogdanski, Jan; Ahrens, Johan; Bourennane, Mohamed

    2009-01-19

    We report the first Sagnac quantum secret sharing (in three- and four-party implementations) over 1550 nm single mode fiber (SMF) networks, using a single qubit protocol with phase encoding. Our secret sharing experiment has been based on a single qubit protocol, which has opened the door to practical secret sharing implementation over fiber telecom channels and in free-space. The previous quantum secret sharing proposals were based on multiparticle entangled states, which are difficult to implement in practice and not scalable. Our experimental data in the three-party implementation show stable (in regards to birefringence drift) quantum secret sharing transmissions at total Sagnac transmission loop distances of 55-75 km with quantum bit error rates (QBER) of 2.3-2.4% for the mean photon number μ = 0.1 and 1.7-2.1% for μ = 0.3. In the four-party case we have achieved quantum secret sharing transmissions at total Sagnac transmission loop distances of 45-55 km with QBERs of 3.0-3.7% for μ = 0.1 and 1.8-3.0% for μ = 0.3. The stability of quantum transmission has been achieved thanks to our new concept for compensation of SMF birefringence effects in Sagnac, based on a polarization control system and a polarization insensitive phase modulator. The measurement results show the feasibility of quantum secret sharing over telecom fiber networks in Sagnac configuration, using standard fiber telecom components.

  11. Scalable, remote administration of Windows NT.

    Energy Technology Data Exchange (ETDEWEB)

    Gomberg, M.; Stacey, C.; Sayre, J.

    1999-06-08

    In the UNIX community there is an overwhelming perception that NT is impossible to manage remotely and that NT administration doesn't scale. This was essentially true with earlier versions of the operating system. Even today, out of the box, NT is difficult to manage remotely. Many tools, however, now make remote management of NT not only possible, but under some circumstances very easy. In this paper we discuss how we at Argonne's Mathematics and Computer Science Division manage all our NT machines remotely from a single console, with minimum locally installed software overhead. We also present NetReg, which is a locally developed tool for scalable registry management. NetReg allows us to apply a registry change to a specified set of machines. It is a command line utility that can be run in either interactive or batch mode and is written in Perl for Win32, taking heavy advantage of the Win32::TieRegistry module.

  12. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.

    2015-02-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as Alter, or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.

  13. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas

    2016-01-01

    Full Text Available A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged to 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1 in size of 103.5×103.5 mm² operates at the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm² is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to the multiport WLAN, WIMAX, and LTE devices with port upgradability.

  14. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities than those of a popular modularity maximization algorithm is also illustrated.
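
    To make the model concrete, here is a tiny (deliberately non-scalable, O(n²)) Bernoulli stochastic-block-model log-likelihood; the paper's inference machinery maximizes essentially this quantity, but in time linear in the number of edges:

```python
import math

def sbm_loglik(n, edges, labels, p_in, p_out):
    """Log-likelihood of an undirected simple graph on n nodes under a
    planted-partition SBM: edge prob p_in within a block, p_out across."""
    present = {frozenset(e) for e in edges}
    ll = 0.0
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if labels[u] == labels[v] else p_out
            ll += math.log(p if frozenset((u, v)) in present else 1.0 - p)
    return ll

# Two triangles joined by one bridge: the true 2-block labeling
# scores higher than an arbitrary one.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
good = [0, 0, 0, 1, 1, 1]
bad = [0, 1, 0, 1, 0, 1]
print(sbm_loglik(6, edges, good, 0.9, 0.1))
print(sbm_loglik(6, edges, bad, 0.9, 0.1))
```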

  15. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    Rijshouwer EJC

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as a part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm² in 65 nm CMOS (including memories) and proves functional on silicon.

  16. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing, that permit robust and efficient transport of the SVC layers. Several transmission strategies built on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
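    The layer-to-stream mapping that makes SVC and SCTP fit together can be sketched in a few lines. The field names and the policy (reliable base layer, droppable enhancement layers) are illustrative assumptions for this sketch, not the paper's exact transmission strategy:

```python
def assign_layers_to_streams(nal_units):
    """Toy mapping of SVC NAL units onto SCTP streams: the base layer
    (layer 0) goes to stream 0 and is sent reliably; enhancement layers
    go to higher-numbered streams and are marked droppable. The 'layer'
    and 'id' keys are assumed metadata from bitstream demuxing."""
    assignments = []
    for nal in nal_units:
        assignments.append({
            "stream": nal["layer"],          # one SCTP stream per SVC layer
            "reliable": nal["layer"] == 0,   # only the base layer is reliable
            "nal": nal["id"],
        })
    return assignments
```

    Under congestion, a sender following this policy can abandon enhancement-layer streams while the base layer still arrives, which is the graceful-degradation property the study measures via PSNR.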

  17. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complex systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene × environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  18. Scalability and interoperability within glideinWMS

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, D.; /Wisconsin U., Madison; Sfiligoi, I.; /Fermilab; Padhi, S.; /UC, San Diego; Frey, J.; /Wisconsin U., Madison; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  19. ARC Code TI: Block-GP: Scalable Gaussian Process Regression

    Data.gov (United States)

    National Aeronautics and Space Administration — Block GP is a Gaussian Process regression framework for multimodal data, that can be an order of magnitude more scalable than existing state-of-the-art nonlinear...

  20. Scalable pattern recognition algorithms applications in computational biology and bioinformatics

    CERN Document Server

    Maji, Pradipta

    2014-01-01

    Reviews the development of scalable pattern recognition algorithms for computational biology and bioinformatics Includes numerous examples and experimental results to support the theoretical concepts described Concludes each chapter with directions for future research and a comprehensive bibliography

  1. Scalability of telecom cloud architectures for live-TV distribution

    OpenAIRE

    Asensio Carmona, Adrian; Contreras, Luis Miguel; Ruiz Ramírez, Marc; López Álvarez, Victor; Velasco Esteban, Luis Domingo

    2015-01-01

    A hierarchical distributed telecom cloud architecture for live-TV distribution exploiting flexgrid networking and SBVTs is proposed. Its scalability is compared to that of a centralized architecture. Cost savings as high as 32 % are shown. Peer Reviewed

  2. Evaluating the Scalability of Enterprise JavaBeans Technology

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yan (Jenny); Gorton, Ian; Liu, Anna; Chen, Shiping; Paul A Strooper; Pornsiri Muenchaisri

    2002-12-04

    One of the major problems in building large-scale distributed systems is to anticipate the performance of the eventual solution before it has been built. This problem is especially germane to Internet-based e-business applications, where failure to provide high performance and scalability can lead to application and business failure. The fundamental software engineering problem is compounded by many factors, including individual application diversity, software architecture trade-offs, COTS component integration requirements, and differences in performance of various software and hardware infrastructures. In this paper, we describe the results of an empirical investigation into the scalability of a widely used distributed component technology, Enterprise JavaBeans (EJB). A benchmark application is developed and tested to measure the performance of a system as both the client load and component infrastructure are scaled up. A scalability metric from the literature is then applied to analyze the scalability of the EJB component infrastructure under two different architectural solutions.
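    A common scalability metric in this literature (e.g., Jogalekar and Woodside's) is based on productivity, throughput delivered per unit cost, compared across scaled configurations. The sketch below illustrates that idea under our own simplifying assumptions; it is not necessarily the exact metric the paper applies:

```python
def relative_scalability(throughput, cost):
    """Productivity-style scalability: productivity at each scale is
    throughput divided by cost; the metric reports each scale's
    productivity relative to the smallest configuration. A value near
    1.0 means cost-proportional scaling; below 1.0, diminishing returns."""
    productivity = [t / c for t, c in zip(throughput, cost)]
    base = productivity[0]
    return [p / base for p in productivity]
```

    For example, tripling the infrastructure cost while throughput only grows from 100 to 240 requests/s yields relative scalability [1.0, 0.9, 0.8], quantifying the drop-off a benchmark like the paper's would expose.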

  3. Scalable RFCMOS Model for 90 nm Technology

    Directory of Open Access Journals (Sweden)

    Ah Fatt Tong

    2011-01-01

    Full Text Available This paper presents the formation of the parasitic components that exist in the RF MOSFET structure during high-frequency operation. The parasitic components are extracted from the transistor's S-parameter measurements, and their geometry dependence is studied with respect to the layout structure. Physical geometry equations are proposed to represent these parasitic components, and by implementing them into the RF model, a scalable RFCMOS model valid up to 49.85 GHz is demonstrated. A new verification technique is proposed to verify the quality of the developed scalable RFCMOS model. The proposed technique can shorten the verification time of the scalable RFCMOS model and ensure that the coded scalable model file is error-free and thus more reliable to use.

  4. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding, mainly by exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented, based on extending a lossy Transform-Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame-by-frame encoding. Comparing lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec saves up to 5%-13% in bits compared to JPEG-LS and H.264 Intra-frame lossless coding, while remaining scalable to lossless.

  5. Improving the Performance Scalability of the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, Arthur [Lawrence Livermore National Laboratory (LLNL); Worley, Patrick H [ORNL

    2012-01-01

    The Community Atmosphere Model (CAM), which serves as the atmosphere component of the Community Climate System Model (CCSM), is the most computationally expensive CCSM component in typical configurations. On current and next-generation leadership class computing systems, the performance of CAM is tied to its parallel scalability. Improving performance scalability in CAM has been a challenge, due largely to algorithmic restrictions necessitated by the polar singularities in its latitude-longitude computational grid. Nevertheless, through a combination of exploiting additional parallelism, implementing improved communication protocols, and eliminating scalability bottlenecks, we have been able to more than double the maximum throughput rate of CAM on production platforms. We describe these improvements and present results on the Cray XT5 and IBM BG/P. The approaches taken are not specific to CAM and may inform similar scalability enhancement activities for other codes.

  6. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  7. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Moerman Ingrid

    2007-01-01

    Full Text Available Scalable multiple-description (MD coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ. Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.

  9. 47 CFR 90.1407 - Spectrum use in the network.

    Science.gov (United States)

    2010-10-01

    § 90.1407 Spectrum use in the network. (a) Spectrum use. The Shared Wireless Broadband Network will operate using spectrum associated... from the primary public safety operations in the 763-768 MHz and 793-798 MHz bands. The network...

  10. Cognitive Spectrum Efficient Multiple Access Technique using Relay Systems

    DEFF Research Database (Denmark)

    Frederiksen, Flemming Bjerge; Prasad, Ramjee

    2007-01-01

    Methods to enhance the use of the frequency spectrum through automatic spectrum sensing and spectrum sharing in a cognitive radio context are presented and discussed in this paper. Ideas to increase the coverage of cellular systems by relay channels, relay stations and collaborate...

  11. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    Science.gov (United States)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  12. SDC: Scalable description coding for adaptive streaming media

    OpenAIRE

    Quinlan, Jason J.; Zahran, Ahmed H.; Sreenan, Cormac J.

    2012-01-01

    Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) represent well-known techniques for video compression with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique that balances the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-percei...

  13. Scalable persistent identifier systems for dynamic datasets

    Science.gov (United States)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.

    2016-12-01

    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
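    The pattern-plus-content-negotiation idea can be sketched as a small resolver in which identifier patterns (rather than registered individual objects) map to per-media-type representations. The regex, media types, and URL templates below are invented for illustration and do not reflect the actual UNSDI or IGSN services:

```python
import re

def resolve(identifier, accept="text/html"):
    """Toy pattern-based identifier resolver: each pattern captures the
    variable part of an identifier and maps it into one of several
    views, so a single identifier resolves to multiple representations
    via content negotiation. Patterns and URLs are hypothetical."""
    patterns = [
        (re.compile(r"^igsn:(?P<sample>[A-Z0-9.]+)$"), {
            "text/html": "https://example.org/sample/{sample}",
            "application/json": "https://example.org/api/sample/{sample}.json",
        }),
    ]
    for regex, views in patterns:
        m = regex.match(identifier)
        if m:
            return views[accept].format(**m.groupdict())
    raise KeyError(identifier)
```

    Because resolution is driven by the pattern rather than a per-object registry entry, dynamically generated objects resolve without prior registration, which is the scalability point the abstract makes.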

  14. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

    Full Text Available Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a “superatom,” is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of 2 orders of magnitude scalable superatoms utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.

  16. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and executed on a number of execution environments, but we emphasize a particular language called MyriaL, which supports both imperative and declarative styles, and a particular execution engine called MyriaX, which uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
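    Iteration as a first-class relational construct can be illustrated with semi-naive evaluation of graph reachability, the style of computation a language like MyriaL expresses as iterated joins. This plain-Python sketch is ours, not MyriaL code:

```python
def transitive_closure(edges):
    """Semi-naive fixed-point iteration over a binary relation: each
    round joins only the newly derived pairs (the delta) against the
    base relation, stopping when no new pairs appear. This is the
    'relational algebra plus iteration' pattern, in plain Python sets."""
    closure = set(edges)
    delta = set(edges)
    while delta:
        # join step: delta(a, b) x edges(b, c) -> (a, c)
        derived = {(a, c) for (a, b) in delta for (b2, c) in edges if b == b2}
        delta = derived - closure  # keep only genuinely new facts
        closure |= delta
    return closure
```

    Restricting the join to the delta rather than the whole closure is what makes the iteration efficient, and each join round is itself embarrassingly data-parallel, which is the property such engines exploit.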

  17. Physical principles for scalable neural recording.

    Science.gov (United States)

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. 
The use of embedded local recording and

  18. Memory-Scalable GPU Spatial Hierarchy Construction.

    Science.gov (United States)

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
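    The PBFS idea, processing only a bounded portion of the frontier per iteration and immediately retiring finished nodes, can be caricatured in a few lines. This toy median-split build is our own serial illustration of the control structure, not the paper's GPU kd-tree or BVH builder:

```python
from collections import deque

def pbfs_build(items, leaf_size, max_active):
    """Partial breadth-first build: each iteration takes at most
    max_active frontier nodes, splits them at the median, and retires
    finished leaves immediately (the analogue of dumping constructed
    nodes out of GPU memory), instead of expanding the entire BFS
    frontier at once."""
    queue = deque([sorted(items)])
    leaves = []
    while queue:
        batch = [queue.popleft() for _ in range(min(max_active, len(queue)))]
        for node in batch:
            if len(node) <= leaf_size:
                leaves.append(node)  # retired: leaves the working set
            else:
                mid = len(node) // 2
                queue.append(node[:mid])
                queue.append(node[mid:])
    return leaves
```

    The resulting leaves are identical whatever the batch bound; only the peak number of simultaneously active nodes changes, which is exactly the memory/parallelism trade the paper tunes.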

  19. Robust and Scalable DTLS Session Establishment

    OpenAIRE

    Tiloca, Marco; Gehrmann, Christian; Seitz, Ludwig

    2016-01-01

    The Datagram Transport Layer Security (DTLS) protocol is highly vulnerable to a form of denial-of-service (DoS) attack aimed at establishing a high number of invalid, half-open, secure sessions. Moreover, even when the efficient pre-shared key provisioning mode is considered, the key storage on the server side scales poorly with the number of clients. SICS Swedish ICT has designed a security architecture that efficiently addresses both issues without breaking the current standard.

  20. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    Security features (excerpt): client authentication allows custom user/password, X509, LDAP, Kerberos, or HTTPS; server authentication allows a shared keyfile... We decomposed each of the feature categories (Distribution, Data Replication, and Security) into a collection of specific features. Each feature has a set of

  1. Motivation for Knowledge Sharing by Expert Participants in Company-Hosted Online User Communities

    Science.gov (United States)

    Cheng, Jingli

    2014-01-01

    Company-hosted online user communities are increasingly popular as firms continue to search for ways to provide their customers with high quality and reliable support in a low cost and scalable way. Yet, empirical understanding of motivations for knowledge sharing in this type of online communities is lacking, especially with regard to an…

  2. Share Your Values

    Science.gov (United States)

    Today, teenagers are bombarded ... mid-twenties. The Most Effective Way to Instill Values? By Example. Your words will carry more weight ...

  3. Sharing is sparing

    NARCIS (Netherlands)

    P.Y. Kocher; U. Gaudenz; P. Troxler; Dr. P. Troxler; P. Wolf

    2014-01-01

    The commitment of the Fab Lab community to participate in processes of commons-based knowledge production thus also includes global knowledge sharing. For sharing back into the global commons, however, new knowledge needs to be documented in a way that allows it to be shared by means of information

  4. Urban sharing culture

    DEFF Research Database (Denmark)

    Fjalland, Emmy Laura Perez

    In urban areas, sharing cultures, services and economies are on the rise. People share, rent and recycle their homes, cars, bikes, rides, tools, clothes, working spaces, know-how and so on. The sharing culture can be understood as mobilities (Kesselring and Vogl 2013) of goods, values and ideas reshaping... problems and side effects from the concentration of consumption and contamination; and, due to the shift from ownership to access, it changes our basic socio-cultural norms (Sayer 2005; Sayer 2011) about the ‘good’ life and social status (Freudendal-Pedersen 2007), commons and individuality, responsibility... and trust (Thomsen 2013; Bauman 2000; Beck 1992; Giddens 1991). The sharing economy is currently hyper-trendy, but before declaring capitalism dead we need to understand the basics of sharing economies and cultures, asking who can share and what we will share. Furthermore, it is crucial to study what...

  5. Spectrum pooling in mmWave Networks

    DEFF Research Database (Denmark)

    Boccardi, Federico; Shokri-Ghadikolaei, Hossein; Fodor, Gabor

    2016-01-01

    Motivated by the specific characteristics of mmWave technologies, we discuss the possibility of an authorization regime that allows spectrum sharing between multiple operators, also referred to as spectrum pooling. In particular, considering user rate as the performance measure, we assess the benefit of coordination among networks of different operators, study the impact of beamforming at both base stations and user terminals, and analyze the pooling performance at different frequency carriers. We also discuss the enabling spectrum mechanisms, architectures, and protocols required to make spectrum pooling work in real networks. Our initial results show that, from a technical perspective, spectrum pooling at mmWave has the potential to use the resources more efficiently than traditional exclusive spectrum allocation to a single operator. However, further studies are needed in order to reach...

  6. A Hybrid MPI-OpenMP Scheme for Scalable Parallel Pseudospectral Computations for Fluid Turbulence

    Science.gov (United States)

    Rosenberg, D. L.; Mininni, P. D.; Reddy, R. N.; Pouquet, A.

    2010-12-01

    A hybrid scheme that utilizes MPI for distributed memory parallelism and OpenMP for shared memory parallelism is presented. The work is motivated by the desire to achieve exceptionally high Reynolds numbers in pseudospectral computations of fluid turbulence on emerging petascale, high core-count, massively parallel processing systems. The hybrid implementation derives from and augments a well-tested scalable MPI-parallelized pseudospectral code. The hybrid paradigm leads to a new picture for the domain decomposition of the pseudospectral grids, which is helpful in understanding, among other things, the 3D transpose of the global data that is necessary for the parallel fast Fourier transforms that are the central component of the numerical discretizations. Details of the hybrid implementation are provided, and performance tests illustrate the utility of the method. It is shown that the hybrid scheme achieves near ideal scalability up to ~20000 compute cores with a maximum mean efficiency of 83%. Data are presented that demonstrate how to choose the optimal number of MPI processes and OpenMP threads in order to optimize code performance on two different platforms.
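    The global transpose at the heart of the parallel FFT can be illustrated in serial form: each "process" owns a slab of rows, and an all-to-all exchange (typically an MPI_Alltoall in real codes) regroups the data into slabs of columns. This 2D pure-Python stand-in is our own simplification of that 3D exchange:

```python
def transpose_slabs(slabs):
    """Simulate the distributed transpose behind a slab-decomposed
    pseudospectral solver: process p owns rows [p*rows_per, (p+1)*rows_per)
    of an N x N grid; after the exchange, process q owns the
    corresponding slab of columns. Serial stand-in for the MPI exchange."""
    nprocs = len(slabs)
    rows_per = len(slabs[0])
    n = nprocs * rows_per
    new_slabs = [[[0] * n for _ in range(rows_per)] for _ in range(nprocs)]
    for p in range(nprocs):
        for i in range(rows_per):
            for j in range(n):
                # global element (p*rows_per + i, j) lands at (j, p*rows_per + i)
                q, jq = divmod(j, rows_per)
                new_slabs[q][jq][p * rows_per + i] = slabs[p][i][j]
    return new_slabs
```

    In the hybrid scheme, each MPI process would additionally split the loop over its local rows among OpenMP threads, which is where the shared-memory level of parallelism enters.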

  7. Oceanotron, Scalable Server for Marine Observations

    Science.gov (United States)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, generally speaking, for water-column observation repositories, Ifremer decided in 2010 to develop the oceanotron server. Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OpeNDAP, ...), the server is designed to manage plugins: - StorageUnits: which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format). - FrontDesks: which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OpenDAP). In between, a third type of plugin may be inserted: - TransformationUnits: which apply ocean-business-related transformations to the features (for example, conversion of vertical coordinates from pressure in decibars to meters under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron frontdesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean business expertise in software development without being indentured to
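The plugin chaining described above (StorageUnit reads, TransformationUnits rewrite features, a FrontDesk serialises for one protocol) can be sketched with toy stand-ins. All class and field names below are illustrative, not oceanotron's actual Java API.

```python
from dataclasses import dataclass

@dataclass
class Profile:                  # stand-in for the shared observation model
    depth: float                # vertical coordinate
    temperature: float

class ListStorageUnit:
    """StorageUnit plugin: here backed by a plain in-memory list."""
    def __init__(self, profiles):
        self._profiles = profiles
    def read(self):
        return list(self._profiles)

class PressureToDepth:
    """TransformationUnit plugin: pressure (dbar) -> depth (m), roughly 1:1."""
    def apply(self, p):
        return Profile(depth=p.depth * 0.993, temperature=p.temperature)

class CsvFrontDesk:
    """FrontDesk plugin: serialise features for an external protocol (toy CSV)."""
    def render(self, profiles):
        return "\n".join(f"{p.depth:.1f},{p.temperature:.2f}" for p in profiles)

def serve(storage, transforms, frontdesk):
    """Wire the plugins together the way the server's pipeline would."""
    feats = storage.read()
    for t in transforms:
        feats = [t.apply(f) for f in feats]
    return frontdesk.render(feats)
```

The point of the design is that each plugin only depends on the shared feature model, so repositories, transformations and protocols can be swapped independently.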

  8. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use Data Assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability, to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  9. Scalability of the parallel CFD simulations of flow past a fluttering airfoil in OpenFOAM

    Directory of Open Access Journals (Sweden)

    Šidlof Petr

    2015-01-01

    Full Text Available The paper is devoted to investigation of unsteady subsonic airflow past an elastically supported airfoil during onset of the flutter instability. Based on the geometry, boundary conditions and airfoil motion data identified from wind-tunnel measurements, a 3D CFD model has been set up in OpenFOAM. The model is based on incompressible Navier-Stokes equations. The turbulence is modelled by the Menter’s k-omega shear stress transport turbulence model. The computational mesh was generated in GridPro, a mesh generator capable of producing highly orthogonal structured C-type meshes. The mesh totals 3.1 million elements. Parallel scalability was measured on a small shared-memory SGI Altix UV 100 supercomputer.

  10. Scalability of the parallel CFD simulations of flow past a fluttering airfoil in OpenFOAM

    Science.gov (United States)

    Šidlof, Petr; Řidký, Václav

    2015-05-01

    The paper is devoted to investigation of unsteady subsonic airflow past an elastically supported airfoil during onset of the flutter instability. Based on the geometry, boundary conditions and airfoil motion data identified from wind-tunnel measurements, a 3D CFD model has been set up in OpenFOAM. The model is based on incompressible Navier-Stokes equations. The turbulence is modelled by the Menter's k-omega shear stress transport turbulence model. The computational mesh was generated in GridPro, a mesh generator capable of producing highly orthogonal structured C-type meshes. The mesh totals 3.1 million elements. Parallel scalability was measured on a small shared-memory SGI Altix UV 100 supercomputer.

  11. Advances in Intelligent Modelling and Simulation Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczyński, Tadeusz

    2012-01-01

    One of the most challenging issues in today's large-scale computational modeling and design is to effectively manage the complex distributed environments, such as computational clouds, grids, ad hoc and P2P networks, operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties are presented to the system at hand in various forms of information that is incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, the sequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions, and the aggregation and sharing of geographically distributed resources in modern large-scale systems. This book presents new ideas, theories, models...

  12. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended into the proposed scalable scenario. A novel interlayer intra/inter prediction is added to reduce the number of bits needed for representation by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with simulcast encoding. The proposed technique achieves a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, these significant rate savings confirm that the proposed method achieves better performance.
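The Bjøntegaard delta rate used to report the gains above is itself a small, well-defined computation: fit log-rate versus quality with a cubic polynomial for each codec, then average the difference over the overlapping quality range. Below is a minimal sketch of the standard method (not the paper's own code); inputs are matched lists of bitrates and PSNR values for an anchor and a test codec.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjøntegaard delta rate in percent: negative means the test codec
    needs fewer bits than the anchor for the same quality."""
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    pa = np.polyfit(psnr_anchor, lr_a, 3)       # cubic fit: log-rate vs PSNR
    pt = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))  # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(pa), np.polyint(pt)     # antiderivatives
    avg_diff = (np.polyval(it, hi) - np.polyval(it, lo)
                - np.polyval(ia, hi) + np.polyval(ia, lo)) / (hi - lo)
    return (10.0 ** avg_diff - 1.0) * 100.0
```

For example, a test codec that achieves every PSNR point at exactly half the anchor's bitrate yields a BD-rate of -50%.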

  13. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    Energy Technology Data Exchange (ETDEWEB)

    Masalma, Yahya [Universidad del Turabo]; Jiao, Yu [ORNL]

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration for tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate the samples. The Sobol sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested, and the obtained results demonstrate the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to scalability and accuracy.
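A toy version of the quasi-Monte Carlo idea can be shown with a 2D Halton sequence as a simple stand-in for Sobol points (Sobol generation itself uses direction numbers and is more involved); both are low-discrepancy sequences that avoid the clustering of pseudo-random samples.

```python
def halton(i, base):
    """Radical inverse of integer i in the given base (van der Corput)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over [0,1]^2 using a
    2D Halton sequence (bases 2 and 3); error decays roughly like
    (log n)^2 / n instead of the 1/sqrt(n) of plain Monte Carlo."""
    total = 0.0
    for i in range(1, n + 1):
        total += f(halton(i, 2), halton(i, 3))
    return total / n

est = qmc_integrate(lambda x, y: x * y, 4096)   # exact integral is 1/4
```

In the parallel setting each rank simply evaluates a disjoint slice of the index range `i`, which is why the method scales so cleanly.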

  14. The Sharing Economy

    DEFF Research Database (Denmark)

    Avital, Michel; Carroll, John M.; Hjalmarsson, Anders

    2015-01-01

    The sharing economy is spreading rapidly worldwide in a number of industries and markets. The disruptive nature of this phenomenon has drawn mixed responses ranging from active conflict to adoption and assimilation. Yet, in spite of the growing attention to the sharing economy, we still do not know much about it. With the abundant enthusiasm about the benefits that the sharing economy can unleash and the weekly reminders about its dark side, further examination is required to determine the potential of the sharing economy while mitigating its undesirable side effects. The panel will join the ongoing debate about the sharing economy and contribute to the discourse with insights about how digital technologies are critical in shaping this turbulent ecosystem. Furthermore, we will define an agenda for future research on the sharing economy as it becomes part of the mainstream society as well...

  15. Game theory for dynamic spectrum sharing cognitive radio

    OpenAIRE

    Raoof, Omar

    2013-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University on 21 June 2010. ‘Game Theory’ is the formal study of conflict and cooperation. The theory is based on a set of tools that have been developed in order to assist with the modelling and analysis of individual, independent decision makers. These actions potentially affect any decisions, which are made by other competitors. Therefore, it is well suited and capable of addressing the various is...

  16. Spectrally efficient switched transmit diversity for spectrum sharing systems

    KAUST Repository

    Bouida, Zied

    2011-09-01

    Under the scenario of an underlay cognitive radio network, we propose in this paper an adaptive scheme using switched transmit diversity and adaptive modulation in order to increase the spectral efficiency of the secondary link. The proposed bandwidth efficient scheme (BES) uses the scan and wait (SWC) combining technique where a transmission occurs only when a branch with an acceptable performance is found, otherwise data is buffered. In our scheme, the modulation constellation size and the used transmit branch are determined to achieve the highest spectral efficiency given the fading channel conditions, the required error rate performance, and a peak interference constraint to the primary receiver. Selected numerical examples show that the BES scheme increases the capacity of the secondary link when compared to an existing switching efficient scheme (SES). This spectral efficiency comes at the expense of an increased average number of switched branches and thus an increased average delay. © 2011 IEEE.
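The scan-and-wait switching logic can be illustrated with a toy rule: scan the transmit branches in order, stop at the first one whose SNR meets the lowest modulation threshold, and load the largest constellation that branch supports; if no branch qualifies, buffer the data. The dB thresholds below are made-up placeholders, not the paper's values, and the interference constraint is assumed already folded into the per-branch SNRs.

```python
# Hypothetical per-constellation SNR thresholds (dB) for the target BER:
# (minimum SNR in dB, M-QAM constellation size)
THRESHOLDS = [(6.0, 4), (12.0, 16), (18.0, 64)]

def scan_and_wait(branch_snrs_db):
    """Scan branches in order; transmit on the first acceptable one with the
    largest constellation it supports. Returns (branch index, M), or None
    when no branch is acceptable (data is buffered until one is)."""
    for i, snr in enumerate(branch_snrs_db):
        supported = [m for t, m in THRESHOLDS if snr >= t]
        if supported:
            return i, max(supported)
    return None  # scan-and-wait: no transmission this slot
```

The trade-off the abstract describes is visible here: waiting for an acceptable branch raises spectral efficiency per transmission but adds buffering delay.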

  17. Adaptive transmission schemes for MISO spectrum sharing systems

    KAUST Repository

    Bouida, Zied

    2013-06-01

    We propose three adaptive transmission techniques aiming to maximize the capacity of a multiple-input-single-output (MISO) secondary system under the scenario of an underlay cognitive radio network. In the first scheme, namely the best antenna selection (BAS) scheme, the antenna maximizing the capacity of the secondary link is used for transmission. We then propose an orthogonal space-time block code (OSTBC) transmission scheme using the Alamouti scheme with transmit antenna selection (TAS), namely the TAS/STBC scheme. The performance improvement offered by this scheme comes at the expense of increased complexity and delay when compared to the BAS scheme. As a compromise between these schemes, we propose a hybrid scheme using BAS when only one antenna verifies the interference condition and TAS/STBC when two or more antennas are eligible for communication. We first derive closed-form expressions for the statistics of the received signal-to-interference-and-noise ratio (SINR) at the secondary receiver (SR). These results are then used to analyze the performance of the proposed techniques in terms of the average spectral efficiency, the average number of transmit antennas, and the average bit error rate (BER). This performance is then illustrated via selected numerical examples. © 2013 IEEE.
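The BAS rule can be sketched in a simplified underlay model: per antenna, cap the transmit power by the primary interference constraint, then pick the antenna giving the highest Shannon capacity. All symbols and the flat-fading model below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def best_antenna(h_sec, h_int, p_max, i_max, noise=1.0):
    """Best antenna selection (BAS) sketch for an underlay MISO secondary link.
    h_sec: channel gains to the secondary receiver, one per antenna.
    h_int: channel gains to the primary receiver (interference channel).
    p_max: transmit power budget; i_max: peak interference allowed at the PR."""
    # Interference constraint: power * |h_int|^2 <= i_max, capped by p_max.
    power = np.minimum(p_max, i_max / np.abs(h_int) ** 2)
    # Per-antenna Shannon capacity of the secondary link.
    cap = np.log2(1.0 + power * np.abs(h_sec) ** 2 / noise)
    k = int(np.argmax(cap))
    return k, float(cap[k])
```

The hybrid scheme in the abstract would then fall back to this rule when only one antenna satisfies the interference condition, and use TAS/STBC otherwise.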

  18. Adaptive discrete rate and power transmission for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.

    2012-04-01

    In this paper we develop a framework for optimizing the performance of the secondary link in terms of the average spectral efficiency assuming quantized channel state information (CSI) of the secondary and the secondary-to-primary interference channels available at the secondary transmitter. We consider the problem under the constraints of maximum average interference power levels at the primary receiver. We develop a sub-optimal computationally efficient iterative algorithm for finding the optimal CSI quantizers as well as the discrete power and rate employed at the cognitive transmitter for each quantized CSI level so as to maximize the average spectral efficiency. We show via analysis and simulations that the proposed algorithm converges for Rayleigh fading channels. Our numerical results give the number of bits required to sufficiently represent the CSI to achieve almost the maximum average spectral efficiency attained using full knowledge of the CSI. © 2012 IEEE.

  19. Inferring demographic history from a spectrum of shared haplotype lengths

    DEFF Research Database (Denmark)

    Harris, Kelley; Nielsen, Rasmus

    2013-01-01

    There has been much recent excitement about the use of genetics to elucidate ancestral history and demography. Whole genome data from humans and other species are revealing complex stories of divergence and admixture that were left undiscovered by previous smaller data sets. A central challenge...

  20. Shared governance. Sharing power and opportunity.

    Science.gov (United States)

    Prince, S B

    1997-03-01

    Responding to an enlarged span of control and an ever changing health care environment, the author describes the implementation of a unit-based shared governance model. Through study, literature review, and team consensus, a new management style emerged. Using Rosabeth Kanter's framework for work effectiveness, the unit governance structure was transformed. The process, progress, and outcomes are described, analyzed, and celebrated.

  1. Current parallel I/O limitations to scalable data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability which we have encountered when applying our otherwise optimally scalable parallel statistical analysis tool kit to large data sets distributed across the parallel file system of the current premier DOE computational facility. This report describes our study to evaluate the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics tool kit [PTBM11]. To this end, we tested it using the Jaguar-pf DOE/ORNL peta-scale platform on large combustion simulation data under a variety of process counts and domain decomposition scenarios. In this report we have recalled the foundations of the parallel statistical analysis tool kit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows and achieving optimal design for scalable parallel implementations. We have briefly reviewed those earlier results and publications which allow us to conclude that we have achieved both goals. However, in this report we have further established that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. In order to attempt to address these parallel I/O limitations, we will investigate the use of the Adaptable IO System (ADIOS) [LZL+10] to improve I/O performance, while maintaining flexibility for a variety of IO options, such as MPI IO and POSIX IO. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf. Simulation code being developed on these systems will also use ADIOS to output the data, thereby making it easier for other systems, such as ours, to

  2. Secure association rule sharing

    OpenAIRE

    Oliveira, Stanley R. de M.; Zaïane, Osmar R.; Saygın, Yücel

    2004-01-01

    The sharing of association rules is often beneficial in industry, but requires privacy safeguards. One may decide to disclose only part of the knowledge and conceal strategic patterns which we call restrictive rules. These restrictive rules must be protected before sharing since they are paramount for strategic decisions and need to remain private. To address this challenging problem, we propose a unified framework for protecting sensitive knowledge before sharing. This framework encompasses:...

  3. Efficiency in Shared Services

    OpenAIRE

    Prachýl, Lukáš

    2010-01-01

    The thesis describes and analyzes shared services organizations as a management tool for achieving efficiency in organizations' processes. The paper builds on established theoretical principles, enhances them with up-to-date insights on the current situation and development, and creates a valuable knowledge base on shared services organizations. Strong emphasis is put on concrete means by which efficiency can be achieved. Major relevant topics such as reasons for shared services, people man...

  4. Factors Impacting Knowledge Sharing

    DEFF Research Database (Denmark)

    Schulzmann, David; Slepniov, Dmitrij

    The purpose of this paper is to examine various factors affecting knowledge sharing at the R&D center of a Western MNE in China. The paper employs qualitative methodology and is based on the action research and case study research techniques. The findings of the paper advance our understanding about factors that affect knowledge sharing. The main emphasis is given to the discussion on how to improve knowledge sharing in global R&D organizations.

  5. A Data Sharing Story

    Directory of Open Access Journals (Sweden)

    Mercè Crosas

    2012-01-01

    Full Text Available From the early days of modern science through this century of Big Data, data sharing has enabled some of the greatest advances in science. In the digital age, technology can facilitate more effective and efficient data sharing and preservation practices, and provide incentives for making data easily accessible among researchers. At the Institute for Quantitative Social Science at Harvard University, we have developed an open-source software to share, cite, preserve, discover and analyze data, named the Dataverse Network. We share here the project’s motivation, its growth and successes, and likely evolution.

  6. Natural product synthesis in the age of scalability.

    Science.gov (United States)

    Kuttruff, Christian A; Eastgate, Martin D; Baran, Phil S

    2014-04-01

    The ability to procure useful quantities of a molecule by simple, scalable routes is emerging as an important goal in natural product synthesis. Approaches to molecules that yield substantial material enable collaborative investigations (such as SAR studies or eventual commercial production) and inherently spur innovation in chemistry. As such, when evaluating a natural product synthesis, scalability is becoming an increasingly important factor. In this Highlight, we discuss recent examples of natural product synthesis from our laboratory and others, where the preparation of gram-scale quantities of a target compound or a key intermediate allowed for a deeper understanding of biological activities or enabled further investigational collaborations.

  7. Providing scalable system software for high-end simulations

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D. [Sandia National Labs., Albuquerque, NM (United States)]

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  8. Scalable and Hybrid Radio Resource Management for Future Wireless Networks

    DEFF Research Database (Denmark)

    Mino, E.; Luo, Jijun; Tragos, E.

    2007-01-01

    The concept of a ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local area to wide area wireless networks. The integration in a unique radio system of cellular and local area type networks supposes a great advantage for the final user and for the operator, compared with the current situation, with disconnected systems, usually with different subscriptions, radio interfaces and terminals. To be a ubiquitous wireless system, the IST project WINNER II has defined three system modes. This contribution describes a proposal for scalable and hybrid radio resource management to efficiently integrate the different WINNER system modes.

  9. Scalability limitations of VIA-based technologies in supporting MPI

    Energy Technology Data Exchange (ETDEWEB)

    BRIGHTWELL,RONALD B.; MACCABE,ARTHUR BERNARD

    2000-04-17

    This paper analyzes the scalability limitations of networking technologies based on the Virtual Interface Architecture (VIA) in supporting the runtime environment needed for an implementation of the Message Passing Interface. The authors present an overview of the important characteristics of VIA and an overview of the runtime system being developed as part of the Computational Plant (Cplant) project at Sandia National Laboratories. They discuss the characteristics of VIA that prevent implementations based on this architecture from meeting the scalability and performance requirements of Cplant.

  10. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide, and they produce large volumes of data. Thus, it is important for smart meter data management and analytics systems to process petabytes of data. Benchmarking and testing of these systems require scalable data; however, it can be challenging to obtain large data sets due to privacy and/or data protection regulations. This paper presents a scalable smart meter data generator using Spark that can generate realistic data sets. The proposed data generator is based on a supervised machine learning method that can generate data of any size...
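The generator idea can be shown with a deliberately tiny stand-in: repeat a learned 24-value daily load profile (the supervised model's output in the real system) with multiplicative noise, one reading per hour. The real generator runs on Spark; everything below, including all names and parameters, is illustrative.

```python
import random

def generate_readings(days, profile, noise=0.05, seed=42):
    """Toy smart-meter data generator: 'profile' is a list of 24 hourly
    consumption values (stand-in for the paper's learned model); each
    emitted reading perturbs the profile by +/- 'noise' (fraction)."""
    rng = random.Random(seed)          # seeded for reproducible test data
    readings = []
    for _day in range(days):
        for hour in range(24):
            readings.append(profile[hour] * (1.0 + rng.uniform(-noise, noise)))
    return readings
```

Scaling this up is embarrassingly parallel, which is what makes a Spark implementation natural: each meter-day can be generated independently on a different executor.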

  11. Scalable Track Initiation for Optical Space Surveillance

    Science.gov (United States)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    least cubic and commonly quartic or higher. Therefore, practical implementations require attention to the scalability of the algorithms, when one is dealing with the very large number of observations from large surveillance telescopes. We address two broad categories of algorithms. The first category includes and extends the classical methods of Laplace and Gauss, as well as the more modern method of Gooding, in which one solves explicitly for the apparent range to the target in terms of the given data. In particular, recent ideas offered by Mortari and Karimi allow us to construct a family of range-solution methods that can be scaled to many processors efficiently. We find that the orbit solutions (data association hypotheses) can be ranked by means of a concept we call persistence, in which a simple statistical measure of likelihood is based on the frequency of occurrence of combinations of observations in consistent orbit solutions. Of course, range-solution methods can be expected to perform poorly if the orbit solutions of most interest are not well conditioned. The second category of algorithms addresses this difficulty. Instead of solving for range, these methods attach a set of range hypotheses to each measured line of sight. Then all pair-wise combinations of observations are considered and the family of Lambert problems is solved for each pair. These algorithms also have polynomial complexity, though now the complexity is quadratic in the number of observations and also quadratic in the number of range hypotheses. We offer a novel type of admissible-region analysis, constructing partitions of the orbital element space and deriving rigorous upper and lower bounds on the possible values of the range for each partition. This analysis allows us to parallelize with respect to the element partitions and to reduce the number of range hypotheses that have to be considered in each processor simply by making the partitions smaller. 
Naturally, there are many ways to

  12. Radio Access Sharing Strategies for Multiple Operators in Cellular Networks

    DEFF Research Database (Denmark)

    Popovska Avramova, Andrijana; Iversen, Villy Bæk

    2015-01-01

    Mobile operators are moving towards sharing network capacity in order to reduce capital and operational expenditures, while meeting the increasing demand for mobile broadband data services. Radio access network sharing is a promising technique that leads to a reduced number of physical base station deployments (required for coverage enhancement), increased base station utilization, and reduced overall power consumption. Today, network sharing in the radio access part is passive and limited to cell sites. With the introduction of Cloud Radio Access Network and the adoption of Software Defined Networking in the radio access network, the possibility of sharing baseband processing and radio spectrum becomes an important aspect of network sharing. This paper investigates strategies for active sharing of radio access among multiple operators, and analyses the individual benefits depending on the sharing degree...

  13. Millennials and the Sharing Economy

    DEFF Research Database (Denmark)

    Ranzini, Giulia; Newlands, Gemma; Anselmi, Guido

    Report from the EU H2020 Research Project Ps2Share: Participation, Privacy, and Power in the Sharing Economy.

  14. Parametric investigation of scalable tactile sensors

    Science.gov (United States)

    Saadatzi, Mohammad Nasser; Yang, Zhong; Baptist, Joshua R.; Sahasrabuddhe, Ritvij R.; Wijayasinghe, Indika B.; Popa, Dan O.

    2017-05-01

    In the near future, robots and humans will share the same environment and perform tasks cooperatively. For intuitive, safe, and reliable physical human-robot interaction (pHRI), sensorized robot skins for tactile measurements of contact are necessary. In a previous study, we presented skins consisting of strain gauge arrays encased in silicone encapsulants. Although these structures could measure normal forces applied directly onto the sensing elements, they also exhibited blind spots and response asymmetry to certain loading patterns. This study presents a parametric investigation of a piezoresistive polymeric strain gauge that exhibits a symmetric omniaxial response thanks to its novel star-shaped structure. This strain gauge relies on the use of gold micro-patterned star-shaped structures with a thin layer of PEDOT:PSS, which is a flexible polymer with piezoresistive properties. In this paper, the sensor is first modeled and comprehensively analyzed in the finite-element simulation environment COMSOL. Simulations include stress-strain loading for a variety of structure parameters such as gauge lengths, widths, and spacing, as well as multiple load locations relative to the gauge. Subsequently, sensors with optimized configurations obtained through simulations were fabricated using cleanroom photolithographic and spin-coating processes, and then experimentally tested. Results show a trend-wise agreement between experiments and simulations.

  15. Phenomenology of experiential sharing

    DEFF Research Database (Denmark)

    León, Felipe; Zahavi, Dan

    2016-01-01

    The chapter explores the topic of experiential sharing by drawing on the early contributions of the phenomenologists Alfred Schutz and Gerda Walther. It is argued that both Schutz and Walther support, from complementary perspectives, an approach to experiential sharing that has tended to be overl...

  16. Satisfaction and 'comparison sharing'

    DEFF Research Database (Denmark)

    Amilon, Anna

    2009-01-01

    Despite the high degree of flexibility and generosity in Sweden’s parental leave program, one fifth of parents are not satisfied with the sharing of parental leave. This paper investigates whether ‘comparison sharing’, the sharing of parental leave by other comparable couples, influences the prob...

  17. Mobile energy sharing futures

    DEFF Research Database (Denmark)

    Worgan, Paul; Knibbe, Jarrod; Plasencia, Diego Martinez

    2016-01-01

    We foresee a future where energy in our mobile devices can be shared and redistributed to suit our current task needs. Many of us are beginning to carry multiple mobile devices and we seek to re-evaluate the traditional view of a mobile device as only accepting energy. In our vision, we can...... sharing futures....

  18. Facilitating Knowledge Sharing

    DEFF Research Database (Denmark)

    Holdt Christensen, Peter

    This paper argues that knowledge sharing can be conceptualized as different situations of exchange in which individuals relate to each other in different ways, involving different rules, norms and traditions of reciprocity regulating the exchange. The main challenge for facilitating knowledge sharing is to ensure that the exchange is seen as equitable for the parties involved, and by viewing the problems of knowledge sharing as motivational problems situated in different organizational settings, the paper explores how knowledge exchange can be conceptualized as going on in four... and the intermediaries regulating the exchange, and facilitating knowledge sharing should therefore be viewed as a continuum of practices under the influence of opportunistic behaviour, obedience or organizational citizenship behaviour. Keywords: Knowledge sharing, motivation, organizational settings, situations...

  19. Exploring the Sharing Economy

    DEFF Research Database (Denmark)

    Netter, Sarah

    Despite the growing interest on the part of proponents and opponents - ranging from business, civil society, media, to policy-makers alike - there is still limited knowledge about the working mechanisms of the sharing economy. The thesis is dedicated to explore this understudied phenomenon...... and to provide a more nuanced understanding of the micro- and macro-level tensions that characterize the sharing economy. This thesis consists of four research papers, each using different literature, methodology, and data sets. The first paper investigates how the sharing economy is diffused and is ‘talked......-level tensions experience by sharing platforms by looking at the case of mobile fashion reselling and swapping markets. The final paper combines the perspectives of different sharing economy stakeholders and outlines some of the micro and macro tensions arising in and influencing the organization of these multi...

  20. Quicksilver: Middleware for Scalable Self-Regenerative Systems

    Science.gov (United States)

    2006-04-01

standard best practice in the area, and hence helped us identify problems that can be justified in terms of real user needs. Our own group may write a...semantics, generally lack efficient, scalable implementations. Systems approaches usually lack a precise formal specification, limiting the

  1. Scalable learning of probabilistic latent models for collaborative filtering

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2015-01-01

    Collaborative filtering has emerged as a popular way of making user recommendations, but with the increasing sizes of the underlying databases scalability is becoming a crucial issue. In this paper we focus on a recently proposed probabilistic collaborative filtering model that explicitly...

  2. PSOM2—partitioning-based scalable ontology matching using ...

    Indian Academy of Sciences (India)

    B Sathiya

    2017-11-16

    Nov 16, 2017 ... Abstract. The growth and use of semantic web has led to a drastic increase in the size, heterogeneity and number of ontologies that are available on the web. Correspondingly, scalable ontology matching algorithms that will eliminate the heterogeneity among large ontologies have become a necessity.

  3. Cognition-inspired Descriptors for Scalable Cover Song Retrieval

    NARCIS (Netherlands)

    van Balen, J.M.H.; Bountouridis, D.; Wiering, F.; Veltkamp, R.C.

    2014-01-01

    Inspired by representations used in music cognition studies and computational musicology, we propose three simple and interpretable descriptors for use in mid- to high-level computational analysis of musical audio and applications in content-based retrieval. We also argue that the task of scalable

  4. Scalable Directed Self-Assembly Using Ultrasound Waves

    Science.gov (United States)

    2015-09-04

    at Aberdeen Proving Grounds (APG), to discuss a possible collaboration. The idea is to integrate the ultrasound directed self- assembly technique ...difference between the ultrasound technology studied in this project, and other directed self-assembly techniques is its scalability and...deliverable: A scientific tool to predict particle organization, pattern, and orientation, based on the operating and design parameters of the ultrasound

  5. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    Science.gov (United States)

Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
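
    As an illustration of the core idea, here is a minimal pure-Python sketch of the Grassmann Average (not the authors' optimized implementation; function names and the toy 2-D data are assumptions): each zero-mean observation is treated as a representative of a one-dimensional subspace, and the average direction is found by iteratively flipping signs so that every point aligns with the running estimate.

```python
import math
import random

def grassmann_average(points, iters=20, seed=0):
    """Minimal sketch of the Grassmann Average (GA), assuming zero-mean data:
    each observation spans a 1-D subspace; average the subspaces by flipping
    each point's sign to align with the running direction estimate."""
    rng = random.Random(seed)
    dim = len(points[0])
    q = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(c * c for c in q))
    q = [c / norm for c in q]
    for _ in range(iters):
        acc = [0.0] * dim
        for x in points:
            dot = sum(a * b for a, b in zip(x, q))
            sign = 1.0 if dot >= 0.0 else -1.0  # pick the representative facing q
            for j in range(dim):
                acc[j] += sign * x[j]           # norm-weighted average of aligned points
        norm = math.sqrt(sum(c * c for c in acc)) or 1.0
        q = [c / norm for c in acc]
    return q

# Zero-mean points stretched along the direction (2, 1) plus small noise:
# the GA direction should recover (2, 1) up to sign.
random.seed(1)
pts = []
for _ in range(500):
    t = random.gauss(0.0, 1.0)
    pts.append((2.0 * t + random.gauss(0.0, 0.1),
                1.0 * t + random.gauss(0.0, 0.1)))
q = grassmann_average(pts)
```

    Per the abstract, the trimmed variant (TGA) would replace the plain average in the inner loop with a trimmed average so that pixel outliers are rejected.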

  6. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/O's. The low

  7. Coilable Crystalline Fiber (CCF) Lasers and their Scalability

    Science.gov (United States)

    2014-03-01

highly power scalable, nearly diffraction-limited output laser. 37 References 1. Snitzer, E. Optical Maser Action of Nd3+ in A Barium Crown Glass ...Electron Devices Directorate Helmuth Meissner Onyx Optics Approved for public release; distribution...lasers, but their composition (glass) poses significant disadvantages in pump absorption, gain, and thermal conductivity. All-crystalline fiber lasers

  8. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms were proposed to solve this problem, at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  9. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    Full Text Available This paper addresses the problem of a scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  10. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…
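
    To make the bootstrap alternative concrete, here is a hypothetical sketch for binary item scores (function names, the toy scalogram, and the choice of a percentile interval are illustrative, not from the paper): Loevinger's H is the ratio of summed inter-item covariances to their maxima given the item popularities, and a percentile bootstrap resamples examinees with replacement to approximate its sampling distribution.

```python
import random

def coefficient_h(data):
    """Loevinger's scalability coefficient H for binary item scores
    (rows = examinees, columns = items): the ratio of the summed
    inter-item covariances to their maxima given the item popularities."""
    n, k = len(data), len(data[0])
    p = [sum(row[j] for row in data) / n for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            pij = sum(row[i] * row[j] for row in data) / n
            num += pij - p[i] * p[j]
            lo, hi = sorted((p[i], p[j]))
            den += lo * (1.0 - hi)  # maximum covariance under fixed margins
    return num / den if den else 0.0

def bootstrap_h(data, reps=200, seed=0):
    """Percentile bootstrap: resample examinees with replacement and
    return an approximate 95% interval for H."""
    rng = random.Random(seed)
    n = len(data)
    hs = sorted(coefficient_h([data[rng.randrange(n)] for _ in range(n)])
                for _ in range(reps))
    return hs[int(0.025 * reps)], hs[int(0.975 * reps)]

# A perfect Guttman scalogram rank orders examinees exactly, so H = 1.
guttman = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
h = coefficient_h(guttman)
```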

  11. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite that has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to have a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data that were collected showed distinct differences in the gait dynamics. The data were used to perform the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that gait on the Ossur Total Knee was more asymmetric overall than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion that caused a large step time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements in adapting to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to better emulate the dynamics of the subject. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  12. Broadband and scalable mobile satellite communication system for future access networks

    Science.gov (United States)

    Ohata, Kohei; Kobayashi, Kiyoshi; Nakahira, Katsuya; Ueba, Masazumi

    2005-07-01

Due to recent market trends, NTT has begun research into next generation satellite communication systems, such as broadband and scalable mobile communication systems. One service application objective is to provide broadband Internet access for transportation systems, temporary broadband access networks and telemetry to remote areas. While these are niche markets, the total amount of capacity should be significant. We set a 1-Gb/s total transmission capacity as our goal. Our key concern is the system cost, which means that the system should be a unified system with diversified services and not tailored to each application. As satellites account for a large portion of the total system cost, we set the target satellite size as a small, one-ton class dry mass with a 2-kW class payload power. In addition to the payload power and weight, the mobile satellite's frequency band is extremely limited. Therefore, we need to develop innovative technologies that will reduce the weight and maximize spectrum and power efficiency. Another challenge is the need for the system to handle a dynamic range of up to 50 dB and the wide range of data rates required by the different applications. This paper describes the key communication system technologies: the frequency reuse strategy, multiplexing scheme, resource allocation scheme, and QoS management algorithm to ensure excellent spectrum efficiency and support a variety of services and quality requirements in the mobile environment.

  13. Global resource sharing

    CERN Document Server

    Frederiksen, Linda; Nance, Heidi

    2011-01-01

    Written from a global perspective, this book reviews sharing of library resources on a global scale. With expanded discovery tools and massive digitization projects, the rich and extensive holdings of the world's libraries are more visible now than at any time in the past. Advanced communication and transmission technologies, along with improved international standards, present a means for the sharing of library resources around the globe. Despite these significant improvements, a number of challenges remain. Global Resource Sharing provides librarians and library managers with a comprehensive

  14. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of the multimedia scalable video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper is focused on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SVC...

  15. Sharing resources@CERN

    CERN Multimedia

    Maximilien Brice

    2002-01-01

    The library is launching a 'sharing resources@CERN' campaign, aiming to increase the library's utility by including the thousands of books bought by individual groups at CERN. This will improve sharing of information among CERN staff and users. Photo 01: L. to r. Eduardo Aldaz, from the PS division, Corrado Pettenati, Head Librarian, and Isabel Bejar, from the ST division, read their divisional copies of the same book.

  16. The Spanish Sharing Rule

    OpenAIRE

    Bernarda Zamora

    2003-01-01

    In this paper we estimate the intrahousehold distribution of household's private expenditures between men and women (the sharing rule) in two types of Spanish households: those in which the woman works and those in which the woman does not work. The results for working women are parallel to those obtained for other countries which indicate a proportionally higher transfer from the woman to the man than from the man to the woman, such that the proportion of the woman's share decreases both wit...

  17. Bonobos Share with Strangers

    OpenAIRE

    Jingzhi Tan; Brian Hare

    2013-01-01

Humans are thought to possess a unique proclivity to share with others – including strangers. This puzzling phenomenon has led many to suggest that sharing with strangers originates from human-unique language, social norms, warfare and/or cooperative breeding. However, bonobos, our closest living relative, are highly tolerant and, in the wild, are capable of having affiliative interactions with strangers. In four experiments, we therefore examined whether bonobos will voluntarily donate food ...

  18. [The shared nursing function].

    Science.gov (United States)

    Fleury, Cynthia

    The Chair of Philosophy at Hôtel-Dieu hospital in Paris, is a place for the sharing of knowledge and recognition. It provides a place where the subjective, institutional and political dimension of care can be considered, by all stakeholders: patients, nurses, families and citizens. The aim is to invent a shared nursing function. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  19. Sharing big biomedical data

    OpenAIRE

Toga, Arthur W.; Dinov, Ivo D.

    2015-01-01

    Background The promise of Big Biomedical Data may be offset by the enormous challenges in handling, analyzing, and sharing it. In this paper, we provide a framework for developing practical and reasonable data sharing policies that incorporate the sociological, financial, technical and scientific requirements of a sustainable Big Data dependent scientific community. Findings Many biomedical and healthcare studies may be significantly impacted by using large, heterogeneous and incongruent data...

  20. Evaluation of a Connectionless NoC for a Real-Time Distributed Shared Memory Many-Core System

    NARCIS (Netherlands)

    Rutgers, J.H.; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2012-01-01

    Real-time embedded systems like smartphones tend to comprise an ever increasing number of processing cores. For scalability and the need for guaranteed performance, the use of a connection-oriented network-on-chip (NoC) is advocated. Furthermore, a distributed shared memory architecture is preferred

  1. Information partnerships--shared data, shared scale.

    Science.gov (United States)

    Konsynski, B R; McFarlan, F W

    1990-01-01

How can one company gain access to another's resources or customers without merging ownership, management, or plotting a takeover? The answer is found in new information partnerships, enabling diverse companies to develop strategic coalitions through the sharing of data. The key to cooperation is a quantum improvement in the hardware and software supporting relational databases: new computer speeds, cheaper mass-storage devices, the proliferation of fiber-optic networks, and networking architectures. Information partnerships mean that companies can distribute the technological and financial exposure that comes with huge investments. For the customer's part, partnerships inevitably lead to greater simplification on the desktop and more common standards around which vendors have to compete. The most common types of partnership are: joint marketing partnerships, such as American Airlines' award of frequent flyer miles to customers who use Citibank's credit card; intraindustry partnerships, such as the insurance value-added network service (which links insurance and casualty companies to independent agents); customer-supplier partnerships, such as Baxter Healthcare's electronic channel to hospitals for medical and other equipment; and IT vendor-driven partnerships, exemplified by ESAB (a European welding supplies and equipment company), whose expansion strategy was premised on a technology platform offered by an IT vendor. Partnerships that succeed have shared vision at the top, reciprocal skills in information technology, concrete plans for an early success, persistence in the development of usable information for all partners, coordination on business policy, and a new and imaginative business architecture.

  2. Regulating the sharing economy

    Directory of Open Access Journals (Sweden)

    Kristofer Erickson

    2016-06-01

Full Text Available In this introductory essay, we explore definitions of the ‘sharing economy’, a concept indicating both social (relational, communitarian) and economic (allocative, profit-seeking) aspects which appear to be in tension. We suggest combining the social and economic logics of the sharing economy to focus on the central features of network enabled, aggregated membership in a pool of offers and demands (for goods, services, creative expressions). This definition of the sharing economy distinguishes it from other related peer-to-peer and collaborative forms of production. Understanding the social and economic motivations for and implications of participating in the sharing economy is important to its regulation. Each of the papers in this special issue contributes to knowledge by linking the social and economic aspects of sharing economy practices to regulatory norms and mechanisms. We conclude this essay by suggesting future research to further clarify and render intelligible the sharing economy, not as a contradiction in terms but as an empirically observable realm of socio-economic activity.

  3. 47 CFR 27.1307 - Spectrum use in the network.

    Science.gov (United States)

    2010-10-01

47 CFR 27.1307 - Spectrum use in the network. (a) Spectrum use. The shared wireless broadband network developed by the 700 MHz Public/Private... from the primary public safety operations in the 763-768 MHz and 793-798 MHz bands. The network...

  4. Mabuchi spectrum from the minisuperspace

    Directory of Open Access Journals (Sweden)

    Corinne de Lacroix

    2016-07-01

Full Text Available It was recently shown that other functionals contribute to the effective action for the Liouville field when considering massive matter coupled to two-dimensional gravity in the conformal gauge. The most important of these new contributions corresponds to the Mabuchi functional. We propose a minisuperspace action that reproduces the main features of the Mabuchi action in order to describe the dynamics of the zero-mode. We show that the associated Hamiltonian coincides with the (quantum mechanical) Liouville Hamiltonian. As a consequence the Liouville theory and our model of the Mabuchi theory both share the same spectrum, eigenfunctions and – in this approximation – correlation functions.

  5. An Efficient, Scalable and Robust P2P Overlay for Autonomic Communication

    Science.gov (United States)

    Li, Deng; Liu, Hui; Vasilakos, Athanasios

The term Autonomic Communication (AC) refers to self-managing systems which are capable of supporting self-configuration, self-healing and self-optimization. However, information reflection and collection, lack of centralized control, non-cooperation and so on are just some of the challenges within AC systems. Since many self-* properties (e.g. self-configuration, self-optimization, self-healing, and self-protecting) are achieved by a group of autonomous entities that coordinate in a peer-to-peer (P2P) fashion, it has opened the door to migrating research techniques from P2P systems. P2P's meaning can be better understood with a set of key characteristics similar to AC: decentralized organization, self-organizing nature (i.e. adaptability), resource sharing and aggregation, and fault-tolerance. However, not all P2P systems are compatible with AC. Unstructured systems are designed more specifically than structured systems for the heterogeneous Internet environment, where the nodes' persistence and availability are not guaranteed. Motivated by the challenges in AC and based on comprehensive analysis of popular P2P applications, three correlative standards for evaluating the compatibility of a P2P system with AC are presented in this chapter. According to these standards, a novel Efficient, Scalable and Robust (ESR) P2P overlay is proposed. Differing from current structured and unstructured, or meshed and tree-like P2P overlays, the ESR is a whole new three-dimensional structure that improves the efficiency of routing, while information exchanges take place among immediate neighbors using local information to make the system scalable and fault-tolerant. Furthermore, rather than a complex game theory or incentive mechanism, a simple but effective punishment mechanism is presented, based on a new ID structure which can guarantee the continuity of each node's record in order to discourage negative behavior in an autonomous environment such as AC.

  6. Secure data sharing in public cloud

    Science.gov (United States)

    Venkataramana, Kanaparti; Naveen Kumar, R.; Tatekalva, Sandhya; Padmavathamma, M.

    2012-04-01

Secure multi-party protocols have been proposed for entities (organizations or individuals) that don't fully trust each other to share sensitive information. Many types of entities need to collect, analyze, and disseminate data rapidly and accurately, without exposing sensitive information to unauthorized or untrusted parties. Solutions based on secure multiparty computation (SMC) guarantee privacy and correctness, at an extra communication cost (too costly to be practical) and computation cost. The high overhead motivates us to extend SMC to the cloud environment, which provides large computation and communication capacity and allows SMC to be used between multiple clouds (private, public or hybrid). A cloud may encompass many high-capacity servers which act as hosts that participate in computation (IaaS and PaaS) toward the final result, controlled by a Cloud Trusted Authority (CTA) for secret sharing within the cloud. The communication between two clouds is controlled by a High Level Trusted Authority (HLTA), which is one of the hosts in a cloud and provides MgaaS (Management as a Service). Due to the high security risk in clouds, the HLTA generates and distributes public and private keys using the Carmichael-R-Prime-RSA algorithm for the exchange of private data in SMC between itself and the clouds. Within a cloud, the CTA creates a group key for secure communication between the hosts, based on keys sent by the HLTA, for the exchange of intermediate values and shares in the computation of the final result. Since this scheme is extended to clouds (due to their high availability and scalability, which increase computation power), it is possible to implement SMC practically for privacy preservation in data mining at low cost for the clients.
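
    The secret-sharing step mentioned in the abstract can be illustrated with a minimal additive scheme over a prime field (a generic textbook construction, not the paper's Carmichael-R-Prime-RSA protocol; names and the modulus are illustrative): each host holds one share, any subset of fewer than all shares reveals nothing about the secret, and hosts can add shares locally so that a sum is computed without exposing the inputs.

```python
import random

P = 2 ** 61 - 1  # prime field modulus (illustrative choice)

def share(secret, n, rng=random):
    """Split a secret into n additive shares modulo P: the first n-1 are
    uniform random, and the last is chosen so that all n sum to the secret."""
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo P."""
    return sum(shares) % P

# Additive homomorphism: each host adds its shares of a and b locally,
# so the sum a + b is reconstructed without any host seeing a or b.
a_shares = share(123, 3)
b_shares = share(456, 3)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
```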

  7. Scalable graphene coatings for enhanced condensation heat transfer.

    Science.gov (United States)

    Preston, Daniel J; Mafra, Daniela L; Miljkovic, Nenad; Kong, Jing; Wang, Evelyn N

    2015-05-13

    Water vapor condensation is commonly observed in nature and routinely used as an effective means of transferring heat with dropwise condensation on nonwetting surfaces exhibiting heat transfer improvement compared to filmwise condensation on wetting surfaces. However, state-of-the-art techniques to promote dropwise condensation rely on functional hydrophobic coatings that either have challenges with chemical stability or are so thick that any potential heat transfer improvement is negated due to the added thermal resistance of the coating. In this work, we show the effectiveness of ultrathin scalable chemical vapor deposited (CVD) graphene coatings to promote dropwise condensation while offering robust chemical stability and maintaining low thermal resistance. Heat transfer enhancements of 4× were demonstrated compared to filmwise condensation, and the robustness of these CVD coatings was superior to typical hydrophobic monolayer coatings. Our results indicate that graphene is a promising surface coating to promote dropwise condensation of water in industrial conditions with the potential for scalable application via CVD.

  8. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies the four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation, • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets, • Biomedical is a vast field with select subtopics addressed from scanning methodologies to structural applications to biological applications, • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  9. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  10. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide.

    Science.gov (United States)

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-03-08

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits), and nanoscale sensors based on individual color centers. Toward this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objective and lensed-fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. Subsequent lithographic process forms 800 nm tall nanopillars with 400-1400 nm diameters. We obtain high collection efficiency of up to 22 kcounts/s optical saturation rates from a single silicon vacancy center while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for scalable quantum photonics architecture relying on single photon sources and qubits.

  11. Scalable metagenomic taxonomy classification using a reference genome database.

    Science.gov (United States)

    Ames, Sasha K; Hysom, David A; Gardner, Shea N; Lloyd, G Scott; Gokhale, Maya B; Allen, Jonathan E

    2013-09-15

Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take contents of the sample. Software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat. Contact: allen99@llnl.gov. Supplementary data are available at Bioinformatics online.
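
    The off-line index idea can be sketched as a toy k-mer lookup (illustrative only; the genomes, function names, and the plain dictionary are assumptions, and the actual LMAT software uses far more compact structures that store a lowest-common-ancestor taxon per k-mer): build a map from reference k-mers to taxa once, then classify each read by majority vote over its k-mers.

```python
from collections import Counter

def build_index(genomes, k=4):
    """Offline step: map every length-k substring of each reference genome
    to the set of taxa containing it (toy stand-in for a taxonomy/genome
    index; real tools store a lowest common ancestor per k-mer)."""
    index = {}
    for taxon, seq in genomes.items():
        for i in range(len(seq) - k + 1):
            index.setdefault(seq[i:i + k], set()).add(taxon)
    return index

def classify(read, index, k=4):
    """Online step: each k-mer of the read votes for its matching taxa;
    return the majority taxon, or None if nothing matches."""
    votes = Counter()
    for i in range(len(read) - k + 1):
        for taxon in index.get(read[i:i + k], ()):
            votes[taxon] += 1
    return votes.most_common(1)[0][0] if votes else None

# Hypothetical two-taxon reference database built once, off-line.
genomes = {
    "taxonA": "ACGTACGTTTGACCA",
    "taxonB": "GGCCGGTTAACCGGA",
}
index = build_index(genomes)
```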

  12. Potential of Scalable Vector Graphics (SVG) for Ocean Science Research

    Science.gov (United States)

    Sears, J. R.

    2002-12-01

    Scalable Vector Graphics (SVG), a graphic format encoded in Extensible Markup Language (XML), is a recent W3C standard. SVG is text-based and platform-neutral, allowing interoperability and a rich array of features that offer significant promise for the presentation and publication of ocean and earth science research. This presentation (a) provides a brief introduction to SVG with real-world examples; (b) reviews browsers, editors, and other SVG tools; and (c) talks about some of the more powerful capabilities of SVG that might be important for ocean and earth science data presentation, such as searchability, animation and scripting, interactivity, accessibility, dynamic SVG, layers, scalability, SVG Text, SVG Audio, server-side SVG, and embedding metadata and data. A list of useful SVG resources is also given.

  13. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer

    2013-03-01

    Full Text Available The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
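
    The triple-plus-pattern model behind RDF and SPARQL can be sketched without any RDF tooling; the sensor names and predicates below are invented, and a real deployment would use an RDF store and SPARQL proper rather than this toy matcher:

```python
# Sketch: data as subject-predicate-object triples, queries as patterns
# with variables (strings starting with '?'), in the spirit of SPARQL.

triples = [
    ("sensor1", "hasType", "temperature"),
    ("sensor1", "locatedIn", "room42"),
    ("sensor2", "hasType", "humidity"),
    ("sensor2", "locatedIn", "room42"),
]

def match(pattern, triples):
    """Return one variable-binding dict per triple matching the pattern."""
    results = []
    for t in triples:
        binding = {}
        if all(p == v or (p.startswith("?") and binding.setdefault(p, v) == v)
               for p, v in zip(pattern, t)):
            results.append(binding)
    return results

# Analogue of: SELECT ?s WHERE { ?s hasType temperature }
print(match(("?s", "hasType", "temperature"), triples))
```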

  14. Scalable, flexible and high resolution patterning of CVD graphene.

    Science.gov (United States)

    Hofmann, Mario; Hsieh, Ya-Ping; Hsu, Allen L; Kong, Jing

    2014-01-07

    The unique properties of graphene make it a promising material for interconnects in flexible and transparent electronics. To increase the commercial impact of graphene in those applications, a scalable and economical method for producing graphene patterns is required. The direct synthesis of graphene from an area-selectively passivated catalyst substrate can generate patterned graphene of high quality. We here present a solution-based method for producing patterned passivation layers. Various deposition methods, such as ink-jet deposition and microcontact printing, were explored that can satisfy application demands for low cost, high resolution and scalable production of patterned graphene. The demonstrated high quality and nanometer precision of the grown graphene establish the potential of this synthesis approach for future commercial applications of graphene. Finally, the ability to transfer high resolution graphene patterns onto complex three-dimensional surfaces affords the vision of graphene-based interconnects in novel electronics.

  15. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide

    Science.gov (United States)

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G.; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-03-01

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits) and nanoscale sensors based on individual color centers. Towards this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objectives and lensed fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. A subsequent lithographic process forms 800 nm tall nanopillars with 400-1,400 nm diameters. We obtain high collection efficiency, up to 22 kcounts/s optical saturation rates from a single silicon vacancy center, while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for a scalable quantum photonics architecture relying on single photon sources and qubits.

  16. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  17. Sharing heterogeneous data: the national database for autism research.

    Science.gov (United States)

    Hall, Dan; Huerta, Michael F; McAuliffe, Matthew J; Farber, Gregory K

    2012-10-01

    The National Database for Autism Research (NDAR) is a secure research data repository designed to promote scientific data sharing and collaboration among autism spectrum disorder investigators. The goal of the project is to accelerate scientific discovery through data sharing, data harmonization, and the reporting of research results. Data from over 25,000 research participants are available to qualified investigators through the NDAR portal. Summary information about the available data is available to everyone through that portal.

  18. Scalable, Self Aligned Printing of Flexible Graphene Micro Supercapacitors (Postprint)

    Science.gov (United States)

    2017-05-11

    Report AFRL-RX-WP-JA-2017-0318: Scalable, Self-Aligned Printing of Flexible Graphene Micro-Supercapacitors (Postprint), by Woo Jin Hyun, Chang-Hyun Kim... Compared with devices (reduced graphene oxide: 0.4 mF cm−2) [11,39–41] prepared by conventional microfabrication techniques, the printed MSCs offer distinct advantages in...

  19. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-16

    • Technology: Permanent Magnet Brushless DC machine • Model: self-generating torque-speed-efficiency map • Future improvements: induction machine ... system to the standard driveline • Example: BAS System – 3 kW ISG system • Four quadrant • PM Brushless Machine • Speed ... and systems engineering • Scope: scalable, generic MATLAB/Simulink models in three areas: electromechanical machines (Integrated Starter

  20. Scalable privacy-preserving big data aggregation mechanism

    OpenAIRE

    Dapeng Wu; Boran Yang; Ruyan Wang

    2016-01-01

    As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according...

  1. Fast & scalable pattern transfer via block copolymer nanolithography

    DEFF Research Database (Denmark)

    Li, Tao; Wang, Zhongli; Schulte, Lars

    2015-01-01

    A fully scalable and efficient pattern transfer process based on block copolymer (BCP) self-assembling directly on various substrates is demonstrated. PS-rich and PDMS-rich poly(styrene-b-dimethylsiloxane) (PS-b-PDMS) copolymers are used to give monolayer sphere morphology after spin-casting of s...... on long range lateral order, including fabrication of substrates for catalysis, solar cells, sensors, ultrafiltration membranes and templating of semiconductors or metals....

  2. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    Directory of Open Access Journals (Sweden)

    Jacob R. Hauser

    2016-09-01

    Full Text Available 2-Cyanobenzothiazoles (CBTs) are useful building blocks for: (1) luciferin derivatives for bioluminescent imaging; and (2) handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT), which has an amine handle for straightforward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO), and discuss its advantages for scale-up over previously reported routes.

  3. Scalable Cluster-based Routing in Large Wireless Sensor Networks

    OpenAIRE

    Jiandong Li; Xuelian Cai; Jin Yang; Lina Zhu

    2012-01-01

    Large control overhead is the leading factor limiting the scalability of wireless sensor networks (WSNs). Clustering network nodes is an efficient solution, and Passive Clustering (PC) is one of the most efficient clustering methods. In this letter, we propose an improved PC-based route building scheme, named Route Reply (RREP) Broadcast with Passive Clustering (in short RBPC). Through broadcasting RREP packets on an expanding ring to build route, sensor nodes cache their route to the sink no...

  4. Semantic Models for Scalable Search in the Internet of Things

    OpenAIRE

    Dennis Pfisterer; Kay Römer; Richard Mietz; Sven Groppe

    2013-01-01

    The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper...

  5. Coordinating Shared Activities

    Science.gov (United States)

    Clement, Bradley

    2004-01-01

    Shared Activity Coordination (ShAC) is a computer program for planning and scheduling the activities of an autonomous team of interacting spacecraft and exploratory robots. ShAC could also be adapted to such terrestrial uses as helping multiple factory managers work toward competing goals while sharing such common resources as floor space, raw materials, and transports. ShAC iteratively invokes the Continuous Activity Scheduling Planning Execution and Replanning (CASPER) program to replan and propagate changes to other planning programs in an effort to resolve conflicts. A domain-expert subprogram specifies which activities and parameters thereof are shared and reports the expected conditions and effects of these activities on the environment. By specifying these conditions and effects differently for each planning program, the domain-expert subprogram defines roles that each spacecraft plays in a coordinated activity. The domain-expert subprogram also specifies which planning program has scheduling control over each shared activity. ShAC enables sharing of information, consensus over the scheduling of collaborative activities, and distributed conflict resolution. As the other planning programs incorporate new goals and alter their schedules in the changing environment, ShAC continually coordinates to respond to unexpected events.

  6. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.
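
    The subset-selection idea behind coverage maintenance can be sketched as greedy set cover over a toy deployment; SCOM itself uses a different, density-scalable protocol, and the coordinates and sensing range below are invented:

```python
# Sketch: activate only a subset of a dense deployment whose sensing
# ranges still cover all targets (greedy set cover for illustration).

def covers(sensor, target, r=1.5):
    """True if the target lies within sensing radius r of the sensor."""
    return (sensor[0] - target[0]) ** 2 + (sensor[1] - target[1]) ** 2 <= r * r

def select_active(sensors, targets):
    uncovered, active = set(targets), []
    while uncovered:
        # pick the sensor covering the most still-uncovered targets
        best = max(sensors, key=lambda s: sum(covers(s, t) for t in uncovered))
        gain = {t for t in uncovered if covers(best, t)}
        if not gain:
            break  # remaining targets cannot be covered by any sensor
        active.append(best)
        uncovered -= gain
    return active, uncovered

sensors = [(0, 0), (0, 1), (2, 0), (2, 1)]
targets = [(0, 0), (1, 0), (2, 0)]
active, missed = select_active(sensors, targets)
print(active, missed)
```

    The sensors left out of `active` could sleep until the next maintenance round, which is where the energy saving comes from.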

  7. Design and Implementation of Ceph: A Scalable Distributed File System

    Energy Technology Data Exchange (ETDEWEB)

    Weil, S A; Brandt, S A; Miller, E L; Long, D E; Maltzahn, C

    2006-04-19

    File system designers continue to look to new architectures to improve scalability. Object-based storage diverges from server-based (e.g. NFS) and SAN-based storage systems by coupling processors and memory with disk drives, delegating low-level allocation to object storage devices (OSDs) and decoupling I/O (read/write) from metadata (file open/close) operations. Even recent object-based systems inherit decades-old architectural choices going back to early UNIX file systems, however, limiting their ability to effectively scale to hundreds of petabytes. We present Ceph, a distributed file system that provides excellent performance and reliability with unprecedented scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable OSDs. We leverage OSD intelligence to distribute data replication, failure detection and recovery with semi-autonomous OSDs running a specialized local object storage file system (EBOFS). Finally, Ceph is built around a dynamic distributed metadata management cluster that provides extremely efficient metadata management that seamlessly adapts to a wide range of general purpose and scientific computing file system workloads. We present performance measurements under a variety of workloads that show superior I/O performance and scalable metadata management (more than a quarter million metadata ops/sec).
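
    CRUSH's key property, that any client can compute an object's placement from the cluster description alone with no central allocation table, can be sketched with rendezvous (highest-random-weight) hashing as a stand-in for the actual CRUSH algorithm; the device names are invented:

```python
# Sketch: table-free, pseudo-random replica placement. Every client
# ranks devices by hash(object, device) and takes the top few, so all
# clients agree on placement without consulting a metadata server.

import hashlib

def place(obj, osds, replicas=2):
    """Rank devices by a keyed hash; take the top `replicas`."""
    score = lambda osd: hashlib.sha256(f"{obj}:{osd}".encode()).hexdigest()
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = ["osd0", "osd1", "osd2", "osd3"]
print(place("myfile.chunk7", osds))
```

    Unlike this sketch, CRUSH also weights devices and respects failure-domain hierarchies, but the deterministic compute-anywhere placement is the shared idea.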

  8. Event metadata records as a testbed for scalable data mining

    Science.gov (United States)

    van Gemmeren, P.; Malon, D.

    2010-04-01

    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
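
    As a concrete instance of the one-pass stream-mining algorithms mentioned above, Welford's method maintains count, mean and variance over a stream in a single pass with O(1) memory, so it could sit at any point in an event-distribution chain; the input values are arbitrary illustration data:

```python
# Sketch: Welford's one-pass algorithm for streaming mean/variance.

class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        """Fold one value into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Population variance of the values seen so far."""
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.push(x)
print(stats.mean, stats.variance)
```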

  9. Scalable Dynamic Instrumentation for BlueGene/L

    Energy Technology Data Exchange (ETDEWEB)

    Schulz, M; Ahn, D; Bernat, A; de Supinski, B R; Ko, S Y; Lee, G; Rountree, B

    2005-09-08

    Dynamic binary instrumentation for performance analysis on new, large scale architectures such as the IBM Blue Gene/L system (BG/L) poses new challenges. Their scale--with potentially hundreds of thousands of compute nodes--requires new, more scalable mechanisms to deploy and to organize binary instrumentation and to collect the resulting data gathered by the inserted probes. Further, many of these new machines don't support full operating systems on the compute nodes; rather, they rely on light-weight custom compute kernels that do not support daemon-based implementations. We describe the design and current status of a new implementation of the DPCL (Dynamic Probe Class Library) API for BG/L. DPCL provides an easy to use layer for dynamic instrumentation on parallel MPI applications based on the DynInst dynamic instrumentation mechanism for sequential platforms. Our work includes modifying DynInst to control instrumentation from remote I/O nodes and porting DPCL's communication to use MRNet, a scalable data reduction network for collecting performance data. We describe extensions to the DPCL API that support instrumentation of task subsets and aggregation of collected performance data. Overall, our implementation provides a scalable infrastructure that provides efficient binary instrumentation on BG/L.

  10. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  11. Scalable MPEG-4 Encoder on FPGA Multiprocessor SOC

    Directory of Open Access Journals (Sweden)

    Kulmala Ari

    2006-01-01

    Full Text Available High computational requirements combined with rapidly evolving video coding algorithms and standards are a great challenge for contemporary encoder implementations. Rapid specification changes call for full programmability and configurability both for software and hardware. This paper presents a novel scalable MPEG-4 video encoder on an FPGA-based multiprocessor system-on-chip (MPSOC). The MPSOC architecture is truly scalable and is based on a vendor-independent intellectual property (IP) block interconnection network. The scalability in video encoding is achieved by spatial parallelization, where images are divided into horizontal slices. A case design is presented with up to four synthesized processors on an Altera Stratix 1S40 device. A truly portable ANSI-C implementation that supports an arbitrary number of processors gives 11 QCIF frames/s at 50 MHz without processor-specific optimizations. The parallelization efficiency is 97% for two processors and 93% with three. The FPGA utilization is 70%, requiring 28 797 logic elements. The implementation effort is significantly lower compared to traditional multiprocessor implementations.
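
    The slice-based spatial parallelization can be sketched as a row-partitioning function that assigns contiguous horizontal slices to processors; the macroblock-row and processor counts below are illustrative:

```python
# Sketch: divide a frame's macroblock rows into near-equal horizontal
# slices, one contiguous range per processor.

def slice_rows(total_rows, num_procs):
    """Assign contiguous row ranges, spreading the remainder evenly."""
    base, extra = divmod(total_rows, num_procs)
    ranges, start = [], 0
    for p in range(num_procs):
        height = base + (1 if p < extra else 0)
        ranges.append((start, start + height))
        start += height
    return ranges

# QCIF is 176x144 pixels, i.e. 9 macroblock rows of 16 pixels each
print(slice_rows(9, 4))
```

    Each processor then encodes its slice independently, which is why efficiency stays high as processors are added until slice-height imbalance dominates.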

  12. Scalable MPEG-4 Encoder on FPGA Multiprocessor SOC

    Directory of Open Access Journals (Sweden)

    Marko Hännikäinen

    2006-10-01

    Full Text Available High computational requirements combined with rapidly evolving video coding algorithms and standards are a great challenge for contemporary encoder implementations. Rapid specification changes call for full programmability and configurability both for software and hardware. This paper presents a novel scalable MPEG-4 video encoder on an FPGA-based multiprocessor system-on-chip (MPSOC). The MPSOC architecture is truly scalable and is based on a vendor-independent intellectual property (IP) block interconnection network. The scalability in video encoding is achieved by spatial parallelization, where images are divided into horizontal slices. A case design is presented with up to four synthesized processors on an Altera Stratix 1S40 device. A truly portable ANSI-C implementation that supports an arbitrary number of processors gives 11 QCIF frames/s at 50 MHz without processor-specific optimizations. The parallelization efficiency is 97% for two processors and 93% with three. The FPGA utilization is 70%, requiring 28 797 logic elements. The implementation effort is significantly lower compared to traditional multiprocessor implementations.

  13. Scalability improvements to NRLMOL for DFT calculations of large molecules

    Science.gov (United States)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, the memory and computation time grow with the number of atoms; memory requirements scale as N², where N is the number of atoms. While the recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder the efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and linear algebra using sparse and distributed matrices. These developments, along with other related work, now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
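
    The sparse-storage idea can be illustrated with the standard compressed sparse row (CSR) format, which keeps only nonzero entries and so sidesteps the O(N²) dense memory cost; this is a generic sketch, not NRLMOL's actual implementation:

```python
# Sketch: compressed sparse row (CSR) storage and matrix-vector product.
# Only nonzeros are stored: values, their column indices, and per-row
# offsets into those arrays.

def to_csr(dense):
    data, col, rowptr = [], [], [0]
    for row in dense:
        for j, x in enumerate(row):
            if x != 0:
                data.append(x)
                col.append(j)
        rowptr.append(len(data))
    return data, col, rowptr

def csr_matvec(data, col, rowptr, v):
    """y = A @ v using only the stored nonzeros."""
    return [sum(data[k] * v[col[k]] for k in range(rowptr[i], rowptr[i + 1]))
            for i in range(len(rowptr) - 1)]

dense = [[4, 0, 0], [0, 0, 2], [0, 3, 0]]
data, col, rowptr = to_csr(dense)
print(csr_matvec(data, col, rowptr, [1, 2, 3]))
```

    For matrices that are mostly zero, storage drops from N² entries to O(nnz), which is what makes much larger basis sets fit in node memory.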

  14. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|² + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
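
    The direct O(|V|² + |E|) update that the FMM accelerates can be sketched as all-pairs charge repulsion plus spring attraction along edges; the force constants and toy graph below are invented for illustration:

```python
# Sketch: one direct-summation step of force-directed layout.
# All-pairs repulsion is the O(|V|^2) part an FMM would approximate.

def layout_step(pos, edges, repulse=0.1, spring=0.05):
    forces = {v: [0.0, 0.0] for v in pos}
    verts = list(pos)
    for i, u in enumerate(verts):           # O(|V|^2) pairwise repulsion
        for v in verts[i + 1:]:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d2 = dx * dx + dy * dy or 1e-9  # avoid division by zero
            fx, fy = repulse * dx / d2, repulse * dy / d2
            forces[u][0] += fx; forces[u][1] += fy
            forces[v][0] -= fx; forces[v][1] -= fy
    for u, v in edges:                      # O(|E|) spring attraction
        dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
        forces[u][0] += spring * dx; forces[u][1] += spring * dy
        forces[v][0] -= spring * dx; forces[v][1] -= spring * dy
    return {v: (pos[v][0] + forces[v][0], pos[v][1] + forces[v][1])
            for v in pos}

pos = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (0.0, 1.0)}
print(layout_step(pos, [("a", "b"), ("b", "c")]))
```

    Because every force has an equal and opposite reaction, the layout's centroid is preserved across steps, a useful sanity check for any accelerated variant.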

  15. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing spatial scalability in the scalable video coding (SVC) standard is the well-known Laplacian pyramid (LP). An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement-layer signal. The second structure modifies the prediction so that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
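
    The Laplacian pyramid decomposition can be sketched in one dimension with simple 2-tap average/duplicate filters (chosen for illustration, not the SVC filters): a low-resolution base layer plus a residual enhancement layer from which the full-resolution signal is reconstructed exactly:

```python
# Sketch: one-level 1-D Laplacian pyramid with toy filters.

def downsample(x):           # average sample pairs -> base layer
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(b):             # duplicate samples back to full length
    return [v for v in b for _ in (0, 1)]

def encode(x):
    """Split x into a base layer and a full-resolution residual."""
    base = downsample(x)
    residual = [xi - ui for xi, ui in zip(x, upsample(base))]
    return base, residual

def decode(base, residual):
    """Perfect reconstruction: prediction from base plus residual."""
    return [ui + ri for ui, ri in zip(upsample(base), residual)]

signal = [1.0, 3.0, 2.0, 6.0, 5.0, 5.0]
base, residual = encode(signal)
print(base, residual, decode(base, residual))
```

    The interlayer correlation discussed above shows up here as structure left in `residual`; the paper's two schemes aim to shrink exactly that component before coding.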

  16. Event metadata records as a testbed for scalable data mining

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D, E-mail: gemmeren@anl.go [Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2010-04-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.

  17. The cloud storage service bwSync&Share at KIT

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The Karlsruhe Institute of Technology introduced the bwSync&Share collaboration service in January 2014. The service is an on-premise alternative to existing public cloud storage solutions for students and scientists in the German state of Baden-Württemberg, which allows the synchronization and sharing of documents between multiple devices and users. The service is based on the commercial software PowerFolder and is deployed on a virtual environment to support high reliability and scalability for a potential 450,000 users. The integration of the state-wide federated identity management system (bwIDM) and a centralized helpdesk portal allows the service to be used by all academic institutions in the state of Baden-Württemberg. Since its launch, approximately 15 organizations and 8,000 users have joined the service. The talk gives an overview of related challenges, technical and organizational requirements, the current architecture and future development plans.

  18. Sharing big biomedical data.

    Science.gov (United States)

    Toga, Arthur W; Dinov, Ivo D

    The promise of Big Biomedical Data may be offset by the enormous challenges in handling, analyzing, and sharing it. In this paper, we provide a framework for developing practical and reasonable data sharing policies that incorporate the sociological, financial, technical and scientific requirements of a sustainable Big Data-dependent scientific community. Many biomedical and healthcare studies may be significantly impacted by using large, heterogeneous and incongruent datasets; however, there are significant technical, social, regulatory, and institutional barriers that need to be overcome to ensure that the power of Big Data overcomes these detrimental factors. Pragmatic policies that demand extensive sharing of data, promote data fusion, provenance and interoperability, and balance security with the protection of personal information are critical for the long-term impact of translational Big Data analytics.

  19. Shared care (comanagement).

    Science.gov (United States)

    Montero Ruiz, E

    2016-01-01

    Surgical departments have increasing difficulties in caring for their hospitalised patients due to the patients' advanced age and comorbidity, the growing specialisation in medical training and the strong political-healthcare pressure that a healthcare organisation places on them, where surgical acts take precedence over other activities. The pressure exerted by these departments on the medical area and the deficient response by the interconsultation system have led to the development of a different healthcare organisation model: Shared care, which includes perioperative medicine. In this model, 2 different specialists share the responsibility and authority in caring for hospitalised surgical patients. Internal Medicine is the most appropriate specialty for shared care. Internists who exercise this responsibility should have certain characteristics and must overcome a number of concerns from the surgeon and anaesthesiologist. Copyright © 2015 Elsevier España, S.L.U. y Sociedad Española de Medicina Interna (SEMI). All rights reserved.

  20. Adapting an evidence-based intervention for autism spectrum disorder for scaling up in resource-constrained settings: the development of the PASS intervention in South Asia

    Directory of Open Access Journals (Sweden)

    Gauri Divan

    2015-08-01

    Full Text Available Background: Evidence-based interventions for autism spectrum disorders evaluated in high-income countries typically require highly specialised manpower, which is a scarce resource in most low- and middle-income settings. This resource limitation results in most children not having access to evidence-based interventions. Objective: This paper reports on the systematic adaptation of an evidence-based intervention, the Preschool Autism Communication Therapy (PACT), evaluated in a large trial in the United Kingdom, for delivery in a low-resource setting through the process of task-shifting. Design: The adaptation process used the Medical Research Council framework for the development and adaptation of complex interventions, focusing on qualitative methods and case series, and was conducted simultaneously in India and Pakistan. Results: The original intervention, delivered by speech and language therapists in a high-resource setting, required adaptation in some aspects of its content and delivery to enhance contextual acceptability and to enable the intervention to be delivered by non-specialists. Conclusions: The resulting intervention, the Parent-mediated intervention for Autism Spectrum Disorder in South Asia (PASS), shares the core theoretical foundations of the original PACT but is adapted in several respects to enhance its acceptability, feasibility, and scalability in low-resource settings.

  1. 77 FR 40647 - Toward Innovative Spectrum-Sharing Technologies: Wireless Spectrum Research and Development...

    Science.gov (United States)

    2012-07-10

    ... Steering Group (WSRD SSG) Workshop III AGENCY: The National Coordination Office (NCO) for Networking and.... SUMMARY: Representatives from Federal research agencies, private industry, and academia will build on the...: This notice is issued by the National Coordination Office for the Networking and Information Technology...

  2. Sharing the dance -

    DEFF Research Database (Denmark)

    He, Jing; Ravn, Susanne

    2017-01-01

    to the highly specialized field of elite sports dance, we aim at exploring the way in which reciprocity unfolds in intensive deliberate practices of movement. In our analysis, we specifically argue that the ongoing dynamics of two separate flows of movement constitute a shared experience of dancing together....... In this sense, moving together, in sports dance, is a practical way of understanding each other. In agreement with Zahavi, our analysis emphasizes the bi-directed nature of sharing. However, at the same time, we contribute to Zahavi’s ongoing endeavour as the special case of sports dance reveals how reciprocity...

  3. Rethinking the Sharing Economy

    DEFF Research Database (Denmark)

    Kornberger, Martin; Leixnering, Stephan; Meyer, Renate

    2017-01-01

    -governmental organization Train of Hope – labeled as a ‘citizen start-up’ by City of Vienna officials – played an outstanding role in mastering the crisis. In a blog post during his visit in Vienna at the time, and experiencing the refugee crisis first-hand, it was actually Henry Mintzberg who suggested reading...... arguments. Second, we hold that a particular form of organizing facilitates the sharing economy: the sharing economy organization. This particular organizational form is distinctive – at the same time selectively borrowing and skillfully combining features from platform organizations (e.g., use...

  4. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance maintained over a long period of time becomes a source of ensuring business continuity by companies. An ontological being enabling the adoption of such assumptions is such a business model that has the ability to generate results in every possible market situation and, moreover, it has the features of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. Being a factor ensuring more work and more efficient work with an increasing number of components, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  5. Decision Analysis of Dynamic Spectrum Access Rules

    Energy Technology Data Exchange (ETDEWEB)

    Juan D. Deaton; Luiz A. DaSilva; Christian Wernz

    2011-12-01

    A current trend in spectrum regulation is to incorporate spectrum sharing through the design of spectrum access rules that support Dynamic Spectrum Access (DSA). This paper develops a decision-theoretic framework for regulators to assess the impacts of different decision rules on both primary and secondary operators. We analyze access rules based on sensing and exclusion areas, which in practice can be enforced through geolocation databases. Our results show that receiver-only sensing provides insufficient protection for primary and co-existing secondary users and overall low social welfare. On the other hand, using sensing information from both the transmitter and receiver of a communication link provides dramatic increases in system performance. The performance of using these link end points is relatively close to that of using many cooperative sensing nodes associated with the same access point and large link exclusion areas. These results are useful to regulators and network developers in understanding and developing rules for future DSA regulation.
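The exclusion-area style of rule lends itself to a simple geometric illustration. The sketch below is a toy example only — the function name, flat-plane coordinates, and kilometre radius are assumptions for illustration, not the paper's model; real geolocation databases also account for propagation, terrain, and antenna characteristics:

```python
from math import dist

def may_transmit(secondary_pos, primary_rx_pos, exclusion_radius_km):
    """Toy geolocation-database rule: a secondary user may transmit
    only when it lies outside the primary receiver's exclusion area."""
    return dist(secondary_pos, primary_rx_pos) > exclusion_radius_km

# Positions in km on a flat plane (illustrative values).
print(may_transmit((10, 0), (0, 0), exclusion_radius_km=5))  # True
print(may_transmit((3, 0), (0, 0), exclusion_radius_km=5))   # False
```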

  6. Scaling Non-Regular Shared-Memory Codes by Reusing Custom Loop Schedules

    OpenAIRE

    Nikolopoulos, Dimitrios S.; Ernest Artiaga; Eduard Ayguadé; Jesús Labarta

    2003-01-01

    In this paper we explore the idea of customizing and reusing loop schedules to improve the scalability of non-regular numerical codes in shared-memory architectures with non-uniform memory access latency. The main objective is to implicitly setup affinity links between threads and data, by devising loop schedules that achieve balanced work distribution within irregular data spaces and reusing them as much as possible along the execution of the program for better memory access locality. This t...

  7. Information Sharing and Knowledge Sharing as Communicative Activities

    Science.gov (United States)

    Savolainen, Reijo

    2017-01-01

    Introduction: This paper elaborates the picture of information sharing and knowledge sharing as forms of communicative activity. Method: A conceptual analysis was made to find out how researchers have approached information sharing and knowledge sharing from the perspectives of transmission and ritual. The findings are based on the analysis of one…

  8. Computing on quantum shared secrets

    Science.gov (United States)

    Ouyang, Yingkai; Tan, Si-Hui; Zhao, Liming; Fitzsimons, Joseph F.

    2017-11-01

    A (k,n)-threshold secret-sharing scheme allows for a string to be split into n shares in such a way that any subset of at least k shares suffices to recover the secret string, but such that any subset of at most k-1 shares contains no information about the secret. Quantum secret-sharing schemes extend this idea to the sharing of quantum states. Here we propose a method of performing computation securely on quantum shared secrets. We introduce an (n,n)-quantum secret-sharing scheme together with a set of algorithms that allow quantum circuits to be evaluated securely on the shared secret without the need to decode the secret. We consider a multipartite setting, with each participant holding a share of the secret. We show that if there exists at least one honest participant, no group of dishonest participants can recover any information about the shared secret, independent of their deviations from the algorithm.
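For readers unfamiliar with the classical starting point: the (n,n) case that the quantum construction generalizes can be sketched in a few lines. This is a minimal classical XOR-based analogue (an illustration only, not the authors' quantum protocol):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """(n,n)-threshold sharing: n-1 uniformly random shares, plus one
    final share that XORs them all against the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    # XOR of all n shares cancels the randomness, leaving the secret;
    # any n-1 shares are jointly uniform and reveal nothing.
    return reduce(xor_bytes, shares)

shares = split_secret(b"spectrum", 4)
print(reconstruct(shares))  # b'spectrum'
```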

  9. Shared Care in Diabetes?

    DEFF Research Database (Denmark)

    Bødker, Keld

    2006-01-01

    The Danish National Board of Health has recently released a report that is intended to mark the start of a new project to establish it support for shared care in diabetes. In this paper I raise a number of concerns where lack of attention towards participation from prospective users constitute...

  10. A shared vision.

    Science.gov (United States)

    Hogan, Brigid

    2007-12-01

    One of today's most powerful technologies in biomedical research--the creation of mutant mice by gene targeting in embryonic stem (ES) cells--was finally celebrated in this year's Nobel Prize in Medicine. The history of how ES cells were first discovered and genetically manipulated highlights the importance of collaboration among scientists from different backgrounds with a shared vision.

  11. Beyond processor sharing

    NARCIS (Netherlands)

    S. Aalto; U. Ayesta (Urtzi); S.C. Borst (Sem); V. Misra; R. Núñez Queija (Rudesindo (Sindo))

    2007-01-01

    textabstractWhile the (Egalitarian) Processor-Sharing (PS) discipline offers crucial insights in the performance of fair resource allocation mechanisms, it is inherently limited in analyzing and designing differentiated scheduling algorithms such as Weighted Fair Queueing and Weighted Round-Robin.

  12. Too Much Information Sharing?

    DEFF Research Database (Denmark)

    Ganuza, Juan José; Jansen, Jos

    2013-01-01

    parameters gives the following trade-off in Cournot oligopoly. On the one hand, it decreases the expected consumer surplus for a given information precision, as the literature shows. On the other hand, information sharing increases the firms’ incentives to acquire information, and the consumer surplus...

  13. Promoting teachers’ knowledge sharing

    NARCIS (Netherlands)

    Runhaar, P.R.; Sanders, K.

    2016-01-01

    Teachers’ professional development is nowadays seen as key in efforts to improve education. Knowledge sharing is a learning activity with which teachers not only professionalize themselves, but contribute to the professional development of their colleagues as well. This paper presents two studies,

  14. The Sharing Economy

    DEFF Research Database (Denmark)

    Hamari, Juho; Sjöklint, Mimmi; Ukkonen, Antti

    2016-01-01

    Information and communications technologies (ICTs) have enabled the rise of so-called “Collaborative Consumption” (CC): the peer-to-peer-based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services. CC has been expected to a...

  15. Decreasing Serial Cost Sharing

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter

    The increasing serial cost sharing rule of Moulin and Shenker [Econometrica 60 (1992) 1009] and the decreasing serial rule of de Frutos [Journal of Economic Theory 79 (1998) 245] have attracted attention due to their intuitive appeal and striking incentive properties. An axiomatic characterization...

  16. Decreasing serial cost sharing

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter Raahave

    2009-01-01

    The increasing serial cost sharing rule of Moulin and Shenker (Econometrica 60:1009-1037, 1992) and the decreasing serial rule of de Frutos (J Econ Theory 79:245-275, 1998) are known by their intuitive appeal and striking incentive properties. An axiomatic characterization of the increasing serial...

  17. SharedSpaces mingle

    NARCIS (Netherlands)

    Handberg, L.; Gullström, C.; Kort, J.; Nyström, J.

    2016-01-01

    SharedSpaces is a WebRTC design prototype that creates a virtual media space where people can mingle and interact. Although you are in different locations, you appear side by side in front of a chosen backdrop. This interactive installation addresses spatial and social connectedness, stressing the

  18. IBM Software Defined Storage and ownCloud Enterprise Edition - a perfect match for hyperscale Enterprise File Sync and Share

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    IBM Software Defined Storage, in particular the technology offering codenamed Elastic Storage (based on GPFS technology), has proven to be an ideal match for Enterprise File Sync and Share (EFSS) solutions that need highly scalable storage. The presentation will provide insight into the integration of Elastic Storage with the ownCloud Enterprise Edition (based on Open Source technology) software, which showed impressive scalability and performance metrics during a proof-of-concept phase of an installation expected to serve 300,000 users when fully deployed.

  19. Creativity and psychopathology: a shared vulnerability model.

    Science.gov (United States)

    Carson, Shelley H

    2011-03-01

    Creativity is considered a positive personal trait. However, highly creative people have demonstrated elevated risk for certain forms of psychopathology, including mood disorders, schizophrenia spectrum disorders, and alcoholism. A model of shared vulnerability explains the relation between creativity and psychopathology. This model, supported by recent findings from neuroscience and molecular genetics, suggests that the biological determinants conferring risk for psychopathology interact with protective cognitive factors to enhance creative ideation. Elements of shared vulnerability include cognitive disinhibition (which allows more stimuli into conscious awareness), an attentional style driven by novelty salience, and neural hyperconnectivity that may increase associations among disparate stimuli. These vulnerabilities interact with superior meta-cognitive protective factors, such as high IQ, increased working memory capacity, and enhanced cognitive flexibility, to enlarge the range and depth of stimuli available in conscious awareness to be manipulated and combined to form novel and original ideas.

  20. Distributed Programming with Shared Data

    NARCIS (Netherlands)

    Bal, H.E.; Tanenbaum, A.S.

    1988-01-01

    Operating system primitives (e.g., problem-oriented shared memory, shared virtual memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared-variable paradigm without the presence of physical

  1. Quantum game application to spectrum scarcity problems

    Science.gov (United States)

    Zabaleta, O. G.; Barrangú, J. P.; Arizmendi, C. M.

    2017-01-01

    Recent spectrum-sharing research has produced a strategy to address spectrum scarcity problems. This novel idea, named cognitive radio, considers that secondary users can opportunistically exploit spectrum holes left temporarily unused by primary users. This presents a competitive scenario among cognitive users, making it suitable for game theory treatment. In this work, we show that the spectrum-sharing benefits of cognitive radio can be increased by designing a medium access control based on quantum game theory. In this context, we propose a model to manage spectrum fairly and effectively, based on a multiple-users multiple-choice quantum minority game. By taking advantage of quantum entanglement and quantum interference, it is possible to reduce the probability of collision problems commonly associated with classic algorithms. Collision avoidance is an essential property for classic and quantum communications systems. In our model, two different scenarios are considered, to meet the requirements of different user strategies. The first considers sensor networks where the rational use of energy is a cornerstone; the second focuses on installations where the quality of service of the entire network is a priority.

  2. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    Science.gov (United States)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented on a state-of-the-art multicore CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculations and migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and the mechanism for feeding them to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for a multicore CPU-based parallel system had been developed. Recently, we have worked on improving the parallel performance of this application by re-designing the parallelization approach. The new algorithm is capable of efficiently migrating both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, a Xeon E5-2670-based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance of the algorithm is studied using different numerical experiments, and the scalability results show striking improvement over the previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data, and 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
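The reported figures are internally consistent with the standard definitions speedup = T1/Tn and efficiency = speedup/n; a quick check against the abstract's 64-node numbers:

```python
def parallel_efficiency(speedup: float, workers: int) -> float:
    """Parallel efficiency: achieved speedup divided by ideal linear speedup."""
    return speedup / workers

# Figures quoted in the abstract, on 64 nodes:
print(f"{parallel_efficiency(49.05, 64):.2%}")  # prestack:  76.64%
print(f"{parallel_efficiency(32.00, 64):.2%}")  # poststack: 50.00%
```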

  3. Autism Spectrum Disorder

    Science.gov (United States)

    Autism spectrum disorder (ASD) is a neurological and developmental disorder that begins early in childhood and lasts throughout a person's life. ... be known as Asperger syndrome and pervasive developmental disorders. It is called a "spectrum" disorder because people ...

  4. Risk Sharing under Incentive Constraints.

    OpenAIRE

    Wagner, W.B.

    2002-01-01

    In addressing the matter, this thesis covers issues such as the welfare gains from international risk sharing, the impact of international risk sharing on national economic policies and production efficiency, the welfare effects of international risk sharing in the presence of tax competition, and risk sharing among entrepreneurs that face financing constraints. The thesis outlines the implications of incentive constraints for the efficiency of the actual extent and pattern of risk sharing am...

  5. Coopetitive Business Models in Future Mobile Broadband with Licensed Shared Access (LSA

    Directory of Open Access Journals (Sweden)

    P. Ahokangas

    2016-08-01

    Full Text Available Spectrum scarcity forces mobile network operators (MNOs) providing mobile broadband services to develop new business models that address spectrum sharing. It engages MNOs in a coopetitive relationship with incumbents. The Licensed Shared Access (LSA) concept complements traditional licensing and helps MNOs to access new spectrum bands on a shared basis. This paper discusses spectrum sharing with LSA from a business perspective. It describes how coopetition and business models are linked conceptually, and identifies the influence of coopetition on future business models in LSA. We develop business models for dominant and challenger MNOs in traditional licensing and in a future with LSA. The results indicate that the coopetition and business model concepts are linked via value co-creation and value co-capture. LSA offers different business opportunities to dominant and challenger MNOs. Offering, value proposition, customer segments and differentiation in business models become critical in mobile broadband.

  6. Radio resource allocation and dynamic spectrum access

    CERN Document Server

    Benmammar , Badr

    2013-01-01

    We are currently witnessing an increase in telecommunications norms and standards given the recent advances in this field. The increasing number of normalized standards paves the way for an increase in the range of services available for each consumer. Moreover, the majority of available radio frequencies have already been allocated. This explains the emergence of cognitive radio (CR) - the sharing of the spectrum between a primary user and a secondary user.In this book, we will present the state of the art of the different techniques for spectrum access using cooperation and competit

  7. Sharing the dance -

    DEFF Research Database (Denmark)

    He, Jing; Ravn, Susanne

    2018-01-01

    to the highly specialized field of elite sports dance, we aim at exploring the way in which reciprocity unfolds in intensive deliberate practices of movement. In our analysis, we specifically argue that the ongoing dynamics of two separate flows of movement constitute a shared experience of dancing together....... In this sense, moving together, in sports dance, is a practical way of understanding each other. In agreement with Zahavi, our analysis emphasizes the bi-directed nature of sharing. However, at the same time, we contribute to Zahavi’s ongoing endeavour as the special case of sports dance reveals how reciprocity...... can be deliberately shaped through the mutual coordination and affective bound dynamics of movement. Our article thus both pursues the methodological point that qualitative research of expert competences can constructively enrich phenomenological analysis and indicates how movement can be fundamental...

  8. Towards A Shared Mission

    DEFF Research Database (Denmark)

    Staunstrup, Jørgen; Orth Gaarn-Larsen, Carsten

    in the context of universities. Although the economic aspects of value are important and cannot be ignored, we argue for a much richer interpretation of value that captures the many and varied results from universities. A shared mission is a prerequisite for university management and leadership. It makes......A mission shared by stakeholders, management and employees is a prerequisite for an engaging dialog about the many and substantial changes and challenges currently facing universities. Too often this essential dialog reveals mistrust and misunderstandings about the role and outcome...... of the universities. The sad result is that the dialog about university development, resources, leadership, governance etc. too often ends up in rather fruitless discussions and sometimes even mutual suspicion. This paper argues for having a dialog involving both internal and external stakeholders agreeing...

  9. Highly scalable multichannel mesh electronics for stable chronic brain electrophysiology

    Science.gov (United States)

    Fu, Tian-Ming; Hong, Guosong; Viveros, Robert D.; Zhou, Tao

    2017-01-01

    Implantable electrical probes have led to advances in neuroscience, brain-machine interfaces, and treatment of neurological diseases, yet they remain limited in several key aspects. Ideally, an electrical probe should be capable of recording from large numbers of neurons across multiple local circuits and, importantly, allow stable tracking of the evolution of these neurons over the entire course of study. Silicon probes based on microfabrication can yield large-scale, high-density recording but face challenges of chronic gliosis and instability due to mechanical and structural mismatch with the brain. Ultraflexible mesh electronics, on the other hand, have demonstrated negligible chronic immune response and stable long-term brain monitoring at single-neuron level, although, to date, it has been limited to 16 channels. Here, we present a scalable scheme for highly multiplexed mesh electronics probes to bridge the gap between scalability and flexibility, where 32 to 128 channels per probe were implemented while the crucial brain-like structure and mechanics were maintained. Combining this mesh design with multisite injection, we demonstrate stable 128-channel local field potential and single-unit recordings from multiple brain regions in awake restrained mice over 4 mo. In addition, the newly integrated mesh is used to validate stable chronic recordings in freely behaving mice. This scalable scheme for mesh electronics together with demonstrated long-term stability represent important progress toward the realization of ideal implantable electrical probes allowing for mapping and tracking single-neuron level circuit changes associated with learning, aging, and neurodegenerative diseases. PMID:29109247

  10. Shared goals and development

    DEFF Research Database (Denmark)

    Blomberg, Olle

    2015-01-01

    In 'Joint Action and Development', Stephen Butterfill argues that if several agents' actions are driven by what he calls a "shared goal" -- a certain pattern of goal-relations and expectations -- then these actions constitute a joint action. This kind of joint action is sufficiently cognitively...... a counterexample, I show that the pattern of goal-relations and expectations specified by Butterfill cannot play this role. I then provide an appropriately conceptually and cognitively undemanding amendment with which the account can be saved....

  11. Sharing data increases citations

    DEFF Research Database (Denmark)

    Drachen, Thea Marie; Ellegaard, Ole; Larsen, Asger Væring

    2016-01-01

    This paper presents some indications to the existence of a citation advantage related to sharing data using astrophysics as a case. Through bibliometric analyses we find a citation advantage for astrophysical papers in core journals. The advantage arises as indexed papers are associated with data...... by bibliographical links, and consists of papers receiving on average significantly more citations per paper per year, than do papers not associated with links to data....

  12. Shared Health Governance

    Science.gov (United States)

    Ruger, Jennifer Prah

    2014-01-01

    Health and Social Justice (Ruger 2009a) developed the “health capability paradigm,” a conception of justice and health in domestic societies. This idea undergirds an alternative framework of social cooperation called “shared health governance” (SHG). SHG puts forth a set of moral responsibilities, motivational aspirations, and institutional arrangements, and apportions roles for implementation in striving for health justice. This article develops further the SHG framework and explains its importance and implications for governing health domestically. PMID:21745082

  13. A scalable pairwise class interaction framework for multidimensional classification

    DEFF Research Database (Denmark)

    Arias, Jacinto; Gámez, Jose A.; Nielsen, Thomas Dyhre

    2016-01-01

    We present a general framework for multidimensional classification that captures the pairwise interactions between class variables. The pairwise class interactions are encoded using a collection of base classifiers (Phase 1), for which the class predictions are combined in a Markov random field...... inference methods in the second phase. We describe the basic framework and its main properties, as well as strategies for ensuring the scalability of the framework. We include a detailed experimental evaluation based on a range of publicly available databases. Here we analyze the overall performance...

  14. SAR++: A Multi-Channel Scalable and Reconfigurable SAR System

    DEFF Research Database (Denmark)

    Høeg, Flemming; Christensen, Erik Lintz

    2002-01-01

    SAR++ is a technology program aiming at developing the know-how and technology needed to design the next generation of civilian SAR systems. Technology has reached a state which allows major parts of the digital subsystem to be built using commercial off-the-shelf (COTS) components. A design goal...... is to design a modular, scalable and reconfigurable SAR system using such components, in order to ensure maximum flexibility for the users of the actual system and for future system updates. Having these aspects in mind, the SAR++ system is presented with focus on the digital subsystem architecture...

  15. Scalable brain network construction on white matter fibers

    Science.gov (United States)

    Chung, Moo K.; Adluru, Nagesh; Dalton, Kim M.; Alexander, Andrew L.; Davidson, Richard J.

    2011-03-01

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing brain structural network graphs out of a large number of white matter tracts. In this paper, we present a scalable iterative framework called the ɛ-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
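The abstract does not spell out the ɛ-neighbor construction itself; as a generic illustration of the underlying idea only — an ɛ-neighborhood graph over tract endpoints, with made-up coordinates, not the authors' iterative algorithm — one could write:

```python
from itertools import combinations
from math import dist

def epsilon_graph(points, eps):
    """Connect every pair of points lying within distance eps of each other."""
    return [(i, j) for i, j in combinations(range(len(points)), 2)
            if dist(points[i], points[j]) < eps]

# Hypothetical 3D fiber-tract endpoints (mm), purely illustrative:
endpoints = [(0, 0, 0), (0.5, 0, 0), (5, 5, 5)]
print(epsilon_graph(endpoints, eps=1.0))  # [(0, 1)]
```

Real whole-brain graphs with hundreds of thousands of endpoints need spatial indexing rather than this O(n²) pairwise scan, which is exactly the scalability concern the paper addresses.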

  16. Using overlay network architectures for scalable video distribution

    Science.gov (United States)

    Patrikakis, Charalampos Z.; Despotopoulos, Yannis; Fafali, Paraskevi; Cha, Jihun; Kim, Kyuheon

    2004-11-01

    In recent years, the enormous growth of Internet-based communication as well as the rapid increase in available processing power have led to the widespread use of multimedia streaming as a means to convey information. This work aims at providing an open architecture designed to support scalable streaming to a large number of clients using application-layer multicast. The architecture is based on media relay nodes that can be deployed transparently to any existing media distribution scheme which can support media streamed using the RTP and RTSP protocols. The architecture is based on overlay networks at the application level, featuring rate adaptation mechanisms for responding to network congestion.

  17. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has...... been developed. Hybris is a prototype rendering architecture which can be tailored to many specific 3D graphics applications and implemented in various ways. Parallel software implementations for both single- and multi-processor Windows 2000 systems have been demonstrated. Working hardware...... as a case study and an application of the Hybris graphics architecture.

  18. Scalable web services for the PSIPRED Protein Analysis Workbench.

    Science.gov (United States)

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.

  19. A Scalable Framework to Detect Personal Health Mentions on Twitter.

    Science.gov (United States)

    Yin, Zhijun; Fabbri, Daniel; Rosenbloom, S Trent; Malin, Bradley

    2015-06-05

    Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual's health. The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner. These findings show that personal health mentions can be detected on Twitter in a scalable manner. These mentions correspond to the health issues of the Twitter users

  20. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
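A sketch of the replication step described above: each edge of a small protograph is replaced by a circularly shifted Z x Z identity block, yielding the parity-check matrix of a (Z x n, Z x r) code. The function name and the random choice of circulant shifts are illustrative assumptions, not the authors' decoder design.

```python
import numpy as np

def lift_protograph(base, Z, shifts=None, seed=0):
    """Lift an (r x n) protograph base matrix by factor Z.

    Each 1 in the base matrix becomes a Z x Z circularly shifted
    identity block; each 0 becomes a Z x Z zero block, giving the
    parity-check matrix of a (Z*n, Z*r) code.
    """
    rng = np.random.default_rng(seed)
    r, n = base.shape
    H = np.zeros((r * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(r):
        for j in range(n):
            if base[i, j]:
                s = rng.integers(Z) if shifts is None else shifts[i][j]
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, int(s), axis=1)
    return H

# toy (r=2, n=4) protograph lifted by Z=4 -> H for a length-16 code
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]], dtype=np.uint8)
H = lift_protograph(base, Z=4)
print(H.shape)  # → (8, 16)
```

The (4096, 2048) code of the paper corresponds to the same construction with a larger base graph and lifting factor.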

  1. Scalable implementation of boson sampling with trapped ions.

    Science.gov (United States)

    Shen, C; Zhang, Z; Duan, L-M

    2014-02-07

    Boson sampling solves a classically intractable problem by sampling from a probability distribution given by matrix permanents. We propose a scalable implementation of boson sampling using local transverse phonon modes of trapped ions to encode the bosons. The proposed scheme allows deterministic preparation and high-efficiency readout of the bosons in the Fock states and universal mode mixing. With the state-of-the-art trapped ion technology, it is feasible to realize boson sampling with tens of bosons by this scheme, which would outperform the most powerful classical computers and constitute an effective disproof of the famous extended Church-Turing thesis.
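The probability distribution referred to above is defined through matrix permanents. As a minimal illustration (not part of the proposal itself), the permanent can be computed with Ryser's inclusion-exclusion formula, whose O(2^n) cost is exactly what makes classical simulation of boson sampling intractable for tens of bosons:

```python
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula, O(2^n * n)."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 1], [1, 1]]))  # → 2.0
```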

  2. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  3. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype in achieving the SCM using a standard 802.16 based testbed for scalable video transmissions. In particular, to implement the superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic the physical layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.
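A toy sketch of superposition coding, under the simplifying assumptions of two BPSK layers, a fixed power split, and a noiseless channel (the paper's L-SPC runs on a real 802.16 stack, which is not reproduced here):

```python
import numpy as np

def spc_modulate(base_bits, enh_bits, alpha=0.8):
    """Superpose two BPSK layers, giving fraction alpha of power to the base layer."""
    s_base = 2 * np.asarray(base_bits) - 1   # map 0/1 -> -1/+1
    s_enh  = 2 * np.asarray(enh_bits)  - 1
    return np.sqrt(alpha) * s_base + np.sqrt(1 - alpha) * s_enh

def spc_demodulate(y, alpha=0.8):
    """Decode the base layer treating the enhancement layer as noise, then
    subtract the re-modulated base layer (successive interference
    cancellation) to recover the enhancement layer."""
    base = (y > 0).astype(int)
    residual = y - np.sqrt(alpha) * (2 * base - 1)
    enh = (residual > 0).astype(int)
    return base, enh
```

With alpha > 0.5 the base layer dominates each superposed symbol, so a receiver with poor channel quality can still recover it, while a better receiver additionally strips it off and decodes the enhancement layer.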

  4. Bonobos share with strangers.

    Science.gov (United States)

    Tan, Jingzhi; Hare, Brian

    2013-01-01

    Humans are thought to possess a unique proclivity to share with others--including strangers. This puzzling phenomenon has led many to suggest that sharing with strangers originates from human-unique language, social norms, warfare and/or cooperative breeding. However, bonobos, our closest living relative, are highly tolerant and, in the wild, are capable of having affiliative interactions with strangers. In four experiments, we therefore examined whether bonobos will voluntarily donate food to strangers. We show that bonobos will forego their own food for the benefit of interacting with a stranger. Their prosociality is in part driven by unselfish motivation, because bonobos will even help strangers acquire out-of-reach food when no desirable social interaction is possible. However, this prosociality has its limitations because bonobos will not donate food in their possession when a social interaction is not possible. These results indicate that other-regarding preferences toward strangers are not uniquely human. Moreover, language, social norms, warfare and cooperative breeding are unnecessary for the evolution of xenophilic sharing. Instead, we propose that prosociality toward strangers initially evolves due to selection for social tolerance, allowing the expansion of individual social networks. Human social norms and language may subsequently extend this ape-like social preference to the most costly contexts.

  5. Sharing resources@CERN

    CERN Multimedia

    2002-01-01

    The library is launching a 'sharing resources@CERN' campaign, aiming to increase the library's utility by including the thousands of books bought by individual groups at CERN. This will improve sharing of information among CERN staff and users. Until now many people were unaware that copies of the same book (or standard, or journal) are often held not only by the library but by different divisions. (Here Eduardo Aldaz, from the PS division, and Isabel Bejar, from the ST division, read their divisional copies of the same book.) The idea behind the library's new 'sharing resources@CERN' initiative is not at all to collect the books in individual collections at the CERN library, but simply to register them in the Library database. Those not belonging to the library will in principle be unavailable for loan, but should be able to be consulted by anybody at CERN who is interested. 'When you need a book urgently and it is not available in the library,' said PS Division engineer Eduardo Aldaz Carroll, 'it is a shame...'

  6. Bonobos share with strangers.

    Directory of Open Access Journals (Sweden)

    Jingzhi Tan

    Full Text Available Humans are thought to possess a unique proclivity to share with others--including strangers. This puzzling phenomenon has led many to suggest that sharing with strangers originates from human-unique language, social norms, warfare and/or cooperative breeding. However, bonobos, our closest living relative, are highly tolerant and, in the wild, are capable of having affiliative interactions with strangers. In four experiments, we therefore examined whether bonobos will voluntarily donate food to strangers. We show that bonobos will forego their own food for the benefit of interacting with a stranger. Their prosociality is in part driven by unselfish motivation, because bonobos will even help strangers acquire out-of-reach food when no desirable social interaction is possible. However, this prosociality has its limitations because bonobos will not donate food in their possession when a social interaction is not possible. These results indicate that other-regarding preferences toward strangers are not uniquely human. Moreover, language, social norms, warfare and cooperative breeding are unnecessary for the evolution of xenophilic sharing. Instead, we propose that prosociality toward strangers initially evolves due to selection for social tolerance, allowing the expansion of individual social networks. Human social norms and language may subsequently extend this ape-like social preference to the most costly contexts.

  7. Maximizing the utility of radio spectrum: Broadband spectrum measurements and occupancy model for use by cognitive radio

    Science.gov (United States)

    Petrin, Allen J.

    Radio spectrum is a vital national asset; proper management of this finite resource is essential to the operation and development of telecommunications, radio-navigation, radio astronomy, and passive remote sensing services. To maximize the utility of the radio spectrum, knowledge of its current usage is beneficial. As a result, several spectrum studies have been conducted in urban Atlanta, suburban Atlanta, and rural North Carolina. These studies improve upon past spectrum studies by resolving spectrum usage by nearly all its possible parameters: frequency, time, polarization, azimuth, and location type. The continuous frequency range from 400 MHz to 7.2 GHz was measured with a custom-designed system. More than 8 billion spectrum measurements were taken over several months of observation. A multi-parameter spectrum usage detection method was developed and analyzed with data from the spectrum studies. This method was designed to exploit all the characteristics of spectral information that was available from the spectrum studies. Analysis of the spectrum studies showed significant levels of underuse. The level of spectrum usage in time and azimuthal space was determined to be only 6.5% for the urban Atlanta, 5.3% for the suburban Atlanta, and 0.8% for the rural North Carolina spectrum studies. Most of the frequencies measured never experienced usage. Interference was detected in several protected radio astronomy and sensitive radio navigation bands. A cognitive radio network architecture to share spectrum with fixed microwave systems was developed. The architecture uses a broker-based sharing method to control spectrum access and investigate interference issues.
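The reported usage percentages are duty-cycle statistics over time and frequency. A minimal energy-detector sketch (threshold, margin, and variable names are illustrative assumptions, not the study's multi-parameter method) computes them from a time x frequency power matrix:

```python
import numpy as np

def occupancy(power_dbm, noise_floor_dbm, margin_db=6.0):
    """Fraction of (time x frequency) measurements whose power exceeds the
    noise floor by a detection margin -- a simple energy detector.
    Returns the duty cycle per frequency bin and the aggregate usage."""
    occupied = power_dbm > (noise_floor_dbm + margin_db)
    per_bin = occupied.mean(axis=0)   # duty cycle of each frequency bin
    overall = occupied.mean()         # aggregate spectrum usage
    return per_bin, overall

# two time snapshots x two frequency bins: only bin 1 is ever occupied
per_bin, overall = occupancy(np.array([[-100.0, -60.0],
                                       [-100.0, -100.0]]), -100.0)
print(overall)  # → 0.25
```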

  8. Privacy-Preserving and Scalable Service Recommendation Based on SimHash in a Distributed Cloud Environment

    Directory of Open Access Journals (Sweden)

    Yanwei Xu

    2017-01-01

    Full Text Available With the increasing volume of web services in the cloud environment, Collaborative Filtering (CF)-based service recommendation has become one of the most effective techniques to alleviate the heavy burden on the service selection decisions of a target user. However, the service recommendation bases, that is, historical service usage data, are often distributed in different cloud platforms. Two challenges are present in such a cross-cloud service recommendation scenario. First, a cloud platform is often not willing to share its data to other cloud platforms due to privacy concerns, which decreases the feasibility of cross-cloud service recommendation severely. Second, the historical service usage data recorded in each cloud platform may update over time, which reduces the recommendation scalability significantly. In view of these two challenges, a novel privacy-preserving and scalable service recommendation approach based on SimHash, named SerRecSimHash, is proposed in this paper. Finally, through a set of experiments deployed on a real distributed service quality dataset WS-DREAM, we validate the feasibility of our proposal in terms of recommendation accuracy and efficiency while guaranteeing privacy-preservation.
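The abstract does not detail SerRecSimHash itself, but the underlying SimHash primitive can be sketched as follows; the use of MD5 for feature hashing is an arbitrary illustrative choice:

```python
import hashlib

def simhash(features, bits=64):
    """64-bit SimHash fingerprint: similar feature sets produce
    fingerprints with small Hamming distance, so fingerprints can be
    shared instead of raw usage data."""
    v = [0] * bits
    for f in features:
        h = int(hashlib.md5(f.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```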

  9. Privacy in the Sharing Economy

    DEFF Research Database (Denmark)

    Ranzini, Giulia; Etter, Michael; Lutz, Christoph

    Report from the EU H2020 Research Project Ps2Share: Participation, Privacy, and Power in the Sharing Economy. This paper gives an in-depth overview of the topic of privacy in the sharing economy. It forms one part of a European Union Horizon 2020 Research Project on the sharing economy: "Ps2Share: Participation, Privacy, and Power in the Sharing Economy". We aim to foster better awareness of the consequences which the sharing economy has on the way people behave, think, interact, and socialize across Europe. Our overarching objective is to identify key challenges of the sharing economy and improve Europe's digital services through providing recommendations to Europe's institutions. The initial stage of this research project involves a set of three literature reviews of the state of research on three core topics in relation to the sharing economy: participation (1), privacy (2), and power (3).

  10. ANALYZING AVIATION SAFETY REPORTS: FROM TOPIC MODELING TO SCALABLE MULTI-LABEL CLASSIFICATION

    Data.gov (United States)

    National Aeronautics and Space Administration — ANALYZING AVIATION SAFETY REPORTS: FROM TOPIC MODELING TO SCALABLE MULTI-LABEL CLASSIFICATION AMRUDIN AGOVIC*, HANHUAI SHAN, AND ARINDAM BANERJEE Abstract. The...

  11. Implementing a hardware-friendly wavelet entropy codec for scalable video

    Science.gov (United States)

    Eeckhaut, Hendrik; Christiaens, Mark; Devos, Harald; Stroobandt, Dirk

    2005-11-01

    In the RESUME project (Reconfigurable Embedded Systems for Use in Multimedia Environments) we explore the benefits of an implementation of scalable multimedia applications using reconfigurable hardware by building an FPGA implementation of a scalable wavelet-based video decoder. The term "scalable" refers to a design that can easily accommodate changes in quality of service with minimal computational overhead. This is important for portable devices that have different Quality of Service (QoS) requirements and have varying power restrictions. The scalable video decoder consists of three major blocks: a Wavelet Entropy Decoder (WED), an Inverse Discrete Wavelet Transformer (IDWT) and a Motion Compensator (MC). The WED decodes entropy encoded parts of the video stream into wavelet transformed frames. These frames are decoded bitlayer per bitlayer. The more bitlayers are decoded, the higher the image quality (scalability in image quality). Resolution scalability is obtained as an inherent property of the IDWT. Finally, framerate scalability is achieved through hierarchical motion compensation. In this article we present the results of our investigation into the hardware implementation of such a scalable video codec. In particular we found that the implementation of the entropy codec is a significant bottleneck. We present an alternative, hardware-friendly algorithm for entropy coding with excellent data locality (both temporal and spatial), streaming capabilities, a high degree of parallelism, a smaller memory footprint and state-of-the-art compression while maintaining all required scalability properties. These claims are supported by an effective hardware implementation on an FPGA.
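The bitlayer-per-bitlayer quality scalability can be illustrated with a toy coefficient reconstruction; this is a schematic stand-in for the idea, not the codec's entropy decoder:

```python
def decode_bitplanes(planes, total_planes):
    """Reconstruct a non-negative coefficient from its most significant
    bitplanes; decoding more planes refines the value, which is the
    essence of quality (SNR) scalability."""
    value = 0
    for k, bit in enumerate(planes):
        value |= bit << (total_planes - 1 - k)
    return value

coeff = 0b101101  # 45, a hypothetical wavelet coefficient
planes = [(coeff >> (5 - k)) & 1 for k in range(6)]  # MSB first
partial = [decode_bitplanes(planes[:m], 6) for m in range(1, 7)]
print(partial)  # → [32, 32, 40, 44, 44, 45]
```

Truncating the stream after any bitplane still yields a usable, progressively better approximation of the coefficient.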

  12. SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS

    Data.gov (United States)

    National Aeronautics and Space Administration — SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS VARUN CHANDOLA AND RANGA RAJU VATSAVAI Abstract. Biomass monitoring,...

  13. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.
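Bit rate variability of a layer stream is commonly summarized by the coefficient of variation of per-frame sizes. The helper below is an illustrative sketch of such a summary statistic, not the paper's exact variability-distortion measure:

```python
import statistics

def covariation(frame_sizes):
    """Coefficient of variation (population std / mean) of per-frame
    sizes in bytes -- a standard summary of video traffic variability;
    higher values mean burstier traffic and harder network transport."""
    mean = statistics.fmean(frame_sizes)
    return statistics.pstdev(frame_sizes) / mean

print(covariation([1, 3]))  # → 0.5
```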

  14. GeoWeb Crawler: An Extensible and Scalable Web Crawling Framework for Discovering Geospatial Web Resources

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Huang

    2016-08-01

    Full Text Available With the advance of the World-Wide Web (WWW) technology, people can easily share content on the Web, including geospatial data and web services. Thus, the “big geospatial data management” issues start attracting attention. Among the big geospatial data issues, this research focuses on discovering distributed geospatial resources. As resources are scattered on the WWW, users cannot find resources of their interests efficiently. While the WWW has Web search engines addressing web resource discovery issues, we envision that the geospatial Web (i.e., the GeoWeb) also requires GeoWeb search engines. To realize a GeoWeb search engine, one of the first steps is to proactively discover GeoWeb resources on the WWW. Hence, in this study, we propose the GeoWeb Crawler, an extensible Web crawling framework that can find various types of GeoWeb resources, such as Open Geospatial Consortium (OGC) web services, Keyhole Markup Language (KML) files, and Environmental Systems Research Institute, Inc (ESRI) Shapefiles. In addition, we apply the distributed computing concept to promote the performance of the GeoWeb Crawler. The result shows that for 10 targeted resources types, the GeoWeb Crawler discovered 7351 geospatial services and 194,003 datasets. As a result, the proposed GeoWeb Crawler framework is proven to be extensible and scalable to provide a comprehensive index of GeoWeb.
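A crawler of this kind must recognize resource types from candidate URLs. The following heuristic classifier is a simplified, hypothetical stand-in for the GeoWeb Crawler's per-type detectors:

```python
def classify_geoweb_url(url):
    """Heuristically classify a URL as one of a few GeoWeb resource types.
    OGC services are spotted via the `service=` query parameter, files by
    their extension; real detectors would also fetch and parse responses."""
    u = url.lower()
    if any(f"service={s}" in u for s in ("wms", "wfs", "wcs", "csw")):
        return "OGC web service"
    if u.endswith((".kml", ".kmz")):
        return "KML"
    if u.endswith(".shp"):
        return "ESRI Shapefile"
    return "unknown"

print(classify_geoweb_url("http://example.org/ows?SERVICE=WMS&REQUEST=GetCapabilities"))
# → OGC web service
```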

  15. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system.
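The block-volume idea can be sketched with a pointwise operation applied slab by slab through a thread pool. This is a hypothetical helper; the platform's size-adaptive blocks, distribution, and scheduling are considerably more elaborate:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_blocked(volume, block=32, op=lambda b: np.clip(b, 0, 255)):
    """Apply a pointwise operation to a 3D volume slab by slab,
    dispatching slabs to a thread pool. Distributable block volumes
    would additionally adapt `block` to the data and core count."""
    out = np.empty_like(volume)
    slabs = [(slice(z, z + block),) for z in range(0, volume.shape[0], block)]

    def work(s):
        out[s] = op(volume[s])  # each thread writes a disjoint slab

    with ThreadPoolExecutor() as pool:
        list(pool.map(work, slabs))
    return out
```

Because each slab is independent for a pointwise operation, the blocked result matches the whole-volume result exactly while the work spreads across cores.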

  16. Application of the Scalable Coherent Interface to Data Acquisition at LHC

    CERN Multimedia

    2002-01-01

    RD24 : The RD24 activities in 1996 were dominated by test and integration of PCI-SCI bridges for VME-bus and for PC's for the 1996 milestones. In spite of the dispersion of RD24 membership into the ATLAS, ALICE and the proposed LHC-B experiments, collaboration and sharing of resources of SCI laboratories and equipment continued with excellent results and several doctoral theses. The availability of cheap PCI-SCI adapters has allowed construction of VME multicrate testbenches based on a variety of VME processors and workstations. Transparent memory-to-memory accesses between remote PCI buses over SCI have been established under the Linux, Lynx-OS and Windows-NT operating systems as a proof that scalable multicrate systems are ready to be implemented with off-the-shelf products. Commercial SCI-PCI adapters are based on a PCI-SCI ASIC from Dolphin. The FPGA based PCI-SCI adapter, designed by CERN and LBL for data acquisition at LHC and STAR allows addition of DAQ functions. The step from multicrate systems towa...

  17. Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging

    Directory of Open Access Journals (Sweden)

    Alberto Izquierdo

    2016-10-01

    Full Text Available This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of MEMS sensors was performed, and the beam pattern of a module, based on an 8 × 8 planar array and of several clusters of modules, was obtained. A flexible framework, formed by an FPGA, an embedded processor, a computer desktop, and a graphic processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person are presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system.
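A narrowband delay-and-sum sketch for one 8-element row of the planar array illustrates the beam-pattern computation underlying the acoustic images; the parameter names and half-wavelength spacing are illustrative assumptions, and the paper's wideband FFT beamformer is not reproduced:

```python
import numpy as np

def beam_pattern(steer_deg, n=8, d=0.5, angles=np.arange(-90, 91)):
    """Normalized delay-and-sum power pattern of an n-element uniform
    line of microphones (spacing d wavelengths) steered to steer_deg.
    Returns the scan angles and the power response at each angle."""
    x = np.arange(n) * d
    # steering weights compensate the phase of a wave from steer_deg
    w = np.exp(-2j * np.pi * x * np.sin(np.radians(steer_deg)))
    # array manifold: phase of an arrival from each scan angle
    a = np.exp(2j * np.pi * np.outer(np.sin(np.radians(angles)), x))
    p = np.abs(a @ w) ** 2
    return angles, p / p.max()

angles, p = beam_pattern(steer_deg=20)
print(angles[np.argmax(p)])  # → 20 (main lobe at the steering angle)
```

Scanning the steering angle over a grid and recording the received power at each direction is what produces an acoustic image.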

  18. Scalable and Fault Tolerant Failure Detection and Consensus

    Energy Technology Data Exchange (ETDEWEB)

    Katti, Amogh [University of Reading, UK; Di Fatta, Giuseppe [University of Reading, UK; Naughton III, Thomas J [ORNL; Engelmann, Christian [ORNL

    2015-01-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to reach agreement among the surviving processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
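The logarithmic scaling of Gossip cycles can be illustrated with a minimal push-gossip simulation; this is a schematic model of information spread, not the paper's failure-detection or consensus algorithm:

```python
import random

def gossip_rounds(n, fanout=1, seed=42):
    """Simulate push gossip: each informed process contacts `fanout`
    uniformly random peers per cycle; returns the number of cycles
    until all n processes are informed. Since the informed set can at
    most double per cycle (fanout=1), at least log2(n) cycles are needed."""
    random.seed(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        for _ in range(len(informed) * fanout):
            informed.add(random.randrange(n))
    return rounds
```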

  19. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs, demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment (and improve) our implementation as well as adapt it to new use cases.

  20. Performance and Scalability Evaluation of the Ceph Parallel File System

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Nelson, Mark [Inktank Storage, Inc.; Oral, H Sarp [ORNL; Settlemyer, Bradley W [ORNL; Atchley, Scott [ORNL; Caldwell, Blake A [ORNL; Hill, Jason J [ORNL

    2013-01-01

    Ceph is an open-source and emerging parallel distributed file and storage system technology. By design, Ceph assumes running on unreliable and commodity storage and network hardware and provides reliability and fault-tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations from mostly parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation is performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved code quality, scalability, and performance. These changes should also benefit both Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development, showing great promise.

  1. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  2. Towards Scalable Strain Gauge-Based Joint Torque Sensors.

    Science.gov (United States)

    Khan, Hamza; D'Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G; Cuschieri, Alfred; Semini, Claudio

    2017-08-18

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as square-cut torque sensor (SCTS), the significant features of which are high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS) and confirmed by observed data obtained by load testing experiments. The high performance of SCTS was confirmed by studies involving changes in size, material and/or wings width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR).

  3. Developing a scalable artificial photosynthesis technology through nanomaterials by design.

    Science.gov (United States)

    Lewis, Nathan S

    2016-12-06

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  4. A highly scalable peptide-based assay system for proteomics.

    Directory of Open Access Journals (Sweden)

    Igor A Kozlov

    Full Text Available We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.

  5. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.
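
    The modify-at-node, recover-at-sink flow described above can be sketched with a toy additive-masking scheme. This is an illustrative sketch only: the seed-based mask construction, function names, and parameters below are hypothetical, not the actual Sca-PBDA protocol.

```python
import random

def node_report(value, seed):
    # Node-side: perturb the raw reading with a seed-derived mask so that
    # intermediate aggregators never see the true value.
    return value + random.Random(seed).uniform(-100, 100)

def aggregate(reports):
    # Cluster heads simply sum perturbed reports (intra-/inter-cluster step).
    return sum(reports)

def sink_recover(agg, seeds):
    # Sink-side: regenerate every mask from the seeds it distributed and
    # subtract their total, recovering the exact aggregate.
    total_mask = sum(random.Random(s).uniform(-100, 100) for s in seeds.values())
    return agg - total_mask

seeds = {1: 42, 2: 7, 3: 99}            # sink-distributed per-node seeds
readings = {1: 20.5, 2: 18.0, 3: 21.5}  # true sensor values
reports = [node_report(readings[n], seeds[n]) for n in seeds]
recovered = sink_recover(aggregate(reports), seeds)
assert abs(recovered - sum(readings.values())) < 1e-9
```

    Because the sink knows every seed it distributed, it can strip the aggregate mask exactly, while cluster heads only ever handle perturbed values.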

  6. The dust acoustic waves in three dimensional scalable complex plasma

    CERN Document Server

    Zhukhovitskii, D I

    2015-01-01

    Dust acoustic waves in the bulk of a dust cloud in complex plasma of low pressure gas discharge under microgravity conditions are considered. The dust component of complex plasma is assumed to be a scalable system that conforms to the ionization equation of state (IEOS) developed in our previous study. We find singular points of this IEOS that determine the behavior of the sound velocity in different regions of the cloud. The fluid approach is utilized to deduce the wave equation that includes the neutral drag term. It is shown that the sound velocity is fully defined by the particle compressibility, which is calculated on the basis of the scalable IEOS. The sound velocities and damping rates calculated for different 3D complex plasmas both in ac and dc discharges demonstrate a good correlation with experimental data that are within the limits of validity of the theory. The theory provides an interpretation for the observed independence of the sound velocity from the coordinate and for a weak dependence on the particle ...

  7. Developing a scalable artificial photosynthesis technology through nanomaterials by design

    Science.gov (United States)

    Lewis, Nathan S.

    2016-12-01

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  8. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Directory of Open Access Journals (Sweden)

    Zihao Yang

    2017-09-01

    Full Text Available The Nernst thermopower usually is considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/KT at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.

  9. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Science.gov (United States)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower usually is considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/KT at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
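
    The linear scaling of the Nernst voltage with wire length reported above can be illustrated numerically. Only the scaling law and the -2.6 μV/KT Galfenol coefficient are taken from the record; the field strength and the radial temperature gradient below are assumed round numbers, not measured values.

```python
# Illustrative Nernst-voltage estimate for a coiled wire.
N_nernst = -2.6e-6   # V/(K*T), room-temperature Nernst coefficient of Galfenol
B = 1.0              # T, axial magnetic field (assumed)
dT_dr = 1.0e4        # K/m, radial temperature gradient across the wire (assumed)

def nernst_voltage(length_m):
    # Azimuthal Nernst field N*B*(dT/dr), integrated along the coiled wire.
    return N_nernst * B * dT_dr * length_m

voltages = [nernst_voltage(L) for L in (1.0, 2.0, 4.0)]
print([f"{v * 1e3:.1f} mV" for v in voltages])  # doubles as the wire length doubles
```

    The take-away is the geometry: output voltage grows simply by coiling more wire around the heat source, which is what makes the generator scalable.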

  10. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  11. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

    We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work for a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows including turbulence on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently but with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.
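
    For context, the Biot-Savart kernel that the FMM accelerates is, in direct form, an O(N²) all-pairs sum. The sketch below shows that direct evaluation with a smoothed kernel; the smoothing parameter and particle data are illustrative, and a real vortex method would add the stretching term and the FMM machinery on top.

```python
import math

def biot_savart_direct(positions, strengths, eps=1e-3):
    """Direct O(N^2) velocity evaluation (the part an FMM accelerates):
    u_i = sum_j alpha_j x r_ij / (4*pi*(|r_ij|^2 + eps^2)^{3/2}), smoothed kernel."""
    n = len(positions)
    vel = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi = positions[i]
        for j in range(n):
            if i == j:
                continue
            r = [xi[k] - positions[j][k] for k in range(3)]
            d2 = r[0] ** 2 + r[1] ** 2 + r[2] ** 2 + eps ** 2
            c = 1.0 / (4.0 * math.pi * d2 ** 1.5)
            a = strengths[j]
            # c * cross(alpha_j, r_ij)
            vel[i][0] += c * (a[1] * r[2] - a[2] * r[1])
            vel[i][1] += c * (a[2] * r[0] - a[0] * r[2])
            vel[i][2] += c * (a[0] * r[1] - a[1] * r[0])
    return vel

# Two z-oriented vortex particles induce equal and opposite y-velocities.
v = biot_savart_direct([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                       [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)])
print(v)
```

    The quadratic cost of this loop is exactly why the paper's heterogeneous FMM, which approximates far-field contributions hierarchically, is needed to reach billion-particle runs.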

  12. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as square-cut torque sensor (SCTS), the significant features of which are high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS) confirmed by observed data obtained by load testing experiments. The high performance of SCTS was confirmed by studies involving changes in size, material and/or wings width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446
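
    As general background to strain gauge-based torque sensing (not the SCTS calibration itself), a full-bridge readout converts the bridge output voltage to strain and then, via a calibration constant, to torque. The gauge factor and the constant k below are hypothetical illustration values.

```python
# Minimal full-bridge strain-gauge readout sketch (hypothetical calibration).
def torque_from_bridge(v_out, v_exc, gauge_factor=2.0, k=100.0):
    strain = v_out / (v_exc * gauge_factor)  # full-bridge small-strain approximation
    return k * strain                        # N*m, via calibration constant k

tau = torque_from_bridge(v_out=0.004, v_exc=5.0)
print(f"{tau:.3f} N*m")
```

    In practice k is obtained from load-testing experiments of the kind the paper describes, which is also where linearity and clockwise/counterclockwise symmetry are verified.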

  13. Scalable Feedback and Assessment Activities in Open Online Education

    NARCIS (Netherlands)

    Kasch, Julia; Van Rosmalen, Peter; Kalz, Marco

    2016-01-01

    Open Online Education is something great. You can reach thousands of students, you can share ideas quickly whenever and wherever. Right now, for example, I am giving this live presentation about my PhD project and I can easily share my presentation with you who I otherwise probably would never meet.

  14. Cooperation Scheme For Distributed Spectrum Sensing In Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Ying Dai

    2014-09-01

    Full Text Available Spectrum sensing is an essential phase in cognitive radio networks (CRNs). It enables secondary users (SUs) to access licensed spectrum, which is temporarily not occupied by the primary users (PUs). The widely used scheme of spectrum sensing is cooperative sensing, in which an SU shares its sensing results with other SUs to improve the overall sensing performance, while maximizing its throughput. For a single SU, if its sensing results are shared early, it would have more time for data transmission, which improves the throughput. However, when multiple SUs send their sensing results early, they are more likely to send out their sensing results simultaneously over the same signaling channel. Under these conditions, conflicts would likely happen. Then, both the sensing performance and throughput would be affected. Therefore, it is important to take when-to-share into account. We model the spectrum sensing as an evolutionary game. Different from previous works, the strategy set for each player in our game model contains not only whether to share its sensing results, but also when to share. The payoff for each player is defined based on the throughput, which considers the influence of the time spent both on sensing and sharing. We prove the existence of the evolutionarily stable strategy (ESS). In addition, we propose a practical algorithm for each secondary user to converge to the ESS. We conduct experiments on our testbed consisting of 4 USRP N200s. The experimental results verify our model, including the convergence to the ESS.
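
    Convergence to an evolutionarily stable strategy of the kind proved in the paper can be illustrated with textbook replicator dynamics. The three strategies loosely mirror a share-early/share-late/no-share choice, but the payoff matrix below is a hypothetical stand-in, not the throughput-based payoff defined in the paper.

```python
# Replicator dynamics: strategies with above-average payoff grow in share.
def replicator_step(x, A, dt=0.01):
    n = len(x)
    f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # fitness
    avg = sum(x[i] * f[i] for i in range(n))                       # mean fitness
    return [x[i] + dt * x[i] * (f[i] - avg) for i in range(n)]

A = [[2, 3, 4],   # share early   (hypothetical payoffs)
     [1, 2, 3],   # share late
     [0, 1, 2]]   # do not share
x = [1 / 3, 1 / 3, 1 / 3]       # start with a uniform population
for _ in range(5000):
    x = replicator_step(x, A)
print([round(s, 4) for s in x])  # population concentrates on the dominant strategy
```

    With this dominant-strategy matrix the ESS is the pure "share early" strategy; the paper's payoff instead trades sensing/sharing time against throughput, so its ESS is generally a more interesting mixed point.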

  15. Level-2 Milestone 5588: Deliver Strategic Plan and Initial Scalability Assessment by Advanced Architecture and Portability Specialists Team

    Energy Technology Data Exchange (ETDEWEB)

    Draeger, Erik W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-30

    This report documents the fact that the work in creating a strategic plan and beginning customer engagements has been completed. The description of the milestone is: The newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of 1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and 2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.

  16. Fixed Access Network Sharing

    Science.gov (United States)

    Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio

    2015-12-01

    Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4k and 8k lines of video resolution. However, building fiber access networks is a costly endeavor. It requires significant capital in order to cover any significant geographic coverage. Hence many companies are forming partnerships and joint-ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build to cover a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bi-lateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, the service provider has a greater ability to define the network and hence to define their product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.

  17. Risk Sharing and Layoff Risk in Profit Sharing

    OpenAIRE

    Fabella, Raul V.

    1995-01-01

    We show that if the employer is risk averse, however slightly, there is always a profit sharing contract that will Pareto-dominate the spot wage contract in the sense of pure risk sharing. The smaller is the employer risk aversion, the narrower is the room for profit sharing. The higher the workers value employment stability (less layoff risk), the more Pareto attractive is profit sharing regardless of employer risk aversion.

  18. Shared care and boundaries:

    DEFF Research Database (Denmark)

    Winthereik, Brit Ross

    2008-01-01

    Purpose – The paper seeks to examine how an online maternity record involving pregnant women worked as a means to create shared maternity care. Design/methodology/approach – Ethnographic techniques have been used. The paper adopts a theoretical/methodological framework based on science... between home and clinic, which the project identifies as problematic and seeks to transgress. Research limitations/implications – The pilot project, which is used as a case, is terminated prematurely. However, this does not affect the fact that more attention should be paid to the specific redistribution...

  19. Can power be shared?

    Science.gov (United States)

    Ten Pas, William S

    2013-01-01

    Dental insurance began with a partnership between dental service organizations and state dental associations with a view toward expanding the number of Americans receiving oral health care and as a means for permitting firms and other organizations to offer employee benefits. The goals have been achieved, but the alliance between dentistry and insurance has become strained. A lack of dialogue has fostered mutual misconceptions, some of which are reviewed in this paper. It is possible that the public, the profession, and the dental insurance industry can all be strengthened, but only through power-sharing around the original common objective.

  20. Shared Oral Care

    DEFF Research Database (Denmark)

    Hede, Børge; Elmelund Poulsen,, Johan; Christophersen, Rasmus

    2014-01-01

    Shared Oral Care - Prevention of oral diseases in care centres. Introduction and aim: Inadequate oral hygiene among care-dependent elderly is a common and well-documented health problem that can lead to massive development of dental disease and can further be a contributing cause of serious... resource use it is possible to create considerably improved oral hygiene among care-dependent elderly. Key words: Geriatric dentistry, nursing home, community health services, prevention, situated learning...

  1. Shared consultant physician posts.

    LENUS (Irish Health Repository)

    Cooke, J

    2012-01-31

    Our aim was to assess the acceptability and cost-efficiency of shared consultancy posts. Two consultant physicians worked alternate fortnights for a period of twelve months. Questionnaires were distributed to general practitioners, nurses, consultants and junior doctors affected by the arrangement. Patients or their next of kin were contacted by telephone. 1/17 of consultants described the experience as negative. 14/19 junior doctors reported a positive experience. 11 felt that training had been improved while 2 felt that it had been adversely affected. 17/17 GPs were satisfied with the arrangement. 1/86 nurses surveyed reported a negative experience. 1/48 patients were unhappy with the arrangement. An extra 2.2 (p<0.001) patients were seen per clinic. Length of stay was shortened by 2.49 days (p<0.001). A saving of 69,212 was made due to decreased locum requirements. We present data suggesting structured shared consultancy posts can be broadly acceptable and cost efficient in Ireland.

  2. Reconceptualising Shared Services

    Directory of Open Access Journals (Sweden)

    Peter McKinlay

    2011-12-01

    Full Text Available Endeavours to improve the efficiency and effectiveness of local government have been a persistent theme both of politicians in higher tiers of government and of interest groups, especially business. The two contenders for improvement which receive most coverage both in the research literature and in popular discussion are amalgamation and shared services. Arguments from the literature have generally favoured shared services over amalgamation. Bish (2001), in a comprehensive review of North American research, dismisses the argument for amalgamation as a product of flawed nineteenth-century thinking and a bureaucratic urge for centralized control. He does so making the very reasonable point that the presumed economies of scale which will result from amalgamation are a function not of the size and scale of individual local authorities, but of the services for which those local authorities are responsible, and the point at which economies of scale will be optimised will be very different for different services. The case against amalgamation is also reinforced by the absence of any significant post-facto evidence that amalgamation achieves either the promised savings or the anticipated efficiency gains (McKinlay 2006).

  3. Vaccines, our shared responsibility.

    Science.gov (United States)

    Pagliusi, Sonia; Jain, Rishabh; Suri, Rajinder Kumar

    2015-05-05

    The Developing Countries Vaccine Manufacturers' Network (DCVMN) held its fifteenth annual meeting from October 27-29, 2014, New Delhi, India. The DCVMN, together with the co-organizing institution Panacea Biotec, welcomed over 240 delegates representing high-profile governmental and nongovernmental global health organizations from 36 countries. Over the three-day meeting, attendees exchanged information about their efforts to achieve their shared goal of preventing death and disability from known and emerging infectious diseases. Special praise was extended to all stakeholders involved in the success of polio eradication in South East Asia and highlighted challenges in vaccine supply for measles-rubella immunization over the coming decades. Innovative vaccines and vaccine delivery technologies indicated creative solutions for achieving global immunization goals. Discussions were focused on three major themes including regulatory challenges for developing countries that may be overcome with better communication; global collaborations and partnerships for leveraging investments and enable uninterrupted supply of affordable and suitable vaccines; and leading innovation in vaccines difficult to develop, such as dengue, Chikungunya, typhoid-conjugated and EV71, and needle-free technologies that may speed up vaccine delivery. Moving further into the Decade of Vaccines, participants renewed their commitment to shared responsibility toward a world free of vaccine-preventable diseases. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Knowledge sharing in horizontal networks

    National Research Council Canada - National Science Library

    Juliano Nunes Alves; Breno Augusto Diniz Pereira; Augusto Diniz

    2012-01-01

    The present study aimed to identify the process of sharing knowledge between the partners involved in the network, as well as the dimensions of knowledge sharing between enterprises belonging...

  5. Electron-Nuclear Energy Sharing in Above-Threshold Multiphoton Dissociative Ionization of H2

    DEFF Research Database (Denmark)

    Wu, J.; Kunitski, M.; Pitzer, M.

    2013-01-01

    We report experimental observation of the energy sharing between electron and nuclei in above-threshold multiphoton dissociative ionization of H2 by strong laser fields. The absorbed photon energy is shared between the ejected electron and nuclei in a correlated fashion, resulting in multiple diagonal lines in their joint energy spectrum governed by the energy conservation of all fragment particles.

  6. Electron-nuclear energy sharing in above-threshold multiphoton dissociative ionization of H2.

    Science.gov (United States)

    Wu, J; Kunitski, M; Pitzer, M; Trinter, F; Schmidt, L Ph H; Jahnke, T; Magrakvelidze, M; Madsen, C B; Madsen, L B; Thumm, U; Dörner, R

    2013-07-12

    We report experimental observation of the energy sharing between electron and nuclei in above-threshold multiphoton dissociative ionization of H2 by strong laser fields. The absorbed photon energy is shared between the ejected electron and nuclei in a correlated fashion, resulting in multiple diagonal lines in their joint energy spectrum governed by the energy conservation of all fragment particles.
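
    The diagonal lines in the joint electron-nuclear energy spectrum follow directly from energy conservation: for each net photon number n, the fragment energies satisfy E_e + E_N = n·E_photon - E_0. A small numeric sketch, with an assumed photon energy and a hypothetical threshold offset (neither is a measured value from the paper):

```python
# One diagonal per net absorbed photon number n: E_e + E_N = n*E_photon - E_0.
E_photon = 1.55   # eV, 800 nm laser photon (assumed)
E_0 = 2.0         # eV, combined ionization/dissociation threshold (hypothetical)

totals = {n: n * E_photon - E_0 for n in (2, 3, 4)}
for n, t in totals.items():
    print(f"n={n}: E_e + E_N = {t:.2f} eV")  # each n fixes one diagonal line
```

    Any point on a given diagonal corresponds to a different split of the same total energy between the electron and the nuclei, which is the correlated sharing the experiment resolves.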

  7. Shared Services Management: Critical Factors

    OpenAIRE

    Shouhong Wang; Hai Wang

    2015-01-01

    The cloud computing technology has accelerated shared services in the government and private sectors. This paper proposes a research framework of critical success factors of shared services in the aspects of strategy identification, collaborative partnership networking, optimal shared services process re-designing, and new policies and regulations. A survey has been employed to test the hypotheses. The test results indicate that clear vision of strategies of shared services, long term busines...

  8. Model Sharing and Collaboration using HydroShare

    Science.gov (United States)

    Goodall, J. L.; Morsy, M. M.; Castronova, A. M.; Miles, B.; Merwade, V.; Tarboton, D. G.

    2015-12-01

    HydroShare is a web-based system funded by the National Science Foundation (NSF) for sharing hydrologic data and models as resources. Resources in HydroShare can either be assigned a generic type, meaning the resource only has Dublin Core metadata properties, or one of a growing number of specific resource types with enhanced metadata profiles defined by the HydroShare development team. Examples of specific resource types in the current release of HydroShare (http://www.hydroshare.org) include time series, geographic raster, Multidimensional (NetCDF), model program, and model instance. Here we describe research and development efforts in HydroShare project for model-related resources types. This work has included efforts to define metadata profiles for common modeling resources, execute models directly through the HydroShare user interface using Docker containers, and interoperate with the 3rd party application SWATShare for model execution and visualization. These examples demonstrate the benefit of HydroShare to support model sharing and address collaborative problems involving modeling. The presentation will conclude with plans for future modeling-related development in HydroShare including supporting the publication of workflow resources, enhanced metadata for additional hydrologic models, and linking model resources with other resources in HydroShare to capture model provenance.

  9. Fractions: How to Fair Share

    Science.gov (United States)

    Wilson, P. Holt; Edgington, Cynthia P.; Nguyen, Kenny H.; Pescosolido, Ryan S.; Confrey, Jere

    2011-01-01

    Children learn from a very early age what it means to get their "fair share." Whether it is candy or birthday cake, many children successfully create equal-size groups or parts of a collection or whole but later struggle to create fair shares of multiple wholes, such as fairly sharing four pies among a family of seven. Recent research suggests…

  10. Risk sharing and public transfers

    OpenAIRE

    Dercon, Stefan; Krishnan, Pramila

    2002-01-01

    We use public transfers in the form of food aid to test for the presence of risk sharing arrangements at the village level in rural Ethiopia. We reject perfect risk-sharing, but find evidence of partial risk-sharing via transfers. There is also evidence consistent with crowding out of informal insurance linked to food aid programmes. Keywords: risk; public transfers; informal insurance

  11. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to

  12. Scalable nanostructuring on polymer by a SiC stamp: optical and wetting effects

    DEFF Research Database (Denmark)

    Argyraki, Aikaterini; Lu, Weifang; Petersen, Paul Michael

    2015-01-01

    A method for fabricating scalable antireflective nanostructures on polymer surfaces (polycarbonate) is demonstrated. The transition from small scale fabrication of nanostructures to a scalable replication technique can be quite challenging. In this work, an area per print corresponding to a 2-inch...

  13. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  14. Scalable Multifunction Active Phased Array Systems: from concept to implementation; 2006BU1-IS

    NARCIS (Netherlands)

    LaMana, M.; Huizing, A.

    2006-01-01

    The SMRF (Scalable Multifunction Radio Frequency Systems) concept has been launched in the context of the WEAG (Western European Armament Group), recently restructured into the EDA (European Defence Agency). A derived concept is introduced here, namely the SMRF-APAS (Scalable Multifunction Radio

  15. A NEaT Design for reliable and scalable network stacks

    NARCIS (Netherlands)

    Hruby, Tomas; Giuffrida, Cristiano; Sambuc, Lionel; Bos, Herbert; Tanenbaum, Andrew S.

    2016-01-01

    Operating systems provide a wide range of services, which are crucial for the increasingly high reliability and scalability demands of modern applications. Providing both reliability and scalability at the same time is hard. Commodity OS architectures simply lack the design abstractions to do so for

  16. Shared consultant physician posts.

    Science.gov (United States)

    Cooke, J; Molefe, C; Carew, S; Finucane, P; Clinch, D

    2009-01-01

    Our aim was to assess the acceptability and cost-efficiency of shared consultancy posts. Two consultant physicians worked alternate fortnights for a period of twelve months. Questionnaires were distributed to general practitioners, nurses, consultants and junior doctors affected by the arrangement. Patients or their next of kin were contacted by telephone. 1/17 of consultants described the experience as negative. 14/19 junior doctors reported a positive experience. 11 felt that training had been improved while 2 felt that it had been adversely affected. 17/17 GPs were satisfied with the arrangement. 1/86 nurses surveyed reported a negative experience. 1/48 patients were unhappy with the arrangement. An extra 2.2 (p<0.001) patients were seen per clinic. Length of stay was shortened by 2.49 days (p<0.001). A saving of 69,212 was made due to decreased locum requirements. We present data suggesting structured shared consultancy posts can be broadly acceptable and cost efficient in Ireland.

  17. SHARED TECHNOLOGY TRANSFER PROGRAM

    Energy Technology Data Exchange (ETDEWEB)

GRIFFIN, JOHN M.; HAUT, RICHARD C.

    2008-03-07

    The program established a collaborative process with domestic industries for the purpose of sharing Navy-developed technology. Private sector businesses were educated so as to increase their awareness of the vast amount of technologies that are available, with an initial focus on technology applications that are related to the Hydrogen, Fuel Cells and Infrastructure Technologies (Hydrogen) Program of the U.S. Department of Energy. Specifically, the project worked to increase industry awareness of the vast technology resources available to them that have been developed with taxpayer funding. NAVSEA-Carderock and the Houston Advanced Research Center teamed with Nicholls State University to catalog NAVSEA-Carderock unclassified technologies, rated the level of readiness of the technologies and established a web based catalog of the technologies. In particular, the catalog contains technology descriptions, including testing summaries and overviews of related presentations.

  18. Borrowing brainpower - sharing insecurities

    DEFF Research Database (Denmark)

    Wegener, Charlotte; Meier, Ninna; Ingerslev, Karen

    2016-01-01

    Academic writing is a vital, yet complex skill that must be developed within a doctoral training process. In addition, becoming an academic researcher is a journey of changing sense of self and identity. Through analysis of a group session, we show how the feedback of peers addresses questions...... of structure and writing style along with wider issues of researcher identity. Thus, peer learning is demonstrated as a process of simultaneously building a text and an identity as scholarly researcher. The paper advocates ‘borrowing brainpower’ from peers in order to write better texts and, at the same time......, ‘share insecurities’ during the development of the researcher identity. Based on a distributed notion of peer learning and identity, we point to the need for further research into the everyday activities of doctoral writing groups in order to understand the dynamic relationship between production of text...

  19. Photonic spin-controlled multifunctional shared-aperture antenna array.

    Science.gov (United States)

    Maguid, Elhanan; Yulevich, Igor; Veksler, Dekel; Kleiner, Vladimir; Brongersma, Mark L; Hasman, Erez

    2016-06-03

    The shared-aperture phased antenna array developed in the field of radar applications is a promising approach for increased functionality in photonics. The alliance between the shared-aperture concepts and the geometric phase phenomenon arising from spin-orbit interaction provides a route to implement photonic spin-control multifunctional metasurfaces. We adopted a thinning technique within the shared-aperture synthesis and investigated interleaved sparse nanoantenna matrices and the spin-enabled asymmetric harmonic response to achieve helicity-controlled multiple structured wavefronts such as vortex beams carrying orbital angular momentum. We used multiplexed geometric phase profiles to simultaneously measure spectrum characteristics and the polarization state of light, enabling integrated on-chip spectropolarimetric analysis. The shared-aperture metasurface platform opens a pathway to novel types of nanophotonic functionality. Copyright © 2016, American Association for the Advancement of Science.

  20. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

Mellor-Crummey, John [William Marsh Rice University]

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  1. Optimization of Hierarchical Modulation for Use of Scalable Media

    Directory of Open Access Journals (Sweden)

    Heneghan Conor

    2010-01-01

Full Text Available This paper studies Hierarchical Modulation, a transmission strategy for forthcoming scalable multimedia over frequency-selective fading channels, aimed at improving perceptible quality. An optimization strategy for Hierarchical Modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications a free choice of the relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. A similar optimization can be used in the multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as implementation examples of this optimization strategy, and demonstrate savings in SNR and improvement in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.

  2. Hierarchical Sets: Analyzing Pangenome Structure through Scalable Set Visualizations

    DEFF Research Database (Denmark)

    Pedersen, Thomas Lin

    2017-01-01

    information to increase in knowledge. As the pangenome data structure is essentially a collection of sets we explore the potential for scalable set visualization as a tool for pangenome analysis. We present a new hierarchical clustering algorithm based on set arithmetics that optimizes the intersection sizes...... along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence pattern do...... of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https...

  3. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

Full Text Available Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, for instance in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.

  4. Scalable Fabrication of 2D Semiconducting Crystals for Future Electronics

    Directory of Open Access Journals (Sweden)

    Jiantong Li

    2015-12-01

Full Text Available Two-dimensional (2D) layered materials are anticipated to be promising for future electronics. However, their electronic applications are severely restricted by the availability of such materials with high quality and at a large scale. In this review, we systematically introduce versatile, scalable synthesis techniques in the literature for high-crystallinity, large-area 2D semiconducting materials, especially transition metal dichalcogenides, and 2D material-based advanced structures, such as 2D alloys, 2D heterostructures and 2D material devices engineered at the wafer scale. A systematic comparison among different techniques is conducted with respect to device performance. The present status and the perspective for future electronics are discussed.

  5. A Modular, Scalable, Extensible, and Transparent Optical Packet Buffer

    Science.gov (United States)

    Small, Benjamin A.; Shacham, Assaf; Bergman, Keren

    2007-04-01

    We introduce a novel optical packet switching buffer architecture that is composed of multiple building-block modules, allowing for a large degree of scalability. The buffer supports independent and simultaneous read and write processes without packet rejection or misordering and can be considered a fully functional packet buffer. It can easily be programmed to support two prioritization schemes: first-in first-out (FIFO) and last-in first-out (LIFO). Because the system leverages semiconductor optical amplifiers as switching elements, wideband packets can be routed transparently. The operation of the system is discussed with illustrative packet sequences, which are then verified on an actual implementation composed of conventional fiber-optic componentry.
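The two prioritization schemes the buffer supports are easy to illustrate in software. The sketch below is only a behavioral model of the FIFO/LIFO programmability; the class name, capacity limit, and full-buffer error are illustrative assumptions, not part of the optical design.

```python
from collections import deque

class PacketBuffer:
    """Toy behavioral model of a packet buffer that supports
    independent writes and reads and can be programmed as either
    FIFO or LIFO, as in the optical architecture described above."""
    def __init__(self, mode="FIFO", capacity=8):
        assert mode in ("FIFO", "LIFO")
        self.mode, self.capacity = mode, capacity
        self._q = deque()

    def write(self, packet):
        # Model choice: refuse writes when full rather than drop silently.
        if len(self._q) >= self.capacity:
            raise BufferError("buffer full")
        self._q.append(packet)

    def read(self):
        # FIFO returns the oldest packet, LIFO the newest.
        return self._q.popleft() if self.mode == "FIFO" else self._q.pop()

fifo, lifo = PacketBuffer("FIFO"), PacketBuffer("LIFO")
for p in ("p1", "p2", "p3"):
    fifo.write(p)
    lifo.write(p)
print(fifo.read(), lifo.read())  # oldest vs. newest packet
```

Note that neither scheme can misorder packets relative to its own discipline: each read deterministically selects one end of the queue.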

  6. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and fine detailed fluids such as smoke with fast increasing vortex filaments and smoke particles. The authors propose a novel vortex filaments in grids scheme in which the uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  7. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    Directory of Open Access Journals (Sweden)

    Kae Nemoto

    2014-08-01

    Full Text Available Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  8. Scalable Creation of Long-Lived Multipartite Entanglement

    Science.gov (United States)

    Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.

    2017-10-01

We demonstrate the deterministic generation of multipartite entanglement based on scalable methods. Four qubits are encoded in 40Ca+, stored in a microstructured segmented Paul trap. These qubits are sequentially entangled by laser-driven pairwise gate operations. Between these, the qubit register is dynamically reconfigured via ion shuttling operations, where ion crystals are separated and merged, and ions are moved in and out of a fixed laser interaction zone. A sequence consisting of three pairwise entangling gates yields a four-ion Greenberger-Horne-Zeilinger state |ψ⟩ = (1/√2)(|0000⟩ + |1111⟩), and full quantum state tomography reveals a state fidelity of 94.4(3)%. We analyze the decoherence of this state and employ dynamical decoupling on the spatially distributed constituents to maintain 69(5)% coherence at a storage time of 1.1 s.

  9. A Practical and Scalable Tool to Find Overlaps between Sequences

    Science.gov (United States)

    Haj Rachid, Maan

    2015-01-01

    The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment. PMID:25961045
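The all-pairs suffix-prefix problem the abstract addresses can be illustrated with a straightforward trie-based sketch: for every ordered pair (A, B), find the longest suffix of A that is a prefix of B. This is not the paper's compact prefix tree or its parallel implementation, just a simple quadratic-time baseline for small inputs.

```python
def all_pairs_suffix_prefix(strings):
    """For all ordered pairs (a, b), compute the length of the longest
    suffix of strings[a] that is a prefix of strings[b].
    Naive trie walk over every suffix; fine for small inputs."""
    # Build a prefix trie; each node records which strings pass through it.
    root = {"ids": set(), "kids": {}}
    for idx, s in enumerate(strings):
        node = root
        for ch in s:
            node = node["kids"].setdefault(ch, {"ids": set(), "kids": {}})
            node["ids"].add(idx)

    n = len(strings)
    best = [[0] * n for _ in range(n)]
    for a, s in enumerate(strings):
        for start in range(len(s)):      # walk every suffix down the trie
            node, depth = root, 0
            for ch in s[start:]:
                if ch not in node["kids"]:
                    break
                node = node["kids"][ch]
                depth += 1
                for b in node["ids"]:    # suffix of a matches prefix of b
                    if b != a:
                        best[a][b] = max(best[a][b], depth)
    return best

# "cd" ends "abcd" and starts "cdab"; "dab" ends "cdab" and is all of "dab".
print(all_pairs_suffix_prefix(["abcd", "cdab", "dab"]))
```

Gusfield's classical solution replaces the per-suffix walks with a single traversal using stacks, which is the kind of asymptotic improvement a production tool needs; this sketch only shows the problem's structure.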

  10. A Practical and Scalable Tool to Find Overlaps between Sequences

    Directory of Open Access Journals (Sweden)

    Maan Haj Rachid

    2015-01-01

    Full Text Available The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment.

  11. CloudETL: Scalable Dimensional ETL for Hadoop and Hive

    DEFF Research Database (Denmark)

    Xiufeng, Liu; Thomsen, Christian; Pedersen, Torben Bach

    Extract-Transform-Load (ETL) programs process data from sources into data warehouses (DWs). Due to the rapid growth of data volumes, there is an increasing demand for systems that can scale on demand. Recently, much attention has been given to MapReduce which is a framework for highly parallel...... handling of massive data sets in cloud environments. The MapReduce-based Hive has been proposed as a DBMS-like system for DWs and provides good and scalable analytical features. It is,however, still challenging to do proper dimensional ETL processing with Hive; for example, UPDATEs are not supported which...... makes handling of slowly changing dimensions (SCDs) very difficult. To remedy this, we here present the cloud-enabled ETL framework CloudETL. CloudETL uses the open source MapReduce implementation Hadoop to parallelize the ETL execution and to process data into Hive. The user defines the ETL process...

  12. A Software and Hardware IPTV Architecture for Scalable DVB Distribution

    Directory of Open Access Journals (Sweden)

    Georg Acher

    2009-01-01

Full Text Available Many standards and even more proprietary technologies deal with IP-based television (IPTV). But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a lightweight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders.

  13. Adaptive Streaming of Scalable Videos over P2PTV

    Directory of Open Access Journals (Sweden)

    Youssef Lahbabi

    2015-01-01

Full Text Available In this paper, we propose a new Scalable Video Coding (SVC) quality-adaptive peer-to-peer television (P2PTV) system executed at the peers and at the network. The quality adaptation mechanisms are developed as follows: on one hand, the Layer Level Initialization (LLI) is used for adapting the video quality to the static resources at the peers in order to avoid long startup times. On the other hand, the Layer Level Adjustment (LLA) is invoked periodically to adjust the SVC layer to the fluctuation of the network conditions, with the aim of predicting possible stalls before their occurrence. Our results demonstrate that our mechanisms allow the video quality to adapt quickly to various system changes while providing the best Quality of Experience (QoE) that matches the current resources of the peer devices and the instantaneous throughput available at the network.

  14. Scalable Quantum Circuit and Control for a Superconducting Surface Code

    Science.gov (United States)

    Versluis, R.; Poletto, S.; Khammassi, N.; Tarasinski, B.; Haider, N.; Michalak, D. J.; Bruno, A.; Bertels, K.; DiCarlo, L.

    2017-09-01

We present a scalable scheme for executing the error-correction cycle of a monolithic surface-code fabric composed of fast-flux-tunable transmon qubits with nearest-neighbor coupling. An eight-qubit unit cell forms the basis for repeating both the quantum hardware and coherent control, enabling spatial multiplexing. This control uses three fixed frequencies for all single-qubit gates and a unique frequency-detuning pattern for each qubit in the cell. By pipelining the interaction and readout steps of ancilla-based X- and Z-type stabilizer measurements, we can engineer detuning patterns that avoid all second-order transmon-transmon interactions except those exploited in controlled-phase gates, regardless of fabric size. Our scheme is applicable to defect-based and planar logical qubits, including lattice surgery.

  15. Final Report. Center for Scalable Application Development Software

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-26

The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  16. Scalable and Flexible SLA Management Approach for Cloud

    Directory of Open Access Journals (Sweden)

    SHAUKAT MEHMOOD

    2017-01-01

Full Text Available Cloud Computing is a cutting-edge technology in the market nowadays. In a Cloud Computing environment, customers pay to use computing resources. Resource allocation is a primary task in a cloud environment, and the significance of resource allocation and availability increases manyfold because the income of the cloud depends on how efficiently it provides the rented services to its clients. An SLA (Service Level Agreement) is signed between the Cloud Services Provider and the Cloud Services Consumer to maintain a stipulated QoS (Quality of Service). SLAs are nonetheless violated for several reasons, including system malfunctions and changes in workload conditions. Elastic and adaptive approaches are required to prevent SLA violations. We propose a novel application-level monitoring scheme to prevent SLA violations. It is based on elastic and scalable characteristics, is easy to deploy and use, and focuses on application-level monitoring.
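The core idea of application-level monitoring to prevent (rather than merely detect) SLA violations can be sketched as a sliding-window watchdog: react elastically when measurements approach the SLA bound, before it is breached. All names, thresholds, and the windowed-average policy below are illustrative assumptions, not the paper's scheme.

```python
from collections import deque

class SlaMonitor:
    """Illustrative application-level monitor: tracks a sliding window
    of response times and signals a scale-up before the SLA
    response-time bound is violated."""
    def __init__(self, sla_ms=200.0, warn_fraction=0.8, window=5):
        self.sla_ms = sla_ms
        self.warn_ms = sla_ms * warn_fraction   # early-warning level
        self.samples = deque(maxlen=window)     # recent measurements only

    def record(self, response_ms):
        self.samples.append(response_ms)

    def action(self):
        if not self.samples:
            return "ok"
        avg = sum(self.samples) / len(self.samples)
        if avg >= self.sla_ms:
            return "violation"      # SLA already broken
        if avg >= self.warn_ms:
            return "scale-up"       # elastic reaction to prevent a breach
        return "ok"

mon = SlaMonitor()
for ms in (120, 150, 170, 180, 190):   # workload ramping up
    mon.record(ms)
print(mon.action())                    # average 162 ms crosses the 160 ms warning level
```

The key design point is the early-warning margin: scaling takes time, so the trigger must fire while the service is still inside its contractual bound.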

  17. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou

    2007-07-01

Full Text Available Based on the Small-World model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes the virtual region and the layer as its logical management units. In VRGF, organization within a virtual region is pure P2P, while organization across virtual regions is centralized; VRGF is therefore a decentralized framework with some P2P properties. Furthermore, VRGF achieves satisfactory performance in resource organization and location at a small cost, and is well adapted to the complicated and dynamic features of scientific collaborations. We have implemented a demonstration VRGF-based Grid prototype, SDG.

  18. MSDLSR: Margin Scalable Discriminative Least Squares Regression for Multicategory Classification.

    Science.gov (United States)

    Wang, Lingfeng; Zhang, Xu-Yao; Pan, Chunhong

    2016-12-01

In this brief, we propose a new margin scalable discriminative least squares regression (MSDLSR) model for multicategory classification. The main motivation behind the MSDLSR is to explicitly control the margin of the DLSR model. We first prove that DLSR is a relaxation of the traditional L2-support vector machine. Based on this fact, we further provide a theorem on the margin of DLSR. With this theorem, we add an explicit constraint on DLSR to restrict the number of zeros of dragging values, so as to control the margin of DLSR. The new model is called MSDLSR. Theoretically, we analyze the determination of the margin and support vectors of MSDLSR. Extensive experiments illustrate that our method outperforms the current state-of-the-art approaches on various machine learning and real-world data sets.

  19. Optimization of Hierarchical Modulation for Use of Scalable Media

    Science.gov (United States)

    Liu, Yongheng; Heneghan, Conor

    2010-12-01

This paper studies Hierarchical Modulation, a transmission strategy for forthcoming scalable multimedia over frequency-selective fading channels, aimed at improving perceptible quality. An optimization strategy for Hierarchical Modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications a free choice of the relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. A similar optimization can be used in the multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as implementation examples of this optimization strategy, and demonstrate savings in SNR and improvement in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.
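The HP/LP trade-off underlying hierarchical modulation can be made concrete with a toy hierarchical 16-QAM mapper: the HP bits select the quadrant (coarse, robust decisions) while the LP bits select the point within it (fine, fragile decisions). The spacing ratio and bit mapping below are illustrative assumptions, not the paper's optimized configuration.

```python
def hier16qam(hp_bits, lp_bits, alpha=2.0):
    """Map 2 HP bits + 2 LP bits to one hierarchical 16-QAM symbol.
    `alpha` = d1/d2 is the ratio of inter-quadrant spacing to
    intra-quadrant spacing: raising it buys HP noise margin at the
    expense of LP margin, which is the trade-off being optimized."""
    d2 = 1.0          # fine spacing inside a quadrant
    d1 = alpha * d2   # coarse spacing between quadrants
    # One bit per axis: 0 selects the negative half-plane, 1 the positive.
    i = (1 if hp_bits[0] else -1) * d1 + (1 if lp_bits[0] else -1) * d2 / 2
    q = (1 if hp_bits[1] else -1) * d1 + (1 if lp_bits[1] else -1) * d2 / 2
    return complex(i, q)

# Same LP bits, different HP bits: the symbols land in opposite quadrants,
# so a receiver in poor SNR can still recover the HP stream.
print(hier16qam((1, 1), (0, 0)))
print(hier16qam((0, 0), (0, 0)))
```

A joint HP/LP optimization like the paper's would then choose alpha (together with the code rates) so that both streams just meet their target bit error rates at minimum total SNR.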

  20. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack

    2014-01-01

Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.
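Pregel's vertex-centric, superstep-based model, the compilation target discussed above, can be emulated on a single machine in a few lines: vertices exchange messages in synchronous rounds and computation halts when no messages remain. The BFS kernel below is a hypothetical example of the kind of algorithm Green-Marl expresses, not output of its compiler.

```python
def pregel_bfs(edges, source):
    """Minimal single-machine emulation of Pregel's model: in each
    superstep, every vertex with incoming messages updates its state
    and messages its neighbors. Computes hop distance from `source`
    (unreachable vertices keep distance None)."""
    vertices = {v for e in edges for v in e}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)

    dist = {v: None for v in vertices}
    inbox = {source: [0]}                 # seed message to the source
    while inbox:                          # one superstep per iteration
        outbox = {}
        for v, msgs in inbox.items():     # each vertex processes its mail
            d = min(msgs)
            if dist[v] is None or d < dist[v]:
                dist[v] = d
                for w in adj[v]:          # ...and notifies its neighbors
                    outbox.setdefault(w, []).append(d + 1)
        inbox = outbox                    # barrier: next superstep's mail
    return dist

print(pregel_bfs([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")], "a"))
```

The inconvenience the abstract mentions is visible even here: the algorithm must be turned inside-out into per-vertex message handlers, which is exactly the transformation the Green-Marl compiler automates.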