WorldWideScience

Sample records for network coverage high

  1. A High-Efficiency Uneven Cluster Deployment Algorithm Based on Network Layered for Event Coverage in UWSNs

    Directory of Open Access Journals (Sweden)

    Shanen Yu

    2016-12-01

    Most existing deployment algorithms for event coverage in underwater wireless sensor networks (UWSNs) usually do not consider that network communication has non-uniform characteristics in three-dimensional underwater environments. Such deployment algorithms ignore that the nodes are distributed at different depths and have different probabilities for data acquisition, leading to imbalances in overall network energy consumption, decreased network performance, and poor and unreliable operation late in the network's lifetime. Therefore, in this study, we propose an uneven cluster deployment algorithm based on network layering for event coverage. First, according to the energy consumption required by the communication load at different depths of the underwater network, we obtain the expected number of deployed nodes and the distribution density of each network layer through theoretical analysis and deduction. Afterward, the network is divided into multiple layers based on uneven clusters, and the heterogeneous communication radii of the nodes improve the network connectivity rate. A recovery strategy is used to balance the energy consumption of nodes within a cluster and can efficiently reconstruct the network topology, which ensures that the network maintains a high coverage and connectivity rate over a long period of data acquisition. Simulation results show that the proposed algorithm improves network reliability and prolongs network lifetime by significantly reducing the blind movement of nodes across the network while maintaining a high network coverage and connectivity rate.
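
    The per-layer allocation step described above can be illustrated with a small numerical sketch. It assumes a simple relay-load model in which the layer nearest the surface forwards the traffic of every deeper layer; this model and all parameter values are illustrative assumptions, not the paper's derivation.

```python
def layer_node_counts(num_layers, total_nodes, gen_rate=1.0):
    """Allocate sensor nodes to depth layers of a layered UWSN.

    Assumption (illustrative only): layer 1 is nearest the surface and must
    relay the traffic generated by every layer at or below it, so its expected
    node count is taken proportional to that relay load.
    """
    # Relay load of layer i (1-indexed): traffic generated by layers i..num_layers.
    loads = [gen_rate * (num_layers - i + 1) for i in range(1, num_layers + 1)]
    total_load = sum(loads)
    # Expected node count per layer, proportional to its communication load.
    return [max(1, round(total_nodes * w / total_load)) for w in loads]

if __name__ == "__main__":
    for depth, n in enumerate(layer_node_counts(num_layers=5, total_nodes=100), start=1):
        print(f"layer {depth}: ~{n} nodes expected")
```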

  2. CDMA coverage under mobile heterogeneous network load

    NARCIS (Netherlands)

    Saban, D.; van den Berg, Hans Leo; Boucherie, Richardus J.; Endrayanto, A.I.

    2002-01-01

    We analytically investigate coverage (determined by the uplink) under the non-homogeneous and moving traffic load of third-generation UMTS mobile networks. In particular, for different call assignment policies, we investigate cell breathing and the movement of the coverage gap occurring between cells.

  3. Network television news coverage of environmental risks

    International Nuclear Information System (INIS)

    Greenberg, M.R.; Sandman, P.M.; Sachsman, D.V.; Salomone, K.L.

    1989-01-01

    Despite the criticisms that surround television coverage of environmental risk, there have been relatively few attempts to measure what and whom television shows. Most research has focused on a few weeks of coverage of major stories such as the gas leak at Bhopal, the Three Mile Island nuclear accident, or the Mount St. Helens eruption. To advance research into television coverage of environmental risk, an analysis was made of all environmental risk coverage by the network nightly news broadcasts over a period of more than two years. Researchers analyzed all environmental risk coverage (564 stories in 26 months) presented on ABC, CBS, and NBC's evening news broadcasts from January 1984 through February 1986. The quantitative information from the 564 stories was balanced by a more qualitative analysis of the television coverage of two case studies: the dioxin contamination in Times Beach, Missouri, and the suspected methyl isocyanate emissions from the Union Carbide plant in Institute, West Virginia. Both qualitative and quantitative data contributed to the analysis of the role played by experts and environmental advocacy sources in coverage of environmental risk and to suggestions for increasing that role.

  4. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Owing to numerous potential applications, wireless sensor networks have recently attracted significant research effort. The critical challenge that wireless sensor networks often face is sustaining long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors at exceedingly high density. Therefore, the key issue for coverage maintenance in future sensor networks is scalability with respect to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., the number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing across sensors. We have validated our claims through both analysis and simulations.
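
    As a point of reference for what coverage maintenance means in practice, here is a minimal centralized greedy sketch that keeps only enough sensors active to cover a grid discretization of the region. It is an illustrative stand-in, not the SCOM protocol itself (SCOM is distributed and density-scalable); the region size, sensing radius and grid step below are assumptions.

```python
import numpy as np

def greedy_active_set(sensors, region=(100.0, 100.0), r=12.0, grid_step=5.0):
    """Choose a subset of sensors whose sensing disks cover every grid point.

    Centralized greedy set cover over a grid discretization; illustrative only.
    """
    xs = np.arange(0.0, region[0] + 1e-9, grid_step)
    ys = np.arange(0.0, region[1] + 1e-9, grid_step)
    pts = np.array([(x, y) for x in xs for y in ys])
    sensors = np.asarray(sensors)
    # covers[i, j] is True if sensor i covers grid point j.
    d2 = ((sensors[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    covers = d2 <= r * r
    uncovered = np.ones(len(pts), dtype=bool)
    active = []
    while uncovered.any():
        gains = (covers & uncovered).sum(axis=1)
        best = int(gains.argmax())
        if gains[best] == 0:          # remaining points cannot be covered at all
            break
        active.append(best)
        uncovered &= ~covers[best]
    return active

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nodes = rng.uniform(0, 100, size=(300, 2))      # dense random deployment
    subset = greedy_active_set(nodes)
    print(f"{len(subset)} of {len(nodes)} sensors kept active")
```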

  5. Extending Coverage and Lifetime of K-coverage Wireless Sensor Networks Using Improved Harmony Search

    Directory of Open Access Journals (Sweden)

    Shohreh Ebrahimnezhad

    2011-07-01

    K-coverage wireless sensor networks aim to ensure that each hotspot region is covered by at least k sensors. Because the fundamental evaluation metrics of such networks are coverage and lifetime, an approach that extends both simultaneously is of considerable interest. In this article, it is assumed that two kinds of nodes are available: static and mobile. The proposed method first balances energy among sensor nodes using the Improved Harmony Search (IHS) algorithm in a k-covered and connected wireless sensor network in order to obtain a sensor node deployment. The method also proposes a suitable place for a gateway node (sink) that collects data from all sensors. Second, in order to prolong the network lifetime, some of the high energy-consuming mobile nodes are periodically moved to positions close to low energy-consuming ones and vice versa. This increases the network lifetime while connectivity and k-coverage are preserved. Through computer simulations, experimental results verify that the proposed IHS-based algorithm finds better solutions than several related methods.

  6. Functional architecture and global properties of the Corynebacterium glutamicum regulatory network: Novel insights from a dataset with a high genomic coverage.

    Science.gov (United States)

    Freyre-González, Julio A; Tauch, Andreas

    2017-09-10

    Corynebacterium glutamicum is a Gram-positive, anaerobic, rod-shaped soil bacterium able to grow on a diversity of carbon sources such as sugars and organic acids. It is a biotechnologically relevant organism because of its highly efficient ability to biosynthesize amino acids, such as l-glutamic acid and l-lysine. Here, we reconstructed the most complete C. glutamicum regulatory network to date and comprehensively analyzed its global organizational properties, systems-level features and functional architecture. Our analyses show the tremendous power of Abasy Atlas for studying the functional organization of regulatory networks. We created two models of the C. glutamicum regulatory network: all-evidences (containing both weakly and strongly supported interactions, genomic coverage = 73%) and strongly-supported (accounting only for strongly supported evidence, genomic coverage = 71%). Using state-of-the-art methodologies, we prove that power-law behaviors truly govern the connectivity and clustering coefficient distributions. We found a previously unreported circuit motif that we named the complex feed-forward motif. We highlight the importance of feedback loops for the functional architecture, beyond whether they are statistically over-represented or not in the network. We show that the previously reported top-down approach is inadequate for inferring the hierarchy governing a regulatory network, because feedback bridges different hierarchical layers and the top-down approach disregards the presence of intermodular genes shaping the integration layer. Taken together, our findings further support a diamond-shaped, three-layered hierarchy exhibiting some feedback between the processing and coordination layers, which is shaped by four classes of systems-level elements: global regulators, locally autonomous modules, basal machinery and intermodular genes. Copyright © 2016 Elsevier B.V. All rights reserved.
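
    The distributional claims in this record (power-law degree and clustering-coefficient distributions) can be inspected with standard graph tooling. The sketch below runs on a synthetic scale-free directed graph because the Abasy Atlas data are not included here; the generator and all values are placeholders for illustration only.

```python
import networkx as nx
from collections import Counter

# Synthetic stand-in for a transcriptional regulatory network (not Abasy Atlas data).
G = nx.DiGraph(nx.scale_free_graph(1000, seed=42))   # collapse parallel edges
G.remove_edges_from(list(nx.selfloop_edges(G)))

# Out-degree distribution: in a regulatory network this reflects regulator fan-out.
out_deg = Counter(d for _, d in G.out_degree())
for k in sorted(out_deg)[:10]:
    print(f"out-degree {k}: {out_deg[k]} nodes")

# Clustering coefficients are computed on the undirected projection.
cc = nx.clustering(G.to_undirected())
print("mean clustering coefficient:", sum(cc.values()) / len(cc))
```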

  7. Achieving Crossed Strong Barrier Coverage in Wireless Sensor Network.

    Science.gov (United States)

    Han, Ruisong; Yang, Wei; Zhang, Li

    2018-02-10

    Barrier coverage has been widely used to detect intrusions in wireless sensor networks (WSNs). It can fulfill the monitoring task while extending the lifetime of the network. Though barrier coverage in WSNs has been intensively studied in recent years, previous research failed to consider the problem of intrusion in transversal directions. If an intruder knows the deployment configuration of sensor nodes, then there is a high probability that it may traverse the whole target region from particular directions, without being detected. In this paper, we introduce the concept of crossed barrier coverage that can overcome this defect. We prove that the problem of finding the maximum number of crossed barriers is NP-hard and integer linear programming (ILP) is used to formulate the optimization problem. The branch-and-bound algorithm is adopted to determine the maximum number of crossed barriers. In addition, we also propose a multi-round shortest path algorithm (MSPA) to solve the optimization problem, which works heuristically to guarantee efficiency while maintaining near-optimal solutions. Several conventional algorithms for finding the maximum number of disjoint strong barriers are also modified to solve the crossed barrier problem and for the purpose of comparison. Extensive simulation studies demonstrate the effectiveness of MSPA.

  8. Improved Differential Evolution Algorithm for Wireless Sensor Network Coverage Optimization

    Directory of Open Access Journals (Sweden)

    Xing Xu

    2014-04-01

    To support efficient ecological monitoring of Poyang Lake, an improved hybrid algorithm that combines differential evolution with particle swarm optimization is proposed and applied to optimize the coverage problem of a wireless sensor network. The effects of the population size and the number of iterations on coverage performance are then discussed and analyzed. Four kinds of statistical results on the coverage rate are obtained through extensive simulation experiments.

  9. Coverage and Connectivity Issue in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Rachit Trivedi

    2013-04-01

    Wireless sensor networks (WSNs) are an emerging area of interest in research and development. They find use in military surveillance, health care, environmental monitoring, forest fire detection and smart environments. An important research issue in WSNs is coverage, since cost, area and lifetime are directly related to it. In this paper we present an overview of WSNs and examine the coverage and connectivity issues in wireless sensor networks.

  10. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    Science.gov (United States)

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs) that has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation of the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of a target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
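
    The joint detection probability in this kind of probabilistic-sensing formulation is typically the complement of every sensor missing the target. A small sketch under an assumed exponential-decay sensing model p_i = exp(-alpha * d_i) is shown below; the decay model, alpha and epsilon values are illustrative assumptions, not the paper's parameters.

```python
import math

def detection_prob(distance, alpha=0.3):
    """Assumed probabilistic sensing model: detection probability decays with distance."""
    return math.exp(-alpha * distance)

def joint_detection_prob(target, sensors, alpha=0.3):
    """P(at least one sensor detects the target) = 1 - prod_i (1 - p_i)."""
    miss = 1.0
    for sx, sy in sensors:
        d = math.hypot(target[0] - sx, target[1] - sy)
        miss *= (1.0 - detection_prob(d, alpha))
    return 1.0 - miss

if __name__ == "__main__":
    sensors = [(1.0, 1.0), (4.0, 2.0), (2.5, 5.0)]
    p = joint_detection_prob((3.0, 3.0), sensors)
    epsilon = 0.9
    print(f"joint detection probability = {p:.3f}, epsilon-covered: {p >= epsilon}")
```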

  11. Sleep scheduling with expected common coverage in wireless sensor networks

    OpenAIRE

    Bulut, Eyuphan; Korpeoglu, Ibrahim

    2011-01-01

    Sleep scheduling, which is putting some sensor nodes into sleep mode without harming network functionality, is a common method to reduce energy consumption in dense wireless sensor networks. This paper proposes a distributed and energy efficient sleep scheduling and routing scheme that can be used to extend the lifetime of a sensor network while maintaining a user defined coverage and connectivity. The scheme can activate and deactivate the three basic units of a sensor node (sensing, proces...

  12. Design and Performance Analysis of Multi-tier Heterogeneous Network through Coverage, Throughput and Energy Efficiency

    Directory of Open Access Journals (Sweden)

    A. Shabbir

    2017-12-01

    The unprecedented acceleration in the wireless industry strongly compels wireless operators to increase their data network throughput, capacity and coverage on an urgent basis. In upcoming 5G heterogeneous networks, the inclusion of low power nodes (LPNs) such as pico cells and femto cells to increase network throughput, capacity and coverage is gaining momentum. Adding LPNs on such a massive scale will eventually make a network densely populated in terms of base stations (BSs). The dense deployment of BSs will lead to high operating expenditure (Op-Ex), capital expenditure (Cap-Ex) and, most importantly, high energy consumption in future generation networks. Recognizing these network issues, this research work investigates the data throughput and energy efficiency of a 5G multi-tier heterogeneous network. The network is modeled using tools from stochastic geometry. Monte Carlo results confirm that rational deployment of LPNs can contribute towards increased throughput along with better energy efficiency of the overall network.

  13. Communication-Free Distributed Coverage for Networked Systems

    KAUST Repository

    Yazicioglu, A. Yasin

    2016-01-15

    In this paper, we present a communication-free algorithm for distributed coverage of an arbitrary network by a group of mobile agents with local sensing capabilities. The network is represented as a graph, and the agents are arbitrarily deployed on some nodes of the graph. Any node of the graph is covered if it is within the sensing range of at least one agent. The agents are mobile devices that aim to explore the graph and to optimize their locations in a decentralized fashion by relying only on their sensory inputs. We formulate this problem in a game theoretic setting and propose a communication-free learning algorithm for maximizing the coverage.

  14. Detecting Boundary Nodes and Coverage Holes in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Li-Hui Zhao

    2016-01-01

    The emergence of coverage holes in wireless sensor networks (WSNs) means that some special events have occurred and that the functioning of the WSN will be seriously affected. Therefore, the issue of coverage holes has attracted considerable attention. In this paper, we focus on the identification of boundary nodes and coverage holes, which is crucially important to preventing the enlargement of coverage holes and ensuring the transmission of data. We define the problem of coverage holes and propose two novel algorithms to identify coverage holes in WSNs. The first algorithm, Distributed Sector Cover Scanning (DSCS), can be used to identify the nodes on hole borders and on the outer boundary of the WSN. The second scheme, Directional Walk (DW), can locate the coverage holes based on the boundary nodes identified with DSCS. We implement the algorithms in various scenarios and fully evaluate their performance. The simulation results show that the boundary nodes can be accurately detected by DSCS and that the holes enclosed by the detected boundary nodes can be identified by DW. The comparisons confirm that the proposed algorithms outperform existing ones.

  15. Coverage-maximization in networks under resource constraints.

    Science.gov (United States)

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^{(d-2)/d}) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and is shown to perform best on regular grids in terms of the product metric of speed and efficiency.
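
    A toy simulation of the proliferation idea is sketched below: random walkers on a small torus grid occasionally duplicate, and the total number of hops stands in for the bandwidth budget B. The constant proliferation rate and all parameter values are illustrative assumptions; the paper's temporally modulated rate is not reproduced.

```python
import random

def coverage_run(L=50, p_proliferate=0.02, budget=200_000, seed=1):
    """Random walkers on an L x L torus; each step a walker may also spawn a copy.

    Returns (hops used, fraction of grid visited): a toy version of the
    proliferating-walker coverage idea with a constant proliferation rate.
    """
    rng = random.Random(seed)
    walkers = [(L // 2, L // 2)]
    visited = {walkers[0]}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    used = 0
    while used < budget and len(visited) < L * L:
        new_walkers = []
        for (x, y) in walkers:
            dx, dy = rng.choice(moves)
            nxt = ((x + dx) % L, (y + dy) % L)
            visited.add(nxt)
            new_walkers.append(nxt)
            used += 1                      # each hop consumes one unit of bandwidth B
            if rng.random() < p_proliferate:
                new_walkers.append(nxt)    # spawn a duplicate message packet
        walkers = new_walkers
    return used, len(visited) / (L * L)

if __name__ == "__main__":
    hops, frac = coverage_run()
    print(f"covered {frac:.1%} of the grid using {hops} message hops")
```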

  16. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm

  17. Change of mobile network coverage in France from 29 August

    CERN Multimedia

    IT Department

    2016-01-01

    The change of mobile network coverage on the French part of the CERN site will take effect on 29 August and not on 11 July as previously announced.    From 29 August, the Swisscom transmitters in France will be deactivated and Orange France will thenceforth provide coverage on the French part of the CERN site.  This switch will result in changes to billing. You should also ensure that you can still be contacted by your colleagues when you are on the French part of the CERN site. Please consult the information and instructions in this official communication.

  18. Increasing cellular coverage within integrated terrestrial/satellite mobile networks

    Science.gov (United States)

    Castro, Jonathan P.

    1995-01-01

    When applying the hierarchical cellular concept, the satellite acts as a giant umbrella cell covering a region that contains some terrestrial cells. If a mobile terminal traversing the region arrives at the border or limits of regular cellular ground service, a network transition occurs and the satellite system continues the mobile coverage. To adequately assess the service boundaries between a mobile satellite system and a cellular network within an integrated environment, this paper provides an optimized scheme to predict when a network transition may be necessary. Under the assumption of a classified propagation phenomenon and lognormal shadowing, the study applies an analytical approach to estimate the location of a mobile terminal based on reception of the signal strength emitted by a base station.

  19. Directional Bias and Pheromone for Discovery and Coverage on Networks

    Energy Technology Data Exchange (ETDEWEB)

    Fink, Glenn A.; Berenhaut, Kenneth S.; Oehmen, Christopher S.

    2012-09-11

    Natural multi-agent systems often rely on “correlated random walks” (random walks that are biased toward a current heading) to distribute their agents over a space (e.g., for foraging, search, etc.). Our contribution involves creation of a new movement and pheromone model that applies the concept of heading bias in random walks to a multi-agent, digital-ants system designed for cyber-security monitoring. We examine the relative performance effects of both pheromone and heading bias on speed of discovery of a target and search-area coverage in a two-dimensional network layout. We found that heading bias was unexpectedly helpful in reducing search time and that it was more influential than pheromone for improving coverage. We conclude that while pheromone is very important for rapid discovery, heading bias can also greatly improve both performance metrics.

  20. Unmanned Ground Vehicle Navigation and Coverage Hole Patching in Wireless Sensor Networks

    Science.gov (United States)

    Zhang, Guyu

    2013-01-01

    This dissertation presents a study of an Unmanned Ground Vehicle (UGV) navigation and coverage hole patching in coordinate-free and localization-free Wireless Sensor Networks (WSNs). Navigation and coverage maintenance are related problems since coverage hole patching requires effective navigation in the sensor network environment. A…

  1. Increasing the coverage area through relay node deployment in long term evolution advanced cellular networks

    Science.gov (United States)

    Aldhaibani, Jaafar A.; Ahmad, R. B.; Yahya, A.; Azeez, Suzan A.

    2015-05-01

    Wireless multi-hop relay networks have become a very important technology in mobile communications. These networks ensure high throughput and coverage extension at low cost. The poor capacity at cell edges is not enough to meet the growing demand for high capacity and throughput irrespective of the user's placement in the cellular network. In this paper we propose an optimal placement of the relay node that provides the maximum achievable rate at the users and enhances the throughput and coverage in the cell edge region. The proposed scheme is based on the outage probability at the users and takes into account the interference between nodes. Numerical analyses along with simulation results indicate an improvement of about 40% in capacity for users at the cell edge relative to the overall cell capacity.

  2. A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks

    Science.gov (United States)

    Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan

    2014-01-01

    Coverage pattern and deployment strategy are directly related to the optimum allocation of limited resources in wireless sensor networks, such as node energy, communication bandwidth, and computing power, and quality improvement for wireless sensor networks is largely determined by them. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. First, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed; the relationship between coverage and the sensor nodes' sensing radius is then deduced, and the minimum number of sensor nodes needed to maintain full coverage of the network area is calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is reduced effectively while the coverage rate of the monitored area is ensured using the proposed coverage pattern and deterministic deployment scheme. PMID:25045747
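
    The flavor of the node-count calculation can be reproduced with a cube-based placement: a sensing sphere of radius r fully contains a cube of side 2r/sqrt(3) (its space diagonal equals 2r), so tiling the monitored cuboid with such cubes gives an upper bound on the number of nodes needed for full coverage. The sketch below shows that arithmetic; it is not the paper's exact cuboid pattern, and the dimensions are illustrative.

```python
import math

def nodes_for_full_3d_coverage(length, width, height, r):
    """Upper bound on sensor count for full coverage of an L x W x H cuboid.

    A cube of side s = 2r/sqrt(3) has space diagonal 2r, so a sensor at its
    centre (sensing radius r) covers the whole cube.
    """
    s = 2.0 * r / math.sqrt(3.0)
    cells = (math.ceil(length / s), math.ceil(width / s), math.ceil(height / s))
    return cells[0] * cells[1] * cells[2], s

if __name__ == "__main__":
    count, side = nodes_for_full_3d_coverage(100.0, 100.0, 50.0, r=10.0)
    print(f"grid cell side = {side:.2f} m, nodes needed <= {count}")
```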

  3. Coverage Improvement for Wireless Sensor Networks using Grid Quorum based Node Mobility

    DEFF Research Database (Denmark)

    Mathur, Prateek; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

    Coverage of wireless sensor networks (WSNs) is an important quality of service (QoS) metric and often the desired coverage is not attainable at the initial deployment, but node mobility can be used to improve the coverage by relocating sensor nodes. Unconstrained node mobility is considered infea...

  4. A Novel Nonlinear Multitarget k-Degree Coverage Preservation Protocol in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zeyu Sun

    2016-01-01

    Due to the existence of a large amount of redundant data in the process of covering multiple targets, the effective coverage of the monitored region decreases, causing the network to consume more energy. To solve this problem, this paper proposes a nonlinear multitarget k-degree coverage preservation protocol (NMCP). First, the affiliation between the sensor nodes and target nodes is established in the network model, and a method to calculate the coverage expectation value of the monitored region is put forward. Second, with respect to network energy conversion, scheduling mechanisms are applied to the sensor nodes to balance the network energy and achieve different levels of network coverage quality through energy conversion between nodes. Finally, simulation results show that NMCP can improve the network lifetime by effectively reducing the number of active nodes needed to meet given coverage requirements.

  5. Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network

    Science.gov (United States)

    Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan

    2011-01-01

    Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide the value for all coverage measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792
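
    One way to turn the DT idea into code is to triangulate the sensor positions and flag triangles whose circumradius exceeds the sensing radius, since such a triangle's circumcenter is not covered by its three corner sensors. The sketch below uses SciPy for this illustrative check; it is not the paper's exact metric, and the radius and point counts are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumradius(a, b, c):
    """Circumradius of triangle abc: R = |ab| * |bc| * |ca| / (4 * area)."""
    la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(c - a), np.linalg.norm(a - b)
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return float("inf") if area == 0 else la * lb * lc / (4.0 * area)

def coverage_report(sensors, r_sense):
    """Count Delaunay triangles whose circumcenter lies outside all three corner disks."""
    tri = Delaunay(sensors)
    radii = np.array([circumradius(*sensors[simplex]) for simplex in tri.simplices])
    holes = int((radii > r_sense).sum())
    return holes, len(tri.simplices), float(radii.max())

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    pts = rng.uniform(0, 100, size=(80, 2))
    holes, total, worst = coverage_report(pts, r_sense=12.0)
    print(f"{holes}/{total} Delaunay triangles exceed the sensing radius "
          f"(largest circumradius {worst:.1f})")
```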

  6. A Two-Phase Coverage-Enhancing Algorithm for Hybrid Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qingguo Zhang

    2017-01-01

    Providing field coverage is a key task in many sensor network applications. In certain scenarios, the sensor field may have coverage holes due to the random initial deployment of sensors, so the desired level of coverage cannot be achieved. A hybrid wireless sensor network is a cost-effective solution to this problem: a portion of the mobile sensors in the network are repositioned to meet the network coverage requirement. This paper investigates how to redeploy mobile sensor nodes to improve network coverage in hybrid wireless sensor networks. We propose a two-phase coverage-enhancing algorithm for hybrid wireless sensor networks. In phase one, we use a differential evolution algorithm to compute candidate target positions for the mobile sensor nodes that could potentially improve coverage. In the second phase, we apply an optimization scheme to the candidate target positions calculated in phase one to reduce the accumulated moving distance of the mobile sensors, such that the exact mobile sensor nodes that need to be moved, as well as their final target positions, can be determined. Experimental results show that the proposed algorithm provides significant improvement in terms of area coverage rate, average moving distance, area coverage-distance rate and the number of moved mobile sensors, compared with other approaches.
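
    A compact stand-in for phase one is sketched below: SciPy's differential evolution places a handful of mobile nodes so that a Monte Carlo estimate of area coverage is maximized. The static deployment, sensing radius, sample counts and iteration budget are illustrative assumptions, and the paper's phase-two moving-distance optimization is not shown.

```python
import numpy as np
from scipy.optimize import differential_evolution

RNG = np.random.default_rng(3)
STATIC = RNG.uniform(0, 100, size=(15, 2))     # fixed nodes from the initial deployment
SAMPLES = RNG.uniform(0, 100, size=(1500, 2))  # Monte Carlo probe points
R = 12.0                                        # sensing radius
N_MOBILE = 5

def covered_fraction(all_nodes):
    d2 = ((SAMPLES[:, None, :] - all_nodes[None, :, :]) ** 2).sum(axis=2)
    return float((d2.min(axis=1) <= R * R).mean())

def objective(flat_positions):
    mobile = flat_positions.reshape(N_MOBILE, 2)
    return -covered_fraction(np.vstack([STATIC, mobile]))   # DE minimizes, so negate

if __name__ == "__main__":
    bounds = [(0.0, 100.0)] * (2 * N_MOBILE)
    result = differential_evolution(objective, bounds, maxiter=40, seed=3, tol=1e-3)
    print(f"coverage before: {covered_fraction(STATIC):.1%}, "
          f"after adding mobiles: {-result.fun:.1%}")
```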

  7. CNNdel: Calling Structural Variations on Low Coverage Data Based on Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2017-01-01

    Many structural variation (SV) detection methods have been proposed due to the popularization of next-generation sequencing (NGS). These SV calling methods use different SV-property-dependent features; however, they all suffer from poor accuracy when running on low coverage sequences. The union of results from these tools achieves fairly high sensitivity but still produces low accuracy on low coverage sequence data; that is, these methods report many false positives. In this paper, we present CNNdel, an approach for calling deletions from paired-end reads. CNNdel gathers SV candidates reported by multiple tools and then extracts features from aligned BAM files at the positions of the candidates. With labeled feature-expressed candidates as a training set, CNNdel trains convolutional neural networks (CNNs) to distinguish true unlabeled candidates from false ones. Results show that CNNdel works well with NGS reads from 26 low coverage genomes of the 1000 Genomes Project. The paper demonstrates that convolutional neural networks can automatically assign the priority of SV features and reduce false positives efficaciously.

  8. Building high-coverage monolayers of covalently bound magnetic nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Mackenzie G.; Teplyakov, Andrew V., E-mail: andrewt@udel.edu

    2016-12-01

    Highlights: • A method for forming a layer of covalently bound nanoparticles is offered. • A nearly perfect monolayer of covalently bound magnetic nanoparticles was formed on gold. • Spectroscopic techniques confirmed covalent binding by the "click" reaction. • The influence of the functionalization scheme on surface coverage was investigated. Abstract: This work presents an approach for producing a high-coverage single monolayer of magnetic nanoparticles using "click chemistry" between complementarily functionalized nanoparticles and a flat substrate. This method highlights essential aspects of the functionalization scheme for the substrate surface and nanoparticles needed to produce exceptionally high surface coverage without sacrificing selectivity or control over the layer produced. The deposition of a single layer of magnetic particles without agglomeration, over a large area, with nearly 100% coverage is confirmed by electron microscopy. Spectroscopic techniques, supplemented by computational predictions, are used to interrogate the chemistry of the attachment and to confirm covalent binding, rather than attachment through self-assembly or weak van der Waals bonding. Density functional theory calculations for the surface intermediate of this copper-catalyzed process provide mechanistic insight into the effects of the functionalization scheme on surface coverage. Based on this analysis, it appears that steric limitations of the intermediate structure affect nanoparticle coverage on a flat solid substrate; however, this can be overcome by designing the functionalization scheme such that the copper-based intermediate is formed on the spherical nanoparticles instead. This observation can be carried over to other approaches for creating highly controlled single- or multilayered nanostructures of a wide range of materials to achieve high coverage and possibly conformal filling.

  9. Surface Coverage in Wireless Sensor Networks Based on Delaunay Tetrahedralization

    International Nuclear Information System (INIS)

    Ribeiro, M G; Neves, L A; Zafalon, G F D; Valêncio, C; Pinto, A R; Nascimento, M Z

    2015-01-01

    In this work, a new method for sensor deployment on 3D surfaces is presented. The method is structured in several steps. The first step discretizes the relief of interest with the Delaunay algorithm. The tetrahedra and their associated values (the spatial coordinates of each vertex and the faces) were the input for the construction of a 3D Voronoi diagram. Each circumcenter was calculated as a candidate position for a sensor node, and the corresponding circular coverage area was computed based on a radius r. The value of r can be adjusted to simulate different kinds of sensors. The Dijkstra algorithm and a selection method were applied to eliminate candidate positions with overlapping coverage areas or positions lying beyond the surface of interest. Performance evaluation measures were defined using coverage area and communication as criteria. The results were relevant, since the mean coverage rates achieved on three different surfaces were between 91% and 100%.

  10. The Coverage of the Holocaust in High School History Textbooks

    Science.gov (United States)

    Lindquist, David

    2009-01-01

    The Holocaust is now a regular part of high school history curricula throughout the United States and, as a result, coverage of the Holocaust has become a standard feature of high school textbooks. As with any major event, it is important for textbooks to provide a rigorously accurate and valid historical account. In dealing with the Holocaust,…

  11. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks

    OpenAIRE

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-01-01

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical d...

  12. A QoS-Guaranteed Coverage Precedence Routing Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jiun-Chuan Lin

    2011-03-01

    For mission-critical applications of wireless sensor networks (WSNs) involving extensive battlefield surveillance, medical healthcare, and so on, it is crucial to have low-power, new protocols, methodologies and structures for transferring data and information in a network with full sensing coverage capability for an extended working period. The foremost mission is to ensure that the network is fully functional, providing reliable transmission of the sensed data without the risk of data loss. WSNs have been applied to various types of mission-critical applications. Coverage preservation is one of the most essential functions for guaranteeing quality of service (QoS) in WSNs. However, a tradeoff exists between sensing coverage and network lifetime due to the limited energy supplies of sensor nodes. In this study, we propose a routing protocol that accommodates both energy balance and coverage preservation for sensor nodes in WSNs. The energy consumption for radio transmissions and the residual energy across the network are taken into account when the proposed protocol determines an energy-efficient route for a packet. The simulation results demonstrate that the proposed protocol is able to increase the duration of the on-duty network and provide up to 98.3% and 85.7% extra service time with a 100% sensing coverage ratio compared with the LEACH and LEACH-Coverage-U protocols, respectively.

  13. A fuzzy neural network model to forecast the percent cloud coverage and cloud top temperature maps

    Directory of Open Access Journals (Sweden)

    Y. Tulunay

    2008-12-01

    Atmospheric processes are highly nonlinear. A small group at METU in Ankara has been working on a fuzzy data-driven generic model of nonlinear processes. The model developed is called the Middle East Technical University Fuzzy Neural Network Model (METU-FNN-M). The METU-FNN-M consists of a Fuzzy Inference System (METU-FIS), a data-driven Neural Network module (METU-FNN) with one hidden layer and several neurons, and a mapping module that employs the Bezier Surface Mapping technique. In this paper, the percent cloud coverage (%CC) and cloud top temperatures (CTT) are forecast one month ahead of time at 96 grid locations. The probable influence of cosmic rays and sunspot numbers on cloudiness is considered by using the METU-FNN-M.

  14. On Connected Target k-Coverage in Heterogeneous Wireless Sensor Networks.

    Science.gov (United States)

    Yu, Jiguo; Chen, Ying; Ma, Liran; Huang, Baogui; Cheng, Xiuzhen

    2016-01-15

    Coverage and connectivity are two important performance evaluation indices for wireless sensor networks (WSNs). In this paper, we focus on the connected target k-coverage (CTCk) problem in heterogeneous wireless sensor networks (HWSNs). A centralized connected target k-coverage algorithm (CCTCk) and a distributed connected target k-coverage algorithm (DCTCk) are proposed so as to generate connected cover sets for energy-efficient connectivity and coverage maintenance. To be specific, our proposed algorithms aim at achieving minimum connected target k-coverage, where each target in the monitored region is covered by at least k active sensor nodes. In addition, these two algorithms strive to minimize the total number of active sensor nodes and guarantee that each sensor node is connected to a sink, such that the sensed data can be forwarded to the sink. Our theoretical analysis and simulation results show that our proposed algorithms outperform a state-of-the-art connected k-coverage protocol for HWSNs.
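
    A minimal greedy sketch of the target k-coverage part of the problem is given below: sensors are activated until every target is watched by at least k of them. Connectivity to the sink, which the paper's CCTCk/DCTCk algorithms also guarantee, is deliberately ignored here, and the geometry, radius and k are made-up illustration values.

```python
import math
import random

def greedy_k_coverage(sensors, targets, r_sense, k):
    """Activate sensors until every target is covered by at least k of them."""
    covers = {i: {j for j, t in enumerate(targets) if math.dist(s, t) <= r_sense}
              for i, s in enumerate(sensors)}
    need = {j: k for j in range(len(targets))}          # remaining demand per target
    active, candidates = [], set(covers)
    while any(v > 0 for v in need.values()) and candidates:
        # Pick the sensor that reduces the remaining coverage demand the most.
        best = max(candidates, key=lambda i: sum(need[j] > 0 for j in covers[i]))
        if sum(need[j] > 0 for j in covers[best]) == 0:
            break                                        # remaining demand cannot be met
        for j in covers[best]:
            if need[j] > 0:
                need[j] -= 1
        active.append(best)
        candidates.remove(best)
    return active, all(v == 0 for v in need.values())

if __name__ == "__main__":
    random.seed(5)
    sensors = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(120)]
    targets = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(10)]
    active, ok = greedy_k_coverage(sensors, targets, r_sense=10.0, k=3)
    print(f"{len(active)} sensors active, all targets 3-covered: {ok}")
```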

  15. Impact of Mechanical down Tilt and Height on the Pilot Coverage of UMTS Networks

    Directory of Open Access Journals (Sweden)

    N. Faruk

    2012-06-01

    The task of planning a network can be very challenging, as it involves many careful studies with numerous considerations and, at times, trial and error. In this paper, the impacts of antenna mechanical down tilt and antenna height on UMTS network performance are studied. First, we used ASSET3G simulation software to design the 3G pilot coverage. Optimization techniques were then deployed to study the performance of the network. Simulation results show about a 2.6% increase in the coverage area when the antenna height was increased from 15 m to 25 m at the same tilt angle of 0°. The coverage drops by 24% when the tilt angle is changed from 0° to 6° for the 15 m antenna height. The results also indicate that pilot pollution could be reduced by choosing an optimum down tilt angle.

  16. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks

    Science.gov (United States)

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-01-01

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain the sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network. PMID:27754405

  17. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks.

    Science.gov (United States)

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-10-14

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain the sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network.

  18. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Binbin Shi

    2016-10-01

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain the sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network.

  19. Decentralized coverage control problems for mobile robotic sensor and actuator networks

    CERN Document Server

    Savkin, A; Xi, Z; Javed, F; Matveev, A; Nguyen, H

    2015-01-01

    This book introduces various coverage control problems for mobile sensor networks, including barrier, sweep and blanket coverage. Unlike many existing algorithms, all of the robotic sensor and actuator motion algorithms developed in the book are fully decentralized or distributed, computationally efficient, easily implementable in engineering practice and based only on information about the closest neighbours of each mobile sensor and actuator and local information about the environment. Moreover, the mobile robotic sensors have no prior information about the environment in which they operate. These various types of coverage problems have never before been covered by a single book in a systematic way. Another topic of this book is the study of mobile robotic sensor and actuator networks. Many modern engineering applications include the use of sensor and actuator networks to provide efficient and effective monitoring and control of industrial and environmental processes. Such mobile sensor and actuator networks are abl...

  20. High-Performance Networking

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into what is known today as standard computer network communication. It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/s systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols as well as some necessary details of the relevant Wide Area Network (WAN) standards, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  1. Handoff Rate and Coverage Analysis in Multi-tier Heterogeneous Networks

    OpenAIRE

    Sadr, Sanam; Adve, Raviraj S.

    2015-01-01

    This paper analyzes the impact of user mobility in multi-tier heterogeneous networks. We begin by obtaining the handoff rate for a mobile user in an irregular cellular network with the access point locations modeled as a homogeneous Poisson point process. The received signal-to-interference-ratio (SIR) distribution along with a chosen SIR threshold is then used to obtain the probability of coverage. To capture potential connection failures due to mobility, we assume that a fraction of handoff...
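
    The coverage-probability step described in this record is straightforward to reproduce numerically: drop base stations as a homogeneous Poisson point process, attach the typical user at the origin to its nearest BS, and estimate P(SIR > threshold) by Monte Carlo. The sketch below does exactly that; the density, path-loss exponent and threshold are illustrative values, not the paper's parameters.

```python
import numpy as np

def coverage_probability(lam=1e-4, alpha=4.0, sir_threshold_db=0.0,
                         window=2000.0, trials=2000, seed=11):
    """Monte Carlo estimate of P(SIR > T) for a typical user in a PPP cellular network."""
    rng = np.random.default_rng(seed)
    thr = 10 ** (sir_threshold_db / 10.0)
    area = window * window
    covered = 0
    for _ in range(trials):
        n_bs = rng.poisson(lam * area)
        if n_bs < 2:
            continue                     # need a serving BS and at least one interferer
        bs = rng.uniform(-window / 2, window / 2, size=(n_bs, 2))
        d = np.linalg.norm(bs, axis=1)   # typical user sits at the origin
        d.sort()
        signal = d[0] ** (-alpha)        # nearest BS serves; unit transmit power
        interference = np.sum(d[1:] ** (-alpha))
        if signal / interference > thr:
            covered += 1
    return covered / trials

if __name__ == "__main__":
    print(f"estimated coverage probability: {coverage_probability():.3f}")
```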

  2. Efficient Deployment of Key Nodes for Optimal Coverage of Industrial Mobile Wireless Networks

    Science.gov (United States)

    Li, Xiaomin; Li, Di; Dong, Zhijie; Hu, Yage; Liu, Chengliang

    2018-01-01

    In recent years, industrial wireless networks (IWNs) have been transformed by the introduction of mobile nodes, and they now offer increased extensibility, mobility, and flexibility. Nevertheless, mobile nodes pose efficiency and reliability challenges. Efficient node deployment and management of channel interference directly affect network system performance, particularly for key node placement in clustered wireless networks. This study analyzes this system model, considering both industrial properties of wireless networks and their mobility. Then, static and mobile node coverage problems are unified and simplified to target coverage problems. We propose a novel strategy for the deployment of clustered heads in grouped industrial mobile wireless networks (IMWNs) based on the improved maximal clique model and the iterative computation of new candidate cluster head positions. The maximal cliques are obtained via a double-layer Tabu search. Each cluster head updates its new position via an improved virtual force while moving with full coverage to find the minimal inter-cluster interference. Finally, we develop a simulation environment. The simulation results, based on a performance comparison, show the efficacy of the proposed strategies and their superiority over current approaches. PMID:29439439

  3. Efficient Deployment of Key Nodes for Optimal Coverage of Industrial Mobile Wireless Networks.

    Science.gov (United States)

    Li, Xiaomin; Li, Di; Dong, Zhijie; Hu, Yage; Liu, Chengliang

    2018-02-10

    In recent years, industrial wireless networks (IWNs) have been transformed by the introduction of mobile nodes, and they now offer increased extensibility, mobility, and flexibility. Nevertheless, mobile nodes pose efficiency and reliability challenges. Efficient node deployment and management of channel interference directly affect network system performance, particularly for key node placement in clustered wireless networks. This study analyzes this system model, considering both industrial properties of wireless networks and their mobility. Then, static and mobile node coverage problems are unified and simplified to target coverage problems. We propose a novel strategy for the deployment of clustered heads in grouped industrial mobile wireless networks (IMWNs) based on the improved maximal clique model and the iterative computation of new candidate cluster head positions. The maximal cliques are obtained via a double-layer Tabu search. Each cluster head updates its new position via an improved virtual force while moving with full coverage to find the minimal inter-cluster interference. Finally, we develop a simulation environment. The simulation results, based on a performance comparison, show the efficacy of the proposed strategies and their superiority over current approaches.

  4. A novel beamforming based model of coverage and transmission costing in IEEE 802.11 WLAN networks

    Directory of Open Access Journals (Sweden)

    Mehdi Guessous

    2017-09-01

    IEEE 802.11 WLAN indoor networks face major inherent and environmental issues such as interference, noise, and obstacles. At the same time, they must provide maximal service performance in highly changing radio environments and conform to various applications' requirements. For this purpose, they require a solid design approach that considers both inputs from the radio interface and the upper-layer services at every design step. The modeling of radio area coverage is a key component in this process and must build on feasible working hypotheses. It should also be able to account for the highly varying characteristics of dense indoor environments, technology advances, service design best practices, and end-to-end integration with other network parts: the Local Area Network (LAN), Wide Area Network (WAN) or Data Center Network (DCN). This work focuses on Radio Resource Management (RRM) as a key tool for achieving a solid design in WLAN indoor environments by planning frequency channel assignment, transmit directions and corresponding power levels. Its scope is limited to tackling co-channel interference but can easily be extended to address cross-channel interference. In this paper, we consider beamforming and costing techniques to augment the conventional RRM Transmit Power Control (TPC) procedures that market-leading vendors have implemented and that related research has worked on. We present a novel approach to radio coverage modeling and demonstrate its additions to the cited related-work models. Our solution model runs three algorithms to evaluate the transmission opportunities of Wireless Devices (WDs) under the coverage area. It builds on realistic hypotheses and a thorough understanding of system operation to evaluate such an opportunity to transmit, overcomes limitations of the compared related-work models, and integrates a hierarchical costing system to match Service Level Agreement (SLA) expectations. The term "opportunity" in this context relates also to the new

  5. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Jiang

    2017-01-01

    In existing event dynamic K-coverage algorithms, each management node selects its assistant node using a greedy algorithm without considering the residual energy or situations in which a node is selected by several events, which affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of events those nodes dynamically cover. Third, each management node establishes an optimization model that takes the expected energy consumption and the residual energy variance of its neighbor nodes, as well as the detection performance for the events it manages, as objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm is the first to consider the effect of harsh underwater environments on information collection and transmission, as well as the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.

  6. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    Science.gov (United States)

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons determined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304

  7. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng-Fei Wu

    2017-06-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons determined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.

  8. LTE-A cellular networks multi-hop relay for coverage, capacity and performance enhancement

    CERN Document Server

    Yahya, Abid

    2017-01-01

    In this book, three different methods are presented to enhance the capacity and coverage area in LTE-A cellular networks. The scope involves evaluating the effect of the RN location on capacity and determining the optimum location of the relay that provides the maximum achievable data rate for users with limited interference at the cell boundaries. This book presents a new model to enhance both capacity and coverage area in an LTE-A cellular network by determining the optimum location for the RN with limited interference. The new model is designed to enhance the capacity of the relay link by employing two antennas in the RN. This design enables the relay link to absorb more users in cell edge regions. An algorithm called the Balance Power Algorithm (BPA) is developed to reduce MR power consumption. The book pertains to postgraduate students and researchers in wireless & mobile communications. Provides a variety of methods for enhancing capacity and coverage in LTE-A cellular networks Develop...

  9. High speed network sampling

    OpenAIRE

    Rindalsholt, Ole Arild

    2005-01-01

    Master's thesis in network and system administration. Classical sampling methods play an important role in the current practice of Internet measurement. With today's high-speed networks, routers cannot manage to generate complete Netflow data for every packet; they have to perform restricted sampling. This thesis summarizes some of the most important sampling schemes and their applications before presenting an analysis of the effect of sampling on Netflow records.
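
    Two of the classical schemes such a study typically contrasts are systematic 1-in-N sampling and uniform probabilistic sampling. The sketch below is a generic illustration with synthetic packet sizes, not code from the thesis; it shows how a sampled byte count is scaled back up to estimate the true traffic volume.

      # Minimal sketch of two classical packet-sampling schemes: systematic
      # 1-in-N sampling and uniform probabilistic sampling. Packet sizes here
      # are synthetic; a real deployment would read Netflow/packet records.
      import random

      def systematic_sample(packets, n):
          """Keep every n-th packet."""
          return [p for i, p in enumerate(packets) if i % n == 0]

      def probabilistic_sample(packets, p):
          """Keep each packet independently with probability p."""
          return [pkt for pkt in packets if random.random() < p]

      if __name__ == "__main__":
          random.seed(1)
          packets = [random.choice((64, 576, 1500)) for _ in range(100_000)]  # bytes
          true_total = sum(packets)
          for name, sample, scale in (
              ("systematic 1-in-100", systematic_sample(packets, 100), 100),
              ("probabilistic p=0.01", probabilistic_sample(packets, 0.01), 100),
          ):
              est_total = scale * sum(sample)
              print(f"{name}: estimated volume {est_total}, true {true_total}")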

  10. Event Coverage Detection and Event Source Determination in Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhangbing Zhou

    2015-12-01

    Full Text Available With the advent of the Internet of Underwater Things, smart things are deployed in the ocean space and establish underwater wireless sensor networks for the monitoring of vast and dynamic underwater environments. When events are found to have possibly occurred, accurate event coverage should be detected, and potential event sources should be determined for the enactment of prompt and proper responses. To address this challenge, a technique that detects event coverage and determines event sources is developed in this article. Specifically, the occurrence of possible events corresponds to a set of neighboring sensor nodes whose sensory data may deviate from a normal sensing range in a collective fashion. An appropriate sensor node is selected as the relay node for gathering and routing sensory data to sink node(s). When sensory data are collected at sink node(s), the event coverage is detected and represented as a weighted graph, where the vertices in this graph correspond to sensor nodes and the weight specified upon the edges reflects the extent of sensory data deviating from a normal sensing range. Event sources are determined, which correspond to the barycenters in this graph. The results of the experiments show that our technique is more energy efficient, especially when the network topology is relatively steady.
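
    The event-source step builds a graph whose vertices are the deviating nodes and whose edge weights encode how strongly neighbouring readings deviate, and then takes the barycenter as the estimated source. The Python sketch below illustrates that idea on invented readings and links; the normal sensing range, the inverse-deviation weighting and the small graph are assumptions for illustration, not the article's exact formulation.

      # Illustrative sketch (not the authors' implementation): build an event
      # coverage graph from sensory deviations and pick its barycenter, i.e. the
      # vertex minimising the sum of shortest-path distances to all others.
      import itertools

      NORMAL_RANGE = (10.0, 20.0)          # assumed normal sensing range
      readings = {                          # synthetic node readings
          "s1": 27.0, "s2": 31.0, "s3": 24.0, "s4": 15.0, "s5": 29.5,
      }
      links = [("s1", "s2"), ("s2", "s3"), ("s2", "s5"), ("s1", "s5")]  # neighbours

      def deviation(value):
          lo, hi = NORMAL_RANGE
          return max(0.0, lo - value, value - hi)

      # vertices: nodes whose readings deviate from the normal range
      vertices = {n for n, v in readings.items() if deviation(v) > 0}
      # edge weight: the larger the joint deviation, the "closer" the two nodes,
      # so the inverse of the summed deviation is used as a distance-like weight
      dist = {(u, v): float("inf") for u in vertices for v in vertices}
      for v in vertices:
          dist[(v, v)] = 0.0
      for u, v in links:
          if u in vertices and v in vertices:
              w = 1.0 / (deviation(readings[u]) + deviation(readings[v]))
              dist[(u, v)] = dist[(v, u)] = w

      # Floyd-Warshall all-pairs shortest paths (graph is tiny in this sketch)
      for k, i, j in itertools.product(vertices, repeat=3):
          dist[(i, j)] = min(dist[(i, j)], dist[(i, k)] + dist[(k, j)])

      barycenter = min(vertices, key=lambda v: sum(dist[(v, u)] for u in vertices))
      print("deviating nodes:", sorted(vertices), "-> estimated event source:", barycenter)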

  11. A Coral Reef Algorithm Based on Learning Automata for the Coverage Control Problem of Heterogeneous Directional Sensor Networks.

    Science.gov (United States)

    Li, Ming; Miao, Chunyan; Leung, Cyril

    2015-12-04

    Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches.
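
    The decomposition step mentioned above turns the three objectives (coverage rate, number of working nodes, connectivity) into a single scalar via a weighted Tchebycheff function. A minimal Python sketch follows; the candidate objective vectors, weights and ideal point are placeholders rather than the paper's model, and the learning-automata-driven coral reef search itself is omitted.

      # Sketch of the weighted Tchebycheff scalarisation used to turn a
      # multi-objective coverage problem into a single objective. The objective
      # values and weights below are placeholders, not the paper's actual model.
      def tchebycheff(objectives, weights, ideal):
          """g(x) = max_i w_i * |f_i(x) - z*_i|  (to be minimised)."""
          return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

      # candidate solutions: (1 - coverage_rate, active_node_fraction, 1 - connectivity)
      candidates = {
          "A": (0.08, 0.60, 0.05),
          "B": (0.15, 0.40, 0.02),
          "C": (0.05, 0.75, 0.10),
      }
      weights = (0.5, 0.3, 0.2)      # decision-maker preference (assumed)
      ideal = (0.0, 0.0, 0.0)        # ideal point: full coverage, no nodes, connected

      best = min(candidates, key=lambda k: tchebycheff(candidates[k], weights, ideal))
      print("Tchebycheff-best candidate:", best)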

  12. An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.

    Science.gov (United States)

    Yoon, Yourim; Kim, Yong-Hyuk

    2013-10-01

    Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment; therefore, a more intelligent deployment strategy is needed. We found that, from a mathematical point of view, the phenotype space of the problem is a quotient space of the genotype space. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast, but also showed significant improvement in solution quality.
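
    The evaluation trick described above, estimating coverage by Monte Carlo sampling with a sample budget that grows over generations, can be sketched as follows. This is a generic illustration under an assumed binary disk sensing model on a square field, not the authors' evaluation function or their exact growth schedule.

      # Sketch (not the authors' code) of a Monte Carlo coverage estimate whose
      # sample count grows with the generation index, mirroring the idea of
      # cheap early evaluations and accurate late ones. A binary disk sensing
      # model and a square field are assumed for illustration.
      import random, math

      def coverage_estimate(sensors, radius, field, n_samples, rng):
          """Fraction of random field points within `radius` of at least one sensor."""
          covered = 0
          for _ in range(n_samples):
              x, y = rng.uniform(0, field), rng.uniform(0, field)
              if any(math.hypot(x - sx, y - sy) <= radius for sx, sy in sensors):
                  covered += 1
          return covered / n_samples

      if __name__ == "__main__":
          rng = random.Random(0)
          field, radius = 100.0, 12.0
          sensors = [(rng.uniform(0, field), rng.uniform(0, field)) for _ in range(30)]
          for generation in (1, 10, 50):
              n = 200 * generation              # sample budget grows over generations
              cov = coverage_estimate(sensors, radius, field, n, rng)
              print(f"generation {generation:3d}: {n:6d} samples -> coverage ~ {cov:.3f}")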

  13. Temporal trends in coverage of historical cardiac arrests using a volunteer-based network of automated external defibrillators accessible to laypersons and emergency dispatch centers.

    Science.gov (United States)

    Hansen, Carolina Malta; Lippert, Freddy Knudsen; Wissenberg, Mads; Weeke, Peter; Zinckernagel, Line; Ruwald, Martin H; Karlsson, Lena; Gislason, Gunnar Hilmar; Nielsen, Søren Loumann; Køber, Lars; Torp-Pedersen, Christian; Folke, Fredrik

    2014-11-18

    Although increased dissemination of automated external defibrillators (AEDs) has been associated with more frequent AED use, the trade-off between the number of deployed AEDs and coverage of cardiac arrests remains unclear. We investigated how volunteer-based AED dissemination affected public cardiac arrest coverage in high- and low-risk areas. All public cardiac arrests (1994-2011) and all registered AEDs (2007-2011) in Copenhagen, Denmark, were identified and geocoded. AED coverage of cardiac arrests was defined as historical arrests ≤100 m from an AED. High-risk areas were defined as those with ≥1 arrest every 2 years and accounted for 1.0% of the total city area. Of 1864 cardiac arrests, 18.0% (n=335) occurred in high-risk areas throughout the study period. From 2007 to 2011, the number of AEDs and the corresponding coverage of cardiac arrests increased from 36 to 552 and from 2.7% to 32.6%, respectively. The corresponding increase for high-risk areas was from 1 to 30 AEDs and coverage from 5.7% to 51.3%, respectively. Since the establishment of the AED network (2007-2011), few arrests (n=55) have occurred ≤100 m from an AED with only 14.5% (n=8) being defibrillated before the arrival of emergency medical services. Despite the lack of a coordinated public access defibrillation program, the number of AEDs increased 15-fold with a corresponding increase in cardiac arrest coverage from 2.7% to 32.6% over a 5-year period. The highest increase in coverage was observed in high-risk areas (from 5.7% to 51.3%). AED networks can serve as useful tools for optimizing AED placement in community settings. © 2014 American Heart Association, Inc.
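
    The study's coverage metric is simply the share of geocoded historical arrests lying no more than 100 m from at least one registered AED. A minimal Python sketch of that computation follows, using the haversine great-circle distance and synthetic coordinates; it illustrates the definition only, not the study's GIS workflow.

      # Minimal sketch of the coverage definition used in the study: the share
      # of historical cardiac arrests located no more than 100 m from a
      # registered AED. Coordinates below are synthetic placeholders.
      import math

      def haversine_m(lat1, lon1, lat2, lon2):
          """Great-circle distance in metres."""
          r = 6_371_000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      def aed_coverage(arrests, aeds, max_dist_m=100.0):
          covered = sum(
              1 for a in arrests
              if any(haversine_m(*a, *d) <= max_dist_m for d in aeds)
          )
          return covered / len(arrests)

      arrests = [(55.676, 12.568), (55.680, 12.570), (55.690, 12.600)]
      aeds = [(55.6762, 12.5685), (55.681, 12.571)]
      print(f"coverage: {aed_coverage(arrests, aeds):.1%}")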

  14. On Coverage and Capacity for Disaster Area Wireless Networks Using Mobile Relays

    Directory of Open Access Journals (Sweden)

    Guo Wenxuan

    2009-01-01

    Full Text Available Public safety organizations increasingly rely on wireless technology to provide effective communications during emergency and disaster response operations. This paper presents a comprehensive study on the dynamic placement of relay nodes (RNs) in a disaster area wireless network. It builds on our prior work on a mobility model that characterizes the spatial movement of first responders as mobile nodes (MNs) during their operations. We first investigate the COverage-oriented Relay Placement (CORP) problem, which maximizes the total number of MNs connected to the relays. Considering the network throughput, we then study the CApacity-oriented Relay Placement (CARP) problem, which maximizes the aggregated data rate of all MNs. For both the coverage and capacity studies, we provide optimal and greedy algorithms together with computational complexity analysis. Furthermore, simulation results are presented to compare the performance of the greedy and the optimal solutions for the CORP and CARP problems, respectively. It is shown that the greedy algorithms can achieve near-optimal performance at significantly lower computational complexity.
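
    A greedy coverage-oriented placement of the kind compared here repeatedly selects the candidate site that connects the most still-uncovered mobile nodes. The sketch below illustrates that generic greedy step on invented positions and an assumed radio range; it is written in the spirit of the CORP greedy algorithm rather than as a reproduction of it.

      # Sketch of a greedy coverage-oriented relay placement: repeatedly pick
      # the candidate site that connects the largest number of still-uncovered
      # mobile nodes. Positions and the radio range are illustrative assumptions.
      import math

      def covered_by(site, nodes, radio_range):
          return {i for i, (x, y) in enumerate(nodes)
                  if math.hypot(x - site[0], y - site[1]) <= radio_range}

      def greedy_relay_placement(candidate_sites, mobile_nodes, n_relays, radio_range):
          uncovered = set(range(len(mobile_nodes)))
          chosen = []
          for _ in range(n_relays):
              best = max(candidate_sites,
                         key=lambda s: len(covered_by(s, mobile_nodes, radio_range) & uncovered))
              chosen.append(best)
              uncovered -= covered_by(best, mobile_nodes, radio_range)
          return chosen, uncovered

      mobile_nodes = [(1, 1), (2, 2), (8, 1), (9, 2), (9, 9), (2, 9)]
      candidate_sites = [(1.5, 1.5), (8.5, 1.5), (8.5, 8.5), (1.5, 8.5), (5, 5)]
      relays, missed = greedy_relay_placement(candidate_sites, mobile_nodes, 2, 2.0)
      print("relay sites:", relays, "| uncovered mobile nodes:", missed)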

  15. Influenza vaccination coverage and reasons to refrain among high-risk persons in four European countries.

    NARCIS (Netherlands)

    Kroneman, M.; Essen, G.A. van; Paget, J.

    2006-01-01

    This paper examines influenza vaccine coverage using a population base of an average of 2300 persons in each of four European countries (Germany, Spain, Poland and Sweden). The reasons for non-vaccination of those in the high-risk groups were explored by questionnaire. The vaccine coverage rate

  16. Method of analysis of populations exposure and radio coverage of GSM and UMTS mobile phone networks

    International Nuclear Information System (INIS)

    Gaudaire, Francois; Noe, Nicolas; Dufour, Jean Benoit; De Seze, Rene; Cagnon, Patrice; Selmaoui, Brahim; Mauger, Samuel; Thuroczy, Georges; Mazet, Paul

    2012-01-01

    We present a description and preliminary results of a large-scale project (COMOP) started in 2009 by the Ministry for Ecology, Sustainable Development and Energy and the National Agency of Frequencies (ANFR). The work is a collaboration between the government, municipalities, associations, network operators and agencies, and concerns GSM and UMTS mobile phone networks. Sixteen voluntary pilot communes were selected to carry out the experiments. The work is based on numerical modeling methods combined with several innovative measurement campaigns. It consists of: - determining the current status of exposure to radiofrequency electromagnetic fields emitted by base station antennas through modeling and measurements; - investigating ways of reducing this exposure while assessing, in parallel, the impact on territorial coverage and quality of service of the mobile networks. The study provides a comprehensive scientific assessment of human exposure to radiofrequencies from base station antennas. These results constitute an innovative approach and are relevant to the current debate about the positioning of base stations. (authors)

  17. High Performance Networks for High Impact Science

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  18. A Network Coverage Information-Based Sensor Registry System for IoT Environments.

    Science.gov (United States)

    Jung, Hyunjun; Jeong, Dongwon; Lee, Sukhoon; On, Byung-Won; Baik, Doo-Kwon

    2016-07-25

    The Internet of Things (IoT) is expected to provide better services through the interaction of physical objects via the Internet. However, its limitations cause an interoperability problem when the sensed data are exchanged between the sensor nodes in wireless sensor networks (WSNs), which constitute the core infrastructure of the IoT. To address this problem, a Sensor Registry System (SRS) is used. By using an SRS, the information of the heterogeneous sensed data remains pure. If users move along a road, their mobile devices predict their next positions and obtain the sensed data for those positions from the SRS. If the WSNs in the locations through which the users move are unstable, the sensed data will be lost. Consider a situation where the user passes through dangerous areas. If the user's mobile device cannot receive information, they cannot be warned about the dangerous situation. To avoid this, two novel SRSs that use network coverage information have been proposed: one uses OpenSignal and the other uses the probabilistic distribution of the users accessing the SRS. The empirical study showed that the proposed method can seamlessly provide services related to sensing data under any abnormal circumstance.

  19. Improving Spectral Capacity and Wireless Network Coverage by Cognitive Radio Technology and Relay Nodes in Cellular Systems

    DEFF Research Database (Denmark)

    Frederiksen, Flemming Bjerge

    2008-01-01

    Methods to enhance the use of the frequency spectrum by automatic spectrum sensing plus spectrum sharing in a cognitive radio technology context have been presented and discussed in this paper. Ideas to improve wireless transmission by orthogonal OFDM-based communication and to increase the coverage of cellular systems by means of future wireless networks, relay channels, relay stations and collaborative radio have been presented as well. A revised hierarchical deployment of future wireless and wired networks is briefly discussed.

  20. CoCMA: Energy-Efficient Coverage Control in Cluster-Based Wireless Sensor Networks Using a Memetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yung-Chung Wang

    2009-06-01

    Full Text Available Deployment of wireless sensor networks (WSNs) has drawn much attention in recent years. Given the limited energy of sensor nodes, it is critical to implement WSNs with energy-efficient designs. Sensing coverage, on the other hand, may degrade gradually over time after WSNs are activated. For mission-critical applications, therefore, energy-efficient coverage control should be taken into consideration to support the quality of service (QoS) of WSNs. Coverage-controlling strategies usually present some challenging problems: (1) resolving the conflicts while determining which nodes should be turned off to conserve energy; (2) designing an optimal wake-up scheme that avoids awakening more nodes than necessary. In this paper, we implement energy-efficient coverage control in cluster-based WSNs using a Memetic Algorithm (MA)-based approach, entitled CoCMA, to resolve these challenging problems. CoCMA contains two optimization strategies: an MA-based schedule for sensor nodes and a wake-up scheme, which are responsible for prolonging the network lifetime while maintaining coverage preservation. The MA-based schedule is applied to a given WSN to avoid unnecessary energy consumption caused by redundant nodes. During network operation, the wake-up scheme awakens sleeping sensor nodes to recover coverage holes caused by dead nodes. The performance evaluation of the proposed CoCMA was conducted on a cluster-based WSN (CWSN) under either a random or a uniform deployment of sensor nodes. Simulation results show that the performance yielded by the combination of the MA and the wake-up scheme is better than that of some existing approaches. Furthermore, CoCMA is able to activate fewer sensor nodes to monitor the required sensing area.
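
    The wake-up step can be pictured as follows: when an active node dies, probe points inside its former sensing disk and wake the sleeping node that recovers the most uncovered points. The Python sketch below illustrates only this recovery heuristic under an assumed disk sensing model and invented node positions; the MA-based schedule itself is not reproduced.

      # Sketch of the wake-up idea only (the MA-based schedule is omitted):
      # when an active node dies, probe sample points inside its sensing disk
      # and wake the sleeping node that recovers the most points left uncovered.
      # Geometry and parameters are illustrative assumptions.
      import math, random

      R = 10.0                                   # assumed sensing radius
      active = [(20, 20), (35, 20), (20, 35)]
      sleeping = [(28, 27), (50, 50), (24, 24)]
      dead = (27, 27)                            # node that just failed

      def covers(node, point, r=R):
          return math.hypot(node[0] - point[0], node[1] - point[1]) <= r

      rng = random.Random(3)
      # random probe points inside the dead node's former sensing disk
      probes = []
      while len(probes) < 200:
          x = rng.uniform(dead[0] - R, dead[0] + R)
          y = rng.uniform(dead[1] - R, dead[1] + R)
          if covers(dead, (x, y)):
              probes.append((x, y))

      hole = [p for p in probes if not any(covers(a, p) for a in active)]
      best = max(sleeping, key=lambda s: sum(covers(s, p) for p in hole))
      print(f"coverage hole ~{len(hole)} of {len(probes)} probes; wake node at {best}")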

  1. Influence of Landscape Coverage on Measuring Spatial and Length Properties of Rock Fracture Networks: Insights from Numerical Simulation

    Science.gov (United States)

    Cao, Wenzhuo; Lei, Qinghua

    2018-01-01

    Natural fractures are ubiquitous in the Earth's crust and often deeply buried in the subsurface. Due to the difficulty of accessing their three-dimensional structures, the study of fracture network geometry is usually achieved by sampling two-dimensional (2D) exposures at the Earth's surface through outcrop mapping or aerial photograph techniques. However, the measurement results can be considerably affected by the coverage of forests and other plant species over the exposed fracture patterns. We quantitatively study such effects using numerical simulation. We consider the scenario of nominally isotropic natural fracture systems and represent them using 2D discrete fracture network models governed by fractal and length scaling parameters. The groundcover is modelled as random patches superimposed onto the 2D fracture patterns. The effects of the localisation and total coverage of landscape patches are further investigated. The fractal dimension and length exponent of the covered fracture networks are measured and compared with those of the original non-covered patterns. The results show that the measured length exponent increases with reduced localisation and increased coverage of landscape patches, which is more evident for networks dominated by very large fractures (i.e. a small underlying length exponent). However, the landscape coverage seems to have a minor impact on the fractal dimension measurement. The research findings of this paper have important implications for field surveys and statistical analysis of geological systems.
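
    The bias on the length exponent can be reproduced with a much simpler experiment than the paper's discrete fracture network model: draw trace lengths from a power law, hide randomly covered cells so that long traces break into shorter visible runs, and re-fit the exponent. The Python sketch below does exactly that under assumed parameters; the per-cell coverage probability is a crude stand-in for the paper's random landscape patches.

      # Numerical sketch (simplified, not the paper's DFN model): draw fracture
      # trace lengths from a power law, censor them with random groundcover, and
      # re-fit the length exponent, showing how coverage inflates the estimate.
      import random, math

      def sample_power_law(n, exponent, l_min, rng):
          """Inverse-transform sampling of lengths with pdf ~ l^(-exponent), l >= l_min."""
          return [l_min * (1.0 - rng.random()) ** (-1.0 / (exponent - 1.0)) for _ in range(n)]

      def mle_exponent(lengths, l_min):
          """Continuous maximum-likelihood estimate of the power-law exponent."""
          return 1.0 + len(lengths) / sum(math.log(l / l_min) for l in lengths)

      def censor(lengths, cover_prob, l_min, rng):
          """Split each trace into unit cells, hide covered cells, keep visible runs."""
          observed = []
          for l in lengths:
              run = 0
              for _ in range(int(round(l))):
                  if rng.random() < cover_prob:       # this cell hidden by groundcover
                      if run >= l_min:
                          observed.append(run)
                      run = 0
                  else:
                      run += 1
              if run >= l_min:
                  observed.append(run)
          return observed

      rng = random.Random(42)
      l_min, true_exponent = 2.0, 2.2
      traces = sample_power_law(20_000, true_exponent, l_min, rng)
      print(f"no cover     : a = {mle_exponent(traces, l_min):.2f}")
      for cover in (0.05, 0.20):
          visible = censor(traces, cover, l_min, rng)
          print(f"cover p={cover:.2f} : a = {mle_exponent(visible, l_min):.2f}")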

  2. Root coverage procedures improve patient aesthetics. A systematic review and Bayesian network meta-analysis.

    Science.gov (United States)

    Cairo, Francesco; Pagliaro, Umberto; Buti, Jacopo; Baccini, Michela; Graziani, Filippo; Tonelli, Paolo; Pagavino, Gabriella; Tonetti, Maurizio S

    2016-11-01

    The aim of this study was to perform a systematic review (SR) of randomized controlled trials (RCTs) to explore if periodontal plastic surgery procedures for the treatment of single and multiple gingival recessions (Rec) may improve aesthetics at patient and professional levels. In order to combine evidence from direct and indirect comparisons by different trials a Bayesian network meta-analysis (BNM) was planned. A literature search on PubMed, Cochrane libraries, EMBASE, and hand-searched journals until January 2016 was conducted to identify RCTs presenting aesthetic outcomes after root coverage using standardized evaluations at patient and professional level. A total of 16 RCTs were selected in the SR; three RCTs presenting professional aesthetic evaluation with Root coverage Aesthetic Score (RES) and three showing final self-perception using the Visual Analogue Scale (VAS Est) could be included in a BNM model. Coronally Advanced Flap plus Connective Tissue Graft (CAF + CTG) and CAF + Acellular Dermal Matrix (ADM) and Autologous Fibroblasts (AF) were associated with the best RES outcomes (best probability = 24% and 64%, respectively), while CAF + CTG and CAF + CTG + Enamel matrix Derivatives (EMD) obtained the highest values of VAS Est score (best probability = 44% and 26%, respectively). Periodontal Plastic Surgery (PPS) techniques applying grafts underneath CAF with or without the addition of EMD are associated with improved aesthetics assessed by final patient perception and RES as professional evaluation system. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Greedy Sparse Approaches for Homological Coverage in Location Unaware Sensor Networks

    Science.gov (United States)

    2017-12-08

    [The indexed text for this record is fragmentary. The recoverable portions indicate that the report addresses homological coverage problems in location-unaware sensor networks, including coverage hole detection, coverage verification and hole localization, using greedy sparse approaches in which redundant nodes broadcast themselves as candidates for collapse.]

  4. Measles vaccination coverage in high-incidence areas of the ...

    African Journals Online (AJOL)

    including cold-chain maintenance.[5] ... Methods. Households were consecutively sampled in high-incidence areas identified using measles epidemic surveillance data. ... ratio in under-5s was 6.9/1 000 (Department of Health, Provincial.

  5. Methods used for immunization coverage assessment in Canada, a Canadian Immunization Research Network (CIRN) study.

    Science.gov (United States)

    Wilson, Sarah E; Quach, Susan; MacDonald, Shannon E; Naus, Monika; Deeks, Shelley L; Crowcroft, Natasha S; Mahmud, Salaheddin M; Tran, Dat; Kwong, Jeff; Tu, Karen; Gilbert, Nicolas L; Johnson, Caitlin; Desai, Shalini

    2017-08-03

    Accurate and complete immunization data are necessary to assess vaccine coverage, safety and effectiveness. Across Canada, different methods and data sources are used to assess vaccine coverage, but these have not been systematically described. Our primary objective was to examine and describe the methods used to determine immunization coverage in Canada. The secondary objective was to compare routine infant and childhood coverage estimates derived from the Canadian 2013 Childhood National Immunization Coverage Survey (cNICS) with estimates collected from provinces and territories (P/Ts). We collected information from key informants regarding their provincial, territorial or federal methods for assessing immunization coverage. We also collected P/T coverage estimates for select antigens and birth cohorts to determine absolute differences between these and estimates from cNICS. Twenty-six individuals across 16 public health organizations participated between April and August 2015. Coverage surveys are conducted regularly for toddlers in Quebec and in one health authority in British Columbia. Across P/Ts, different methodologies for measuring coverage are used (e.g., valid doses, grace periods). Most P/Ts, except Ontario, measure up-to-date (UTD) coverage and 4 P/Ts also assess on-time coverage. The degree of concordance between P/T and cNICS coverage estimates varied by jurisdiction, antigen and age group. In addition to differences in the data sources and processes used for coverage assessment, there are also differences between Canadian P/Ts in the methods used for calculating immunization coverage. Comparisons between P/T and cNICS estimates leave remaining questions about the proportion of children fully vaccinated in Canada.
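
    Two of the methodological choices that differ across jurisdictions, counting only valid doses and applying a grace period when classifying a child as up-to-date (UTD), can be illustrated with a small sketch. The two-dose schedule, minimum age, grace period and records below are invented placeholders, not any provincial or territorial schedule.

      # Illustrative sketch of two methodological choices mentioned in the study:
      # counting only "valid" doses (given at or after an assumed minimum age)
      # and applying a grace period. Schedule and records are invented.
      GRACE_DAYS = 30
      SCHEDULE = [(365, 1), (547, 2)]   # (age in days, cumulative doses required)

      def valid_doses(dose_ages, min_age_days=42):
          """Doses given before the assumed minimum age are not counted."""
          return [a for a in dose_ages if a >= min_age_days]

      def up_to_date(dose_ages, age_days, grace_days=GRACE_DAYS):
          doses = valid_doses(dose_ages)
          for due_age, required in SCHEDULE:
              deadline = due_age + grace_days
              if age_days > deadline and sum(1 for a in doses if a <= deadline) < required:
                  return False
          return True

      children = {                      # child: (age in days, ages at each dose)
          "c1": (730, [370, 560]),
          "c2": (730, [380]),           # missing second dose
          "c3": (730, [30, 380, 555]),  # first dose too early, but still two valid
      }
      covered = [c for c, (age, doses) in children.items() if up_to_date(doses, age)]
      print(f"UTD coverage: {len(covered)}/{len(children)} = {len(covered)/len(children):.0%}")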

  6. Coverage extension and balancing the transmitted power of the moving relay node at LTE-A cellular network.

    Science.gov (United States)

    Aldhaibani, Jaafar A; Yahya, Abid; Ahmad, R Badlishah

    2014-01-01

    The poor capacity at cell boundaries is not enough to meet the growing demand and the stringent requirement for high capacity and throughput irrespective of the user's location in the cellular network. In this paper, we propose new schemes for optimum fixed relay node (RN) placement in an LTE-A cellular network to enhance throughput and extend coverage at the cell edge region. The proposed approach mitigates interference between all nodes and ensures optimum utilization through optimization of the transmitted power. Moreover, we propose a new algorithm to balance the transmitted power of a moving relay node (MR) over the cell size, providing the required SNR and throughput for users inside the vehicle while reducing the power consumed by the MR. The numerical analysis, along with the simulation results, indicates a 40% improvement in downlink capacity for users relative to the cell capacity. Furthermore, the results reveal a saving of nearly 75% in the transmitted power of the MR when the proposed balancing algorithm is used. The ATDI simulator, which handles real digital cartography and standard terrain formats, was used to verify the numerical results.

  7. Health Insurance without Single Crossing : Why Healthy People have High Coverage

    NARCIS (Netherlands)

    Boone, J.; Schottmuller, C.

    2011-01-01

    Standard insurance models predict that people with high (health) risks have high insurance coverage. It is empirically documented that people with high income have lower health risks and are better insured. We show that income differences between risk types lead to a violation of single crossing in

  8. K Coverage Probability of 5G Wireless Cognitive Radio Network under Shadow Fading Effects

    Directory of Open Access Journals (Sweden)

    Ankur S. Kang

    2016-09-01

    Full Text Available Land mobile communication is burdened with typical propagation constraints due to the channel characteristics of radio systems. The propagation characteristics vary from place to place and, as the mobile unit moves, from time to time. Hence, the transmission path between transmitter and receiver varies from a simple direct LOS to one severely obstructed by buildings, foliage and terrain. Multipath propagation and shadow fading affect the signal strength of an arbitrary transmitter-receiver pair through rapid fluctuations in the phase and amplitude of the signal, which also determine the average power over an area of tens or hundreds of metres. Shadowing introduces additional fluctuations, so the received local mean power varies around the area mean. The present work deals with the performance analysis of a fifth-generation wireless cognitive radio network in terms of the signal- and interference-level-based k-coverage probability under shadow fading effects.
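
    A k-coverage probability under shadow fading can be estimated by simple Monte Carlo: count how often at least k transmitters deliver an adequate received level once distance path loss and a log-normal (Gaussian-in-dB) shadowing term are applied. The sketch below is such a generic estimator; the path-loss exponent, shadowing spread, powers and threshold are illustrative assumptions rather than the paper's system model.

      # Monte Carlo sketch of a k-coverage probability under log-normal
      # shadowing: probability that at least k transmitters deliver an adequate
      # received level. All parameters are illustrative assumptions.
      import random, math

      def k_coverage_probability(k, n_tx, cell_radius, tx_power_dbm,
                                 pl_exp, sigma_db, threshold_dbm, trials, rng):
          hits = 0
          for _ in range(trials):
              good_links = 0
              for _ in range(n_tx):
                  # transmitter dropped uniformly in a disk around the user
                  d = max(cell_radius * math.sqrt(rng.random()), 1.0)
                  path_loss_db = 10.0 * pl_exp * math.log10(d)
                  shadow_db = rng.gauss(0.0, sigma_db)        # log-normal shadowing
                  if tx_power_dbm - path_loss_db + shadow_db >= threshold_dbm:
                      good_links += 1
              if good_links >= k:
                  hits += 1
          return hits / trials

      rng = random.Random(7)
      for k in (1, 2, 3):
          p = k_coverage_probability(k, n_tx=6, cell_radius=500.0, tx_power_dbm=30.0,
                                     pl_exp=3.5, sigma_db=8.0, threshold_dbm=-70.0,
                                     trials=20_000, rng=rng)
          print(f"P(at least {k} usable links) ~ {p:.3f}")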

  9. Experimental high-speed network

    Science.gov (United States)

    McNeill, Kevin M.; Klein, William P.; Vercillo, Richard; Alsafadi, Yasser H.; Parra, Miguel V.; Dallas, William J.

    1993-09-01

    Many existing local area networking protocols currently applied in medical imaging were originally designed for relatively low-speed, low-volume networking. These protocols utilize small packet sizes appropriate for text based communication. Local area networks of this type typically provide raw bandwidth under 125 MHz. These older network technologies are not optimized for the low delay, high data traffic environment of a totally digital radiology department. Some current implementations use point-to-point links when greater bandwidth is required. However, the use of point-to-point communications for a total digital radiology department network presents many disadvantages. This paper describes work on an experimental multi-access local area network called XFT. The work includes the protocol specification, and the design and implementation of network interface hardware and software. The protocol specifies the Physical and Data Link layers (OSI layers 1 & 2) for a fiber-optic based token ring providing a raw bandwidth of 500 MHz. The protocol design and implementation of the XFT interface hardware includes many features to optimize image transfer and provide flexibility for additional future enhancements which include: a modular hardware design supporting easy portability to a variety of host system buses, a versatile message buffer design providing 16 MB of memory, and the capability to extend the raw bandwidth of the network to 3.0 GHz.

  10. Nanoparticle layer deposition for highly controlled multilayer formation based on high-coverage monolayers of nanoparticles

    International Nuclear Information System (INIS)

    Liu, Yue; Williams, Mackenzie G.; Miller, Timothy J.; Teplyakov, Andrew V.

    2016-01-01

    This paper establishes a strategy for chemical deposition of functionalized nanoparticles onto solid substrates in a layer-by-layer process based on self-limiting surface chemical reactions leading to complete monolayer formation within the multilayer system without any additional intermediate layers — nanoparticle layer deposition (NPLD). This approach is fundamentally different from previously established traditional layer-by-layer deposition techniques and is conceptually more similar to well-known atomic and molecular layer deposition processes. The NPLD approach uses efficient chemical functionalization of the solid substrate material and complementary functionalization of nanoparticles to produce a nearly 100% coverage of these nanoparticles with the use of “click chemistry”. Following this initial deposition, a second complete monolayer of nanoparticles is deposited using a copper-catalyzed “click reaction” with the azide-terminated silica nanoparticles of a different size. This layer-by-layer growth is demonstrated to produce stable covalently-bound multilayers of nearly perfect structure over macroscopic solid substrates. The formation of stable covalent bonds is confirmed spectroscopically and the stability of the multilayers produced is tested by sonication in a variety of common solvents. The 1-, 2- and 3-layer structures are interrogated by electron microscopy and atomic force microscopy and the thickness of the multilayers formed is fully consistent with that expected for highly efficient monolayer formation with each cycle of growth. This approach can be extended to include a variety of materials deposited in a predesigned sequence on different substrates with a highly conformal filling. - Highlights: • We investigate the formation of high-coverage monolayers of nanoparticles. • We use “click chemistry” to form these monolayers. • We form multiple layers based on the same strategy. • We confirm the formation of covalent bonds

  11. Future High Capacity Backbone Networks

    DEFF Research Database (Denmark)

    Wang, Jiayuan

    This thesis - Future High Capacity Backbone Networks - deals with the energy efficiency problems associated with the development of future optical networks. In the first half of the thesis, novel approaches for using multiple/single alternative energy sources for improving energy efficiency are proposed. The work focuses on energy efficient routing algorithms in a dynamic optical core network environment, with Generalized MultiProtocol Label Switching (GMPLS) as the control plane. Energy efficient routing algorithms for energy savings and CO2 savings are proposed, and their performance is evaluated; approaches aiming at reducing the dynamic part of the energy consumption of the network may meanwhile increase the fixed part of the energy consumption. In the second half of the thesis, the conflict between energy efficiency and Quality of Service (QoS) is addressed by introducing a novel software defined ...

  12. Markets, voucher subsidies and free nets combine to achieve high bed net coverage in rural Tanzania

    Directory of Open Access Journals (Sweden)

    Gerrets Rene PM

    2008-06-01

    Full Text Available Background Tanzania has a well-developed network of commercial ITN retailers. In 2004, the government introduced a voucher subsidy for pregnant women and, in mid 2005, helped distribute free nets to under-fives in a small number of districts, including Rufiji on the southern coast, during a child health campaign. Contributions of these multiple insecticide-treated net delivery strategies existing at the same time and place to coverage in a poor rural community were assessed. Methods A cross-sectional household survey of 6,331 members of 1,752 randomly selected households in 31 rural villages of the Demographic Surveillance System in Rufiji district, southern Tanzania, was conducted in 2006. A questionnaire was administered to every consenting respondent about net use, treatment status and delivery mechanism. Findings Net use was 62.7% overall, 87.2% amongst infants (0 to 1 year), 81.8% amongst young children (>1 to 5 years), 54.5% amongst older children (6 to 15 years) and 59.6% amongst adults (>15 years). 30.2% of all nets had been treated in the six months prior to interview. The biggest source of nets used by infants was purchase from the private sector with a voucher subsidy (41.8%). Half of the nets used by young children (50.0%) and over a third of those used by older children (37.2%) were obtained free of charge through the vaccination campaign. The largest source of nets amongst the population overall was commercial purchase (45.1% use) and this was the primary means of protecting adults (60.2% use). All delivery mechanisms, especially sale of nets at full market price, under-served the poorest, but no difference in equity was observed between voucher-subsidized and freely distributed nets. Conclusion All three delivery strategies enabled a poor rural community to achieve net coverage high enough to yield both personal and community-level protection for the entire population. Each of them reached their relevant target group and free nets only temporarily

  13. Energy-efficient area coverage for intruder detection in sensor networks

    CERN Document Server

    He, Shibo; Li, Junkun

    2014-01-01

    This Springer Brief presents recent research results on area coverage for intruder detection from an energy-efficient perspective. These results cover a variety of topics, including environmental surveillance and security monitoring. The authors also provide the background and range of applications for area coverage and elaborate on system models such as the formal definition of area coverage and sensing models. Several chapters focus on energy-efficient intruder detection and intruder trapping under the well-known binary sensing model, along with intruder trapping under the probabilistic sens

  14. When Langmuir is too simple: H-2 dissociation on Pd(111) at high coverage

    DEFF Research Database (Denmark)

    Lopez, Nuria; Lodziana, Zbigniew; Illas, F.

    2004-01-01

    Recent experiments of H-2 adsorption on Pd(111) [T. Mitsui et al., Nature (London) 422, 705 (2003)] have questioned the classical Langmuir picture of second order adsorption kinetics at high surface coverage requiring pairs of empty sites for the dissociative chemisorption. Experiments find that ...

  15. High speed all optical networks

    Science.gov (United States)

    Chlamtac, Imrich; Ganz, Aura

    1990-01-01

    An inherent problem of conventional point-to-point wide area network (WAN) architectures is that they cannot translate optical transmission bandwidth into comparable user available throughput due to the limiting electronic processing speed of the switching nodes. The first solution to wavelength division multiplexing (WDM) based WAN networks that overcomes this limitation is presented. The proposed Lightnet architecture takes into account the idiosyncrasies of WDM switching/transmission leading to an efficient and pragmatic solution. The Lightnet architecture trades the ample WDM bandwidth for a reduction in the number of processing stages and a simplification of each switching stage, leading to drastically increased effective network throughputs. The principle of the Lightnet architecture is the construction and use of virtual topology networks, embedded in the original network in the wavelength domain. For this construction Lightnets utilize the new concept of lightpaths which constitute the links of the virtual topology. Lightpaths are all-optical, multihop, paths in the network that allow data to be switched through intermediate nodes using high throughput passive optical switches. The use of the virtual topologies and the associated switching design introduce a number of new ideas, which are discussed in detail.

  16. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP

  17. High voltage power network construction

    CERN Document Server

    Harker, Keith

    2018-01-01

    This book examines the key requirements, considerations, complexities and constraints relevant to the task of high voltage power network construction, from design, finance, contracts and project management to installation and commissioning, with the aim of providing an overview of the holistic end to end construction task in a single volume.

  18. Studies of high coverage oxidation of the Cu(100) surface using low energy positrons

    Science.gov (United States)

    Fazleev, N. G.; Maddox, W. B.; Weiss, A. H.

    2012-02-01

    The study of oxidation of single crystal metal surfaces is important in understanding the corrosive and catalytic processes associated with thin film metal oxides. The structures formed on oxidized transition metal surfaces vary from simple adlayers of chemisorbed oxygen to more complex structures which result from the diffusion of oxygen into subsurface regions. In this work we present the results of theoretical studies of positron surface and bulk states and annihilation probabilities of surface-trapped positrons with relevant core electrons at the oxidized Cu(100) surface under conditions of high oxygen coverage. Calculations are performed for various high coverage missing row structures ranging between 0.50 and 1.50 ML oxygen coverage. The results of calculations of positron binding energy, positron work function, and annihilation characteristics of surface trapped positrons with relevant core electrons as function of oxygen coverage are compared with experimental data obtained from studies of oxidation of the Cu(100) surface using positron annihilation induced Auger electron spectroscopy (PAES).

  19. Extending the coverage of the internet of things with low-cost nanosatellite networks

    Science.gov (United States)

    Almonacid, Vicente; Franck, Laurent

    2017-09-01

    Recent technology advances have made CubeSats not only an affordable means of access to space, but also promising platforms for developing a new variety of space applications. In this paper, we explore the idea of using nanosatellites as access points to provide extended coverage to the Internet of Things (IoT) and Machine-to-Machine (M2M) communications. This study is mainly motivated by two facts: on the one hand, it is already obvious that the number of machine-type devices deployed globally will experience exponential growth over the forthcoming years. This trend is pushed by the available terrestrial cellular infrastructure, which allows adding support for M2M connectivity at marginal cost. On the other hand, the same growth is not observed in remote areas that must rely on space-based connectivity. In such environments, the demand for M2M communications is potentially large, yet it is challenged by the lack of cost-effective service providers. The traffic characteristics of typical M2M applications translate into the requirement for an extremely low cost per transmitted message. Under these strong economic constraints, we expect that nanosatellites in low Earth orbit will play a fundamental role in overcoming what we may call the IoT digital divide. The objective of this paper is therefore to provide a general analysis of a nanosatellite-based, global IoT/M2M network. We put emphasis on the engineering challenges faced in designing the Earth-to-space communication link, where the adoption of an efficient multiple-access scheme is paramount for ensuring connectivity to a large number of terminal nodes. In particular, the trade-offs between energy efficiency and access delay and between energy efficiency and throughput are discussed, and a novel access approach suitable for delay-tolerant applications is proposed. Thus, by keeping a system-level standpoint, we identify key issues and discuss perspectives towards energy-efficient and cost-effective solutions.

  20. Collaborative Event-Driven Coverage and Rate Allocation for Event Miss-Ratio Assurances in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ozgur Sanli H

    2010-01-01

    Full Text Available Wireless sensor networks are often required to provide event miss-ratio assurance for a given event type. To meet such assurances along with minimum energy consumption, this paper shows how a node's activation and rate assignment is dependent on its distance to event sources, and proposes a practical coverage and rate allocation (CORA) protocol to exploit this dependency in realistic environments. Both uniform event distribution and nonuniform event distribution are considered and the notion of ideal correlation distance around a clusterhead is introduced for on-duty node selection. In correlation distance guided CORA, rate assignment assists coverage scheduling by determining which nodes should be activated for minimizing data redundancy in transmission. Coverage scheduling assists rate assignment by controlling the amount of overlap among sensing regions of neighboring nodes, thereby providing sufficient data correlation for rate assignment. Extensive simulation results show that CORA meets the required event miss-ratios in realistic environments. CORA's joint coverage scheduling and rate allocation reduce the total energy expenditure by 85%, average battery energy consumption by 25%, and the overhead of source coding up to 90% as compared to existing rate allocation techniques.

  1. High Availability in Optical Networks

    Science.gov (United States)

    Grover, Wayne D.; Wosinska, Lena; Fumagalli, Andrea

    2005-09-01

    Call for Papers: High Availability in Optical Networks Submission Deadline: 1 January 2006 The Journal of Optical Networking (JON) is soliciting papers for a feature Issue pertaining to all aspects of reliable components and systems for optical networks and concepts, techniques, and experience leading to high availability of services provided by optical networks. Most nations now recognize that telecommunications in all its forms -- including voice, Internet, video, and so on -- are "critical infrastructure" for the society, commerce, government, and education. Yet all these services and applications are almost completely dependent on optical networks for their realization. "Always on" or apparently unbreakable communications connectivity is the expectation from most users and for some services is the actual requirement as well. Achieving the desired level of availability of services, and doing so with some elegance and efficiency, is a meritorious goal for current researchers. This requires development and use of high-reliability components and subsystems, but also concepts for active reconfiguration and capacity planning leading to high availability of service through unseen fast-acting survivability mechanisms. The feature issue is also intended to reflect some of the most important current directions and objectives in optical networking research, which include the aspects of integrated design and operation of multilevel survivability and realization of multiple Quality-of-Protection service classes. Dynamic survivable service provisioning, or batch re-provisioning is an important current theme, as well as methods that achieve high availability at far less investment in spare capacity than required by brute force service path duplication or 100% redundant rings, which is still the surprisingly prevalent practice. Papers of several types are envisioned in the feature issue, including outlook and forecasting types of treatments, optimization and analysis, new

  2. Offline High pH Reversed-Phase Peptide Fractionation for Deep Phosphoproteome Coverage

    DEFF Research Database (Denmark)

    Batth, Tanveer S; Olsen, Jesper V

    2016-01-01

    Protein phosphorylation, a process in which kinases modify serines, threonines, and tyrosines with phosphoryl groups, is of major importance in eukaryotic biology. Protein phosphorylation events are key initiators of signaling responses which determine cellular outcomes after environmental ... and metabolic stimuli, and are thus highly regulated. Therefore, studying the mechanism of regulation by phosphorylation, and pinpointing the exact site of phosphorylation on proteins, is of high importance. This protocol describes in detail a phosphoproteomics workflow for ultra-deep coverage by fractionating

  3. Defining the essential anatomical coverage provided by military body armour against high energy projectiles.

    Science.gov (United States)

    Breeze, John; Lewis, E A; Fryer, R; Hepper, A E; Mahoney, Peter F; Clasper, Jon C

    2016-08-01

    Body armour is a type of equipment worn by military personnel that aims to prevent or reduce the damage caused by ballistic projectiles to structures within the thorax and abdomen. Such injuries remain the leading cause of potentially survivable deaths on the modern battlefield. Recent developments in computer modelling in conjunction with a programme to procure the next generation of UK military body armour has provided the impetus to re-evaluate the optimal anatomical coverage provided by military body armour against high energy projectiles. A systematic review of the literature was undertaken to identify those anatomical structures within the thorax and abdomen that if damaged were highly likely to result in death or significant long-term morbidity. These structures were superimposed upon two designs of ceramic plate used within representative body armour systems using a computerised representation of human anatomy. Those structures requiring essential medical coverage by a plate were demonstrated to be the heart, great vessels, liver and spleen. For the 50th centile male anthropometric model used in this study, the front and rear plates from the Enhanced Combat Body Armour system only provide limited coverage, but do fulfil their original requirement. The plates from the current Mark 4a OSPREY system cover all of the structures identified in this study as requiring coverage except for the abdominal sections of the aorta and inferior vena cava. Further work on sizing of plates is recommended due to its potential to optimise essential medical coverage. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  4. Amount of newspaper coverage of high school athletics for boys and girls on sports page and newspaper circulation.

    Science.gov (United States)

    Pedersen, Paul M; Whisenant, Warren A

    2002-02-01

    This study analyzed the amount of coverage of high school athletics in 43 newspapers. Newspapers with small circulation, by devoting 40% of their interscholastic athletics coverage to girls' athletics, printed significantly more articles about girls' athletics than did newspapers with medium (33%) or large (32%) circulation. Therefore, the smaller the newspaper circulation, the more equitable the coverage of athletics for girls and boys. This finding was consistent with some prior work but not all.

  5. Interference Impact on Coverage and Capacity for Low Power Wide Area IoT Networks

    DEFF Research Database (Denmark)

    Vejlgaard, Benny; Lauridsen, Mads; Nguyen, Huan Cong

    2017-01-01

    In this paper we analyze and discuss the coverage and capacity of Sigfox and LoRaWAN in a large scale urban environments covering 150 km2 in Northern Denmark. First, the study measures and analyzes interference in the European 868 MHz license free industrial, scientific, and medical band, creating...

  6. A High-Coverage Yersinia pestis Genome from a Sixth-Century Justinianic Plague Victim.

    Science.gov (United States)

    Feldman, Michal; Harbeck, Michaela; Keller, Marcel; Spyrou, Maria A; Rott, Andreas; Trautmann, Bernd; Scholz, Holger C; Päffgen, Bernd; Peters, Joris; McCormick, Michael; Bos, Kirsten; Herbig, Alexander; Krause, Johannes

    2016-11-01

    The Justinianic Plague, which started in the sixth century and lasted to the mid eighth century, is thought to be the first of three historically documented plague pandemics causing massive casualties. Historical accounts and molecular data suggest the bacterium Yersinia pestis as its etiological agent. Here we present a new high-coverage (17.9-fold) Y. pestis genome obtained from a sixth-century skeleton recovered from a southern German burial site close to Munich. The reconstructed genome enabled the detection of 30 unique substitutions as well as structural differences that have not been previously described. We report indels affecting a lacI family transcription regulator gene as well as nonsynonymous substitutions in the nrdE, fadJ, and pcp genes that have been suggested as plague virulence determinants or have been shown to be upregulated in different models of plague infection. In addition, we identify 19 false positive substitutions in a previously published lower-coverage Y. pestis genome from another archaeological site of the same time period and geographical region that is otherwise genetically identical to the high-coverage genome sequence reported here, suggesting low genetic diversity of the plague during the sixth century in rural southern Germany. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  7. A GIS-Based Evaluation of the Effectiveness and Spatial Coverage of Public Transport Networks in Tourist Destinations

    Directory of Open Access Journals (Sweden)

    Antoni Domènech

    2017-03-01

    Full Text Available This article develops a methodology for evaluating the effectiveness and spatial coverage of public transport in tourist cities. The proposed methodology is applied and validated in Cambrils municipality, in the central part of the Costa Daurada in Catalonia, a coastal destination characterised by the concentration of tourism flows during summer. The application of GIS spatial analysis tools allows for the development of a system of territorial indicators that spatially correlate the public transport network and the distribution of the population. The main novelty of our work is that this analysis not only includes the registered resident population, but also incorporates the population that temporarily inhabits the municipality (tourists. The results of the study firstly permit the detection of unequal spatial accessibility and coverage in terms of public transport in the municipality, with significant differences between central neighbourhoods and peripheral urban areas of lower population density. Secondly, they allow observation of how the degree of public transport coverage differs significantly in areas with a higher concentration of tourist accommodation establishments.

  8. Seasonal influenza vaccine coverage among high-risk populations in Thailand, 2010-2012.

    Science.gov (United States)

    Owusu, Jocelynn T; Prapasiri, Prabda; Ditsungnoen, Darunee; Leetongin, Grit; Yoocharoen, Pornsak; Rattanayot, Jarowee; Olsen, Sonja J; Muangchana, Charung

    2015-01-29

    The Advisory Committee on Immunization Practice of Thailand prioritizes seasonal influenza vaccinations for populations who are at highest risk for serious complications (pregnant women, children 6 months-2 years, persons ≥65 years, persons with chronic diseases, obese persons), and healthcare personnel and poultry cullers. The Thailand government purchases seasonal influenza vaccine for these groups. We assessed vaccination coverage among high-risk groups in Thailand from 2010 to 2012. National records on persons who received publicly purchased vaccines from 2010 to 2012 were analyzed by high-risk category. Denominator data from multiple sources were compared to calculate coverage. Vaccine coverage was defined as the proportion of individuals in each category who received the vaccine. Vaccine wastage was defined as the proportion of publicly purchased vaccines that were not used. From 2010 to 2012, 8.18 million influenza vaccines were publicly purchased (range, 2.37-3.29 million doses/year), and vaccine purchases increased 39% over these years. Vaccine wastage was 9.5%. Approximately 5.7 million (77%) vaccine doses were administered to persons ≥65 years and persons with chronic diseases, 1.4 million (19%) to healthcare personnel/poultry cullers, 82,570 (1.1%) to children 6 months-2 years, 78,885 (1.1%) to obese persons, 26,481 (0.4%) to mentally disabled persons, and 17,787 (0.2%) to pregnant women. Between 2010 and 2012, coverage increased among persons with chronic diseases (8.6% versus 14%; p ...) ... Thailand. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Geographical accessibility and spatial coverage modeling of the primary health care network in the Western Province of Rwanda

    Directory of Open Access Journals (Sweden)

    Huerta Munoz Ulises

    2012-09-01

    Full Text Available Abstract Background Primary health care is essential in improving and maintaining the health of populations. It has the potential to accelerate achievement of the Millennium Development Goals and fulfill the “Health for All” doctrine of the Alma-Ata Declaration. Understanding the performance of the health system from a geographic perspective is important for improved health planning and evidence-based policy development. The aims of this study were to measure geographical accessibility, model spatial coverage of the existing primary health facility network, estimate the number of primary health facilities working under capacity and the population underserved in the Western Province of Rwanda. Methods This study uses health facility, population and ancillary data for the Western Province of Rwanda. Three different travel scenarios utilized by the population to attend the nearest primary health facility were defined with a maximum travelling time of 60 minutes: Scenario 1 – walking; Scenario 2 – walking and cycling; and Scenario 3 – walking and public transportation. Considering these scenarios, a raster surface of travel time between primary health facilities and population was developed. To model spatial coverage and estimate the number of primary health facilities working under capacity, the catchment area of each facility was calculated by taking into account population coverage capacity, the population distribution, the terrain topography and the travelling modes through the different land categories. Results Scenario 2 (walking and cycling has the highest degree of geographical accessibility followed by Scenario 3 (walking and public transportation. The lowest level of accessibility can be observed in Scenario 1 (walking. The total population covered differs depending on the type of travel scenario. The existing primary health facility network covers only 26.6% of the population in Scenario 1. In Scenario 2, the use of a bicycle
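
    The catchment and coverage computation can be caricatured in a few lines: estimate, for each populated grid cell, the travel time to the nearest facility under a given travel scenario and report the share of population within 60 minutes. The sketch below does this with straight-line distances and assumed speeds for a walking and a walking-plus-cycling scenario; the study itself uses cost-distance analysis over real terrain, land cover and facility capacity, so this is an illustration of the idea only.

      # Simplified sketch of the coverage idea: share of population cells whose
      # straight-line travel time to the nearest facility is within 60 minutes,
      # compared for two travel scenarios. Grid, speeds, facility locations and
      # populations are invented placeholders.
      import math

      CELL_KM = 1.0
      facilities = [(3, 4), (12, 9)]                 # grid coordinates of health facilities
      population = {(x, y): 100 for x in range(16) for y in range(12)}  # flat pop. surface

      def coverage(speed_kmh, max_minutes=60.0):
          reachable_pop = 0
          for (x, y), pop in population.items():
              d_km = min(math.hypot(x - fx, y - fy) for fx, fy in facilities) * CELL_KM
              if d_km / speed_kmh * 60.0 <= max_minutes:
                  reachable_pop += pop
          return reachable_pop / sum(population.values())

      for name, speed in (("walking (5 km/h)", 5.0), ("walking + cycling (12 km/h)", 12.0)):
          print(f"{name:28s}: {coverage(speed):.1%} of population within 60 min")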

  10. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham

    2017-04-07

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the user activity factor (p) and the path-loss compensation factor on the uplink performance are analyzed. The results show that decreasing p and/or increasing the compensation factor reduces the CP variation around the spatially averaged value.
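
    The meta distribution can also be approximated numerically: for each fixed spatial realization, compute the coverage probability averaged over fading only, then look at percentiles of that per-realization CP across many realizations. The Python sketch below does this under strong simplifications (interferers' own serving distances drawn i.i.d. from the nearest-neighbour law, no exclusion zones, Rayleigh fading on every link); the density, path-loss exponent, compensation factor, activity factor and SIR threshold are assumed values, and the code is not a reproduction of the letter's analysis.

      # Monte Carlo sketch of the meta-distribution idea for the uplink with
      # fractional path-loss inversion power control. Simplified approximation,
      # not the letter's analytical result; all parameters are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      lam = 1e-5          # BS density per m^2
      alpha = 4.0         # path-loss exponent
      eps = 0.75          # path-loss compensation factor
      p = 0.5             # user activity factor
      theta = 1.0         # SIR threshold (0 dB)
      R = 3000.0          # interference region radius around the tagged BS (m)

      def conditional_cp():
          """Coverage probability of one link averaged over Rayleigh fading only."""
          r0 = rng.rayleigh(scale=1.0 / np.sqrt(2.0 * np.pi * lam))   # serving distance
          p0 = r0 ** (alpha * eps)                                    # tagged user power
          n = rng.poisson(p * lam * np.pi * R * R)                    # active interferers
          d = np.maximum(R * np.sqrt(rng.random(n)), 1.0)             # distances to tagged BS
          ri = rng.rayleigh(scale=1.0 / np.sqrt(2.0 * np.pi * lam), size=n)
          pi = ri ** (alpha * eps)                                    # interferer powers
          # P(SIR > theta | positions) with exponential fading on every link
          return np.prod(1.0 / (1.0 + theta * pi * d ** (-alpha) * r0 ** alpha / p0))

      cps = np.array([conditional_cp() for _ in range(2000)])
      print("mean CP:", cps.mean().round(3))
      print("5th / 50th / 95th percentile of per-link CP:",
            np.percentile(cps, [5, 50, 95]).round(3))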

  11. 75 FR 33303 - Comment Sought on Measurement of Mobile Broadband Network Performance and Coverage

    Science.gov (United States)

    2010-06-11

    ... on fixed residential and small business Internet broadband services. With that public notice, the...., smartphone, feature phones, laptop, wireless modem) itself have on end-user experienced network performance...

  12. Stretchable GaAs photovoltaics with designs that enable high areal coverage

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jongho; Yoon, Jongseung; Park, Sang-Il [Department of Materials Science and Engineering, Frederick Seitz Materials Research Laboratory, University of Illinois, Urbana-Champaign, IL (United States); Wu, Jian [Department of Civil and Environmental Engineering, Northwestern University, Evanston, IL (United States); Shi, Mingxing; Liu, Zhuangjian [Institute of High Performance Computing, Singapore (Singapore); Li, Ming [Department of Civil and Environmental Engineering, Northwestern University, Evanston, IL (United States); Department of Engineering Mechanics, Dalian University of Technology, Dalian (China); Huang, Yonggang [Departments of Civil and Environmental Engineering and Mechanical Engineering, Northwestern University, Evanston, IL (United States); Rogers, John A. [Department of Materials Science and Engineering, Chemistry, Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana-Champaign, IL (United States)

    2011-02-22

    Strategies are presented for achieving, simultaneously, both large areal coverage and high stretchability by using elastomeric substrates with surface relief in geometries that confine strains at the locations of the interconnections, and away from the devices. The studies involve a combination of theory and experiment to reveal the essential mechanics, and include demonstrations of the ideas in stretchable solar modules that use ultrathin, single junction GaAs solar cells. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  13. Pearls and tips in coverage of the tibia after a high energy trauma

    Directory of Open Access Journals (Sweden)

    Rios-Luna Antonio

    2008-01-01

    Full Text Available Coverage of soft-tissue defects in the lower limbs, especially after open tibial fractures, is now a frequently performed procedure because of the high incidence of high-energy trauma affecting this location. The skilled orthopedic surgeon should be able to carry out integral treatment of these lesions, which includes not only open reduction and internal fixation of the fracture fragments but also the management of complications such as local wound problems that may arise. A wide variety of muscular or pedicled flaps is available for reconstruction of lower limb soft-tissue defects. These techniques are not commonly used by orthopedic surgeons because of a lack of familiarity with them, the potential for flap failure, and problems derived from donor-site morbidity. We present a coverage management update for orthopedic surgeons dealing with complications after an open tibial fracture. We describe the most suitable flap for each injured region together with a reliable surgical procedure. For the proximal third of the tibia, we use a gastrocnemius muscle flap. The middle third of the tibia can be covered with a soleus muscle flap. The distal third of the tibia can be reconstructed with sural flaps, a lateral supramalleolar skin flap, or a posterior tibial perforator flap. Free flaps can be used in all regions. We describe the advantages, disadvantages, pearls, and tips of every flap. Coverage of the tibia after a major injury is a reliable and versatile technique that should form part of the therapeutic arsenal of all orthopedic surgeons, facilitating the integral treatment of complex lower limb injuries with exposed defects.

  14. Cooperative Cloud Service Aware Mobile Internet Coverage Connectivity Guarantee Protocol Based on Sensor Opportunistic Coverage Mechanism

    Directory of Open Access Journals (Sweden)

    Qin Qin

    2015-01-01

    Full Text Available In order to improve the Internet coverage ratio and provide a connectivity guarantee, we propose a coverage connectivity guarantee protocol for the mobile Internet based on a sensor opportunistic coverage mechanism and cooperative cloud services. In this scheme, a highly reliable, real-time secure network coverage algorithm is built on opportunistic covering rules that exploit encounters between sensor nodes and mobile Internet nodes. A cloud service business support platform is then created from the Internet application service management capabilities and the wireless sensor network communication service capabilities, forming the architecture of the cloud support layer, and a cooperative cloud service aware model is proposed on top of it. Finally, we present the mobile Internet coverage connectivity guarantee protocol. Experimental results demonstrate that the proposed algorithm performs well in terms of Internet security, stability, and coverage connectivity.

  15. Temporal Trends in Coverage of Historical Cardiac Arrests Using a Volunteer-Based Network of Automated External Defibrillators Accessible to Laypersons and Emergency Dispatch Centers

    DEFF Research Database (Denmark)

    Hansen, Carolina Malta; Lippert, Freddy Knudsen; Wissenberg, Mads

    2014-01-01

    public cardiac arrest coverage in high- and low-risk areas. METHODS AND RESULTS: All public cardiac arrests (1994-2011) and all registered AEDs (2007-2011) in Copenhagen, Denmark, were identified and geocoded. AED coverage of cardiac arrests was defined as historical arrests ≤100 m from an AED. High...

  16. A Survey on Sensor Coverage and Visual Data Capturing/Processing/Transmission in Wireless Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Florence G. H. Yap

    2014-02-01

    Full Text Available Wireless Visual Sensor Networks (WVSNs), where camera-equipped sensor nodes can capture, process and transmit image/video information, have become an important new research area. Compared to traditional wireless sensor networks (WSNs), which can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, compared to the scalar data in WSNs, visual data are much larger and more complicated, so intelligent schemes are required to capture/process/transmit visual data in resource-limited (hardware capability and bandwidth) WVSNs. WVSNs introduce new multi-disciplinary research opportunities in topics that include visual sensor hardware, image and multimedia capture and processing, and wireless communication and networking. In this paper, we survey existing research efforts on visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still at an early stage and there are still many open issues that have not been fully addressed. More novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs.

  17. A Power Balance Aware Wireless Charger Deployment Method for Complete Coverage in Wireless Rechargeable Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tu-Liang Lin

    2016-08-01

    Full Text Available Traditional sensor nodes are usually battery powered, and the limited battery power constrains the overall lifespan of the sensors. Recently, wireless power transmission technology has been applied in wireless sensor networks (WSNs) to transmit wireless power from chargers to the sensor nodes and solve the limited battery power problem. The combination of wireless sensors and wireless chargers forms a new type of network called a wireless rechargeable sensor network (WRSN). In this research, we focus on how to effectively deploy chargers to maximize the lifespan of a network. In WSNs, the sensor nodes near the sink consume more power than nodes far away from the sink because of frequent data forwarding. This important power imbalance has not, however, been considered in previous charger deployment research. Here, a power balance aware deployment (PBAD) method is proposed to address the power imbalance in WRSNs and to design a charger deployment with maximum charging efficiency. The proposed deployment method is explicitly aware of the sink node, whose presence causes the unbalanced power consumption in WRSNs. The simulation results show that the proposed PBAD algorithm performs better than other deployment methods, and fewer chargers are deployed as a result.
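
    As a rough illustration of the power-balance idea, the sketch below biases candidate charger positions toward nodes that are expected to relay more traffic (those near the sink) by running a weighted k-means over node locations. The load model, node layout and charger count are invented, and weighted k-means is only a stand-in heuristic, not the PBAD algorithm from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy power-balance-aware placement: weight nodes by an assumed relay load that
# grows toward the sink, then place chargers at weighted cluster centres.
rng = np.random.default_rng(3)
nodes = rng.uniform(0, 100, size=(200, 2))       # hypothetical sensor node positions
sink = np.array([50.0, 50.0])                    # hypothetical sink location

# Assumed load model: consumption rises as nodes get closer to the sink
load = 1.0 / (1.0 + np.linalg.norm(nodes - sink, axis=1) / 20.0)

chargers = KMeans(n_clusters=6, n_init=10, random_state=0)
chargers.fit(nodes, sample_weight=load)
print(np.round(chargers.cluster_centers_, 1))    # candidate charger locations
```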

  18. Stochastic Geometric Coverage Analysis in mmWave Cellular Networks with a Realistic Channel Model

    DEFF Research Database (Denmark)

    Rebato, Mattia; Park, Jihong; Popovski, Petar

    2017-01-01

    Millimeter-wave (mmWave) bands have been attracting growing attention as a possible candidate for next-generation cellular networks, since the available spectrum is orders of magnitude larger than in current cellular allocations. To precisely design mmWave systems, it is important to examine mmWa...

  19. Women In The United States Experience High Rates Of Coverage 'Churn' In Months Before And After Childbirth.

    Science.gov (United States)

    Daw, Jamie R; Hatfield, Laura A; Swartz, Katherine; Sommers, Benjamin D

    2017-04-01

    Insurance transitions-sometimes referred to as "churn"-before and after childbirth can adversely affect the continuity and quality of care. Yet little is known about coverage patterns and changes for women giving birth in the United States. Using nationally representative survey data for the period 2005-13, we found high rates of insurance transitions before and after delivery. Half of women who were uninsured nine months before delivery had acquired Medicaid or CHIP coverage by the month of delivery, but 55 percent of women with that coverage at delivery experienced a coverage gap in the ensuing six months. Risk factors associated with insurance loss after delivery include not speaking English at home, being unmarried, having Medicaid or CHIP coverage at delivery, living in the South, and having a family income of 100-185 percent of the poverty level. To minimize the adverse effects of coverage disruptions, states should consider policies that promote the continuity of coverage for childbearing women, particularly those with pregnancy-related Medicaid eligibility. Project HOPE—The People-to-People Health Foundation, Inc.

  20. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the growing scale of data volumes, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
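
    The decompose-then-forecast idea can be sketched with off-the-shelf statsmodels components: STL removes the dominant daily cycle and ARIMA models the remainder. The synthetic series, the daily period and the (2, 1, 2) order below are assumptions made for the sketch; the report's actual model is fit to measured SNMP path utilization data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for hourly SNMP path utilization with a daily cycle;
# the model in the report is fit to real measurements, not to this toy series.
rng = np.random.default_rng(1)
n = 21 * 24                                     # three weeks of hourly samples
daily = 40 + 25 * np.sin(2 * np.pi * np.arange(n) / 24)
util = pd.Series(daily + rng.normal(0, 5, n))

# 1) STL strips the daily seasonality so ARIMA only has to model the remainder
stl = STL(util, period=24, robust=True).fit()
deseasonalized = util - stl.seasonal

# 2) ARIMA on the seasonally adjusted series (order chosen ad hoc for the sketch)
arima = ARIMA(deseasonalized, order=(2, 1, 2)).fit()

# 3) Forecast the next day and add back the last observed seasonal cycle
horizon = 24
forecast = arima.forecast(steps=horizon).to_numpy() + stl.seasonal[-horizon:].to_numpy()
print(np.round(forecast, 1))
```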

  1. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the growing scale of data volumes, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  2. Optimal 2-Coverage of a Polygonal Region in a Sensor Network

    Directory of Open Access Journals (Sweden)

    Inês Matos

    2009-09-01

    Full Text Available Wireless sensor networks are a relatively new area where technology is developing fast and are used to solve a great diversity of problems that range from museums’ security to wildlife protection. The geometric optimisation problem solved in this paper is aimed at minimising the sensors’ range so that every point on a polygonal region R is within the range of at least two sensors. Moreover, it is also shown how to minimise the sensors’ range to assure the existence of a path within R that stays as close to two sensors as possible.
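
    The optimisation target in this record, the smallest common range such that every point of the region is covered by at least two sensors, can be approximated numerically: sample the region densely and take the largest distance from any sample point to its second-nearest sensor. The sketch below does exactly that for a square region with a random sensor layout (both assumptions of the sketch; the paper treats general polygons with an exact geometric algorithm).

```python
import numpy as np
from scipy.spatial import cKDTree

# Grid-sampling approximation of the minimum common sensing range r such that
# every point of a rectangular region is within range of at least two sensors.
rng = np.random.default_rng(2)
sensors = rng.uniform(0, 100, size=(30, 2))           # hypothetical sensor layout

xs, ys = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
samples = np.c_[xs.ravel(), ys.ravel()]               # dense sample of the region

# Distance from every sample point to its 2nd-nearest sensor; the largest of
# these distances is (approximately) the minimum range achieving 2-coverage.
d2 = cKDTree(sensors).query(samples, k=2)[0][:, 1]
print(f"approximate minimum 2-coverage range: {d2.max():.2f}")
```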

  3. Climate Feedback: a worldwide network of scientists collaborating to peer-review the media and foster more accurate climate coverage

    Science.gov (United States)

    Vincent, E. M.

    2016-12-01

    The public remains largely unaware of the pervasive impacts of climate change, and this has commonly been attributed to the often inaccurate or misleading reporting of climate issues by mainstream media. Given the large influence of the media, using scientists' outreach time to improve the accuracy of climate news is a high-leverage way to support science-based policies on climate change. Climate Feedback is a worldwide network of scientists who work with journalists and editors to improve the accuracy of climate reporting. When breaking climate news is published, Climate Feedback invites scientists to collectively review the scientific credibility of the story using a method based on critical-thinking theory that measures its accuracy, reasoning and objectivity. The use of web annotation allows scientists with complementary expertise to collectively review the article and allows readers and authors to see precisely where and why the coverage is -or is not- based on science. Building on these reviews, we highlight best practices to help journalists and editors create more accurate content and share pedagogical resources to help readers identify claims that are consistent with current scientific knowledge and find the most reliable sources of information. In this talk, we will present the results obtained so far, which include 1) identifying the most common pitfalls scientists have reported in climate coverage and 2) identifying the first trends and impacts of our actions. Beyond the publication of simply inaccurate information, we identified more subtle issues such as misrepresenting sources (either scientists or studies), lack of context or understanding of scientific concepts, logical flaws, and over-hyping or exaggerating results. Our results increasingly highlight that certain news sources (outlets, journalists, editors) are generally more trustworthy than others, and we will show how some news outlets now take

  4. Tuberculin reactivity in a population of schoolchildren with high BCG vaccination coverage

    Directory of Open Access Journals (Sweden)

    Bierrenbach Ana L.

    2003-01-01

    Full Text Available OBJECTIVE: To investigate the influence of BCG vaccination or revaccination on tuberculin skin test reactivity, in order to guide the correct interpretation of this test in a setting of high neonatal BCG vaccination coverage and increasing BCG revaccination coverage at school age. METHODS: We conducted tuberculin skin testing and BCG scar reading in 1148 children aged 7-14 years in the city of Salvador, Bahia, Brazil. We measured the positive effect of the presence of one or two BCG scars on the proportion of tuberculin skin test results above different cut-off levels (induration sizes of > 5 mm, > 10 mm, and > 15 mm) and also using several ranges of induration size (0, 1-4, 5-9, 10-14, and > 15 mm). We also measured the effects that age, gender, and the school where the child was enrolled had on these proportions. RESULTS: The proportion of tuberculin results > 10 mm was 14.2% (95% confidence interval [CI] = 8.0%-20.3%) for children with no BCG scar, 21.3% (95% CI = 18.5%-24.1%) for children with one BCG scar, and 45.0% (95% CI = 32.0%-58.0%) for children with two BCG scars. There was evidence for an increasing positive effect of the presence of one and two BCG scars on the proportion of results > 5 mm and > 10 mm. Similarly, there was evidence for an increasing positive effect of the presence of one and two scars on the proportion of tuberculin skin test results in the ranges of 5-9 mm and of 10-14 mm. The BCG scar effect on the proportion of results > 5 mm and > 10 mm did not vary with age. There was no evidence for a BCG effect on results > 15 mm. CONCLUSIONS: In Brazilian schoolchildren, BCG-induced tuberculin reactivity is indistinguishable, for results under 15 mm, from reactivity induced by Mycobacterium tuberculosis infection. BCG revaccination at school age increases the degree of BCG-induced tuberculin reactivity found among schoolchildren. This information should be taken into account in tuberculin skin test surveys intended to
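
    The percentages with 95% confidence intervals quoted above are standard proportion estimates. A minimal sketch of how such an interval is computed is shown below; the counts are hypothetical, since the abstract reports only percentages, not raw denominators.

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Normal-approximation 95% CI for a proportion, of the kind quoted in the
    'proportion of reactions above a cut-off' figures above."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts (illustrative only)
p, lo, hi = proportion_ci(successes=27, n=60)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```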

  5. SWATHtoMRM: Development of High-Coverage Targeted Metabolomics Method Using SWATH Technology for Biomarker Discovery.

    Science.gov (United States)

    Zha, Haihong; Cai, Yuping; Yin, Yandong; Wang, Zhuozhong; Li, Kang; Zhu, Zheng-Jiang

    2018-03-20

    The complexity of the metabolome presents a great analytical challenge for quantitative metabolite profiling and restricts the application of metabolomics in biomarker discovery. Targeted metabolomics using the multiple-reaction monitoring (MRM) technique has excellent capability for quantitative analysis, but suffers from limited metabolite coverage. To address this challenge, we developed a new strategy, namely SWATHtoMRM, which utilizes the broad coverage of SWATH-MS technology to develop a high-coverage targeted metabolomics method. Specifically, the SWATH-MS technique was first utilized to profile one pooled biological sample in an untargeted manner and to acquire the MS2 spectra for all metabolites. Then, SWATHtoMRM was used to extract the large-scale MRM transitions for targeted analysis, with coverage as high as 1000-2000 metabolites. We then demonstrated the advantages of the SWATHtoMRM method in quantitative analysis, such as coverage, reproducibility, sensitivity, and dynamic range. Finally, we applied our SWATHtoMRM approach to discover potential metabolite biomarkers for colorectal cancer (CRC) diagnosis. A high-coverage targeted metabolomics method with 1303 metabolites in one injection was developed to profile colorectal cancer tissues from CRC patients. A total of 20 potential metabolite biomarkers were discovered and validated for CRC diagnosis. In plasma samples from CRC patients, 17 out of 20 potential biomarkers were further validated to be associated with tumor resection, which may have great potential in assessing the prognosis of CRC patients after tumor resection. Together, the SWATHtoMRM strategy provides a new way to develop high-coverage targeted metabolomics methods and facilitates the application of targeted metabolomics in disease biomarker discovery. The SWATHtoMRM program is freely available on the Internet (http://www.zhulab.cn/software.php).

  6. Somatosensory neuron types identified by high-coverage single-cell RNA-sequencing and functional heterogeneity

    Science.gov (United States)

    Li, Chang-Lin; Li, Kai-Cheng; Wu, Dan; Chen, Yan; Luo, Hao; Zhao, Jing-Rong; Wang, Sa-Shuang; Sun, Ming-Ming; Lu, Ying-Jin; Zhong, Yan-Qing; Hu, Xu-Ye; Hou, Rui; Zhou, Bei-Bei; Bao, Lan; Xiao, Hua-Sheng; Zhang, Xu

    2016-01-01

    Sensory neurons are distinguished by distinct signaling networks and receptive characteristics. Thus, sensory neuron types can be defined by linking transcriptome-based neuron typing with the sensory phenotypes. Here we classify somatosensory neurons of the mouse dorsal root ganglion (DRG) by high-coverage single-cell RNA-sequencing (10 950 ± 1 218 genes per neuron) and neuron size-based hierarchical clustering. Moreover, single DRG neurons responding to cutaneous stimuli are recorded using an in vivo whole-cell patch clamp technique and classified by neuron-type genetic markers. Small diameter DRG neurons are classified into one type of low-threshold mechanoreceptor and five types of mechanoheat nociceptors (MHNs). Each of the MHN types is further categorized into two subtypes. Large DRG neurons are categorized into four types, including neurexophilin 1-expressing MHNs and mechanical nociceptors (MNs) expressing BAI1-associated protein 2-like 1 (Baiap2l1). Mechanoreceptors expressing trafficking protein particle complex 3-like and Baiap2l1-marked MNs are subdivided into two subtypes each. These results provide a new system for cataloging somatosensory neurons and their transcriptome databases. PMID:26691752

  7. A high-coverage Neandertal genome from Vindija Cave in Croatia.

    Science.gov (United States)

    Prüfer, Kay; de Filippo, Cesare; Grote, Steffi; Mafessoni, Fabrizio; Korlević, Petra; Hajdinjak, Mateja; Vernot, Benjamin; Skov, Laurits; Hsieh, Pinghsun; Peyrégne, Stéphane; Reher, David; Hopfe, Charlotte; Nagel, Sarah; Maricic, Tomislav; Fu, Qiaomei; Theunert, Christoph; Rogers, Rebekah; Skoglund, Pontus; Chintalapati, Manjusha; Dannemann, Michael; Nelson, Bradley J; Key, Felix M; Rudan, Pavao; Kućan, Željko; Gušić, Ivan; Golovanova, Liubov V; Doronichev, Vladimir B; Patterson, Nick; Reich, David; Eichler, Evan E; Slatkin, Montgomery; Schierup, Mikkel H; Andrés, Aida M; Kelso, Janet; Meyer, Matthias; Pääbo, Svante

    2017-11-03

    To date, the only Neandertal genome that has been sequenced to high quality is from an individual found in Southern Siberia. We sequenced the genome of a female Neandertal from ~50,000 years ago from Vindija Cave, Croatia, to ~30-fold genomic coverage. She carried 1.6 differences per 10,000 base pairs between the two copies of her genome, fewer than present-day humans, suggesting that Neandertal populations were of small size. Our analyses indicate that she was more closely related to the Neandertals that mixed with the ancestors of present-day humans living outside of sub-Saharan Africa than the previously sequenced Neandertal from Siberia, allowing 10 to 20% more Neandertal DNA to be identified in present-day humans, including variants involved in low-density lipoprotein cholesterol concentrations, schizophrenia, and other diseases. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  8. Analysis of transposable elements in the genome of Asparagus officinalis from high coverage sequence data.

    Science.gov (United States)

    Li, Shu-Fen; Gao, Wu-Jun; Zhao, Xin-Peng; Dong, Tian-Yu; Deng, Chuan-Liang; Lu, Long-Dou

    2014-01-01

    Asparagus officinalis is an economically and nutritionally important vegetable crop that is widely cultivated and is used as a model dioecious species to study plant sex determination and sex chromosome evolution. To improve our understanding of its genome composition, especially with respect to transposable elements (TEs), which make up the majority of the genome, we performed Illumina HiSeq2000 sequencing of both male and female asparagus genomes followed by bioinformatics analysis. We generated 17 Gb of sequence (12× coverage) and assembled it into 163,406 scaffolds with a total cumulative length of 400 Mbp, which represents about 30% of the asparagus genome. Overall, TEs masked about 53% of the A. officinalis assembly. The majority of the identified TEs belonged to LTR retrotransposons, which constitute about 28% of genomic DNA, with Ty1/copia elements being more diverse and accumulating to higher copy numbers than Ty3/gypsy. Compared with LTR retrotransposons, non-LTR retrotransposons and DNA transposons were relatively rare. In addition, comparison of the abundance of the TE groups between the male and female genomes showed that the overall TE composition was highly similar, with only slight differences in the abundance of several TE groups, which is consistent with the relatively recent origin of asparagus sex chromosomes. This study greatly improves our knowledge of the repetitive sequence composition of asparagus, which facilitates the identification of TEs responsible for the early evolution of plant sex chromosomes and is helpful for further studies on this dioecious plant.

  9. A high-coverage draft genome of the mycalesine butterfly Bicyclus anynana.

    Science.gov (United States)

    Nowell, Reuben W; Elsworth, Ben; Oostra, Vicencio; Zwaan, Bas J; Wheat, Christopher W; Saastamoinen, Marjo; Saccheri, Ilik J; Van't Hof, Arjen E; Wasik, Bethany R; Connahs, Heidi; Aslam, Muhammad L; Kumar, Sujai; Challis, Richard J; Monteiro, Antónia; Brakefield, Paul M; Blaxter, Mark

    2017-07-01

    The mycalesine butterfly Bicyclus anynana, the "Squinting bush brown," is a model organism in the study of lepidopteran ecology, development, and evolution. Here, we present a draft genome sequence for B. anynana to serve as a genomics resource for current and future studies of this important model species. Seven libraries with insert sizes ranging from 350 bp to 20 kb were constructed using DNA from an inbred female and sequenced using both Illumina and PacBio technology; 128 Gb of raw Illumina data was filtered to 124 Gb and assembled to a final size of 475 Mb (~260× assembly coverage). Contigs were scaffolded using mate-pair, transcriptome, and PacBio data into 10,800 sequences with an N50 of 638 kb (longest scaffold 5 Mb). The genome is comprised of 26% repetitive elements and encodes a total of 22,642 predicted protein-coding genes. Recovery of a BUSCO set of core metazoan genes was almost complete (98%). Overall, these metrics compare well with other recently published lepidopteran genomes. We report a high-quality draft genome sequence for Bicyclus anynana. The genome assembly and annotated gene models are available at LepBase (http://ensembl.lepbase.org/index.html). © The Authors 2017. Published by Oxford University Press.
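
    The N50 metric quoted for the scaffolded assembly has a simple definition: the largest length L such that scaffolds of at least L bases together contain at least half of the assembled bases. A minimal sketch, using made-up scaffold lengths rather than the real B. anynana data:

```python
def n50(lengths: list[int]) -> int:
    """N50: the length L such that scaffolds of length >= L contain at least
    half of the total assembled bases (the 638 kb figure quoted above)."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy scaffold lengths in bp (illustrative only, not the published assembly)
print(n50([5_000_000, 2_000_000, 1_000_000, 638_000, 400_000, 100_000]))
```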

  10. High intensity proton accelerator controls network upgrade

    International Nuclear Information System (INIS)

    Krempaska, R.; Bertrand, A.; Lendzian, F.; Lutz, H.

    2012-01-01

    The High Intensity Proton Accelerator (HIPA) control system network is spread over a vast area at PSI and grew historically in an unorganized way. The heterogeneous network hardware infrastructure and the lack of documentation and component overview could no longer guarantee the reliability of the control system and of facility operation. Therefore, a new network was needed, based on a modern network topology and PSI standard hardware, with monitoring, detailed documentation and a component overview. The number of active components has been reduced from 25 to 9 Cisco Catalyst 24- or 48-port switches. They are the same type as other PSI switches, so an emergency replacement stock is no longer an issue. We present how we successfully achieved this goal and the advantages of the clean and well documented network infrastructure. (authors)

  11. Towards green high capacity optical networks

    Science.gov (United States)

    Glesk, I.; Mohd Warip, M. N.; Idris, S. K.; Osadola, T. B.; Andonovic, I.

    2011-09-01

    The demand for fast, secure, energy-efficient, high-capacity networks is growing. It is fuelled by transmission bandwidth needs which will support, among other things, the rapid penetration of multimedia applications empowering smart consumer electronics and e-businesses. All of the above trigger unparalleled needs for networking solutions which must offer not only high-speed, low-cost, "on demand" mobile connectivity but should also be ecologically friendly and have a low carbon footprint. The first answer to the bandwidth needs was the deployment of fibre optic technologies in transport networks. It then quickly became obvious that electronic bandwidth, inferior to that of optical fibre, would continue to limit the maximum implementable serial data rates. A new solution was found by introducing parallelism into data transport in the form of Wavelength Division Multiplexing (WDM), which has helped dramatically to improve the aggregate throughput of optical networks. However, with these advancements a new bottleneck has emerged at fibre endpoints, where data routers must process the incoming and outgoing traffic. Here, even with massive and power-hungry electronic parallelism, today's routers (still relying upon bandwidth-limiting electronics) do not offer the processing speeds networks demand. In this paper we discuss some novel, unconventional approaches to network scalability that lead to energy savings via advanced optical signal processing. We also investigate energy savings based on advanced network management through node hibernation proposed for optical IP networks. Hibernation reduces the overall network power consumption by forming virtual network reconfigurations through selective grouping of nodes and by segmentation and partitioning of links.

  12. Pertussis epidemic despite high levels of vaccination coverage with acellular pertussis vaccine.

    Science.gov (United States)

    Sala-Farré, Maria-Rosa; Arias-Varela, César; Recasens-Recasens, Assumpta; Simó-Sanahuja, Maria; Muñoz-Almagro, Carmen; Pérez-Jové, Josefa

    2015-01-01

    We describe the pertussis epidemic, based only on confirmed whooping cough cases. We have analyzed data on the diagnosis, epidemiology and vaccine history in order to understand the factors that might explain the trends of the disease. A descriptive study of the confirmed pertussis cases reported during 2011 in the Vallès region (population 1,283,000). Laboratory criteria for confirmed pertussis cases include isolation of Bordetella pertussis from a clinical specimen or detection of B. pertussis by PCR in nasopharyngeal swabs. A total of 421 pertussis confirmed cases were reported, which was the highest incidence reported in the last decade (33 cases/100,000 people/year in 2011). The highest incidence rate was among infants less than 1 year old (448/100,000), followed by children 5-9 years old (154/100,000). Pertussis cases aged 2 months-1 year were 90% vaccinated following the current DTaP schedule for their age group in Catalonia, and cases of 5-9 years were 87% fully vaccinated with 5 doses of DTaP vaccine. There were no deaths, although 8% of cases were hospitalized. Pertussis was more severe in infants, 30% required hospitalization despite having received the vaccine doses corresponding to their age. Children of 5-9 years were most often identified as primary cases in households or school clusters. Despite high levels of vaccination coverage, pertussis circulation cannot be controlled at all. The results question the efficacy of the present immunization programmes. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  13. High-speed quantum networking by ship

    Science.gov (United States)

    Devitt, Simon J.; Greentree, Andrew D.; Stephens, Ashley M.; van Meter, Rodney

    2016-11-01

    Networked entanglement is an essential component for a plethora of quantum computation and communication protocols. Direct transmission of quantum signals over long distances is prevented by fibre attenuation and the no-cloning theorem, motivating the development of quantum repeaters, designed to purify entanglement, extending its range. Quantum repeaters have been demonstrated over short distances, but error-corrected, global repeater networks with high bandwidth require new technology. Here we show that error corrected quantum memories installed in cargo containers and carried by ship can provide a flexible connection between local networks, enabling low-latency, high-fidelity quantum communication across global distances at higher bandwidths than previously proposed. With demonstrations of technology with sufficient fidelity to enable topological error-correction, implementation of the quantum memories is within reach, and bandwidth increases with improvements in fabrication. Our approach to quantum networking avoids technological restrictions of repeater deployment, providing an alternate path to a worldwide Quantum Internet.

  14. High Tempo Knowledge Collaboration in Wikipedia's Coverage of Breaking News Events

    Science.gov (United States)

    Keegan, Brian C.

    2012-01-01

    When major news breaks in our hyper-connected society, we increasingly turn to an encyclopedia for the latest information. Wikipedia's coverage of breaking news events attracts unique levels of attention; the articles with the most page views, edits, and contributors in any given month since 2003 are related to current events. Extant…

  15. Prolongation of SMAP to Spatiotemporally Seamless Coverage of Continental U.S. Using a Deep Learning Neural Network

    Science.gov (United States)

    Fang, Kuai; Shen, Chaopeng; Kifer, Daniel; Yang, Xiao

    2017-11-01

    The Soil Moisture Active Passive (SMAP) mission has delivered valuable sensing of surface soil moisture since 2015. However, it has a short time span and irregular revisit schedules. Utilizing a state-of-the-art time series deep learning neural network, Long Short-Term Memory (LSTM), we created a system that predicts the SMAP level-3 moisture product with atmospheric forcings, model-simulated moisture, and static physiographic attributes as inputs. The system removes most of the bias relative to model simulations and improves the predicted moisture climatology, achieving small test root-mean-square errors and high correlation coefficients (above 0.87) for over 75% of the Continental United States, including the forested southeast. As the first application of LSTM in hydrology, we show that the proposed network avoids overfitting and is robust in both temporal and spatial extrapolation tests. LSTM generalizes well across regions with distinct climates and environmental settings. With high fidelity to SMAP, LSTM shows great potential for hindcasting, data assimilation, and weather forecasting.
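
    The modelling setup described above, a recurrent network that maps a window of forcings and static attributes to a soil-moisture value, can be sketched in a few lines of PyTorch. Everything below (tensor shapes, hidden size, training loop, synthetic data) is an assumption made for illustration; it is not the authors' code or configuration.

```python
import torch
import torch.nn as nn

# Minimal stand-in for the paper's setup: predict a soil-moisture value from a
# window of daily forcings plus static attributes. Dimensions and data are made up.
torch.manual_seed(0)
n_sites, seq_len, n_feat = 256, 30, 8          # sites, days of forcing, inputs/day
x = torch.randn(n_sites, seq_len, n_feat)      # forcings + attributes (synthetic)
y = torch.rand(n_sites, 1) * 0.5               # SMAP-like soil moisture targets

class LSTMRegressor(nn.Module):
    def __init__(self, n_feat: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)                  # (batch, seq, hidden)
        return self.head(out[:, -1, :])        # regress from the last time step

model = LSTMRegressor(n_feat)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):                        # tiny training loop on toy data
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"RMSE on toy data after training: {loss.sqrt().item():.3f}")
```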

  16. Advanced Functionalities for Highly Reliable Optical Networks

    DEFF Research Database (Denmark)

    An, Yi

    This thesis covers two research topics concerning optical solutions for networks e.g. avionic systems. One is to identify the applications for silicon photonic devices for cost-effective solutions in short-range optical networks. The other one is to realise advanced functionalities in order...... to increase the availability of highly reliable optical networks. A cost-effective transmitter based on a directly modulated laser (DML) using a silicon micro-ring resonator (MRR) to enhance its modulation speed is proposed, analysed and experimentally demonstrated. A modulation speed enhancement from 10 Gbit...... interconnects and network-on-chips. A novel concept of all-optical protection switching scheme is proposed, where fault detection and protection trigger are all implemented in the optical domain. This scheme can provide ultra-fast establishment of the protection path resulting in a minimum loss of data...

  17. HIGH: A Hexagon-based Intelligent Grouping Approach in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    FAN, C.-S.

    2016-02-01

    Full Text Available In a random deployment or uniform deployment strategy, sensor nodes are scattered randomly or uniformly in the sensing field, respectively. Hence, the coverage ratio cannot be guaranteed. The coverage ratio of uniform deployment, in general, is larger than that of the random deployment strategy. However, a random deployment or uniform deployment strategy may cause an unbalanced traffic pattern in wireless sensor networks (WSNs). Therefore, cluster heads (CHs) around the sink have larger loads than those farther away from the sink. That is, CHs close to the sink exhaust their energy earlier. In order to overcome the above problem, we propose a Hexagon-based Intelligent Grouping approacH in WSNs (called HIGH). The coverage, energy consumption and data routing issues are well investigated and taken into consideration in the proposed HIGH scheme. The simulation results validate our theoretical analysis and show that the proposed HIGH scheme achieves a satisfactory coverage ratio, balances the energy consumption among sensor nodes, and extends network lifetime significantly.
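
    The geometric backbone of a hexagon-based layout is easy to reproduce: with cell circumradius R, hexagon centres placed in columns 1.5*R apart and rows sqrt(3)*R apart (odd columns offset by half a row) tile the field, and disks of radius R around those centres cover it. The helper below generates such centres for a hypothetical field; the HIGH scheme's cluster-head election and routing logic are not shown.

```python
import numpy as np

# Hexagonal cell centres for a sensing field: with cell "radius" R (centre to
# vertex), full coverage by disks of radius R needs centres spaced 1.5*R apart
# horizontally and sqrt(3)*R vertically, with every other column offset.
def hex_centers(width: float, height: float, R: float) -> np.ndarray:
    dx, dy = 1.5 * R, np.sqrt(3.0) * R
    centers = []
    col, x = 0, 0.0
    while x <= width + R:
        y = 0.0 if col % 2 == 0 else dy / 2.0
        while y <= height + R:
            centers.append((x, y))
            y += dy
        x += dx
        col += 1
    return np.array(centers)

print(len(hex_centers(100.0, 100.0, R=10.0)), "hexagon centres for a 100x100 field")
```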

  18. High Temporal and Spatial Resolution Coverage of Earth from Commercial AVSTAR Systems in Geostationary Orbit

    Science.gov (United States)

    Lecompte, M. A.; Heaps, J. F.; Williams, F. H.

    Imaging the Earth from Geostationary Earth Orbit (GEO) allows frequent updates of environmental conditions within an observable hemisphere at time and spatial scales appropriate to the most transient observable terrestrial phenomena. Coverage provided by current GEO Meteorological Satellites (METSATS) fails to fully exploit this advantage, due primarily to obsolescent technology and also to institutional inertia. With the full benefit of GEO-based imaging unrealized, rapidly evolving phenomena occurring at the smallest spatial and temporal scales, which frequently have significant environmental impact, remain unobserved. These phenomena may be precursors of the most destructive natural processes that adversely affect society. Timely distribution of information derived from "real-time" observations may thus provide opportunities to mitigate much of the damage to life and property that would otherwise occur. AstroVision International's AVStar Earth monitoring system is designed to overcome the current limitations of GEO Earth coverage and to provide real-time monitoring of changes to the Earth's complete atmospheric, land and marine surface environments, including fires, volcanic events, lightning and meteoritic events, on a "live," true color, and multispectral basis. The understanding of severe storm dynamics and its coupling to the Earth's electrosphere will be greatly enhanced by observations at unprecedented sampling frequencies and spatial resolution. Better understanding of these natural phenomena and AVStar operational real-time coverage may also benefit society through improvements in severe weather prediction and warning. AstroVision's AVStar system, designed to provide this capability with the first of a constellation of GEO-based commercial environmental monitoring satellites to be launched in late 2003, will be discussed, including spatial and temporal resolution, spectral coverage with applications, and an inventory of the potential benefits to society

  19. Dynamics of High-Resolution Networks

    DEFF Research Database (Denmark)

    Sekara, Vedran

    the unprecedented amounts of information collected by mobile phones to gain detailed insight into the dynamics of social systems. This dissertation presents an unparalleled data collection campaign, collecting highly detailed traces for approximately 1000 people over the course of multiple years. The availability...... are we all affected by an ever changing network structure? Answering these questions will enrich our understanding of ourselves, our organizations, and our societies. Yet, mapping the dynamics of social networks has traditionally been an arduous undertaking. Today, however, it is possible to use...... of such dynamic maps allows us to probe the underlying social network and understand how individuals interact and form lasting friendships. More importantly, these highly detailed dynamic maps provide us new perspectives at traditional problems and allow us to quantify and predict human life....

  20. OTDM Networking for Short Range High-Capacity Highly Dynamic Networks

    DEFF Research Database (Denmark)

    Medhin, Ashenafi Kiros

    This PhD thesis aims at investigating the possibility of designing energy-efficient high-capacity (up to Tbit/s) optical network scenarios, leveraging on the effect of collective switching of many bits simultaneously, as is inherent in high bit rate serial optical data signals. The focus...... is on short range highly dynamic networks, catering to data center needs. The investigation concerns optical network scenarios, and experimental implementations of high bit rate serial data packet generation and reception, scalable optical packet labeling, simple optical label extraction and stable ultra...

  1. Malaria in pregnant women in an area with sustained high coverage of insecticide-treated bed nets

    Directory of Open Access Journals (Sweden)

    Mshinda Hassan

    2008-07-01

    Full Text Available Abstract Background Since 2000, the World Health Organization has recommended a package of interventions to prevent malaria during pregnancy and its sequelae that includes the promotion of insecticide-treated bed nets (ITNs), intermittent preventive treatment in pregnancy (IPTp), and effective case management of malarial illness. It is recommended that pregnant women in malaria-endemic areas receive at least two doses of sulphadoxine-pyrimethamine in the second and third trimesters of pregnancy. This study assessed the prevalence of placental malaria at delivery in women during their 1st or 2nd pregnancy who did not receive intermittent preventive treatment for malaria (IPTp) in a malaria-endemic area with high bed net coverage. Methods A hospital-based cross-sectional study was done in Ifakara, Tanzania, where bed net coverage is high. Primi- and secundigravid women who presented to the labour ward and who reported not using IPTp were included in the study. Self-report data were collected by questionnaire, whereas neonatal birth weight and placental parasitaemia were measured directly at the time of delivery. Results Overall, 413 pregnant women were enrolled, of whom 91% reported having slept under a bed net at home the previous night, 43% reported a history of fever and 62% were primigravid. Malaria parasites were detected in 8% of the placenta samples; the geometric mean (95% CI) placental parasite density was 3,457 (1,060–11,271) parasites/μl in primigravid women and 2,178 (881–5,383) parasites/μl in secundigravid women. Fifteen percent of newborns had low birth weight. Conclusion The observed incidence of LBW and prevalence of placental parasitaemia at delivery suggest that malaria remains a problem in pregnancy in this area with high bed net coverage when eligible women do not receive IPTp. Delivery of IPTp should be emphasized at all levels of implementation to achieve maximum community coverage.

  2. Status of (US) High Energy Physics Networking

    International Nuclear Information System (INIS)

    Montgomery, H.E.

    1987-02-01

    The current status of Networking to and between computers used by the High Energy Physics community is discussed. Particular attention is given to developments over the last year and to future prospects. Comparison between the current status and that of two years ago indicates that considerable strides have been made but that much remains to be done to achieve an acceptable level of functionality

  3. Resume: networking in high energy physics

    International Nuclear Information System (INIS)

    Hutton, J.S.

    1985-11-01

    Networking in High Energy Physics covers communications inside the experiment and internationally. Inside the experiment the need for agreed 'codes of practice' is now accepted. Within Europe it is accepted that a common infrastructure based on the use of the ISO OSI protocols should be used. In the USA a community initiative has been proposed. The background to these approaches is discussed. (author)

  4. IdentiCS – Identification of coding sequence and in silico reconstruction of the metabolic network directly from unannotated low-coverage bacterial genome sequence

    Directory of Open Access Journals (Sweden)

    Zeng An-Ping

    2004-08-01

    Full Text Available Abstract Background A necessary step for a genome level analysis of the cellular metabolism is the in silico reconstruction of the metabolic network from genome sequences. The available methods are mainly based on the annotation of genome sequences, including two successive steps: the prediction of coding sequences (CDSs) and their function assignment. The annotation process takes time. The available methods often encounter difficulties when dealing with unfinished, error-containing genomic sequences. Results In this work a fast method is proposed to use unannotated genome sequence for predicting CDSs and for an in silico reconstruction of metabolic networks. Instead of using predicted genes or CDSs to query public databases, entries from public DNA or protein databases are used as queries to search a local database of the unannotated genome sequence to predict CDSs. Functions are assigned to the predicted CDSs simultaneously. The well-annotated genome of Salmonella typhimurium LT2 is used as an example to demonstrate the applicability of the method. 97.7% of the CDSs in the original annotation are correctly identified. The use of the SWISS-PROT-TrEMBL databases resulted in an identification of 98.9% of CDSs that have EC-numbers in the published annotation. Furthermore, two versions of sequences of the bacterium Klebsiella pneumoniae with different genome coverage (3.9- and 7.9-fold, respectively) are examined. The results suggest that a 3.9-fold coverage of the bacterial genome could be sufficient for the in silico reconstruction of the metabolic network. Compared to other gene finding methods such as CRITICA, our method is more suitable for exploiting sequences of low genome coverage. Based on the new method, a program called IdentiCS (Identification of Coding Sequences from Unfinished Genome Sequences) is delivered that combines the identification of CDSs with the reconstruction, comparison and visualization of metabolic networks (free to download

  5. Scanning Tunneling Microscopic Observation of Adatom-Mediated Motifs on Gold-Thiol Self-assembled Monolayers at High Coverage

    DEFF Research Database (Denmark)

    Wang, Yun; Chi, Qijin; Hush, Noel S.

    2009-01-01

    the structural motifs observed on surfaces at low coverage and on gold nanoparticles to the observed spectroscopic properties of high-coverage SAMs formed by methanethiol. However, the significant role attributed to intermolecular steric packing effects suggests a lack of generality for the adatom-mediated motif......Self-assembled monolayers (SAMs) formed by chemisorption of a branched-chain alkanethiol, 2-methyl-1-propanethiol, on Au(111) surfaces were studied by in situ scanning tunneling microscopy (STM) under electrochemical potential control and analyzed using extensive density functional theory (DFT...... two R−S−Au−S−R adatom-mediated motifs per surface cell, with steric-induced variations in the adsorbate alignment inducing the observed STM image contrasts. Observed pits covering 5.6 ± 0.5% of the SAM surface are consistent with this structure. These results provide the missing link from...

  6. IODP Expedition 360: Analyzing the Media Coverage of a High Profile Research Project

    Science.gov (United States)

    Kavanagh, L.; Martinez, A. O.; Burgio, M.; Zhang, J.; Expedition 360 Scientists, I.

    2016-12-01

    During Expedition 360 of the International Ocean Discovery Program (IODP), the JOIDES Resolution drilled 789 meters of lower crustal gabbro in the Southwest Indian Ocean. This hole began a multi-expedition project with the goal of one day drilling across the crust-mantle boundary for the first time. This simplified narrative of the research objectives struck a chord with media and the project received worldwide coverage in the form of over 50 stories with a total audience in the millions. This expedition is presented as a case study in science communication. A four-member education and outreach team onboard the ship acted as the point of contact for interested reporters. Major outlets that ran stories include the Australian Broadcasting Corporation, British Broadcasting Corporation, Boston Globe, Daily Express, Fox News, Nature, Smithsonian, and Chinese based Xinhua News Agency who sailed a reporter on the ship for the duration of the expedition. The majority of stories published provided accurate and favourable coverage of the project; however, a few contained critical errors and cast the expedition in a less positive light. Public reaction varied greatly depending on the article. Positive themes include interest in the scientific outcomes and encouragement of human exploration. Negative themes include the project being an inefficient use of money and a perceived risk of the drilling triggering an earthquake or volcano. Through a review of published articles and online comments, the successes and challenges faced by Expedition 360 are identified. Despite minimal preparation for media relations, the team successfully maintained a public profile while working in one of the most remote locations on Earth. Interviews were facilitated and videos, articles, and podcasts were produced onboard the ship. A simple, catchy narrative resulted in a large volume of coverage; however, this simplicity also formed the root of a number of misconceptions and issues of public concern.

  7. Network Based High Speed Product Innovation

    DEFF Research Database (Denmark)

    Lindgren, Peter

    In the first decade of the 21st century, New Product Development has undergone major changes in the way NPD is managed and organised. This is due to changes in technology, market demands, and in the competencies of companies. As a result NPD organised in different forms of networks is predicted...... to be of ever-increasing importance to many different kinds of companies. This happens at the same times as the share of new products of total turnover and earnings is increasing at unprecedented speed in many firms and industries. The latter results in the need for very fast innovation and product development...... - a need that can almost only be resolved by organising NPD in some form of network configuration. The work of Peter Lindgren is on several aspects of network based high speed product innovation and contributes to a descriptive understanding of this phenomenon as well as with normative theory on how NPD...

  8. High entomological inoculation rate of malaria vectors in area of high coverage of interventions in southwest Ethiopia: Implication for residual malaria transmission

    Directory of Open Access Journals (Sweden)

    Misrak Abraham

    2017-05-01

    Finally, there was indoor residual malaria transmission in a village with high bed net coverage and where the principal malaria vector is susceptible to propoxur and bendiocarb, the insecticides currently in use for indoor residual spraying. The continuing indoor transmission of malaria in such a village implies the need for new tools to supplement the existing interventions and to reduce indoor malaria transmission.

  9. Successful Control of Winter Pyrexias Caused by Equine Herpesvirus Type 1 in Japanese Training Centers by Achieving High Vaccination Coverage

    Science.gov (United States)

    Mae, Naomi; Ode, Hirotaka; Nemoto, Manabu; Tsujimura, Koji; Yamanaka, Takashi; Kondo, Takashi; Matsumura, Tomio

    2014-01-01

    Equine herpesvirus type 1 (EHV-1) is a major cause of winter pyrexia in racehorses in two training centers (Ritto and Miho) in Japan. Until the epizootic period of 2008-2009, a vaccination program using a killed EHV-1 vaccine targeted only susceptible 3-year-old horses with low antibody levels to EHV-1 antigens. However, because the protective effect was not satisfactory, in 2009-2010 the vaccination program was altered to target all 3-year-old horses. To evaluate the vaccine's efficacy, we investigated the number of horses with pyrexia due to EHV-1 or equine herpesvirus type 4 (EHV-4) infection or both and examined the vaccination coverage in the 3-year-old population and in the whole population before and after changes in the program. The mean (± standard deviation [SD]) estimated numbers of horses infected with EHV-1 or EHV-4 or both, among pyretic horses from 1999-2000 to 2008-2009 were 105 ± 47 at Ritto and 66 ± 44 at Miho. Although the estimated number of infected horses did not change greatly in the first period of the current program, it decreased from the second period, with means (±SD) of 21 ± 12 at Ritto and 14 ± 15 at Miho from 2010-2011 to 2012-2013. Vaccination coverage in the 3-year-old population was 99.4% at Ritto and 99.8% at Miho in the first period, and similar values were maintained thereafter. Coverage in the whole population increased more gradually than that in the 3-year-old population. The results suggest that EHV-1 epizootics can be suppressed by maintaining high vaccination coverage, not only in the 3-year-old population but also in the whole population. PMID:24872513

  10. Networking for High Energy and Nuclear Physics

    Science.gov (United States)

    Newman, Harvey B.

    2007-07-01

    This report gives an overview of the status and outlook for the world's research networks and major international links used by the high energy physics and other scientific communities, network technology advances on which our community depends and in which we have an increasingly important role, and the problem of the Digital Divide, which is a primary focus of ICFA's Standing Committee on Inter-regional Connectivity (SCIC). Wide area networks of sufficient, and rapidly increasing end-to-end capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field have progressed by a factor of more than 1000 over the past decade, and the outlook is for a similar increase over the next decade. This striking exponential growth trend, outstripping the growth rates in other areas of information technology, has continued in the past year, with many of the major national, continental and transoceanic networks supporting research and education progressing from a 10 Gigabits/sec (Gbps) backbone to multiple 10 Gbps links in their core. This is complemented by the use of point-to-point "light paths" to support the most demanding applications, including high energy physics, in a growing list of cases. As we approach the era of LHC physics, the growing need to access and transport Terabyte-scale and later 10 to 100 Terabyte datasets among more than 100 "Tier1" and "Tier2" centers at universities and laboratories spread throughout the world has brought the key role of networks, and the ongoing need for their development, sharply into focus. Bandwidth itself on an increasing scale is not enough. Realizing the scientific wealth of the LHC and our other major scientific programs depends crucially on our ability to use the bandwidth efficiently and reliably, with reliable high rates of data throughput, and effectively, where many parallel large-scale data

  11. Ffuzz: Towards full system high coverage fuzz testing on binary executables.

    Directory of Open Access Journals (Sweden)

    Bin Zhang

    Full Text Available Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas of binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug finding tool, Ffuzz, on top of fuzz testing and selective symbolic execution. It targets full system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and to avoid getting stuck in either fuzz testing or symbolic execution. We also propose two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently.
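
    For readers unfamiliar with the fuzz-testing half of this hybrid approach, the sketch below shows a bare-bones mutational fuzz loop against a toy Python parser with a planted bug. It is only a conceptual illustration: Ffuzz itself works on binary executables and couples fuzzing with selective symbolic execution, neither of which is reproduced here.

```python
import random

def toy_parser(data: bytes) -> None:
    """Stand-in target with a planted bug (Ffuzz targets real binaries instead)."""
    if len(data) > 3 and data[0:2] == b"FZ" and data[2] == 0xFF:
        raise ValueError("crash: malformed header")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    buf = bytearray(seed)
    for _ in range(rng.randint(1, 4)):                  # flip a few random bytes
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

rng = random.Random(1234)
seed = b"FZ\x00\x00data"                                # hypothetical seed input
for i in range(100_000):                                # plain mutational fuzz loop
    sample = mutate(seed, rng)
    try:
        toy_parser(sample)
    except ValueError:
        print(f"crash after {i} iterations with input {sample!r}")
        break
```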

  12. Ffuzz: Towards full system high coverage fuzz testing on binary executables.

    Science.gov (United States)

    Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing

    2018-01-01

    Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas of binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug finding tool, Ffuzz, on top of fuzz testing and selective symbolic execution. It targets full system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and to avoid getting stuck in either fuzz testing or symbolic execution. We also propose two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently.

  13. Measles incidence, vaccine efficacy, and mortality in two urban African areas with high vaccination coverage

    DEFF Research Database (Denmark)

    Aaby, Peter; Knudsen, K; Jensen, T G

    1990-01-01

    Measles incidence, vaccine efficacy, and mortality were examined prospectively in two districts in Bissau where vaccine coverage for children aged 12-23 months was 81% (Bandim 1) and 61% (Bandim 2). There was little difference in cumulative measles incidence before 9 months of age (6.1% and 7.6%, respectively). Between 9 months and 2 years of age, however, 6.1% contracted measles in Bandim 1 and 13.7% in Bandim 2. Even adjusting for vaccination status, incidence was significantly higher in Bandim 2 (relative risk 1.6, P = .04). Even though 95% of the children had measles antibodies after vaccination, vaccine efficacy was not more than 68% (95% confidence interval [CI] 39%-84%) and was unrelated to age at vaccination. Unvaccinated children had a mortality hazard ratio of 3.0 compared with vaccinated children (P = .002), indicating a protective efficacy against death of 66% (CI 32%-83%) of measles...
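
    As a quick consistency check on the reported figures, the protective efficacy against death implied by a mortality hazard ratio can be computed as 1 − 1/HR; the snippet below reproduces the ~66% value from the hazard ratio of 3.0.

```python
# Protective efficacy implied by a mortality hazard ratio (HR) of 3.0 for
# unvaccinated versus vaccinated children: efficacy = 1 - 1/HR.
hazard_ratio = 3.0
efficacy = 1 - 1 / hazard_ratio
print(f"Implied protective efficacy against death: {efficacy:.0%}")  # ~67%, in line with the reported 66%
```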

  14. Percent Coverage

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Percent Coverage is a spreadsheet that keeps track of and compares the number of vessels that have departed with and without observers to the numbers of vessels...

  15. Variation in adult vaccination policies across Europe: an overview from VENICE network on vaccine recommendations, funding and coverage.

    Science.gov (United States)

    Kanitz, Elisabeth E; Wu, Lauren A; Giambi, Cristina; Strikas, Raymond A; Levy-Bruhl, Daniel; Stefanoff, Pawel; Mereckiene, Jolita; Appelgren, Eva; D'Ancona, Fortunato

    2012-07-27

    In 2010-2011, in the framework of the VENICE project, we surveyed European Union (EU) and European Economic Area (EEA) countries to fill the information gap regarding vaccination policies in adults. This project was carried out in collaboration with the United States National Vaccine Program Office, which conducted a similar survey in all developed countries. VENICE representatives of all 29 EU/EEA countries received an online questionnaire covering the vaccination schedule, recommendations, funding and coverage in adults for 17 vaccine-preventable diseases. The response rate was 100%. The definition of the age threshold for adulthood for the purpose of vaccination ranged from 15 to 19 years (median=18 years). EU/EEA countries recommend between 4 and 16 vaccines for adults (median=11 vaccines). Tetanus and diphtheria vaccines are recommended to all adults in 22 and 21 countries, respectively. The other vaccines are mostly recommended to specific risk groups; recommendations for seasonal influenza and hepatitis B exist in all surveyed countries. Six countries have a comprehensive summary document or schedule describing all vaccines recommended for adults. None of the surveyed countries was able to provide coverage estimates for all the recommended adult vaccines. Vaccination policies for adults are not consistent across Europe, including the meaning of "recommended vaccine", which is not comparable among countries. Coverage data for adults should be collected routinely, as is done for childhood vaccination. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Prolongation of SMAP to Spatio-temporally Seamless Coverage of Continental US Using a Deep Learning Neural Network

    Science.gov (United States)

    Fang, K.; Shen, C.; Kifer, D.; Yang, X.

    2017-12-01

    The Soil Moisture Active Passive (SMAP) mission has delivered high-quality and valuable sensing of surface soil moisture since 2015. However, its short time span, coarse resolution, and irregular revisit schedule have limited its use. Utilizing a state-of-the-art deep-in-time neural network, Long Short-Term Memory (LSTM), we created a system that predicts SMAP level-3 soil moisture data using climate forcing, model-simulated moisture, and static physical attributes as inputs. The system removes most of the bias with model simulations and also improves predicted moisture climatology, achieving a testing accuracy of 0.025 to 0.03 in most parts of Continental United States (CONUS). As the first application of LSTM in hydrology, we show that it is more robust than simpler methods in either temporal or spatial extrapolation tests. We also discuss roles of different predictors, the effectiveness of regularization algorithms and impacts of training strategies. With high fidelity to SMAP products, our data can aid various applications including data assimilation, weather forecasting, and soil moisture hindcasting.
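
    As a rough illustration of the kind of model described, the PyTorch sketch below fits an LSTM regressor that maps sequences of forcing variables to a soil-moisture-like target on synthetic data. The layer sizes, inputs and training setup are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Minimal LSTM regressor in the spirit of the described system:
# a sequence of daily forcings/attributes -> surface soil moisture.
class SoilMoistureLSTM(nn.Module):
    def __init__(self, n_inputs: int = 5, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_inputs)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)  # (batch, time)

# Synthetic stand-in data: 32 "pixels", 90 daily steps, 5 forcing variables.
torch.manual_seed(0)
x = torch.randn(32, 90, 5)
y = torch.sigmoid(x[..., 0] * 0.5 + x[..., 1] * 0.2)  # fake soil moisture in [0, 1]

model = SoilMoistureLSTM()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optim.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optim.step()

print(f"final RMSE on synthetic data: {loss.sqrt().item():.3f}")
```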

  17. Silicon-embedded copper nanostructure network for high energy storage

    Science.gov (United States)

    Yu, Tianyue

    2016-03-15

    Provided herein are nanostructure networks having high energy storage, electrochemically active electrode materials including nanostructure networks having high energy storage, as well as electrodes and batteries including the nanostructure networks having high energy storage. According to various implementations, the nanostructure networks have high energy density as well as long cycle life. In some implementations, the nanostructure networks include a conductive network embedded with electrochemically active material. In some implementations, silicon is used as the electrochemically active material. The conductive network may be a metal network such as a copper nanostructure network. Methods of manufacturing the nanostructure networks and electrodes are provided. In some implementations, metal nanostructures can be synthesized in a solution that contains silicon powder to make a composite network structure that contains both. The metal nanostructure growth can nucleate in solution and on silicon nanostructure surfaces.

  18. Silicon-embedded copper nanostructure network for high energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Tianyue

    2018-01-23

    Provided herein are nanostructure networks having high energy storage, electrochemically active electrode materials including nanostructure networks having high energy storage, as well as electrodes and batteries including the nanostructure networks having high energy storage. According to various implementations, the nanostructure networks have high energy density as well as long cycle life. In some implementations, the nanostructure networks include a conductive network embedded with electrochemically active material. In some implementations, silicon is used as the electrochemically active material. The conductive network may be a metal network such as a copper nanostructure network. Methods of manufacturing the nanostructure networks and electrodes are provided. In some implementations, metal nanostructures can be synthesized in a solution that contains silicon powder to make a composite network structure that contains both. The metal nanostructure growth can nucleate in solution and on silicon nanostructure surfaces.

  19. High speed VLSI neural network for high energy physics

    NARCIS (Netherlands)

    Masa, P.; Masa, P.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    A CMOS neural network IC designed for very high speed applications is discussed. The parallel architecture, analog computing and digital weight storage provide unprecedented computing speed combined with ease of use. The circuit classifies up to 70-dimensional vectors within 20

  20. High coverage needle/syringe programs for people who inject drugs in low and middle income countries: a systematic review

    Directory of Open Access Journals (Sweden)

    Des Jarlais Don C

    2013-01-01

    Full Text Available Abstract Background Persons who inject drugs (PWID) are at an elevated risk for human immunodeficiency virus (HIV) and hepatitis C virus (HCV) infection. In many high-income countries, needle and syringe exchange programs (NSP) have been associated with reductions in blood-borne infections. However, we do not have a good understanding of the effectiveness of NSP in low/middle-income and transitional-economy countries. Methods A systematic literature review based on PRISMA guidelines was utilized to collect primary study data on coverage of NSP programs and changes in HIV and HCV infection over time among PWID in low- and middle-income and transitional countries (LMICs). Included studies reported laboratory measures of either HIV or HCV and at least 50% coverage of the local injecting population (through direct use or through secondary exchange). We also included national reports on newly reported HIV cases for countries that had national level data for PWID in conjunction with NSP scale-up and implementation. Results Studies of 11 NSPs with high-coverage from Bangladesh, Brazil, China, Estonia, Iran, Lithuania, Taiwan, Thailand and Vietnam were included in the review. In five studies HIV prevalence decreased (range −3% to −15%) and in three studies HCV prevalence decreased (range −4.2% to −10.2%). In two studies HIV prevalence increased (range +5.6% to +14.8%). HCV incidence remained stable in one study. Of the four national reports of newly reported HIV cases, three reported decreases during NSP expansion, ranging from −30% to −93.3%, while one national report documented an increase in cases (+37.6%). Estimated incidence among new injectors decreased in three studies, with reductions ranging from −11/100 person years at risk to −16/100 person years at risk. Conclusions While not fully consistent, the data generally support the effectiveness of NSP in reducing HIV and HCV infection in low/middle-income and transitional-economy countries. If

  1. Networking for high energy physics in Japan

    International Nuclear Information System (INIS)

    Karita, Yukio; Abe, Fumio; Hirose, Hitoshi; Goto, Hiroyuki; Ogasawara, Ryusuke; Yuasa, Fukuko; Banno, Yoshiaki; Yasu, Yoshiji

    1989-01-01

    The computer network for high energy physics in Japan has grown over the last five or six years and is still expanding. Its original purpose was to provide collaborators in universities with access to the computing resources at KEK. In addition to remote login from terminals, VAXs or Fujitsu computers located in universities have been connected to KEK's computers by DECnet or FNA (Fujitsu's SNA) and have formed the ''Japanese HEPnet''. Since the link between LBL and KEK was established in June 1987, the Japanese HEPnet has been combined with the American HEPnet and is an indispensable tool for international collaboration. The current communication media for the Japanese HEPnet, leased lines and public X.25, are being replaced by Gakujo-net (Monbusho's inter-university private X.25 network). DECnet, FNA, IP and Ethernet-bridge will run on Gakujo-net for the Japanese HEPnet. (orig.)

  2. Media Coverage, Journal Press Releases and Editorials Associated with Randomized and Observational Studies in High-Impact Medical Journals: A Cohort Study.

    Directory of Open Access Journals (Sweden)

    Michael T M Wang

    Full Text Available Publication of clinical research findings in prominent journals influences health beliefs and medical practice, in part by engendering news coverage. Randomized controlled trials (RCTs) should be most influential in guiding clinical practice. We determined whether the study design of clinical research published in high-impact journals influences media coverage. We compared the incidence and amount of media coverage of RCTs with that of observational studies published in the top 7 medical journals between 1 January 2013 and 31 March 2013. We specifically assessed media coverage of the most rigorous RCTs, those with >1000 participants that reported 'hard' outcomes. There was no difference between RCTs and observational studies in coverage by major newspapers or news agencies, or in the total number of news stories generated (all P>0.63). Large RCTs reporting 'hard' outcomes did not generate more news coverage than small RCTs that reported surrogate outcomes and observational studies (all P>0.32). RCTs were more likely than observational studies to attract a journal editorial (70% vs 46%, P = 0.003), but less likely to be the subject of a journal press release (17% vs 50%, P<0.001); the most rigorous RCTs were not more likely to attract news coverage (P>0.99), nor were they more likely to be the subject of a journal press release (14% vs 38%, P = 0.14). The design of clinical studies whose results are published in high-impact medical journals is not associated with the likelihood or amount of ensuing news coverage.

  3. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-hoc Networks.

    Science.gov (United States)

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-04-18

    Using mobile vehicles as "data mules" to collect data generated by a huge number of sensing devices that are widely spread across smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled while the coverage rate is able to reach over 30%.
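
    The following toy simulation illustrates the general idea of combining V2D and V2V transfers: vehicles pick up sensor readings they pass and may relay them to a faster vehicle heading to the sink. All parameters and the relaying rule are illustrative assumptions, not the LCODC algorithm itself.

```python
import random

# Toy illustration of combined V2D + V2V collection on a one-dimensional road:
# vehicles drive toward a sink at position 0, pick up readings from sensors
# they pass (V2D) and may relay them to a faster vehicle (V2V).
random.seed(1)
ROAD_LEN = 100.0
sensors = [random.uniform(0, ROAD_LEN) for _ in range(20)]
vehicles = [{"pos": random.uniform(0, ROAD_LEN), "speed": random.uniform(0.5, 2.0)}
            for _ in range(5)]

latency_v2d, latency_v2v = [], []
for s in sensors:
    # V2D: the first vehicle still upstream of the sensor picks the reading up.
    upstream = [v for v in vehicles if v["pos"] >= s]
    if not upstream:
        continue                                   # sensor not covered in this pass
    collector = min(upstream, key=lambda v: (v["pos"] - s) / v["speed"])
    wait = (collector["pos"] - s) / collector["speed"]
    latency_v2d.append(wait + s / collector["speed"])
    # V2V (simplified): assume the reading can be handed over to the fastest
    # upstream vehicle, so the final leg to the sink runs at that speed.
    fastest = max(upstream, key=lambda v: v["speed"])
    latency_v2v.append(wait + s / fastest["speed"])

print(f"coverage rate: {len(latency_v2d) / len(sensors):.0%}")
print(f"mean latency, V2D only : {sum(latency_v2d) / len(latency_v2d):.1f}")
print(f"mean latency, V2D + V2V: {sum(latency_v2v) / len(latency_v2v):.1f}")
```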

  4. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-hoc Networks

    Directory of Open Access Journals (Sweden)

    Yixuan Xu

    2017-04-01

    Full Text Available Using mobile vehicles as “data mules” to collect data generated by a huge number of sensing devices that are widely spread across smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled while the coverage rate is able to reach over 30%.

  5. A Latency and Coverage Optimized Data Collection Scheme for Smart Cities Based on Vehicular Ad-Hoc Networks

    Science.gov (United States)

    Xu, Yixuan; Chen, Xi; Liu, Anfeng; Hu, Chunhua

    2017-01-01

    Using mobile vehicles as “data mules” to collect data generated by a huge number of sensing devices that are widely spread across smart city is considered to be an economical and effective way of obtaining data about smart cities. However, currently most research focuses on the feasibility of the proposed methods instead of their final performance. In this paper, a latency and coverage optimized data collection (LCODC) scheme is proposed to collect data on smart cities through opportunistic routing. Compared with other schemes, the efficiency of data collection is improved since the data flow in LCODC scheme consists of not only vehicle to device transmission (V2D), but also vehicle to vehicle transmission (V2V). Besides, through data mining on patterns hidden in the smart city, waste and redundancy in the utilization of public resources are mitigated, leading to the easy implementation of our scheme. In detail, no extra supporting device is needed in the LCODC scheme to facilitate data transmission. A large-scale and real-world dataset on Beijing is used to evaluate the LCODC scheme. Results indicate that with very limited costs, the LCODC scheme enables the average latency to decrease from several hours to around 12 min with respect to schemes where V2V transmission is disabled while the coverage rate is able to reach over 30%. PMID:28420218

  6. Heap: a highly sensitive and accurate SNP detection tool for low-coverage high-throughput sequencing data

    KAUST Repository

    Kobayashi, Masaaki

    2017-04-20

    Recent availability of large-scale genomic resources enables us to conduct so-called genome-wide association studies (GWAS) and genomic prediction (GP) studies, particularly with next-generation sequencing (NGS) data. The effectiveness of GWAS and GP depends not only on their mathematical models, but also on the quality and quantity of variants employed in the analysis. In NGS single nucleotide polymorphism (SNP) calling, conventional tools ideally require more reads for higher SNP sensitivity and accuracy. In this study, we aimed to develop a tool, Heap, that enables robustly sensitive and accurate calling of SNPs, particularly with low-coverage NGS data, which must be aligned to the reference genome sequences in advance. To reduce false positive SNPs, Heap determines genotypes and calls SNPs at each site except for sites at both ends of reads or containing a minor allele supported by only one read. Performance comparison with existing tools showed that Heap achieved the highest F-scores with low coverage (7X) restriction-site associated DNA sequencing reads of sorghum and rice individuals. This will facilitate cost-effective GWAS and GP studies in this NGS era. Code and documentation of Heap are freely available from https://github.com/meiji-bioinf/heap (29 March 2017, date last accessed) and our web site (http://bioinf.mind.meiji.ac.jp/lab/en/tools.html (29 March 2017, date last accessed)).
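
    The two filters described, ignoring bases near read ends and distrusting a minor allele supported by a single read, can be illustrated with a small pileup-based caller. This is a simplified sketch under assumed parameters, not Heap's actual implementation.

```python
from collections import Counter

END_TRIM = 2   # ignore bases this close to either read end (assumed value)

def call_site(pileup, ref_base, min_minor_support=2):
    """Toy genotype call at one site from a pileup of (base, offset_in_read, read_length).

    Mirrors the two filters described for Heap: bases at read ends are ignored,
    and a minor allele supported by a single read is not trusted.
    """
    kept = [base for base, offset, read_len in pileup
            if END_TRIM <= offset < read_len - END_TRIM]
    counts = Counter(kept)
    if not counts:
        return None
    (major, n_major), *rest = counts.most_common()
    minor, n_minor = rest[0] if rest else (None, 0)
    if minor is not None and n_minor < min_minor_support:
        minor, n_minor = None, 0            # drop singleton minor allele
    genotype = major + (minor if minor is not None else major)
    is_snp = any(allele != ref_base for allele in genotype)
    return genotype, is_snp

# Example: 7x coverage; one 'T' sits at a read end and one 'G' is a singleton,
# so neither triggers a SNP call.
pileup = [("A", 10, 100), ("A", 50, 100), ("T", 1, 100), ("A", 30, 100),
          ("G", 40, 100), ("A", 60, 100), ("A", 70, 100)]
print(call_site(pileup, ref_base="A"))   # ('AA', False)
```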

  7. Functional enrichment analyses and construction of functional similarity networks with high confidence function prediction by PFP

    Directory of Open Access Journals (Sweden)

    Kihara Daisuke

    2010-05-01

    Full Text Available Abstract Background A new paradigm of biological investigation takes advantage of technologies that produce large high throughput datasets, including genome sequences, interactions of proteins, and gene expression. The ability of biologists to analyze and interpret such data relies on functional annotation of the included proteins, but even in highly characterized organisms many proteins can lack the functional evidence necessary to infer their biological relevance. Results Here we have applied high confidence function predictions from our automated prediction system, PFP, to three genome sequences, Escherichia coli, Saccharomyces cerevisiae, and Plasmodium falciparum (malaria). The number of annotated genes is increased by PFP to over 90% for all of the genomes. Using the large coverage of the function annotation, we introduced the functional similarity networks which represent the functional space of the proteomes. Four different functional similarity networks are constructed for each proteome, one each by considering similarity in a single Gene Ontology (GO) category, i.e. Biological Process, Cellular Component, and Molecular Function, and another one by considering overall similarity with the funSim score. The functional similarity networks are shown to have higher modularity than the protein-protein interaction network. Moreover, the funSim score network is distinct from the single GO-score networks by showing a higher clustering degree exponent value and thus has a higher tendency to be hierarchical. In addition, examining function assignments to the protein-protein interaction network and local regions of genomes has identified numerous cases where subnetworks or local regions have functionally coherent proteins. These results will help in interpreting interactions of proteins and gene orders in a genome. Several examples of both analyses are highlighted. Conclusion The analyses demonstrate that applying high confidence predictions from PFP
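
    The construction of such a network can be sketched as follows: proteins are nodes, and an edge is added when their annotation sets are sufficiently similar. In the sketch below, Jaccard similarity over GO term sets merely stands in for the funSim score, and the annotations, cutoff and use of networkx are illustrative assumptions.

```python
import itertools
import networkx as nx

# Toy functional similarity network: proteins are nodes, edges connect pairs
# whose GO annotation sets are similar enough (Jaccard used as a stand-in for
# the funSim score described in the paper).
annotations = {
    "protA": {"GO:0006260", "GO:0003677", "GO:0005634"},
    "protB": {"GO:0006260", "GO:0003677"},
    "protC": {"GO:0006412", "GO:0003735"},
    "protD": {"GO:0006412", "GO:0003735", "GO:0005840"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

G = nx.Graph()
G.add_nodes_from(annotations)
for p, q in itertools.combinations(annotations, 2):
    score = jaccard(annotations[p], annotations[q])
    if score >= 0.5:                       # similarity cutoff (arbitrary for the toy)
        G.add_edge(p, q, weight=score)

print(list(G.edges(data=True)))
# Separate networks per GO category (BP, CC, MF) would be built the same way,
# restricting each annotation set to terms of that category.
```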

  8. Immunization Coverage

    Science.gov (United States)

  9. Functional coverages

    NARCIS (Netherlands)

    Donchyts, G.; Baart, F.; Jagers, H.R.A.; Van Dam, A.

    2011-01-01

    A new Application Programming Interface (API) is presented which simplifies working with geospatial coverages as well as many other data structures of a multi-dimensional nature. The main idea extends the Common Data Model (CDM) developed at the University Corporation for Atmospheric Research

  10. [HPV prophylactic vaccine coverage in France: Results of a survey among high school and university students in Marseilles' area].

    Science.gov (United States)

    Sabiani, L; Bremond, A; Mortier, I; Lecuyer, M; Boubli, L; Carcopino, X

    2012-04-01

    To assess HPV prophylactic vaccine coverage among French high school and university students as well as their level of education about this vaccine. An anonymous survey was conducted among 2500 high school and university students from the area of Marseilles, France, from December 2009 to April 2010. A total of 2018 questionnaires were collected (80.7% participation rate). Mean age of participants was 20 years (range, 15-45 years). Only 671 (35.4%) participants reported having been vaccinated against HPV, of whom 510 (73.4%) had completed the three injections scheme. Practice of cytological cervical cancer screening was not significantly influenced by vaccination status. Thus, 578 (45.2%) participants who had not been vaccinated already had had a cervical cytology performed, versus 295 (43.3%) vaccinated ones (P=0.445). Among those not being vaccinated, 671 (49.8%) fulfilled criteria for a catch-up vaccination, of whom only 325 (48.4%) agreed for such a catch-up. Main reasons given for refusal for a catch-up vaccination were the lack of information about HPV vaccine and fear of side effects. In total, 1722 (90%) considered themselves as educated about the HPV vaccine. Source of education was attributed to doctors and media by 54.4% and 53.7% of participants, respectively. Educational role attributed to school and university was poor (3.4%). Despite apparent satisfactory level of education, HPV prophylactic vaccine coverage among high school and university students appears to be insufficient. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  11. Linking high parity and maternal and child mortality: what is the impact of lower health services coverage among higher order births?

    Science.gov (United States)

    Sonneveldt, Emily; DeCormier Plosky, Willyanne; Stover, John

    2013-01-01

    A number of data sets show that high parity births are associated with higher child mortality than low parity births. The reasons for this relationship are not clear. In this paper we investigate whether high parity is associated with lower coverage of key health interventions that might lead to increased mortality. We used DHS data from 10 high fertility countries to examine the relationship between parity and coverage for 8 child health interventions and 9 maternal health interventions. We also used the LiST model to estimate the effect on maternal and child mortality of the lower coverage associated with high parity births. Our results show a significant relationship between coverage of maternal and child health services and birth order, even when controlling for poverty. The association between coverage and parity for maternal health interventions was more consistently significant across all countries, while for child health interventions there were fewer overall significant relationships and more variation both between and within countries. The differences in coverage between children of parity 3 and those of parity 6 are large enough to account for a 12% difference in the under-five mortality rate and a 22% difference in the maternal mortality ratio in the countries studied. This study shows that coverage of key health interventions is lower for high parity children and the pattern is consistent across countries. This could be a partial explanation for the higher mortality rates associated with high parity. Actions to address this gap could help reduce the higher mortality experienced by high parity births.

  12. Just Images. Television News Coverage of High-Profile Criminal Trials.

    Science.gov (United States)

    Trossman, Mindy S.

    This guide describes "Just Images," a series of television programs and exhibitions that offers a public forum for analyzing television's influential portrayals of trials, lawyers, and the legal system. Contending that television portrayals of high-profile trials have altered the public's perception of law and the role of lawyers in the legal…

  13. Cost-effectiveness of HPV vaccination in the context of high cervical cancer incidence and low screening coverage.

    Science.gov (United States)

    Võrno, Triin; Lutsar, Katrin; Uusküla, Anneli; Padrik, Lee; Raud, Terje; Reile, Rainer; Nahkur, Oliver; Kiivet, Raul-Allan

    2017-11-01

    Estonia has high cervical cancer incidence and low screening coverage. We modelled the impact of population-based bivalent, quadrivalent or nonavalent HPV vaccination alongside cervical cancer screening. A Markov cohort model of the natural history of HPV infection was used to assess the cost-effectiveness of vaccinating a cohort of 12-year-old girls with bivalent, quadrivalent or nonavalent vaccine in two doses in a national, school-based vaccination programme. The model followed the natural progression of HPV infection into subsequent genital warts (GW); premalignant lesions (CIN1-3); cervical, oropharyngeal, vulvar, vaginal and anal cancer. Vaccine coverage was assumed to be 70%. A time horizon of 88 years (up to 100 years of age) was used to capture all lifetime vaccination costs and benefits. Costs and utilities were discounted using an annual discount rate of 5%. Vaccination of 12-year-old girls alongside screening compared to screening alone had an incremental cost-effectiveness ratio (ICER) of €14,007 (bivalent), €14,067 (quadrivalent) and €11,633 (nonavalent) per quality-adjusted life-year (QALY) in the base-case scenario and ranged between €5367-21,711, €5142-21,800 and €4563-18,142, respectively, in sensitivity analysis. The results were most sensitive to changes in discount rate, vaccination regimen, vaccine prices and cervical cancer screening coverage. Vaccination of 12-year-old girls alongside current cervical cancer screening can be considered a cost-effective intervention in Estonia. Adding HPV vaccination to the national immunisation schedule is expected to prevent a considerable number of HPV infections, genital warts, premalignant lesions, HPV related cancers and deaths. Although in our model ICERs varied slightly depending on the vaccine used, they generally fell within the same range. Cost-effectiveness of HPV vaccination was found to be most dependent on vaccine cost and duration of vaccine immunity, but not on the type of vaccine
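
    The reported figures are incremental cost-effectiveness ratios, ICER = ΔCost/ΔQALY, of vaccination-plus-screening versus screening alone. The numbers below are purely hypothetical and only show how a value of roughly €11,600 per QALY can arise from that formula.

```python
# Illustrative ICER calculation (hypothetical inputs, not the study's data):
# ICER = (cost_vaccination - cost_screening_only) / (QALY_vaccination - QALY_screening_only)
cost_screening_only = 1_000_000      # discounted lifetime cost for the cohort, EUR (assumed)
cost_vaccination    = 1_500_000      # screening + HPV vaccination, EUR (assumed)
qaly_screening_only = 20_000.0       # discounted QALYs for the cohort (assumed)
qaly_vaccination    = 20_043.0       # (assumed)

icer = (cost_vaccination - cost_screening_only) / (qaly_vaccination - qaly_screening_only)
print(f"ICER: {icer:,.0f} EUR per QALY gained")   # ~11,628 EUR/QALY
```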

  14. A Novel Beamforming Technique for Highways Coverage Using High-Altitude Platforms

    Directory of Open Access Journals (Sweden)

    Yasser Albagory

    2007-01-01

    Full Text Available This paper proposes a novel beamforming technique to form an arbitrary-shaped cell for the high-altitude platforms (HAPs) mobile communications. The new technique is based on pattern summation of individual low-sidelobe, narrow beams which constitute the desired cell pattern, weighted by an amplitude-correcting function. The new cell pattern can be adapted to cover the main highways, forming worm-shaped cells which may cover the highway for long distances of up to 100 km, and it will have an important role in reducing frequent handoffs and the signaling traffic of location updating from moving users on long highways.
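
    The core idea, summing several steered narrow beams with an amplitude-correcting weight to approximate an elongated cell, can be sketched with a uniform linear array model. The array size, spacing, steering angles and weights below are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Composite pattern as a weighted sum of steered beams from a uniform linear
# array; the steering angles sample the highway and the weights play the role
# of an amplitude-correcting function.
N = 32                                  # elements (assumed)
d = 0.5                                 # element spacing in wavelengths (assumed)
theta = np.radians(np.linspace(-90, 90, 721))                   # observation angles
steer = np.radians(np.array([-30, -20, -10, 0, 10, 20, 30]))    # beams along the highway
weights = np.array([1.0, 0.9, 0.8, 0.7, 0.8, 0.9, 1.0])         # amplitude correction

n = np.arange(N)

def array_factor(th0):
    # Conventional steered-beam array factor, normalized to its peak value.
    af = np.exp(1j * 2 * np.pi * d * n[:, None] * (np.sin(theta) - np.sin(th0)))
    return np.abs(af.sum(axis=0)) / N

composite = sum(w * array_factor(th0) for w, th0 in zip(weights, steer))
half_power = composite > composite.max() / np.sqrt(2)
span = np.ptp(np.degrees(theta[half_power]))
print(f"composite pattern peak near {np.degrees(theta[np.argmax(composite)]):.0f} deg, "
      f"half-power angular span ~{span:.0f} deg")
```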

  15. Rethinking Medicaid Coverage and Payment Policy to Promote High Value Care: The Case of Long-Acting Reversible Contraception.

    Science.gov (United States)

    Vela, Veronica X; Patton, Elizabeth W; Sanghavi, Darshak; Wood, Susan F; Shin, Peter; Rosenbaum, Sara

    Long-acting reversible contraception (LARC) is the most effective reversible method to prevent unplanned pregnancies. Variability in state-level policies and the high cost of LARC could create substantial inconsistencies in Medicaid coverage, despite federal guidance aimed at enhancing broad access. This study surveyed state Medicaid payment policies and outreach activities related to LARC to explore the scope of services covered. Using publicly available information, we performed a content analysis of state Medicaid family planning and LARC payment policies. Purposeful sampling led to a selection of nine states with diverse geographic locations, political climates, Medicaid expansion status, and the number of women covered by Medicaid. All nine states' Medicaid programs covered some aspects of LARC. However, only a single state's payment structure incorporated all core aspects of high-quality LARC service delivery, including counseling, device, insertion, removal, and follow-up care. Most states did not explicitly address counseling, device removal, or follow-up care. Some states had strategies to enhance access, including policies to increase device reimbursement, stocking and delivery programs to remove cost barriers, and covering devices and insertion after an abortion. Although Medicaid policy encourages LARC methods, state payment policies frequently fail to address key aspects of care, including counseling, follow-up care, and removal, resulting in highly variable state-level practices. Although some states include payment policy innovations to support LARC access, significant opportunities remain. Published by Elsevier Inc.

  16. High latrine coverage is not reducing the prevalence of soil-transmitted helminthiasis in Hoa Binh province, Vietnam.

    Science.gov (United States)

    Yajima, Aya; Jouquet, Pascal; Do, Trung Dung; Dang, Thi Cam Thach; Tran, Cong Dai; Orange, Didier; Montresor, Antonio

    2009-03-01

    A baseline epidemiological survey for parasite infections was conducted between December 2007 and January 2008 in 155 villagers in a rural commune in Hoa Binh province, Vietnam. The prevalence of Ascaris lumbricoides, Trichuris trichiura and hookworm infection was 13.5%, 45.2% and 58.1%, respectively. At least one of the parasites was detected in 72.3% of the samples. We found no association between infection with A. lumbricoides or T. trichiura and engagement in agriculture, while hookworm infection was more prevalent in populations having frequent contact with soil. Agricultural use of human faeces was not correlated with any of the infections. We suggest that the consumption of vegetables that are commonly fertilized with human faeces in the community has led to the high infection rates with A. lumbricoides and T. trichiura, rather than the manipulation of faeces in farming activity. This also explains the high infection prevalence, despite high latrine coverage (98.1%) in the study population. The presence of latrines alone is not sufficient to reduce the prevalence of helminthiasis in a rural agricultural community if fresh faeces are used as fertilizer.

  17. Research in high speed fiber optics local area networks

    Science.gov (United States)

    Tobagi, F. A.

    1986-01-01

    The design of high speed local area networks (HSLAN) for communication among distributed devices requires solving problems in three areas: the network medium and its topology, the medium access control, and the network interface. Considerable progress has already been made in the first two areas. Accomplishments are divided into two groups according to their theoretical or experimental nature. A brief summary is given.

  18. Networks and their use in high energy physics

    International Nuclear Information System (INIS)

    Poutissou, R.

    1990-01-01

    Networks of all kinds have taken on ever-increasing importance in many facets of the work of high energy physicists. With the rapid progress experienced in bandwidth and connectivity, wide area and local area networking have vastly improved communications and made distributed processing a reality. In this paper, the characteristics and possible applications of various networks and protocols are reviewed

  19. Accurate and High-Coverage Immune Repertoire Sequencing Reveals Characteristics of Antibody Repertoire Diversification in Young Children with Malaria

    Science.gov (United States)

    Jiang, Ning

    Accurately measuring the immune repertoire sequence composition, diversity, and abundance is important in studying repertoire response in infections, vaccinations, and cancer immunology. Using molecular identifiers (MIDs) to tag mRNA molecules is an effective method for improving the accuracy of immune repertoire sequencing (IR-seq). However, it is still difficult to use IR-seq on small amounts of clinical samples to achieve high coverage of the repertoire diversity. This is especially challenging in studying infections and vaccinations, where B cell subpopulations with fewer cells, such as memory B cells or plasmablasts, are often of great interest for studying somatic mutation patterns and diversity changes. Here, we describe an approach to IR-seq based on the use of MIDs in combination with a clustering method that can reveal more than 80% of the antibody diversity in a sample and can be applied to as few as 1,000 B cells. We applied this approach to study the antibody repertoires of young children before and during an acute malaria infection. We discovered unexpectedly high levels of somatic hypermutation (SHM) in infants and revealed characteristics of antibody repertoire development in young children that would have a profound impact on immunization in children.
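
    The underlying principle of MID-based error correction, grouping reads that share a molecular identifier and taking a per-position consensus, can be illustrated as follows; this is a simplified sketch, not the clustering method used in the paper.

```python
from collections import defaultdict, Counter

# Reads sharing a MID (molecular identifier) originate from one mRNA molecule,
# so a per-position majority vote removes most sequencing errors.
reads = [
    ("MID1", "ACGTACGT"),
    ("MID1", "ACGTACGA"),   # sequencing error at the last base
    ("MID1", "ACGTACGT"),
    ("MID2", "TTGCAAGC"),
    ("MID2", "TTGCAAGC"),
]

groups = defaultdict(list)
for mid, seq in reads:
    groups[mid].append(seq)

def consensus(seqs):
    return "".join(Counter(bases).most_common(1)[0][0] for bases in zip(*seqs))

molecules = {mid: consensus(seqs) for mid, seqs in groups.items()}
print(molecules)   # {'MID1': 'ACGTACGT', 'MID2': 'TTGCAAGC'}
```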

  20. Large-size, high-uniformity, random silver nanowire networks as transparent electrodes for crystalline silicon wafer solar cells.

    Science.gov (United States)

    Xie, Shouyi; Ouyang, Zi; Jia, Baohua; Gu, Min

    2013-05-06

    Metal nanowire networks are emerging as next-generation transparent electrodes for photovoltaic devices. We demonstrate the application of random silver nanowire networks as the top electrode on crystalline silicon wafer solar cells. The dependence of transmittance and sheet resistance on the surface coverage is measured. Superior optical and electrical properties are observed due to the large-size, highly uniform nature of these networks. When applying the nanowire networks to the solar cells with an optimized two-step annealing process, we achieved an enhancement of up to 19% in the energy conversion efficiency. The detailed analysis reveals that the enhancement is mainly caused by the improved electrical properties of the solar cells due to the silver nanowire networks. Our results reveal that this technology is a promising alternative transparent electrode technology for crystalline silicon wafer solar cells.

  1. Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter

    Science.gov (United States)

    Ashizawa, Kunitaka; Sato, Takehiro; Tokuhashi, Kazumasa; Ishii, Daisuke; Okamoto, Satoru; Yamanaka, Naoaki; Oki, Eiji

    This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The Active Optical Network, called ActiON, using PLZT switching technology has been presented to increase the number of subscribers and the maximum transmission distance, compared to the Passive Optical Network (PON). ActiON supports multicast slot allocation realized by running the PLZT switch elements in the splitter mode, which forces the switch to behave as an optical splitter. However, the previous ActiON creates a tradeoff between the network scalability and the power loss experienced by the optical signal to each user. It does not use the optical power efficiently because the optical power is simply split 50:50 without considering the transmission distance from the OLT to each ONU. The proposed network adopts PLZT switch elements in the variable splitter mode, which controls the split ratio of the optical power considering the transmission distance from the OLT to each ONU, in addition to PLZT switch elements in the existing two modes, the switching mode and the splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from the OLT to each user and the number of required users using the three modes, while keeping the advantages of ActiON, which are to support scalable and secure access services. Numerical results show that the proposed network dramatically reduces the required number of slots, supports high bandwidth efficiency services and extends the coverage of the access network, compared to the previous ActiON, and the required computation time for selecting multicast users is less than 30 msec, which is acceptable for on-demand broadcast services.
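
    The variable splitter mode can be illustrated by choosing split fractions in proportion to each ONU's path loss so that received powers are roughly equalized, rather than using a fixed 50:50 split. The attenuation value, distances and single-stage split below are illustrative assumptions, not the ActiON control algorithm.

```python
# Distance-aware power splitting: pick each ONU's split fraction so that, after
# fiber attenuation, every ONU receives roughly the same optical power.
ALPHA_DB_PER_KM = 0.2                     # assumed fiber attenuation
distances_km = [5.0, 12.0, 20.0]          # assumed OLT -> ONU fiber lengths

link_gain = [10 ** (-ALPHA_DB_PER_KM * d / 10) for d in distances_km]   # linear loss factor
inv = [1 / g for g in link_gain]
split = [x / sum(inv) for x in inv]       # fraction of optical power sent toward each ONU

p_tx_mw = 10.0
received = [p_tx_mw * f * g for f, g in zip(split, link_gain)]
for d, f, p in zip(distances_km, split, received):
    print(f"ONU at {d:4.1f} km: split {f:.2f}, received {p:.2f} mW")
# All three received powers come out equal (~1.8 mW), unlike an even split.
```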

  2. Media coverage of violence against women in India: a systematic study of a high profile rape case.

    Science.gov (United States)

    Phillips, Mark; Mostofian, Fargol; Jetly, Rajeev; Puthukudy, Nazar; Madden, Kim; Bhandari, Mohit

    2015-01-22

    On December 16, 2012 a 23 year old female was gang-raped on a bus in Delhi. We systematically reviewed professional online media sources to examine the timing, breadth of coverage, opinions and consistency in the depiction of the events surrounding the gang-rape. We searched two news databases (LexisNexis Academic and Factivia) and individual newspapers for English-language published media reports covering the gang-rape. Two reviewers screened the media reports and extracted data regarding the time, location and content of each report. Results were summarized qualitatively. We identified 534 published media reports. Of these, 351 met our eligibility criteria. Based on a time chart, the total number of reports published increased steadily through December, but plateaued to a steady rate of articles per day by the first week of January. Content analysis revealed significant discrepancies between various media reports. Of the 57 articles that discussed opinions about the victim, 56% applauded her bravery, 40% discussed outrage over the events and 11% discussed cases of victim-blaming. The global media response to the December 16th gang-rape in India resulted in a highly inconsistent depiction of the events. These findings suggest that although the spread of information through media is fast, it has major limitations.

  3. Highly dynamic animal contact network and implications on disease transmission

    OpenAIRE

    Shi Chen; Brad J. White; Michael W. Sanderson; David E. Amrine; Amiyaal Ilany; Cristina Lanzas

    2014-01-01

    Contact patterns among hosts are considered one of the most critical factors contributing to unequal pathogen transmission. Consequently, networks have been widely applied in infectious disease modeling. However, most studies assume a static network structure due to a lack of accurate observations and appropriate analytic tools. In this study, we used high temporal and spatial resolution animal position data to construct a high-resolution contact network relevant to infectious disease transmissio...
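
    A dynamic contact network of the kind described can be built by slicing the position data into time windows and connecting animals that come within a distance threshold in the same window. The thresholds and data below are illustrative assumptions, not the study's parameters.

```python
import math
from collections import defaultdict

# Toy construction of a time-sliced contact network from position records:
# two animals are "in contact" during a window if they come within CONTACT_DIST.
CONTACT_DIST = 2.0     # metres (assumed)
SLICE = 60             # seconds per network snapshot (assumed)

# (timestamp_s, animal_id, x_m, y_m) -- made-up records
positions = [
    (5,  "cow1", 0.0, 0.0), (5,  "cow2", 1.0, 0.5), (5,  "cow3", 30.0, 40.0),
    (70, "cow1", 10.0, 0.0), (70, "cow2", 25.0, 5.0), (70, "cow3", 24.0, 5.5),
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

by_slice = defaultdict(dict)
for t, animal, x, y in positions:
    by_slice[t // SLICE][animal] = (x, y)

contact_network = defaultdict(set)   # slice index -> set of contact pairs (edges)
for sl, locs in by_slice.items():
    animals = sorted(locs)
    for i, a in enumerate(animals):
        for b in animals[i + 1:]:
            if dist(locs[a], locs[b]) <= CONTACT_DIST:
                contact_network[sl].add((a, b))

for sl in sorted(contact_network):
    print(f"slice {sl}: contacts {sorted(contact_network[sl])}")
# The edge set changes from one slice to the next, which is exactly the
# dynamic structure a static network representation would miss.
```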

  4. The Importance of Loosely Systematized Game Phases in Sports: The Case of Attack Coverage Systems in High-Level Women’s Volleyball

    Directory of Open Access Journals (Sweden)

    Lorenzo Laporta

    2015-03-01

    Full Text Available Change is ubiquitous, but its degree and rate often afford detection of emerging patterns and establishment of behavioral dynamics based on expected regularities. Match analysis capitalizes on such regularities, capturing information relevant for enhancing structure and reducing improvisation to a minimum. However, what if a game phase is only loosely regular, defying pattern systematization? Is it still possible to unfold principles of behavior capable of abstracting over-arching patterns? Our research focused on analysis of complex IV (KIV), or attack coverage, in volleyball. Fourteen matches from the 2013 Volleyball Women’s World Grand Champions Cup were analyzed. Results showed that the occurrence of KIV corresponded to fewer than 5% of the total number of actions, and plays where a team successfully won a point after attack coverage amounted to circa 1%, meaning this game complex will only make a difference in balanced matches. Overall, twenty-nine attack coverage structures emerged, denoting very high organizational variability. Attack coverage therefore provides an example of a principle-based and not a structure-based game phase. Associative analysis showed that quick attack tempos constrain the emergence of more complex attack coverage structures. The search for principle-based instead of structure-based game phases may provide useful insights for the comprehension of game dynamics and for informing training processes.

  5. Hepatitis A vaccination coverage among adults 18–49 years traveling to a country of high or intermediate endemicity, United States

    Science.gov (United States)

    Lu, Peng-jun; Byrd, Kathy K.; Murphy, Trudy V.

    2018-01-01

    Background Since 1996, hepatitis A vaccine (HepA) has been recommended for adults at increased risk for infection, including travelers to high or intermediate hepatitis A endemic countries. In 2009, travel outside the United States and Canada was the most common exposure nationally reported for persons with hepatitis A virus (HAV) infection. Objective To assess HepA vaccination coverage among adults 18–49 years traveling to a country of high or intermediate endemicity in the United States. Methods We analyzed data from the 2010 National Health Interview Survey (NHIS) to determine self-reported HepA vaccination coverage (≥1 dose) and series completion (≥2 doses) among persons 18–49 years who traveled, since 1995, to a country of high or intermediate HAV endemicity. Multivariable logistic regression and predictive marginal analyses were conducted to identify factors independently associated with HepA vaccine receipt. Results In 2010, approximately 36.6% of adults 18–49 years reported traveling to high or intermediate hepatitis A endemic countries; among this group, unadjusted HepA vaccination coverage was 26.6%, compared to 12.7% among non-travelers (P-values < 0.05). Conclusions Although travel to a country of high or intermediate hepatitis A endemicity was associated with a higher likelihood of HepA vaccination in 2010 among adults 18–49 years, self-reported HepA vaccination coverage was low among adult travelers to these areas. Healthcare providers should ask about their patients’ upcoming travel plans and recommend and offer travel-related vaccinations to their patients. PMID:23523408
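
    The type of analysis described, a multivariable logistic regression of vaccination status on traveler status and covariates followed by predictive margins, can be sketched with statsmodels on simulated survey-like data; the variable names and effect sizes are invented and are not the NHIS variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated survey-like data (hypothetical variables and effects).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "traveler": rng.integers(0, 2, n),
    "age": rng.integers(18, 50, n),
    "college": rng.integers(0, 2, n),
})
logit_p = -2.0 + 0.8 * df["traveler"] + 0.02 * (df["age"] - 18) + 0.5 * df["college"]
df["hepA_vaccinated"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Multivariable logistic regression of vaccination on traveler status + covariates.
model = smf.logit("hepA_vaccinated ~ traveler + age + college", data=df).fit(disp=False)
print(model.params)

# Crude analogue of a predictive margin: average predicted probability with
# traveler set to 1 versus 0 for the whole sample.
margins = {t: model.predict(df.assign(traveler=t)).mean() for t in (0, 1)}
print({k: round(v, 3) for k, v in margins.items()})
```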

  6. High speed all-optical networks

    Science.gov (United States)

    Chlamtac, Imrich

    1993-01-01

    An inherent problem of conventional point-to-point WAN architectures is that they cannot translate optical transmission bandwidth into comparable user available throughput due to the limiting electronic processing speed of the switching nodes. This report presents the first solution to WDM based WAN networks that overcomes this limitation. The proposed Lightnet architecture takes into account the idiosyncrasies of WDM switching/transmission leading to an efficient and pragmatic solution. The Lightnet architecture trades the ample WDM bandwidth for a reduction in the number of processing stages and a simplification of each switching stage, leading to drastically increased effective network throughputs.

  7. The persistent gap in health-care coverage between low- and high-income workers in Washington State: BRFSS, 2003-2007.

    Science.gov (United States)

    Fan, Z Joyce; Anderson, Naomi J; Foley, Michael; Rauser, Eddy; Silverstein, Barbara A

    2011-01-01

    We examined the disparities in health-care coverage between low- and high-income workers in Washington State (WA) to provide support for possible policy decisions for uninsured workers. We examined data from the WA Behavioral Risk Factor Surveillance System 2003-2007 and compared workers aged 18-64 years of low income (annual household income <$35,000) and high income (annual household income ≥$35,000) on proportions and sources of health-care coverage. We conducted multivariable logistic regression analyses on factors that were associated with being uninsured. Of the 54,536 survey respondents who were working-age adults in WA, 13,922 (25.5%) were low-income workers. The proportions of uninsured were 38.2% for low-income workers and 6.3% for high-income workers. While employment-based health benefits remained a dominant source of health insurance coverage, they covered only 40.2% of low-income workers relative to 81.5% of high-income workers. Besides income, workers were more likely to be uninsured if they were younger; male; Hispanic; less educated; not married; current smokers; self-employed; or employed in agriculture/forestry/fisheries, construction, and retail. More low-income workers (28.7%) reported cost as an issue in paying for health services than did their high-income counterparts (6.7%). A persistent gap in health-care coverage exists between low- and high-income workers. The identified characteristics of these workers can be used to implement policies to expand health insurance coverage.

  8. Hepatitis B vaccination coverage among adults aged ≥18 years traveling to a country of high or intermediate endemicity, United States, 2015.

    Science.gov (United States)

    Lu, Peng-Jun; O'Halloran, Alissa C; Williams, Walter W; Nelson, Noele P

    2018-04-28

    Persons from the United States who travel to developing countries are at substantial risk for hepatitis B virus (HBV) infection. Hepatitis B vaccine has been recommended for adults at increased risk for infection, including travelers to high or intermediate hepatitis B endemic countries. To assess hepatitis B vaccination coverage among adults ≥18 years traveling to a country of high or intermediate endemicity from the United States. Data from the 2015 National Health Interview Survey (NHIS) were analyzed to determine hepatitis B vaccination coverage (≥1 dose) and series completion (≥3 doses) among persons aged ≥18 years who reported traveling to a country of high or intermediate hepatitis B endemicity. Multivariable logistic regression and predictive marginal analyses were conducted to identify factors independently associated with hepatitis B vaccination. In 2015, hepatitis B vaccination coverage (≥1 dose) among adults aged ≥18 years who reported traveling to high or intermediate hepatitis B endemic countries was 38.6%, significantly higher compared with 25.9% among non-travelers. Series completion (≥3 doses) was 31.7% and 21.2%, respectively (P < 0.05). Travel status was significantly associated with hepatitis B vaccination coverage and series completion. Other characteristics independently associated with vaccination (≥1 dose and ≥3 doses) among travelers included age, race/ethnicity, educational level, duration of US residence, number of physician contacts in the past year, status of ever being tested for HIV, and healthcare personnel status. Although travel to a country of high or intermediate hepatitis B endemicity was associated with higher likelihood of hepatitis B vaccination, hepatitis B vaccination coverage was low among adult travelers to these areas. Healthcare providers should ask their patients about travel plans and recommend and offer travel-related vaccinations to their patients or refer them to alternate sites for vaccination

  9. Hepatitis B vaccination coverage among adults aged ≥ 18 years traveling to a country of high or intermediate endemicity, United States, 2015.

    Science.gov (United States)

    Lu, Peng-Jun; O'Halloran, Alissa C; Williams, Walter W; Nelson, Noele P

    2018-04-25

    Persons from the United States who travel to developing countries are at substantial risk for hepatitis B virus (HBV) infection. Hepatitis B vaccine has been recommended for adults at increased risk for infection, including travelers to high or intermediate hepatitis B endemic countries. To assess hepatitis B vaccination coverage among adults ≥ 18 years traveling to a country of high or intermediate endemicity from the United States. Data from the 2015 National Health Interview Survey (NHIS) were analyzed to determine hepatitis B vaccination coverage (≥1 dose) and series completion (≥3 doses) among persons aged ≥ 18 years who reported traveling to a country of high or intermediate hepatitis B endemicity. Multivariable logistic regression and predictive marginal analyses were conducted to identify factors independently associated with hepatitis B vaccination. In 2015, hepatitis B vaccination coverage (≥1 dose) among adults aged ≥ 18 years who reported traveling to high or intermediate hepatitis B endemic countries was 38.6%, significantly higher compared with 25.9% among non-travelers. Series completion (≥3 doses) was 31.7% and 21.2%, respectively (P < 0.05). Travel status was significantly associated with hepatitis B vaccination coverage and series completion. Other characteristics independently associated with vaccination (≥1 dose and ≥ 3 doses) among travelers included age, race/ethnicity, educational level, duration of U.S. residence, number of physician contacts in the past year, status of ever being tested for HIV, and healthcare personnel status. Although travel to a country of high or intermediate hepatitis B endemicity was associated with higher likelihood of hepatitis B vaccination, hepatitis B vaccination coverage was low among adult travelers to these areas. Healthcare providers should ask their patients about travel plans and recommend and offer travel-related vaccinations to their patients or refer them to alternate

  10. Building and measuring a high performance network architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  11. High ANC coverage and low skilled attendance in a rural Tanzanian district: a case for implementing a birth plan intervention

    Directory of Open Access Journals (Sweden)

    Cousens Simon

    2010-03-01

    Full Text Available Abstract Background In Tanzania, more than 90% of all pregnant women attend antenatal care at least once and approximately 62% four times or more, yet less than five in ten receive skilled delivery care at available health units. We conducted a qualitative study in Ngorongoro district, Northern Tanzania, in order to gain an understanding of the health systems and socio-cultural factors underlying this divergent pattern of high use of antenatal services and low use of skilled delivery care. Specifically, the study examined beliefs and behaviors related to antenatal, labor, delivery and postnatal care among the Maasai and Watemi ethnic groups. The perspectives of health care providers and traditional birth attendants on childbirth and the factors determining where women deliver were also investigated. Methods Twelve key informant interviews and fifteen focus group discussions were held with Maasai and Watemi women, traditional birth attendants, health care providers, and community members. Principles of the grounded theory approach were used to elicit and assess the various perspectives of each group of participants interviewed. Results The Maasai and Watemi women's preferences for a home birth and lack of planning for delivery are reinforced by the failure of health care providers to consistently communicate the importance of skilled delivery and immediate post-partum care for all women during routine antenatal visits. Husbands typically serve as gatekeepers of women's reproductive health in the two groups - including decisions about where they will deliver- yet they are rarely encouraged to attend antenatal sessions. While husbands are encouraged to participate in programs to prevent maternal-to-child transmission of HIV, messages about the importance of skilled delivery care for all women are not given emphasis. Conclusions Increasing coverage of skilled delivery care and achieving the full implementation of Tanzania's Focused Antenatal Care

  12. High ANC coverage and low skilled attendance in a rural Tanzanian district: a case for implementing a birth plan intervention.

    Science.gov (United States)

    Magoma, Moke; Requejo, Jennifer; Campbell, Oona M R; Cousens, Simon; Filippi, Veronique

    2010-03-19

    In Tanzania, more than 90% of all pregnant women attend antenatal care at least once and approximately 62% four times or more, yet less than five in ten receive skilled delivery care at available health units. We conducted a qualitative study in Ngorongoro district, Northern Tanzania, in order to gain an understanding of the health systems and socio-cultural factors underlying this divergent pattern of high use of antenatal services and low use of skilled delivery care. Specifically, the study examined beliefs and behaviors related to antenatal, labor, delivery and postnatal care among the Maasai and Watemi ethnic groups. The perspectives of health care providers and traditional birth attendants on childbirth and the factors determining where women deliver were also investigated. Twelve key informant interviews and fifteen focus group discussions were held with Maasai and Watemi women, traditional birth attendants, health care providers, and community members. Principles of the grounded theory approach were used to elicit and assess the various perspectives of each group of participants interviewed. The Maasai and Watemi women's preferences for a home birth and lack of planning for delivery are reinforced by the failure of health care providers to consistently communicate the importance of skilled delivery and immediate post-partum care for all women during routine antenatal visits. Husbands typically serve as gatekeepers of women's reproductive health in the two groups - including decisions about where they will deliver- yet they are rarely encouraged to attend antenatal sessions. While husbands are encouraged to participate in programs to prevent maternal-to-child transmission of HIV, messages about the importance of skilled delivery care for all women are not given emphasis. Increasing coverage of skilled delivery care and achieving the full implementation of Tanzania's Focused Antenatal Care Package in Ngorongoro depends upon improved training and monitoring of

  13. Edema worsens target coverage in high-dose-rate interstitial brachytherapy of mobile tongue cancer: a report of two cases.

    Science.gov (United States)

    Yoshida, Ken; Yamazaki, Hideya; Kotsuma, Tadayuki; Akiyama, Hironori; Takenaka, Tadashi; Masui, Koji; Yoshioka, Yasuo; Uesugi, Yasuo; Shimbo, Taiju; Yoshikawa, Nobuhiko; Yoshioka, Hiroto; Arika, Takumi; Tanaka, Eiichi; Narumi, Yoshifumi

    2017-02-01

    We report our study on two patients to highlight the risk of underdosage of the clinical target volume (CTV) due to edema during high-dose-rate interstitial brachytherapy (HDR-ISBT) of mobile tongue cancer. To treat the lateral side of the CTV, flexible applicator tubes were implanted on the mouth floor. Two-dimensional planning was performed using X-ray images for Case 1, and three-dimensional (3D) planning was performed using computed tomography (CT) for Case 2. Prescribed doses for both cases were 54 Gy in nine fractions. Case 1 was treated for cancer of the right lateral border of the tongue in 2005. Tongue edema occurred after implantation, and part of the lateral border of the tongue protruded between the applicator tubes. Acute mucosal reaction abated in the protruded area earlier than in the other parts of the CTV. In this case, the tumor recurred in this area 5 months after the treatment. Case 2 was treated for cancer of the left lateral border of the tongue. Because tongue edema occurred in this case also, plastic splints were inserted between the applicator tubes to push the edematous region into the irradiated area. The mucosal surface of the CTV was covered by the 70% isodose line before splint insertion and by the 100% isodose line after splint insertion. Local control of the tumor was achieved 4 years after treatment. To ensure sufficient target coverage, 3D image-based planning using CT should be performed, followed by re-planning using repeated CT as needed. Also, the development of devices to prevent protrusion of the edematous tissue outside the target area will help to ensure the full dosing of CTV.

  14. Simultaneous and complete genome sequencing of influenza A and B with high coverage by Illumina MiSeq Platform.

    Science.gov (United States)

    Rutvisuttinunt, Wiriya; Chinnawirotpisan, Piyawan; Simasathien, Sriluck; Shrestha, Sanjaya K; Yoon, In-Kyu; Klungthong, Chonticha; Fernandez, Stefan

    2013-11-01

    Active global surveillance and characterization of influenza viruses are essential for better preparation against possible pandemic events. Obtaining comprehensive information about the influenza genome can improve our understanding of the evolution of influenza viruses and the emergence of new strains, and improve the accuracy when designing preventive vaccines. This study investigated the use of deep sequencing by the next-generation sequencing (NGS) Illumina MiSeq Platform to obtain complete genome sequence information from influenza virus isolates. The influenza virus isolates were cultured from 6 acute respiratory clinical specimens collected in Thailand and Nepal. DNA libraries obtained from each viral isolate were mixed and all were sequenced simultaneously. A total of 2.6 Gbases of sequence information was obtained at a cluster density of 455±14 K/mm2, with 95.76% (8,571,655/8,950,724 clusters) of the clusters passing quality control (QC) filters. Approximately 93.7% of all sequences from Read1 and 83.5% from Read2 contained high quality sequences that were ≥Q30, a base calling QC score standard. Alignment analysis identified three seasonal influenza A H3N2 strains, one 2009 pandemic influenza A H1N1 strain and two influenza B strains. The near-complete genomes of all six virus isolates were sequenced at a coverage depth of 600-fold or greater. The MiSeq Platform identified seasonal influenza A H3N2, 2009 pandemic influenza A H1N1 and influenza B in the DNA library mixtures efficiently. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
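
    The run metrics quoted above lend themselves to a quick arithmetic check. The sketch below (Python; the figures are rounded values from the abstract, and the ~13.5 kb influenza genome length is an assumption, not a number from the study) recomputes the cluster pass rate and shows how a naive per-isolate coverage depth estimate is derived.

    ```python
    # Rough QC arithmetic for a multiplexed MiSeq run (illustrative sketch only).
    # Figures are rounded values from the abstract; the genome length is assumed.

    clusters_total = 8_950_724      # clusters detected
    clusters_pf = 8_571_655         # clusters passing QC filters
    pass_rate = 100 * clusters_pf / clusters_total
    print(f"Cluster pass rate: {pass_rate:.2f}%")          # ~95.76%

    total_bases = 2.6e9             # ~2.6 Gbases reported for the whole run
    n_isolates = 6
    genome_length = 13_500          # assumed ~13.5 kb influenza genome

    # If bases were spread evenly across the six libraries, the expected depth is:
    mean_depth = total_bases / (n_isolates * genome_length)
    print(f"Naive mean coverage depth per isolate: {mean_depth:,.0f}x")
    ```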

  15. Mapping Urban Tree Canopy Coverage and Structure using Data Fusion of High Resolution Satellite Imagery and Aerial Lidar

    Science.gov (United States)

    Elmes, A.; Rogan, J.; Williams, C. A.; Martin, D. G.; Ratick, S.; Nowak, D.

    2015-12-01

    Urban tree canopy (UTC) coverage is a critical component of sustainable urban areas. Trees provide a number of important ecosystem services, including air pollution mitigation, water runoff control, and aesthetic and cultural values. Critically, urban trees also act to mitigate the urban heat island (UHI) effect by shading impervious surfaces and via evaporative cooling. The cooling effect of urban trees can be seen locally, with individual trees reducing home HVAC costs, and at a citywide scale, reducing the extent and magnitude of an urban area's UHI. In order to accurately model the ecosystem services of a given urban forest, it is essential to map in detail the condition and composition of these trees at a fine scale, capturing individual tree crowns and their vertical structure. This paper presents methods for delineating UTC and measuring canopy structure at fine spatial resolution, relying on a data fusion method to combine the information contained in high resolution WorldView-3 satellite imagery and aerial lidar data using an object-based image classification approach. The study area, Worcester, MA, has recently undergone a large-scale tree removal and reforestation program, following a pest eradication effort. Therefore, the urban canopy in this location provides a wide mix of tree age class and functional type, ideal for illustrating the effectiveness of the proposed methods. Early results show that the object-based classifier is indeed capable of identifying individual tree crowns, while continued research will focus on extracting crown structural characteristics using lidar-derived metrics. Ultimately, the resulting fine resolution UTC map will be compared with previously created UTC maps of the same area but for earlier dates, producing a canopy change map corresponding to the Worcester area tree removal and replanting effort.

  16. Performance Evaluation of a Dual Coverage System for Internet of Things Environments

    Directory of Open Access Journals (Sweden)

    Omar Said

    2016-01-01

    Full Text Available A dual coverage system for Internet of Things (IoT) environments is introduced. This system is used to connect IoT nodes regardless of their locations. The proposed system has three different architectures, which are based on satellites and High Altitude Platforms (HAPs). In case of Internet coverage problems, the Internet coverage will be replaced with the Satellite/HAP network coverage under specific restrictions such as loss and delay. According to IoT requirements, the proposed architectures should include multiple levels of satellites or HAPs, or a combination of both, to cover Internet things globally. It was shown that the Satellite/HAP/HAP/Things architecture provides the largest coverage area. A network simulation package, NS2, was used to test the performance of the proposed multilevel architectures. The results indicated that the HAP/HAP/Things architecture has the best end-to-end delay, packet loss, throughput, energy consumption, and handover.

  17. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applications.

  18. EasyPCC: Benchmark Datasets and Tools for High-Throughput Measurement of the Plant Canopy Coverage Ratio under Field Conditions

    Directory of Open Access Journals (Sweden)

    Wei Guo

    2017-04-01

    Full Text Available Understanding interactions of genotype, environment, and management under field conditions is vital for selecting new cultivars and farming systems. Image analysis is considered a robust technique in high-throughput phenotyping with non-destructive sampling. However, analysis of digital field-derived images remains challenging because of the variety of light intensities, growth environments, and developmental stages. The plant canopy coverage (PCC) ratio is an important index of crop growth and development. Here, we present a tool, EasyPCC, for effective and accurate evaluation of the ground coverage ratio from a large number of images under variable field conditions. The core algorithm of EasyPCC is based on a pixel-based segmentation method using a decision-tree-based segmentation model (DTSM). EasyPCC was developed under the MATLAB® and R languages; thus, it could be implemented in high-performance computing to handle large numbers of images following just a single model training process. This study used an experimental set of images from a paddy field to demonstrate EasyPCC, and to show the accuracy improvement possible by adjusting key points (e.g., outlier deletion and model retraining). The accuracy (R2 = 0.99) of the calculated coverage ratio was validated against a corresponding benchmark dataset. The EasyPCC source code is released under GPL license with benchmark datasets of several different crop types for algorithm development and for evaluating ground coverage ratios.
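
    EasyPCC itself ships as MATLAB/R code, but the core idea described above (pixel-level classification with a decision tree followed by a coverage-ratio calculation) can be illustrated compactly. The sketch below (Python with scikit-learn; the toy training pixels, variable names and the random placeholder image are all assumptions, not material from the released tool) trains a classifier on labelled RGB pixels and reports the fraction of an image predicted to be plant.

    ```python
    # Illustrative pixel-based canopy-coverage estimate with a decision tree.
    # Not the EasyPCC implementation; the training pixels below are synthetic placeholders.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Toy training set: RGB pixels labelled 1 = plant, 0 = background/soil.
    train_pixels = np.array([
        [40, 120, 35], [55, 140, 50], [30, 100, 25],    # greenish (plant)
        [120, 100, 80], [180, 170, 150], [90, 80, 70],  # brownish/grey (soil)
    ])
    train_labels = np.array([1, 1, 1, 0, 0, 0])

    model = DecisionTreeClassifier(max_depth=5, random_state=0)
    model.fit(train_pixels, train_labels)

    def canopy_coverage_ratio(image):
        """image: H x W x 3 uint8 array; returns the fraction of pixels labelled plant."""
        pixels = image.reshape(-1, 3)
        return model.predict(pixels).mean()

    # Usage with a random placeholder image standing in for a field photograph:
    fake_image = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
    print(f"Estimated canopy coverage ratio: {canopy_coverage_ratio(fake_image):.2%}")
    ```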

  19. High coverage hydrogen adsorption on the Fe₃O₄(1 1 0) surface

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xiaohu, E-mail: yuxiaohu950203@126.com [College of Physics and Electrical Engineering, Anyang Normal University, Anyang, Henan 455000 (China); State Key laboratory of Coal Conversion, Institute of Coal Chemistry, Chinese Academy of Sciences, Taiyuan, Shanxi 030001 (China); Zhang, Xuemei [College of Physics and Electrical Engineering, Anyang Normal University, Anyang, Henan 455000 (China); Wang, Shengguang [State Key laboratory of Coal Conversion, Institute of Coal Chemistry, Chinese Academy of Sciences, Taiyuan, Shanxi 030001 (China); Synfuels China Co., Ltd., Huairou, Beijing 101407 (China)

    2015-10-30

    Highlights: • Hydrogen adsorption on the A and B termination layers of the Fe₃O₄(1 1 0) surface at different coverage has been studied by the DFT + U method. • The adsorption of hydrogen prefers surface oxygen atoms on both Fe₃O₄(1 1 0) surface layers. • The more stable A layer has stronger adsorption energy than the less stable B layer. • The saturation coverage has two dissociatively adsorbed H₂ on the A layer, and one dissociatively adsorbed H₂ on the B layer. Abstract: Hydrogen adsorption on the A and B termination layers of the Fe₃O₄(1 1 0) surface at different coverage has been systematically studied by density functional theory calculations including an on-site Hubbard term (GGA + U). The adsorption of hydrogen prefers surface oxygen atoms on both layers. The more stable A layer has stronger adsorption energy than the less stable B layer. The saturation coverage has two dissociatively adsorbed H₂ on the A layer, and one dissociatively adsorbed H₂ on the B layer. The adsorption mechanism has been analyzed on the basis of projected density of states (PDOS).
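
    The adsorption energies discussed in this record are conventionally referenced to the clean slab and gas-phase H₂. A standard definition, written here in my own notation rather than reproduced from the paper, is (per dissociatively adsorbed H₂ molecule, for n hydrogen atoms on the slab, with more negative values indicating stronger binding):

    ```latex
    % Average adsorption energy per dissociatively adsorbed H2 molecule
    E_{\mathrm{ads}} = \frac{2}{n}\left[ E(\mathrm{slab}+n\mathrm{H})
                     - E(\mathrm{slab}) - \frac{n}{2}\,E(\mathrm{H_2}) \right]
    ```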

  20. Applications of neural networks in high energy physics

    International Nuclear Information System (INIS)

    Cutts, D.; Hoftun, J.S.; Nesic, D.; Sornborger, A.; Johnson, C.R.; Zeller, R.T.

    1990-01-01

    Neural network techniques provide promising solutions to pattern recognition problems in high energy physics. We discuss several applications of back propagation networks, and in particular describe the operation of an electron algorithm based on calorimeter energies. 5 refs., 5 figs., 1 tab

  1. Network Information Management: The Key To Providing High WAN Availability.

    Science.gov (United States)

    Tysdal, Craig

    1996-01-01

    Discusses problems associated with increasing corporate network complexity as a result of the proliferation of client/server applications at remote locations, and suggests the key to providing high WAN (wide area network) availability is relational databases used in an integrated management approach. (LRW)

  2. High-Speed Railways and Urban Networks in China

    NARCIS (Netherlands)

    Yang, Haoran

    2018-01-01

    Worldwide, High-Speed Railway (HSR) networks have been developed intensely over the last few decades, such as Tokyo-Osaka, the first HSR corridor in Japan, the TGV in France and the ICE in Germany. HSR has also experienced exponential growth in China so that currently China’s HSR networks are the

  3. Hepatitis A vaccination coverage among adults 18-49 years traveling to a country of high or intermediate endemicity, United States.

    Science.gov (United States)

    Lu, Peng-Jun; Byrd, Kathy K; Murphy, Trudy V

    2013-05-01

    Since 1996, hepatitis A vaccine (HepA) has been recommended for adults at increased risk for infection, including travelers to high or intermediate hepatitis A endemic countries. In 2009, travel outside the United States and Canada was the most common exposure nationally reported for persons with hepatitis A virus (HAV) infection. To assess HepA vaccination coverage among adults 18-49 years traveling to a country of high or intermediate endemicity in the United States. We analyzed data from the 2010 National Health Interview Survey (NHIS) to determine self-reported HepA vaccination coverage (≥1 dose) and series completion (≥2 doses) among persons 18-49 years who traveled, since 1995, to a country of high or intermediate HAV endemicity. Multivariable logistic regression and predictive marginal analyses were conducted to identify factors independently associated with HepA vaccine receipt. In 2010, approximately 36.6% of adults 18-49 years reported traveling to high or intermediate hepatitis A endemic countries; among this group, unadjusted HepA vaccination coverage was 26.6%, compared to 12.7% among non-travelers. Travel status was an independent predictor of HepA coverage and series completion. Among travelers, HepA coverage and series completion (≥2 doses) were higher for travelers 18-25 years (prevalence ratios 2.3 and 2.8, respectively) and travelers 26-39 years (prevalence ratios 1.5 and 1.5, respectively) than for travelers 40-49 years. Other characteristics independently associated with a higher likelihood of HepA receipt among travelers included Asian race/ethnicity, male sex, never having been married, having a high school or higher education, living in the western United States, and having a greater number of physician contacts or receipt of influenza vaccination in the previous year. HepB vaccination was excluded from the model because the significant correlation between receipt of HepA and HepB vaccination could distort the model.

  4. Interaction Networks: Generating High Level Hints Based on Network Community Clustering

    Science.gov (United States)

    Eagle, Michael; Johnson, Matthew; Barnes, Tiffany

    2012-01-01

    We introduce a novel data structure, the Interaction Network, for representing interaction-data from open problem solving environment tutors. We show how using network community detecting techniques are used to identify sub-goals in problems in a logic tutor. We then use those community structures to generate high level hints between sub-goals.…

  5. Coverage dependent molecular assembly of anthraquinone on Au(111)

    Science.gov (United States)

    DeLoach, Andrew S.; Conrad, Brad R.; Einstein, T. L.; Dougherty, Daniel B.

    2017-11-01

    A scanning tunneling microscopy study of anthraquinone (AQ) on the Au(111) surface shows that the molecules self-assemble into several structures depending on the local surface coverage. At high coverages, a close-packed saturated monolayer is observed, while at low coverages, mobile surface molecules coexist with stable chiral hexamer clusters. At intermediate coverages, a disordered 2D porous network interlinking close-packed islands is observed in contrast to the giant honeycomb networks observed for the same molecule on Cu(111). This difference verifies the predicted extreme sensitivity [J. Wyrick et al., Nano Lett. 11, 2944 (2011)] of the pore network to small changes in the surface electronic structure. Quantitative analysis of the 2D pore network reveals that the areas of the vacancy islands are distributed log-normally. Log-normal distributions are typically associated with the product of random variables (multiplicative noise), and we propose that the distribution of pore sizes for AQ on Au(111) originates from random linear rate constants for molecules to either desorb from the surface or detach from the region of a nucleated pore.
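
    The log-normal claim for the vacancy-island (pore) areas is easy to test on any list of measured areas. The sketch below (Python with SciPy; the area values are synthetic placeholders, not data from this study) fits a log-normal distribution with the location fixed at zero and applies a Kolmogorov-Smirnov goodness-of-fit check.

    ```python
    # Fit a log-normal distribution to pore areas (illustrative sketch, synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    pore_areas_nm2 = rng.lognormal(mean=3.0, sigma=0.6, size=200)  # placeholder data

    # Fit with the location parameter fixed at zero (areas are strictly positive).
    shape, loc, scale = stats.lognorm.fit(pore_areas_nm2, floc=0)
    print(f"fitted sigma = {shape:.3f}, median area = {scale:.1f} nm^2")

    # Goodness of fit: KS test against the fitted distribution.
    ks_stat, p_value = stats.kstest(pore_areas_nm2, "lognorm", args=(shape, loc, scale))
    print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
    ```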

  6. High-speed and high-fidelity system and method for collecting network traffic

    Science.gov (United States)

    Weigle, Eric H [Los Alamos, NM

    2010-08-24

    A system is provided for the high-speed and high-fidelity collection of network traffic. The system can collect traffic at gigabit-per-second (Gbps) speeds, scale to terabit-per-second (Tbps) speeds, and support additional functions such as real-time network intrusion detection. The present system uses a dedicated operating system for traffic collection to maximize efficiency, scalability, and performance. A scalable infrastructure and apparatus for the present system is provided by splitting the work performed on one host onto multiple hosts. The present system simultaneously addresses the issues of scalability, performance, cost, and adaptability with respect to network monitoring, collection, and other network tasks. In addition to high-speed and high-fidelity network collection, the present system provides a flexible infrastructure to perform virtually any function at high speeds such as real-time network intrusion detection and wide-area network emulation for research purposes.

  7. Multicast Performance Analysis for High-Speed Torus Networks

    National Research Council Canada - National Science Library

    Oral, S; George, A

    2002-01-01

    ... for unicast-based and path-based multicast communication on high-speed torus networks. Software-based multicast performance results of selected algorithms on a 16-node Scalable Coherent Interface (SCI) torus are given...

  8. Conservation in gene encoding Mycobacterium tuberculosis antigen Rv2660 and a high predicted population coverage of H56 multistage vaccine in South Africa.

    Science.gov (United States)

    Perez-Martinez, Angy P; Ong, Edison; Zhang, Lixin; Marrs, Carl F; He, Yongqun; Yang, Zhenhua

    2017-11-01

    H56/AERAS-456+IC31 (H56), composed of two early secretion proteins, Ag85B and ESAT-6, and a latency associated protein, Rv2660, and the IC31 Intercell adjuvant, is a new fusion subunit vaccine candidate designed to induce immunity against both new infection and reactivation of latent tuberculosis infection. Efficacy of subunit vaccines may be affected by the diversity of vaccine antigens among clinical strains and the extent of recognition by the diverse HLA molecules in the recipient population. Although a previous study showed the conservative nature of Ag85B- and ESAT-6-encoding genes, genetic diversity of Rv2660c that encodes RV2660 is largely unknown. The population coverage of H56 as a whole yet remains to be assessed. The present study was conducted to address these important knowledge gaps. DNA sequence analysis of Rv2660c found no variation among 83 of the 84 investigated clinical strains belonging to four genetic lineages. H56 was predicted to have as high as 99.6% population coverage in the South Africa population using the Immune Epitope Database (IEDB) Population Coverage Tool. Further comparison of H56 population coverage between South African Blacks and Caucasians based on the phenotypic frequencies of binding MHC Class I and Class II supertype alleles found that all of the nine MHC-I and six of eight MHC-II human leukocyte antigen (HLA) supertype alleles analyzed were significantly differentially expressed between the two subpopulations. This finding suggests the presence of race-specific functional binding motifs of MHC-I and MHC-II HLA alleles, which, in turn, highlights the importance of including diverse populations in vaccine clinical evaluation. In conclusion, H56 vaccine is predicted to have a promising population coverage in South Africa; this study demonstrates the utility of integrating comparative genomics and bioinformatics in bridging animal and clinical studies of novel TB vaccines. Copyright © 2017 Elsevier B.V. All rights reserved.
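
    The abstract above refers to the IEDB Population Coverage Tool; the underlying idea can be conveyed with a much-simplified calculation. The sketch below (Python; not the IEDB tool, and the allele frequencies are invented for illustration) estimates the fraction of a population carrying at least one epitope-binding allele, assuming Hardy-Weinberg proportions within a locus and independence between loci.

    ```python
    # Simplified HLA population-coverage estimate (illustrative; not the IEDB tool).

    def locus_coverage(covered_allele_freqs):
        """P(an individual carries >= 1 covered allele at one locus),
        assuming Hardy-Weinberg proportions."""
        p_covered = sum(covered_allele_freqs)
        return 1.0 - (1.0 - p_covered) ** 2

    def population_coverage(per_locus_freqs):
        """Combine loci assuming independence between loci."""
        uncovered = 1.0
        for freqs in per_locus_freqs:
            uncovered *= 1.0 - locus_coverage(freqs)
        return 1.0 - uncovered

    # Hypothetical gene frequencies of epitope-binding supertype alleles per locus.
    example = [
        [0.15, 0.22, 0.10],   # e.g. covered HLA-A alleles
        [0.18, 0.12],         # e.g. covered HLA-B alleles
        [0.25, 0.20, 0.05],   # e.g. covered HLA-DRB1 alleles
    ]
    print(f"Estimated population coverage: {population_coverage(example):.1%}")
    ```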

  9. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  10. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  11. Compressive sensing of high betweenness centrality nodes in networks

    Science.gov (United States)

    Mahyar, Hamidreza; Hasheminezhad, Rouzbeh; Ghalebi K., Elahe; Nazemian, Ali; Grosu, Radu; Movaghar, Ali; Rabiee, Hamid R.

    2018-05-01

    Betweenness centrality is a prominent centrality measure expressing importance of a node within a network, in terms of the fraction of shortest paths passing through that node. Nodes with high betweenness centrality have significant impacts on the spread of influence and ideas in social networks, the user activity in mobile phone networks, the contagion process in biological networks, and the bottlenecks in communication networks. Thus, identifying the k highest betweenness centrality nodes in networks will be of great interest in many applications. In this paper, we introduce CS-HiBet, a new method to efficiently detect top-k betweenness centrality nodes in networks, using compressive sensing. CS-HiBet can perform as a distributed algorithm by using only the local information at each node. Hence, it is applicable to large real-world and unknown networks in which the global approaches are usually unrealizable. The performance of the proposed method is evaluated by extensive simulations on several synthetic and real-world networks. The experimental results demonstrate that CS-HiBet outperforms the best existing methods with notable improvements.
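
    The compressive-sensing machinery of CS-HiBet is beyond a short snippet, but the quantity it approximates, the top-k nodes by betweenness centrality, can be computed exactly on moderate graphs and is the natural baseline against which such a method is evaluated. A minimal reference computation with NetworkX (a baseline sketch, not the CS-HiBet algorithm; the synthetic graph and k value are arbitrary) is shown below.

    ```python
    # Exact top-k betweenness centrality baseline (NetworkX; not CS-HiBet itself).
    import networkx as nx

    G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)   # synthetic scale-free network

    k = 10
    bc = nx.betweenness_centrality(G)                    # exact values via Brandes' algorithm
    top_k = sorted(bc, key=bc.get, reverse=True)[:k]

    for node in top_k:
        print(f"node {node:4d}  betweenness = {bc[node]:.4f}")
    ```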

  12. Supplementing High-Density SNP Microarrays for Additional Coverage of Disease-Related Genes: Addiction as a Paradigm

    Energy Technology Data Exchange (ETDEWEB)

    Saccone, Scott F [Washington University, St. Louis; Chesler, Elissa J [ORNL; Bierut, Laura J [Washington University, St. Louis; Kalivas, Peter J [Medical College of South Carolina, Charleston; Lerman, Caryn [University of Pennsylvania; Saccone, Nancy L [Washington University, St. Louis; Uhl, George R [Johns Hopkins University; Li, Chuan-Yun [Peking University; Philip, Vivek M [ORNL; Edenberg, Howard [Indiana University; Sherry, Steven [National Center for Biotechnology Information; Feolo, Michael [National Center for Biotechnology Information; Moyzis, Robert K [Johns Hopkins University; Rutter, Joni L [National Institute of Drug Abuse

    2009-01-01

    Commercial SNP microarrays now provide comprehensive and affordable coverage of the human genome. However, some diseases have biologically relevant genomic regions that may require additional coverage. Addiction, for example, is thought to be influenced by complex interactions among many relevant genes and pathways. We have assembled a list of 486 biologically relevant genes nominated by a panel of experts on addiction. We then added 424 genes that showed evidence of association with addiction phenotypes through mouse QTL mappings and gene co-expression analysis. We demonstrate that there are a substantial number of SNPs in these genes that are not well represented by commercial SNP platforms. We address this problem by introducing a publicly available SNP database for addiction. The database is annotated using numeric prioritization scores indicating the extent of biological relevance. The scores incorporate a number of factors such as SNP/gene functional properties (including synonymy and promoter regions), data from mouse systems genetics and measures of human/mouse evolutionary conservation. We then used HapMap genotyping data to determine if a SNP is tagged by a commercial microarray through linkage disequilibrium. This combination of biological prioritization scores and LD tagging annotation will enable addiction researchers to supplement commercial SNP microarrays to ensure comprehensive coverage of biologically relevant regions.

  13. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  14. Sampling from complex networks with high community structures.

    Science.gov (United States)

    Salehi, Mostafa; Rabiee, Hamid R; Rajabi, Arezo

    2012-06-01

    In this paper, we propose a novel link-tracing sampling algorithm, based on concepts from PageRank vectors, to sample from networks with high community structures. Our method has two phases: (1) sampling the closest nodes to the initial nodes by approximating personalized PageRank vectors, and (2) jumping to a new community by using PageRank vectors and unknown neighbors. Empirical studies on several synthetic and real-world networks show that the proposed method improves the performance of network sampling compared to the popular link-based sampling methods in terms of accuracy and visited communities.
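
    The central ingredient of the sampler described above is a personalized PageRank vector seeded at the already-sampled nodes. The sketch below (Python with NetworkX; the greedy expansion rule, parameter values and the toy graph are assumptions for illustration, not the authors' exact procedure) grows a sample around a seed by repeatedly adding the unsampled node with the highest personalized PageRank mass.

    ```python
    # Personalized-PageRank-guided sampling sketch (illustrative, not the paper's algorithm).
    import networkx as nx

    def ppr_sample(G, seed, sample_size, alpha=0.85):
        """Greedily grow a sample around `seed` using personalized PageRank scores."""
        sample = {seed}
        while len(sample) < sample_size:
            personalization = {n: (1.0 if n in sample else 0.0) for n in G}
            ppr = nx.pagerank(G, alpha=alpha, personalization=personalization)
            candidates = {n: s for n, s in ppr.items() if n not in sample}
            if not candidates:
                break
            sample.add(max(candidates, key=candidates.get))
        return sample

    # Toy graph with strong community structure: five cliques of ten nodes each.
    G = nx.connected_caveman_graph(5, 10)
    print(sorted(ppr_sample(G, seed=0, sample_size=15)))
    ```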

  15. High speed fiber optics local area networks: Design and implementation

    Science.gov (United States)

    Tobagi, Fouad A.

    1988-01-01

    The design of high speed local area networks (HSLAN) for communication among distributed devices requires solving problems in three areas: (1) the network medium and its topology; (2) the medium access control; and (3) the network interface. Considerable progress has been made in all areas. Accomplishments are divided into two groups according to their theoretical or experimental nature. A brief summary is given in Section 2, including references to papers which appeared in the literature, as well as to Ph.D. dissertations and technical reports published at Stanford University.

  16. CERNET - A high-speed packet-switching network

    International Nuclear Information System (INIS)

    Gerard, J.M.

    1981-01-01

    A general mesh-structured high-speed computer network has been designed and built. This network provides communication between any pair of connected user computers over distances of up to 6 km and at line speeds of 1 to 5 Mbit/second. The network is composed of a communication subnet providing a datagram service, complemented by tasks in the connected machines to implement an end-to-end logical link protocol. Details are given of the overall structure as well as the specific modules of which the system is composed. (orig.)

  17. High-precision multi-node clock network distribution.

    Science.gov (United States)

    Chen, Xing; Cui, Yifan; Lu, Xing; Ci, Cheng; Zhang, Xuesong; Liu, Bo; Wu, Hong; Tang, Tingsong; Shi, Kebin; Zhang, Zhigang

    2017-10-01

    A high-precision multi-node clock network for multiple users was built on top of precise frequency transfer and time synchronization over 120 km of fiber. The network topology adopts a simple star-shaped structure. The clock signal of a hydrogen maser (synchronized with UTC) was recovered from a 120 km telecommunication fiber link and then distributed to 4 sub-stations. The fractional frequency instability of all substations is at the 10⁻¹⁵ level at an averaging time of 1 s, and the clock offset instability is at the sub-ps level (root-mean-square).

  18. High-resolution method for evolving complex interface networks

    Science.gov (United States)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set method.

  19. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  20. Introduction to neural networks in high energy physics

    International Nuclear Information System (INIS)

    Therhaag, J.

    2013-01-01

    Artificial neural networks are a well established tool in high energy physics, playing an important role in both online and offline data analysis. Nevertheless they are often perceived as black boxes which perform obscure operations beyond the control of the user, resulting in a skepticism against any results that may be obtained using them. The situation is not helped by common explanations which try to draw analogies between artificial neural networks and the human brain, for the brain is an even more complex black box itself. In this introductory text, I will take a problem-oriented approach to neural network techniques, showing how the fundamental concepts arise naturally from the demand to solve classification tasks which are frequently encountered in high energy physics. Particular attention is devoted to the question how probability theory can be used to control the complexity of neural networks. (authors)

  1. Population-based CD4 counts in a rural area in South Africa with high HIV prevalence and high antiretroviral treatment coverage.

    Directory of Open Access Journals (Sweden)

    Abraham Malaza

    Full Text Available Little is known about the variability of CD4 counts in the general population of sub-Saharan African countries affected by the HIV epidemic. We investigated factors associated with CD4 counts in a rural area in South Africa with high HIV prevalence and high antiretroviral treatment (ART) coverage. CD4 counts, health status, body mass index (BMI), demographic characteristics and HIV status were assessed in 4990 adult resident participants of a demographic surveillance in rural KwaZulu-Natal in South Africa; antiretroviral treatment duration was obtained from a linked clinical database. Multivariable regression analysis, overall and stratified by HIV status, was performed with CD4 count levels as outcome. Median CD4 counts were significantly higher in women than in men overall (714 vs. 630 cells/µl, p<0.0001), both in HIV-uninfected (833 vs. 683 cells/µl, p<0.0001) and HIV-infected adults (384.5 vs. 333 cells/µl, p<0.0001). In multivariable regression analysis, women had 19.4% (95% confidence interval (CI) 16.1-22.9) higher CD4 counts than men, controlling for age, HIV status, urban/rural residence, household wealth, education, BMI, self-reported tuberculosis, high blood pressure, other chronic illnesses and sample processing delay. At ART initiation, HIV-infected adults had 21.7% (95% CI 14.6-28.2) lower CD4 counts than treatment-naive individuals; CD4 counts were estimated to increase by 9.2% (95% CI 6.2-12.4) per year of treatment. CD4 counts are primarily determined by sex in HIV-uninfected adults, and by sex, age and duration of antiretroviral treatment in HIV-infected adults. Lower CD4 counts at ART initiation in men could be a consequence of lower CD4 cell counts before HIV acquisition.
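
    Because the effects above are reported as percentage differences in CD4 count, the underlying model is most naturally a linear regression on log-transformed counts. The sketch below (Python with pandas and statsmodels; the synthetic data frame, column names and reduced covariate set are assumptions standing in for the surveillance data) shows the general form of such an analysis and how coefficients map back to percentage differences.

    ```python
    # Regression of log CD4 counts on sex, age and HIV/ART variables (illustrative only).
    # The data frame is synthetic; column names and covariates are assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "cd4": rng.lognormal(mean=6.5, sigma=0.4, size=n),
        "female": rng.integers(0, 2, size=n),
        "age": rng.integers(18, 80, size=n),
        "hiv_positive": rng.integers(0, 2, size=n),
        "art_years": rng.uniform(0, 8, size=n),
    })

    model = smf.ols("np.log(cd4) ~ female + age + hiv_positive + art_years", data=df).fit()
    print(model.summary().tables[1])

    # A coefficient b on the log scale corresponds to a 100 * (exp(b) - 1) percent
    # difference in CD4 count, which is how the abstract reports its effects.
    ```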

  2. A high-coverage nanoparticle monolayer for the fabrication of a subwavelength structure on InP substrates.

    Science.gov (United States)

    Kim, Dae-Seon; Park, Min-Su; Jang, Jae-Hyung

    2011-08-01

    Subwavelength structures (SWSs) were fabricated on the Indium Phosphide (InP) substrate by utilizing the confined convective self-assembly (CCSA) method followed by reactive ion etching (RIE). The surface condition of the InP substrate was changed by depositing a 30-nm-thick SiO2 layer and subsequently treating the surface with O2 plasma to achieve better surface coverage. The surface coverage of the nanoparticle monolayer reached 90% on the O2 plasma-treated SiO2/InP substrate, the highest among the three kinds of starting substrates examined (bare InP, SiO2/InP, and O2 plasma-treated SiO2/InP). A nanoparticle monolayer consisting of polystyrene spheres with diameter of 300 nm was used as an etch mask for transferring a two-dimensional periodic pattern onto the InP substrate. The fabricated conical SWS with an aspect ratio of 1.25 on the O2 plasma-treated SiO2/InP substrate exhibited the lowest reflectance. The average reflectance of the conical SWS was 5.84% in a spectral range between 200 and 900 nm under the normal incident angle.

  3. Technologies for highly miniaturized autonomous sensor networks

    NARCIS (Netherlands)

    Baert, K.; Gyselinckx, B.; Torfs, T.; Leonov, V.; Yazicioglu, F.; Brebels, S.; Donnay, S.; Vanfleteren, J.; Beyne, E.; Hoof, C. van

    2006-01-01

    Recent results of the autonomous sensor research program HUMAN++ will be summarized in this paper. The research program aims to achieve highly miniaturized and (nearly) autonomous sensor systems that assist our health and comfort. Although the application examples are dedicated to human

  4. Where do the rural poor deliver when high coverage of health facility delivery is achieved? Findings from a community and hospital survey in Tanzania.

    Directory of Open Access Journals (Sweden)

    Manuela Straneo

    Full Text Available As part of maternal mortality reducing strategies, coverage of delivery care among the sub-Saharan African rural poor will improve, with a range of facilities providing services. Whether high coverage will benefit all socio-economic groups is unknown. Iringa rural District, Southern Tanzania, with high facility delivery coverage, offers a paradigm to address this question. Delivery services are available in first-line facilities (dispensaries, health centres) and one hospital. We assessed whether all socio-economic groups access the only comprehensive emergency obstetric care facility equally, and surveyed existing delivery services. District population characteristics were obtained from a household community survey (n = 463). A hospital survey collected data on women who delivered in this facility (n = 1072). Principal component analysis on household assets was used to assess socio-economic status. Hospital population socio-demographic characteristics were compared to the District population using multivariable logistic regression. Deliveries' distribution in District facilities and staffing were analysed using routine data. Women from the hospital compared to the District population were more likely to be wealthier. The adjusted odds ratio of hospital delivery increased progressively across socio-economic groups, from 1.73 for the poorer (p = 0.0031) to 4.53 (p<0.0001) for the richest. Remarkable dispersion of deliveries and poor staffing were found. In 2012, 5505/7645 (72%) institutional deliveries took place in 68 first-line facilities, the remaining in the hospital. 56/68 (67.6%) first-line facilities reported ≤100 deliveries/year, attending 33% of deliveries. Insufficient numbers of skilled birth attendants were found in 42.9% of facilities. Poorer women remain disadvantaged in this high-coverage setting, as they access lower level facilities and are under-represented where life-saving transfusions and caesarean sections are available. Tackling the challenges
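
    The socio-economic gradient reported above rests on two standard steps: a principal-component wealth index built from household assets, and a multivariable logistic regression for hospital delivery. A compact sketch of both steps follows (Python with scikit-learn, pandas and statsmodels; the asset matrix, covariates and variable names are synthetic placeholders, not the survey data).

    ```python
    # Wealth index via PCA on household assets, then logistic regression for
    # hospital delivery. Illustrative sketch only; all data below are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    n = 800
    assets = rng.integers(0, 2, size=(n, 6))        # binary household asset indicators

    # First principal component of the standardized asset matrix = wealth score.
    score = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(assets))
    wealth_quintile = pd.qcut(score.ravel(), 5, labels=False)   # 0 = poorest

    df = pd.DataFrame({
        "hospital_delivery": rng.integers(0, 2, size=n),
        "wealth_quintile": wealth_quintile,
        "age": rng.integers(15, 49, size=n),
        "education_years": rng.integers(0, 14, size=n),
    })

    model = smf.logit(
        "hospital_delivery ~ C(wealth_quintile) + age + education_years", data=df
    ).fit(disp=0)
    print(np.exp(model.params))   # adjusted odds ratios relative to the poorest quintile
    ```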

  5. Brain Network Analysis from High-Resolution EEG Signals

    Science.gov (United States)

    de Vico Fallani, Fabrizio; Babiloni, Fabio

    Over the last decade, there has been a growing interest in the detection of the functional connectivity in the brain from different neuroelectromagnetic and hemodynamic signals recorded by several neuro-imaging devices such as the functional Magnetic Resonance Imaging (fMRI) scanner, electroencephalography (EEG) and magnetoencephalography (MEG) apparatus. Many methods have been proposed and discussed in the literature with the aim of estimating the functional relationships among different cerebral structures. However, the necessity of an objective comprehension of the network composed by the functional links of different brain regions is assuming an essential role in the Neuroscience. Consequently, there is a wide interest in the development and validation of mathematical tools that are appropriate to spot significant features that could describe concisely the structure of the estimated cerebral networks. The extraction of salient characteristics from brain connectivity patterns is an open challenging topic, since often the estimated cerebral networks have a relative large size and complex structure. Recently, it was realized that the functional connectivity networks estimated from actual brain-imaging technologies (MEG, fMRI and EEG) can be analyzed by means of the graph theory. Since a graph is a mathematical representation of a network, which is essentially reduced to nodes and connections between them, the use of a theoretical graph approach seems relevant and useful as firstly demonstrated on a set of anatomical brain networks. In those studies, the authors have employed two characteristic measures, the average shortest path L and the clustering index C, to extract respectively the global and local properties of the network structure. They have found that anatomical brain networks exhibit many local connections (i.e. a high C) and few random long distance connections (i.e. a low L). These values identify a particular model that interpolate between a regular
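
    The two graph measures named in this record, the clustering index C and the average shortest path length L, are single calls once a connectivity graph is available. The sketch below (Python with NetworkX; the thresholded random matrix merely stands in for an estimated functional connectivity pattern) computes both and adds a degree-matched random reference, the usual small-world comparison.

    ```python
    # Clustering coefficient C and average shortest path length L of a
    # connectivity graph (illustrative; the adjacency matrix here is random).
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)
    conn = rng.random((30, 30))
    conn = (conn + conn.T) / 2                     # symmetrize the "connectivity" matrix
    adjacency = (conn > 0.75).astype(int)          # threshold into a binary graph
    np.fill_diagonal(adjacency, 0)

    G = nx.from_numpy_array(adjacency)
    if not nx.is_connected(G):                     # L is defined on connected graphs
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)

    # Degree-preserving random reference for a small-world comparison.
    R = nx.expected_degree_graph([d for _, d in G.degree()], selfloops=False)
    print(f"C = {C:.3f}, L = {L:.3f}, C_random = {nx.average_clustering(R):.3f}")
    ```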

  6. Neural networks and cellular automata in experimental high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B

    1988-06-01

    Within the past few years, two novel computing techniques, cellular automata and neural networks, have shown considerable promise in the solution of problems of a very high degree of complexity, such as turbulent fluid flow, image processing, and pattern recognition. Many of the problems faced in experimental high energy physics are also of this nature. Track reconstruction in wire chambers and cluster finding in cellular calorimeters, for instance, involve pattern recognition and high combinatorial complexity since many combinations of hits or cells must be considered in order to arrive at the final tracks or clusters. Here we examine in what way connective network methods can be applied to some of the problems of experimental high energy physics. It is found that such problems as track and cluster finding adapt naturally to these approaches. When large scale hard-wired connective networks become available, it will be possible to realize solutions to such problems in a fraction of the time required by traditional methods. For certain types of problems, faster solutions are already possible using model networks implemented on vector or other massively parallel machines. It should also be possible, using existing technology, to build simplified networks that will allow detailed reconstructed event information to be used in fast trigger decisions.

  7. Neural networks and cellular automata in experimental high energy physics

    International Nuclear Information System (INIS)

    Denby, B.

    1987-11-01

    Within the past few years, two novel computing techniques, cellular automata and neural networks, have shown considerable promise in the solution of problems of a very high degree of complexity, such as turbulent fluid flow, image processing, and pattern recognition. Many of the problems faced in experimental high energy physics are also of this nature. Track reconstruction in wire chambers and cluster finding in cellular calorimeters, for instance, involve pattern recognition and high combinatorial complexity since many combinations of hits or cells must be considered in order to arrive at the final tracks or clusters. Here we examine in what way connective network methods can be applied to some of the problems of experimental high energy physics. It is found that such problems as track and cluster finding adapt naturally to these approaches. When large scale hardwired connective networks become available, it will be possible to realize solutions to such problems in a fraction of the time required by traditional methods. For certain types of problems, faster solutions are already possible using model networks implemented on vector or other massively parallel machines. It should also be possible, using existing technology, to build simplified networks that will allow detailed reconstructed event information to be used in fast trigger decisions.

  8. Neural networks and cellular automata in experimental high energy physics

    International Nuclear Information System (INIS)

    Denby, B.

    1988-01-01

    Within the past few years, two novel computing techniques, cellular automata and neural networks, have shown considerable promise in the solution of problems of a very high degree of complexity, such as turbulent fluid flow, image processing, and pattern recognition. Many of the problems faced in experimental high energy physics are also of this nature. Track reconstruction in wire chambers and cluster finding in cellular calorimeters, for instance, involve pattern recognition and high combinatorial complexity since many combinations of hits or cells must be considered in order to arrive at the final tracks or clusters. Here we examine in what way connective network methods can be applied to some of the problems of experimental high energy physics. It is found that such problems as track and cluster finding adapt naturally to these approaches. When large scale hard-wired connective networks become available, it will be possible to realize solutions to such problems in a fraction of the time required by traditional methods. For certain types of problems, faster solutions are already possible using model networks implemented on vector or other massively parallel machines. It should also be possible, using existing technology, to build simplified networks that will allow detailed reconstructed event information to be used in fast trigger decisions. (orig.)

  9. Achieving high data throughput in research networks

    International Nuclear Information System (INIS)

    Matthews, W.; Cottrell, L.

    2001-01-01

    After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3. So within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to infrastructure are essential, and detailed understanding of performance issues and the requirements for reliable high throughput transfers is critical. In this talk results from active and passive monitoring and direct measurements of throughput will be reviewed. Methods for achieving the ambitious requirements will be discussed.

  10. Achieving High Data Throughput in Research Networks

    International Nuclear Information System (INIS)

    Matthews, W

    2004-01-01

    After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3. So within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to infrastructure are essential, and detailed understanding of performance issues and the requirements for reliable high throughput transfers is critical. In this talk results from active and passive monitoring and direct measurements of throughput will be reviewed. Methods for achieving the ambitious requirements will be discussed.

  11. High coverage of the complete mitochondrial genome of the rare Gray's beaked whale (Mesoplodon grayi) using Illumina next generation sequencing.

    Science.gov (United States)

    Thompson, Kirsten F; Patel, Selina; Williams, Liam; Tsai, Peter; Constantine, Rochelle; Baker, C Scott; Millar, Craig D

    2016-01-01

    Using an Illumina platform, we shot-gun sequenced the complete mitochondrial genome of Gray's beaked whale (Mesoplodon grayi) to an average coverage of 152X. We performed a de novo assembly using SOAPdenovo2 and determined the total mitogenome length to be 16,347 bp. The nucleotide composition was asymmetric (33.3% A, 24.6% C, 12.6% G, 29.5% T) with an overall GC content of 37.2%. The gene organization was similar to that of other cetaceans with 13 protein-coding genes, 2 rRNAs (12S and 16S), 22 predicted tRNAs and 1 control region or D-loop. We found no evidence of heteroplasmy or nuclear copies of mitochondrial DNA in this individual. Beaked whales within the genus Mesoplodon are rarely seen at sea and their basic biology is poorly understood. These data will contribute to resolving the phylogeography and population ecology of this speciose group.
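
    The composition summary given above (base percentages, 37.2% GC, 16,347 bp) is straightforward to reproduce from the assembled sequence. A minimal sketch follows (Python; the FASTA file name is a placeholder, not a file distributed with the record).

    ```python
    # Nucleotide composition and GC content of an assembled mitogenome (sketch).
    from collections import Counter

    def composition(fasta_path):
        seq = []
        with open(fasta_path) as fh:
            for line in fh:
                if not line.startswith(">"):
                    seq.append(line.strip().upper())
        counts = Counter("".join(seq))
        total = sum(counts[b] for b in "ACGT")
        for base in "ACGT":
            print(f"{base}: {100 * counts[base] / total:.1f}%")
        gc = 100 * (counts["G"] + counts["C"]) / total
        print(f"Length: {total} bp, GC content: {gc:.1f}%")

    # composition("mesoplodon_grayi_mitogenome.fasta")   # placeholder file name
    ```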

  12. High-Throughput Network Communication with NetIO

    CERN Document Server

    Schumacher, J\\"orn; The ATLAS collaboration; Vandelli, Wainer

    2016-01-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well specified number of systems. Unfortunately traditional network communication APIs for HPC clusters like MPI or PGAS target exclusively the HPC community and are not suited well for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs (and this has been done), but it requires a non negligible effort and expert knowledge. On the other hand, message services like 0MQ have gained popularity in the HEP community. Such APIs allow to build distributed applications with a high-level approach and provide good performance. Unfortunately their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on to...

  13. MmWave Vehicle-to-Infrastructure Communication :Analysis of Urban Microcellular Networks

    Science.gov (United States)

    2017-05-01

    Vehicle-to-infrastructure (V2I) communication may provide high data rates to vehicles via millimeterwave (mmWave) microcellular networks. This report uses stochastic geometry to analyze the coverage of urban mmWave microcellular networks. Prior work ...
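
    Stochastic-geometry coverage results of the kind summarized in this record are commonly cross-checked with Monte Carlo simulation of a Poisson point process of base stations. The sketch below (Python with NumPy; every parameter value is an arbitrary assumption, and mmWave-specific blockage and beamforming effects are deliberately ignored) estimates the SINR coverage probability for a typical user served by its nearest base station.

    ```python
    # Monte Carlo SINR coverage estimate for a Poisson field of base stations (toy sketch).
    # Parameters are arbitrary assumptions; mmWave blockage/beamforming are not modelled.
    import numpy as np

    rng = np.random.default_rng(0)
    lam = 50e-6           # base-station density per m^2 (50 per km^2), assumed
    radius = 2000.0       # simulation disc radius in metres
    alpha = 3.0           # path-loss exponent, assumed
    sinr_threshold = 1.0  # 0 dB
    noise = 1e-9          # normalized noise power, assumed
    trials = 2000

    covered = 0
    for _ in range(trials):
        n_bs = rng.poisson(lam * np.pi * radius**2)
        if n_bs == 0:
            continue
        r = radius * np.sqrt(rng.random(n_bs))    # uniform point distances in the disc
        fading = rng.exponential(1.0, n_bs)       # Rayleigh fading (power)
        rx_power = fading * r ** (-alpha)
        signal = rx_power[np.argmin(r)]           # nearest-base-station association
        interference = rx_power.sum() - signal
        if signal / (interference + noise) > sinr_threshold:
            covered += 1

    print(f"Estimated coverage probability: {covered / trials:.2f}")
    ```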

  14. Design and implementation of interface units for high speed fiber optics local area networks and broadband integrated services digital networks

    Science.gov (United States)

    Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph

    1990-01-01

    The design and implementation of interface units for high speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. In recent years, a number of network adapters designed to support high speed communications have emerged. The approach taken to the design of a high speed network interface unit was to implement packet processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.

  15. Management of synchronized network activity by highly active neurons

    International Nuclear Information System (INIS)

    Shein, Mark; Raichman, Nadav; Ben-Jacob, Eshel; Volman, Vladislav; Hanein, Yael

    2008-01-01

    Increasing evidence supports the idea that spontaneous brain activity may have an important functional role. Cultured neuronal networks provide a suitable model system to search for the mechanisms by which neuronal spontaneous activity is maintained and regulated. This activity is marked by synchronized bursting events (SBEs)—short time windows (hundreds of milliseconds) of rapid neuronal firing separated by long quiescent periods (seconds). However, there exists a special subset of rapidly firing neurons whose activity also persists between SBEs. It has been proposed that these highly active (HA) neurons play an important role in the management (i.e. establishment, maintenance and regulation) of the synchronized network activity. Here, we studied the dynamical properties and the functional role of HA neurons in homogeneous and engineered networks, during early network development, upon recovery from chemical inhibition and in response to electrical stimulations. We found that their sequences of inter-spike intervals (ISI) exhibit long time correlations and a unimodal distribution. During the network's development and under intense inhibition, the observed activity follows a transition period during which mostly HA neurons are active. Studying networks with engineered geometry, we found that HA neurons are precursors (the first to fire) of the spontaneous SBEs and are more responsive to electrical stimulations

  16. Evaluation of High-Performance Network Technologies for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Zagar, K.; Kolaric, P.; Sabjan, R.; Zagar, A. [Cosylab d.d., Ljubljana (Slovenia); Hunt, S. [Alceli Hunt Beratung, Meisterschwanden (Switzerland)

    2009-07-01

    To facilitate fast feedback control of plasma, ITER's Control, Data Access and Communication system (CODAC) will need to provide a mechanism for hard real-time communication between its distributed nodes. In particular, four types of high-performance communication have been identified. Synchronous Databus Network (SDN) is to provide an ability to distribute parameters of plasma (estimated to about 5000 double-valued signals) across the system to allow for 1 ms control cycles. Event Distribution Network (EDN) and Time Communication Network (TCN) are to allow synchronization of node I/O operations to 10 ns. Finally, the Audio Video Network (AVN) is to provide sufficient bandwidth for streaming of surveillance and diagnostics video at a high resolution (1024*1024) and frame rate (30 Hz). In this article, we present some combinations of common off-the-shelf (COTS) technologies that allow the above requirements to be met. Also, we present the performances achieved in a practical (though small scale) technology demonstrator, which involved a real-time Linux operating system running on National Instruments' PXI platform, UDP communication implemented directly atop the Ethernet network adapter, CISCO switches, Micro Research Finland's timing and event solution, and GigE audio-video streaming. This document is composed of an abstract followed by the presentation transparencies. (authors)

  17. Evaluation of high-performance network technologies for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Zagar, K., E-mail: klemen.zagar@cosylab.co [Cosylab d.d., 1000 Ljubljana (Slovenia); Hunt, S. [Alceli Hunt Beratung, 5616 Meisterschwanden (Switzerland); Kolaric, P.; Sabjan, R.; Zagar, A.; Dedic, J. [Cosylab d.d., 1000 Ljubljana (Slovenia)

    2010-07-15

    For the fast feedback plasma controllers, ITER's Control, Data Access and Communication system (CODAC) will need to provide a mechanism for hard real-time communication between its distributed nodes. In particular, the ITER CODAC team identified four types of high-performance communication applications. Synchronous Databus Network (SDN) is to provide an ability to distribute parameters of plasma (estimated to about 5000 double-valued signals) across the system to allow for 1 ms control cycles. Event Distribution Network (EDN) and Time Communication Network (TCN) are to allow synchronization of node I/O operations to 10 ns. Finally, the Audio-Video Network (AVN) is to provide sufficient bandwidth for streaming of surveillance and diagnostics video at a high resolution (1024 x 1024) and frame rate (30 Hz). In this article, we present some combinations of common-off-the-shelf (COTS) technologies that allow the above requirements to be met. Also, we present the performances achieved in a practical (though small scale) technology demonstrator, which involved a real-time Linux operating system running on National Instruments' PXI platform, UDP communication implemented directly atop the Ethernet network adapter, CISCO switches, Micro Research Finland's timing and event solution, and GigE audio-video streaming.

  18. Evaluation of high-performance network technologies for ITER

    International Nuclear Information System (INIS)

    Zagar, K.; Hunt, S.; Kolaric, P.; Sabjan, R.; Zagar, A.; Dedic, J.

    2010-01-01

For the fast feedback plasma controllers, ITER's Control, Data Access and Communication system (CODAC) will need to provide a mechanism for hard real-time communication between its distributed nodes. In particular, the ITER CODAC team identified four types of high-performance communication applications. Synchronous Databus Network (SDN) is to provide an ability to distribute parameters of plasma (estimated to about 5000 double-valued signals) across the system to allow for 1 ms control cycles. Event Distribution Network (EDN) and Time Communication Network (TCN) are to allow synchronization of node I/O operations to 10 ns. Finally, the Audio-Video Network (AVN) is to provide sufficient bandwidth for streaming of surveillance and diagnostics video at a high resolution (1024 x 1024) and frame rate (30 Hz). In this article, we present some combinations of common-off-the-shelf (COTS) technologies that allow the above requirements to be met. Also, we present the performances achieved in a practical (though small scale) technology demonstrator, which involved a real-time Linux operating system running on National Instruments' PXI platform, UDP communication implemented directly atop the Ethernet network adapter, CISCO switches, Micro Research Finland's timing and event solution, and GigE audio-video streaming.

  19. The Influence of Social Networks on High School Students' Performance

    Science.gov (United States)

    Abu-Shanab, Emad; Al-Tarawneh, Heyam

    2015-01-01

Social networks are becoming an integral part of people's lives. Students are spending much time on social media and are considered the largest category that uses such applications. This study tries to explore the influence of social media use, and especially Facebook, on high school students' performance. The study used the GPA of students in four…

  20. Temporary over voltages in the high voltage networks

    International Nuclear Information System (INIS)

    Vukelja, Petar; Naumov, Radomir; Mrvic, Jovan; Minovski, Risto

    2001-01-01

The paper treats the temporary overvoltages that may arise in the high voltage networks as a result of: ground faults, loss of load, loss of one or two phases and switching operations. Based on the analysis, the measures for their limitation are proposed. (Original)

  1. A high-speed interconnect network using ternary logic

    DEFF Research Database (Denmark)

    Madsen, Jens Kargaard; Long, S. I.

    1995-01-01

    This paper describes the design and implementation of a high-speed interconnect network (ICN) for a multiprocessor system using ternary logic. By using ternary logic and a fast point-to-point communication technique called STARI (Self-Timed At Receiver's Input), the communication between...

  2. Seroprevalence of mumps in The Netherlands: dynamics over a decade with high vaccination coverage and recent outbreaks.

    Directory of Open Access Journals (Sweden)

    Gaby Smits

Full Text Available Here we present mumps virus specific antibody levels in a large cross-sectional population-based serosurveillance study performed in the Netherlands in 2006/2007 (n = 7900). Results were compared with a similar study (1995/1996) and discussed in the light of recent outbreaks. Mumps antibodies were tested using a fluorescent bead-based multiplex immunoassay. Overall seroprevalence was 90.9% with higher levels in the naturally infected cohorts compared with vaccinated cohorts. Mumps virus vaccinations at 14 months and 9 years resulted in an increased seroprevalence and antibody concentration. The second vaccination seemed to be important in acquiring stable mumps antibody levels in the long term. In conclusion, the Dutch population is well protected against mumps virus infection. However, we identified specific age- and population groups at increased risk of mumps infection. Indeed, in 2007/2008 an outbreak occurred in the low vaccination coverage groups, emphasizing the predictive value of serosurveillance studies.

  3. A highly crystalline single Au wire network as a high temperature transparent heater

    Science.gov (United States)

    Rao, K. D. M.; Kulkarni, Giridhar U.

    2014-05-01

A transparent conductor which can generate high temperatures finds important applications in optoelectronics. In this article, a wire network made of Au on quartz is shown to serve as an effective high temperature transparent heater. The heater has been fabricated by depositing Au onto a cracked sacrificial template. The highly interconnected Au wire network thus formed exhibited a transmittance of ~87% in a wide spectral range with a sheet resistance of 5.4 Ω □-1. By passing current through the network, it could be joule heated to ~600 °C within a few seconds. The extraordinary thermal performance and stability owe much to the seamless junctions present in the wire network. Furthermore, the wire network gets self-annealed through joule heating as seen from its increased crystallinity. Interestingly, both transmittance and sheet resistance improved following annealing to 92% and 3.2 Ω □-1, respectively. Electronic supplementary information (ESI) available: Optical micrographs, EDAX, XRD, SEM and TEM images of Au metal wires. See DOI: 10.1039/c4nr00869c

  4. UCLA High Speed, High Volume Laboratory Network for Infectious Diseases

    Science.gov (United States)

    2008-04-01

Only fragments of this record's abstract are recoverable. They cite work on the intercontinental circulation of human influenza A(H1N2) reassortant viruses during the 2001–2002 influenza season (Journal of Infectious Diseases 2002;186:1490–1493; Smith CB, Mungall BA, Lindstrom SE, Hall HE, Subbarao K, et al.) and note that the network processed numerous samples containing highly pathogenic avian influenza and other select agents (dual-use), with FY07, FY08 and FY09 funding available.

  5. Women's Health Insurance Coverage

    Science.gov (United States)

Women's Health Policy page on women's health insurance coverage, published Oct 31, 2017. Recoverable content notes the coverage challenges that many women continue to face and lists sources of health insurance coverage, beginning with employer-sponsored insurance (approximately 57.9 million).

  6. Cisco Networking Essentials

    CERN Document Server

    McMillan, Troy

    2011-01-01

An engaging approach for anyone beginning a career in networking. As the world leader of networking products and services, Cisco products are constantly growing in demand. Yet, few books are aimed at those who are beginning a career in IT--until now. Cisco Networking Essentials provides a solid foundation on the Cisco networking products and services with thorough coverage of fundamental networking concepts. Author Troy McMillan applies his years of classroom instruction to effectively present high-level topics in easy-to-understand terms for beginners. With this indispensable full-color resource…

  7. Supply Networks and Value Creation in High Innovation and Strong Network Externalities Industry

    Directory of Open Access Journals (Sweden)

    Fernando Claro Tomaselli

    2013-12-01

Full Text Available The rapidly developing product and service markets and developments in information technologies have accelerated growth in the outsourcing of peripheral activities as well as critical business functions, enhancing the importance of supply chain network management. This paper analyzes the dynamics of supply chain management and the creation of value in an industry with strong network effects and the constant introduction of disruptive technologies: the videogame industry. This industry evolves at high velocity, with a lifecycle of five to six years for each console generation, in which new companies and technologies appear and disappear.

  8. Assessment of network perturbation amplitudes by applying high-throughput data to causal biological networks

    Directory of Open Access Journals (Sweden)

    Martin Florian

    2012-05-01

Full Text Available Background: High-throughput measurement technologies produce data sets that have the potential to elucidate the biological impact of disease, drug treatment, and environmental agents on humans. The scientific community faces an ongoing challenge in the analysis of these rich data sources to more accurately characterize biological processes that have been perturbed at the mechanistic level. Here, a new approach is built on previous methodologies in which high-throughput data was interpreted using prior biological knowledge of cause and effect relationships. These relationships are structured into network models that describe specific biological processes, such as inflammatory signaling or cell cycle progression. This enables quantitative assessment of network perturbation in response to a given stimulus. Results: Four complementary methods were devised to quantify treatment-induced activity changes in processes described by network models. In addition, companion statistics were developed to qualify significance and specificity of the results. This approach is called Network Perturbation Amplitude (NPA) scoring because the amplitudes of treatment-induced perturbations are computed for biological network models. The NPA methods were tested on two transcriptomic data sets: normal human bronchial epithelial (NHBE) cells treated with the pro-inflammatory signaling mediator TNFα, and HCT116 colon cancer cells treated with the CDK cell cycle inhibitor R547. Each data set was scored against network models representing different aspects of inflammatory signaling and cell cycle progression, and these scores were compared with independent measures of pathway activity in NHBE cells to verify the approach. The NPA scoring method successfully quantified the amplitude of TNFα-induced perturbation for each network model when compared against NF-κB nuclear localization and cell number. In addition, the degree and specificity to which CDK…
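    As a rough illustration of the idea of scoring treatment-induced perturbations against a causal network model, the toy Python sketch below computes a signed average of fold-changes over hypothetical cause-and-effect edges. This is not the published NPA algorithm or its companion statistics; the network, gene names and fold-change values are invented.

```python
import numpy as np

# toy causal network: (regulator, target, sign) with +1 = activates, -1 = represses
edges = [("TNF", "NFKB1", +1), ("NFKB1", "IL6", +1), ("NFKB1", "CDKN1A", -1)]

# hypothetical log2 fold-changes from a treated-vs-control transcriptomic contrast
log2fc = {"TNF": 2.1, "NFKB1": 1.4, "IL6": 2.8, "CDKN1A": -0.9}

def perturbation_amplitude(edges, log2fc):
    """Toy network score: mean of target fold-changes aligned with edge signs."""
    contributions = [sign * log2fc.get(target, 0.0) for _, target, sign in edges]
    return float(np.mean(contributions))

print(f"toy perturbation amplitude: {perturbation_amplitude(edges, log2fc):.2f}")
```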

  9. Highball: A high speed, reserved-access, wide area network

    Science.gov (United States)

    Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.

    1990-01-01

    A network architecture called Highball and a preliminary design for a prototype, wide-area data network designed to operate at speeds of 1 Gbps and beyond are described. It is intended for applications requiring high speed burst transmissions where some latency between requesting a transmission and granting the request can be anticipated and tolerated. Examples include real-time video and disk-disk transfers, national filestore access, remote sensing, and similar applications. The network nodes include an intelligent crossbar switch, but have no buffering capabilities; thus, data must be queued at the end nodes. There are no restrictions on the network topology, link speeds, or end-end protocols. The end system, nodes, and links can operate at any speed up to the limits imposed by the physical facilities. An overview of an initial design approach is presented and is intended as a benchmark upon which a detailed design can be developed. It describes the network architecture and proposed access protocols, as well as functional descriptions of the hardware and software components that could be used in a prototype implementation. It concludes with a discussion of additional issues to be resolved in continuing stages of this project.

  10. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO™ and SpaceWire protocols that was funded by Sandia National Laboratories' (SNL) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. This effort has demonstrated the transport of application layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprised of commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully demonstrated the transfer and routing of application data packets between multiple nodes and was also able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.

  11. Passive and Active Monitoring on a High Performance Research Network

    International Nuclear Information System (INIS)

    Matthews, Warren

    2001-01-01

The bold network challenges described in ''Internet End-to-end Performance Monitoring for the High Energy and Nuclear Physics Community'' presented at PAM 2000 have been tackled by the intrepid administrators and engineers providing the network services. After less than a year, the BaBar collaboration has collected almost 100 million particle collision events in a database approaching 165TB (Tera = 10^12). Around 20TB has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, for processing and around 40 TB of simulated events have been imported to SLAC from Lawrence Livermore National Laboratory (LLNL). An unforeseen challenge has arisen due to recent events and highlighted security concerns at DoE funded labs. New rules and regulations suggest it is only a matter of time before many active performance measurements may not be possible between many sites. Yet, at the same time, the importance of understanding every aspect of the network and eradicating packet loss for high throughput data transfers has become apparent. Work at SLAC to employ passive monitoring using netflow and OC3MON is underway and techniques to supplement and possibly replace the active measurements are being considered. This paper will detail the special needs and traffic characterization of a remarkable research project, and how the networking hurdles have been resolved (or not) to achieve the required high data throughput. Results from active and passive measurements will be compared, and methods for achieving high throughput and the effect on the network will be assessed along with tools that directly measure throughput and applications used to actually transfer data

  12. Passive and Active Monitoring on a High Performance Research Network.

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Warren

    2001-05-01

The bold network challenges described in ''Internet End-to-end Performance Monitoring for the High Energy and Nuclear Physics Community'' presented at PAM 2000 have been tackled by the intrepid administrators and engineers providing the network services. After less than a year, the BaBar collaboration has collected almost 100 million particle collision events in a database approaching 165TB (Tera = 10^12). Around 20TB has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, for processing and around 40 TB of simulated events have been imported to SLAC from Lawrence Livermore National Laboratory (LLNL). An unforeseen challenge has arisen due to recent events and highlighted security concerns at DoE funded labs. New rules and regulations suggest it is only a matter of time before many active performance measurements may not be possible between many sites. Yet, at the same time, the importance of understanding every aspect of the network and eradicating packet loss for high throughput data transfers has become apparent. Work at SLAC to employ passive monitoring using netflow and OC3MON is underway and techniques to supplement and possibly replace the active measurements are being considered. This paper will detail the special needs and traffic characterization of a remarkable research project, and how the networking hurdles have been resolved (or not!) to achieve the required high data throughput. Results from active and passive measurements will be compared, and methods for achieving high throughput and the effect on the network will be assessed along with tools that directly measure throughput and applications used to actually transfer data.

  13. Achieving high aspect ratio wrinkles by modifying material network stress.

    Science.gov (United States)

    Chen, Yu-Cheng; Wang, Yan; McCarthy, Thomas J; Crosby, Alfred J

    2017-06-07

Wrinkle aspect ratio, or the amplitude divided by the wavelength, is hindered by strain localization transitions when an increasing global compressive stress is applied to synthetic material systems. However, many examples from living organisms show extremely high aspect ratios, such as gut villi and flower petals. We use three experimental approaches to demonstrate that these high aspect ratio structures can be achieved by modifying the network stress in the wrinkle substrate. We modify the wrinkle stress and effectively delay the strain localization transition, such as folding, to larger aspect ratios by using a zero-stress initial wavy substrate, creating a secondary network with post-curing, or using chemical stress relaxation materials. A wrinkle aspect ratio as high as 0.85, almost three times higher than common values of synthetic wrinkles, is achieved, and a quantitative framework is presented to provide understanding of the different strategies and predictions for future investigations.

  14. SynUTC - high precision time synchronization over ethernet networks

    CERN Document Server

    Höller, R; Horauer, M; Kerö, N; Schmid, U; Schossmaier, K

    2002-01-01

    This article describes our SynUTC (Synchronized Universal Time Coordinated) technology, which enables high-accuracy distribution of GPS time and time synchronization of network nodes connected via standard Ethernet LANs. By means of exchanging data packets in conjunction with moderate hardware support at nodes and switches, an overall worst-case accuracy in the range of some 100 ns can be achieved, with negligible communication overhead. Our technology thus improves the 1 ms-range accuracy achievable by conventional, software-based approaches like NTP by 4 orders of magnitude. Applications can use the high-accuracy global time provided by SynUTC for event timestamping and event generation both at hardware and software level. SynUTC is based upon inserting highly accurate time information into dedicated data packets at the media-independent interface (MII) between the physical layer transceiver and the network controller upon packet transmission and reception, respectively. As a consequence, every node has acc...
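    The gain from hardware-assisted timestamping is easiest to see in the standard two-way time-transfer arithmetic that such systems build on. The sketch below shows the classic offset/delay formulas (as used by NTP-style protocols); the timestamp values are hypothetical and this is not the SynUTC specification itself.

```python
def offset_and_delay(t1, t2, t3, t4):
    """Classic two-way time-transfer arithmetic (all times in seconds).

    t1: request leaves client   t2: request arrives at server
    t3: reply leaves server     t4: reply arrives at client
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # client clock error vs. server
    delay = (t4 - t1) - (t3 - t2)            # round-trip path delay
    return offset, delay

# hypothetical hardware timestamps captured at the MII, nanosecond resolution
off, d = offset_and_delay(t1=0.000000000, t2=0.000012345,
                          t3=0.000012395, t4=0.000024800)
print(f"offset = {off*1e9:.0f} ns, path delay = {d*1e9:.0f} ns")
```

    Capturing t1-t4 in hardware at the MII removes the software stack's variable latency from these formulas, which is where the claimed improvement over purely software-based approaches comes from.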

  15. Organizing NPD networks for high innovation performance: the case of Dutch medical devices SMEs

    NARCIS (Netherlands)

    Pullen, A.J.J.; Groen, Arend J.; de Weerd-Nederhof, Petronella C.; Fisscher, O.A.M.

    2010-01-01

    This research examines which combination of network characteristics (the network configuration) leads to high innovation performance for small and medium sized companies (SMEs). Even though research has paid significant attention to the relation between the external network and the innovation

  16. High Energy Physics and Nuclear Physics Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli; Bauerdick, Lothar; Bell, Greg; Ciuffo, Leandro; Dasu, Sridhara; Dattoria, Vince; De, Kaushik; Ernst, Michael; Finkelson, Dale; Gottleib, Steven; Gutsche, Oliver; Habib, Salman; Hoeche, Stefan; Hughes-Jones, Richard; Ibarra, Julio; Johnston, William; Kisner, Theodore; Kowalski, Andy; Lauret, Jerome; Luitz, Steffen; Mackenzie, Paul; Maguire, Chales; Metzger, Joe; Monga, Inder; Ng, Cho-Kuen; Nielsen, Jason; Price, Larry; Porter, Jeff; Purschke, Martin; Rai, Gulshan; Roser, Rob; Schram, Malachi; Tull, Craig; Watson, Chip; Zurawski, Jason

    2014-03-02

The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements needed by instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In August 2013, ESnet and the DOE SC Offices of High Energy Physics (HEP) and Nuclear Physics (NP) organized a review to characterize the networking requirements of the programs funded by the HEP and NP program offices. Several key findings resulted from the review. Among them: 1. The Large Hadron Collider's ATLAS (A Toroidal LHC Apparatus) and CMS (Compact Muon Solenoid) experiments are adopting remote input/output (I/O) as a core component of their data analysis infrastructure. This will significantly increase their demands on the network from both a reliability perspective and a performance perspective. 2. The Large Hadron Collider (LHC) experiments (particularly ATLAS and CMS) are working to integrate network awareness into the workflow systems that manage the large number of daily analysis jobs (1 million analysis jobs per day for ATLAS), which are an integral part of the experiments. Collaboration with networking organizations such as ESnet, and the consumption of performance data (e.g., from perfSONAR [PERformance Service Oriented Network monitoring Architecture]) are critical to the success of these efforts. 3. The international aspects of HEP and NP collaborations continue to expand. This includes the LHC experiments, the Relativistic Heavy Ion Collider (RHIC) experiments, the Belle II Collaboration, the Large Synoptic Survey Telescope (LSST), and others. The international nature of these collaborations makes them heavily

  17. The Role of Social Networks in Financing High Technology New Ventures: An Empirical Exploration

    NARCIS (Netherlands)

    Heuven, J.M.J.

    2006-01-01

    This paper focuses on the role of networks in financing high technology start-ups. We claim that the role of networks is twofold. On the one hand networks are important because network contacts can give direct access to resources. On the other hand, networks are important because being affiliated

  18. Metabolic fingerprinting of high-fat plasma samples processed by centrifugation- and filtration-based protein precipitation delineates significant differences in metabolite information coverage.

    Science.gov (United States)

    Barri, Thaer; Holmer-Jensen, Jens; Hermansen, Kjeld; Dragsted, Lars O

    2012-03-09

Metabolomics and metabolic fingerprinting are being extensively employed for improved understanding of biological changes induced by endogenous or exogenous factors. Blood serum or plasma samples are often employed for metabolomics studies. Plasma protein precipitation (PPP) is currently performed in most laboratories before LC-MS analysis. However, the impact of fat content in plasma samples on metabolite coverage has not previously been investigated. Here, we have studied whether PPP procedures influence coverage of plasma metabolites from high-fat plasma samples. An optimized UPLC-QTOF/MS metabolic fingerprinting approach and multivariate modeling (PCA and OPLS-DA) were utilized for finding characteristic metabolite changes induced by two PPP procedures: centrifugation (cPPP) and filtration (fPPP). We used 12-h fasting samples and postprandial samples collected at 2h after a standardized high-fat protein-rich meal in obese non-diabetic subjects recruited in a dietary intervention. The two PPP procedures as well as external and internal standards (ISs) were used to track errors in response normalization and quantification. Remarkably and sometimes uniquely, the fPPP, but not the cPPP approach, recovered not only high molecular weight (HMW) lipophilic metabolites, but also small molecular weight (SMW) relatively polar metabolites. Characteristic SMW markers of postprandial samples were aromatic and branched-chain amino acids that were elevated (p<0.001) as a consequence of the protein challenge. In contrast, some HMW lipophilic species, e.g. acylcarnitines, were moderately lower (p<0.001) in postprandial samples. LysoPCs were largely unaffected. In conclusion, the fPPP procedure is recommended for processing high-fat plasma samples in metabolomics studies. While method improvements presented here were clear, use of several ISs revealed substantial challenges to untargeted metabolomics due to large and variable matrix effects. Copyright © 2012 Elsevier B.V. All rights reserved.
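    A minimal sketch of the multivariate-fingerprinting step (PCA on a samples-by-features table) is shown below using scikit-learn; the feature matrix, group labels and shifted features are synthetic stand-ins, and the study's OPLS-DA modeling is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# synthetic feature table: 40 plasma samples x 500 LC-MS features
X = rng.normal(size=(40, 500))
X[20:, :25] += 1.5                        # pretend one PPP procedure shifts 25 features
group = np.array([0] * 20 + [1] * 20)     # 0 = cPPP, 1 = fPPP (assumed labels)

# standardize features, then project onto the first two principal components
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for g in (0, 1):
    centroid = scores[group == g].mean(axis=0)
    print(f"group {g} centroid on PC1/PC2: {centroid.round(2)}")
```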

  19. Measuring large-scale social networks with high resolution.

    Directory of Open Access Journals (Sweden)

    Arkadiusz Stopczynski

Full Text Available This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years - the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data-types measured, and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection.

  20. An evaluation of current high-performance networks

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Christian; Bonachea, Dan; Cote, Yannick; Duell, Jason; Hargrove, Paul; Husbands, Parry; Iancu, Costin; Welcome, Michael; Yelick, Katherine

    2003-01-25

    High-end supercomputers are increasingly built out of commodity components, and lack tight integration between the processor and network. This often results in inefficiencies in the communication subsystem, such as high software overheads and/or message latencies. In this paper we use a set of microbenchmarks to quantify the cost of this commoditization, measuring software overhead, latency, and bandwidth on five contemporary supercomputing networks. We compare the performance of the ubiquitous MPI layer to that of lower-level communication layers, and quantify the advantages of the latter for small message performance. We also provide data on the potential for various communication-related optimizations, such as overlapping communication with computation or other communication. Finally, we determine the minimum size needed for a message to be considered 'large' (i.e., bandwidth-bound) on these platforms, and provide historical data on the software overheads of a number of supercomputers over the past decade.
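    The notion of a message becoming "large" (bandwidth-bound) follows from the usual linear cost model t(n) = o + n/B, where o is the per-message software overhead and B the link bandwidth. The short sketch below computes the crossover size under that model; the overhead and bandwidth figures are illustrative, not measurements from the paper.

```python
def half_bandwidth_size(overhead_s, bandwidth_Bps):
    """Message size n_1/2 at which transfer time equals twice the overhead,
    i.e. the point where a message starts to look 'large' (bandwidth-bound)
    under the linear model t(n) = overhead + n / bandwidth."""
    return overhead_s * bandwidth_Bps

# illustrative numbers only (not measurements from the paper)
overhead = 5e-6          # 5 microseconds of software overhead per message
bandwidth = 1.25e9       # a 10 Gb/s link expressed in bytes per second
print(f"n_1/2 ≈ {half_bandwidth_size(overhead, bandwidth)/1024:.1f} KiB")
```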

  1. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  2. Copercolating Networks: An Approach for Realizing High-Performance Transparent Conductors using Multicomponent Nanostructured Networks

    Directory of Open Access Journals (Sweden)

    Das Suprem R.

    2016-06-01

Full Text Available Although transparent conductive oxides such as indium tin oxide (ITO) are widely employed as transparent conducting electrodes (TCEs) for applications such as touch screens and displays, new nanostructured TCEs are of interest for future applications, including emerging transparent and flexible electronics. A number of two-dimensional networks of nanostructured elements have been reported, including metallic nanowire networks consisting of silver nanowires, metallic carbon nanotubes (m-CNTs), copper nanowires or gold nanowires, and metallic mesh structures. In these single-component systems, it has generally been difficult to achieve sheet resistances that are comparable to ITO at a given broadband optical transparency. A relatively new third category of TCEs consisting of networks of 1D-1D and 1D-2D nanocomposites (such as silver nanowires and CNTs, silver nanowires and polycrystalline graphene, silver nanowires and reduced graphene oxide) have demonstrated TCE performance comparable to, or better than, ITO. In such hybrid networks, copercolation between the two components can lead to relatively low sheet resistances at nanowire densities corresponding to high optical transmittance. This review provides an overview of reported hybrid networks, including a comparison of the performance regimes achievable with those of ITO and single-component nanostructured networks. The performance is compared to that expected from bulk thin films and analyzed in terms of the copercolation model. In addition, performance characteristics relevant for flexible and transparent applications are discussed. The new TCEs are promising, but significant work must be done to ensure earth abundance, stability, and reliability so that they can eventually replace traditional ITO-based transparent conductors.

  3. A High-Resolution Sensor Network for Monitoring Glacier Dynamics

    Science.gov (United States)

    Edwards, S.; Murray, T.; O'Farrell, T.; Rutt, I. C.; Loskot, P.; Martin, I.; Selmes, N.; Aspey, R.; James, T.; Bevan, S. L.; Baugé, T.

    2013-12-01

Changes in Greenland and Antarctic ice sheets due to ice flow/iceberg calving are a major uncertainty affecting sea-level rise forecasts. Latterly GNSS (Global Navigation Satellite Systems) have been employed extensively to monitor such glacier dynamics. Until recently however, the favoured methodology has been to deploy sensors onto the glacier surface, collect data for a period of time, then retrieve and download the sensors. This approach works well in less dynamic environments where the risk of sensor loss is low. In more extreme environments e.g. approaching the glacial calving front, the risk of sensor loss and hence data loss increases dramatically. In order to provide glaciologists with new insights into flow dynamics and calving processes we have developed a novel sensor network to increase the robustness of data capture. We present details of the technological requirements for an in-situ Zigbee wireless streaming network infrastructure supporting instantaneous data acquisition from high resolution GNSS sensors thereby increasing data capture robustness. The data obtained offers new opportunities to investigate the interdependence of mass flow, uplift, velocity and geometry and the network architecture has been specifically designed for deployment by helicopter close to the calving front to yield unprecedented detailed information. Following successful field trials of a pilot three node network during 2012, a larger 20 node network was deployed on the fast-flowing Helheim glacier, south-east Greenland over the summer months of 2013. The utilisation of dual wireless transceivers in each glacier node, multiple frequencies and four 'collector' stations located on the valley sides creates overlapping networks providing enhanced capacity, diversity and redundancy of data 'back-haul', even close to 'floor' RSSI (Received Signal Strength Indication) levels around -100 dBm. Data loss through radio packet collisions within sub-networks is avoided through the

  4. Full-Coverage High-Resolution Daily PM(sub 2.5) Estimation using MAIAC AOD in the Yangtze River Delta of China

    Science.gov (United States)

    Xiao, Qingyang; Wang, Yujie; Chang, Howard H.; Meng, Xia; Geng, Guannan; Lyapustin, Alexei Ivanovich; Liu, Yang

    2017-01-01

Satellite aerosol optical depth (AOD) has been used to assess population exposure to fine particulate matter (PM (sub 2.5)). The emerging high-resolution satellite aerosol product, Multi-Angle Implementation of Atmospheric Correction (MAIAC), provides a valuable opportunity to characterize local-scale PM(sub 2.5) at 1-km resolution. However, non-random missing AOD due to cloud/snow cover or high surface reflectance makes this task challenging. Previous studies filled the data gap by spatially interpolating neighboring PM(sub 2.5) measurements or predictions. This strategy ignored the effect of cloud cover on aerosol loadings and has been shown to exhibit poor performance when monitoring stations are sparse or when there is seasonal large-scale missingness. Using the Yangtze River Delta of China as an example, we present a Multiple Imputation (MI) method that combines the MAIAC high-resolution satellite retrievals with chemical transport model (CTM) simulations to fill missing AOD. A two-stage statistical model driven by gap-filled AOD, meteorology and land use information was then fitted to estimate daily ground PM(sub 2.5) concentrations in 2013 and 2014 at 1 km resolution with complete coverage in space and time. The daily MI models have an average R(exp 2) of 0.77, with an inter-quartile range of 0.71 to 0.82 across days. The overall MI model 10-fold cross-validation R(exp 2) (root mean square error) were 0.81 (25 µg/m(exp 3)) and 0.73 (18 µg/m(exp 3)) for year 2013 and 2014, respectively. Predictions with only observational AOD or only imputed AOD showed similar accuracy. Comparing with previous gap-filling methods, our MI method presented in this study performed better with higher coverage, higher accuracy, and the ability to fill missing PM(sub 2.5) predictions without ground PM(sub 2.5) measurements. This method can provide reliable PM(sub 2.5) predictions with complete coverage that can reduce bias in exposure assessment in air pollution and health studies.
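    A toy analogue of the two-step idea (fill missing AOD from the CTM field, then predict PM2.5 from gap-filled AOD plus meteorology) is sketched below with scikit-learn on synthetic data. It is not the authors' multiple-imputation or two-stage spatial model; all variables and coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
ctm_aod = rng.gamma(2.0, 0.2, n)                       # CTM-simulated AOD
true_aod = 0.8 * ctm_aod + rng.normal(0, 0.05, n)      # "retrieved" AOD when available
temp = rng.normal(20, 8, n)                            # one meteorology covariate
pm25 = 60 * true_aod + 0.5 * temp + rng.normal(0, 5, n)

observed = rng.random(n) > 0.5                         # ~50% satellite coverage

# step 1: fill missing AOD from the CTM field (stand-in for multiple imputation)
fill = LinearRegression().fit(ctm_aod[observed, None], true_aod[observed])
aod_filled = np.where(observed, true_aod, fill.predict(ctm_aod[:, None]))

# step 2: predict ground PM2.5 from gap-filled AOD plus meteorology
X = np.column_stack([aod_filled, temp])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, pm25)
print(f"training R^2: {model.score(X, pm25):.2f}")
```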

  5. Ultrafine manganese dioxide nanowire network for high-performance supercapacitors.

    Science.gov (United States)

    Jiang, Hao; Zhao, Ting; Ma, Jan; Yan, Chaoyi; Li, Chunzhong

    2011-01-28

    Ultrafine MnO(2) nanowires with sub-10 nm diameters have been synthesized by a simple process of hydrothermal treatment with subsequent calcinations to form networks that exhibit an enhanced specific capacitance (279 F g(-1) at 1 A g(-1)), high rate capability (54.5% retention at 20 A g(-1)) and good cycling stability (1.7% loss after 1000 cycles).
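    The reported figures follow from the standard galvanostatic-discharge relation C = I·Δt/(m·ΔV). The snippet below applies it with illustrative numbers chosen to reproduce 279 F/g at 1 A/g; the 0.8 V window, 1 mg mass and discharge time are assumptions, not values from the paper.

```python
def specific_capacitance(current_A, discharge_time_s, mass_g, voltage_window_V):
    """Standard galvanostatic estimate: C = I * dt / (m * dV), in F/g."""
    return current_A * discharge_time_s / (mass_g * voltage_window_V)

# 1 A/g on a hypothetical 1 mg electrode over an assumed 0.8 V window
print(specific_capacitance(current_A=1e-3, discharge_time_s=223.2,
                           mass_g=1e-3, voltage_window_V=0.8))   # ≈ 279 F/g
```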

  6. Model Complexities of Shallow Networks Representing Highly Varying Functions

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2016-01-01

    Roč. 171, 1 January (2016), s. 598-604 ISSN 0925-2312 R&D Projects: GA MŠk(CZ) LD13002 Grant - others:grant for Visiting Professors(IT) GNAMPA-INdAM Institutional support: RVO:67985807 Keywords : shallow networks * model complexity * highly varying functions * Chernoff bound * perceptrons * Gaussian kernel units Subject RIV: IN - Informatics, Computer Science Impact factor: 3.317, year: 2016

  7. Design and implementation of a high performance network security processor

    Science.gov (United States)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs) that perform various cryptographic operations specified by network security protocols and help to offload the computation intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps rate NSP, which is programmable with domain specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000 based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.
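    The descriptor-driven pattern of fragmenting large packets and distributing them to parallel engines can be mimicked in software. The Python sketch below uses a thread pool and SHA-256 hashing as a stand-in for the crypto engine arrays; the fragment size and worker count are arbitrary choices, and this is not the NSP's actual hardware design.

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

FRAGMENT_SIZE = 1024   # assumed fragment size, purely illustrative

def fragment(packet: bytes, size: int = FRAGMENT_SIZE):
    """Split a large packet into fixed-size fragments tagged with a sequence number."""
    return [(i, packet[off:off + size])
            for i, off in enumerate(range(0, len(packet), size))]

def process_fragment(item):
    seq, data = item
    # stand-in for a crypto engine: hash the fragment
    return seq, hashlib.sha256(data).hexdigest()

packet = bytes(range(256)) * 40                       # a 10 KiB "jumbo" packet
with ThreadPoolExecutor(max_workers=4) as pool:       # 4 parallel "engines"
    results = sorted(pool.map(process_fragment, fragment(packet)))
print(f"{len(results)} fragments processed; first digest {results[0][1][:16]}...")
```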

  8. Argonne National Laboratory high performance network support of APS experiments

    International Nuclear Information System (INIS)

    Knot, M.J.; McMahon, R.J.

    1996-01-01

    Argonne National Laboratory is currently positioned to provide access to high performance regional and national networks. Much of the impetus for this effort is the anticipated needs of the upcoming experimental program at the APS. Some APS collaborative access teams (CATs) are already pressing for network speed improvements and security enhancements. Requirements range from the need for high data rate, secure transmission of experimental data, to the desire to establish a open-quote open-quote virtual experimental environment close-quote close-quote at their home institution. In the near future, 155 megabit/sec (Mb/s) national and regional asynchronous transfer mode (ATM) networks will be operational and available to APS users. Full-video teleconferencing, virtual presence operation of experiments, and high speed, secure transmission of data are being tested and, in some cases, will be operational. We expect these efforts to enable a substantial improvement in the speed of processing experimental results as well as an increase in convenience to the APS experimentalist. copyright 1996 American Institute of Physics

  9. Dosimetric Coverage of the Prostate, Normal Tissue Sparing, and Acute Toxicity with High-Dose-Rate Brachytherapy for Large Prostate Volumes

    Directory of Open Access Journals (Sweden)

    George Yang

    2015-06-01

Full Text Available Purpose: To evaluate dosimetric coverage of the prostate, normal tissue sparing, and acute toxicity with HDR brachytherapy for large prostate volumes. Materials and Methods: One hundred and two prostate cancer patients with prostate volumes >50 mL (range: 5-29 mL) were treated with high-dose-rate (HDR) brachytherapy ± intensity modulated radiation therapy (IMRT) to 4,500 cGy in 25 daily fractions between 2009 and 2013. HDR brachytherapy monotherapy doses consisted of two 1,350-1,400 cGy fractions separated by 2-3 weeks, and HDR brachytherapy boost doses consisted of two 950-1,150 cGy fractions separated by 4 weeks. Twelve of 32 (38%) unfavorable intermediate risk, high risk, and very high risk patients received androgen deprivation therapy. Acute toxicity was graded according to the Common Terminology Criteria for Adverse Events (CTCAE) version 4. Results: Median follow-up was 14 months. Dosimetric goals were achieved in over 90% of cases. Three of 102 (3%) patients developed Grade 2 acute proctitis. No variables were significantly associated with Grade 2 acute proctitis. Seventeen of 102 (17%) patients developed Grade 2 acute urinary retention. American Urological Association (AUA) symptom score was the only variable significantly associated with Grade 2 acute urinary retention (p=0.04). There was no ≥ Grade 3 acute toxicity. Conclusions: Dosimetric coverage of the prostate and normal tissue sparing were adequate in patients with prostate volumes >50 mL. Higher pre-treatment AUA symptom scores increased the relative risk of Grade 2 acute urinary retention. However, the overall incidence of acute toxicity was acceptable in patients with large prostate volumes.

  10. Dosimetric coverage of the prostate, normal tissue sparing, and acute toxicity with high-dose-rate brachytherapy for large prostate volumes

    Energy Technology Data Exchange (ETDEWEB)

    Yang, George; Strom, Tobin J.; Shrinath, Kushagra; Mellon, Eric A.; Fernandez, Daniel C.; Biagioli, Matthew C. [Department of Radiation Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL (United States); Wilder, Richard B., E-mail: mcbiagioli@yahoo.com [Cancer Treatment Centers of America, Newnan, GA (United States)

    2015-05-15

Purpose: to evaluate dosimetric coverage of the prostate, normal tissue sparing, and acute toxicity with HDR brachytherapy for large prostate volumes. Materials and methods: one hundred and two prostate cancer patients with prostate volumes >50 mL (range: 5-29 mL) were treated with high-dose-rate (HDR) brachytherapy ± intensity modulated radiation therapy (IMRT) to 4,500 cGy in 25 daily fractions between 2009 and 2013. HDR brachytherapy monotherapy doses consisted of two 1,350-1,400 cGy fractions separated by 2-3 weeks, and HDR brachytherapy boost doses consisted of two 950-1,150 cGy fractions separated by 4 weeks. Twelve of 32 (38%) unfavorable intermediate risk, high risk, and very high risk patients received androgen deprivation therapy. Acute toxicity was graded according to the Common Terminology Criteria for Adverse Events (CTCAE) version 4. Results: median follow-up was 14 months. Dosimetric goals were achieved in over 90% of cases. Three of 102 (3%) patients developed Grade 2 acute proctitis. No variables were significantly associated with Grade 2 acute proctitis. Seventeen of 102 (17%) patients developed Grade 2 acute urinary retention. American Urological Association (AUA) symptom score was the only variable significantly associated with Grade 2 acute urinary retention (p=0.04). There was no ≥ Grade 3 acute toxicity. Conclusions: dosimetric coverage of the prostate and normal tissue sparing were adequate in patients with prostate volumes >50 mL. Higher pre-treatment AUA symptom scores increased the relative risk of Grade 2 acute urinary retention. However, the overall incidence of acute toxicity was acceptable in patients with large prostate volumes. (author)

  11. Utilizing HPC Network Technologies in High Energy Physics Experiments

    CERN Document Server

    AUTHOR|(CDS)2088631; The ATLAS collaboration

    2017-01-01

Because of their performance characteristics, high-performance fabrics like Infiniband or OmniPath are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on applications in high-performance computing with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems a custom library, NetIO, is presented and compared against existing technologies. NetIO has a message queue-like interface which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable bac...
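    To illustrate what a message-queue-like interface (as opposed to MPI-style matched two-sided calls) can look like, the sketch below frames tagged, length-prefixed messages over a TCP socket. This is an assumed minimal design for illustration, not NetIO's actual API or wire format.

```python
import socket
import struct

HEADER = struct.Struct("<IQ")     # (tag, payload length) -- assumed framing

def send_message(sock: socket.socket, tag: int, payload: bytes) -> None:
    """Write one tagged, length-prefixed message to a connected socket."""
    sock.sendall(HEADER.pack(tag, len(payload)) + payload)

def recv_message(sock: socket.socket):
    """Read exactly one message; returns (tag, payload)."""
    header = _recv_exact(sock, HEADER.size)
    tag, length = HEADER.unpack(header)
    return tag, _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf.extend(chunk)
    return bytes(buf)
```

    A receiver would simply loop on recv_message and dispatch on the tag, which is closer to a data-acquisition event stream than to matched MPI send/receive pairs.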

  12. A Network Approach to Analyzing Highly Recombinant Malaria Parasite Genes

    Science.gov (United States)

    Larremore, Daniel B.; Clauset, Aaron; Buckee, Caroline O.

    2013-01-01

    The var genes of the human malaria parasite Plasmodium falciparum present a challenge to population geneticists due to their extreme diversity, which is generated by high rates of recombination. These genes encode a primary antigen protein called PfEMP1, which is expressed on the surface of infected red blood cells and elicits protective immune responses. Var gene sequences are characterized by pronounced mosaicism, precluding the use of traditional phylogenetic tools that require bifurcating tree-like evolutionary relationships. We present a new method that identifies highly variable regions (HVRs), and then maps each HVR to a complex network in which each sequence is a node and two nodes are linked if they share an exact match of significant length. Here, networks of var genes that recombine freely are expected to have a uniformly random structure, but constraints on recombination will produce network communities that we identify using a stochastic block model. We validate this method on synthetic data, showing that it correctly recovers populations of constrained recombination, before applying it to the Duffy Binding Like-α (DBLα) domain of var genes. We find nine HVRs whose network communities map in distinctive ways to known DBLα classifications and clinical phenotypes. We show that the recombinational constraints of some HVRs are correlated, while others are independent. These findings suggest that this micromodular structuring facilitates independent evolutionary trajectories of neighboring mosaic regions, allowing the parasite to retain protein function while generating enormous sequence diversity. Our approach therefore offers a rigorous method for analyzing evolutionary constraints in var genes, and is also flexible enough to be easily applied more generally to any highly recombinant sequences. PMID:24130474
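    The graph construction described here (sequences as nodes, an edge when two sequences share an exact match of significant length) can be prototyped in a few lines with networkx. The sketch below uses made-up sequences and a fixed k-mer length, and substitutes connected components for the paper's stochastic block model.

```python
import networkx as nx

def shared_kmer_graph(sequences, k=12):
    """Nodes are sequence indices; an edge joins two sequences sharing an exact k-mer."""
    G = nx.Graph()
    G.add_nodes_from(range(len(sequences)))
    kmers = [{s[i:i + k] for i in range(len(s) - k + 1)} for s in sequences]
    for i in range(len(sequences)):
        for j in range(i + 1, len(sequences)):
            if kmers[i] & kmers[j]:
                G.add_edge(i, j)
    return G

# toy "HVR" sequences: the first two share a 12-base block, the third is unrelated
seqs = ["ACGTACGTACGTTTGCA", "GGACGTACGTACGTAAC", "TTTTCCCCGGGGAAAAT"]
G = shared_kmer_graph(seqs, k=12)
# the paper fits a stochastic block model; connected components are a crude stand-in
print([sorted(c) for c in nx.connected_components(G)])
```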

  13. A network approach to analyzing highly recombinant malaria parasite genes.

    Science.gov (United States)

    Larremore, Daniel B; Clauset, Aaron; Buckee, Caroline O

    2013-01-01

    The var genes of the human malaria parasite Plasmodium falciparum present a challenge to population geneticists due to their extreme diversity, which is generated by high rates of recombination. These genes encode a primary antigen protein called PfEMP1, which is expressed on the surface of infected red blood cells and elicits protective immune responses. Var gene sequences are characterized by pronounced mosaicism, precluding the use of traditional phylogenetic tools that require bifurcating tree-like evolutionary relationships. We present a new method that identifies highly variable regions (HVRs), and then maps each HVR to a complex network in which each sequence is a node and two nodes are linked if they share an exact match of significant length. Here, networks of var genes that recombine freely are expected to have a uniformly random structure, but constraints on recombination will produce network communities that we identify using a stochastic block model. We validate this method on synthetic data, showing that it correctly recovers populations of constrained recombination, before applying it to the Duffy Binding Like-α (DBLα) domain of var genes. We find nine HVRs whose network communities map in distinctive ways to known DBLα classifications and clinical phenotypes. We show that the recombinational constraints of some HVRs are correlated, while others are independent. These findings suggest that this micromodular structuring facilitates independent evolutionary trajectories of neighboring mosaic regions, allowing the parasite to retain protein function while generating enormous sequence diversity. Our approach therefore offers a rigorous method for analyzing evolutionary constraints in var genes, and is also flexible enough to be easily applied more generally to any highly recombinant sequences.

  14. A network approach to analyzing highly recombinant malaria parasite genes.

    Directory of Open Access Journals (Sweden)

    Daniel B Larremore

Full Text Available The var genes of the human malaria parasite Plasmodium falciparum present a challenge to population geneticists due to their extreme diversity, which is generated by high rates of recombination. These genes encode a primary antigen protein called PfEMP1, which is expressed on the surface of infected red blood cells and elicits protective immune responses. Var gene sequences are characterized by pronounced mosaicism, precluding the use of traditional phylogenetic tools that require bifurcating tree-like evolutionary relationships. We present a new method that identifies highly variable regions (HVRs), and then maps each HVR to a complex network in which each sequence is a node and two nodes are linked if they share an exact match of significant length. Here, networks of var genes that recombine freely are expected to have a uniformly random structure, but constraints on recombination will produce network communities that we identify using a stochastic block model. We validate this method on synthetic data, showing that it correctly recovers populations of constrained recombination, before applying it to the Duffy Binding Like-α (DBLα) domain of var genes. We find nine HVRs whose network communities map in distinctive ways to known DBLα classifications and clinical phenotypes. We show that the recombinational constraints of some HVRs are correlated, while others are independent. These findings suggest that this micromodular structuring facilitates independent evolutionary trajectories of neighboring mosaic regions, allowing the parasite to retain protein function while generating enormous sequence diversity. Our approach therefore offers a rigorous method for analyzing evolutionary constraints in var genes, and is also flexible enough to be easily applied more generally to any highly recombinant sequences.

  15. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Science.gov (United States)

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we evaluate imputation accuracy achieved with reference panels from diverse populations with a population-specific high-coverage (30 ×) whole-genome sequencing (WGS) based reference panel, comprising of 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants was significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  16. Supporting Control Room Operators in Highly Automated Future Power Networks

    DEFF Research Database (Denmark)

    Chen, Minjiang; Catterson, Victoria; Syed, Mazheruddin

    2017-01-01

Operating power systems is an extremely challenging task, not least because power systems have become highly interconnected and because of the range of network issues that can occur. It is therefore a necessity to develop decision support systems and visualisation that can effectively support the human operators in decision-making in the complex and dynamic environment of future highly automated power systems. This paper aims to investigate the decision support functions associated with frequency deviation events for the proposed Web of Cells concept.

  17. High Resolution Sensing and Control of Urban Water Networks

    Science.gov (United States)

    Bartos, M. D.; Wong, B. P.; Kerkez, B.

    2016-12-01

    We present a framework to enable high-resolution sensing, modeling, and control of urban watersheds using (i) a distributed sensor network based on low-cost cellular-enabled motes, (ii) hydraulic models powered by a cloud computing infrastructure, and (iii) automated actuation valves that allow infrastructure to be controlled in real time. This platform initiates two major advances. First, we achieve a high density of measurements in urban environments, with an anticipated 40+ sensors over each urban area of interest. In addition to new measurements, we also illustrate the design and evaluation of a "smart" control system for real-world hydraulic networks. This control system improves water quality and mitigates flooding by using real-time hydraulic models to adaptively control releases from retention basins. We evaluate the potential of this platform through two ongoing deployments: (i) a flood monitoring network in the Dallas-Fort Worth metropolitan area that detects and anticipates floods at the level of individual roadways, and (ii) a real-time hydraulic control system in the city of Ann Arbor, MI—soon to be one of the most densely instrumented urban watersheds in the United States. Through these applications, we demonstrate that distributed sensing and control of water infrastructure can improve flash flood predictions, emergency response, and stormwater contaminant mitigation.
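    A toy version of rule-based real-time control for a single retention basin is sketched below; the thresholds, catchment size and valve capacity are invented, and the deployed system would instead drive valve setpoints from real-time hydraulic model forecasts.

```python
def valve_setting(level_m, forecast_rain_mm, max_level_m=2.0):
    """Toy rule-based controller for a retention basin outlet valve.

    Returns a valve opening in [0, 1]: pre-release before forecast rain,
    throttle when the basin is low to hold water for settling/quality.
    """
    if forecast_rain_mm > 10:              # storm coming: draw the basin down
        return 1.0
    if level_m > 0.9 * max_level_m:        # near capacity: open fully
        return 1.0
    if level_m < 0.3 * max_level_m:        # low level: hold water
        return 0.1
    return 0.5                             # otherwise pass flow at half opening

# crude hourly water-balance simulation over a short storm sequence
level, area_m2, max_out_m3s = 1.2, 5000.0, 1.5
for hour, rain in enumerate([0, 2, 15, 20, 5, 0]):
    opening = valve_setting(level, rain)
    inflow = rain * 1e-3 * 20000 / 3600          # rain over a 2 ha catchment, m3/s
    outflow = opening * max_out_m3s
    level = max(0.0, level + (inflow - outflow) * 3600 / area_m2)
    print(f"hour {hour}: rain={rain} mm, valve={opening:.1f}, level={level:.2f} m")
```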

  18. Parameterized neural networks for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Baldi, Pierre; Sadowski, Peter [University of California, Department of Computer Science, Irvine, CA (United States); Cranmer, Kyle [NYU, Department of Physics, New York, NY (United States); Faucett, Taylor; Whiteson, Daniel [University of California, Department of Physics and Astronomy, Irvine, CA (United States)

    2016-05-15

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)
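    The core idea, feeding the physics parameter alongside the measured features so that one network interpolates across parameter values, can be sketched in a few lines of PyTorch. The layer sizes, masses and random inputs below are placeholders, not the architecture or training setup used in the paper.

```python
import torch
import torch.nn as nn

class ParameterizedClassifier(nn.Module):
    """Classifier whose input is [features, theta], where theta is a physics
    parameter such as a hypothesized particle mass."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, theta):
        # append the parameter as one extra input feature
        return self.net(torch.cat([x, theta.unsqueeze(-1)], dim=-1)).squeeze(-1)

# evaluate the same (untrained) network at two hypothesized masses
model = ParameterizedClassifier(n_features=10)
x = torch.randn(4, 10)                       # a batch of 4 events
for mass in (500.0, 750.0):
    theta = torch.full((4,), mass)
    logits = model(x, theta)
    print(mass, torch.sigmoid(logits).detach().numpy().round(3))
```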

  19. Parameterized neural networks for high-energy physics

    International Nuclear Information System (INIS)

    Baldi, Pierre; Sadowski, Peter; Cranmer, Kyle; Faucett, Taylor; Whiteson, Daniel

    2016-01-01

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)

  20. High Performance Data mining by Genetic Neural Network

    Directory of Open Access Journals (Sweden)

    Dadmehr Rahbari

    2013-10-01

    Full Text Available Data mining in computer science is the process of discovering interesting and useful patterns and relationships in large volumes of data. Most methods for mining problems are based on artificial intelligence algorithms. Neural network optimization based on three basic parameters (topology, weights and the learning rate) is a powerful method. We introduce an optimized method for solving this problem. In this paper, a genetic algorithm with mutation and crossover operators changes the network structure and optimizes it. The dataset used for our work is a stroke disease dataset with twenty features, an optimized subset of which is selected by the new hybrid algorithm. The results of this work compare very well with other similar methods. The low error rate shows that our new approach to efficient, high-performance data mining is effective.
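
    As a rough illustration of the hybrid idea (not the paper's actual algorithm, which also evolves topology and learning rate on a twenty-feature stroke dataset), the sketch below uses a genetic algorithm with one-point crossover and mutation to evolve the weights of a small neural network on synthetic data.

    ```python
    # Sketch of the general idea only: a genetic algorithm with crossover and
    # mutation evolving the weights of a tiny neural network on toy data.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))                      # toy stand-in for 20 features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # toy labels

    N_HID = 8
    N_W = 20 * N_HID + N_HID                            # weights of a 20->8->1 net (no biases)

    def predict(w, X):
        W1 = w[:20 * N_HID].reshape(20, N_HID)
        W2 = w[20 * N_HID:]
        h = np.tanh(X @ W1)
        return 1.0 / (1.0 + np.exp(-(h @ W2)))

    def fitness(w):
        return -np.mean((predict(w, X) - y) ** 2)       # negative MSE, higher is better

    pop = rng.normal(scale=0.5, size=(50, N_W))
    for gen in range(100):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[::-1][:10]]    # elitist selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = parents[rng.integers(10)], parents[rng.integers(10)]
            cut = rng.integers(1, N_W)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(scale=0.05, size=N_W) * (rng.random(N_W) < 0.1)  # mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])

    best = pop[np.argmax([fitness(w) for w in pop])]
    print("training accuracy:", np.mean((predict(best, X) > 0.5) == y))
    ```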

  1. High-performance, scalable optical network-on-chip architectures

    Science.gov (United States)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing a GWOR of any size is introduced. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged, by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes advantage of

  2. Low and High-Frequency Field Potentials of Cortical Networks ...

    Science.gov (United States)

    Neural networks grown on microelectrode arrays (MEAs) have become an important, high-content in vitro assay for assessing neuronal function. MEA experiments typically examine high-frequency (HF) (>200 Hz) spikes and bursts, which can be used to discriminate between different pharmacological agents/chemicals. However, normal brain activity is additionally composed of integrated low-frequency (0.5-100 Hz) field potentials (LFPs), which are filtered out of MEA recordings. The objective of this study was to characterize the relationship between HF and LFP neural network signals, and to assess the relative sensitivity of LFPs to selected neurotoxicants. Rat primary cortical cultures were grown on glass, single-well MEA chips. Spontaneous activity was sampled at 25 kHz and recorded (5 min) (Multi-Channel Systems) from mature networks (14 days in vitro). HF (spike, mean firing rate, MFR) and LF (power spectrum, amplitude) components were extracted from each network and served as its baseline (BL). Next, each chip was treated with either 1) a positive control, bicuculline (BIC, 25μM) or domoic acid (DA, 0.3μM), 2) a negative control, acetaminophen (ACE, 100μM) or glyphosate (GLY, 100μM), 3) a solvent control (H2O or DMSO:EtOH), or 4) a neurotoxicant (carbaryl, CAR 5, 30μM; lindane, LIN 1, 10μM; permethrin, PERM 25, 50μM; triadimefon, TRI 5, 65μM). Post-treatment, 5 min of spontaneous activity was recorded and analyzed. As expected posit
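
    A minimal sketch of the band separation described above: a synthetic 25 kHz trace is split into an LFP band (0.5-100 Hz) and an HF band (>200 Hz), and the LFP power spectrum is estimated with Welch's method. The filter orders and the synthetic signal are assumptions made for illustration only.

    ```python
    # Split a 25 kHz electrode trace into a low-frequency field potential band
    # (0.5-100 Hz) and a high-frequency spike band (>200 Hz), then estimate the
    # LFP power spectrum. The trace here is synthetic.
    import numpy as np
    from scipy import signal

    fs = 25_000                                   # sampling rate used in the study (Hz)
    t = np.arange(0, 10, 1 / fs)
    trace = (np.sin(2 * np.pi * 8 * t)            # synthetic 8 Hz LFP component
             + 0.2 * np.random.randn(t.size))     # plus broadband noise/spiking

    sos_lfp = signal.butter(4, [0.5, 100], btype="bandpass", fs=fs, output="sos")
    sos_hf = signal.butter(4, 200, btype="highpass", fs=fs, output="sos")
    lfp = signal.sosfiltfilt(sos_lfp, trace)
    hf = signal.sosfiltfilt(sos_hf, trace)

    f, pxx = signal.welch(lfp, fs=fs, nperseg=fs * 2)  # LFP power spectrum
    print("peak LFP frequency:", f[np.argmax(pxx)], "Hz")
    ```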

  3. Measles transmission following the tsunami in a population with a high one-dose vaccination coverage, Tamil Nadu, India 2004–2005

    Directory of Open Access Journals (Sweden)

    Wairgkar Niteen S

    2006-09-01

    Full Text Available Abstract Background On 26 December 2004, a tsunami struck the coast of the state of Tamil Nadu, India, where one-dose measles coverage exceeded 95%. On 29 December, supplemental measles immunization activities targeted children 6 to 60 months of age in affected villages. On 30 December, Cuddalore, a tsunami-affected district in Tamil Nadu, reported a cluster of measles cases. We investigated this cluster to estimate the magnitude of the problem and to propose recommendations for control. Methods We received notification of WHO-defined measles cases through stimulated passive surveillance. We collected information regarding date of onset, age, sex, vaccination status and residence. We collected samples for IgM antibodies and genotype studies. We modeled the accumulation of susceptible individuals over time on the basis of vaccination coverage, vaccine efficacy and birth rate. Results We identified 101 measles cases and detected IgM antibodies against measles virus in eight of 11 sera. Cases were reported from tsunami-affected (n = 71) and unaffected villages (n = 30), with attack rates of 1.3 and 1.7 per 1000, respectively. 42% of cases in tsunami-affected villages had an onset date within 14 days of the tsunami. The median ages of case-patients in tsunami-affected and unaffected areas were 54 months and 60 months, respectively (p = 0.471). 36% of cases from tsunami-affected areas were above 60 months of age. Phylogenetic analyses indicated that the virus sequences belonged to genotype D8, which circulated in Tamil Nadu. Conclusion Measles virus circulated in Cuddalore district following the tsunami, although there was no association between the two events. Transmission despite high one-dose vaccination coverage pointed to the limitations of this vaccination strategy. A second opportunity for measles immunization may help reduce measles mortality and morbidity in such areas. Children from 6 months to 14 years of age must be targeted for

  4. Microstrip Antenna Design for Femtocell Coverage Optimization

    Directory of Open Access Journals (Sweden)

    Afaz Uddin Ahmed

    2014-01-01

    Full Text Available A microstrip antenna is designed for multielement antenna coverage optimization in a femtocell network. Interference is the foremost concern for cellular operators in vast commercial deployments of femtocells. Many techniques at the physical, data link and network layers have been analysed and developed to settle the interference issues. A multielement technique with self-configuration features is analysed here for coverage optimization of the femtocell. The paper also focuses on the implementation of the microstrip antenna in the multielement configuration. The antenna is designed for LTE Band 7 using a standard FR4 dielectric substrate. The performance of the proposed antenna in the femtocell application is discussed along with results.

  5. Implementation and Operational Research: Cost and Efficiency of a Hybrid Mobile Multidisease Testing Approach With High HIV Testing Coverage in East Africa.

    Science.gov (United States)

    Chang, Wei; Chamie, Gabriel; Mwai, Daniel; Clark, Tamara D; Thirumurthy, Harsha; Charlebois, Edwin D; Petersen, Maya; Kabami, Jane; Ssemmondo, Emmanuel; Kadede, Kevin; Kwarisiima, Dalsone; Sang, Norton; Bukusi, Elizabeth A; Cohen, Craig R; Kamya, Moses; Havlir, Diane V; Kahn, James G

    2016-11-01

    In 2013-2014, we achieved 89% adult HIV testing coverage using a hybrid testing approach in 32 communities in Uganda and Kenya (SEARCH: NCT01864603). To inform scalability, we sought to determine: (1) overall cost and efficiency of this approach; and (2) costs associated with point-of-care (POC) CD4 testing, multidisease services, and community mobilization. We applied microcosting methods to estimate costs of population-wide HIV testing in 12 SEARCH trial communities. Main intervention components of the hybrid approach are census, multidisease community health campaigns (CHC), and home-based testing for CHC nonattendees. POC CD4 tests were provided for all HIV-infected participants. Data were extracted from expenditure records, activity registers, staff interviews, and time and motion logs. The mean cost per adult tested for HIV was $20.5 (range: $17.1-$32.1) (2014 US$), including a POC CD4 test at $16 per HIV+ person identified. Cost per adult tested for HIV was $13.8 at CHC vs. $31.7 by home-based testing. The cost per HIV+ adult identified was $231 ($87-$1245), with variability due mainly to HIV prevalence among persons tested (ie, HIV positivity rate). The marginal costs of multidisease testing at CHCs were $1.16/person for hypertension and diabetes, and $0.90 for malaria. Community mobilization constituted 15.3% of total costs. The hybrid testing approach achieved very high HIV testing coverage, with POC CD4, at costs similar to previously reported mobile, home-based, or venue-based HIV testing approaches in sub-Saharan Africa. By leveraging HIV infrastructure, multidisease services were offered at low marginal costs.
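
    The relationship between the reported unit costs can be illustrated with a back-of-envelope sketch: cost per HIV-positive adult identified is roughly the cost per adult tested divided by the HIV positivity rate, plus per-positive costs such as the POC CD4 test. The positivity rates below are hypothetical values chosen only to show the sensitivity to prevalence noted in the abstract.

    ```python
    # Back-of-envelope decomposition implied by the abstract: cost per HIV-positive
    # adult identified ~= cost per adult tested / positivity rate + per-positive costs.
    def cost_per_positive(cost_per_tested, positivity_rate, per_positive_extra=0.0):
        return cost_per_tested / positivity_rate + per_positive_extra

    # $20.5 per adult tested and $16 per POC CD4 test are figures from the abstract
    # (2014 US$); the positivity rates are hypothetical.
    print(cost_per_positive(20.5, 0.10, per_positive_extra=16))   # ~221 $ per HIV+ adult
    print(cost_per_positive(20.5, 0.02, per_positive_extra=16))   # ~1041 $ per HIV+ adult
    ```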

  6. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Full Text Available Computer aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM. However, the majority of the existing tongue image analyses and classification methods are based on the low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural network (CNN, we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distribution. We introduce high dispersal and local response normalization operation to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method in tongue image classification for the TCM study.

  7. Threats of school violence in Pennsylvania after media coverage of the Columbine High School massacre: examining the role of imitation.

    Science.gov (United States)

    Kostinsky, S; Bixler, E O; Kettl, P A

    2001-09-01

    Following the April 20, 1999, massacre at Columbine High School, Littleton, Colo, school administrators, law enforcement officials, and the media reported a rash of successive bomb threats and threats of school violence that were attributed to imitation. To establish that the clustering of threats of school violence following the Columbine massacre was initiated by imitation. A database of threats of school violence reported to the Pennsylvania Emergency Management Agency, Harrisburg, during the 50 days following the Columbine incident was examined to determine the daily frequency of threats. To determine factors that predict the occurrence of these threats, counties and school districts in which threats occurred were noted. Pennsylvania school districts reported 354 threats of school violence during the 50 days after the Columbine massacre, far exceeding the 1 or 2 threats per year estimated by school administrators before 1999. The frequency of these threats over time demonstrated a crescendo-decrescendo pattern. Fifty-six percent of the threats were made on or before day 10 after the incident, and more than one third occurred on days 8, 9, and 10. Factors that predicted the likelihood of a school's receiving a threat after the massacre included a greater proportion of white students and larger school enrollment. Successive threats of violence follow a publicized act of school violence. The media should recognize that imitation threats can occur and craft their stories accordingly.

  8. A cheap and non-destructive approach to increase coverage/loading of hydrophilic hydroxide on hydrophobic carbon for lightweight and high-performance supercapacitors

    Science.gov (United States)

    Zhang, Liuyang; Gong, Hao

    2015-01-01

    Carbon-based substrates offer unprecedented advantages in lightweight supercapacitors. However, it is still challenging to achieve high coverage or loading. Different from the traditional belief that a lack of defects or functional groups is the cause of poor growth on carbon-based substrates, we reckon that the major cause is the discrepancy between the hydrophilic nature of the metal oxide/hydroxide and the hydrophobic nature of carbon. To solve this incompatibility, we introduced ethanol into the precursor solution. The method to synthesize nickel copper hydroxide on carbon fiber paper employs only water and ethanol, in addition to nickel acetate and copper acetate. The results revealed good growth and tight adhesion of active materials on carbon fiber paper substrates. The specific capacitance and energy density per total weight of the active material plus substrate (carbon fiber paper, current collector) reached 770 F g−1 and 33 Wh kg−1 (1798 F g−1 and 54 Wh kg−1 per weight of the active materials), owing to the high loading of active material and the light weight of carbon fiber paper. These results signified the achievability of light, cheap and high-performance supercapacitors by an environmental-friendly approach. PMID:26643665

  9. A cheap and non-destructive approach to increase coverage/loading of hydrophilic hydroxide on hydrophobic carbon for lightweight and high-performance supercapacitors

    Science.gov (United States)

    Zhang, Liuyang; Gong, Hao

    2015-12-01

    Carbon-based substrates offer unprecedented advantages in lightweight supercapacitors. However, it is still challenging to achieve high coverage or loading. Different from the traditional belief that a lack of defects or functional groups is the cause of poor growth on carbon-based substrates, we reckon that the major cause is the discrepancy between the hydrophilic nature of the metal oxide/hydroxide and the hydrophobic nature of carbon. To solve this incompatibility, we introduced ethanol into the precursor solution. The method to synthesize nickel copper hydroxide on carbon fiber paper employs only water and ethanol, in addition to nickel acetate and copper acetate. The results revealed good growth and tight adhesion of active materials on carbon fiber paper substrates. The specific capacitance and energy density per total weight of the active material plus substrate (carbon fiber paper, current collector) reached 770 F g-1 and 33 Wh kg-1 (1798 F g-1 and 54 Wh kg-1 per weight of the active materials), owing to the high loading of active material and the light weight of carbon fiber paper. These results signified the achievability of light, cheap and high-performance supercapacitors by an environmental-friendly approach.

  10. Analyzing the Social Networks of High- and Low-Performing Students in Online Discussion Forums

    Science.gov (United States)

    Ghadirian, Hajar; Salehi, Keyvan; Ayub, Ahmad Fauzi Mohd

    2018-01-01

    An ego network is an individual's set of social network relationships with core members. In this study, the ego network parameters of high- and low-performing students in online discussion spaces were compared. The extent to which students' ego networks changed over the course was also analyzed. Participation in 7 weeks of online discussions was…
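
    A minimal sketch of extracting an ego network and a few of its parameters with networkx; the reply edges are hypothetical, and the study's actual forum data and parameter set are not reproduced.

    ```python
    # Build an interaction graph (e.g., who replied to whom in a discussion forum)
    # and extract one student's ego network and some basic parameters.
    import networkx as nx

    # Hypothetical reply edges: (from_student, to_student)
    edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "D")]
    G = nx.Graph(edges)

    ego = nx.ego_graph(G, "A", radius=1)   # "A" plus its direct contacts
    print("ego size (alters):", ego.number_of_nodes() - 1)
    print("ego network density:", nx.density(ego))
    print("degree centrality of A:", nx.degree_centrality(G)["A"])
    ```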

  11. Water quality monitoring for high-priority water bodies in the Sonoran Desert network

    Science.gov (United States)

    Terry W. Sprouse; Robert M. Emanuel; Sara A. Strorrer

    2005-01-01

    This paper describes a network monitoring program for “high priority” water bodies in the Sonoran Desert Network of the National Park Service. Protocols were developed for monitoring selected waters for ten of the eleven parks in the Network. Park and network staff assisted in identifying potential locations of testing sites, local priorities, and how water quality...

  12. High-coverage sequencing and annotated assembly of the genome of the Australian dragon lizard Pogona vitticeps.

    Science.gov (United States)

    Georges, Arthur; Li, Qiye; Lian, Jinmin; O'Meally, Denis; Deakin, Janine; Wang, Zongji; Zhang, Pei; Fujita, Matthew; Patel, Hardip R; Holleley, Clare E; Zhou, Yang; Zhang, Xiuwen; Matsubara, Kazumi; Waters, Paul; Graves, Jennifer A Marshall; Sarre, Stephen D; Zhang, Guojie

    2015-01-01

    The lizards of the family Agamidae are one of the most prominent elements of the Australian reptile fauna. Here, we present a genomic resource built on the basis of a wild-caught male ZZ central bearded dragon Pogona vitticeps. The genomic sequence for P. vitticeps, generated on the Illumina HiSeq 2000 platform, comprised 317 Gbp (179X raw read depth) from 13 insert libraries ranging from 250 bp to 40 kbp. After filtering for low-quality and duplicated reads, 146 Gbp of data (83X) was available for assembly. Exceptionally high levels of heterozygosity (0.85 % of single nucleotide polymorphisms plus sequence insertions or deletions) complicated assembly; nevertheless, 96.4 % of reads mapped back to the assembled scaffolds, indicating that the assembly included most of the sequenced genome. Length of the assembly was 1.8 Gbp in 545,310 scaffolds (69,852 longer than 300 bp), the longest being 14.68 Mbp. N50 was 2.29 Mbp. Genes were annotated on the basis of de novo prediction, similarity to the green anole Anolis carolinensis, Gallus gallus and Homo sapiens proteins, and P. vitticeps transcriptome sequence assemblies, to yield 19,406 protein-coding genes in the assembly, 63 % of which had intact open reading frames. Our assembly captured 99 % (246 of 248) of core CEGMA genes, with 93 % (231) being complete. The quality of the P. vitticeps assembly is comparable or superior to that of other published squamate genomes, and the annotated P. vitticeps genome can be accessed through a genome browser available at https://genomics.canberra.edu.au.
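
    The scaffold N50 quoted above follows the standard definition; a small helper shows how it is computed from scaffold lengths (the example lengths are made up).

    ```python
    # Standard N50: the length L such that scaffolds of length >= L together
    # cover at least half of the total assembly length.
    def n50(lengths):
        lengths = sorted(lengths, reverse=True)
        half = sum(lengths) / 2.0
        running = 0
        for length in lengths:
            running += length
            if running >= half:
                return length

    # Hypothetical scaffold lengths (bp), just to exercise the function.
    print(n50([5000, 4000, 3000, 2000, 1000, 500]))   # -> 4000
    ```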

  13. Space Link Extension Protocol Emulation for High-Throughput, High-Latency Network Connections

    Science.gov (United States)

    Tchorowski, Nicole; Murawski, Robert

    2014-01-01

    New space missions require higher data rates and new protocols to meet these requirements. These high data rate space communication links push the limitations of not only the space communication links, but also of the ground communication networks and protocols which forward user data to remote ground stations (GS) for transmission. The Consultative Committee for Space Data Systems (CCSDS) Space Link Extension (SLE) standard protocol is one protocol that has been proposed for use by the NASA Space Network (SN) Ground Segment Sustainment (SGSS) program. New protocol implementations must be carefully tested to ensure that they provide the required functionality, especially because of the remote nature of spacecraft. The SLE protocol standard has been tested in the NASA Glenn Research Center's SCENIC Emulation Lab in order to observe its operation under realistic network delay conditions. More specifically, the delay between the NASA Integrated Services Network (NISN) and the spacecraft has been emulated. The round trip time (RTT) delay for the continental NISN network has been shown to be up to 120 ms; as such, the SLE protocol was tested with network delays ranging from 0 ms to 200 ms. Both a base network condition and an SLE connection were tested with these RTT delays, and the reactions of both network tests to the delay conditions were recorded. Throughput for both of these links was set at 1.2 Gbps. The results will show that, in the presence of realistic network delay, the SLE link throughput is significantly reduced, while the base network throughput remained at the 1.2 Gbps specification. The decrease in SLE throughput has been attributed to the implementation's use of blocking calls. The decrease in throughput is not acceptable for high data rate links, as the link requires a constant data flow in order for spacecraft and ground radios to stay synchronized, unless significant data is queued at the ground station. In cases where queuing the data is not an option
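
    The throughput collapse attributed to blocking calls can be illustrated with the usual window/RTT bound: a transfer that waits for an acknowledgement after each block can move at most one block per round trip. The 1 MB block size below is a hypothetical value, not taken from the SLE implementation under test.

    ```python
    # Why blocking (stop-and-wait style) transfers collapse over long RTTs:
    # effective throughput is bounded by (outstanding data per round trip) / RTT.
    def blocking_throughput_gbps(block_size_bytes, rtt_s):
        return block_size_bytes * 8 / rtt_s / 1e9

    BLOCK = 1_000_000          # hypothetical 1 MB transferred per blocking call
    for rtt_ms in (0.1, 50, 120, 200):
        print(f"RTT {rtt_ms:>5} ms -> {blocking_throughput_gbps(BLOCK, rtt_ms / 1000):6.2f} Gbps")

    # A 1.2 Gbps link needs roughly 1.2e9 * RTT / 8 bytes in flight to stay full,
    # e.g. about 18 MB at 120 ms; with less in flight the link idles between blocks.
    ```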

  14. Nationwide network of total solar eclipse high altitude balloon flights

    Science.gov (United States)

    Des Jardins, A. C.

    2017-12-01

    Three years ago we envisioned tapping into the strength of the National Space Grant Program to make the most of a rare astronomical event to engage the general public through education and to create meaningful long-lasting partnerships with other private and public entities. We believe strongly in giving student participants career-making opportunities through the use of the most cutting edge tools, resources, and communication. The NASA Space Grant network was in a unique position to engage the public in the eclipse in an awe-inspiring and educational way at a surprisingly small cost. In addition to public engagement, the multidisciplinary project presented an in-depth hands-on learning opportunity for the thousands of student participants. The project used a network of high altitude ballooning teams positioned along the path of totality from Oregon to South Carolina to conduct coordinated collaborative activities during the eclipse. These activities included 1) capturing and streaming live video of the eclipse from near space, 2) partnering with NASA Ames on a space biology experiment, and 3) conducting high-resolution atmospheric radiosonde measurements. This presentation will summarize the challenges, results, lessons learned, and professional evaluation from developing, training, and coordinating the collaboration. Details of the live streaming HD video and radiosonde activities are described in separate submissions to this session.

  15. Quasi-Optical Network Analyzers and High-Reliability RF MEMS Switched Capacitors

    Science.gov (United States)

    Grichener, Alexander

    The thesis first presents a 2-port quasi-optical scalar network analyzer consisting of a transmitter and receiver, both built in planar technology. The network analyzer is based on a Schottky-diode mixer integrated inside a planar antenna and fed differentially by a CPW transmission line. The antenna is placed on an extended hemispherical high-resistivity silicon substrate lens. The LO signal is swept from 3-5 GHz and high-order harmonic mixing in both up- and down-conversion mode is used to realize the 15-50 GHz RF bandwidth. The network analyzer resulted in a dynamic range of greater than 40 dB and was successfully used to measure a frequency selective surface with a second-order bandpass response. Furthermore, the system was built with circuits and components for easy scaling to millimeter-wave frequencies, which is the primary motivation for this work. The application areas for a millimeter and submillimeter-wave network analyzer include material characterization and art diagnostics. The second project presents several RF MEMS switched capacitors designed for high-reliability operation and suitable for tunable filters and reconfigurable networks. The first switched capacitor resulted in a digital capacitance ratio of 5 and an analog capacitance ratio of 5-9. The analog tuning of the down-state capacitance is enhanced by a positive vertical stress gradient in the beam, making it ideal for applications that require precision tuning. A thick electroplated beam resulted in Q greater than 100 at C- to X-band frequencies, and power handling of 0.6-1.1 W. The design also minimized charging in the dielectric, resulting in excellent reliability performance even under hot-switched and high power (1 W) conditions. The second switched capacitor was designed without any dielectric to minimize charging. The device was hot-switched at 1 W of RF power for greater than 11 billion cycles with virtually no change in the C-V curve. The final project presents a 7-channel
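
    The 15-50 GHz RF coverage obtained from a 3-5 GHz LO by high-order harmonic mixing can be sanity-checked with a short script that lists the band reached at each harmonic number (the IF offset is ignored here for simplicity).

    ```python
    # Sanity check of harmonic-mixing coverage: an LO swept over 3-5 GHz, used at
    # harmonic number n, reaches RF frequencies of roughly n*LO (IF offset ignored).
    LO_MIN, LO_MAX = 3.0, 5.0          # GHz
    for n in range(3, 11):
        print(f"harmonic {n:2d}: {n * LO_MIN:4.0f}-{n * LO_MAX:4.0f} GHz")

    # Harmonics 3 through 10 together span 9-50 GHz, covering the 15-50 GHz band.
    ```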

  16. The WET Coverage - How Well Do We Do?

    Directory of Open Access Journals (Sweden)

    Solheim Jan-Erik

    2003-06-01

    Full Text Available The Whole Earth Telescope collaboration is built solidly on the interest of its participants. One of the goals of the collaboration is to produce a high signal-to-noise light curve, as continuous as possible, for a selected target. During its nearly 15 years of existence, the operation of the network has been based on the local funds that members have been able to provide for their own participation, in addition to NSF grants to run the headquarters activities. This has led to a very uneven geographical distribution of participating groups and observatories. An analysis of the coverage of some of the last WET runs shows that we still have large holes in the coverage, which leads to aliasing and loss of precision in our final products.

  17. National High Frequency Radar Network (hfrnet) and Pacific Research Efforts

    Science.gov (United States)

    Hazard, L.; Terrill, E. J.; Cook, T.; de Paolo, T.; Otero, M. P.; Rogowski, P.; Schramek, T. A.

    2016-12-01

    The U.S. High Frequency Radar Network (HFRNet) has been in operation for over ten years with representation from 31 organizations spanning academic institutions, state and local government agencies, and private organizations. HFRNet currently holds a collection from over 130 radar installations totaling over 10 million records of surface ocean velocity measurements. HFRNet is a primary example of inter-agency and inter-institutional partnerships for improving oceanographic research and operations. HF radar derived surface currents have been used in several societal applications including coastal search and rescue, oil spill response, water quality monitoring and marine navigation. Central to the operational success of the large scale network is an efficient data management, storage, access, and delivery system. The networking of surface current mapping systems is characterized by a tiered structure that extends from the individual field installations to local regional operations maintaining multiple sites and on to centralized locations aggregating data from all regions. The data system development effort focuses on building robust data communications from remote field locations (sites) for ingestion into the data system via data on-ramps (Portals or Site Aggregators) to centralized data repositories (Nodes). Centralized surface current data enables the aggregation of national surface current grids and allows for ingestion into displays, management tools, and models. The Coastal Observing Research and Development Center has been involved in international relationships and research in the Philippines, Palau, and Vietnam. CORDC extends this IT architecture of surface current mapping data systems leveraging existing developments and furthering standardization of data services for seamless integration of higher level applications. Collaborations include the Philippine Atmospheric Geophysical and Astronomical Services Administration (PAGASA), The Coral Reef Research

  18. Evaluation of Coverage and Barriers to Access to MAM Treatment in West Pokot County, Kenya

    International Nuclear Information System (INIS)

    Basquin, Cecile; Imelda, Awino; Gallagher, Maureen

    2014-01-01

    Full text: Despite an increased number of nutrition treatment coverage assessments being conducted, they often focus on Severe Acute Malnutrition (SAM) treatment. In a recent experience in Kenya, Action Against Hunger | ACF International (ACF) conducted a coverage assessment to evaluate access to SAM and Moderate Acute Malnutrition (MAM) treatment. ACF has supported the Ministry of Health (MoH) in delivering SAM and MAM treatment at the health facility level through an Integrated Management of Acute Malnutrition (IMAM) programme in West Pokot county since 2011. In order to evaluate the coverage of the Outpatient Therapeutic Programme (OTP) and Supplementary Feeding Programme (SFP) components, the Simplified Lot Quality Assurance Sampling Evaluation of Access and Coverage (SLEAC) methodology was used. The goals of the coverage assessment were i) to estimate coverage for OTP and SFP; ii) to identify barriers to access to SAM and MAM treatment; iii) to evaluate whether any differences exist between barriers to access to SAM versus MAM treatment, as SFP coverage and uptake of MAM services had never been assessed before; and iv) to build local capacities in assessing coverage and to provide recommendations for the MoH-led IMAM programme. With the support of the Coverage Monitoring Network (CMN), ACF led the SLEAC assessment as part of an on-the-job training exercise for the MoH and partners in July 2013, covering all of West Pokot county. SLEAC is a rapid and low-resource survey method that uses a three-tier classification approach to evaluate and classify coverage, i.e., low coverage: < 20%; moderate coverage: 20%-50%; and high coverage: > 50%. In the first sampling stage, villages in each of the four sub-counties were randomly selected using systematic sampling. In the second sampling stage, in order to also assess MAM coverage, a house-to-house approach was applied to identify all or nearly all acutely malnourished children using a Mid Upper Arm Circumference (MUAC) tape and identification of bilateral
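
    The three-tier rule quoted above can be written as a small helper. Note that this is a simplification for illustration: SLEAC itself classifies coverage from case counts against decision rules rather than from a single point estimate.

    ```python
    # Simplified illustration of the three-tier coverage classification quoted above.
    def classify_coverage(covered_cases, total_cases):
        coverage = covered_cases / total_cases
        if coverage < 0.20:
            return "low"
        elif coverage <= 0.50:
            return "moderate"
        return "high"

    print(classify_coverage(6, 40))    # 15%  -> low
    print(classify_coverage(14, 40))   # 35%  -> moderate
    print(classify_coverage(24, 40))   # 60%  -> high
    ```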

  19. High level cognitive information processing in neural networks

    Science.gov (United States)

    Barnden, John A.; Fields, Christopher A.

    1992-01-01

    Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized here, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and also serves to expand the horizons of the artificial neural net field.

  20. High Energy Physics Computer Networking: Report of the HEPNET Review Committee

    International Nuclear Information System (INIS)

    1988-06-01

    This paper discusses the computer networks available to high energy physics facilities for transmission of data. Topics covered in this paper are: Existing and planned networks and HEPNET requirements

  1. Interpenetrated polymer networks based on commercial silicone elastomers and ionic networks with high dielectric permittivity and self-healing properties

    DEFF Research Database (Denmark)

    Ogliani, Elisa; Yu, Liyun; Skov, Anne Ladegaard

    ... the applicability. One method used to avoid this limitation is to increase the dielectric permittivity of the material in order to improve the actuation response at a given field. Recently, interpenetrating polymer networks (IPNs) based on covalently cross-linked commercial silicone elastomers and ionic networks ... from amino- and carboxylic acid-functional silicones have been designed [2] (Figure 1). This novel system provides both the mechanical stability and the high breakdown strength given by the silicone part of the IPNs and the high permittivity and the softening effect of the ionic network. Thus ... 1 Hz), and the commercial elastomers RT625 and LR3043/30 provide the best viscoelastic properties to the systems, since they maintain low viscous losses upon addition of the ionic network. The values of the breakdown strength in all cases remain higher than that of the reference pure PDMS network (ranging ...

  2. Robust Learning of High-dimensional Biological Networks with Bayesian Networks

    Science.gov (United States)

    Nägele, Andreas; Dejori, Mathäus; Stetter, Martin

    Structure learning of Bayesian networks applied to gene expression data has become a potentially useful method to estimate interactions between genes. However, the NP-hardness of Bayesian network structure learning renders the reconstruction of the full genetic network with thousands of genes unfeasible. Consequently, the maximal network size is usually restricted dramatically to a small set of genes (corresponding with variables in the Bayesian network). Although this feature reduction step makes structure learning computationally tractable, on the downside, the learned structure might be adversely affected due to the introduction of missing genes. Additionally, gene expression data are usually very sparse with respect to the number of samples, i.e., the number of genes is much greater than the number of different observations. Given these problems, learning robust network features from microarray data is a challenging task. This chapter presents several approaches tackling the robustness issue in order to obtain a more reliable estimation of learned network features.

  3. Phrenic motoneurons: output elements of a highly organized intraspinal network.

    Science.gov (United States)

    Ghali, Michael George Zaki

    2018-03-01

    pontomedullary respiratory network generates the respiratory pattern and relays it to bulbar and spinal respiratory motor outputs. The phrenic motor system controlling diaphragm contraction receives and processes descending commands to produce orderly, synchronous, and cycle-to-cycle-reproducible spatiotemporal firing. Multiple investigators have studied phrenic motoneurons (PhMNs) in an attempt to shed light on local mechanisms underlying phrenic pattern formation. I and colleagues (Marchenko V, Ghali MG, Rogers RF. Am J Physiol Regul Integr Comp Physiol 308: R916-R926, 2015.) recorded PhMNs in unanesthetized, decerebrate rats and related their activity to simultaneous phrenic nerve (PhN) activity by creating a time-frequency representation of PhMN-PhN power and coherence. On the basis of their temporal firing patterns and relationship to PhN activity, we categorized PhMNs into three classes, each of which emerges as a result of intrinsic biophysical and network properties and organizes the orderly contraction of diaphragm motor fibers. For example, early inspiratory diaphragmatic activation by the early coherent burst generated by high-frequency PhMNs may be necessary to prime it to overcome its initial inertia. We have also demonstrated the existence of a prominent role for local intraspinal inhibitory mechanisms in shaping phrenic pattern formation. The objective of this review is to relate and synthesize recent findings with those of previous studies with the aim of demonstrating that the phrenic nucleus is a region of active local processing, rather than a passive relay of descending inputs.

  4. High Resolution Flash Flood Forecasting Using a Wireless Sensor Network in the Dallas-Fort Worth Metroplex

    Science.gov (United States)

    Bartos, M. D.; Kerkez, B.; Noh, S.; Seo, D. J.

    2017-12-01

    In this study, we develop and evaluate a high resolution urban flash flood monitoring system using a wireless sensor network (WSN), a real-time rainfall-runoff model, and spatially-explicit radar rainfall predictions. Flooding is the leading cause of natural disaster fatalities in the US, with flash flooding in particular responsible for a majority of flooding deaths. While many riverine flood models have been operationalized into early warning systems, there is currently no model that is capable of reliably predicting flash floods in urban areas. Urban flash floods are particularly difficult to model due to a lack of rainfall and runoff data at appropriate scales. To address this problem, we develop a wide-area flood-monitoring wireless sensor network for the Dallas-Fort Worth metroplex, and use this network to characterize rainfall-runoff response over multiple heterogeneous catchments. First, we deploy a network of 22 wireless sensor nodes to collect real-time stream stage measurements over catchments ranging from 2-80 km2 in size. Next, we characterize the rainfall-runoff response of each catchment by combining stream stage data with gage and radar-based precipitation measurements. Finally, we demonstrate the potential for real-time flash flood prediction by joining the derived rainfall-runoff models with real-time radar rainfall predictions. We find that runoff response is highly heterogeneous among catchments, with large variabilities in runoff response detected even among nearby gages. However, when spatially-explicit rainfall fields are included, spatial variability in runoff response is largely captured. This result highlights the importance of increased spatial coverage for flash flood prediction.
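
    A sketch of the kind of lumped rainfall-runoff relationship such a system might fit, here a single linear reservoir driven by a rainfall series. The runoff coefficient, storage constant and rainfall values are hypothetical, not parameters fitted to the Dallas-Fort Worth catchments.

    ```python
    # Lumped linear-reservoir rainfall-runoff model of the kind used to relate
    # radar rainfall to stream response; all parameters and inputs are hypothetical.
    import numpy as np

    def linear_reservoir_runoff(rain_mm_hr, area_km2, runoff_coeff=0.4, k_hr=3.0, dt_hr=0.25):
        """Return outflow (m^3/s) for a rainfall time series (mm/hr)."""
        storage = 0.0                      # m^3
        out = []
        for r in rain_mm_hr:
            inflow = runoff_coeff * (r / 1000.0) * (area_km2 * 1e6) / 3600.0  # m^3/s
            storage += inflow * dt_hr * 3600.0
            q = storage / (k_hr * 3600.0)  # linear reservoir: Q = S / k
            storage -= q * dt_hr * 3600.0
            out.append(q)
        return np.array(out)

    rain = np.concatenate([np.zeros(8), np.full(8, 20.0), np.zeros(32)])  # 2 h of 20 mm/hr
    q = linear_reservoir_runoff(rain, area_km2=10.0)
    print("peak flow: %.1f m^3/s at step %d" % (q.max(), q.argmax()))
    ```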

  5. Status of networking for high energy physics in the United States

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1985-06-01

    Networks are used extensively for High Energy Physics in the United States. Although the networks have grown in an ad hoc manner, with connections typically being made to satisfy the needs of one detector group, they now encompass a large fraction of the US HEP community in one form or another. This paper summarizes the current status of and experience with these networks.

  6. State budget transfers to Health Insurance Funds for universal health coverage: institutional design patterns and challenges of covering those outside the formal sector in Eastern European high-income countries.

    Science.gov (United States)

    Vilcu, Ileana; Mathauer, Inke

    2016-01-15

    Many countries from the European region, which moved from a government financed and provided health system to social health insurance, would have had the risk of moving away from universal health coverage if they had followed a "traditional" approach. The Eastern European high-income countries studied in this paper managed to avoid this potential pitfall by using state budget revenues to explicitly pay health insurance contributions on behalf of certain (vulnerable) population groups who have difficulties to pay these contributions themselves. The institutional design aspects of their government revenue transfer arrangements are analysed, as well as their impact on universal health coverage progress. This regional study is based on literature review and review of databases for the performance assessment. The analytical framework focuses on the following institutional design features: rules on eligibility for contribution exemption, financing and pooling arrangements, and purchasing arrangements and benefit package design. More commonalities than differences can be identified across countries: a broad range of groups eligible for exemption from payment of health insurance contributions, full state contributions on behalf of the exempted groups, mostly mandatory participation, integrated pools for both the exempted and contributors, and relatively comprehensive benefit packages. In terms of performance, all countries have high total population coverage rates, but there are still challenges regarding financial protection and access to and utilization of health care services, especially for low income people. Overall, government revenue transfer arrangements to exempt vulnerable groups from contributions are one option to progress towards universal health coverage.

  7. Energy efficient mechanisms for high-performance Wireless Sensor Networks

    Science.gov (United States)

    Alsaify, Baha'adnan

    2009-12-01

    Due to recent advances in microelectronics, the development of low-cost, small, and energy-efficient devices has become possible. Those advances led to the birth of Wireless Sensor Networks (WSNs). WSNs consist of a large set of sensor nodes equipped with communication capabilities, scattered in the area to monitor. Researchers focus on several aspects of WSNs. Such aspects include the quality of service the WSNs provide (data delivery delay, accuracy of data, etc.), the scalability of the network to contain thousands of sensor nodes (the terms node and sensor node are used interchangeably), the robustness of the network (allowing the network to work even if a certain percentage of nodes fails), and making the energy consumption in the network as low as possible to prolong the network's lifetime. In this thesis, we present an approach that can be applied to the sensing devices scattered in an area for sensor networks. This work uses the well-known approach of wake-up scheduling to extend the network's lifespan. We designed a scheduling algorithm that reduces the upper bound on the delay experienced by reported data, while at the same time keeping the advantages offered by wake-up scheduling -- the reduction in energy consumption, which leads to an increase in the network's lifetime. The wake-up schedule is based on the location of the node relative to its neighbors and its distance from the Base Station (the terms Base Station and sink are used interchangeably). We apply the proposed method to a set of simulated nodes using the "ONE Simulator". We test the performance of this approach against three other approaches -- the Direct Routing technique, the well-known LEACH algorithm, and a multi-parent scheduling algorithm. We demonstrate a good improvement in the network's quality of service and a reduction in the consumed energy.
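
    The general idea of location-aware wake-up scheduling can be sketched as below: nodes wake in slots ordered by hop distance from the base station, so a report forwarded hop-by-hop never waits a full cycle at any relay. This is an illustrative simplification, not the thesis's actual algorithm; the slot count and topology are made up.

    ```python
    # Illustrative sketch only (not the thesis's exact algorithm): nodes wake in
    # slots ordered by hop distance to the base station, so data collected at
    # depth d can be forwarded at depth d-1 in the very next slot.
    NUM_SLOTS = 8   # hypothetical slots per wake/sleep cycle

    def wake_slot(hop_distance, num_slots=NUM_SLOTS):
        # Farther nodes wake first; each parent wakes one slot after its children.
        return (num_slots - 1 - hop_distance) % num_slots

    # Hypothetical topology: node -> hop distance from the base station
    hops = {"BS": 0, "n1": 1, "n2": 1, "n3": 2, "n4": 2, "n5": 3}
    for node, h in hops.items():
        print(f"{node}: hop {h} -> wakes in slot {wake_slot(h)}")
    ```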

  8. A High Elevation Climate Monitoring Network: Strategy and Progress

    Science.gov (United States)

    Redmond, K. T.

    2004-12-01

    Populations living at low elevations are critically dependent on processes and resources at higher elevations. Most western U.S. streamflow begins as mountain snowmelt. Observational evidence and theoretical considerations indicate that climate variations in a given geographic domain can and do exhibit different characteristics and temporal behavior at different elevations. Subtleties in the interplay between topography and airflow can significantly affect precipitation patterns. However, there are very few systematic, long-term, in-situ, climate quality, high-altitude observational time series with hourly resolution for the western North American mountains to investigate these issues at the proper scales. Climate at high elevations is severely undersampled, a consequence of the harsh physical environment, and demands on sensors, maintenance, access, communications, time, and budgets. Costs are higher, human presence is limited, AC power is often not available, and there are permitting and aesthetic constraints. The observational strategy should include these main elements: 1) All major mountain ranges should be sampled. 2) Along-axis and cross-axis sampling for major mountain chains. 3) Approximately 5-10 sites per state (1 per 56000 sq km to 1 per 28000 sq km). 4) Highest sites as high as possible within each state, but at both high relative and absolute elevations. 5) Free air exposures at higher sites. 6) Utilize existing measurements and networks, and extend existing records, when possible. 7) AC power to prevent ice/rime when practical. 8) Temperature, relative humidity, wind speed and direction, solar radiation as main elements, others as feasible. 9) Hourly readings, and real time communication whenever possible. 10) Absence of local artificial influences, site stable for next 5-10 decades. 11) Current and historical measurements accessible via World Wide Web when possible. 12) Hydro measurements (precipitation, snow water content and depth) are not

  9. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large-scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large-scale network-centric distributed systems continue to grow as one of the most important topics in computing, communication, and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  10. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends regarding the network architecture and the various protocols that govern data transfers and guarantee reliable communication among all computers. A hierarchical network structure has been proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In such an architecture, all computers at the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of the computer network can be considered a local area computer network (LACN) that can be used in a nuclear power plant control system, since such a plant has geographically dispersed subsystems. Network expansion would be straightforward, by extending the common channel for each added computer (HOST). All the nodes are designed around a microprocessor chip to provide the required intelligence. A node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer. The private section would naturally tend to have some variations in its hardware details to match the requirements of individual host computers. fig 7

  11. Smart Antennas for Battlefield Multimedia Wireless Networks with Dual Use Applications

    National Research Council Canada - National Science Library

    Paulraj, Arogyaswami

    1998-01-01

    Work under this contract focused on algorithms for space-time processing (STP) that address the critical needs of the wireless battlefield network, including high-speed data, enhanced coverage and capacity, low power, and antijam capability...

  12. Examination of the Spatial Correlation Among Gauge Precipitation Data and Gridded Radar Data for the Determination of Sufficient in-Situ Network Coverage

    Science.gov (United States)

    Gassert, K.; Kunkel, K.; Nelson, B. R.; Prat, O. P.; Stevens, S. E.

    2014-12-01

    Monitoring of pressure buildup can provide explicit information on reservoir integrity and is an appealing tool; however, pressure variation depends on a variety of factors, causing high uncertainty in pressure predictions. This work evaluated pressurization of a reservoir system in the presence of leakage pathways and explored the effects of compartmentalization of the reservoir, utilizing design of experiments (Definitive Screening, Box-Behnken, Central Composite, and Latin Hypercube designs) and response surface methods. Two models were developed: 1) an idealized injection scenario used to evaluate the performance of multiple designs, and 2) a complex injection scenario implementing the best-performing design to investigate pressurization of the reservoir system. A holistic evaluation of scenario 1 determined that the Central Composite design would be used for the complex injection scenario. The complex scenario evaluated 5 risk factors: reservoir, seal, leakage pathway and fault permeabilities, and the horizontal position of the pathway. A total of 60 response surface models (RSMs) were developed for the complex scenario, with an average R2 of 0.95 and an NRMSE of 0.067. Sensitivity to the input factors was dynamic through space and time; at the earliest time (0.05 years) the reservoir permeability was dominant, and for later times (>0.5 years) the fault permeability became dominant for all locations. The RSMs were then used to conduct a Monte Carlo analysis to further analyze pressurization risks, identifying the P10, P50, and P90 values. This identified the in-zone (lower) P90 values as 2.16, 1.77, and 1.53 MPa and the above-zone values as 1.35, 1.23, and 1.09 MPa for monitoring locations 1, 2, and 3, respectively. In summary, the design of experiments and response surface methods allowed an efficient sensitivity and uncertainty analysis to be conducted, permitting a complete evaluation of the pressurization across the entire parameter space.

  13. Large scale network management. Condition indicators for network stations, high voltage power conductions and cables

    International Nuclear Information System (INIS)

    Eggen, Arnt Ove; Rolfseng, Lars; Langdal, Bjoern Inge

    2006-02-01

    In the Strategic Institute Programme (SIP) 'Electricity Business enters e-business (eBee)', SINTEF Energy Research has developed competency that can help the energy business employ ICT systems and computer technology in an improved way. Large scale network management is now a reality, and it is characterized by large entities with increasing demands on efficiency and quality. These are goals that can only be reached by using ICT systems and computer technology more cleverly than is the case today. At the same time, it is important that the knowledge held by experienced co-workers is consulted when formal rules for evaluations and decisions in ICT systems are developed. In this project, an analytical concept for evaluating network information held in different ICT systems has been developed. The method for estimating the indicators that describe different conditions in a network is general, and indicators can be made to fit different decision levels and network levels, for example network station, transformer circuit, distribution network and regional network. Moreover, the indicators can contain information about technical aspects, economy and HSE. An indicator consists of an indicator name, an indicator value, and an indicator colour based on a traffic-light analogy to indicate a condition or quality for the indicator. The values of one or more indicators give an impression of important conditions in the network, and form the basis for deciding where more detailed evaluations have to be conducted before a final decision on, for example, maintenance or renewal is made. A prototype has been developed for testing the new method. The prototype has been developed in Excel and is especially designed for analysing transformer circuits in a distribution network. However, the method is a general one, and well suited for implementation in a commercial computer system.
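
    A condition indicator as described above (name, value, traffic-light colour) can be sketched as a small data structure with threshold-based colour assignment. The indicator name and thresholds are hypothetical, not taken from the SINTEF prototype.

    ```python
    # Sketch of a condition indicator: a name, a value, and a traffic-light colour
    # derived from thresholds. Names and thresholds are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        value: float
        green_max: float     # value <= green_max  -> "green"
        yellow_max: float    # value <= yellow_max -> "yellow", else "red"

        @property
        def colour(self) -> str:
            if self.value <= self.green_max:
                return "green"
            if self.value <= self.yellow_max:
                return "yellow"
            return "red"

    ind = Indicator(name="transformer load factor", value=0.87, green_max=0.7, yellow_max=0.9)
    print(ind.name, ind.value, ind.colour)   # -> yellow: inspect before deciding on renewal
    ```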

  14. Fibre and components induced limitations in high capacity optical networks

    DEFF Research Database (Denmark)

    Peucheret, Christophe

    2003-01-01

    The design of future all-optical networks relies on the knowledge of the physical layer transport properties. In this thesis, we focus on two types of system impairments: those induced by the non-ideal transfer functions of optical filters to be found in network elements such as optical add-drop multiplexers (OADM) and optical cross-connects (OXC), as well as those due to the interaction of group-velocity dispersion, optical fibre non-linearities and accumulation of amplifier noise in the transmission path. The dispersion of fibre optics components is shown to limit their cascadability. Dispersion ... design in order to maximise the spectral efficiency in a four add-drop node ring network. The concept of "normalised transmission sections" is introduced in order to ease the dimensioning of transparent domains in future all-optical networks. Normalised sections based on standard single mode fibre (SMF ...

  15. Opportunities and Challenges of AC/DC Transmission Network Planning Considering High Proportion Renewable Energy

    Directory of Open Access Journals (Sweden)

    Arslan Habib

    2018-03-01

    Full Text Available The temporal and spatial distribution characteristics of a future high proportion of renewable energy sources will bring unprecedented challenges to power system operation and planning; the basic form and operating characteristics of the power system will change fundamentally. Based on the current state of research at home and abroad, this paper expounds four scientific problems of transmission network planning with a high proportion of renewable energy: network-source collaborative planning, flexible transmission network planning, coordinated planning of the transmission and distribution networks, and comprehensive evaluation and decision-making methods for transmission planning schemes. The paper puts forward research ideas and a framework for transmission network planning that considers a high proportion of renewable energy. Finally, the opportunities and challenges for future grid-connected transmission networks with a high proportion of renewable energy are presented.

  16. Application of GPS in a high precision engineering survey network

    International Nuclear Information System (INIS)

    Ruland, R.; Leick, A.

    1985-04-01

    A GPS satellite survey was carried out with the Macrometer to support construction at the Stanford Linear Accelerator Center (SLAC). The network consists of 16 stations, of which 9 stations were part of the Macrometer network. The horizontal and vertical accuracy of the GPS survey is estimated to be 1 to 2 mm and 2 to 3 mm, respectively. The horizontal accuracy of the terrestrial survey, consisting of angles and distances, equals that of the GPS survey only in the "loop" portion of the network. All stations are part of a precise level network. The ellipsoidal heights obtained from the GPS survey and the orthometric heights of the level network are used to compute geoid undulations. A geoid profile along the linac was computed by the National Geodetic Survey in 1963. This profile agreed with the observed geoid within the standard deviation of the GPS survey. Angles and distances were adjusted together (TERRA), and all terrestrial observations were combined with the GPS vector observations in a combination adjustment (COMB). A comparison of COMB and TERRA revealed systematic errors in the terrestrial solution. A scale factor of 1.5 ppm ± 0.8 ppm was estimated. This value is of the same magnitude as the overall horizontal accuracy of both networks. 10 refs., 3 figs., 5 tabs
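
    The comparison of GPS and levelling results rests on the standard relation between the two height systems: geoid undulation N = h - H, where h is the ellipsoidal height from GPS and H the orthometric height from levelling. A one-line helper with made-up station heights:

    ```python
    # Geoid undulation from the two height systems compared in the survey:
    # N = ellipsoidal height h (GPS) - orthometric height H (levelling).
    def geoid_undulation(h_ellipsoidal_m, H_orthometric_m):
        return h_ellipsoidal_m - H_orthometric_m

    # Hypothetical station heights (metres), only to illustrate the relation.
    print(geoid_undulation(h_ellipsoidal_m=52.341, H_orthometric_m=84.120))  # N ~ -31.78 m
    ```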

  17. [Contribution of the detection of IgA antibodies to the laboratory diagnosis of mumps in the population with a high vaccination coverage].

    Science.gov (United States)

    Limberková, R; Smíšková, D; Havlíčková, M; Herrmannová, K; Lexová, P; Malý, M

    2015-03-01

    Serological diagnosis of epidemic mumps can be difficult in vaccinated persons, particularly due to the absence of specific IgM antibodies. The aim was to find whether adding the detection of IgA antibodies to the currently used routine serological diagnosis of mumps (detection of IgM and IgG antibodies in an acute serum sample) would make the serological diagnosis of mumps more effective in a population with a high vaccination coverage. At the same time, ELISA kits for the detection of early IgA and IgM antibodies against the mumps virus were compared and statistical analysis of the results was performed. Sixty-four acute sera from patients with laboratory confirmed diagnosis of mumps were included in the study. Clinical specimens were collected at the onset of clinical symptoms. To test the sera, the MASTAZYME ELISA Mumps IgA kit (MAST DIAGNOSTICA, Germany) with the MASTSORB sorbent (RF and IgG) and Enzygnost Anti-Parotitis-Virus/IgM kit (Siemens, Germany) were used. A panel of 121 acute sera with no epidemiological link to mumps virus served as specificity controls for the IgA assay. The epidemiological data were derived from the EPIDAT system. The level of agreement was assessed using McNemar's test and Cohen's kappa coefficient. The Stata 9.2 software (Stata Corp LP, College Station, USA) was used for statistical analysis. The detection of IgA and IgM antibodies against the mumps virus yielded concordant results in 50/64 acute sera, 32 positive and 18 negative, i.e. an agreement of 78.12%. Of the remaining 14 samples, 13 were only IgA positive and one was only IgM positive. The controls showed non-specific IgA positivity in 5/121 samples which indicates a 96% specificity. The absence of specific IgM antibodies against mumps virus is relatively often seen in vaccinated individuals; nevertheless, the test is routinely used in patients with suspected active infection. The test for IgA antibodies, which is not routinely performed, significantly increased the
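
    The agreement figures quoted above (32 sera positive in both assays, 18 negative in both, 13 IgA-only and 1 IgM-only, out of 64) are enough to reproduce the summary statistics. The sketch below shows one way to compute Cohen's kappa and an exact McNemar test from such a 2x2 table; it is not the authors' Stata code, and the variable names are arbitrary.

        from math import comb

        # 2x2 agreement table taken from the abstract (IgA result vs. IgM result).
        a, b = 32, 13    # a: IgA+/IgM+,  b: IgA+/IgM-
        c, d = 1, 18     # c: IgA-/IgM+,  d: IgA-/IgM-
        n = a + b + c + d

        # Observed vs. chance agreement -> Cohen's kappa.
        p_o = (a + d) / n
        p_e = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
        kappa = (p_o - p_e) / (1 - p_e)

        # Exact (binomial) McNemar test on the discordant pairs b and c.
        m = b + c
        p_mcnemar = min(1.0, 2 * sum(comb(m, i) for i in range(min(b, c) + 1)) / 2 ** m)

        print(f"agreement = {p_o:.4f}, kappa = {kappa:.3f}, McNemar p = {p_mcnemar:.4f}")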

  18. Verifying cell loss requirements in high-speed communication networks

    Directory of Open Access Journals (Sweden)

    Kerry W. Fendick

    1998-01-01

    Full Text Available In high-speed communication networks it is common to have requirements of very small cell loss probabilities due to buffer overflow. Losses are measured to verify that the cell loss requirements are being met, but it is not clear how to interpret such measurements. We propose methods for determining whether or not cell loss requirements are being met. A key idea is to look at the stream of losses as successive clusters of losses. Often clusters of losses, rather than individual losses, should be regarded as the important “loss events”. Thus we propose modeling the cell loss process by a batch Poisson stochastic process. Successive clusters of losses are assumed to arrive according to a Poisson process. Within each cluster, cell losses do not occur at a single time, but the distance between losses within a cluster should be negligible compared to the distance between clusters. Thus, for the purpose of estimating the cell loss probability, we ignore the spaces between successive cell losses in a cluster of losses. Asymptotic theory suggests that the counting process of losses initiating clusters often should be approximately a Poisson process even though the cell arrival process is not nearly Poisson. The batch Poisson model is relatively easy to test statistically and fit; e.g., the batch-size distribution and the batch arrival rate can readily be estimated from cell loss data. Since batch (cluster) sizes may be highly variable, it may be useful to focus on the number of batches instead of the number of cells in a measurement interval. We also propose a method for approximately determining the parameters of a special batch Poisson cell loss with geometric batch-size distribution from a queueing model of the buffer content. For this step, we use a reflected Brownian motion (RBM) approximation of a G/D/1/C queueing model. We also use the RBM model to estimate the input burstiness given the cell loss rate. In addition, we use the RBM model to
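
    A rough illustration of the clustering idea described above: group recorded loss times into clusters whenever the gap between consecutive losses is below a chosen threshold, then estimate the Poisson rate of clusters and a geometric batch-size parameter from the data. The loss-time generator and the gap threshold below are invented placeholders, not part of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic cell-loss timestamps (seconds): clusters of closely spaced losses
        # around Poisson-distributed congestion events (placeholder data).
        event_times = np.cumsum(rng.exponential(120.0, 30))
        loss_times = np.sort(np.concatenate(
            [t + rng.uniform(0, 0.05, rng.geometric(0.4)) for t in event_times]))

        gap_threshold = 1.0    # losses closer than this are counted as one cluster

        # Split the loss stream into clusters ("loss events").
        gaps = np.diff(loss_times)
        clusters = np.split(loss_times, np.where(gaps > gap_threshold)[0] + 1)

        T = loss_times[-1] - loss_times[0]
        batch_rate = len(clusters) / T                  # Poisson rate of cluster arrivals
        sizes = np.array([len(c) for c in clusters])
        p_geom = 1.0 / sizes.mean()                     # geometric batch-size parameter (mean = 1/p)
        cell_loss_rate = batch_rate * sizes.mean()      # cells lost per second

        print(f"{len(clusters)} clusters, batch rate {batch_rate:.4f}/s, "
              f"mean batch size {sizes.mean():.2f} (geometric p = {p_geom:.2f}), "
              f"cell loss rate {cell_loss_rate:.4f}/s")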

  19. Distribution Networks Management with High Penetration of Photovoltaic Panels

    DEFF Research Database (Denmark)

    Morais, Hugo

    2013-01-01

    The penetration of photovoltaic solar panels has increased significantly in recent years in several European countries, mainly in the low voltage and medium voltage networks, supported by governmental policies and incentives. Consequently, the acquisition and installation costs of PV panels decrease...... and the know-how increases significantly. It is now important to use new management methodologies in distribution networks to support the growing penetration of PV panels. In some countries, such as Germany and Italy, solar generation based on photovoltaic panels supplies 40% of the demand in some...

  20. Low and High-Frequency Field Potentials of Cortical Networks Exhibit Distinct Responses to Chemicals

    Science.gov (United States)

    Neural networks grown on microelectrode arrays (MEAs) have become an important, high content in vitro assay for assessing neuronal function. MEA experiments typically examine high-frequency (HF) (>200 Hz) spikes and bursts, which can be used to discriminate between differ...

  1. Cost allocation model for distribution networks considering high penetration of distributed energy resources

    DEFF Research Database (Denmark)

    Soares, Tiago; Pereira, Fábio; Morais, Hugo

    2015-01-01

    The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation, traditionally used in transmission networks, should be adapted and used...... in the distribution networks considering the specifications of the connected resources. The main goal is to develop a fairer methodology trying to distribute the distribution network use costs to all players that use the network in each period. In this paper, a model considering different types of costs (fixed......, losses, and congestion costs) is proposed comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of direct load control type, energy storage systems (ESS), and electric vehicles with capability of discharging energy to the network, which is known as vehicle...

  2. A quantitative approach to static sensor network design

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Burgess, Greg; Weng, Kevin C.

    2014-01-01

    . We illustrate the method with real topographic data from a rugose coral reef where network performance is highly influenced by detection shadowing. Network performance is visualized by a coverage map indicating the probability of detection at any location in the study area. The reported unique...

  3. Coverage Metrics for Model Checking

    Science.gov (United States)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  4. Investigating the structure of semantic networks in low and high creative persons

    Directory of Open Access Journals (Sweden)

    Yoed Nissan Kenett

    2014-06-01

    Full Text Available According to Mednick’s (1962) theory of individual differences in creativity, creative individuals appear to have a richer and more flexible associative network than less creative individuals. Thus, creative individuals are characterized by flat (broader associations) instead of steep (few, common associations) associational hierarchies. To study these differences, we implement a novel computational approach to the study of semantic networks, through the analysis of free associations. The core notion of our method is that concepts in the network are related to each other by their association correlations - overlap of similar associative responses (association clouds). We began by collecting a large sample of participants who underwent several creativity measurements and used a decision tree approach to divide the sample into low and high creative groups. Next, each group underwent a free association generation paradigm which allowed us to construct and analyze the semantic networks of both groups. Comparison of the semantic memory networks of persons with low creative ability and persons with high creative ability revealed differences between the two networks. The semantic memory network of persons with low creative ability seems to be more rigid, compared to the network of persons with high creative ability, in the sense that it is more spread out and breaks apart into more sub-parts. We discuss how our findings are in accord with and extend Mednick’s (1962) theory and the feasibility of using network science paradigms to investigate high level cognition.
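
    The "association cloud" idea (concepts linked when their free-association responses overlap) can be illustrated in a few lines; the toy cue-response data and the similarity threshold below are invented, and plain correlation of binary response vectors stands in for the authors' association-correlation measure.

        import numpy as np

        # Toy free-association data: cue word -> responses given by participants (invented).
        associations = {
            "cat":    ["pet", "fur", "tail", "paws"],
            "dog":    ["pet", "fur", "tail", "bark"],
            "piano":  ["music", "keys", "melody"],
            "violin": ["music", "strings", "melody"],
        }
        cues = list(associations)
        vocab = sorted({r for resp in associations.values() for r in resp})

        # Binary response vectors ("association clouds") for each cue.
        M = np.array([[1 if w in associations[c] else 0 for w in vocab] for c in cues], float)

        # Correlate the clouds and keep edges above a threshold to form the semantic network.
        corr = np.corrcoef(M)
        threshold = 0.3
        edges = [(cues[i], cues[j], round(corr[i, j], 2))
                 for i in range(len(cues)) for j in range(i + 1, len(cues))
                 if corr[i, j] > threshold]
        print(edges)   # expected: only the cat-dog and piano-violin links pass the threshold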

  5. THE IMPROVEMENT OF COMPUTER NETWORK PERFORMANCE WITH BANDWIDTH MANAGEMENT IN KEMURNIAN II SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Bayu Kanigoro

    2012-05-01

    Full Text Available This research describes the improvement of computer network performance with bandwidth management in Kemurnian II Senior High School. The main issue addressed is the absence of bandwidth division among computers: when a user is downloading data, the entire available bandwidth is absorbed by that user, so other users do not get any bandwidth. In addition, IP addresses had been divided per room, such as the computer, teacher, and administration rooms, to support the learning process in Kemurnian II Senior High School, so a wireless network was needed. The method consisted of on-site observation and interviews with the related parties in Kemurnian II Senior High School, analysis of the existing network, and the design of a new network topology including the wireless network, along with its configuration and the separation and limitation of bandwidth on a MikroTik router. The result is that network traffic in Kemurnian II Senior High School can be shared evenly among users; IX and IIX traffic are separated, which improves the speed of network access at the school, and the wireless network has been implemented. Keywords: Bandwidth Management; Wireless Network
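
    Independently of the router used, the underlying idea of dividing the bandwidth evenly among active users (rather than letting one downloader absorb it all) can be sketched as follows; the capacity figure is invented and the snippet is not a MikroTik configuration.

        # Even bandwidth division among active users (illustrative values only).
        TOTAL_KBPS = 10_000    # assumed total downlink capacity of the school connection

        def per_user_limit(active_users: int, total_kbps: int = TOTAL_KBPS) -> int:
            """Return the rate limit (kbps) applied to each active user."""
            return total_kbps if active_users == 0 else total_kbps // active_users

        for n in (1, 5, 20):
            print(f"{n:2d} active users -> {per_user_limit(n)} kbps each")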

  6. Software Switching for High Throughput Data Acquisition Networks

    CERN Document Server

    AUTHOR|(CDS)2089787; Lehmann Miotto, Giovanna

    The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. The problem arising from this pattern is widely known in the literature as incast and can be observed as TCP throughput collapse. It is a result of overloading the switch buffers, when a specific node in a network requests data from multiple sources. This will become even more demanding for future upgrades of the experiments at the Large Hadron Collider at CERN. It is questionable whether commodity TCP/IP and Ethernet technologies in their current form will still be able to effectively adapt to bursty traffic without losing packets due to the scarcity of buffers in the networking hardware. This thesis provides an analysis of TCP/IP performance in data acquisition networks and presents a novel approach to incast congestion in these networks based on software-based packet forwarding. Our first contribution lies in confirming the strong analogies bet...

  7. High speed switching for computer and communication networks

    NARCIS (Netherlands)

    Dorren, H.J.S.

    2014-01-01

    The role of data centers and computers are vital for the future of our data-centric society. Historically the performance of data-centers is increasing with a factor 100-1000 every ten years and as a result of this the capacity of the data-center communication network has to scale accordingly. This

  8. Review of Congestion Management Methods for Distribution Networks with High Penetration of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi

    2014-01-01

    This paper reviews the existing congestion management methods for distribution networks with high penetration of DERs documented in the recent research literature. The congestion management methods for distribution networks reviewed can be grouped into two categories – market methods and direct...... control methods. The market methods consist of dynamic tariff, distribution capacity market, shadow price and flexible service market. The direct control methods comprise network reconfiguration, reactive power control and active power control. Based on the review of the existing methods...

  9. Knockouts of high-ranking males have limited impact on baboon social networks.

    Science.gov (United States)

    Franz, Mathias; Altmann, Jeanne; Alberts, Susan C

    Social network structures can crucially impact complex social processes such as collective behaviour or the transmission of information and diseases. However, currently it is poorly understood how social networks change over time. Previous studies on primates suggest that `knockouts' (due to death or dispersal) of high-ranking individuals might be important drivers for structural changes in animal social networks. Here we test this hypothesis using long-term data on a natural population of baboons, examining the effects of 29 natural knockouts of alpha or beta males on adult female social networks. We investigated whether and how knockouts affected (1) changes in grooming and association rates among adult females, and (2) changes in mean degree and global clustering coefficient in these networks. The only significant effect that we found was a decrease in mean degree in grooming networks in the first month after knockouts, but this decrease was rather small, and grooming networks rebounded to baseline levels by the second month after knockouts. Taken together our results indicate that the removal of high-ranking males has only limited or no lasting effects on social networks of adult female baboons. This finding calls into question the hypothesis that the removal of high-ranking individuals has a destabilizing effect on social network structures in social animals.

  10. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  11. Social Networks and High Healthcare Utilization: Building Resilience Through Analysis

    Science.gov (United States)

    2016-09-01

    ...attributes, such as gender and race. Focusing on individual members’ attributes in a social network seeks to identify common nodes and links, but may fail ... response varied by individual paramedic assessment. Table 6 shows that requests for EMS services vary by complaint, that the dominant gender ... of disease and increased life expectancy across much of the globe; however, “noninfectious disease” and “social inequities of health” remain ...

  12. High-Performance Neural Networks for Visual Object Classification

    OpenAIRE

    Cireşan, Dan C.; Meier, Ueli; Masci, Jonathan; Gambardella, Luca M.; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better ...

  13. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) impact of bottleneck buffer size on the performance of 10 Gbps high speed network and (ii) impact of level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.
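
    A minimal sketch of a user-configurable loss synchronization module in the spirit of the one described above: at each congestion event, a fraction of the competing flows, set by the synchronization level, is chosen to experience a loss. The flow count and levels are arbitrary, and this is not the authors' fluid model.

        import random

        def congestion_event(flows, sync_level, rng=random):
            """Return the subset of flows that see a loss at this congestion event.

            sync_level = 1.0 reproduces fully synchronized losses (all flows back off);
            lower values desynchronize the losses across flows.
            """
            k = max(1, round(sync_level * len(flows)))
            return rng.sample(flows, k)

        flows = [f"flow-{i}" for i in range(10)]
        for level in (0.2, 0.6, 1.0):
            hit = congestion_event(flows, level, random.Random(1))
            print(f"sync level {level}: {len(hit)} of {len(flows)} flows lose a packet")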

  14. High Frequency Resonance Damping of DFIG based Wind Power System under Weak Network

    DEFF Research Database (Denmark)

    Song, Yipeng; Wang, Xiongfei; Blaabjerg, Frede

    2017-01-01

    When operating in a micro-grid or weak grid which has a relatively large network impedance, the Doubly Fed Induction Generator (DFIG) based wind power generation system is prone to high frequency resonance due to the impedance interaction between the DFIG system and the parallel compensated network...

  15. Network Financial Support and Conflict as Predictors of Depressive Symptoms among a Highly Disadvantaged Population

    Science.gov (United States)

    Knowlton, Amy R.; Latkin, Carl A.

    2007-01-01

    The study examined multiple dimensions of social support as predictors of depressive symptoms among a highly vulnerable population. Social network analysis was used to assess perceived and enacted dimensions of support (emotional, financial, instrumental), network conflict, closeness, and composition. Participants were 393 current and former…

  16. Analyzing 3D xylem networks in Vitis vinifera using High Resolution Computed Tomography (HRCT)

    Science.gov (United States)

    Recent developments in High Resolution Computed Tomography (HRCT) have made it possible to visualize three dimensional (3D) xylem networks without time consuming, labor intensive physical sectioning. Here we describe a new method to visualize complex vessel networks in plants and produce a quantitat...

  17. Ethnic boundaries in high school students’ networks in Flanders and the Netherlands

    NARCIS (Netherlands)

    Baerveldt, C.; Zijlstra, Bonne; De Wolf, M.; Van Rossem, R.; van Duijn, M.A.J.

    2007-01-01

    Ethnic boundaries were tested in students' networks in 34 Flemish and 19 Dutch high schools. Each network consisted of a school cohort in an intermediate level of education (track). While students from the native majority predominantly had friendships within their own ethnic category, minority

  18. Study on the forward-feed neural network used for the classification of high energy particles

    International Nuclear Information System (INIS)

    Luo Guangxuan; Dai Guiliang

    1997-01-01

    Neural network has been applied in the field of high energy physics experiment for the classification of particles and gained good results. The author emphasizes the systematic analysis of the fundamental principle of the forward-feed neural network and discusses the problems and solving methods in application

  19. Innovation, networking, and proximity: lessons from small high-technology firms in the UK

    NARCIS (Netherlands)

    Romijn, H.A.; Albu, Mike

    2002-01-01

    The article explores how the innovative performance of small high-tech firms relates to their external networking activities, and whether geographical proximity in their network relations matters. Data from a small sample of electronics firms and software developers in South East England are used to

  20. Selection of variables for neural network analysis. Comparisons of several methods with high energy physics data

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)
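
    Two of the methods named above, the F-test and PCA, can be compared with a generic toolkit; the sketch below uses scikit-learn on synthetic data as a stand-in for the high energy physics variables and the original neural network setup.

        # Sketch: compare F-test feature selection with PCA before a small neural network.
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        # Synthetic "events" stand in for the high energy physics data.
        X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        for name, reducer in [("F-test", SelectKBest(f_classif, k=8)),
                              ("PCA", PCA(n_components=8))]:
            clf = make_pipeline(reducer,
                                MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0))
            clf.fit(X_tr, y_tr)
            print(f"{name}: fraction of events properly classified = {clf.score(X_te, y_te):.3f}")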

  1. Hierarchical micro-mobility management in high-speed multihop access networks

    Institute of Scientific and Technical Information of China (English)

    TANG Bi-hua; MA Xiao-lei; LIU Yuan-an; GAO Jin-chun

    2006-01-01

    This article integrates hierarchical micro-mobility management with high-speed multihop access networks (HMAN) to accomplish smooth handover between different access routers. The proposed soft handover scheme in the high-speed HMAN can solve the micro-mobility management problem in the access network. This article also proposes a hybrid access router (AR) advertisement scheme and an AR selection algorithm, which uses the time delay and a stable route to the AR as the gateway selection parameters. Simulation results show that the proposed micro-mobility management scheme can achieve a high packet delivery fraction and improve the lifetime of the network.

  2. Link and Network Layers Design for Ultra-High-Speed Terahertz-Band Communications Networks

    Science.gov (United States)

    2017-01-01

    [Only fragments of this record's abstract survive extraction: a figure caption, "Measured and approximated received pulse energy as functions of the transmission distance"; the remark that, for example, in Long-Term Evolution Advanced (LTE-A) networks peak data rates on the order of 1 Gigabit-per-second (Gbps) are feasible over 100...; and a note that the antennas are meshed with a resolution of spp/5 to simulate the array gain and directivity.]

  3. A High-Level Petri Net Framework for Genetic Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Banks Richard

    2007-12-01

    Full Text Available To understand the function of genetic regulatory networks in the development of cellular systems, we must not only realise the individual network entities, but also the manner by which they interact. Multi-valued networks are a promising qualitative approach for modelling such genetic regulatory networks, however, at present they have limited formal analysis techniques and tools. We present a flexible formal framework for modelling and analysing multi-valued genetic regulatory networks using high-level Petri nets and logic minimization techniques. We demonstrate our approach with a detailed case study in which part of the genetic regulatory network responsible for the carbon starvation stress response in Escherichia coli is modelled and analysed. We then compare and contrast this multivalued model to a corresponding Boolean model and consider their formal relationship.

  4. Characterization of differentially expressed genes using high-dimensional co-expression networks

    DEFF Research Database (Denmark)

    Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.

    2010-01-01

    We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation...... that allow to make effective inference in problems with high degree of complexity (e.g. several thousands of genes) and small number of observations (e.g. 10-100) as typically occurs in high throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we...... construct a compact representation of the co-expression network that allows to identify the regions with high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than...

  5. High frequency of functional extinctions in ecological networks.

    Science.gov (United States)

    Säterberg, Torbjörn; Sellman, Stefan; Ebenman, Bo

    2013-07-25

    Intensified exploitation of natural populations and habitats has led to increased mortality rates and decreased abundances of many species. There is a growing concern that this might cause critical abundance thresholds of species to be crossed, with extinction cascades and state shifts in ecosystems as a consequence. When increased mortality rate and decreased abundance of a given species lead to extinction of other species, this species can be characterized as functionally extinct even though it still exists. Although such functional extinctions have been observed in some ecosystems, their frequency is largely unknown. Here we use a new modelling approach to explore the frequency and pattern of functional extinctions in ecological networks. Specifically, we analytically derive critical abundance thresholds of species by increasing their mortality rates until an extinction occurs in the network. Applying this approach on natural and theoretical food webs, we show that the species most likely to go extinct first is not the one whose mortality rate is increased but instead another species. Indeed, up to 80% of all first extinctions are of another species, suggesting that a species' ecological functionality is often lost before its own existence is threatened. Furthermore, we find that large-bodied species at the top of the food chains can only be exposed to small increases in mortality rate and small decreases in abundance before going functionally extinct compared to small-bodied species lower in the food chains. These results illustrate the potential importance of functional extinctions in ecological networks and lend strong support to arguments advocating a more community-oriented approach in conservation biology, with target levels for populations based on ecological functionality rather than on mere persistence.

  6. Firewall Architectures for High-Speed Networks: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Errin W. Fulp

    2007-08-20

    Firewalls are a key component for securing networks that are vital to government agencies and private industry. They enforce a security policy by inspecting and filtering traffic arriving or departing from a secure network. While performing these critical security operations, firewalls must act transparent to legitimate users, with little or no effect on the perceived network performance (QoS). Packets must be inspected and compared against increasingly complex rule sets and tables, which is a time-consuming process. As a result, current firewall systems can introduce significant delays and are unable to maintain QoS guarantees. Furthermore, firewalls are susceptible to Denial of Service (DoS) attacks that merely overload/saturate the firewall with illegitimate traffic. Current firewall technology only offers a short-term solution that is not scalable; therefore, the objective of this DOE project was to develop new firewall optimization techniques and architectures that meet these important challenges. Firewall optimization concerns decreasing the number of comparisons required per packet, which reduces processing time and delay. This is done by reorganizing policy rules via special sorting techniques that maintain the original policy integrity. This research is important since it applies to current and future firewall systems. Another method for increasing firewall performance is with new firewall designs. The architectures under investigation consist of multiple firewalls that collectively enforce a security policy. Our innovative distributed systems quickly divide traffic across different levels based on perceived threat, allowing traffic to be processed in parallel (beyond current firewall sandwich technology). Traffic deemed safe is transmitted to the secure network, while remaining traffic is forwarded to lower levels for further examination. The result of this divide-and-conquer strategy is lower delays for legitimate traffic, higher throughput
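
    The rule-reordering idea sketched above (move frequently matched rules forward, but never past an earlier rule they overlap with and disagree with, so the policy's semantics are preserved) can be illustrated with a toy example; the rule set and hit counts are invented, and the overlap test is far simpler than for real firewall policies.

        # Sketch: reorder firewall rules by hit count while preserving precedence
        # between rules that can match the same packet with different actions.
        import ipaddress

        rules = [
            {"name": "R1", "src": "10.0.0.0/8",  "dst": "0.0.0.0/0",    "action": "deny",   "hits": 120},
            {"name": "R2", "src": "10.1.0.0/16", "dst": "192.0.2.0/24", "action": "accept", "hits": 90_000},
            {"name": "R3", "src": "0.0.0.0/0",   "dst": "192.0.2.0/24", "action": "accept", "hits": 500_000},
            {"name": "R4", "src": "0.0.0.0/0",   "dst": "0.0.0.0/0",    "action": "deny",   "hits": 40_000},
        ]

        def conflicts(a, b):
            """Order matters only if the rules overlap and take different actions."""
            return (a["action"] != b["action"]
                    and ipaddress.ip_network(a["src"]).overlaps(ipaddress.ip_network(b["src"]))
                    and ipaddress.ip_network(a["dst"]).overlaps(ipaddress.ip_network(b["dst"])))

        # Greedy pass: bubble high-hit rules forward while no precedence constraint is crossed.
        order, changed = rules[:], True
        while changed:
            changed = False
            for i in range(len(order) - 1):
                a, b = order[i], order[i + 1]
                if b["hits"] > a["hits"] and not conflicts(a, b):
                    order[i], order[i + 1] = b, a
                    changed = True

        print([r["name"] for r in order])   # frequently hit rules move up only where it is safe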

  7. Wireless Sensor Network Exploiting High Altitude Platform in 5G Network

    Directory of Open Access Journals (Sweden)

    Veronica Windha Mahyastuty

    2017-08-01

    Full Text Available Technology development and socio-economic transformation have increased the demand for 5G cellular networks. They are expected to send information quickly and support the many use cases emerging from a variety of applications. One of the use cases on the 5G network is massive MTC (Machine Type Communication), wherein the wireless sensor network (WSN) is a typical application. Challenges faced by a 5G cellular network are how to model an architecture/topology to support WSN and how to solve the energy consumption efficiency problem in WSN. To overcome these challenges, a HAP system integrated with a WSN that uses the Low Energy Adaptive Clustering Hierarchy (LEACH) routing protocol is implemented. The HAP system is designed to be used at a 20-km altitude, and the topologies used are those with and without clustering. It uses 1,000 sensor nodes and the Low Energy Adaptive Clustering Hierarchy protocol. This system was simulated using MATLAB. Simulations were performed to analyze the energy consumption, the number of dead nodes, and the average total number of packets sent to the HAP for the non-clustered and clustered topologies. Simulation results showed that the clustered topology could reduce energy consumption and the number of dead nodes while increasing the total number of packets sent to the HAP.*****Technological development and socio-economic transformation have changed the 5G cellular network business, so 5G cellular networks are expected to send information quickly and support the many use cases emerging from a variety of applications. One of the use cases in 5G networks is massive Machine Type Communication (MTC). One application of massive MTC is the wireless sensor network (WSN). The challenges for this 5G cellular network are how to model an architecture/topology to support the WSN and how to overcome the energy consumption efficiency problem in the WSN. To answer these challenges, a HAP system integrated with the WSN and
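
    The clustered topology relies on the LEACH cluster-head rotation, in which each node elects itself head with a probability that rotates the role around the network so the energy cost of reaching the HAP is shared. The sketch below shows the standard LEACH election threshold with assumed parameters; it is not the authors' MATLAB simulation.

        import random

        P = 0.05          # desired fraction of cluster heads per round (assumed value)
        N_NODES = 1000

        def leach_threshold(round_no: int, p: float = P) -> float:
            """LEACH threshold T(n) for nodes that have not yet been cluster head
            in the current rotation of 1/p rounds."""
            return p / (1 - p * (round_no % int(1 / p)))

        def elect_cluster_heads(round_no, eligible, rng=random):
            t = leach_threshold(round_no)
            return [n for n in eligible if rng.random() < t]

        eligible = set(range(N_NODES))      # nodes that have not served as head this rotation
        for r in range(3):
            heads = elect_cluster_heads(r, sorted(eligible), random.Random(r))
            eligible -= set(heads)          # heads sit out until the rotation completes
            print(f"round {r}: {len(heads)} cluster heads elected")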

  8. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yan [Northwesten University

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and fast emergence of new attacks/virus/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from the unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes with the following features (research thrusts). • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • Integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceed that significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). Besides, we built a website for technique dissemination, which hosts two system prototype releases to the research community. We also filed a patent application and developed strong international and domestic collaborations which span both academia and industry.

  9. High Speed PAM -8 Optical Interconnects with Digital Equalization based on Neural Network

    DEFF Research Database (Denmark)

    Gaiarin, Simone; Pang, Xiaodan; Ozolins, Oskars

    2016-01-01

    We experimentally evaluate a high-speed optical interconnection link with neural network equalization. Enhanced equalization performance is shown compared to standard linear FFE for an EML-based 32 GBd PAM-8 signal after 4-km SMF transmission.

  10. Dielectric elastomers, with very high dielectric permittivity, based on silicone and ionic interpenetrating networks

    DEFF Research Database (Denmark)

    Yu, Liyun; Madsen, Frederikke Bahrt; Hvilsted, Søren

    2015-01-01

    permittivity and the Young's modulus of the elastomer. One system that potentially achieves this involves interpenetrating polymer networks (IPNs), based on commercial silicone elastomers and ionic networks from amino- and carboxylic acid-functional silicones. The applicability of these materials as DEs...... are obtained while dielectric breakdown strength and Young's modulus are not compromised. These good overall properties stem from the softening effect and very high permittivity of ionic networks – as high as ε′ = 7500 at 0.1 Hz – while the silicone elastomer part of the IPN provides mechanical integrity...

  11. Burst analysis tool for developing neuronal networks exhibiting highly varying action potential dynamics

    Directory of Open Access Journals (Sweden)

    Fikret Emre eKapucu

    2012-06-01

    Full Text Available In this paper we propose a firing statistics based neuronal network burst detection algorithm for neuronal networks exhibiting highly variable action potential dynamics. Electrical activity of neuronal networks is generally analyzed by the occurrences of spikes and bursts both in time and space. Commonly accepted analysis tools employ burst detection algorithms based on predefined criteria. However, maturing neuronal networks, such as those originating from human embryonic stem cells (hESC), exhibit highly variable network structure and time-varying dynamics. To explore the developing burst/spike activities of such networks, we propose a burst detection algorithm which utilizes the firing statistics based on interspike interval (ISI) histograms. Moreover, the algorithm calculates interspike interval thresholds for burst spikes as well as for pre-burst spikes and burst tails by evaluating the cumulative moving average and skewness of the ISI histogram. Because of the adaptive nature of the proposed algorithm, its analysis power is not limited by the type of neuronal cell network at hand. We demonstrate the functionality of our algorithm with two different types of microelectrode array (MEA) data recorded from spontaneously active hESC-derived neuronal cell networks. The same data was also analyzed by two commonly employed burst detection algorithms and the differences in burst detection results are illustrated. The results demonstrate that our method is both adaptive to the firing statistics of the network and yields successful burst detection from the data. In conclusion, the proposed method is a potential tool for analyzing hESC-derived neuronal cell networks and thus can be utilized in studies aiming to understand the development and functioning of human neuronal networks and as an analysis tool for in vitro drug screening and neurotoxicity assays.
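
    A minimal sketch of the adaptive idea described above: build the interspike-interval (ISI) histogram and derive a burst ISI threshold from its cumulative moving average and skewness. The synthetic spike train, bin count and the particular skewness-to-threshold mapping are invented stand-ins for the published algorithm's details.

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(3)
        # Synthetic spike times (s): short intra-burst ISIs mixed with long inter-burst gaps.
        spikes = np.sort(np.concatenate([t + np.cumsum(rng.exponential(0.01, 20))
                                         for t in np.arange(0, 60, 2.0)]))

        isi = np.diff(spikes)
        counts, edges = np.histogram(isi, bins=100)
        cma = np.cumsum(counts) / np.arange(1, len(counts) + 1)   # cumulative moving average

        # Adaptive threshold: bin where the CMA falls below a skewness-dependent fraction of its peak.
        alpha = 1.0 / (1.0 + abs(skew(isi)))                      # assumed mapping: more skew -> stricter
        cut = np.argmax(cma < alpha * cma.max())
        isi_threshold = edges[cut + 1]

        # Group spikes into bursts: consecutive ISIs below the threshold belong to one burst.
        burst_breaks = np.where(isi > isi_threshold)[0] + 1
        bursts = [b for b in np.split(spikes, burst_breaks) if len(b) >= 3]
        print(f"ISI threshold = {isi_threshold * 1e3:.1f} ms, bursts detected = {len(bursts)}")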

  12. High-Frequency Network Oscillations in Cerebellar Cortex

    Science.gov (United States)

    Middleton, Steven J.; Racca, Claudia; Cunningham, Mark O.; Traub, Roger D.; Monyer, Hannah; Knöpfel, Thomas; Schofield, Ian S.; Jenkins, Alistair; Whittington, Miles A.

    2016-01-01

    SUMMARY Both cerebellum and neocortex receive input from the somatosensory system. Interaction between these regions has been proposed to underpin the correct selection and execution of motor commands, but it is not clear how such interactions occur. In neocortex, inputs give rise to population rhythms, providing a spatiotemporal coding strategy for inputs and consequent outputs. Here, we show that similar patterns of rhythm generation occur in cerebellum during nicotinic receptor subtype activation. Both gamma oscillations (30–80 Hz) and very fast oscillations (VFOs, 80–160 Hz) were generated by intrinsic cerebellar cortical circuitry in the absence of functional glutamatergic connections. As in neocortex, gamma rhythms were dependent on GABAA receptor-mediated inhibition, whereas VFOs required only nonsynaptically connected intercellular networks. The ability of cerebellar cortex to generate population rhythms within the same frequency bands as neocortex suggests that they act as a common spatiotemporal code within which corticocerebellar dialog may occur. PMID:18549787

  13. Spiking neural networks on high performance computer clusters

    Science.gov (United States)

    Chen, Chong; Taha, Tarek M.

    2011-09-01

    In this paper we examine the acceleration of two spiking neural network models on three clusters of multicore processors representing three categories of processors: x86, STI Cell, and NVIDIA GPGPUs. The x86 cluster utilized consists of 352 dual-core AMD Opterons, the Cell cluster consists of 320 Sony Playstation 3s, while the GPGPU cluster contains 32 NVIDIA Tesla S1070 systems. The results indicate that the GPGPU platform can dominate in performance compared to the Cell and x86 platforms examined. From a cost perspective, the GPGPU is more expensive in terms of neuron/s throughput. If the cost of GPGPUs goes down in the future, this platform will become very cost effective for these models.

  14. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the

  15. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Directory of Open Access Journals (Sweden)

    Georgios Arampatzis

    Full Text Available Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of

  16. LambdaStation: Exploiting Advance Networks In Data Intensive High Energy Physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Harvey B. Newman

    2009-09-11

    Lambda Station software implements selective, dynamic, secure path control between local storage & analysis facilities, and high bandwidth, wide-area networks (WANs). It is intended to facilitate use of desirable, alternate wide area network paths which may only be intermittently available, or subject to policies that restrict usage to specified traffic. Lambda Station clients gain awareness of potential alternate network paths via Clarens-based web services, including path characteristics such as bandwidth and availability. If alternate path setup is requested and granted, Lambda Station will configure the local network infrastructure to properly forward designated data flows via the alternate path. A fully functional implementation of Lambda Station, capable of dynamic alternate WAN path setup and teardown, has been successfully developed. A limited Lambda Station-awareness capability within the Storage Resource Manager (SRM) product has been developed. Lambda Station has been successfully tested in a number of venues, including Super Computing 2008. LambdaStation software, developed by the Fermilab team, enables dynamic allocation of alternate network paths for high impact traffic and forwards designated flows across the LAN. It negotiates with the reservation and provisioning systems of WAN control planes, whether based on SONET channels, demand tunnels, or dynamic circuit networks. It creates end-to-end circuits between single hosts, computer farms or networks with predictable performance characteristics, preserving QoS where supported in the LAN and WAN, and ties in a security policy that allows only specific traffic to be forwarded or received through the created path. The Lambda Station project also explores network awareness capabilities.

  17. High throughput route selection in multi-rate wireless mesh networks

    Institute of Scientific and Technical Information of China (English)

    WEI Yi-fei; GUO Xiang-li; SONG Mei; SONG Jun-de

    2008-01-01

    Most existing Ad-hoc routing protocols use the shortest path algorithm with a hop count metric to select paths. It is appropriate in single-rate wireless networks, but has a tendency to select paths containing long-distance links that have low data rates and reduced reliability in multi-rate networks. This article introduces a high throughput routing algorithm utilizing the multi-rate capability and some mesh characteristics in wireless fidelity (WiFi) mesh networks. It uses the medium access control (MAC) transmission time as the routing metric, which is estimated by the information passed up from the physical layer. When the proposed algorithm is adopted, the Ad-hoc on-demand distance vector (AODV) routing can be improved as high throughput AODV (HT-AODV). Simulation results show that HT-AODV is capable of establishing a route that has high data-rate, short end-to-end delay and great network throughput.
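
    The difference between a hop-count metric and a transmission-time metric can be seen on a tiny shortest-path example; the sketch assumes link cost = packet size / link rate and uses a made-up five-node multi-rate topology rather than the article's WiFi mesh simulation.

        import heapq

        PACKET_BITS = 8 * 1500   # assumed packet size

        def dijkstra(graph, src, dst, weight):
            """Generic Dijkstra; weight(u, v, rate) gives the cost of link u-v."""
            dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                for v, rate in graph[u].items():
                    nd = d + weight(u, v, rate)
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return path[::-1], dist[dst]

        # Toy multi-rate mesh: graph[u][v] = link rate in Mbit/s (the long link is slow).
        graph = {
            "A": {"B": 54, "D": 6},
            "B": {"A": 54, "C": 54},
            "C": {"B": 54, "E": 54},
            "D": {"A": 6, "E": 6},
            "E": {"C": 54, "D": 6},
        }

        hops = lambda u, v, rate: 1.0
        txtime = lambda u, v, rate: PACKET_BITS / (rate * 1e6)   # seconds per packet on the link

        print("hop count metric:", dijkstra(graph, "A", "E", hops))     # picks the 2-hop slow path
        print("tx time metric:  ", dijkstra(graph, "A", "E", txtime))   # picks the 3-hop fast path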

  18. Contraceptive Coverage and the Affordable Care Act.

    Science.gov (United States)

    Tschann, Mary; Soon, Reni

    2015-12-01

    A major goal of the Patient Protection and Affordable Care Act is reducing healthcare spending by shifting the focus of healthcare toward preventive care. Preventive services, including all FDA-approved contraception, must be provided to patients without cost-sharing under the ACA. No-cost contraception has been shown to increase uptake of highly effective birth control methods and reduce unintended pregnancy and abortion; however, some institutions and corporations argue that providing contraceptive coverage infringes on their religious beliefs. The contraceptive coverage mandate is evolving due to legal challenges, but it has already demonstrated success in reducing costs and improving access to contraception. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data including real world data as well.
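
    A minimal sketch of the projection-then-approximate idea, under invented data and network sizes: points near a low-dimensional manifold embedded in a high-dimensional space are projected with PCA fitted on a sparse uniform subsample, and a neural network is then trained in the projected space and compared with one trained on the raw data.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # High-dimensional data living on a low-dimensional manifold (invented generator).
        n, d_low, d_high = 5000, 3, 100
        Z = rng.uniform(-1, 1, (n, d_low))                     # latent manifold coordinates
        A = rng.normal(size=(d_low, d_high))
        X = np.tanh(Z @ A)                                     # embed into 100 dimensions
        y = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]                # target function on the manifold

        # Fit the projection on a small, uniformly drawn subsample only.
        subsample = rng.choice(n, size=200, replace=False)
        proj = PCA(n_components=d_low).fit(X[subsample])
        X_low = proj.transform(X)

        net_low = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000,
                               random_state=0).fit(X_low[:4000], y[:4000])
        net_high = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000,
                                random_state=0).fit(X[:4000], y[:4000])

        print("R^2 in projected space:", round(net_low.score(X_low[4000:], y[4000:]), 3))
        print("R^2 in original space: ", round(net_high.score(X[4000:], y[4000:]), 3))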

  20. Impact of leakage delay on bifurcation in high-order fractional BAM neural networks.

    Science.gov (United States)

    Huang, Chengdai; Cao, Jinde

    2018-02-01

    The effects of leakage delay on the dynamics of neural networks with integer order have lately received considerable attention. It has been confirmed that fractional neural networks more appropriately uncover the dynamical properties of neural networks, but results on fractional neural networks with leakage delay are relatively few. This paper primarily concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay. The first attempt is made to tackle the stability and bifurcation of high-order fractional BAM neural networks with time delay in leakage terms in this paper. The conditions for the appearance of bifurcation for the proposed systems with leakage delay are firstly established by adopting time delay as a bifurcation parameter. Then, the bifurcation criteria of such a system without leakage delay are successfully acquired. Comparative analysis reveals that the stability performance of the proposed high-order fractional neural networks is critically weakened by leakage delay, which cannot be overlooked. Numerical examples are ultimately exhibited to attest the efficiency of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Resource management for multimedia services in high data rate wireless networks

    CERN Document Server

    Zhang, Ruonan; Pan, Jianping

    2017-01-01

    This brief offers a valuable resource on principles of quality-of-service (QoS) provisioning and the related link-layer resource management techniques for high data-rate wireless networks. The primary emphasis is on protocol modeling and analysis. It introduces media access control (MAC) protocols, standards of wireless local area networks (WLANs), wireless personal area networks (WPANs), and wireless body area networks (WBANs), discussing their key technologies, applications, and deployment scenarios. The main analytical approaches and models for performance analysis of the fundamental resource scheduling mechanisms, including the contention-based, reservation-based, and hybrid MAC, are presented. To help readers understand and evaluate system performance, the brief contains a range of simulation results. In addition, a thorough bibliography provides an additional tool. This brief is an essential resource for engineers, researchers, students, and users of wireless networks.

  2. Feature Extraction Method for High Impedance Ground Fault Localization in Radial Power Distribution Networks

    DEFF Research Database (Denmark)

    Jensen, Kåre Jean; Munk, Steen M.; Sørensen, John Aasted

    1998-01-01

    A new approach to the localization of high impedance ground faults in compensated radial power distribution networks is presented. The total size of such networks is often very large and a major part of the monitoring of these is carried out manually. The increasing complexity of industrial...... processes and communication systems leads to demands for improved monitoring of power distribution networks so that the quality of power delivery can be kept at a controlled level. The ground fault localization method for each feeder in a network is based on the centralized frequency broadband measurement...... of three phase voltages and currents. The method consists of a feature extractor, based on a grid description of the feeder by impulse responses, and a neural network for ground fault localization. The emphasis of this paper is the feature extractor, and the detection of the time instance of a ground fault...
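
    A minimal sketch of the feature-extraction idea described above: correlate the measured fault transient with the impulse response associated with each feeder section, so the best-matching section indicates the fault location. The impulse responses, the simulated measurement and the simple argmax decision are invented stand-ins for the paper's grid description and neural network.

        import numpy as np

        rng = np.random.default_rng(7)
        fs, n = 10_000, 2048                       # sample rate and record length (assumed)

        # Invented per-section impulse responses of the feeder (one per candidate fault section).
        sections = 8
        impulse_responses = [np.exp(-np.arange(n) / (50 * (k + 1)))
                             * np.sin(2 * np.pi * (400 + 60 * k) * np.arange(n) / fs)
                             for k in range(sections)]

        # Simulated broadband measurement: a ground fault in section 5 plus noise.
        true_section = 5
        measured = impulse_responses[true_section] + 0.3 * rng.normal(size=n)

        # Feature vector: normalised correlation of the measurement with each section's response.
        features = np.array([np.dot(measured, h) / (np.linalg.norm(measured) * np.linalg.norm(h))
                             for h in impulse_responses])

        print("features:", np.round(features, 2))
        print("estimated fault section:", int(np.argmax(features)))   # a neural network would replace this argmax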

  3. Hierarchy Bayesian model based services awareness of high-speed optical access networks

    Science.gov (United States)

    Bai, Hui-feng

    2018-03-01

    As the speed of optical access networks soars and the services they carry multiply, the service-supporting ability of optical access networks suffers greatly from the lack of service awareness. Aiming to solve this problem, a hierarchy Bayesian model based services awareness mechanism is proposed for high-speed optical access networks. This approach builds a so-called hierarchy Bayesian model, according to the structure of typical optical access networks. Moreover, the proposed scheme is able to conduct simple services awareness operation in each optical network unit (ONU) and to perform complex services awareness from the whole view of the system in the optical line terminal (OLT). Simulation results show that the proposed scheme is able to achieve better quality of services (QoS), in terms of packet loss rate and time delay.

  4. Polarity-specific high-level information propagation in neural networks.

    Science.gov (United States)

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  5. Networking

    OpenAIRE

    Rauno Lindholm, Daniel; Boisen Devantier, Lykke; Nyborg, Karoline Lykke; Høgsbro, Andreas; Fries, de; Skovlund, Louise

    2016-01-01

    The purpose of this project was to examine which influencing factors have had an impact on the presumed increase in the use of networking among academics on the labour market, and how this is expressed. On the basis of the influence of globalization on the labour market, it can be concluded that globalization has transformed the labour market into a market based on the organization of networks. In this new organization there is a greater emphasis on employees having social qualificati...

  6. Fracture Simulation of Highly Crosslinked Polymer Networks: Triglyceride-Based Adhesives

    Science.gov (United States)

    Lorenz, Christian; Stevens, Mark; Wool, Richard

    2003-03-01

    The ACRES program at the U. of Delaware has shown that triglyceride oils derived from plants are a favorable alternative to the traditional adhesives. The triglyceride networks are formed from an initial mixture of styrene monomers, free-radical initiators and triglycerides. We have performed simulations to study the effect of physical composition and physical characteristics of the triglyceride network on the strength of triglyceride network. A coarse-grained, bead-spring model of the triglyceride system is used. The average triglyceride consists of 6 beads per chain, the styrenes are represented as a single bead and the initiators are two bead chains. The polymer network is formed using an off-lattice 3D Monte Carlo simulation, in which the initiators activate the styrene and triglyceride reactive sites and then bonds are randomly formed between the styrene and active triglyceride monomers producing a highly crosslinked polymer network. Molecular dynamics simulations of the network under tensile and shear strains were performed to determine the strength as a function of the network composition. The relationship between the network structure and its strength will also be discussed.
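
    To make the crosslinking step concrete, here is a minimal, purely illustrative Monte Carlo sketch that randomly bonds styrene beads to activated triglyceride reactive sites until a target conversion is reached; the bead counts, site counts and conversion target are made-up parameters, and the actual study uses an off-lattice 3D bead-spring model with explicit initiators.

    ```python
    import random

    def crosslink(n_styrene=500, n_triglyceride=100, sites_per_tg=3,
                  target_conversion=0.8, seed=1):
        """Toy Monte Carlo crosslinking: randomly bond styrene beads to
        activated triglyceride reactive sites until a target fraction of
        sites has reacted (illustrative only)."""
        random.seed(seed)
        styrene = list(range(n_styrene))                  # unreacted styrene beads
        sites = [(tg, s) for tg in range(n_triglyceride)
                 for s in range(sites_per_tg)]            # (chain, site) reactive sites
        bonds = []
        n_target = int(target_conversion * len(sites))
        while sites and styrene and len(bonds) < n_target:
            site = sites.pop(random.randrange(len(sites)))
            bead = styrene.pop(random.randrange(len(styrene)))
            bonds.append((site, bead))                    # form one crosslink
        return bonds

    bonds = crosslink()
    print(f"{len(bonds)} crosslinks formed")
    ```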

  7. SDN control of optical nodes in metro networks for high capacity inter-datacentre links

    Science.gov (United States)

    Magalhães, Eduardo; Perry, Philip; Barry, Liam

    2017-11-01

    Worldwide demand for bandwidth has been growing fast for some years and continues to do so. To cover this, mega datacentres need scalable, rich connectivity to handle the heavy traffic between them. Therefore, hardware infrastructures must be able to play different roles according to service and traffic requirements. In this context, software defined networking (SDN) decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services. In addition, elastic optical networking (EON) technologies enable efficient spectrum utilization by allocating variable bandwidth to each user according to their actual needs. In particular, flexible transponders and reconfigurable optical add/drop multiplexers (ROADMs) are key elements since they can offer degrees of freedom to self-adapt accordingly. Thus, it is crucial to design control methods in order to optimize hardware utilization and offer high reconfigurability, flexibility and adaptability. In this paper, we propose and analyze, using a simulation framework, a method of capacity maximization through optical power profile manipulation for inter datacentre links that use existing metropolitan optical networks by exploiting the global network view afforded by SDN. Results show that manipulating the loss profiles of the ROADMs in the metro-network can yield optical signal-to-noise ratio (OSNR) improvements of up to 10 dB, leading to a 112% increase in total capacity.

  8. Text mining and network analysis to find functional associations of genes in high altitude diseases.

    Science.gov (United States)

    Bhasuran, Balu; Subramanian, Devika; Natarajan, Jeyakumar

    2018-05-02

    Travel to elevations above 2500 m is associated with the risk of developing one or more forms of acute altitude illness such as acute mountain sickness (AMS), high altitude cerebral edema (HACE) or high altitude pulmonary edema (HAPE). Our work aims to identify the functional association of genes involved in high altitude diseases. In this work we identified the gene networks responsible for high altitude diseases by using the principle of gene co-occurrence statistics from literature and network analysis. First, we mined the literature data from PubMed on high-altitude diseases, and extracted the co-occurring gene pairs. Next, based on their co-occurrence frequency, gene pairs were ranked. Finally, a gene association network was created using statistical measures to explore potential relationships. Network analysis results revealed that EPO, ACE, IL6 and TNF are among the top five genes that were found to co-occur with 20 or more genes, while the association between the EPAS1 and EGLN1 genes is strongly substantiated. The network constructed from this study proposes a large number of genes that work together in high altitude conditions. Overall, the result provides a good reference for further study of the genetic relationships in high altitude diseases. Copyright © 2018 Elsevier Ltd. All rights reserved.
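
    A minimal sketch of the co-occurrence counting and network-building steps described above; the gene lists and the frequency threshold are hypothetical placeholders for the PubMed-mined data used in the paper.

    ```python
    from collections import Counter
    from itertools import combinations

    import networkx as nx

    # Hypothetical input: gene symbols already tagged in each mined abstract.
    abstracts = [
        {"EPO", "EPAS1", "EGLN1"},
        {"EPO", "ACE", "IL6"},
        {"EPAS1", "EGLN1"},
        {"IL6", "TNF", "EPO"},
    ]

    # Count how often each unordered gene pair co-occurs in an abstract,
    # then rank pairs by co-occurrence frequency.
    pair_counts = Counter()
    for genes in abstracts:
        pair_counts.update(combinations(sorted(genes), 2))
    print(pair_counts.most_common(3))

    # Build the gene association network from pairs above a frequency threshold.
    G = nx.Graph()
    for (a, b), w in pair_counts.items():
        if w >= 2:
            G.add_edge(a, b, weight=w)
    print("hub genes:", sorted(G.degree, key=lambda kv: -kv[1]))
    ```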

  9. Do highly cited clinicians get more citations when being present at social networking sites?

    Science.gov (United States)

    Ramezani-Pakpour-Langeroudi, Fatemeh; Okhovati, Maryam; Talebian, Ali

    2018-01-01

    The advent of social networking sites has facilitated the dissemination of scientific research. This article aims to investigate the presence of Iranian highly cited clinicians on social networking sites. This is a scientometrics study. Essential Science Indicator (ESI) was searched for Iranian highly cited papers in clinical medicine during November-December 2015. Then, the authors of the papers were checked and a list of authors was obtained. In the second phase, the authors' names were searched in the selected social networking sites (ResearchGate [RG], Academia, Mendeley, LinkedIn). The total citations and h-index in Scopus were also gathered. Fifty-five highly cited papers were retrieved. A total of 107 authors participated in writing these papers. RG was the most popular (64.5%) and LinkedIn and Academia were in 2nd and 3rd place. None of the authors of highly cited papers were subscribed to Mendeley. A positive direct relationship was observed between visibility on social networking sites and citation and h-index rates. A significant relationship was observed between the RG score, citations and reads indicators in RG and citation numbers, and there was a significant relationship between the number-of-documents indicator in Academia and citation numbers. It seems that putting papers on social networking sites can influence the citation rate. We recommend that all scientists be present on social networking sites to have a better chance of visibility and citation.

  10. Identification of highly susceptible individuals in complex networks

    Science.gov (United States)

    Tang, Shaoting; Teng, Xian; Pei, Sen; Yan, Shu; Zheng, Zhiming

    2015-08-01

    Identifying highly susceptible individuals in spreading processes is of great significance in controlling outbreaks. In this paper, we explore the susceptibility of people in susceptible-infectious-recovered (SIR) and rumor spreading dynamics. We first study the impact of community structure on people's susceptibility. Although the community structure can reduce the number of infected people for the same infection rate, it will not significantly affect nodes' susceptibility. We find that the susceptibility of individuals is sensitive to the choice of spreading dynamics. For SIR spreading, since the susceptibility is highly correlated to nodes' influence, the topological indicator k-shell can better identify highly susceptible individuals, outperforming degree, betweenness centrality and PageRank. In contrast, in the rumor spreading model, where nodes' susceptibility and influence have no clear correlation, degree performs the best among considered topological measures. Our finding highlights the significance of both topological features and spreading mechanisms in identifying highly susceptible populations.
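
    The comparison described above can be reproduced in outline with a toy experiment: simulate many SIR outbreaks on a network, estimate each node's empirical susceptibility as the fraction of runs in which it was infected, and inspect the k-shell index of the most susceptible nodes. The sketch below does this on a Barabási-Albert stand-in network; the graph, infection rate and number of runs are illustrative choices, not the paper's settings.

    ```python
    import random
    from collections import Counter

    import networkx as nx

    def sir_run(G, beta=0.1, source=None):
        """One discrete-time SIR run (recovery after a single step, i.e. gamma = 1).
        Returns the set of nodes that were ever infected."""
        start = source if source is not None else random.choice(list(G))
        infected, recovered = {start}, set()
        while infected:
            newly = set()
            for u in infected:
                for v in G[u]:
                    if v not in infected and v not in recovered and random.random() < beta:
                        newly.add(v)
            recovered |= infected
            infected = newly - recovered
        return recovered

    random.seed(0)
    G = nx.barabasi_albert_graph(1000, 3, seed=0)   # stand-in contact network
    runs = 200
    hits = Counter()
    for _ in range(runs):
        hits.update(sir_run(G))

    susceptibility = {v: hits[v] / runs for v in G}  # empirical susceptibility
    kshell = nx.core_number(G)                       # topological indicator (k-shell)
    most_susceptible = sorted(G, key=susceptibility.get, reverse=True)[:10]
    print([(v, kshell[v]) for v in most_susceptible])
    ```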

  11. Structure Learning of Linear Bayesian Networks in High-Dimensions

    OpenAIRE

    Aragam, Nikhyl Bryon

    2015-01-01

    Research into graphical models is a rapidly developing enterprise, garnering significant interest from both the statistics and machine learning communities. A parallel thread in both communities has been the study of low-dimensional structures in high-dimensional models where $p \gg n$. Recently, there has been a surge of interest in connecting these threads in order to understand the behaviour of graphical models in high dimensions. Due to their relative simplicity, undirected models such as ...

  12. High capacity fiber optic sensor networks using hybrid multiplexing techniques and their applications

    Science.gov (United States)

    Sun, Qizhen; Li, Xiaolei; Zhang, Manliang; Liu, Qi; Liu, Hai; Liu, Deming

    2013-12-01

    Fiber optic sensor networks are the development trend of fiber sensor technologies and industries. In this paper, I will discuss recent research progress on high capacity fiber sensor networks with hybrid multiplexing techniques and their applications in the fields of security monitoring, environment monitoring, Smart eHome, etc. Firstly, I will present the architecture of the hybrid multiplexing sensor passive optical network (HSPON), and the key technologies for integrated access and intelligent management of massive numbers of fiber sensor units. Two typical hybrid WDM/TDM fiber sensor networks for perimeter intrusion monitoring and cultural relics security are introduced. Secondly, we propose the concept of the "Microstructure-Optical X Domain Reflector (M-OXDR)" for fiber sensor network expansion. By fabricating smart micro-structures with multidimensional encoding and low insertion loss along the fiber, a fiber sensor network with a simple structure and a capacity of more than one thousand sensing units can be achieved. Assisted by WDM/TDM and WDM/FDM decoding methods respectively, we built verification systems for long-haul and real-time temperature sensing. Finally, I will show a high capacity and flexible fiber sensor network with IPv6-protocol-based hybrid fiber/wireless access. By developing fiber optic sensors with an embedded IPv6 protocol conversion module and an IPv6 router, huge numbers of fiber optic sensor nodes can be uniquely addressed. Meanwhile, various kinds of sensing information can be integrated and made accessible over the Next Generation Internet.

  13. Cognitive radio networks efficient resource allocation in cooperative sensing, cellular communications, high-speed vehicles, and smart grid

    CERN Document Server

    Jiang, Tao; Cao, Yang

    2015-01-01

    Contents: Preface; Acknowledgments; About the Authors; Introduction; Cognitive Radio-Based Networks; Opportunistic Spectrum Access Networks; Cognitive Radio Networks with Cooperative Sensing; Cognitive Radio Networks for Cellular Communications; Cognitive Radio Networks for High-Speed Vehicles; Cognitive Radio Networks for a Smart Grid; Content and Organization; Transmission Slot Allocation in an Opportunistic Spectrum Access Network; Single-User Single-Channel System Model; Probabilistic Slot Allocation Scheme; Optimal Probabilistic Slot Allocation; Baseline Performance; Exponential Distribution; Hyper-Erlang Distribution; Performance An...

  14. APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters

    International Nuclear Information System (INIS)

    Ammendola, R; Salamon, A; Salina, G; Biagioni, A; Prezza, O; Cicero, F Lo; Lonardo, A; Paolucci, P S; Rossetti, D; Tosoratto, L; Vicini, P; Simula, F

    2011-01-01

    We describe herein the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology plus hardware support for a RDMA programming model and experimental acceleration of GPU networking; this design allows us to build a low latency, high bandwidth PC cluster, the APEnet+ network, the new generation of our cost-effective, tens-of-thousands-scalable cluster network architecture. Some test results and characterization of data transmission of a complete testbench, based on a commercial development card mounting an Altera ® FPGA, are provided.

  15. APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R; Salamon, A; Salina, G [INFN Tor Vergata, Roma (Italy); Biagioni, A; Prezza, O; Cicero, F Lo; Lonardo, A; Paolucci, P S; Rossetti, D; Tosoratto, L; Vicini, P [INFN Roma, Roma (Italy); Simula, F [Sapienza Universita di Roma, Roma (Italy)

    2011-12-23

    We describe herein the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology plus hardware support for a RDMA programming model and experimental acceleration of GPU networking; this design allows us to build a low latency, high bandwidth PC cluster, the APEnet+ network, the new generation of our cost-effective, tens-of-thousands-scalable cluster network architecture. Some test results and characterization of data transmission of a complete testbench, based on a commercial development card mounting an Altera® FPGA, are provided.

  16. International network connectivity and performance -- The challenge from high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, W.

    2000-03-20

    The requirements of the new generation of High Energy and Nuclear Physics (HENP) experiments such as the BaBar detector at the Stanford Linear Accelerator Center (SLAC), the Relativistic Heavy Ion Collider (RHIC) groups at the Brookhaven National Laboratory (BNL) and the LHC projects currently under development at the European Center for Particle Physics (CERN) are a huge challenge to networking. In order to increase understanding and to improve performance and connectivity by identifying bottlenecks and allocating resources, the HENP networking community has been actively monitoring the network for over five years.

  17. The ISMAR high frequency coastal radar network: Monitoring surface currents for management of marine resources

    DEFF Research Database (Denmark)

    Carlson, Daniel Frazier

    2015-01-01

    The Institute of Marine Sciences (ISMAR) of the National Research Council of Italy (CNR) established a High Frequency (HF) Coastal Radar Network for the measurement of the velocity of surface currents in coastal seas. The network consists of four HF radar systems located on the coast of the Gargano Promontory (Southern Adriatic, Italy). The network has been operational since May 2013 and covers an area of approximately 1700 square kilometers in the Gulf of Manfredonia. Quality Assessment (QA) procedures are applied for the systems deployment and maintenance and Quality Control (QC) procedures...

  18. Performance of highly connected photonic switching lossless metro-access optical networks

    Science.gov (United States)

    Martins, Indayara Bertoldi; Martins, Yara; Barbosa, Felipe Rudge

    2018-03-01

    The present work analyzes the performance of photonic switching networks, optical packet switching (OPS) and optical burst switching (OBS), in mesh topologies of different sizes and configurations. The "lossless" photonic switching node is based on a semiconductor optical amplifier, demonstrated and validated with experimental results on optical power gain, noise figure, and spectral range. The network performance was evaluated through computer simulations based on parameters such as average number of hops, optical packet loss fraction, and optical transport delay. The combination of these elements leads to a consistent account of performance, in terms of network traffic and packet delivery for OPS and OBS metropolitan networks. Results show that a combination of highly connected mesh topologies having an ingress e-buffer presents high efficiency and throughput, with very low packet loss and low latency, ensuring fast data delivery to the final receiver.

  19. Self-constructed tree-shape high thermal conductivity nanosilver networks in epoxy.

    Science.gov (United States)

    Pashayi, Kamyar; Fard, Hafez Raeisi; Lai, Fengyuan; Iruvanti, Sushumna; Plawsky, Joel; Borca-Tasciuc, Theodorian

    2014-04-21

    We report the formation of high aspect ratio nanoscale tree-shape silver networks in epoxy at low temperatures, which substantially enhances the thermal conductivity (κ) of the nanocomposite compared to the polymer matrix. The networks form through a three-step process comprising self-assembly by diffusion limited aggregation of polyvinylpyrrolidone (PVP) coated nanoparticles, removal of the PVP coating from the surface, and sintering of silver nanoparticles into high aspect ratio networked structures. Controlling self-assembly and sintering by carefully designed multistep temperature and time processing leads to values of κ for our silver nanocomposites that are up to 300% of those of the present state-of-the-art polymer nanocomposites at similar volume fractions. Our investigation of the κ enhancements enabled by tree-shaped network nanocomposites provides a basis for the development of new polymer nanocomposites for thermal transport and storage applications.

  20. System, apparatus and methods to implement high-speed network analyzers

    Science.gov (United States)

    Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E

    2015-11-10

    Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such a dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.

  1. The impact of indiscriminate media coverage of a celebrity suicide on a society with a high suicide rate: epidemiological findings on copycat suicides from South Korea.

    Science.gov (United States)

    Ju Ji, Nam; Young Lee, Weon; Seok Noh, Maeng; Yip, Paul S F

    2014-03-01

    This study examines the extent to which the indiscriminate media coverage of the famous young actress Lee Eun-ju's suicide in 2005 affected suicides overall and in specific subgroups (by age, gender, and suicide method) in a suicide-prone society, South Korea. South Korea's 2003-2005 suicide data (n=34,237) were obtained from death certificate records of the National Statistical Office (NSO). Data were analyzed with Poisson time series auto-regression models. After adjusting for confounding factors (such as seasonal variation, calendar year, temperature, humidity, and unemployment rate), there was a significant increase in suicide (RR=1.40, 95% CI=1.30-1.51; no. of excess mortalities=331, 95% CI=267-391) during the 4 weeks after Lee's suicide. This increase was more prominent in subgroups with similar characteristics to the celebrity. In particular, the relative risk of suicide during this period was the largest (5.24; 95% CI=3.31-8.29) in young women who used the same suicide method as the celebrity. Moreover, the incidence of these copycat suicides during the same time significantly increased in both genders and in all age subgroups among those who committed suicide using the same method as the celebrity (hanging). It is difficult to prove conclusively that the real motivation of the suicides was Lee's death. The findings from this study imply that, if the media indiscreetly reports the suicide of a celebrity in a suicide-prone society, the copycat effect can be far-reaching and very strong, particularly for vulnerable people. © 2013 Published by Elsevier B.V.
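
    A hedged sketch of the kind of Poisson regression used here: weekly counts are modelled with confounders plus an indicator for the four post-event weeks, the exponentiated coefficient gives the relative risk, and excess deaths are the observed counts minus the counterfactual prediction. All data below are synthetic and the covariate set is simplified relative to the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic weekly data with the confounders named in the abstract
    # (seasonality, temperature, humidity, unemployment) and a 4-week
    # post-celebrity-suicide indicator.
    rng = np.random.default_rng(0)
    n = 156                                    # three years of weekly data
    week = np.arange(n)
    df = pd.DataFrame({
        "suicides": rng.poisson(220, n),
        "temp": rng.normal(12, 8, n),
        "humidity": rng.normal(60, 10, n),
        "unemployment": rng.normal(3.5, 0.3, n),
        "season_sin": np.sin(2 * np.pi * week / 52),
        "season_cos": np.cos(2 * np.pi * week / 52),
        "post_event": ((week >= 110) & (week < 114)).astype(int),
    })

    X = sm.add_constant(df.drop(columns="suicides"))
    fit = sm.GLM(df["suicides"], X, family=sm.families.Poisson()).fit()
    rr = np.exp(fit.params["post_event"])      # relative risk for the 4-week window
    expected = fit.predict(X.assign(post_event=0))
    excess = (df["suicides"] - expected)[df["post_event"] == 1].sum()
    print(f"RR = {rr:.2f}, excess deaths in window = {excess:.0f}")
    ```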

  2. Reaching mothers and babies with early postnatal home visits: the implementation realities of achieving high coverage in large-scale programs.

    Directory of Open Access Journals (Sweden)

    Deborah Sitrin

    Full Text Available BACKGROUND: Nearly half of births in low-income countries occur without a skilled attendant, and even fewer mothers and babies have postnatal contact with providers who can deliver preventive or curative services that save lives. Community-based maternal and newborn care programs with postnatal home visits have been tested in Bangladesh, Malawi, and Nepal. This paper examines coverage and content of home visits in pilot areas and factors associated with receipt of postnatal visits. METHODS: Using data from cross-sectional surveys of women with live births (Bangladesh: 398, Malawi: 900, Nepal: 615), generalized linear models were used to assess the strength of association between three factors - receipt of home visits during pregnancy, birth place, birth notification - and receipt of home visits within three days after birth. Meta-analytic techniques were used to generate pooled relative risks for each factor adjusting for other independent variables, maternal age, and education. FINDINGS: The proportion of mothers and newborns receiving home visits within three days after birth was 57% in Bangladesh, 11% in Malawi, and 50% in Nepal. Mothers and newborns were more likely to receive a postnatal home visit within three days if the mother received at least one home visit during pregnancy (OR 2.18, CI 1.46-3.25), the birth occurred outside a facility (OR 1.48, CI 1.28-1.73), and the mother reported a CHW was notified of the birth (OR 2.66, CI 1.40-5.08). Checking the cord was the most frequently reported action; most mothers reported at least one action for newborns. CONCLUSIONS: Reaching mothers and babies with home visits during pregnancy and within three days after birth is achievable using existing community health systems if workers are available; linked to communities; and receive training, supplies, and supervision. In all settings, programs must evaluate what community delivery systems can handle and how to best utilize them to improve postnatal care

  3. On global exponential stability of high-order neural networks with time-varying delays

    International Nuclear Information System (INIS)

    Zhang Baoyong; Xu Shengyuan; Li Yongmin; Chu Yuming

    2007-01-01

    This Letter investigates the problem of stability analysis for a class of high-order neural networks with time-varying delays. The delays are bounded but not necessarily differentiable. Based on the Lyapunov stability theory together with the linear matrix inequality (LMI) approach and the use of Halanay inequality, sufficient conditions guaranteeing the global exponential stability of the equilibrium point of the considered neural networks are presented. Two numerical examples are provided to demonstrate the effectiveness of the proposed stability criteria

  4. On global exponential stability of high-order neural networks with time-varying delays

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Baoyong [School of Automation, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu (China)]. E-mail: baoyongzhang@yahoo.com.cn; Xu Shengyuan [School of Automation, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu (China)]. E-mail: syxu02@yahoo.com.cn; Li Yongmin [School of Automation, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu (China) and Department of Mathematics, Huzhou Teacher's College, Huzhou 313000, Zhejiang (China)]. E-mail: ymlwww@163.com; Chu Yuming [Department of Mathematics, Huzhou Teacher's College, Huzhou 313000, Zhejiang (China)]

    2007-06-18

    This Letter investigates the problem of stability analysis for a class of high-order neural networks with time-varying delays. The delays are bounded but not necessarily differentiable. Based on the Lyapunov stability theory together with the linear matrix inequality (LMI) approach and the use of Halanay inequality, sufficient conditions guaranteeing the global exponential stability of the equilibrium point of the considered neural networks are presented. Two numerical examples are provided to demonstrate the effectiveness of the proposed stability criteria.
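
    For reference, the scalar Halanay inequality invoked in both of these records is usually stated as follows (a standard form quoted from the delay-systems literature, not taken from the Letter itself):

    ```latex
    % If a non-negative function v satisfies, for t >= t_0 and delay tau > 0,
    \[
      \dot{v}(t) \le -a\, v(t) + b \sup_{t-\tau \le s \le t} v(s), \qquad a > b > 0,
    \]
    % then there exist constants k > 0 and \gamma > 0 such that
    \[
      v(t) \le k\, e^{-\gamma (t - t_0)} \quad \text{for } t \ge t_0,
    \]
    % i.e. v decays exponentially; applying this to a suitable Lyapunov functional
    % yields the global exponential stability estimate for the delayed high-order
    % neural network.
    ```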

  5. Coverage Extension via Side-Lobe Transmission in Multibeam Satellite System

    OpenAIRE

    Gharanjik, Ahmad; Kmieciak, Jarek; Shankar, Bhavani; Ottersten, Björn

    2017-01-01

    In this paper, we study the feasibility of coverage extension of a multibeam satellite network by providing low-rate communications to terminals located outside the coverage of the main beams. Focusing on the MEO satellite network, and using realistic link budgets from O3b networks, we investigate the performance of both forward and return links for terminals stationed in the side lobes of the main beams. Particularly, multi-carrier transmission for forward-link and single carrier transmission for re...

  6. Aspects of coverage in medical DNA sequencing

    Directory of Open Access Journals (Sweden)

    Wilson Richard K

    2008-05-01

    Full Text Available Abstract Background DNA sequencing is now emerging as an important component in biomedical studies of diseases like cancer. Short-read, highly parallel sequencing instruments are expected to be used heavily for such projects, but many design specifications have yet to be conclusively established. Perhaps the most fundamental of these is the redundancy required to detect sequence variations, which bears directly upon genomic coverage and the consequent resolving power for discerning somatic mutations. Results We address the medical sequencing coverage problem via an extension of the standard mathematical theory of haploid coverage. The expected diploid multi-fold coverage, as well as its generalization for aneuploidy are derived and these expressions can be readily evaluated for any project. The resulting theory is used as a scaling law to calibrate performance to that of standard BAC sequencing at 8× to 10× redundancy, i.e. for expected coverages that exceed 99% of the unique sequence. A differential strategy is formalized for tumor/normal studies wherein tumor samples are sequenced more deeply than normal ones. In particular, both tumor alleles should be detected at least twice, while both normal alleles are detected at least once. Our theory predicts these requirements can be met for tumor and normal redundancies of approximately 26× and 21×, respectively. We explain why these values do not differ by a factor of 2, as might intuitively be expected. Future technology developments should prompt even deeper sequencing of tumors, but the 21× value for normal samples is essentially a constant. Conclusion Given the assumptions of standard coverage theory, our model gives pragmatic estimates for required redundancy. The differential strategy should be an efficient means of identifying potential somatic mutations for further study.
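
    Under the Poisson model of standard coverage theory, the redundancy figures quoted above can be sanity-checked in a few lines of code. The sketch below assumes reads split evenly between the two alleles, which is a simplified reading of the paper's derivation (and ignores the aneuploid generalization).

    ```python
    from math import exp, factorial

    def p_at_least(lam, k):
        """P[Poisson(lam) >= k]: chance a position is hit by at least k reads."""
        return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

    def diploid_detection(redundancy, min_reads_per_allele):
        """Probability that both alleles of a diploid site are each covered at
        least min_reads_per_allele times, assuming reads split evenly so each
        allele sees Poisson(redundancy / 2) depth (a simplification of the
        paper's expressions)."""
        per_allele = p_at_least(redundancy / 2.0, min_reads_per_allele)
        return per_allele ** 2

    # Differential tumor/normal strategy quoted in the abstract:
    # tumor ~26x with both alleles seen at least twice, normal ~21x at least once.
    print(f"tumor  26x, >=2 reads per allele: {diploid_detection(26, 2):.5f}")
    print(f"normal 21x, >=1 read  per allele: {diploid_detection(21, 1):.5f}")
    ```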

  7. Dynamic Social Networks in High Performance Football Coaching

    Science.gov (United States)

    Occhino, Joseph; Mallett, Cliff; Rynne, Steven

    2013-01-01

    Background: Sports coaching is largely a social activity where engagement with athletes and support staff can enhance the experiences for all involved. This paper examines how high performance football coaches develop knowledge through their interactions with others within a social learning theory framework. Purpose: The key purpose of this study…

  8. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database (Elsevier) covering the time period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
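
    A small sketch of the co-authorship construction: rank authors by paper count, keep the top authors, and build a weighted co-authorship graph from the papers they appear on. The author lists below are invented placeholders for the Scopus records.

    ```python
    from collections import Counter
    from itertools import combinations

    import networkx as nx

    # Hypothetical records: the author list of each article (in practice parsed
    # from a Scopus export for 2004-2013).
    papers = [
        ["Kim J", "Lee S", "Park H"],
        ["Kim J", "Park H"],
        ["Lee S", "Choi M"],
        ["Kim J", "Lee S"],
    ]

    # Rank authors by paper count and keep the top-k as focal authors.
    paper_counts = Counter(a for authors in papers for a in authors)
    top_authors = {a for a, _ in paper_counts.most_common(2)}

    # Co-authorship network restricted to papers involving a focal author;
    # edge weights count the number of joint papers.
    G = nx.Graph()
    for authors in papers:
        if top_authors & set(authors):
            for u, v in combinations(sorted(set(authors)), 2):
                w = G.get_edge_data(u, v, default={"weight": 0})["weight"]
                G.add_edge(u, v, weight=w + 1)

    print("authors:", G.number_of_nodes(), "links:", G.number_of_edges())
    print("degree by author:", dict(G.degree()))
    ```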

  9. Awareness and Coverage of the National Health Insurance Scheme ...

    African Journals Online (AJOL)

    Sub-national levels possess a high degree of autonomy in a number of sectors including health. It is important to assess the level of coverage of the scheme among the formal sector workers in Nigeria as a proxy to gauge the extent of coverage of the scheme and derive suitable lessons that could be used in its expansion.

  10. A New Technique of Removing Blind Spots to Optimize Wireless Coverage in Indoor Area

    Directory of Open Access Journals (Sweden)

    A. W. Reza

    2013-01-01

    Full Text Available Blind spots (or bad sampling points) in indoor areas are the positions where no signal exists (or the signal is too weak), and the presence of a receiver within a blind spot degrades the performance of the communication system. Therefore, it is one of the fundamental requirements to eliminate the blind spots from the indoor area and obtain the maximum coverage while designing the wireless networks. In this regard, this paper combines ray-tracing (RT), genetic algorithm (GA), depth first search (DFS), and the branch-and-bound method as a new technique that guarantees the removal of blind spots and subsequently determines the optimal wireless coverage using a minimum number of transmitters. The proposed system outperforms the existing techniques in terms of algorithmic complexity and demonstrates that the computation time can be reduced by as much as 99% and 75%, respectively, as compared to existing algorithms. Moreover, in terms of experimental analysis, the coverage prediction successfully reaches 99% and, thus, the proposed coverage model effectively guarantees the removal of blind spots.
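
    The optimization core of such a method is a set-cover problem: choose the fewest candidate transmitter sites whose predicted coverage areas together include every sample point. The sketch below uses a simple greedy heuristic as a stand-in for the paper's GA/DFS/branch-and-bound combination; the coverage sets would normally come from the ray-tracing step, and all names here are illustrative.

    ```python
    def place_transmitters(coverage, all_points):
        """Greedy cover: coverage maps a candidate transmitter site to the set of
        indoor sample points it covers; returns a list of chosen sites that
        together cover every point (raises if some points cannot be covered)."""
        uncovered = set(all_points)
        chosen = []
        while uncovered:
            # pick the candidate that removes the most remaining blind spots
            best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
            if not coverage[best] & uncovered:
                raise ValueError("remaining points cannot be covered (blind spots)")
            chosen.append(best)
            uncovered -= coverage[best]
        return chosen

    # Toy example: 6 sample points, 3 candidate transmitter sites.
    coverage = {"T1": {1, 2, 3}, "T2": {3, 4, 5}, "T3": {5, 6}}
    print(place_transmitters(coverage, all_points={1, 2, 3, 4, 5, 6}))
    ```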

  11. Epidemics and agendas: the politics of nightly news coverage of AIDS.

    Science.gov (United States)

    Colby, D C; Cook, T E

    1991-01-01

    We examine why the exponential growth of AIDS cases or the wide-spread professional perception of a health crisis did not move the epidemic more quickly onto the agenda of public problems. One possible explanation focuses on how the national news media's construction of AIDS shaped the meaning of the epidemic for mass and elite audiences. An examination of nightly news coverage by the three major networks from 1982 to 1989 reveals considerable variability and volatility in their coverage. Topic-driven saturation coverage occurred only during three short periods in 1983, 1985, and 1987, when the epidemic seemed likely to affect the "general population". Only at such moments did public opinion shift and discussion and debate in government begin. Otherwise, the typical AIDS story tended less to sensationalize than to reassure, largely because journalists depended upon government officials and high-ranking doctors to present them with evidence of news. Such sources had interests either in avoiding coverage or in pointing toward breakthroughs; more critical sources, especially within the gay movement, had far less access to the news. In concluding, we considered the prospects and pitfalls of the news media's power to shape the public agenda.

  12. Tutorial on neural network applications in high energy physics: A 1992 perspective

    International Nuclear Information System (INIS)

    Denby, B.

    1992-04-01

    Feed forward and recurrent neural networks are introduced and related to standard data analysis tools. Tips are given on applications of neural nets to various areas of high energy physics. A review of applications within high energy physics and a summary of neural net hardware status are given

  13. Traditional nets interfere with the uptake of long-lasting insecticidal nets in the Peruvian Amazon: the relevance of net preference for achieving high coverage and use.

    Directory of Open Access Journals (Sweden)

    Koen Peeters Grietens

    Full Text Available While coverage of long-lasting insecticide-treated nets (LLINs) has steadily increased, a growing number of studies report gaps between net ownership and use. We conducted a mixed-methods social science study assessing the importance of net preference and use after Olyset® LLINs were distributed through a mass campaign in rural communities surrounding Iquitos, the capital city of the Amazonian region of Peru. The study was conducted in the catchment area of the Paujil and Cahuide Health Centres (San Juan district) between July 2007 and November 2008. During a first qualitative phase, participant observation and in-depth interviews collected information on key determinants for net preference and use. In a second quantitative phase, a survey among recently confirmed malaria patients evaluated the acceptability and use of both LLINs and traditional nets, and a case control study assessed the association between net preference/use and housing structure (open vs. closed houses). A total of 10 communities were selected for the anthropological fieldwork and 228 households participated in the quantitative studies. In the study area, bed nets are considered part of the housing structure and are therefore required to fulfil specific architectural and social functions, such as providing privacy and shelter, which the newly distributed Olyset® LLINs ultimately did not. The LLINs' failure to meet these criteria could mainly be attributed to their large mesh size, transparency and perceived ineffectiveness to protect against mosquitoes and other insects, resulting in 63.3% of households not using any of the distributed LLINs. Notably, LLIN usage was significantly lower in houses with no interior or exterior walls (35.2%) than in those with walls (73.8%) (OR = 5.2, 95% CI [2.2; 12.3], p<0.001). Net preference can interfere with optimal LLIN use. In order to improve the number of effective days of LLIN protection per dollar spent, appropriate quantitative and qualitative

  14. Economic downturns, universal health coverage, and cancer mortality in high-income and middle-income countries, 1990-2010: a longitudinal analysis.

    Science.gov (United States)

    Maruthappu, Mahiben; Watkins, Johnathan; Noor, Aisyah Mohd; Williams, Callum; Ali, Raghib; Sullivan, Richard; Zeltner, Thomas; Atun, Rifat

    2016-08-13

    The global economic crisis has been associated with increased unemployment and reduced public-sector expenditure on health care (PEH). We estimated the effects of changes in unemployment and PEH on cancer mortality, and identified how universal health coverage (UHC) affected these relationships. For this longitudinal analysis, we obtained data from the World Bank and WHO (1990-2010). We aggregated mortality data for breast cancer in women, prostate cancer in men, and colorectal cancers in men and women, which are associated with survival rates that exceed 50%, into a treatable cancer class. We likewise aggregated data for lung and pancreatic cancers, which have 5 year survival rates of less than 10%, into an untreatable cancer class. We used multivariable regression analysis, controlling for country-specific demographics and infrastructure, with time-lag analyses and robustness checks to investigate the relationship between unemployment, PEH, and cancer mortality, with and without UHC. We used trend analysis to project mortality rates, on the basis of trends before the sharp unemployment rise that occurred in many countries from 2008 to 2010, and compared them with observed rates. Data were available for 75 countries, representing 2.106 billion people, for the unemployment analysis and for 79 countries, representing 2.156 billion people, for the PEH analysis. Unemployment rises were significantly associated with an increase in all-cancer mortality and all specific cancers except lung cancer in women. By contrast, untreatable cancer mortality was not significantly linked with changes in unemployment. Lag analyses showed significant associations remained 5 years after unemployment increases for the treatable cancer class. Rerunning analyses, while accounting for UHC status, removed the significant associations. All-cancer, treatable cancer, and specific cancer mortalities significantly decreased as PEH increased. Time-series analysis provided an estimate of more than

  15. Performance evaluation of two highly interconnected Data Center networks

    DEFF Research Database (Denmark)

    Andrus, Bogdan-Mihai; Mihai Poncea, Ovidiu; Vegas Olmos, Juan José

    2015-01-01

    In this paper we present an analysis of highly interconnected topologies, such as the hypercube and the torus, and how they can be implemented in data centers in order to cope with the rapid growth of internal traffic and its performance demands. By replicating the topologies and subjecting them to uniformly distributed traffic routed by shortest path algorithms, we are able to extract relevant statistics related to average throughput, latency and loss rate. A decrease in throughput per connection of only about 5% for the hypercube compared to 16% for the 3D torus was measured when the size...
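
    The two topologies can be compared at small scale with off-the-shelf graph tools; the sketch below builds a 64-node hypercube and a 4x4x4 torus and reports the average hop count under shortest-path routing (the sizes and metrics are illustrative, not the ones simulated in the paper).

    ```python
    import networkx as nx

    # Hypercube vs. 3D torus at comparable size (64 nodes): average hop count
    # under shortest-path routing, one of the statistics the paper compares.
    topologies = {
        "hypercube": nx.hypercube_graph(6),                      # 2^6 = 64 nodes
        "3D torus": nx.grid_graph(dim=[4, 4, 4], periodic=True), # 4*4*4 = 64 nodes
    }

    for name, G in topologies.items():
        avg_hops = nx.average_shortest_path_length(G)
        degrees = [d for _, d in G.degree()]
        print(f"{name:9s} nodes={G.number_of_nodes():3d} "
              f"degree={degrees[0]} avg_hops={avg_hops:.2f}")
    ```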

  16. The high resolution mapping of the Venice Lagoon tidal network

    Science.gov (United States)

    Madricardo, Fantina; Foglini, Federica; Kruss, Aleksandra; Bellafiore, Debora; Trincardi, Fabio

    2017-04-01

    One of the biggest challenges of the direct observation of the ocean is to achieve a high resolution mapping of its seafloor morphology and benthic habitats. So far, sonars have mapped just 0.05% of the ocean floor with less than ten-meter resolution. The recent efforts of the scientific community have been devoted towards the mapping of both Deep Ocean and very shallow coastal areas. Coastal and transitional environments in particular undergo strong morphological changes due to natural and anthropogenic pressure. Nowadays, only about 5% of the seafloor of these environments has been mapped: the shallowness of these environments has prevented the use of underwater acoustics to reveal their morphological features. The recent technological development of multibeam echosounder systems, however, enables these instruments to achieve very high performance even in such shallow environments. In this work, we present results and case studies of an extensive multibeam survey carried out in the Lagoon of Venice in 2013. The Lagoon of Venice is the biggest lagoon in the Mediterranean Sea with a surface of about 550 km2 and with an average depth of about 1 m. In the last century, the morphological and ecological properties of the lagoon changed dramatically: the surface of the salt marshes was reduced by 60% and some parts of the lagoon are deepening with a net sediment flux exiting from the inlets. Moreover, major engineering interventions are currently ongoing at the inlets (MOSE project). These changes at the inlets could substantially affect the lagoon environment. To understand and monitor the future evolution of the Lagoon of Venice, ISMAR within the project RITMARE (a National Research Programme funded by the Italian Ministry of University and Research) carried out an extensive survey, involving a team of more than 25 scientists, to collect high resolution (0.5 m) bathymetry of key study areas such as the tidal inlets and channels. Following a broad

  17. Fixed Access Network Sharing

    Science.gov (United States)

    Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio

    2015-12-01

    Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4k and 8k lines of video resolution. However, building fiber access networks is a costly endeavor. It requires significant capital in order to cover any significant geographic coverage. Hence many companies are forming partnerships and joint-ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build to cover a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bi-lateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, the service provider has a greater ability to define the network and hence to define their product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.

  18. Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation

    Directory of Open Access Journals (Sweden)

    M. Agatonović

    2012-12-01

    Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for the networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.

  19. High Performance Implementation of 3D Convolutional Neural Networks on a GPU

    Science.gov (United States)

    Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version. PMID:29250109

  20. High Performance Implementation of 3D Convolutional Neural Networks on a GPU.

    Science.gov (United States)

    Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
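
    For orientation, the direct 3D convolution that Winograd- and FFT-based methods aim to accelerate can be written in a few lines; the sketch below is a naive single-channel reference implementation used only to show where the multiplications go, not the optimized GPU kernel from the paper.

    ```python
    import numpy as np

    def conv3d_direct(x, w):
        """Naive valid-mode 3D convolution (single channel, no stride/padding).
        This is the direct method whose multiplication count Winograd/FFT
        approaches aim to reduce."""
        D, H, W = x.shape
        kd, kh, kw = w.shape
        out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
        for d in range(out.shape[0]):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    out[d, i, j] = np.sum(x[d:d + kd, i:i + kh, j:j + kw] * w)
        return out

    x = np.random.rand(16, 16, 16)       # e.g. a small video/volume block
    w = np.random.rand(3, 3, 3)          # 3x3x3 kernel, as in common 3D CNNs
    y = conv3d_direct(x, w)
    muls = y.size * w.size               # multiplications used by the direct method
    print(y.shape, f"{muls} multiplications")
    ```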

  1. Interconnected V2O5 nanoporous network for high-performance supercapacitors.

    Science.gov (United States)

    Saravanakumar, B; Purushothaman, Kamatchi K; Muralidharan, G

    2012-09-26

    Vanadium pentoxide (V2O5) has attracted attention for supercapacitor applications because of its extensive multifunctional properties. In the present study, a V2O5 nanoporous network was synthesized via a simple capping-agent-assisted precipitation technique and further annealed at different temperatures. The effect of annealing temperature on the morphology, electrochemical and structural properties, and stability upon oxidation-reduction cycling has been analyzed for supercapacitor application. We achieved a highest specific capacitance of 316 F/g for the interconnected V2O5 nanoporous network. This interconnected nanoporous network creates facile nanochannels for ion diffusion and facilitates easy accessibility of ions. Moreover, after six hundred consecutive cycles the specific capacitance changed by only 24%. This simple, cost-effective preparation technique for V2O5 nanoporous networks with excellent capacitive behavior, energy density, and stability encourages their possible commercial exploitation for the development of high-performance supercapacitors.

  2. Experience and Lessons learnt from running High Availability Databases on Network Attached Storage

    CERN Document Server

    Guijarro, Manuel

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over.

  3. Experience and lessons learnt from running high availability databases on network attached storage

    International Nuclear Information System (INIS)

    Guijarro, M; Gaspar, R

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over

  4. Artificial neural networks in prediction of mechanical behavior of concrete at high temperature

    International Nuclear Information System (INIS)

    Mukherjee, A.; Nag Biswas, S.

    1997-01-01

    The behavior of concrete structures that are exposed to extreme thermo-mechanical loading is an issue of great importance in nuclear engineering. The mechanical behavior of concrete at high temperature is non-linear. The properties that regulate its response are highly temperature dependent and extremely complex. In addition, the constituent materials, e.g. aggregates, influence the response significantly. Attempts have been made to trace the stress-strain curve through mathematical models and rheological models. However, it has been difficult to include all the contributing factors in the mathematical model. This paper examines a new programming paradigm, artificial neural networks, for the problem. By implementing a feedforward network and the backpropagation algorithm, the stress-strain relationship of the material is captured. Neural networks for the prediction of the uniaxial behavior of concrete at high temperature are presented here. The results of the present investigation are very encouraging. (orig.)
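
    A minimal illustration of the idea with today's off-the-shelf tools: fit a small feedforward network to (strain, temperature) → stress pairs. The training data below are generated from a made-up softening rule purely to exercise the network; the paper trains on experimental stress-strain behavior with its own backpropagation implementation.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic training data: stress from a made-up temperature-softening rule
    # (units nominally MPa); a real study would use measured curves.
    rng = np.random.default_rng(0)
    strain = rng.uniform(0.0, 0.004, 2000)
    temp = rng.uniform(20.0, 800.0, 2000)                 # degrees C
    stress = 35e3 * strain * (1.0 - 0.5 * temp / 800.0) - 1.5e6 * strain ** 2

    X = np.column_stack([strain, temp])
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(X, stress)

    # Predicted stress at the same strain drops as temperature rises.
    print(model.predict([[0.002, 20.0], [0.002, 600.0]]))
    ```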

  5. A Probabilistic and Highly Efficient Topology Control Algorithm for Underwater Cooperating AUV Networks.

    Science.gov (United States)

    Li, Ning; Cürüklü, Baran; Bastos, Joaquim; Sucasas, Victor; Fernandez, Jose Antonio Sanchez; Rodriguez, Jonathan

    2017-05-04

    The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, then whether the transmission power needs to be adjusted or not will be determined based on the AUV's parameters. Each AUV determines their own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the transmission power

  6. A Probabilistic and Highly Efficient Topology Control Algorithm for Underwater Cooperating AUV Networks

    Directory of Open Access Journals (Sweden)

    Ning Li

    2017-05-01

    Full Text Available The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, then whether the transmission power needs to be adjusted or not will be determined based on the AUV's parameters. Each AUV determines their own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the
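
    The probabilistic adjustment rule can be sketched in a few lines: an AUV whose power deviates from its locally computed optimum adjusts it only with a probability that grows with the deviation. The function below is illustrative only; the actual probability law in PTC is derived from several AUV parameters, and all names and values here are assumptions.

    ```python
    import random

    def ptc_step(auvs, p_opt, alpha=1.0):
        """One round of probabilistic power adjustment: each AUV adjusts toward
        its optimal power only with a probability proportional to its relative
        deviation (capped at 1).  auvs and p_opt map AUV id -> power in watts."""
        adjusted = 0
        for auv, p in auvs.items():
            deviation = abs(p - p_opt[auv]) / max(p_opt[auv], 1e-9)
            prob = min(1.0, alpha * deviation)      # larger deviation -> higher prob
            if random.random() < prob:
                auvs[auv] = p_opt[auv]              # adjust toward the optimum
                adjusted += 1
        return adjusted

    random.seed(0)
    auvs = {i: random.uniform(5.0, 20.0) for i in range(10)}   # current powers
    p_opt = {i: 10.0 for i in auvs}                            # local optima
    print("adjusted this round:", ptc_step(auvs, p_opt))
    ```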

  7. MEG network differences between low- and high-grade glioma related to epilepsy and cognition.

    Directory of Open Access Journals (Sweden)

    Edwin van Dellen

    Full Text Available OBJECTIVE: To reveal possible differences in whole brain topology of epileptic glioma patients, namely low-grade glioma (LGG) and high-grade glioma (HGG) patients. We studied functional networks in these patients and compared them to those in epilepsy patients with non-glial lesions (NGL) and healthy controls. Finally, we related network characteristics to seizure frequency and cognitive performance within patient groups. METHODS: We constructed functional networks from pre-surgical resting-state magnetoencephalography (MEG) recordings of 13 LGG patients, 12 HGG patients, 10 NGL patients, and 36 healthy controls. Normalized clustering coefficient and average shortest path length as well as modular structure and network synchronizability were computed for each group. Cognitive performance was assessed in a subset of 11 LGG and 10 HGG patients. RESULTS: LGG patients showed decreased network synchronizability and decreased global integration compared to healthy controls in the theta frequency range (4-8 Hz), similar to NGL patients. HGG patients' networks did not significantly differ from those in controls. Network characteristics correlated with clinical presentation regarding seizure frequency in LGG patients, and with poorer cognitive performance in both LGG and HGG glioma patients. CONCLUSION: Lesion histology partly determines differences in functional networks in glioma patients suffering from epilepsy. We suggest that differences between LGG and HGG patients' networks are explained by differences in plasticity, guided by the particular lesional growth pattern. Interestingly, decreased synchronizability and decreased global integration in the theta band seem to make LGG and NGL patients more prone to the occurrence of seizures and cognitive decline.

  8. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Jiang

    2016-07-01

    Full Text Available Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, these works do not discuss full network connectivity, although optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on a two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizes the network coverage overlaps of the 2D plane, and then increases the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as a root node of all active nodes on the 2D convex hull and then gradually forms a small spanning tree. Finally, a depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with a high network coverage rate, as well as an improved network average node degree, thus increasing network reliability.
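    The geometric core of the first step, finding the outer boundary of the deployed nodes on the 2D plane, can be sketched with an off-the-shelf convex hull routine. The node coordinates below are hypothetical, and the split into boundary and interior nodes only hints at how NDACS roots its spanning tree and keeps candidates for depth adjustment; it is not the algorithm itself.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical 2D node positions (x, y) on the plane before depth adjustment
nodes = np.array([[0.0, 0.0], [4.0, 0.5], [2.0, 3.5],
                  [1.0, 1.0], [3.0, 2.0], [0.5, 3.0]])

# The 2D convex hull gives the outer boundary on which a spanning tree can be
# rooted; interior nodes remain candidates for later depth adjustment.
hull = ConvexHull(nodes)
boundary = set(hull.vertices.tolist())
interior = [i for i in range(len(nodes)) if i not in boundary]

print("hull (boundary) nodes:", sorted(boundary))
print("interior nodes:", interior)
```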

  9. Coverage Dependent Assembly of Anthraquinone on Au(111)

    Science.gov (United States)

    Conrad, Brad; Deloach, Andrew; Einstein, Theodore; Dougherty, Daniel

    A study of adsorbate-adsorbate and surface state mediated interactions of anthraquinone (AnQ) on Au(111) is presented. We utilize scanning tunneling microscopy (STM) to characterize the coverage dependence of AnQ structure formation. Ordered structures are observed up to a single monolayer (ML) and are found to be strongly dependent on molecular surface density. While the complete ML forms a well-ordered close-packed layer, for a narrow range of sub-ML coverages irregular close-packed islands are observed to coexist with a disordered pore network linking neighboring islands. This network displays a characteristic pore size and at lower coverages, the soliton walls of the herringbone reconstruction are shown to promote formation of distinct pore nanostructures. We will discuss these nanostructure formations in the context of surface mediated and more direct adsorbate interactions.

  10. Experimental demonstration of OpenFlow-enabled media ecosystem architecture for high-end applications over metro and core networks.

    Science.gov (United States)

    Ntofon, Okung-Dike; Channegowda, Mayur P; Efstathiou, Nikolaos; Rashidi Fard, Mehdi; Nejabati, Reza; Hunter, David K; Simeonidou, Dimitra

    2013-02-25

    In this paper, a novel Software-Defined Networking (SDN) architecture is proposed for high-end Ultra High Definition (UHD) media applications. UHD media applications require huge amounts of bandwidth that can only be met with high-capacity optical networks. In addition, there are requirements for control frameworks capable of delivering effective application performance with efficient network utilization. A novel SDN-based Controller that tightly integrates application-awareness with network control and management is proposed for such applications. An OpenFlow-enabled test-bed demonstrator is reported with performance evaluations of advanced online and offline media- and network-aware schedulers.

  11. CANDU safety analysis system establishment; development of trip coverage and multi-dimensional hydrogen analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jong Ho; Ohn, M. Y.; Cho, C. H. [KOPEC, Taejon (Korea)

    2002-03-01

    The trip coverage analysis model requires the geometry network for the primary and secondary circuits, as well as the plant control system, to simulate all possible plant operating conditions throughout the plant life. The model was validated against power maneuvering and the Wolsong 4 commissioning test. A trip coverage map was produced for the large break loss of coolant accident and the complete loss of class IV power event. Reliable multi-dimensional hydrogen analysis requires a high capability for thermal hydraulic modelling. To acquire such a basic capability and verify the applicability of the GOTHIC code, assessments of the heat transfer model and the hydrogen mixing and combustion models were performed. In addition, an assessment methodology for flame acceleration and deflagration-to-detonation transition was established. 22 refs., 120 figs., 31 tabs. (Author)

  12. Wireless mesh networks.

    Science.gov (United States)

    Wang, Xinheng

    2008-01-01

    Wireless telemedicine using GSM and GPRS technologies can only provide low bandwidth connections, which makes it difficult to transmit images and video. Satellite or 3G wireless transmission provides greater bandwidth, but the running costs are high. Wireless local area networks (WLANs) appear promising, since they can supply high bandwidth at low cost. However, the WLAN technology has limitations, such as coverage. A new wireless networking technology named the wireless mesh network (WMN) overcomes some of the limitations of the WLAN. A WMN combines the characteristics of both a WLAN and ad hoc networks, thus forming an intelligent, large scale and broadband wireless network. These features are attractive for telemedicine and telecare because of the ability to provide data, voice and video communications over a large area. One successful wireless telemedicine project which uses wireless mesh technology is the Emergency Room Link (ER-LINK) in Tucson, Arizona, USA. There are three key characteristics of a WMN: self-organization, including self-management and self-healing; dynamic changes in network topology; and scalability. What we may now see is a shift from mobile communication and satellite systems for wireless telemedicine to the use of wireless networks based on mesh technology, since the latter are very attractive in terms of cost, reliability and speed.

  13. Basics of Computer Networking

    CERN Document Server

    Robertazzi, Thomas

    2012-01-01

    Springer Brief Basics of Computer Networking provides a non-mathematical introduction to the world of networks. This book covers both technology for wired and wireless networks. Coverage includes transmission media, local area networks, wide area networks, and network security. Written in a very accessible style for the interested layman by the author of a widely used textbook with many years of experience explaining concepts to the beginner.

  14. Performance evaluation of a high-speed switched network for PACS

    Science.gov (United States)

    Zhang, Randy H.; Tao, Wenchao; Huang, Lu J.; Valentino, Daniel J.

    1998-07-01

    We have replaced our shared-media Ethernet and FDDI network with a multi-tiered, switched network using OC-12 (622 Mbps) ATM for the network backbone, OC-3 (155 Mbps) connections to high-end servers and display workstations, and switched 100/10 Mbps Ethernet for workstations and desktop computers. The purpose of this research was to help PACS designers and implementers understand key performance factors in a high-speed switched network by characterizing and evaluating its image delivery performance, specifically, the performance of socket-based TCP (Transmission Control Protocol) and DICOM 3.0 communications. A test network within the UCLA Clinical RIS/PACS was constructed using Sun UltraSPARC-II machines with ATM, Fast Ethernet, and Ethernet network interfaces. To identify performance bottlenecks, we evaluated network throughput for memory to memory, memory to disk, disk to memory, and disk to disk transfers. To evaluate the effect of file size, tests involving disks were further divided using sizes of small (514 KB), medium (8 MB), and large (16 MB) files. The observed maximum throughput for various network configurations using the TCP protocol was 117 Mbps for memory to memory and 88 Mbps for memory to disk. For disk to memory, the peak throughput was 98 Mbps using small files, 114 Mbps using medium files, and 116 Mbps using large files. The peak throughput for disk to disk became 64 Mbps using small files and 96 Mbps using medium and large files. The peak throughput using the DICOM 3.0 protocol was substantially lower in all categories. The measured throughput varied significantly among the tests when the TCP socket buffer was raised above the default value. The optimal buffer size was approximately 16 KB for the TCP protocol and around 256 KB for the DICOM protocol. The application message size also displayed distinctive effects on network throughput when the TCP socket buffer size was varied. The throughput results for Fast Ethernet and Ethernet were expectedly
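    The study's key tuning knob, the TCP socket buffer size, can be set per socket from application code. The sketch below is a minimal, generic illustration (the host, port and the 256 KB value are placeholders echoing the DICOM result above); it is not the test harness used in the paper.

```python
import socket

BUFFER_SIZE = 256 * 1024   # ~256 KB, the region reported as optimal for DICOM transfers

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request larger send/receive buffers before connecting; the OS may round
# or cap the value, so read it back to see what was actually granted.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUFFER_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUFFER_SIZE)

granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("requested", BUFFER_SIZE, "granted", granted)
# sock.connect(("pacs-archive.example.org", 104))   # placeholder DICOM host/port
```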

  15. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, J.; Gunter, D.; Tierney, B.; Allcock, B.; Bester, J.; Bresnahan, J.; Tuecke, S.

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. From their work developing a scalable distributed network cache, the authors have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). The authors discuss several hardware and software design techniques, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. The authors describe results from the Supercomputing 2000 conference

  16. CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes

    OpenAIRE

    Li, Yuhong; Zhang, Xiaofan; Chen, Deming

    2018-01-01

    We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven and deep learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger reception fields and to replace po...
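    The front-end/back-end split described above can be sketched in a few lines: an ordinary convolutional front-end followed by dilated convolutions that widen the receptive field without further downsampling. The layer counts and channel sizes below are illustrative and much smaller than the published CSRNet configuration.

```python
import torch
import torch.nn as nn

class TinyCongestionNet(nn.Module):
    """Front-end CNN for 2D features plus a dilated back-end, in the spirit of
    CSRNet; the sizes here are illustrative, not the published configuration."""
    def __init__(self):
        super().__init__()
        self.frontend = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Dilated convolutions enlarge the receptive field without more pooling,
        # so the density map keeps a useful spatial resolution.
        self.backend = nn.Sequential(
            nn.Conv2d(128, 128, kernel_size=3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, kernel_size=3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),   # single-channel density map
        )

    def forward(self, x):
        return self.backend(self.frontend(x))

model = TinyCongestionNet()
density = model(torch.randn(1, 3, 256, 256))
count_estimate = density.sum()          # crowd count = integral of the density map
print(density.shape, float(count_estimate))
```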

  17. Thermodynamically Controlled High-Pressure High-Temperature Synthesis of Crystalline Fluorinated sp3-Carbon Networks

    Energy Technology Data Exchange (ETDEWEB)

    Klier, Kamil; Landskron, Kai

    2015-11-19

    We report the feasibility of the thermodynamically controlled synthesis of crystalline sp3-carbon networks. We show that there is a critical pressure below which decomposition of the carbon network is favored and above which the carbon network is stable. Based on advanced, highly accurate quantum mechanical calculations using the all-electron full-potential linearized augmented plane-wave method (FP-LAPW) and the Birch–Murnaghan equation of state, this critical pressure is 26.5 GPa (viz. table of contents graphic). Such pressures are experimentally readily accessible and afford thermodynamic control for suppression of decomposition reactions. The present results further suggest that a general pattern of pressure-directed control exists for many isolobal conversions of sp2 to sp3 allotropes, relating not only to fluorocarbon chemistry but also extending to inorganic and solid-state materials science.

  18. Handover Incentives for Self-Interested WLANs with Overlapping Coverage

    DEFF Research Database (Denmark)

    Fafoutis, Xenofon; Siris, Vasilios A.

    2012-01-01

    We consider an environment where self-interested IEEE 802.11 Wireless Local Area Networks (WLANs) have overlapping coverage, and investigate the incentives that can trigger handovers between the WLANs. Our focus is on the incentives for supporting handovers due solely to the improved performance...

  19. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing sizes through the network shows no sign of abating. Hence, we need high-bandwidth, high-speed networks such as ESnet (Energy Sciences Network). Network reservation systems, i.e. ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System), establish guaranteed bandwidth of secure virtual circuits at a certain time, for a certain bandwidth and length of time. OSCARS checks network availability and capacity for the specified period of time, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no further suggestion is returned to the user. Further, there is no possibility from the user's viewpoint to make an optimal choice. We report a new algorithm, where the user specifies the total volume that needs to be transferred, a maximum bandwidth that he/she can use, and a desired time period within which the transfer should be done. The algorithm can find alternate allocation possibilities, including the earliest time for completion, or the shortest transfer duration - leaving the choice to the user. We present a novel approach for path finding in time-dependent networks, and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high performance data transfers over storage and network resources.

  20. Impact of promoting longer-lasting insecticide treatment of bed nets upon malaria transmission in a rural Tanzanian setting with pre-existing high coverage of untreated nets

    NARCIS (Netherlands)

    Russell, T.L.; Lwetoijera, D.W.; Maliti, D.; Chipwaza, B.; Kihonda, J.; Charlwood, J.D.; Smith, T.A.; Lengeler, C.; Mwanyangala, M.A.; Nathan, R.; Knols, B.G.J.; Takken, W.; Killeen, G.F.

    2010-01-01

    Background The communities of Namawala and Idete villages in southern Tanzania experienced extremely high malaria transmission in the 1990s. By 2001-03, following high usage rates (75% of all age groups) of untreated bed nets, a 4.2-fold reduction in malaria transmission intensity was achieved.

  1. An Intelligent Monitoring Network for Detection of Cracks in Anvils of High-Press Apparatus.

    Science.gov (United States)

    Tian, Hao; Yan, Zhaoli; Yang, Jun

    2018-04-09

    Due to the endurance of alternating high pressure and temperature, the carbide anvils of the high-pressure apparatus, which are widely used in the synthetic diamond industry, are prone to crack. In this paper, an acoustic method is used to monitor the crack events, and an intelligent monitoring network is proposed to classify the sound samples. The pulse sound signals produced by such cracking are first extracted based on a short-time energy threshold. Then, the signals are processed with the proposed intelligent monitoring network to identify the operating condition of the anvil of the high-pressure apparatus. The monitoring network is an improved convolutional neural network that addresses problems that may occur in practice. The length of a pulse sound excited by crack growth is variable, so a spatial pyramid pooling layer is adopted to solve the variable-length input problem. An adaptive weighted algorithm for the loss function is proposed in this method to handle the class imbalance problem. Finally, the good performance of the proposed intelligent monitoring network regarding accuracy and class balance is validated through experiments.
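    The first stage, extracting pulse segments with a short-time energy threshold, can be prototyped as below. The frame length, hop size and threshold are placeholder values, and the synthetic burst merely stands in for a recorded crack event.

```python
import numpy as np

def extract_pulses(signal, frame_len=256, hop=128, threshold=0.01):
    """Flag frames whose short-time energy exceeds a threshold; contiguous
    flagged frames approximate the variable-length pulses that the monitoring
    network later classifies. Parameter values are illustrative only."""
    energies = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energies.append(np.sum(frame.astype(float) ** 2) / frame_len)
    energies = np.array(energies)
    return np.where(energies > threshold)[0], energies

# Synthetic example: quiet noise with a short high-energy burst in the middle
rng = np.random.default_rng(0)
sig = 0.01 * rng.standard_normal(8000)
sig[3000:3300] += 0.5 * np.sin(2 * np.pi * 5e3 * np.arange(300) / 48e3)
active_frames, _ = extract_pulses(sig)
print("frames above the energy threshold:", active_frames)
```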

  2. Global stability of stochastic high-order neural networks with discrete and distributed delays

    International Nuclear Information System (INIS)

    Wang Zidong; Fang Jianan; Liu Xiaohui

    2008-01-01

    High-order neural networks can be considered as an expansion of Hopfield neural networks, and have stronger approximation properties, faster convergence rate, greater storage capacity, and higher fault tolerance than lower-order neural networks. In this paper, the global asymptotic stability analysis problem is considered for a class of stochastic high-order neural networks with discrete and distributed time-delays. Based on a Lyapunov-Krasovskii functional and the stochastic stability analysis theory, several sufficient conditions are derived, which guarantee the global asymptotic convergence of the equilibrium point in the mean square. It is shown that the stochastic high-order delayed neural networks under consideration are globally asymptotically stable in the mean square if two linear matrix inequalities (LMIs) are feasible, where the feasibility of the LMIs can be readily checked by the Matlab LMI toolbox. It is also shown that the main results in this paper cover some recently published works. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria
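    Checking the feasibility of LMIs of this general kind can also be done outside Matlab. The sketch below tests a plain Lyapunov-type LMI with cvxpy as a hint of the workflow; the matrix A and the inequality are generic stand-ins, not the delay-dependent LMIs derived in the paper.

```python
import numpy as np
import cvxpy as cp

# A stable test matrix standing in for a linearized network dynamics
# (purely illustrative; the paper's LMIs involve delay terms and more variables).
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility problem
prob.solve()

print("LMI feasible:", prob.status == cp.OPTIMAL)
```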

  3. High-throughput and low-latency network communication with NetIO

    CERN Document Server

    AUTHOR|(CDS)2088631; The ATLAS collaboration

    2017-01-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well specified number of systems. Unfortunately traditional network communication APIs for HPC clusters like MPI or PGAS target exclusively the HPC community and are not suited well for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but it requires a non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They allow building distributed applications with a high-level approach and provide good performance. Unfortunately their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath...
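    For contrast with the low-level Verbs approach, the high-level message-service style mentioned above looks roughly like the following pyzmq pair. The endpoint and payload are placeholders, and NetIO itself exposes its own API rather than ZeroMQ; this only illustrates the programming style being discussed.

```python
import time
import zmq

# Minimal producer/consumer pair in the message-service style discussed above.
ctx = zmq.Context()

producer = ctx.socket(zmq.PUSH)
producer.bind("tcp://127.0.0.1:5555")      # placeholder endpoint

consumer = ctx.socket(zmq.PULL)
consumer.connect("tcp://127.0.0.1:5555")
time.sleep(0.1)                            # give the connection a moment to establish

producer.send(b"event-fragment-0001")      # e.g. a serialized detector fragment
print(consumer.recv())                     # -> b'event-fragment-0001'
```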

  4. Assuring Access to Affordable Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Under the Affordable Care Act, millions of uninsured Americans will gain access to affordable coverage through Affordable Insurance Exchanges and improvements in...

  5. Robust Synchronization in an E/I Network with Medium Synaptic Delay and High Level of Heterogeneity

    International Nuclear Information System (INIS)

    Han Fang; Wang Zhi-Jie; Gong Tao; Fan Hong

    2015-01-01

    It is known that both excitatory and inhibitory neuronal networks can achieve robust synchronization only under certain conditions, such as long synaptic delay or a low level of heterogeneity. In this work, robust synchronization is found in an excitatory/inhibitory (E/I) neuronal network with medium synaptic delay and a high level of heterogeneity, which often occurs in real neuronal networks. Two effects of post-synaptic potentials (PSPs) on network synchronization are presented, and the synaptic contribution of excitatory and inhibitory neurons to robust synchronization in this E/I network is investigated. It is found that both excitatory and inhibitory neurons may contribute to robust synchronization in E/I networks; in particular, the excitatory PSP has a more positive effect on synchronization in E/I networks than in excitatory networks. This may explain the strong robustness of synchronization in E/I neuronal networks. (paper)

  6. Highly graphitized laterally interconnected SWCNT network synthesis via a sandwich-grown method

    International Nuclear Information System (INIS)

    Teng, I-Ju; Chen, Kai-Ling; Wang, Li-Chun; Kuo, Cheng-Tzu; Hsu, Hui-Lin; Jian, Sheng-Rui; Chen, Jung-Hsuan; Wang, Wei-Hsiang

    2011-01-01

    We present a sandwich-grown method for growing laterally interconnected single-walled carbon nanotube (SWCNT) networks with a high degree of graphitization by microwave plasma chemical vapour deposition (MPCVD). An Al2O3-supported Fe catalyst precursor layer deposited on an oxidized Si substrate with an upper Si cover is first pretreated in pure hydrogen, and then exposed to a gas mixture of methane/hydrogen for the growth process at a lower growth temperature and a faster rate. The effects of various parameters, such as catalyst film thickness, gas flow rate, working pressure, growth time and plasma power, on the morphologies and structural characteristics of the SWCNT networks are investigated, and therefore provide the essential conditions for direct growth of laterally interconnected SWCNT networks. Analytical results demonstrate that the SWCNT-based lateral architecture comprises a mixture of graphene-sheet-wrapped catalyst particles and laterally interconnected nanotubes, isolated or branched or assembled into bundles. The results also show that the formation of the laterally interconnected SWCNT networks is related to the sandwich-like stack approach and the addition of an Al2O3 layer in the MPCVD process. The successful growth of lateral SWCNT networks provides new experimental information for simply and efficiently preparing lateral SWCNTs on unpatterned substrates, and opens a pathway to create network-structured nanotube-based devices.

  7. Drug use Discrimination Predicts Formation of High-Risk Social Networks: Examining Social Pathways of Discrimination.

    Science.gov (United States)

    Crawford, Natalie D; Ford, Chandra; Rudolph, Abby; Kim, BoRin; Lewis, Crystal M

    2017-09-01

    Experiences of discrimination, or social marginalization and ostracism, may lead to the formation of social networks characterized by inequality. For example, those who experience discrimination may be more likely to develop drug use and sexual partnerships with others who are at increased risk for HIV compared to those without experiences of discrimination. This is critical as engaging in risk behaviors with others who are more likely to be HIV positive can increase one's risk of HIV. We used log-binomial regression models to examine the relationship between drug use, racial and incarceration discrimination with changes in the composition of one's risk network among 502 persons who use drugs. We examined both absolute and proportional changes with respect to sex partners, drug use partners, and injecting partners, after accounting for individual risk behaviors. At baseline, participants were predominately male (70%), black or Latino (91%), un-married (85%), and used crack (64%). Among those followed-up (67%), having experienced discrimination due to drug use was significantly related to increases in the absolute number of sex networks and drug networks over time. No types of discrimination were related to changes in the proportion of high-risk network members. Discrimination may increase one's risk of HIV acquisition by leading them to preferentially form risk relationships with higher-risk individuals, thereby perpetuating racial and ethnic inequities in HIV. Future social network studies and behavioral interventions should consider whether social discrimination plays a role in HIV transmission.

  8. Association between online social networking and depression in high school students: behavioral physiology viewpoint.

    Science.gov (United States)

    Pantic, Igor; Damjanovic, Aleksandar; Todorovic, Jovana; Topalovic, Dubravka; Bojovic-Jovic, Dragana; Ristic, Sinisa; Pantic, Senka

    2012-03-01

    Frequent use of Facebook and other social networks is thought to be associated with certain behavioral changes, and some authors have expressed concerns about its possible detrimental effect on mental health. In this work, we investigated the relationship between social networking and depression indicators in an adolescent population. A total of 160 high school students were interviewed using an anonymous, structured questionnaire and the Beck Depression Inventory - second edition (BDI-II). Apart from the BDI-II, students were asked to provide data on height and weight, gender, average daily time spent on social networking sites, average time spent watching TV, and sleep duration in a 24-hour period. The average BDI-II score was 8.19 (SD=5.86). Average daily time spent on social networking was 1.86 h (SD=2.08 h), and average time spent watching TV was 2.44 h (SD=1.74 h). The average body mass index of participants was 21.84 (SD=3.55) and average sleep duration was 7.37 h (SD=1.82 h). The BDI-II score indicated minimal depression in 104 students, mild depression in 46 students, and moderate depression in 10 students. A statistically significant positive correlation was found between BDI-II scores and time spent on social networking. Our results indicate that online social networking is related to depression. Additional research is required to determine the possible causal nature of this relationship.

  9. High-accuracy local positioning network for the alignment of the Mu2e experiment.

    Energy Technology Data Exchange (ETDEWEB)

    Hejdukova, Jana B. [Czech Technical Univ., Prague (Czech Republic)

    2017-06-01

    This Diploma thesis describes the establishment of a high-precision local positioning network and accelerator alignment for the Mu2e physics experiment. The process of establishing the new network consists of a few steps: design of the network, pre-analysis, installation works, measurements of the network and making adjustments. Adjustments were performed using two approaches: a geodetic approach that takes the Earth's curvature into account, and a metrological approach using a pure 3D Cartesian system. The two approaches are compared in the results and evaluated against the expected differences. The effect of the Earth's curvature was found to be significant for this kind of network and should not be neglected. The measurements were obtained with an Absolute Tracker AT401, a Leica DNA03 levelling instrument and a DMT Gyromat 2000 gyrotheodolite. The coordinates of the points of the reference network were determined by the Least Squares Method and the overall view is attached as Annexes.

  10. A meso-network of eddy covariance towers across the Northwest Territories to assess high-latitude carbon and water budgets under increasing pressure

    Science.gov (United States)

    Hurkuck, M.; Marsh, P.; Quinton, W. L.; Humphreys, E.; Lafleur, P.; Helbig, M.; Hould Gosselin, G.; Sonnentag, O.

    2017-12-01

    Given their large areal coverage, high carbon densities, unique land surface properties, and disturbance regimes, Canada's diverse high-latitude ecosystems across its multiple Arctic, subarctic and boreal ecozones are integral components of the global and regional climate systems. In northwestern Canada, large portions of these ecozones contain permafrost, i.e., perennially cryotic ground. Here, we describe efforts towards a meso-network of nine eddy covariance towers to measure carbon, water and energy fluxes across the Northwest Territories to shed light on high-latitude carbon and water budgets and their rapidly changing biotic and abiotic controls in response to increasing natural and anthropogenic pressures. Distributed across six research sites (Trail Valley Creek, 68.7°N, 133.3°W; Havikpak Creek, 68.3°N, 133.3°W; Daring Lake, 64.8°N, 111.5°W; Smith Creek, 63.1°N, 123.2°W; Scotty Creek, 63.1°N, 123.2°W; Yellowknife, 62.5°N, 114.4°W), the meso-network spans the central portion of the extended ABoVE Study Domain, covering two ecozones (Taiga Plains, Southern Arctic) with differing permafrost regimes (sporadic, discontinuous, continuous), climatic settings (coastal, interior), and seven high-latitude ecosystem types: forested permafrost peat plateau, permafrost-free collapse-scar bog, subarctic woodland, mixed and dwarf-shrub tundra, and sedge fen. With our contribution, we report on the current status of the meso-network development and present results from various synthesis activities examining the role of climatic setting and resulting tundra carbon and water budgets, quantifying the impact of permafrost thaw and associated wetland expansion on boreal forest carbon and water budgets, and determining the relative importance of treeline advance compared to shrub proliferation on tundra carbon and water budgets.

  11. High-Capacity Hybrid Optical Fiber-Wireless Communications Links in Access Networks

    DEFF Research Database (Denmark)

    Pang, Xiaodan

    of broadband services access. To realize the seamless convergence between the two network segments, the lower capacity of wireless systems needs to be increased to match the continuously increasing bandwidth of fiber-optic systems. The research works included in this thesis are devoted to experimental...... investigations of photonic-wireless links with record high capacities to fulfill the requirements of next generation hybrid optical fiber-wireless access networks. The main contributions of this thesis have expanded the state-of-the-art in two main areas: high speed millimeter-wave (mm-wave) communication links......Integration between fiber-optic and wireless communications systems in the "last mile" access networks is currently considered as a promising solution for both service providers and users, in terms of minimizing deployment cost, shortening upgrading period and increasing mobility and flexibility

  12. Pixel-Wise Classification Method for High Resolution Remote Sensing Imagery Using Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2018-03-01

    Full Text Available Considering the classification of high spatial resolution remote sensing imagery, this paper presents a novel classification method for such imagery using deep neural networks. Deep learning methods, such as a fully convolutional network (FCN) model, achieve state-of-the-art performance in natural image semantic segmentation when provided with large-scale datasets and respective labels. To use data efficiently in the training stage, we first pre-segment training images and their labels into small patches as supplements of training data using graph-based segmentation and the selective search method. Subsequently, FCN with atrous convolution is used to perform pixel-wise classification. In the testing stage, post-processing with fully connected conditional random fields (CRFs) is used to refine results. Extensive experiments based on the Vaihingen dataset demonstrate that our method performs better than the reference state-of-the-art networks when applied to high-resolution remote sensing imagery classification.

  13. OSCAR experiment high-density network data report: Event 3 - April 16-17, 1981

    Energy Technology Data Exchange (ETDEWEB)

    Dana, M.T.; Easter, R.C.; Thorp, J.M.

    1984-12-01

    The OSCAR (Oxidation and Scavenging Characteristics of April Rains) experiment, conducted during April 1981, was a cooperative field investigation of wet removal in cyclonic storm systems. The high-density component of OSCAR was located in northeast Indiana and included sequential precipitation chemistry measurements on a 100 by 100 km network, as well as airborne air chemistry and cloud chemistry measurements, surface air chemistry measurements, and supporting meteorological measurements. Four separate storm events were studied during the experiment. This report summarizes data taken by Pacific Northwest Laboratory (PNL) during the third storm event, April 16-17. The report contains the high-density network precipitation chemistry data, air chemistry and cloud chemistry data from the PNL aircraft, and meteorological data for the event, including standard National Weather Service products and radar and rawindsonde data from the network. 4 references, 76 figures, 6 tables.

  14. OSCAR experiment high-density network data report: Event 1 - April 8-9, 1981

    Energy Technology Data Exchange (ETDEWEB)

    Dana, M.T.; Easter, R.C.; Thorp, J.M.

    1984-12-01

    The OSCAR (Oxidation and Scavenging Characteristics of April Rains) experiment, conducted during April 1981, was a cooperative field investigation of wet removal in cyclonic storm systems. The high-density component of OSCAR was located in northeast Indiana and included sequential precipitation chemistry measurements on a 100 by 100 km network, as well as airborne air chemistry and cloud chemistry measurements, surface air chemistry measurements, and supporting meteorological measurements. Four separate storm events were studied during the experiment. This report summarizes data taken by Pacific Northwest Laboratory (PNL) during the first storm event, April 8-9. The report contains the high-density network precipitation chemistry data, air chemistry data from the PNL aircraft, and meteorological data for the event, including standard National Weather Service products and radar data from the network. 4 references, 72 figures, 5 tables.

  15. Creating Value for Customer in Business Networks of High-Tech Goods Manufacturers

    Directory of Open Access Journals (Sweden)

    Joanna Wiechoczek

    2016-01-01

    Full Text Available The main goal of the paper is to characterize the category of value for the customer with respect to high-tech products, and to propose a model of the creation of this value in business networks established by manufacturers. The research methods include critical analysis of the literature and the documentation method, as well as the case research method and the observation method. The results of the research show that the value offered to buyers is characterized by growing multidimensionality, which increases the complexity of the process by which manufacturers create this value. Because manufacturers do not possess all the skills and resources needed to create this value independently, they form business networks. These networks include an increasingly large group of entities, in which the importance of individual cooperating partners is highly diversified.

  16. Control of Grid Interactive PV Inverters for High Penetration in Low Voltage Distribution Networks

    DEFF Research Database (Denmark)

    Demirok, Erhan

    With the high-density deployment of PV installations in electricity grids, new technical challenges such as voltage rise, thermal loading of network components, voltage unbalance, harmonic interaction and fault current contributions are being added to the task list of distribution system operators...... of these inverters may depend on grid connection rules imposed by DSOs. The minimum requirement expected from PV inverters is to transfer maximum power by taking direct current (DC) from the PV modules and releasing it into the AC grid, while continuously keeping the inverters synchronized to the grid even under...... for this problem, but PV inverters connected to highly capacitive networks are able to employ extra current and voltage harmonics compensation to avoid triggering network resonances at low-order frequencies. Barriers such as harmonic interaction, flicker, fault current contribution and dc current injections

  17. High dimensional ICA analysis detects within-network functional connectivity damage of default mode and sensory motor networks in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Ottavia eDipasquale

    2015-02-01

    Full Text Available High dimensional independent component analysis (ICA), compared to low dimensional ICA, allows performing a detailed parcellation of the resting state networks. The purpose of this study was to give further insight into functional connectivity (FC) in Alzheimer's disease (AD) using high dimensional ICA. For this reason, we performed both low and high dimensional ICA analyses of resting state fMRI (rfMRI) data of 20 healthy controls and 21 AD patients, focusing on the primarily altered default mode network (DMN) and exploring the sensory motor network (SMN). As expected, results obtained at low dimensionality were in line with previous literature. Moreover, the high dimensional results allowed us to observe both within-network disconnections and FC damage confined to some of the resting state sub-networks. Due to the higher sensitivity of the high dimensional ICA analysis, our results suggest that high-dimensional decomposition into sub-networks is very promising to better localize FC alterations in AD and that FC damage is not confined to the default mode network.
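    The low- versus high-dimensional decompositions can be mimicked with a generic ICA routine by simply changing the model order. The sketch below uses scikit-learn's FastICA on random data of placeholder dimensions; it only illustrates the idea of finer parcellation at higher model order, not the group-ICA pipeline used in the study.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic "rfMRI-like" data: 200 time points x 5000 voxels (placeholder sizes).
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 5000))

# Low- versus high-dimensional decompositions: more components split the
# classical resting-state networks into finer sub-networks.
for n_comp in (20, 70):        # hypothetical "low" and "high" model orders
    ica = FastICA(n_components=n_comp, random_state=0, max_iter=500)
    sources = ica.fit_transform(data)        # time courses, shape (200, n_comp)
    spatial_maps = ica.components_           # shape (n_comp, 5000)
    print(n_comp, sources.shape, spatial_maps.shape)
```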

  18. Application of artificial neural networks in fault diagnosis for 10MW high-temperature gas-cooled reactor

    International Nuclear Information System (INIS)

    Li Hui; Wang Ruipian; Hu Shouyin

    2003-01-01

    This paper investigates a fault diagnosis system for the 10 MW High-Temperature Gas-Cooled Reactor based on artificial neural networks, using the trend values and the real values of the data under accident conditions to train and test two BP networks, respectively. The final diagnostic result is the combination of the results of the two networks. The compound system can enhance the accuracy and adaptability of the diagnosis compared to a single-network system
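    The two-network scheme, one network fed with trend values and one with real values and their outputs combined, can be sketched as follows. The data, feature definitions and the probability-averaging rule are hypothetical; the paper's combination logic is only loosely mirrored here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: rows are plant states, columns are sensor values.
# One network sees the raw ("real") values, the other the trend (first difference),
# loosely mirroring the two-network scheme described above.
rng = np.random.default_rng(1)
raw = rng.standard_normal((300, 8))
trend = np.diff(raw, axis=0, prepend=raw[:1])
labels = rng.integers(0, 3, size=300)        # 3 hypothetical fault classes

net_raw = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(raw, labels)
net_trend = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(trend, labels)

# Combine the two diagnoses by averaging class probabilities.
sample_raw, sample_trend = raw[:5], trend[:5]
combined = (net_raw.predict_proba(sample_raw) + net_trend.predict_proba(sample_trend)) / 2
print("combined diagnosis:", combined.argmax(axis=1))
```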

  19. Coverage: An Ethical Dilemma.

    Science.gov (United States)

    Feldman, Brenda J.

    1994-01-01

    Discusses how a high school journalism advisor dealt with ethical issues (including source attribution) surrounding the publication of a story about violation of school board policies by representatives of a nationally recognized ring company. (RS)

  20. Network Neurodegeneration in Alzheimer’s Disease via MRI based Shape Diffeomorphometry and High Field Atlasing

    Directory of Open Access Journals (Sweden)

    Michael I Miller

    2015-05-01

    Full Text Available This paper examines MRI analysis of neurodegeneration in Alzheimer's Disease (AD) in a network of structures within the medial temporal lobe using diffeomorphometry methods coupled with high-field atlasing, in which the entorhinal cortex is partitioned into nine subareas. The morphometry markers for three groups of subjects (controls, preclinical AD and symptomatic AD) are indexed to template coordinates measured with respect to these nine subareas. The location and timing of changes are examined within the subareas as they pertain to the classic Braak and Braak staging by comparing the three groups. We demonstrate that the earliest preclinical changes in the population occur in the lateral-most sulcal extent of the entorhinal cortex (alluded to as the trans-entorhinal cortex by Braak and Braak), and then proceed medially, which is consistent with the Braak and Braak staging. We use high-field 11T atlasing to demonstrate that the network changes are occurring at the junctures of the substructures in this medial temporal lobe network. Temporal progression of the disease through the network is also examined via changepoint analysis, demonstrating the earliest changes in the entorhinal cortex. The differential expression of the rate of atrophy with progression, signaling the changepoint time across the network, is demonstrated to occur in the intermediate caudal subarea of the entorhinal cortex, which has been noted to be proximal to the hippocampus. This, coupled with the findings of nearby basolateral involvement in the amygdala, demonstrates the selectivity of neurodegeneration in early AD.

  1. XMRF: an R package to fit Markov Networks to high-throughput genetics data.

    Science.gov (United States)

    Wan, Ying-Wooi; Allen, Genevera I; Baker, Yulia; Yang, Eunho; Ravikumar, Pradeep; Anderson, Matthew; Liu, Zhandong

    2016-08-26

    Technological advances in medicine have led to a rapid proliferation of high-throughput "omics" data. Tools to mine this data and discover disrupted disease networks are needed as they hold the key to understanding complicated interactions between genes, mutations and aberrations, and epi-genetic markers. We developed an R software package, XMRF, that can be used to fit Markov Networks to various types of high-throughput genomics data. Encoding the models and estimation techniques of the recently proposed exponential family Markov Random Fields (Yang et al., 2012), our software can be used to learn genetic networks from RNA-sequencing data (counts via Poisson graphical models), mutation and copy number variation data (categorical via Ising models), and methylation data (continuous via Gaussian graphical models). XMRF is the only tool that allows network structure learning using the native distribution of the data instead of the standard Gaussian. Moreover, the parallelization feature of the implemented algorithms computes the large-scale biological networks efficiently. XMRF is available from CRAN and Github ( https://github.com/zhandong/XMRF ).

  2. Receiver-Assisted Congestion Control to Achieve High Throughput in Lossy Wireless Networks

    Science.gov (United States)

    Shi, Kai; Shu, Yantai; Yang, Oliver; Luo, Jiarong

    2010-04-01

    Many applications require fast data transfer over today's high-speed wireless networks. However, due to its conservative congestion control algorithm, the Transmission Control Protocol (TCP) cannot effectively utilize the network capacity in lossy wireless networks. In this paper, we propose a receiver-assisted congestion control mechanism (RACC) in which the sender performs loss-based control, while the receiver performs delay-based control. The receiver measures the network bandwidth based on the packet interarrival interval and uses it to compute a congestion window size deemed appropriate for the sender. After receiving the advertised value feedback from the receiver, the sender then uses the additive increase and multiplicative decrease (AIMD) mechanism to compute the correct congestion window size to be used. By integrating the loss-based and the delay-based congestion controls, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of network bandwidth. Simulation and experiment results in various scenarios show that our mechanism can outperform conventional TCP in high-speed and lossy wireless environments.
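    The division of labour described above, a delay-based estimate at the receiver feeding a loss-based AIMD sender, can be outlined in a few lines. The bandwidth estimator, constants and window conversion below are simplifying assumptions, not the RACC equations themselves.

```python
def receiver_advertised_window(packet_bytes, interarrival_s, rtt_s):
    """Receiver side: estimate available bandwidth from packet inter-arrival
    times and convert it into a suggested congestion window (in packets).
    The estimator here is deliberately simple and only illustrative."""
    bandwidth_bps = 8 * packet_bytes / max(interarrival_s, 1e-6)
    return max(1, int(bandwidth_bps * rtt_s / (8 * packet_bytes)))

def sender_update(cwnd, advertised, loss_detected, beta=0.5):
    """Sender side: ordinary AIMD, but never grow beyond the receiver's
    advertisement, so random wireless losses do not collapse the window."""
    if loss_detected:
        return max(1, int(cwnd * beta))
    return min(cwnd + 1, advertised)

# Example round: 1500-byte packets arriving every 1 ms, RTT of 50 ms
adv = receiver_advertised_window(1500, 0.001, 0.05)
cwnd = 10
for loss in (False, False, True, False):
    cwnd = sender_update(cwnd, adv, loss)
    print("advertised", adv, "cwnd", cwnd)
```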

  3. High combined individual and neighborhood socioeconomic status correlated with better survival of patients with lymphoma in post-rituximab era despite universal health coverage

    Directory of Open Access Journals (Sweden)

    Chung-Lin Hung

    2016-12-01

    After adjusting for patient characteristics, treatment modalities, and hospital characteristics, HL patients with high individual SES in advantaged neighborhoods showed a decreased risk of mortality (HR 0.33, 95% CI 0.10–0.99). NHL patients with high individual SES in advantaged neighborhoods showed a moderately decreased risk of death (HR 0.62, 95% CI 0.51–0.75), compared to those with low SES in disadvantaged neighborhoods. In the future, public health strategies and welfare policies must continue to focus on this vulnerable group.

  4. Coverage Range and Cost Comparison of Remote Antenna Unit Designs for Inbuilding Radio over Fiber Technology

    Directory of Open Access Journals (Sweden)

    Razali Ngah

    2013-09-01

    Full Text Available Future communication needs to be ubiquitous, broadband, convergent, and seamless. Radio over fiber (RoF) technology is one of the most important enablers of these capabilities in the access network. Adoption of RoF faces a bottleneck in optoelectronics, which is still expensive, power hungry, and limited in bandwidth. To address this problem, transceivers for the remote antenna unit (RAU) have been developed, i.e. the electroabsorption transceiver (EAT) and the asymmetric Fabry-Perot modulator (AFPM). This paper compares their coverage range and cost in providing WCDMA and WLAN services. The RF amplifier gain needed to support a picocell is also discussed.

  5. EBG structures on high permittivity substrate to reduce noise in power distribution networks

    NARCIS (Netherlands)

    Tereshchenko, O.V.; Buesink, Frederik Johannes Karel; Leferink, Frank Bernardus Johannes

    2012-01-01

    The noise reduction effect in a Power Distribution Network (PDN) by implementing Electromagnetic Band Gap structures (EBG) on standard and high permittivity substrates has been investigated. Boards with different EBG structures have been modelled and designed. Using the EBG structures the Power

  6. Drainage network extraction from a high-resolution DEM using parallel programming in the .NET Framework

    Science.gov (United States)

    Du, Chao; Ye, Aizhong; Gan, Yanjun; You, Jinjun; Duan, Qinyun; Ma, Feng; Hou, Jingwen

    2017-12-01

    High-resolution Digital Elevation Models (DEMs) can be used to extract high-accuracy prerequisite drainage networks. A higher resolution represents a larger number of grids. With an increase in the number of grids, the flow direction determination will require substantial computer resources and computing time. Parallel computing is a feasible method with which to resolve this problem. In this paper, we proposed a parallel programming method within the .NET Framework with a C# Compiler in a Windows environment. The basin is divided into sub-basins, and subsequently the different sub-basins operate on multiple threads concurrently to calculate flow directions. The method was applied to calculate the flow direction of the Yellow River basin from 3 arc-second resolution SRTM DEM. Drainage networks were extracted and compared with HydroSHEDS river network to assess their accuracy. The results demonstrate that this method can calculate the flow direction from high-resolution DEMs efficiently and extract high-precision continuous drainage networks.
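    The parallelization idea, splitting the basin and computing flow directions for each piece concurrently, is sketched below in Python (the paper itself uses C# and the .NET Framework). The row-wise split is a crude stand-in for real sub-basins, and edge cells between blocks are ignored for brevity.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def d8_flow_direction(dem_block):
    """Simplified D8: for every interior cell, return the index (0-7) of the
    steepest downslope neighbour. Edge handling and flat resolution are omitted."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = dem_block.shape
    direction = np.full((rows, cols), -1, dtype=np.int8)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            drops = [dem_block[r, c] - dem_block[r + dr, c + dc] for dr, dc in offsets]
            if max(drops) > 0:
                direction[r, c] = int(np.argmax(drops))
    return direction

if __name__ == "__main__":
    dem = np.random.default_rng(0).random((400, 400)) * 100      # placeholder DEM
    blocks = np.array_split(dem, 4, axis=0)                      # crude "sub-basins"
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(d8_flow_direction, blocks))
    print([b.shape for b in results])
```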

  7. Efficacy and Social Validity of Peer Network Interventions for High School Students with Severe Disabilities

    Science.gov (United States)

    Asmus, Jennifer M.; Carter, Erik W.; Moss, Colleen K.; Biggs, Elizabeth E.; Bolt, Daniel M.; Born, Tiffany L.; Bottema-Beutel, Kristen; Brock, Matthew E.; Cattey, Gillian N.; Cooney, Molly; Fesperman, Ethan S.; Hochman, Julia M.; Huber, Heartley B.; Lequia, Jenna L.; Lyons, Gregory L.; Vincent, Lori B.; Weir, Katie

    2017-01-01

    This randomized controlled trial examined the efficacy of peer network interventions to improve the social connections of 47 high school students with severe disabilities. School staff invited, trained, and supported 192 peers without disabilities to participate in individualized social groups that met throughout one semester. Compared to…

  8. Dynamic analysis of high-order Cohen-Grossberg neural networks with time delay

    International Nuclear Information System (INIS)

    Chen Zhang; Zhao Donghua; Ruan Jiong

    2007-01-01

    In this paper, a class of high-order Cohen-Grossberg neural networks with time delay is studied. Several sufficient conditions are obtained for global asymptotic stability and global exponential stability using the Lyapunov and LMI methods. Finally, two examples are given to illustrate the effectiveness of our method

  9. High Bandwidth Communications Links Between Heterogeneous Autonomous Vehicles Using Sensor Network Modeling and Extremum Control Approaches

    Science.gov (United States)

    2008-12-01

    In future network-centric warfare environments, teams of autonomous vehicles will be deployed in a cooperative manner to conduct wide-area...of data back to the command station, autonomous vehicles configured with a high bandwidth communication system are positioned between the command

  10. On the use of neural networks in high-energy physics experiments

    International Nuclear Information System (INIS)

    Humpert, B.

    1990-01-01

    We investigate the possibilities for applying neural networks in high-energy physics experiments. After a survey of the main 'intelligent behavior paradigms', we discuss a number of possibilities where this new technology finds its application in particle detectors and particle accelerators. Finally, we comment on commercially available NN-tools, point to their limitations and extrapolate into the future. (orig.)

  11. Controlling coverage of solution cast materials with unfavourable surface interactions

    KAUST Repository

    Burlakov, V. M.; Eperon, G. E.; Snaith, H. J.; Chapman, S. J.; Goriely, A.

    2014-01-01

    Creating uniform coatings of a solution-cast material is of central importance to a broad range of applications. Here, a robust and generic theoretical framework for calculating surface coverage by a solid film of material de-wetting a substrate is presented. Using experimental data from semiconductor thin films as an example, we calculate surface coverage for a wide range of annealing temperatures and film thicknesses. The model generally predicts that for each value of the annealing temperature there is a range of film thicknesses leading to poor surface coverage. The model accurately reproduces solution-cast thin film coverage for organometal halide perovskites, key modern photovoltaic materials, and identifies processing windows for both high and low levels of surface coverage. © 2014 AIP Publishing LLC.

  13. Worker Sorting, Taxes and Health Insurance Coverage

    OpenAIRE

    Kevin Lang; Hong Kang

    2007-01-01

    We develop a model in which firms hire heterogeneous workers but must offer all workers insurance benefits under similar terms. In equilibrium, some firms offer free health insurance, some require an employee premium payment and some do not offer insurance. Making the employee contribution pre-tax lowers the cost to workers of a given employee premium and encourages more firms to charge. This increases the offer rate, lowers the take-up rate, increases (decreases) coverage among high (low) de...

  14. Social integration in friendship networks: The synergy of network structure and peer influence in relation to cigarette smoking among high risk adolescents

    Science.gov (United States)

    Lakon, Cynthia M.; Valente, Thomas W.

    2013-01-01

    Using data from a study of high risk adolescents in Southern California, U.S.A. (N = 851), this study examined synergy between social network measures of social integration and peer influence in relation to past month cigarette smoking. Using Hierarchical Linear Modeling, results indicated that being central in networks was significantly and positively related to past month cigarette smoking, across all study models. In addition, there is modest evidence that the number of reciprocated friendship ties was positively related to past month cigarette smoking. There is also some modest evidence that the relationship between having reciprocated friendships and past month cigarette smoking was moderated by a network peer influence process, smoking with those in youths’ best friend networks. Findings indicate that being integrated within a social network context of peer influences favoring drug use relates to more smoking among these high risk youth. PMID:22436575

  15. An Analysis of Television's Coverage of the "Iran Crisis": 5 November 1979 to 15 January 1980.

    Science.gov (United States)

    Miller, Christine

    The three television networks, acting under severe restrictions imposed by the Iranian government, all provided comprehensive coverage of the hostage crisis. A study was conducted to examine what, if any, salient differences arose or existed in this coverage from November 5, 1979, until January 15, 1980. A research procedure combining qualitative…

  16. Towards a Framework for Self-Adaptive Reliable Network Services in Highly-Uncertain Environments

    DEFF Research Database (Denmark)

    Grønbæk, Lars Jesper; Schwefel, Hans-Peter; Ceccarelli, Andrea

    2010-01-01

    In future inhomogeneous, pervasive and highly dynamic networks, end-nodes may often only rely on unreliable and uncertain observations to diagnose hidden network states and decide upon possible remediation actions. Inherent challenges exist in identifying good and timely decision strategies to impr...... execution (and monitoring) of remediation actions. We detail the motivations behind the ODDR design, then we present its architecture, and finally we describe our current activities towards the realization and assessment of the framework services and the main results achieved so far.

  17. Network Provisioning for High Speed Vehicles Moving along Predictable Routes - Part 1: Spiderman Handover

    OpenAIRE

    Maureira , Juan-Carlos; Dujovne , Diego; Dalle , Olivier

    2009-01-01

    This report presents our on-going work on a new system designed to provide continuous network connectivity to communicating devices located on board a vehicle moving at "high speed" along a predictable trajectory, such as trains, subways or buses. The devices on board the vehicle form a sub-network called the "in-motion network". The system we propose is composed of two parts: the mobile part, called the Spiderman Device (SD), is installed on the roof of the vehicle, and the fixed part is composed ...

  18. HIGH QUALITY FACADE SEGMENTATION BASED ON STRUCTURED RANDOM FOREST, REGION PROPOSAL NETWORK AND RECTANGULAR FITTING

    Directory of Open Access Journals (Sweden)

    K. Rahmani

    2018-05-01

    Full Text Available In this paper we present a pipeline for high quality semantic segmentation of building facades using a Structured Random Forest (SRF), a Region Proposal Network (RPN) based on a Convolutional Neural Network (CNN), as well as rectangular fitting optimization. Our main contribution is that we employ features created by the RPN as channels in the SRF. We empirically show that this is very effective, especially for doors and windows. Our pipeline is evaluated on two datasets where we outperform current state-of-the-art methods. Additionally, we quantify the contribution of the RPN and the rectangular fitting optimization to the accuracy of the result.

  19. Brain network characterization of high-risk preterm-born school-age children

    Directory of Open Access Journals (Sweden)

    Elda Fischi-Gomez

    2016-01-01

    Full Text Available A higher risk of long-term cognitive and behavioral impairments is one of the hallmarks of extreme prematurity (EP) and pregnancy-associated fetal adverse conditions such as intrauterine growth restriction (IUGR). While neurodevelopmental delay and abnormal brain function occur in the absence of overt brain lesions, these conditions have recently been associated with changes in microstructural brain development. Recent imaging studies indicate changes in brain connectivity, in particular involving the white matter fibers belonging to the cortico-basal ganglia-thalamic loop. Furthermore, EP and IUGR have been related to altered brain network architecture in childhood, with reduced network global capacity, global efficiency and average nodal strength. In this study, we used a connectome analysis to characterize the structural brain networks of these children, with a special focus on their topological organization. On the one hand, we confirm the reduced average network node degree and strength due to EP and IUGR. On the other, the decomposition of the brain networks into an optimal set of clusters remained substantially different among groups, pointing to a different network community structure. However, and despite the different community structure, the brain networks of these high-risk school-age children maintained the typical small-world, rich-club and modularity characteristics in all cases. Thus, our results suggest that the brain reorganizes after EP and IUGR, prioritizing a tight modular structure, to maintain the small-world, rich-club and modularity characteristics. By themselves, both extreme prematurity and IUGR bear a similar risk for neurocognitive and behavioral impairment, and the modular network alterations defined here confirm similar structural changes for both IUGR and EP at school age compared to controls. Interestingly, the combination of both conditions (IUGR + EP) does not result in a worse outcome. In such cases, the alteration

  20. Development and application of computer network for working out of researches on high energy physics

    International Nuclear Information System (INIS)

    Boos, Eh.G.; Tashimov, M.A.

    2001-01-01

    conducting of collaborative studies with other centers on high energy physics (such as CERN, JINR and others). Development of the computer network and the introduction of new information technologies allow the effectiveness of users' work to be improved and support investigations both in the physics of elementary particles and in applied fields at the FTI of MSE RK

  1. Control of Grid Interactive PV Inverters for High Penetration in Low Voltage Distribution Networks

    OpenAIRE

    Demirok, Erhan

    2012-01-01

    With the high-density deployment of PV installations in electricity grids, new technical challenges such as voltage rise, thermal loading of network components, voltage unbalance, harmonic interaction and fault current contributions are being added to the task list of distribution system operators (DSOs), who must maintain at least the same power quality as before PVs were introduced. Potential problems caused by a high number of PV installations can be avoided with technical study of both...

  2. Introduction and sustained high coverage of the HPV bivalent vaccine leads to a reduction in prevalence of HPV 16/18 and closely related HPV types.

    Science.gov (United States)

    Kavanagh, K; Pollock, K G J; Potts, A; Love, J; Cuschieri, K; Cubie, H; Robertson, C; Donaghy, M

    2014-05-27

    In 2008, a national human papillomavirus (HPV) immunisation programme began in Scotland for 12-13-year-old females with a three-year catch-up campaign for those under the age of 18. Since 2008, three-dose uptake of bivalent vaccine in the routine cohort aged 12-13 has exceeded 90% annually, while in the catch-up cohort overall uptake is 66%. To monitor the impact of HPV immunisation, a programme of national surveillance was established (pre and post introduction) which included yearly sampling and HPV genotyping of women attending for cervical screening at age 20. By linking individual vaccination, screening and HPV testing records, we aim to determine the impact of the immunisation programme on circulating type-specific HPV infection, particularly for four outcomes: (i) the vaccine types HPV 16 or 18; (ii) types considered to be associated with cross-protection: HPV 31, 33 or 45; (iii) all other high-risk types; and (iv) any HPV. From a total of 4679 samples tested, we demonstrate that three doses (n=1100) of bivalent vaccine are associated with a significant reduction in prevalence of HPV 16 and 18 from 29.8% (95% confidence interval 28.3, 31.3%) to 13.6% (95% confidence interval 11.7, 15.8%). The data also suggest cross-protection against HPV 31, 33 and 45. HPV 51 and 56 emerged as the most prevalent (10.5% and 9.6%, respectively) non-vaccine high-risk types in those vaccinated, but at lower rates than HPV 16 (25.9%) in those unvaccinated. These data demonstrate the positive impact of bivalent vaccination on the prevalence of HPV 16, 18, 31, 33 and 45 in the target population and are encouraging for countries which have achieved high vaccine uptake.

  3. Wireless Sensor Network for Radiometric Detection and Assessment of Partial Discharge in High-Voltage Equipment

    Science.gov (United States)

    Upton, D. W.; Saeed, B. I.; Mather, P. J.; Lazaridis, P. I.; Vieira, M. F. Q.; Atkinson, R. C.; Tachtatzis, C.; Garcia, M. S.; Judd, M. D.; Glover, I. A.

    2018-03-01

    Monitoring of partial discharge (PD) activity within high-voltage electrical environments is increasingly used for the assessment of insulation condition. Traditional measurement techniques employ technologies that either require off-line installation or have high power consumption and are hence costly. A wireless sensor network is proposed that utilizes only received signal strength to locate areas of PD activity within a high-voltage electricity substation. The network comprises low-power and low-cost radiometric sensor nodes which receive the radiation propagated from a source of PD. Results are reported from several empirical tests performed within a large indoor environment and a substation environment using a network of nine sensor nodes. A portable PD source emulator was placed at multiple locations within the network. Signal strength measured by the nodes is reported via WirelessHART to a data collection hub where it is processed using a location algorithm. The results obtained place the measured location within 2 m of the actual source location.
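
    The abstract describes locating a PD source from received signal strength alone. A minimal sketch of one common approach (not necessarily the authors' algorithm) is a grid search that, for each candidate source position, fits an unknown transmit power to a log-distance path-loss model and scores the residual; the node positions, path-loss exponent and RSS values below are invented for illustration.

```python
import numpy as np

# Hypothetical node layout (metres) and measured RSS (dBm) -- illustrative only.
nodes = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 0], [0, 5], [10, 5], [5, 10], [5, 5]])
rss = np.array([-62.0, -55.0, -60.0, -48.0, -57.0, -61.0, -50.0, -52.0, -49.0])
n_exp = 2.0  # assumed path-loss exponent

def locate(nodes, rss, n_exp, step=0.25):
    """Grid search: for each candidate position, the unknown source power at 1 m
    is fitted in closed form, then the candidate with the smallest residual wins."""
    xs = np.arange(nodes[:, 0].min(), nodes[:, 0].max() + step, step)
    ys = np.arange(nodes[:, 1].min(), nodes[:, 1].max() + step, step)
    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            d = np.maximum(np.hypot(nodes[:, 0] - x, nodes[:, 1] - y), 0.1)
            model = -10.0 * n_exp * np.log10(d)      # RSS = P_1m - 10 n log10(d)
            p1m = np.mean(rss - model)               # least-squares fit of P_1m
            err = np.sum((rss - (p1m + model)) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

print("estimated PD source position:", locate(nodes, rss, n_exp))
```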

  4. Whole-Pelvis Radiotherapy in Combination With Interstitial Brachytherapy: Does Coverage of the Pelvic Lymph Nodes Improve Treatment Outcome in High-Risk Prostate Cancer?

    International Nuclear Information System (INIS)

    Bittner, Nathan; Merrick, Gregory S.; Wallner, Kent E.; Butler, Wayne M.; Galbreath, Robert; Adamovich, Edward

    2010-01-01

    Purpose: To compare biochemical progression-free survival (bPFS), cause-specific survival (CSS), and overall survival (OS) rates among high-risk prostate cancer patients treated with brachytherapy and supplemental external beam radiation (EBRT) using either a mini-pelvis (MP) or a whole-pelvis (WP) field. Methods and Materials: From May 1995 to October 2005, 186 high-risk prostate cancer patients were treated with brachytherapy and EBRT with or without androgen-deprivation therapy (ADT). High-risk prostate cancer was defined as a Gleason score of ≥8 and/or a prostate-specific antigen (PSA) concentration of ≥20 ng/ml. Results: With a median follow-up of 6.7 years, the 10-year bPFS, CSS, and OS rates for the WP vs. the MP arms were 91.7% vs. 84.4% (p = 0.126), 95.5% vs. 92.6% (p = 0.515), and 79.5% vs. 67.1% (p = 0.721), respectively. Among those patients who received ADT, the 10-year bPFS, CSS, and OS rates for the WP vs. the MP arms were 93.6% vs. 90.1% (p = 0.413), 94.2% vs. 96.0% (p = 0.927), and 73.7% vs. 70.2% (p = 0.030), respectively. Among those patients who did not receive ADT, the 10-year bPFS, CSS, and OS rates for the WP vs. the MP arms were 82.4% vs. 75.0% (p = 0.639), 100% vs. 88% (p = 0.198), and 87.5% vs. 58.8% (p = 0.030), respectively. Based on multivariate analysis, none of the evaluated parameters predicted for CSS, while bPFS was best predicted by ADT and percent positive biopsy results. OS was best predicted by age and percent positive biopsy results. Conclusions: For high-risk prostate cancer patients receiving brachytherapy, there is a nonsignificant trend toward improved bPFS, CSS, and OS rates when brachytherapy is given with WPRT. This trend is most apparent among ADT-naive patients, for whom a significant improvement in OS was observed.

  5. Ground-fault protection of insulated high-voltage power networks in mines

    Energy Technology Data Exchange (ETDEWEB)

    Pudelko, H

    1976-09-01

    The safety of power networks in underground black coal mines in Poland is discussed. Safety in mines with a long service life was compared with safety in mines constructed since 1950. Power networks and systems protecting against electric ground-faults in the two mine groups are comparatively evaluated. Systems for protection against electric ground-faults in mine high-voltage networks with an insulated star point of the transformer are characterized. Fluctuations of the resistance of electrical insulation under conditions of changing load are analyzed. The results of the analyses are given in 14 diagrams. Recommendations for the design of systems protecting against electric ground-faults in 6 kV mine power systems are made. 7 references.

  6. Integration of electric drive vehicles in the Danish electricity network with high wind power penetration

    DEFF Research Database (Denmark)

    Chandrashekhara, Divya K; Østergaard, Jacob; Larsen, Esben

    2010-01-01

    This paper presents the results of a study carried out to examine the feasibility of integrating electric drive vehicles (EDV) in the Danish electricity network which is characterised by high wind power penetration. One of the main aims of this study was to examine the effect of electric drive vehicles on the Danish electricity network, wind power penetration and electricity market. In particular the study examined the effect of electric drive vehicles on the generation capacity constraints, load curve, cross border transmission capacity and the type of generating sources (renewable/conventional) which are likely to fuel these cars. The study was carried out considering the Danish electricity network state around 2025, when the EDV penetration levels would be significant enough to have an impact on the power system. Some of the interesting findings of this study are: EDV have the potential...

  7. A real-time hybrid neuron network for highly parallel cognitive systems.

    Science.gov (United States)

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under 'real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4x compared to the state of the art.

  8. Dynamic Hierarchical Sleep Scheduling for Wireless Ad-Hoc Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chih-Yu Wen

    2009-05-01

    Full Text Available This paper presents two scheduling management schemes for wireless sensor networks, which manage the sensors by utilizing the hierarchical network structure and allocate network resources efficiently. A local criterion is used to simultaneously establish the sensing coverage and connectivity such that dynamic cluster-based sleep scheduling can be achieved. The proposed schemes are simulated and analyzed to abstract the network behaviors in a number of settings. The experimental results show that the proposed algorithms provide efficient network power control and can achieve high scalability in wireless sensor networks.

  9. Dynamic hierarchical sleep scheduling for wireless ad-hoc sensor networks.

    Science.gov (United States)

    Wen, Chih-Yu; Chen, Ying-Chih

    2009-01-01

    This paper presents two scheduling management schemes for wireless sensor networks, which manage the sensors by utilizing the hierarchical network structure and allocate network resources efficiently. A local criterion is used to simultaneously establish the sensing coverage and connectivity such that dynamic cluster-based sleep scheduling can be achieved. The proposed schemes are simulated and analyzed to abstract the network behaviors in a number of settings. The experimental results show that the proposed algorithms provide efficient network power control and can achieve high scalability in wireless sensor networks.

  10. Graphene transfer process and optimization of graphene coverage

    OpenAIRE

    Sabki Syarifah Norfaezah; Shamsuri Shafiq Hafly; Fauzi Siti Fazlina; Chon-Ki Meghashama Lim; Othman Noraini

    2017-01-01

    Graphene grown on a transition metal is known to be high in quality due to its controlled amount of defects and is potentially useful for many electronic applications. The transfer process of graphene grown on a transition metal to a new substrate requires optimization in order to ensure that high graphene coverage can be obtained. In this work, an improvement in the transfer process is demonstrated for graphene grown on copper foil. It has been observed that the graphene coverage is affected b...

  11. High-Quality Ultra-Compact Grid Layout of Grouped Networks.

    Science.gov (United States)

    Yoghourdjian, Vahan; Dwyer, Tim; Gange, Graeme; Kieffer, Steve; Klein, Karsten; Marriott, Kim

    2016-01-01

    Prior research into network layout has focused on fast heuristic techniques for layout of large networks, or complex multi-stage pipelines for higher quality layout of small graphs. Improvements to these pipeline techniques, especially for orthogonal-style layout, are difficult and practical results have been slight in recent years. Yet, as discussed in this paper, there remain significant issues in the quality of the layouts produced by these techniques, even for quite small networks. This is especially true when layout with additional grouping constraints is required. The first contribution of this paper is to investigate an ultra-compact, grid-like network layout aesthetic that is motivated by the grid arrangements that are used almost universally by designers in typographical layout. Since the time when these heuristic and pipeline-based graph-layout methods were conceived, generic technologies (MIP, CP and SAT) for solving combinatorial and mixed-integer optimization problems have improved massively. The second contribution of this paper is to reassess whether these techniques can be used for high-quality layout of small graphs. While they are fast enough for graphs of up to 50 nodes, we found these methods do not scale up. Our third contribution is a large-neighborhood search meta-heuristic approach that is scalable to larger networks.

  12. [Social network analysis and high risk behavior characteristics of recreational drug users: a qualitative study].

    Science.gov (United States)

    Wu, Di; Wang, Zhenhong; Jiang, Zhenxia; Fu, Xiaojing; Li, Hui; Zhang, Dapeng; Liu, Hui; Hu, Yifei

    2014-11-01

    To understand the characteristics of recreational drug users' behaviors and social networks, as well as their potential impact on the transmission of sexually transmitted infections (STI). Qualitative interviews were used to collect information for a rough estimation of the population size and on behavior changes before and after recreational drug use. A total of 120 participants were recruited by convenience sampling from April to October, 2013 in a community of Qingdao city. Blood specimens were taken for HIV/syphilis serological testing and social network analysis was performed to understand the characteristics of their behavior and social networks. All participants used methamphetamine and 103 of them showed social connections. The prevalence of syphilis and HIV were 24.2% (29/120) and 2.5% (3/120), respectively. The estimated number of recreational drug users was large, with a wide diversity of occupations and ages, and males were more frequent than females. Drug use may affect condom use, and frequent drug users showed symptoms of psychosis and neurotoxicity. The size of the social network was 2.45 ± 1.63 in the past 6 months, indicating an increasing trend in the number of sexual partners and in risky behaviors. Recreational drug use could increase the size of the social network of sex partners, the frequency of risky sexual behaviors and syphilis prevalence, which indicates a high risk of HIV/STI in this population as well as a heavy future burden of disease prevention and control.

  13. Space Link Extension (SLE) Emulation for High-Throughput Network Communication

    Science.gov (United States)

    Murawski, Robert W.; Tchorowski, Nicole; Golden, Bert

    2014-01-01

    As the data rate requirements for space communications increase, significant stress is placed not only on the wireless satellite communication links, but also on the ground networks which forward data from end-users to remote ground stations. These wide area network (WAN) connections add delay and jitter to the end-to-end satellite communication link, effects which can have significant impacts on the wireless communication link. It is imperative that any ground communication protocol can react to these effects such that the ground network does not become a bottleneck in the communication path to the satellite. In this paper, we present our SCENIC Emulation Lab testbed which was developed to test the CCSDS SLE protocol implementations proposed for use on future NASA communication networks. Our results show that in the presence of realistic levels of network delay, high-throughput SLE communication links can experience significant data rate throttling. Based on our observations, we present some insight into why this data throttling happens, and trace the probable issue back to non-optimal blocking communication which is supported by the CCSDS SLE API recommended practices. These issues were also presented to the SLE implementation developers who, based on our reports, developed a new release of SLE which, as we show, fixes the SLE blocking issue and greatly improves the protocol throughput. In this paper, we also discuss future developments for our end-to-end emulation lab and how these improvements can be used to develop and test future space communication technologies.

  14. Development of high-reliable real-time communication network protocol for SMART

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ki Sang; Kim, Young Sik [Korea National University of Education, Chongwon (Korea); No, Hee Chon [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-04-01

    In this research, we first define protocol subsets for the SMART (System-integrated Modular Advanced Reactor) communication network based on the SMART MMIS transmission delay and traffic requirements and on the OSI (Open System Interconnection) seven-layer network protocol functions. Current industrial LAN protocols are also analyzed and the applicability of commercialized protocols is checked. For the suitability test, we applied the approximated SMART data traffic and the maximum allowable transmission delay requirement. From the simulation results, we conclude that IEEE 802.5 and FDDI, an ANSI standard, are the most suitable for SMART. We further analyzed the FDDI and token ring protocols for SMART and nuclear plant network environments, including IEEE 802.4, IEEE 802.5, and ARCnet. The most suitable protocol for SMART is FDDI, and the FDDI MAC and RMT protocol specifications have been verified with LOTOS; the verification results show that FDDI MAC and RMT satisfy reachability and liveness and do not exhibit deadlock or livelock. Therefore, we conclude that FDDI MAC and RMT are highly reliable protocols for the SMART MMIS network. After that, we consider the stacking fault of the IEEE 802.5 token ring protocol and propose a fault-tolerant MAM (Modified Active Monitor) protocol. The simulation results show that the MAM protocol improves the lower-priority traffic service rate when a stacking fault occurs. Therefore, the proposed MAM protocol can be applied to the SMART communication network for high-reliability and hard real-time communication purposes in the data acquisition and inter-channel networks. (author). 37 refs., 79 figs., 39 tabs.

  15. Monitoring of high voltage supply using the Controller Area Network protocol

    Energy Technology Data Exchange (ETDEWEB)

    Luz, Igo Amauri dos S.; Farias, Paulo Cesar M.A.; Guedes, Germano P. [Universidade Estadual de Feira de Santana (UEFS), BA (Brazil)

    2011-07-01

    Full text: In recent years, experimental physics has made great progress in the investigation of the phenomenology of neutrinos, with significant contributions from experiments using nuclear reactors as particle sources. In this context, the Neutrinos Angra Project proposes the use of an antineutrino detector able to monitor parameters related to the activity of nuclear reactors. One of the tasks defined in the project is the development of a system to control and monitor the high voltage supply units used by the photomultiplier tubes (PMTs) of the detector. The solution proposed in this work is based on the use of microcontrollers from the Microchip PIC family to adjust the operating point of the high voltage supply units and to acquire the output current and voltage data. Analysis of these data allows effective control of the PMT gains and the identification of anomalous operating conditions. This work proposes the study of the Controller Area Network (CAN) protocol and the implementation of a laboratory network to reproduce the typical operations of data acquisition and information transfer between the nodes. The development of this network is divided into two stages. The first part consisted of the setup of a CAN network using the PIC18F2680 microcontroller, which implements the CAN protocol internally. This network serves as a reduced model of the final system, allowing simulation of typical situations of data acquisition and transmission between the nodes and a computer. In the second part of the work, the PIC18F4550 microcontroller was combined with the external CAN controller MCP2515 to develop a CAN/USB converter. This converter provides a new communication channel between network nodes and the computer, in addition to the RS232 interface. (author)
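
    On the PC side, reading such monitoring frames off a CAN bus can be prototyped with the python-can package. The sketch below only illustrates that host-side step; the channel name, arbitration IDs and the scaling of the voltage/current fields are invented for the example and are not taken from the project.

```python
import can  # pip install python-can

# Hypothetical frame layout: ID 0x100 carries (channel, raw voltage), ID 0x101 (channel, raw current).
VOLTAGE_ID, CURRENT_ID = 0x100, 0x101
VOLTS_PER_COUNT, AMPS_PER_COUNT = 0.5, 1e-6   # assumed calibration constants

def decode(msg):
    """Decode a 3-byte payload: channel number followed by a 16-bit big-endian raw value."""
    channel = msg.data[0]
    raw = int.from_bytes(msg.data[1:3], "big")
    if msg.arbitration_id == VOLTAGE_ID:
        return channel, "voltage", raw * VOLTS_PER_COUNT
    if msg.arbitration_id == CURRENT_ID:
        return channel, "current", raw * AMPS_PER_COUNT
    return None

with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
    while True:
        msg = bus.recv(timeout=1.0)   # returns None on timeout
        if msg is None:
            continue
        decoded = decode(msg)
        if decoded:
            ch, kind, value = decoded
            print(f"PMT HV channel {ch}: {kind} = {value:.3f}")
```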

  16. Enhancing Cellular Coverage Quality by Virtual Access Point and Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    Jinsong Gui

    2018-01-01

    Full Text Available Ultradense deployment of cellular networks is a direct and effective method for improving network capacity. However, the benefit is achieved at the cost of network infrastructure investment and operating overheads, especially when there is a big gap between peak-hour and average Internet traffic. Therefore, we put forward the concept of a virtual cellular coverage area, where wireless terminals with high-end configurations are motivated to enhance cellular coverage quality both by RF energy compensation and by rewards of free traffic access to the Internet. This problem is formulated as a Stackelberg game based on a three-party circular decision, where a Macro BS (MBS) acts as the leader offering a charging power to Energy Transferring Relays (ETRs), and the ETRs and their associated Virtual Access Points (VAPs) act as the followers, each making its own decision. According to the feedback from the followers, the leader may readjust its strategy. The circular decision is repeated until the powers converge. A better-response algorithm for each player is also proposed to iteratively reach the Stackelberg-Nash Equilibrium (SNE). Theoretical analysis proves the convergence of the proposed game scheme, and simulation results demonstrate its effectiveness.
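
    A toy sketch of the leader/follower iteration described above: the leader searches over a grid of charging powers, each follower replies with a closed-form best response, and the loop repeats until the offered power stops changing. The quadratic utilities and all constants are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 1.5, size=5)   # followers' sensitivity to charging power (toy)
c = rng.uniform(1.0, 2.0, size=5)   # followers' effort costs (toy)
B, k = 10.0, 1.0                    # leader's benefit per unit effort and power cost (toy)
powers = np.linspace(0.0, 5.0, 501) # leader's strategy grid

def follower_best_response(p):
    # Each follower maximises  a_i * p * x_i - c_i * x_i**2  over x_i in [0, 1].
    return np.clip(a * p / (2.0 * c), 0.0, 1.0)

def leader_utility(p, x):
    # Leader gains from total relaying effort and pays for the charging power it radiates.
    return B * x.sum() - k * p * len(x)

p = 0.0
for _ in range(20):                       # circular decision loop
    utilities = [leader_utility(q, follower_best_response(q)) for q in powers]
    p_new = powers[int(np.argmax(utilities))]
    if abs(p_new - p) < 1e-9:             # offered power converged -> (toy) equilibrium
        break
    p = p_new

x = follower_best_response(p)
print(f"equilibrium charging power {p:.2f}, follower efforts {np.round(x, 2)}")
```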

  17. Neural Networks Mediating High-Level Mentalizing in Patients With Right Cerebral Hemispheric Gliomas

    Directory of Open Access Journals (Sweden)

    Riho Nakajima

    2018-03-01

    Full Text Available Mentalizing is the ability to understand others' mental states through external cues. It consists of two networks, namely low-level and high-level mentalizing. Although it is an essential function in our daily social life, surgical resection in the right cerebral hemisphere is highly likely to disturb mentalizing processing. In the past, little was known about the white matter related to high-level mentalizing, and its preservation during surgery has not been a focus of attention. Therefore, the main purpose of this study was to examine the neural networks underlying high-level mentalizing and then, secondarily, to investigate the usefulness of awake surgery in preserving the mentalizing network. A total of 20 patients with glioma localized in the right hemisphere who underwent awake surgery participated in this study. All patients were assigned to two groups: with or without intraoperative assessment of high-level mentalizing. Their high-level mentalizing abilities were assessed before surgery and 1 week and 3 months after surgery. At 3 months after surgery, only patients who received the intraoperative high-level mentalizing test showed the same scores as normal healthy volunteers. A tract-based lesion-symptom analysis was performed to relate the severity of damage to associated fibers with high-level mentalizing accuracy. This analysis revealed the superior longitudinal fascicle (SLF III) and the fronto-striatal tract (FST) to be associated with high-level mentalizing processing. Moreover, a voxel-based lesion-symptom analysis demonstrated that resection of the orbito-frontal cortex (OFC) causes persistent mentalizing dysfunction. Our study indicates that damage to the OFC and to the structural connectivity of the SLF and FST causes mentalizing disorders after surgery, and that assessing high-level mentalizing during surgery may be useful for preserving these pathways.

  18. Fault-tolerance techniques for high-speed fiber-optic networks

    Science.gov (United States)

    Deruiter, John

    1991-01-01

    Four fiber optic network topologies (linear bus, ring, central star, and distributed star) are discussed relative to their application to high data throughput, fault tolerant networks. The topologies are also examined in terms of redundancy and the need to provide for single point, failure free (or better) system operation. Linear bus topology, although traditionally the method of choice for wire systems, presents implementation problems when larger fiber optic systems are considered. Ring topology works well for high speed systems when coupled with a token passing protocol, but it requires a significant increase in protocol complexity to manage system reconfiguration due to ring and node failures. Star topologies offer a natural fault tolerance, without added protocol complexity, while still providing high data throughput capability.

  19. Fault diagnosis and performance evaluation for high current LIA based on radial basis function neural network

    International Nuclear Information System (INIS)

    Yang Xinglin; Wang Huacen; Chen Nan; Dai Wenhua; Li Jin

    2006-01-01

    A high current linear induction accelerator (LIA) is a complicated experimental physics device, and it is difficult to evaluate and predict its performance. This paper presents a method which combines the wavelet packet transform and a radial basis function (RBF) neural network to build fault diagnosis and performance evaluation systems in order to improve the reliability of high current LIAs. The signal characteristic vectors, which are extracted from the energy parameters of the wavelet packet transform, represent the temporal and steady-state features of the pulsed power signal well and reduce the data dimensionality effectively. The fault diagnosis system for the accelerating cell and the trend classification system for the beam current, both based on RBF networks, can perform fault diagnosis and evaluation and provide predictive information for the precise maintenance of a high current LIA. (authors)
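
    A minimal sketch of the feature-extraction-plus-classifier idea: wavelet packet energies of a pulsed signal feed a small RBF network (k-means centres plus a linear read-out). It assumes the PyWavelets and scikit-learn packages and uses synthetic signals in place of real LIA waveforms; it illustrates the general technique, not the authors' implementation.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wp_energy_features(signal, wavelet="db4", level=3):
    """Energy of each terminal node of the wavelet packet tree (the feature vector)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="natural")])

class RBFNet:
    """Tiny RBF network: k-means centres, Gaussian activations, least-squares output weights."""
    def __init__(self, n_centers=6, gamma=0.5):
        self.n_centers, self.gamma = n_centers, gamma
    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)
    def fit(self, X, y):
        self.centers = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self
    def predict(self, X):
        return (self._phi(X) @ self.w > 0.5).astype(int)

# Synthetic "normal" vs "faulty" pulses (illustrative stand-ins for LIA signals).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
normal = [np.exp(-((t - 0.5) / 0.05) ** 2) + 0.02 * rng.standard_normal(t.size) for _ in range(40)]
faulty = [np.exp(-((t - 0.5) / 0.05) ** 2) * (1 + 0.3 * np.sin(80 * t)) + 0.02 * rng.standard_normal(t.size)
          for _ in range(40)]
X = np.array([wp_energy_features(s) for s in normal + faulty])
X = X / X.sum(axis=1, keepdims=True)   # normalise to an energy distribution
y = np.array([0] * 40 + [1] * 40)

model = RBFNet().fit(X, y)
print("training accuracy:", (model.predict(X) == y).mean())
```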

  20. A Highly Infectious Disease Care Network in the US Healthcare System.

    Science.gov (United States)

    Le, Aurora B; Biddinger, Paul D; Smith, Philip W; Herstein, Jocelyn J; Levy, Deborah A; Gibbs, Shawn G; Lowe, John J

    During the 2014-15 Ebola outbreak in West Africa, the United States responded by stratifying hospitals into one of three Centers for Disease Control and Prevention (CDC)-designated categories, based on the hospital's ability to identify, isolate, assess, and provide care to patients with suspected or confirmed Ebola virus disease (EVD), in an attempt to position the US healthcare system to safely isolate and care for potential patients. Now, with the Ebola epidemic quelled, it is crucial that we act on the lessons learned from the EVD response to broaden our national perspective on infectious disease mitigation and management, build on our newly enhanced healthcare capabilities to respond to infectious disease threats, develop a more cost-effective and sustainable model of infectious disease prevention, and continue to foster training so that the nation is not in a vulnerable position once more. We propose the formal creation of a US Highly Infectious Disease Care Network (HIDCN) modeled after two previous highly infectious disease consensus efforts in the United States and the European Union. A US Highly Infectious Disease Care Network can provide a common platform for the exchange of training, protocols, research, knowledge, and capability sharing among high-level isolation units. Furthermore, we envision the network will cultivate relationships among facilities and serve as a means of establishing national standards for infectious disease response, which will strengthen domestic preparedness and the nation's ability to respond to the next highly infectious disease threat.

  1. High-Throughput and Low-Latency Network Communication with NetIO

    Science.gov (United States)

    Schumacher, Jörn; Plessl, Christian; Vandelli, Wainer

    2017-10-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well specified number of systems. Unfortunately traditional network communication APIs for HPC clusters like MPI or PGAS exclusively target the HPC community and are not suited well for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but it requires a non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath, this approach may not be very efficient compared to a direct use of native APIs. NetIO is a simple, novel asynchronous message service that can operate on Ethernet, Infiniband and similar network fabrics. In this paper the design and implementation of NetIO is presented and described, and its use is evaluated in comparison to other approaches. NetIO supports different high-level programming models and typical workloads of HEP applications. The ATLAS FELIX project [1] successfully uses NetIO as its central communication platform. The architecture of NetIO is described in this paper, including the user-level API and the internal data-flow design. The paper includes a performance evaluation of NetIO including throughput and latency measurements. The performance is compared against the state-of-the-art ZeroMQ message service. Performance measurements are performed in a lab environment with Ethernet and FDR Infiniband networks.
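
    For context, a minimal ZeroMQ PUSH/PULL throughput probe of the kind such comparisons rely on is sketched below using pyzmq; the message size, message count and endpoint are arbitrary choices, and this is not the benchmark harness used in the paper.

```python
import threading
import time
import zmq

N_MSG, MSG_SIZE = 100_000, 1024          # arbitrary benchmark parameters
ENDPOINT = "tcp://127.0.0.1:5555"

def sink(result):
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.bind(ENDPOINT)
    pull.recv()                            # first message marks the start of timing
    start = time.perf_counter()
    for _ in range(N_MSG - 1):
        pull.recv()
    elapsed = time.perf_counter() - start
    result["rate"] = (N_MSG - 1) / elapsed # messages per second on the receive side
    pull.close()

result = {}
t = threading.Thread(target=sink, args=(result,))
t.start()
time.sleep(0.2)                            # crude wait for the PULL socket to bind

ctx = zmq.Context.instance()
push = ctx.socket(zmq.PUSH)
push.connect(ENDPOINT)
payload = b"x" * MSG_SIZE
for _ in range(N_MSG):
    push.send(payload)

t.join()
print(f"~{result['rate']:.0f} msg/s, ~{result['rate'] * MSG_SIZE / 1e6:.1f} MB/s")
push.close()
```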

  2. Realizations of highly heterogeneous collagen networks via stochastic reconstruction for micromechanical analysis of tumor cell invasion

    Science.gov (United States)

    Nan, Hanqing; Liang, Long; Chen, Guo; Liu, Liyu; Liu, Ruchuan; Jiao, Yang

    2018-03-01

    Three-dimensional (3D) collective cell migration in a collagen-based extracellular matrix (ECM) is among the most significant topics in developmental biology, cancer progression, tissue regeneration, and immune response. Recent studies have suggested that collagen-fiber mediated force transmission in cellularized ECM plays an important role in stress homeostasis and regulation of collective cellular behaviors. Motivated by the recent in vitro observation that oriented collagen can significantly enhance the penetration of migrating breast cancer cells into dense Matrigel which mimics the intravasation process in vivo [Han et al. Proc. Natl. Acad. Sci. USA 113, 11208 (2016), 10.1073/pnas.1610347113], we devise a procedure for generating realizations of highly heterogeneous 3D collagen networks with prescribed microstructural statistics via stochastic optimization. Specifically, a collagen network is represented via the graph (node-bond) model and the microstructural statistics considered include the cross-link (node) density, valence distribution, fiber (bond) length distribution, as well as fiber orientation distribution. An optimization problem is formulated in which the objective function is defined as the squared difference between a set of target microstructural statistics and the corresponding statistics for the simulated network. Simulated annealing is employed to solve the optimization problem by evolving an initial network via random perturbations to generate realizations of homogeneous networks with randomly oriented fibers, homogeneous networks with aligned fibers, heterogeneous networks with a continuous variation of fiber orientation along a prescribed direction, as well as a binary system containing a collagen region with aligned fibers and a dense Matrigel region with randomly oriented fibers. The generation and propagation of active forces in the simulated networks due to polarized contraction of an embedded ellipsoidal cell and a small group
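
    The reconstruction step lends itself to a compact sketch: a node-bond network is perturbed at random and a Metropolis/simulated-annealing rule accepts moves that drive a chosen fiber statistic toward a target. The sketch below matches only a fiber-orientation histogram in 2D with invented parameters, so it is a much-simplified stand-in for the 3D, multi-statistic procedure in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_nodes, n_bonds = 200, 400
nodes = rng.random((n_nodes, 2))                       # cross-link positions in a unit square
bonds = rng.integers(0, n_nodes, size=(n_bonds, 2))    # random node pairs as fibers

bin_edges = np.linspace(0.0, np.pi, 13)                # orientation histogram bins
target = np.zeros(12)
target[:3] = 1.0 / 3.0                                 # target: fibers aligned near 0-45 degrees

def orientation_hist(nodes, bonds):
    d = nodes[bonds[:, 1]] - nodes[bonds[:, 0]]
    theta = np.arctan2(d[:, 1], d[:, 0]) % np.pi
    hist, _ = np.histogram(theta, bins=bin_edges)
    return hist / len(bonds)

def energy(nodes, bonds):
    """Squared difference between target and current orientation statistics."""
    return np.sum((orientation_hist(nodes, bonds) - target) ** 2)

T, cooling = 0.05, 0.999
E = energy(nodes, bonds)
for step in range(20000):
    i = rng.integers(n_nodes)
    trial = nodes.copy()
    trial[i] += 0.05 * rng.standard_normal(2)          # random perturbation of one cross-link
    trial[i] = np.clip(trial[i], 0.0, 1.0)
    E_new = energy(trial, bonds)
    if E_new < E or rng.random() < np.exp((E - E_new) / T):   # Metropolis acceptance
        nodes, E = trial, E_new
    T *= cooling                                        # annealing schedule

print(f"final objective: {E:.4f}")
```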

  3. Mediating Trust in Terrorism Coverage

    DEFF Research Database (Denmark)

    Mogensen, Kirsten

    Mass mediated risk communication can contribute to perceptions of threats and fear of “others” and/or to perceptions of trust in fellow citizens and society to overcome problems. This paper outlines a cross-disciplinary holistic framework for research in mediated trust building during an acute crisis. While the framework is presented in the context of television coverage of a terror-related crisis situation, it can equally be used in connection with all other forms of mediated trust. Key words: National crisis, risk communication, crisis management, television coverage, mediated trust.

  4. Synchronized High-Speed Vision Sensor Network for Expansion of Field of View

    Directory of Open Access Journals (Sweden)

    Akihito Noda

    2018-04-01

    Full Text Available We propose a 500-frames-per-second high-speed vision (HSV) sensor network that acquires frames at a timing that is precisely synchronized across the network. Multiple vision sensor nodes, individually comprising a camera and a PC, are connected via Ethernet for data transmission and for clock synchronization. A network of synchronized HSV sensors provides a significantly expanded field-of-view compared with that of each individual HSV sensor. In the proposed system, the shutter of each camera is controlled based on the clock of the PC locally provided inside the node, and the shutters are globally synchronized using the Precision Time Protocol (PTP) over the network. A theoretical analysis and experiment results indicate that the shutter trigger skew among the nodes is a few tens of microseconds at most, which is significantly smaller than the frame interval of 1000-fps-class high-speed cameras. Experimental results obtained with the proposed system comprising four nodes demonstrated the ability to capture the propagation of a small displacement along a large-scale structure.
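
    As a reminder of what PTP synchronisation computes, the sketch below applies the standard IEEE 1588 two-way exchange formulas to four example timestamps; the nanosecond values are made up, and real deployments rely on a PTP daemon or hardware timestamping rather than code like this.

```python
# Standard PTP delay-request/response exchange:
#   t1: master sends Sync          t2: slave receives Sync
#   t3: slave sends Delay_Req      t4: master receives Delay_Req
# Assuming a symmetric path, offset and one-way delay follow from the four timestamps.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # mean one-way path delay
    return offset, delay

# Example timestamps in nanoseconds (invented for illustration).
t1, t2, t3, t4 = 1_000_000, 1_012_500, 1_050_000, 1_057_500
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(f"offset = {offset:.0f} ns, path delay = {delay:.0f} ns")
# The slave then steps/slews its clock by -offset; with residual offsets of a few
# microseconds or less, shutter-trigger skew stays well below a 1000-fps frame interval.
```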

  5. Hourglass-ShapeNetwork Based Semantic Segmentation for High Resolution Aerial Imagery

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2017-05-01

    Full Text Available A new convolutional neural network (CNN) architecture for semantic segmentation of high resolution aerial imagery is proposed in this paper. The proposed architecture follows an hourglass-shaped network (HSN) design, being structured into encoding and decoding stages. By taking advantage of recent advances in CNN designs, we use the composed inception module to replace common convolutional layers, providing the network with multi-scale receptive areas with rich context. Additionally, in order to reduce spatial ambiguities in the up-sampling stage, skip connections with residual units are also employed to feed forward encoding-stage information directly to the decoder. Moreover, overlap inference is employed to alleviate boundary effects occurring when high resolution images are inferred from small-sized patches. Finally, we also propose a post-processing method based on weighted belief propagation to visually enhance the classification results. Extensive experiments based on the Vaihingen and Potsdam datasets demonstrate that the proposed architectures outperform three reference state-of-the-art network designs both numerically and visually.
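
    To make the hourglass (encoder-decoder with skip connections) idea concrete, here is a deliberately small PyTorch sketch; the channel widths, the depth, the use of plain convolutions instead of inception modules, and the absence of residual units and overlap inference are all simplifications, so this is not the paper's HSN.

```python
import torch
import torch.nn as nn

class MiniHourglass(nn.Module):
    """Two-level encoder-decoder with skip connections; a toy stand-in for the HSN."""
    def __init__(self, in_ch=3, n_classes=6):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.bottleneck = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.Conv2d(128 + 64, 64, 3, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, n_classes, 1)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))   # skip connection from e2
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))  # skip connection from e1
        return self.head(d1)                 # per-pixel class logits

logits = MiniHourglass()(torch.randn(1, 3, 256, 256))
print(logits.shape)   # torch.Size([1, 6, 256, 256])
```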

  6. Visualization of the microcirculatory network in skin by high frequency optoacoustic mesoscopy

    Science.gov (United States)

    Schwarz, Mathias; Aguirre, Juan; Buehler, Andreas; Omar, Murad; Ntziachristos, Vasilis

    2015-07-01

    Optoacoustic (photoacoustic) imaging has high potential for imaging melanin-rich structures in skin and the microvasculature of the dermis due to the natural chromophores (de)oxyhemoglobin and melanin. The vascular network in human dermis comprises a large network of arterioles, capillaries, and venules, ranging from 5 μm to more than 100 μm in diameter. The frequency spectrum of the microcirculatory network in human skin is intrinsically broadband, due to the large variety in size of the absorbers. In our group we have developed raster-scan optoacoustic mesoscopy (RSOM), which applies a 100 MHz transducer with ultra-wide bandwidth in raster-scan mode, achieving a lateral resolution of 18 μm. In this study, we applied high frequency RSOM to image human skin in a healthy volunteer. We analyzed the frequency spectrum of anatomical structures with respect to depth and show that frequencies >60 MHz contain valuable information on structures in the epidermis and the microvasculature of the papillary dermis. We illustrate that RSOM is capable of visualizing the fine vascular network at and beneath the epidermal-dermal junction, revealing the vascular fingerprint of glabrous skin, as well as the larger venules deeper inside the dermis. We evaluate the ability of the RSOM system to measure epidermal thickness in both hairy and glabrous skin. Finally, we showcase the capability of RSOM in visualizing benign nevi, which will potentially help in imaging the penetration depth of melanoma.

  7. Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks.

    Science.gov (United States)

    Cheng, Phillip M; Tejura, Tapas K; Tran, Khoa N; Whang, Gilbert

    2018-05-01

    The purpose of this pilot study is to determine whether a deep convolutional neural network can be trained with limited image data to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were categorized into obstructive and non-obstructive categories independently by three abdominal radiologists, and the majority classification was used as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the 2014 Large Scale Visual Recognition Challenge dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78-0.89). At the maximum Youden index (sensitivity + specificity-1), the sensitivity of the system for small bowel obstruction is 83.8%, with a specificity of 68.1%. The results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs.
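
    The transfer-learning recipe described (freeze an ImageNet-pretrained Inception v3 and retrain only the final classification layer for a binary label) can be sketched with tf.keras as below; the input size, optimiser and the placeholder dataset tensors are assumptions, not details taken from the study.

```python
import tensorflow as tf

# ImageNet-pretrained Inception v3 backbone with its classification head removed.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False   # retrain only the new final layer, as in the study's setup

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # obstruction vs. non-obstruction
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Placeholder data: real use would feed the 2210 training and 1453 test radiographs,
# rescaled to the network's input size and preprocessed with
# tf.keras.applications.inception_v3.preprocess_input.
x_train = tf.random.uniform((8, 299, 299, 3), minval=-1.0, maxval=1.0)
y_train = tf.constant([0, 0, 0, 1, 0, 1, 0, 0], dtype=tf.float32)
model.fit(x_train, y_train, epochs=1, batch_size=4, verbose=2)
```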

  8. Experiences from Implementing a Mobile Multiplayer Real-Time Game for Wireless Networks with High Latency

    Directory of Open Access Journals (Sweden)

    Alf Inge Wang

    2009-01-01

    Full Text Available This paper describes results and experiences from designing, implementing, and testing a multiplayer real-time game over mobile networks with high latency. The paper reports on network latency and bandwidth measurements from playing the game live over GPRS, EDGE, UMTS, and WLAN using the TCP and UDP protocols. These measurements describe the practical constraints of various wireless networks and protocols when used for mobile multiplayer game purposes. Further, the paper reports on experiences from implementing various approaches to minimize issues related to high latency. Specifically, the paper focuses on a discussion about how much of the game should run locally on the client versus on the server to minimize the load on the mobile device and obtain sufficient consistency in the game. The game was designed to reveal all kinds of implementation issues of mobile network multiplayer games. The goal of the game is for a player to push other players around and into traps where they lose their lives. The game relies heavily on collision detection between the players and game objects. The paper presents experiences from experimenting with various approaches that can be used to handle such collisions, and highlights the advantages and disadvantages of each approach.

  9. High reliable and Real-time Data Communication Network Technology for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Jeong, K. I.; Lee, J. K.; Choi, Y. R.; Lee, J. C.; Choi, Y. S.; Cho, J. W.; Hong, S. B.; Jung, J. E.; Koo, I. S.

    2008-03-01

    As advanced digital Instrumentation and Control (I and C) systems are being introduced to replace analog systems in NPPs (Nuclear Power Plants), the Data Communication Network (DCN) is becoming an important system for transmitting the data generated by I and C systems in an NPP. In order to apply DCNs to NPP I and C design, DCNs should conform to the applicable acceptance criteria and meet the reliability and safety goals of the system. As response time is affected by the selected protocol, the network topology, the network performance, and the network configuration of the I and C system, DCNs should transmit data within the time constraints and response times required by the I and C systems. To meet these requirements, the DCNs of NPP I and C should be highly reliable, real-time systems. With respect to such systems, several reports and techniques that influence the reliability and real-time requirements of DCNs are surveyed and analyzed

  10. Cisco Networking Academy Program for high school students: Formative & summative evaluation

    Science.gov (United States)

    Cranford-Wesley, Deanne

    This study examined the effectiveness of the Cisco Network Technology Program in enhancing students' technology skills as measured by classroom strategies, student motivation, student attitude, and student learning. Qualitative and quantitative methods were utilized to determine the effectiveness of this program. The study focused on two 11th grade classrooms at Hamtramck High School. Hamtramck, an inner-city community located in Detroit, is racially and ethnically diverse. The majority of students speak English as a second language; more than 20 languages are represented in the school district. More than 70% of the students are considered to be economically at risk. Few students have computers at home, and their access to the few computers at school is limited. Purposive sampling was conducted for this study. The sample consisted of 40 students, all of whom were trained in Cisco Networking Technologies. The researcher examined viable learning strategies in teaching a Cisco Networking class that focused on a web-based approach. Findings revealed that the Cisco Networking Academy Program was an excellent vehicle for teaching networking skills and, therefore, helping to enhance computer skills for the participating students. However, only a limited number of students were able to participate in the program, due to limited computer labs and lack of qualified teaching personnel. In addition, the cumbersome technical language posed an obstacle to students' success in networking. Laboratory assignments were preferred by 90% of the students over lecture and PowerPoint presentations. Practical applications, lab projects, interactive assignments, PowerPoint presentations, lectures, discussions, readings, research, and assessment all helped to increase student learning and proficiency and to enrich the classroom experience. Classroom strategies are crucial to student success in the networking program. Equipment must be updated and utilized to ensure that students are

  11. The Dallas-Fort Worth (DFW) Urban Radar Network: Enhancing Resilience in the Presence of Floods, Tornadoes, Hail and High Winds

    Science.gov (United States)

    Chandra*, Chandrasekar V.; the full DFW Team

    2015-04-01

    Currently, the National Weather Service (NWS) Next Generation Weather Radar (NEXRAD) provides observations updated every five to six minutes across the United States. However, at the maximum NEXRAD operating range of 230 km, the 0.5 degree radar beam (lowest tilt) height is about 5.4 km above ground level (AGL) because of the effect of Earth curvature. Consequently, much of the lower atmosphere (1-3 km AGL) cannot be observed by NEXRAD. To overcome the fundamental coverage limitations of today's weather surveillance radars and to improve spatial and temporal resolution at urban scale, the National Science Foundation Engineering Research Center (NSF-ERC) for Collaborative Adaptive Sensing of the Atmosphere (CASA) has embarked on the development of the Dallas-Fort Worth (DFW) urban remote sensing network to conduct high-resolution sensing in the lower atmosphere of a metropolitan environment and to communicate high-resolution observations and nowcasts of severe weather, including flash floods, hail storms and high wind events. One of the largest inland metropolitan areas in the U.S., the DFW Metroplex was home to over 6.5 million people as of 2012, according to the North Central Texas Council of Governments (NCTCOG). It experiences a wide range of natural weather hazards, including urban flash floods, high winds, tornadoes, and hail. Successful monitoring of the rapidly changing meteorological conditions in such a region is necessary for emergency management and decision making. Therefore, it is an ideal location to investigate the impacts of hazardous weather phenomena, to enhance resilience in an urban setting and to demonstrate the CASA concept in a densely populated urban environment. The DFW radar network consists of 8 dual-polarization X-band weather radars and a standard NEXRAD S-band radar, covering the greater DFW metropolitan region. This paper will present high-resolution observations of tornado, urban flood, hail storm and damaging wind events, all within the

  12. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that is, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software

  13. Monitoring intervention coverage in the context of universal health coverage.

    Directory of Open Access Journals (Sweden)

    Ties Boerma

    2014-09-01

    Full Text Available Monitoring universal health coverage (UHC) focuses on information on health intervention coverage and financial protection. This paper addresses monitoring intervention coverage, related to the full spectrum of UHC, including health promotion and disease prevention, treatment, rehabilitation, and palliation. A comprehensive core set of indicators most relevant to the country situation should be monitored on a regular basis as part of health progress and systems performance assessment for all countries. UHC monitoring should be embedded in a broad results framework for the country health system, but focus on indicators related to the coverage of interventions that most directly reflect the results of UHC investments and strategies in each country. A set of tracer coverage indicators can be selected, divided into two groups (promotion/prevention and treatment/care), as illustrated in this paper. Disaggregation of the indicators by the main equity stratifiers is critical to monitor progress in all population groups. Targets need to be set in accordance with baselines, historical rate of progress, and measurement considerations. Critical measurement gaps also exist, especially for treatment indicators, covering issues such as mental health, injuries, chronic conditions, surgical interventions, rehabilitation, and palliation. Consequently, further research and proxy indicators need to be used in the interim. Ideally, indicators should include a quality of intervention dimension. For some interventions, use of a single indicator is feasible, such as management of hypertension; but in many areas additional indicators are needed to capture quality of service provision. The monitoring of UHC has significant implications for health information systems. Major data gaps will need to be filled. At a minimum, countries will need to administer regular household health surveys with biological and clinical data collection. Countries will also need to improve the

  14. How home HIV testing and counselling with follow-up support achieves high testing coverage and linkage to treatment and prevention: a qualitative analysis from Uganda.

    Science.gov (United States)

    Ware, Norma C; Wyatt, Monique A; Asiimwe, Stephen; Turyamureeba, Bosco; Tumwesigye, Elioda; van Rooyen, Heidi; Barnabas, Ruanne V; Celum, Connie L

    2016-01-01

    The successes of HIV treatment scale-up and the availability of new prevention tools have raised hopes that the epidemic can finally be controlled and ended. Reduction in HIV incidence and control of the epidemic requires high testing rates at population levels, followed by linkage to treatment or prevention. As effective linkage strategies are identified, it becomes important to understand how these strategies work. We use qualitative data from The Linkages Study, a recent community intervention trial of community-based testing with linkage interventions in sub-Saharan Africa, to show how lay counsellor home HIV testing and counselling (home HTC) with follow-up support leads to linkage to clinic-based HIV treatment and medical male circumcision services. We conducted 99 semi-structured individual interviews with study participants and three focus groups with 16 lay counsellors in Kabwohe, Sheema District, Uganda. The participant sample included both HIV+ men and women (N=47) and HIV-uncircumcised men (N=52). Interview and focus group audio-recordings were translated and transcribed. Each transcript was summarized. The summaries were analyzed inductively to identify emergent themes. Thematic concepts were grouped to develop general constructs and framing propositional statements. Trial participants expressed interest in linking to clinic-based services at testing, but faced obstacles that eroded their initial enthusiasm. Follow-up support by lay counsellors intervened to restore interest and inspire action. Together, home HTC and follow-up support improved morale, created a desire to reciprocate, and provided reassurance that services were trustworthy. In different ways, these functions built links to the health service system. They worked to strengthen individuals' general sense of capability, while making the idea of accessing services more manageable and familiar, thus reducing linkage barriers. Home HTC with follow-up support leads to linkage by building

  15. Pertussis: herd immunity and vaccination coverage in St Lucia.

    Science.gov (United States)

    Cooper, E; Fitch, L

    1983-11-12

    In a single complete epidemic in St Lucia, an island too small to support constant clinical pertussis, the pertussis case rates in small communities (villages and small towns) with differing levels of vaccination coverage of young children were compared. The association between greater vaccination coverage and greater herd immunity was clear, despite the imperfect protection given to individuals. An analysis in terms of population dynamics is evidence against the theory that endemic subclinical pertussis maintains transmission in a highly vaccinated population. We suggest that with a homogeneous vaccination coverage of 80% of 2-year-old children pertussis might be eradicated from the island, and that this is a practicable experiment.

  16. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
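
    For orientation, a network of the general kind such benchmarks use (a few thousand leaky integrate-and-fire neurons with random recurrent connectivity) looks like this in Brian 2; the plasticity rules, network sizes and parameters used in the paper's comparisons are omitted, so this is only a minimal illustration, not one of the benchmarked models.

```python
from brian2 import NeuronGroup, Synapses, SpikeMonitor, run, ms

# A few thousand leaky integrate-and-fire neurons driven toward threshold.
tau = 10 * ms
eqs = "dv/dt = (1.1 - v) / tau : 1"
group = NeuronGroup(4000, eqs, threshold="v > 1", reset="v = 0", method="exact")

# Sparse random recurrent connectivity (static weights; STDP omitted for brevity).
syn = Synapses(group, group, on_pre="v_post += 0.01")
syn.connect(p=0.02)

spikes = SpikeMonitor(group)
run(100 * ms)                       # simulate 100 ms of biological time
print("total spikes:", spikes.num_spikes)
```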

  17. LEONA: Transient Luminous Event and Thunderstorm High Energy Emission Collaborative Network in Latin America

    Science.gov (United States)

    Sao Sabbas, F. T.

    2012-12-01

    This project has the goal of establishing the Collaborative Network LEONA, to study the electrodynamical coupling of the atmospheric layers signaled by Transient Luminous Events - TLEs and high energy emissions from thunderstorms. We will develop and install a remotely controlled network of cameras to perform TLE observations in different locations in South America and one neutron detector in southern Brazil. The camera network will allow building a continuous data set of the phenomena studied in this continent. The first two trial units of the camera network are already installed, in Brazil and Peru, and two more will be installed by December 2012, in Argentina and Brazil. We expect to determine the TLE geographic distribution, occurrence rate, morphology, and possible coupling with other geophysical phenomena in South America, such as the South Atlantic Magnetic Anomaly - SAMA. We also expect to study thunderstorm neutron emissions in a region of intense electrical activity, measuring neutron fluxes with high time resolution simultaneously with TLEs and lightning for the first time in South America. Using an intensified high-speed camera for TLE observation during 2 campaigns we expect to be able to determine the duration and spatial-temporal development of the TLEs observed, to study the structure and initiation of sprites and to measure the velocity of development of sprite structures and the sprite delay. The camera was acquired via the FAPESP project DEELUMINOS (2005-2010), which also nucleated our research group Atmospheric Electrodynamical Coupling - ACATMOS. LEONA will nucleate this research in other institutions in Brazil and other countries in South America, providing continuity for this important research in our region. The camera network will be a unique tool to perform consistent long-term TLE observation, and in fact is the only way to accumulate a data set for a climatological study of South America, since satellite instrumentation turns off in

  18. Security for multihop wireless networks

    CERN Document Server

    Khan, Shafiullah

    2014-01-01

    Security for Multihop Wireless Networks provides broad coverage of the security issues facing multihop wireless networks. Presenting the work of a different group of expert contributors in each chapter, it explores security in mobile ad hoc networks, wireless sensor networks, wireless mesh networks, and personal area networks. Detailing technologies and processes that can help you secure your wireless networks, the book covers cryptographic coprocessors, encryption, authentication, key management, attacks and countermeasures, secure routing, secure medium access control, intrusion detection, ep

  19. HIGH SPEED RAILWAY LINES – FUTURE PART OF CZECH RAILWAY NETWORK?

    Directory of Open Access Journals (Sweden)

    Lukáš Týfa

    2017-08-01

    The paper first describes high-speed rail generally and explains the relationship between high-speed and conventional railway networks (according to the vehicle types in operation on the network). The core of the paper is the methodology for choosing the best route for a railway line and its application to the high-speed railway connection Praha – Brno. The algorithm used assumes the existence of several route proposals, which may differ in terms of operational conception, line routing or the types of vehicles used. The optimal variant is the one with the lowest daily cost, which includes infrastructure and vehicle costs, both investment and operational. The results from applying this model confirmed the assumption that a dedicated high-speed railway line, carrying only high-speed trains, has the same or lower investment costs than a line for both high-speed and conventional trains. Furthermore, a dedicated high-speed line also has lower infrastructure maintenance costs but a higher cost for buying high-speed multiple units.
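
    A minimal sketch of the selection rule described above: each route variant is scored by its total daily cost (infrastructure plus vehicles, investment plus operation) and the variant with the lowest cost is chosen. The variant names and all cost figures below are invented for illustration.

```python
# Hypothetical daily-cost comparison of route variants, mirroring the
# "lowest daily cost" criterion described in the abstract.
variants = {
    # daily costs in arbitrary currency units (all values invented)
    "dedicated high-speed line": {"infra_invest": 900, "infra_maint": 120,
                                  "veh_invest": 300, "veh_oper": 200},
    "mixed high-speed/conventional": {"infra_invest": 950, "infra_maint": 150,
                                      "veh_invest": 250, "veh_oper": 210},
}

def daily_cost(costs):
    """Total daily cost = infrastructure + vehicle, investment + operation."""
    return sum(costs.values())

best = min(variants, key=lambda name: daily_cost(variants[name]))
for name, costs in variants.items():
    print(f"{name}: {daily_cost(costs)}")
print("optimal variant:", best)
```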

  20. Neural network model for proton-proton collision at high energy

    International Nuclear Information System (INIS)

    El-Bakry, M.Y.; El-Metwally, K.A.

    2003-01-01

    Developments in artificial intelligence (AI) techniques and their applications to physics have made it feasible to develop and implement new modeling techniques for high-energy interactions. In particular, AI techniques of artificial neural networks (ANN) have recently been used to design and implement more effective models. The primary purpose of this paper is to model the proton-proton (p-p) collision using the ANN technique. Following a review of the conventional techniques and an introduction to the neural network, the paper presents simulation test results using a p-p based ANN model trained with experimental data. The p-p based ANN model calculates the multiplicity distribution of charged particles and the inelastic cross section of the p-p collision at high energies. The results amply demonstrate the feasibility of such a new technique in extracting the collision features and prove its effectiveness.
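
    The general idea can be sketched as follows; this is not the authors' model or data, just a small feed-forward network regressing a multiplicity-like quantity from collision energy, trained on synthetic placeholder points.

```python
# Illustrative only: a small MLP mapping collision energy to a mean charged
# multiplicity, trained on synthetic data standing in for experimental points.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
sqrt_s = rng.uniform(10.0, 1000.0, size=(200, 1))            # GeV, synthetic
# Placeholder target loosely shaped like a logarithmic multiplicity growth.
mean_mult = 2.0 + 1.5 * np.log(sqrt_s[:, 0]) + rng.normal(0, 0.3, 200)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(np.log(sqrt_s), mean_mult)

print(model.predict(np.log([[53.0], [546.0]])))  # predictions at two energies
```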

  1. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, Jason; Gunter, Dan; Tierney, Brian; Allcock, Bill; Bester, Joe; Bresnahan, John; Tuecke, Steve

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. Ensuring that the data is there in time for the computation in today's Internet is a massive problem. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and issues, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We also describe results from two applications using these techniques, which were obtained at the Supercomputing 2000 conference
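
    One of the standard techniques alluded to above, sizing TCP socket buffers to the bandwidth-delay product, can be illustrated generically; this is not GridFTP code, and the link figures below are assumptions.

```python
# Generic illustration (not GridFTP itself): sizing TCP buffers to the
# bandwidth-delay product, one standard technique for high WAN throughput.
import socket

def bdp_bytes(bandwidth_bits_per_s, rtt_s):
    """Bandwidth-delay product: bytes 'in flight' needed to keep the pipe full."""
    return int(bandwidth_bits_per_s * rtt_s / 8)

# Assumed link: 1 Gbit/s with 60 ms round-trip time.
buf = bdp_bytes(1e9, 0.060)
print(f"target socket buffer: ~{buf / 1e6:.1f} MB")

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request large send/receive buffers; the OS may clamp these to its limits.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf)
print("granted send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()
```

    GridFTP-style tools additionally open several parallel TCP streams and stripe transfers across hosts; those parts are omitted here for brevity.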

  2. N-Doped carbon spheres with hierarchical micropore-nanosheet networks for high performance supercapacitors.

    Science.gov (United States)

    Wang, Shoupei; Zhang, Jianan; Shang, Pei; Li, Yuanyuan; Chen, Zhimin; Xu, Qun

    2014-10-18

    N-doped carbon spheres with hierarchical micropore-nanosheet networks (HPSCSs) were facilely fabricated by a one-step carbonization and activation process of N-containing polymer spheres with KOH. Owing to the synergistic effect of the multiple structures, HPSCSs exhibit a very high specific capacitance of 407.9 F g⁻¹ at 1 mV s⁻¹ (1.2 times higher than that of porous carbon spheres) and robust cycling stability for supercapacitors.

  3. Rad-Tolerant, Thermally Stable, High-Speed Fiber-Optic Network for Harsh Environments

    Science.gov (United States)

    Leftwich, Matt; Hull, Tony; Leary, Michael; Leftwich, Marcus

    2013-01-01

    Future NASA destinations will be challenging to get to, have extreme environmental conditions, and may present difficulty in retrieving a spacecraft or its data. Space Photonics is developing a radiation-tolerant (rad-tolerant), high-speed, multi-channel fiber-optic transceiver, associated reconfigurable intelligent node communications architecture, and supporting hardware for intravehicular and ground-based optical networking applications. Data rates approaching 3.2 Gbps per channel will be achieved.

  4. Social network usage, shame, guilt and pride among high school students: Model testing

    OpenAIRE

    Doğan, Uğur; Çelik, Eyüp; Karakaş, Yahya

    2016-01-01

    This study was aimed at testing a model which applies structural equation modeling (SEM) to explain social networking sites (SNS) usage. Performing SEM with a sample of 500 high school students (40% male, 60% female), the model examined the relationships among shame, guilt and pride on SNS, such as Facebook and Twitter. It was hypothesized that SNS usage was predicted directly by shame and indirectly by pride and guilt. The SEM showed that shame affected SNS usage directly and positively, while ...

  5. Terrorism and nuclear damage coverage

    International Nuclear Information System (INIS)

    Horbach, N. L. J. T.; Brown, O. F.; Vanden Borre, T.

    2004-01-01

    This paper deals with nuclear terrorism and the manner in which nuclear operators can insure themselves against it, based on the international nuclear liability conventions. It concludes that terrorism is currently not covered under the treaty exoneration provisions on 'war-like events', based on an analysis of the concept of 'terrorism' and the travaux préparatoires. Consequently, operators remain liable for nuclear damage resulting from terrorist acts, for which mandatory insurance is applicable. Since the nuclear insurance industry is looking to exclude such insurance coverage from its policies in the near future, this article aims to suggest alternative means of insurance, in order to ensure adequate compensation for innocent victims. The September 11, 2001 attacks at the World Trade Center in New York City and the Pentagon in Washington, DC resulted in the largest loss in the history of insurance, inevitably leading to concerns about nuclear damage coverage, should such future assaults target a nuclear power plant or other nuclear installation. Since the attacks, some insurers have signalled their intentions to exclude coverage for terrorism from their nuclear liability and property insurance policies. Other insurers are maintaining coverage for terrorism, but are establishing aggregate limits or sublimits and are increasing premiums. Additional changes by insurers are likely to occur. Highlighted by the September 11th events, and most recently by those in Madrid on 11 March 2004, are questions about how to define acts of terrorism and the extent to which such acts are covered under the international nuclear liability conventions and various domestic nuclear liability laws. Of particular concern to insurers is the possibility of coordinated simultaneous attacks on multiple nuclear facilities. This paper provides a survey of the issues, and recommendations for future clarifications and coverage options. (author)

  6. Camera Coverage Estimation Based on Multistage Grid Subdivision

    Directory of Open Access Journals (Sweden)

    Meizhen Wang

    2017-04-01

    Visual coverage is one of the most important quality indexes for depicting the usability of an individual camera or camera network. It is the basis for camera network deployment, placement, coverage-enhancement, planning, etc. Precision and efficiency are critical influences on applications, especially those involving several cameras. This paper proposes a new method to estimate camera coverage efficiently. First, the geographic area that is covered by the camera and its minimum bounding rectangle (MBR) without considering obstacles is computed using the camera parameters. Second, the MBR is divided into grids using the initial grid size. The status of the four corners of each grid is estimated by a line of sight (LOS) algorithm. If the camera, considering obstacles, covers a corner, the status is represented by 1, otherwise by 0. Consequently, the status of a grid can be represented by a code that is a combination of 0s and 1s. If the code is not homogeneous (not four 0s or four 1s), the grid is divided into four sub-grids, and this continues until the sub-grids reach a specified maximum level or their codes are homogeneous. Finally, after performing the process above, total camera coverage is estimated according to the size and status of all grids. Experimental results illustrate that the proposed method's accuracy is close to that of dividing the coverage area into the smallest grids at the maximum level, while its efficiency is close to that of dividing the coverage area into the initial grids only; it thus balances efficiency and accuracy. The initial grid size and maximum level are two critical influences on the proposed method, and they can be determined by weighing efficiency against accuracy.
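
    The grid-subdivision procedure described above can be sketched as follows; the `covered` function is a hypothetical stand-in for the camera/LOS visibility test, and the MBR size, initial grid size, and maximum level are invented.

```python
# Sketch of the multistage grid-subdivision idea with a placeholder visibility
# test; `covered` stands in for the camera/LOS model, which is not given here.
def covered(x, y):
    # Hypothetical stand-in for the line-of-sight test against the camera
    # frustum and obstacles: here, a simple disc of radius 40 around (50, 50).
    return (x - 50.0) ** 2 + (y - 50.0) ** 2 <= 40.0 ** 2

def coverage_area(x, y, w, h, level, max_level):
    """Estimate covered area inside the rectangle [x, x+w] x [y, y+h]."""
    corners = [covered(x, y), covered(x + w, y),
               covered(x, y + h), covered(x + w, y + h)]
    if all(corners):                      # homogeneous code 1111: fully covered
        return w * h
    if not any(corners):                  # homogeneous code 0000: fully uncovered
        return 0.0
    if level >= max_level:                # mixed code at the deepest level:
        return w * h * sum(corners) / 4.0 #   count the fraction of covered corners
    hw, hh = w / 2.0, h / 2.0             # otherwise split into four sub-grids
    return sum(coverage_area(cx, cy, hw, hh, level + 1, max_level)
               for cx in (x, x + hw) for cy in (y, y + hh))

# MBR of 100 x 100 units, 10 x 10 initial grids, refined by up to 4 extra levels.
total = sum(coverage_area(i * 10.0, j * 10.0, 10.0, 10.0, 0, 4)
            for i in range(10) for j in range(10))
print(f"estimated covered area: {total:.1f} (true disc area ≈ {3.14159 * 1600:.1f})")
```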

  7. Extraction of drainage networks from large terrain datasets using high throughput computing

    Science.gov (United States)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.
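
    The decompose/compute/merge pattern can be sketched generically with a worker pool; this is not the authors' HTC environment, and the per-watershed extraction is represented by a placeholder function with invented computing units.

```python
# Generic decompose -> parallel compute -> merge pattern; the drainage
# extraction itself is represented by a placeholder function.
from multiprocessing import Pool

def extract_drainage(unit):
    """Placeholder for per-watershed drainage extraction on one DEM unit."""
    watershed_id, cell_count = unit
    # Pretend the number of extracted stream segments scales with the unit size.
    return watershed_id, [f"segment-{watershed_id}-{i}" for i in range(cell_count // 100)]

if __name__ == "__main__":
    # Hypothetical computing units: (watershed id, number of DEM cells).
    units = [(1, 500), (2, 900), (3, 300), (4, 1200)]
    with Pool(processes=4) as pool:
        results = pool.map(extract_drainage, units)
    # Merge step: combine the per-watershed networks into one result set.
    network = {wid: segs for wid, segs in results}
    print(sum(len(s) for s in network.values()), "segments from", len(network), "units")
```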

  8. Aspects of artificial neural networks - with applications in high energy physics

    International Nuclear Information System (INIS)

    Roegnvaldsson, T.S.

    1994-02-01

    Different aspects of artificial neural networks are studied and discussed. They are demonstrated to be powerful general purpose algorithms, applicable to many different problem areas like pattern recognition, function fitting and prediction. Multi-layer perceptron (MLP) models are shown to outperform previous standard approaches on both off-line and on-line analysis tasks in high energy physics, like quark flavour tagging and mass reconstruction, as well as being powerful tools for prediction tasks. It is also demonstrated how a self-organizing network can be employed to extract information from data, for instance to track down origins of unexpected model discrepancies. Furthermore, it is proved that the MLP is more efficient than the learning vector quantization technique on classification problems, by producing smoother discrimination surfaces, and that an MLP network should be trained with a noisy updating schedule if the Hessian is ill-conditioned - a result that is especially important for MLP networks with more than one hidden layer. 81 refs, 6 figs

  9. Ecological network analysis for a low-carbon and high-tech industrial park.

    Science.gov (United States)

    Lu, Yi; Su, Meirong; Liu, Gengyuan; Chen, Bin; Zhou, Shiyi; Jiang, Meiming

    2012-01-01

    The industrial sector is one of the main contributors to global warming. Although the emergence of eco-industrial parks (EIPs) appears to be a promising step toward easing ecological crises, there is still a lack of definitional clarity and in-depth research on low-carbon industrial parks. In order to reveal the processes of carbon metabolism in a low-carbon high-tech industrial park, we selected the Beijing Development Area (BDA) International Business Park in Beijing, China as a case study, establishing a seven-compartment low-carbon metabolic network model based on the methodology of Ecological Network Analysis (ENA). Integrating Network Utility Analysis (NUA), Network Control Analysis (NCA), and system-wide indicators, we compartmentalized the system's sectors into an ecological structure and analyzed their dependence and degree of control based on carbon metabolism. The results suggest that indirect flows reveal more mutualistic and exploitative relations between system compartments and tend to contribute positively to the stability of the whole system. The ecological structure develops well into an approximately pyramidal structure, and the carbon metabolism of BDA proves self-mutualistic and sustainable. Construction and waste management were found to be two active sectors impacting carbon metabolism, which was mainly regulated by the internal and external environment.

  10. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    Science.gov (United States)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce 3D high-fidelity road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.
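
    One small piece of such a pipeline can be sketched under simple assumptions: offsetting a polyline centerline by half the road width and applying a constant cross slope to obtain edge elevations. The centerline, width, grade, and slope values below are invented.

```python
# Offset a 2D polyline centerline into left/right road edges and assign edge
# elevations from a constant cross slope. All geometry here is hypothetical.
import numpy as np

centerline = np.array([[0, 0], [20, 5], [40, 15], [60, 30]], dtype=float)
center_z = np.array([100.0, 100.4, 100.9, 101.5])   # grade along the road
half_width = 3.5                                     # metres, one lane each side
cross_slope = 0.02                                   # 2% fall from centre to edge

# Unit direction at each vertex, rotated 90 degrees to get offset normals.
d = np.gradient(centerline, axis=0)
d /= np.linalg.norm(d, axis=1, keepdims=True)
normals = np.column_stack([-d[:, 1], d[:, 0]])

left_edge = centerline + half_width * normals
right_edge = centerline - half_width * normals
edge_z = center_z - cross_slope * half_width         # both edges drop slightly

for p, q, z in zip(left_edge, right_edge, edge_z):
    print(np.round(p, 2), np.round(q, 2), round(float(z), 3))
```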

  11. A Control Simulation Method of High-Speed Trains on Railway Network with Irregular Influence

    International Nuclear Information System (INIS)

    Yang Lixing; Li Xiang; Li Keping

    2011-01-01

    Based on the discrete-time method, an effective movement control model is designed for a group of high-speed trains on a rail network. The purpose of the model is to investigate the specific traffic characteristics of high-speed trains under the interruption of stochastic irregular events. In the model, the high-speed rail traffic system is assumed to be equipped with a moving-block signalling system to guarantee the maximum traversing capacity of the railway. To ensure the safety of train movements, several operational strategies are proposed to control the movements of trains in the model, including traction operation, braking operation, and entering-station operation. The numerical simulations show that the designed model can describe the movements of high-speed trains on the rail network well. The research results can provide useful information not only for investigating the propagation features of relevant delays under irregular disturbances but also for rerouting and rescheduling trains on the rail network. (general)
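
    A toy discrete-time sketch of the kind of control described, not the authors' model: at each step the following train applies traction toward line speed unless its gap to the leading train falls below a moving-block protection distance, in which case it brakes. All parameters are invented.

```python
# Toy discrete-time following model under a moving-block rule: the follower
# accelerates toward line speed and brakes when the gap to the leader closes.
dt = 1.0                      # s
a_trac, a_brake = 0.5, -1.0   # m/s^2, assumed traction / braking rates
v_max = 83.0                  # m/s (~300 km/h)
safe_gap = 4000.0             # m, assumed moving-block protection distance

leader_pos, leader_v = 12000.0, 40.0    # leader cruises at constant speed
pos, v = 0.0, 0.0                       # follower starts from rest

for step in range(600):
    leader_pos += leader_v * dt
    gap = leader_pos - pos
    a = a_brake if gap < safe_gap else a_trac   # brake or apply traction
    v = min(max(v + a * dt, 0.0), v_max)
    pos += v * dt

print(f"final gap: {leader_pos - pos:.0f} m, follower speed: {v:.1f} m/s")
```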

  12. The challenges of social marketing of organ donation: news and entertainment coverage of donation and transplantation.

    Science.gov (United States)

    Harrison, Tyler R; Morgan, Susan E; Chewning, Lisa V

    2008-01-01

    While great strides have been made in persuading the public to become potential organ donors, actual behavior has not yet caught up with the nearly universally favorable attitudes the public expresses toward donation. This paper explores the issue by situating the social marketing of organ donation against a broader backdrop of entertainment and news media coverage of organ donation. Organ donation storylines are featured on broadcast television in medical and legal dramas, soap operas, and other television serials approximately four times per month (not including most cable networks), and feature storylines that promote myths and fears of the organ donation process. National news and other non-fictionalized coverage of organ donation are even more common, with stories appearing over twenty times a month on average. These stories tend to be one-dimensional and highly sensationalized in their coverage. The marketing of organ donation for entertainment essentially creates a counter-campaign to organ donation, with greater resources and reach than social marketers have access to. Understanding the broader environmental context of organ donation messages highlights the issues faced by social marketing campaigns in persuading the public to become potential donors.

  13. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose.

    Science.gov (United States)

    Wang, Yan; Yu, Biting; Wang, Lei; Zu, Chen; Lalush, David S; Lin, Weili; Wu, Xi; Zhou, Jiliu; Shen, Dinggang; Zhou, Luping

    2018-07-01

    Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may increase noise in the reconstructed PET images, which degrades image quality to a certain extent. In this paper, in order to reduce radiation exposure while maintaining high PET image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network which are trained simultaneously with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs, we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to capture the shared underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture which can combine hierarchical features by using skip connections is designed as the generator network to synthesize the full-dose image. In order to guarantee that the synthesized PET image is close to the real one, we take into account the estimation error loss in addition to the discriminator feedback to train the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is also proposed to further improve the quality of the estimated images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state
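
    A heavily simplified sketch of a 3D conditional GAN of this general kind; it is not the paper's 3D U-net generator or its training recipe, and the shapes, channel counts, and loss weighting are assumptions.

```python
# Minimal 3D conditional GAN sketch: the generator maps a low-dose volume to a
# full-dose estimate; the discriminator judges (low-dose, full-dose) pairs.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )
    def forward(self, x):          # residual-style output around the input
        return x + self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, low, full):  # condition by channel concatenation
        return self.net(torch.cat([low, full], dim=1)).mean(dim=[1, 2, 3, 4])

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

low = torch.randn(2, 1, 16, 16, 16)    # synthetic stand-ins for low-dose PET
full = torch.randn(2, 1, 16, 16, 16)   # and the corresponding full-dose PET

# one discriminator step
fake = G(low).detach()
d_loss = bce(D(low, full), torch.ones(2)) + bce(D(low, fake), torch.zeros(2))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# one generator step: fool D and stay close to the real full-dose volume
fake = G(low)
g_loss = bce(D(low, fake), torch.ones(2)) + 100.0 * l1(fake, full)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```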

  14. A Portable Low-Cost High Density Sensor Network for Air Quality at London Heathrow Airport

    Science.gov (United States)

    Popoola, Olalekan; Mead, Iq; Bright, Vivien; Baron, Ronan; Saffell, John; Stewart, Gregor; Kaye, Paul; Jones, Roderic

    2013-04-01

    Outdoor air quality and its impact on human health and the environment have been well studied, and it has been projected that poor air quality will surpass poor sanitation as the major environmental cause of premature mortality by 2050 (IGAC / IGBP, release statement, 2012). Transport-related pollution has been regulated at various levels by the enactment of legislation at local, national, regional and global levels. As part of the mitigation measures, routine measurements of atmospheric pollutants such as carbon monoxide (CO), nitric oxide (NO) and nitrogen dioxide (NO2) have to be established in areas where air quality problems are identified. In addition, emission inventories are also generated for different atmospheric environments including urban areas and airport environments required for air quality models. Whilst most of the existing sparse monitoring networks provide measurements at high temporal resolution, spatial data on these highly variable pollutants are not captured, making it difficult to adequately characterise the highly heterogeneous air quality. Spatial information is often obtained from model data which can only be constrained using measurements from the sparse monitoring networks. The work presented here shows the application of low-cost sensor networks aimed at addressing this missing spatial information. We have shown in previous studies the application of low-cost electrochemical sensor network instruments in monitoring road transport pollutants including CO, NO and NO2 in an urban environment (Mead et al. 2012, accepted Atmospheric Environment). Modified versions of these instruments which include additional species such as O3, SO2, VOCs and CO2 are currently deployed at London Heathrow Airport (LHR) as part of the Sensor Network for Air Quality (SNAQ) project. Meteorological data such as temperature, relative humidity, wind speed and direction are also measured as well as size-speciated particulates (0.38 to 17.4 µm). A network of 50

  15. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
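
    A generic transfer-learning sketch for the frontal-versus-lateral task; the backbone, data, and training schedule are placeholders rather than the study's GoogLeNet-based setup, and the code assumes a recent torchvision.

```python
# Generic fine-tuning sketch for frontal-vs-lateral classification; the
# backbone, transforms, and data are placeholders, not the study's setup.
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone; the study fine-tuned GoogLeNet variants pre-trained on
# (greyscale) ImageNet. weights=None avoids a download in this sketch.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single logit: frontal vs lateral

opt = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic batch standing in for augmented radiographs (3-channel, 224x224).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()          # 1 = frontal, 0 = lateral

logits = backbone(images)
loss = loss_fn(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()

# At evaluation time a probability cutoff (chosen e.g. via the Youden index on
# a validation set) converts sigmoid outputs into the binary view label.
probs = torch.sigmoid(backbone(images))
print(loss.item(), (probs > 0.5).float().mean().item())
```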

  16. PUFKEY: a high-security and high-throughput hardware true random number generator for sensor networks.

    Science.gov (United States)

    Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin

    2015-10-16

    Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks.
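
    A software-only illustration of the seed-conditioning idea: compress a (simulated) SRAM start-up pattern into a seed by hashing, then expand it for inspection. The real PUFKEY conditions true PUF noise and feeds the seeds into a non-deterministic hardware RNG, which this sketch does not reproduce.

```python
# Software illustration only: condition a (simulated) SRAM power-up pattern
# into a seed by hashing, then expand the seed deterministically for display.
import hashlib
import os

# Stand-in for reading the raw SRAM power-up bytes (device-dependent in reality).
sram_startup = os.urandom(4096)

# Conditioning: compress the noisy pattern into a 256-bit seed.
seed = hashlib.sha256(sram_startup).digest()

# Seed expansion for illustration (a DRBG-style construction, not PUFKEY's RNG).
def expand(seed: bytes, n_bytes: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n_bytes]

bits = expand(seed, 1024)
ones = sum(bin(b).count("1") for b in bits)
print(f"ones fraction: {ones / (len(bits) * 8):.3f}")   # should be close to 0.5
```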

  17. PUFKEY: A High-Security and High-Throughput Hardware True Random Number Generator for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dongfang Li

    2015-10-01

    Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard National Institute of Standards and Technology (NIST) randomness tests and is resilient to a wide range of security attacks.

  18. How will transitioning from cytology to HPV testing change the balance between the benefits and harms of cervical cancer screening? Estimates of the impact on cervical cancer, treatment rates and adverse obstetric outcomes in Australia, a high vaccination coverage country.

    Science.gov (United States)

    Velentzis, Louiza S; Caruana, Michael; Simms, Kate T; Lew, Jie-Bin; Shi, Ju-Fang; Saville, Marion; Smith, Megan A; Lord, Sarah J; Tan, Jeffrey; Bateson, Deborah; Quinn, Michael; Canfell, Karen

    2017-12-15

    high vaccination coverage, is therefore expected to both improve the benefits (further decrease risk of cervical cancer) and reduce the harms (reduce treatments and possible obstetric complications) associated with cervical cancer screening. © 2017 UICC.

  19. A multi-scale convolutional neural network for phenotyping high-content cellular images.

    Science.gov (United States)

    Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian

    2017-07-01

    Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded an overall higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding further validates our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.
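
    A minimal multi-scale CNN sketch in the same spirit, not the authors' M-CNN: parallel branches see the image at different downsampling factors and their features are fused before a small classifier head. The channel counts, scales, and class count are assumptions.

```python
# Minimal multi-scale CNN sketch: parallel branches over progressively
# downsampled views of the image, fused before a linear classifier head.
import torch
import torch.nn as nn

class MultiScaleCNN(nn.Module):
    def __init__(self, in_ch=3, n_classes=5, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(8))
            for _ in scales
        ])
        self.head = nn.Linear(len(scales) * 8 * 8 * 8, n_classes)

    def forward(self, x):
        feats = []
        for s, branch in zip(self.scales, self.branches):
            xs = nn.functional.avg_pool2d(x, s) if s > 1 else x   # coarser view
            feats.append(branch(xs).flatten(1))
        return self.head(torch.cat(feats, dim=1))                 # fused scales

model = MultiScaleCNN()
cells = torch.randn(4, 3, 128, 128)          # synthetic stand-in for cell images
probs = torch.softmax(model(cells), dim=1)   # per-phenotype probabilities
print(probs.shape, probs.sum(dim=1))
```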

  20. Microalgal Metabolic Network Model Refinement through High-Throughput Functional Metabolic Profiling

    International Nuclear Information System (INIS)

    Chaiboonchoe, Amphun; Dohai, Bushra Saeed; Cai, Hong; Nelson, David R.; Jijakli, Kenan; Salehi-Ashtiani, Kourosh

    2014-01-01

    Metabolic modeling provides the means to define metabolic processes at a systems level; however, genome-scale metabolic models often remain incomplete in their description of metabolic networks and may include reactions that are experimentally unverified. This shortcoming is exacerbated in reconstructed models of newly isolated algal species, as there may be little to no biochemical evidence available for the metabolism of such isolates. The phenotype microarray (PM) technology (Biolog, Hayward, CA, USA) provides an efficient, high-throughput method to functionally define cellular metabolic activities in response to a large array of entry metabolites. The platform can experimentally verify many of the unverified reactions in a network model as well as identify missing or new reactions in the reconstructed metabolic model. The PM technology has been used for metabolic phenotyping of non-photosynthetic bacteria and fungi, but it has not been reported for the phenotyping of microalgae. Here, we introduce the use of PM assays in a systematic way to the study of microalgae, applying it specifically to the green microalgal model species Chlamydomonas reinhardtii. The results obtained in this study validate a number of existing annotated metabolic reactions and identify a number of novel and unexpected metabolites. The obtained information was used to expand and refine the existing COBRA-based C. reinhardtii metabolic network model iRC1080. Over 254 reactions were added to the network, and the effects of these additions on flux distribution within the network are described. The novel reactions include the support of metabolism by a number of d-amino acids, l-dipeptides, and l-tripeptides as nitrogen sources, as well as support of cellular respiration by cysteamine-S-phosphate as a phosphorus source. The protocol developed here can be used as a foundation to functionally profile other microalgae such as known microalgae mutants and novel isolates.
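
    Folding phenotyping evidence into a COBRA model can be illustrated with cobrapy; the reaction and metabolite identifiers below are invented for illustration and are not taken from iRC1080.

```python
# Generic cobrapy illustration of adding PM-supported reactions to a model:
# a hypothetical uptake/transport pair for a dipeptide nitrogen source.
# The identifiers below are invented and are NOT taken from iRC1080.
from cobra import Model, Reaction, Metabolite

model = Model("toy_model")

dipep_e = Metabolite("ala_gly_e", name="L-alanyl-glycine", compartment="e")
dipep_c = Metabolite("ala_gly_c", name="L-alanyl-glycine", compartment="c")

transport = Reaction("ALAGLYt")
transport.name = "dipeptide transport (hypothetical)"
transport.add_metabolites({dipep_e: -1.0, dipep_c: 1.0})

exchange = Reaction("EX_ala_gly_e")
exchange.add_metabolites({dipep_e: -1.0})
exchange.lower_bound = -10.0          # allow uptake in simulations

model.add_reactions([transport, exchange])
print(sorted(r.id for r in model.reactions))
```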